OpenMediaVault

Backing Up All The Things

Having a backup of your data is important, and for me it’s taken several different forms over the years — morphing as my needs have changed, as I’ve gotten better at doing backups, and as my curiosity has compelled me.

For various reasons that will become clear, I’ve iterated through yet another backup system/strategy which I think would be useful to share.

The Backup System That Was

The most recent incarnation of my backup strategy was centered around CrashPlan and looked something like this:

Atlas is my NAS and where the bulk of the data I care about is located. It backs up its data to CrashPlan Cloud.

Andrew and Rachel are our laptops. I also care about that data, so they back up to CrashPlan Cloud too. Additionally, they back up to Atlas using CrashPlan’s handy peer-to-peer system.

Brother and Mom are extended family members’ laptops that just back up to CrashPlan Cloud.

Fremont is the web server (recently decommissioned); it used to back up to CrashPlan as well.

This all worked great because CrashPlan offered a (frankly) unbelievably good CrashPlan+ Family Plan deal that allowed up to ten computers and “unlimited” data — which CrashPlan took to mean somewhere around 20TB of total backups1 — for $150/year. In terms of pure data storage cost this was $0.000625/GB/month2, which is an order of magnitude less than Amazon Glacier’s cost of $0.004/GB/month3.
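For the curious, that per-gigabyte figure is just the plan price spread over the nominal 20 TB cap (treating 20 TB as 20,000 GB):

\frac{\$150/\textup{year}}{20{,}000\,\textup{GB}\times 12\,\textup{months}} = \$0.000625/\textup{GB}/\textup{month}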

And then one year ago CrashPlan announced:

 

we have shifted our business strategy to focus on the enterprise and small business segments. This means that over the next 14 months we will be exiting the consumer market and you must choose another option for data backup before your subscription expires.


To allow you time to transition to a new backup solution, we’ve extended your subscription (at no cost to you) by 60 days. Your new subscription expiration date is 09/28/2018.

 

Important Things In A Backup System

3-2-1-Bang

First, a quick refresher on how to back up. Arguably the best method is the 3-2-1-bang strategy: “three total copies of your data of which two are local but on different mediums (read: devices), and at least one copy offsite.” Bang represents the inevitable scenario where you have to use your backup.

This can be as simple as backing up your computer to two external hard drives — one you keep at home and back up to weekly and one you leave at a friend’s house and back up to monthly.

Of course, it can also be more complex.

Considerations

Replacing CrashPlan was hard because it has so many features for its price point, especially:

  • Encryption
  • Snapshots
  • Deduplication
  • Incremental backup
  • Recentness

…these would become my core requirements, in addition to needing to understand how the backup software works (because of this I strongly prefer open-source).

I also had additional considerations I needed to keep in mind:

  • How much data I needed to back up:
    • Atlas: While I have 12TB of usable space (of which I’m using 10TB), I only had about 7TB of data to back up.
    • My Laptop: < 1 TB
    • Wife’s Laptop: < 0.250 TB
    • Extended family: < 500 GB each
    • Fremont: decommissioned in 2017, but < 20 GB at the time
  • How recent I wanted the backups to be (put another way, how much time/effort was I willing to lose):
    • I was willing to lose up to one hour of data
  • What kind of disasters was I looking to mitigate:
    • Hyper localized incident (e.g. hard drive failure, stupidity, file corruption, theft, etc)
      • This could impact a single device
    • Localized incident (e.g. fire, burglary, etc)
      • This could impact all devices within a given structure ( < ~ 1000 m radius)
    • Regionalized incident (e.g. earthquake, flood, etc)
      • This could impact all devices in the region (~ 1000 km radius)
  • How much touch-time did I want to put in to maintain the system:
    •  As little as possible (< 10 hours/year)

The New Backup System

There’s no single key to the system and this is probably the way it should be. Instead, it’s a series of smaller, modular elements that work together and can be replaced as needed.

My biggest concern was cost, and the primary driver for cost was going to be where to store the backups.

Where to put the data?

I did look at off-the-shelf options, and my first consideration was just staying with CrashPlan and moving to their Small Business plan, but at $120/device/year I was looking at $360/year just to back up Atlas, Andrew, and Rachel.

Carbonite, a CrashPlan competitor (and the company CrashPlan partnered with to transition its home users), has a “Safe” plan for $72/device/year, but it was a non-starter: they don’t support Linux, have a 30 day limit on file restoration, and do silly things like not automatically backing up files over 4GB and not backing up video files.

Backblaze, The Wirecutter’s top pick, comes in at $50/device/year for unlimited data with no weird file restrictions, but there’s some wonkiness about file permissions and time stamps, and it also only retains old file versions/deleted files for 30 days.

I decided I could live with Backblaze Backups to handle the off-site copies for the laptops, at least for now. I was back to the drawing board for Atlas though.

The most challenging part was how to create a cost-effective solution for highly-recent off-site data backup. I looked at various cloud storage options4, setting up a server at a friend’s house (high initial costs, hands-on maintenance would be challenging, not enough bandwidth), and using external hard drives (backups would not be recent enough).

I was dreading how much data I had as it looked like backing up to the cloud was going to be the only viable option, even if it was expensive.

In an attempt to reduce my overall amount of data hoarding, I looked at the different kinds of data I had and noticed that only a relatively small amount changed on a regular basis — 2.20% within the last year, and 4.70% within the last three years.

The majority5 was “archive data” that I still want to have immediate (read-only) access to, but was not going to change, either because they are the digital originals (e.g. DV, RAW, PDF) or other files I keep for historic reasons — by the way, if I’ve ever worked on a project for you and you want a copy because you lost yours there’s a good chance I still have it.

Since archive data wasn’t changing, recentness would not be an issue and I could easily store external hard drives offsite. The significantly smaller amount of active data I could now backup in the cloud for a reasonable cost.

Backblaze’s B2 has the lowest overall costs for cloud storage: $0.005/GB/month with a retrieval fee of $0.01/GB6.

Assuming I’m only backing up the active data (~300GB) and I have a 20% data change rate over a year (i.e. 20% of the data will change over the year which I will also need to backup) results in roughly $21.60/year worth of costs. Combined with two external WD 8TB hard drives for rotating through off-site storage and the back-of-the-envelope calculations were now in the ballpark of just $85/year when amortized over five years.
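For the curious, the back-of-the-envelope B2 math (treating the ~20% of changed data as if it were stored for the full year) works out to:

300\,\textup{GB}\times 1.2\times \$0.005/\textup{GB}/\textup{month}\times 12\,\textup{months} = \$21.60/\textup{year}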

How to put the data?

I looked at, tested, and eventually passed on several different programs:

  • borg/attic…requires server-side software
  • duplicity…does not deduplicate
  • Arq…does not have a Linux version
  • duplicacy…doesn’t support restoring files directly to a directory outside of the repository7

To be clear: these are all very good programs and in another scenario I would likely use one of them.

Also, deduplication was probably the biggest issue for me, not so much because I thought I had a lot of files that were identical (or even parts of files) — I don’t — but because I knew I was going to be re-organizing lots of files and when you move a file to a different folder the backup program (without deduplication capability) doesn’t know that it’s the same file8.

I eventually settled on Duplicati — not to be confused with duplicity or duplicacy — because it ticks all the right boxes for me:

  • open source (with a good track record and actively maintained)
  • client side (i.e. does not require server-side software)
  • incremental
  • block-level deduplication
  • snapshots
  • deletion
  • supports B2 and local storage destinations
  • multiple retention policies
  • encryption (including ability to use asymmetric keys with GPG!)

Fortunately, OpenMediaVault (OMV) supports Duplicati through the OMVExtras plugin, so installing and managing it was very easy.

The default settings appear to be pretty good and I didn’t change anything except for:

Adding SSL encryption for the web-based interface

Duplicati uses a web-based interface9 that is only designed to be used on the local computer — it’s not designed to be run on a server with the GUI then accessed remotely through a browser. Because it was only designed to be accessed from localhost, it sends passwords in the clear, which is a concern, but one that has already been filed as an issue and can be mitigated by using HTTPS.

Unfortunately, the OMV Duplicati plugin doesn’t support enabling HTTPS as one of its options.

Fortunately, I’m working on a patch to fix that: https://github.com/fergbrain/openmediavault-duplicati/tree/ssl

Somewhat frustratingly, Duplicati requires using the PKCS 12 certificate format. Thus I did have to repackage Atlas’ SSL key:

openssl pkcs12 -export -out certificate.pfx -inkey private_key.key -in server_certificate.crt -certfile CAChain.crt
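If you want to sanity-check the resulting bundle before handing it to Duplicati, openssl can dump a summary of its contents (it will prompt for the export password):

openssl pkcs12 -info -in certificate.pfx -noout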

Asymmetric keys

Normally Duplicati uses symmetric keys. However, when doing some testing with duplicity I was turned on to the idea of using asymmetric keys.

If you generated the GPG key on your server then you’re all set. However, if you generated it elsewhere you’ll need to move it over to the server and then import it:

gpg --import private.key
gpg --edit-key {KEY} trust quit
# enter 5<RETURN>
# enter y<RETURN>
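To confirm the key imported correctly and is trusted, it should show up in the key listings (using andrew@example.com, the example recipient from below):

gpg --list-secret-keys andrew@example.com
gpg --list-keys --keyid-format short andrew@example.com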

Once you have your GPG key on the server you can then configure Duplicati to use it. This is not intuitive but has been documented:

--encryption-module=gpg
--gpg-encryption-command=--encrypt
--gpg-encryption-switches=--recipient "andrew@example.com"
--gpg-decryption-command=--decrypt
--passphrase=unused

Note: the recipient can either be an email address (e.g. andrew@example.com) or it can be a GPG Key ID (e.g. 9C7F1D46).
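For reference, this is roughly what an equivalent command-line backup job would look like; a sketch only, assuming the duplicati-cli wrapper with a hypothetical local destination (/srv/backups/atlas) and source (/srv/data). Check the Duplicati documentation for the exact destination URL syntax if targeting B2 instead:

duplicati-cli backup file:///srv/backups/atlas /srv/data \
  --encryption-module=gpg \
  --gpg-encryption-command=--encrypt \
  --gpg-encryption-switches='--recipient "andrew@example.com"' \
  --gpg-decryption-command=--decrypt \
  --passphrase=unused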

The last piece of the puzzle was how to manage my local backups for the laptops. I’m currently using Arq and TimeMachine to make nightly backups to Atlas on a trial basis.

Final Result

The resulting setup actually ends up being very similar to what I had with CrashPlan, with the exception of adding two rotating external drives which brings me into compliance with the “3 total copies” rule — something that was lacking.

Each external hard drive will spend a year off-site (as the off-site copy) and then a year on-site where it will serve as the “second” copy of the data (first is the “live” version, second is the on-site backup, and third is the off-site backup).

Overall, this system should be usable for at least the next five years — at least in terms of data capacity and wear/tear. Total costs should be under $285/year. However, I’m going to work on getting that down even more over the next year by looking at alternatives to the relatively high per-device cost of Backblaze Backup, which only makes sense if a device is backing up close to 1TB of data — which I’m not.

Update: Edits based on feedback

  1. “While there is no current limitation for CrashPlan Unlimited subscribers on the amount of User Data backed up to the Public Cloud, Code 42 reserves the right in the future, in its sole discretion, to set commercially reasonable data storage limits (i.e. 20 TB) on all CrashPlan+ Family accounts.” Source 

  2. my actual usage was closer to 8TB, so my actual rate was ~$0.0015/GB/month…still an amazingly good deal 

  3. which also has additional costs associated with retrieval processing that could run up to near $2000 if you actually had to restore 20TB worth of data 

  4. very expensive – on the order of $500 to $2500/year for 10 TB 

  5. 95.30% had not been modified within the last three years 

  6. however there is also a trial program where they ship you a hard drive for free…you just pay return postage. 

  7. though the more I’ve thought about it the more I question whether this would actually be a problem 

  8. it’s basically the same operation as making a copy of a file and then deleting the original version 

  9. you can also use the CLI 

OpenMediaVault, Round 2: Picking a NAS for Home

One year ago, I spent my Thanksgiving setting up OpenMediaVault on a computer I had just hanging around. It has served me faithfully, but over the past year several things became clear, the most important being that external hard drives are not designed to be continuously powered.

I had two drives fail and a growing concern about the remaining disks. I use CrashPlan to back up the data, so I wasn’t concerned about losing the data, but I was concerned about having it available when I needed it.

I also had a huge increase in storage requirements, due mostly to my video archiving project from last Christmas (which I still need to write up).

I also got married this year, and Rachel had several external drives I was hoping to consolidate. Ironically, her computer also died last week…good thing we had a backup!

The need was clear: a more robust NAS with serious storage capacity.

Requirements

Minimum Requirements:

  • Multiple user access
  • Simultaneous user access
  • File sharing (prefer SMB)
  • Media sharing (prefer iTunes DAAP and DLNA)
  • Access Control List (ACL)
  • High availability (99% up time ~ 3.5 days of downtime/year) for all local users
  • Remote backup (prefer CrashPlan)
  • 10TB of usable space
  • Minimum 100MBit/s access rate
  • Minimal single points of failure (e.g. RAID 5, ZFS, or BTRFS)
  • Secure system
  • Minimum of five years of viable usage
  • Cost effective

Trade Study

I performed a trade study based on four major options:

  1. Upgrading the internal drives in our systems
  2. Continuing to use external hard drives
  3. Using cloud storage
  4. Using a NAS
                                 Internal  External  Cloud  Network
Multiple User Access                 2         3       3       3
Simultaneous User Access             2         2       3       3
File Sharing                         3         3       3       3
Media Sharing                        2         2       1       3
Access Control List                  3         2       3       3
> 99% Up Time                        0         0       3       3
Remote Backup                        3         3       2       3
> 10TB Usable Space                  1         1       3       3
> 100MBit/s bandwidth                3         3       1       3
Minimal Single Point of Failure      3         1       2       3
Secure System                        3         3       1       3
> 5 Years of Usage                   3         3       3       3
Total                               28        26      28      36

From this trade study, the differentiators pop out pretty quickly: accessibility and security.

Accessibility

Accessibility covers multiple and simultaneous user access, as well as bandwidth of data.

Single user storage

While increasing the internal local storage is often the best option for a single user, we are in a multi-user environment and the requirement for simultaneous access requires some sort of network connection. This requirement eliminates both per-user options of increasing either the internal or external disk space. Increasing the internal disk space would have been infeasible anyway, given that Rachel and I both use laptops.

Cloud Storage

Storing and sharing data on the Internet has become incredibly easy thanks to DropBox, Google Drive, Microsoft Spaces, Microsoft Azure, RackSpace Cloud Storage, Amazon S3, SpiderOak, and the like. In fact, many consumer Cloud storage solutions (such as DropBox) use enterprise systems (such as Amazon S3) to store their data. Because it’s provided as a network service, simultaneous data access with multiple users is possible.

The challenge of Cloud Storage is getting access to the data, which requires a working Internet connection and sufficient bandwidth to transport the data. Current bandwidth with Comcast is typically limited to no more than ~48 MBit/s, which is less than 50% of the 100MBit/s requirement. While higher data rates are possible, they are cost prohibitive at this time.

NAS

Network Attached Storage (NAS) devices are not new and have been around for decades. Within the last 10 years, though, their popularity in home and home-office environments has grown as the costs of implementation and maintenance have decreased. At its core a NAS is a computer with lots of internal storage that is shared with users over the home network. While more costly than simply increasing internal/external local storage, it provides significantly better access to the data.

Because the NAS is primarily accessed over the home network, the speed of access is limited by the connection of the NAS to the network and of the network to the end system. Directly connected systems (using an ethernet cable) can reach speeds of 1000 MBit/s, or about 300 MBit/s over wireless. This is significantly slower than directly connected drives, but faster than externally connected USB 2.0 drives and Cloud Storage. Most files would open in less than one second and all video files would be able to stream immediately with no buffering.

System Security

Securing data is the other challenge.

Cloud Storage

Because the data is stored by a third party there are considerable concerns about data safety, as well as the right to data privacy from allegedly lawful (but arguably constitutionally illegal) searches and seizures by government agencies.

I ran into similar issues with securing my Linode VPS, and ended up not taking any extraordinary steps because the bottom line is: without physical control of the data, the data is not secure.

The data that I’m looking to store for this project is certainly more sensitive than whatever I host on the web. There are many ways to implement asymmetric encryption to store files, but it would also require that each end-user have the decryption keys. Key management gets very complicated very quickly (trust me) and also throws out any hope of streaming media.

NAS

Since the NAS is local to the premises, physical control of the data is maintained, and items in your control are also given the superior protection of the 4th Amendment.

Additionally, the system is behind several layers of security that would make remote extraction of data highly difficult and improbable.

Designing a NAS

With a NAS selected, I had to figure out which one. But first, a short primer on the 10TB of usable space and what that means.

Hard Drives

Capacity

I arrived at the 10TB requirement by examining the amount of storage we were currently using and then extrapolating what we might need over the next five years, which is generally considered the useful-life period1:

[Figure: field failure rate pattern of HDDs]

While the “bathtub curve” has been widely used as a benchmark for life expectancy:

Changes in disk replacement rates during the first five years of the lifecycle were more dramatic than often assumed. While replacement rates are often expected to be in steady state in year 2-5 of operation (bottom of the “bathtub curve”), we observed a continuous increase in replacement rates, starting as early as in the second year of operation.2

Practically speaking, the data show that:

For drives less than five years old, field replacement rates were larger than what the datasheet MTTF suggested by a factor of 2-10. For five to eight year old drives, field replacement rates were a factor of 30 higher than what the datasheet MTTF suggested.3

Something to keep in mind if you’re building larger systems.

Redundancy

Unfortunately, there is no physical 10TB drive one can buy, but a series of smaller drives can be logically arranged to appear as 10TB. However, the danger of logically arranging these drives is that typically if any single drive fails, you would lose all the data. To prevent this, a redundancy system is employed that allows at least one drive to fail, but still have access to all the data.

Using a RAID array is the de facto way to do this, and RAID 5 has been the preferred implementation because it has one of the best storage efficiencies and only “requires that all drives but one be present to operate. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost.”

Annualized Failure Rate

Failure rates of hard drives are generally given as a Mean Time Between Failures (MTBF), although Seagate has started to use Annualized Failure Rate (AFR), which is seen as a better measure.

A common MTBF for hard drives is about 1,000,000 hours, which can be converted to AFR:

\textup{AFR}=1-e^{\left(\frac{-\textup{Annual Operating Hours}}{\textup{MTBF}}\right)}

Assuming the drives are powered all the time, the Annual Operating Hours is 8760, which gives an AFR of 0.872%. Over five years, it can be expected that 4.36% of the drives will fail.
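Plugging the numbers in:

\textup{AFR}=1-e^{\left(\frac{-8760}{1{,}000{,}000}\right)}\approx 0.872\%

The five-year figure is the simple linear approximation: 5 × 0.872% ≈ 4.36%.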

The AFR for the entire RAID array (not just a given disk) can be generally approximated as a Bernoulli trial.

For a RAID 5 array:
\textup{AFR}_{RAID5} = 1-(1-r)^{n}-nr(1-r)^{n-1}

For a RAID 6 array:
\textup{AFR}_{RAID6} = 1-(1-r)^{n}-nr(1-r)^{n-1}-{n\choose 2}r^{2}(1-r)^{n-2}

Efficiency of Space and Failure Rate

Using a five year failure rate of 4.36%, the data show that RAID 6 is significantly more tolerant to failure than RAID 5, which should not be a surprise: RAID 6 can lose two disks while RAID 5 can only lose one.

What was more impressive to me is how quickly RAID 5 failure rates grow (as a function of number of disks), especially when compared to RAID 6 failure rates.
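To put rough numbers on that, using the formulas above with the five-year per-disk failure rate of r = 4.36%:

\textup{AFR}_{RAID5}(n=5)=1-(0.9564)^{5}-5(0.0436)(0.9564)^{4}\approx 1.7\%
\textup{AFR}_{RAID6}(n=5)\approx 0.08\%
\textup{AFR}_{RAID5}(n=10)\approx 6.8\%
\textup{AFR}_{RAID6}(n=10)\approx 0.8\%

Doubling the number of disks roughly quadruples the five-year RAID 5 array failure rate, while RAID 6 stays under one percent in both cases.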

Technically, a Bernoulli trial requires the disk failures to be statistically independent; however, there is strong evidence4 for the existence of correlations between disk replacement interarrivals. In short, once a disk fails there is actually a higher chance that another disk will fail within a short period of time. However, I believe the Bernoulli trial is still helpful to illustrate the relative failure rate differences between RAID 5 and RAID 6.

Bit Error Rate

Even if you ignore the data behind AFR, single disk fault tolerance is still no longer good enough due to non-recoverable read errors – the bit error rate (BER). For most drives, the BER is <1 in 10^14, “which means that once every 100,000,000,000,000 bits, the disk will very politely tell you that, so sorry, but I really, truly can’t read that sector back to you.”

One hundred trillion bits is about 12 terabytes (which is roughly the capacity of the planned system), and “when a disk fails in a RAID 5 array and it has to rebuild there is a significant chance of a non-recoverable read error during the rebuild (BER / UER). As there is no longer any redundancy the RAID array cannot rebuild, this is not dependent on whether you are running Windows or Linux, hardware or software RAID 5, it is simple mathematics.”

The answer is dual disk fault tolerance, such as RAID 6, with one to guard against a whole disk failure and the other to, essentially, guard against the inevitable bit error that will occur.

RAID or ZFS

I originally wanted to use ZFS RAID-Z2, which is a dual disk fault tolerant file system. While it offers similar features to RAID 6, RAID 6 still needs a file system (such as ext4) put on top of it. ZFS RAID-Z2 is a combined system, which is important because:

From blogs.oracle.com:

“RAID-5 (and other data/parity schemes such as RAID-4, RAID-6, even-odd, and Row Diagonal Parity) never quite delivered on the RAID promise — and can’t — due to a fatal flaw known as the RAID-5 write hole. Whenever you update the data in a RAID stripe you must also update the parity, so that all disks XOR to zero — it’s that equation that allows you to reconstruct data when a disk fails. The problem is that there’s no way to update two or more disks atomically, so RAID stripes can become damaged during a crash or power outage.

RAID-Z is a data/parity scheme like RAID-5, but it uses dynamic stripe width. Every block is its own RAID-Z stripe, regardless of blocksize. This means that every RAID-Z write is a full-stripe write. This, when combined with the copy-on-write transactional semantics of ZFS, completely eliminates the RAID write hole. RAID-Z is also faster than traditional RAID because it never has to do read-modify-write.

Whoa, whoa, whoa — that’s it? Variable stripe width? Geez, that seems pretty obvious. If it’s such a good idea, why doesn’t everybody do it?

Well, the tricky bit here is RAID-Z reconstruction. Because the stripes are all different sizes, there’s no simple formula like “all the disks XOR to zero.” You have to traverse the filesystem metadata to determine the RAID-Z geometry. Note that this would be impossible if the filesystem and the RAID array were separate products, which is why there’s nothing like RAID-Z in the storage market today. You really need an integrated view of the logical and physical structure of the data to pull it off.”

However, it’s not quite ready for prime time, and more importantly OpenMediaVault does not support it yet5.

So RAID 6 it is.

Cost

RAID 6 is pretty straightforward and provides (n−2) × capacity of storage. To provide at least 10 TB, I would need five 4 TB drives (or six 3 TB drives, or seven 2 TB drives, or twelve 1 TB drives, etc).
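For example, with five 4 TB drives in RAID 6:

(5-2)\times 4\,\textup{TB}=12\,\textup{TB}\;\textup{usable}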

Western Digital’s Red NAS drives are designed for 24×7 operation (versus other drives which are geared toward 8 hours of daily operation) and are widely regarded as the best drives to use for a NAS.

Their cost structure breaks out as such:

Capacity   Cost/Disk   Cost/GB
1 TB       $70         $0.0700
2 TB       $99         $0.0495
3 TB       $135        $0.0450
4 TB       $180        $0.0450

At first glance, it appears that there’s no cost/GB difference between the 3 TB and 4 TB drives, but using smaller drives is more cost-effective for a given usable capacity because the cost of the redundant disks is amortized over more total disks, which brings the cost per usable GB down faster.

[Figure: RAID 6 cost vs. usable space]

However, the actual cost per GB is the same (between 3 TB and 4 TB) for a given number of disks; you just get more usable space when using five 4 TB drives versus five 3 TB drives.
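A quick check with the prices above, counting cost per usable GB:

5\times\$135=\$675\;\textup{for}\;(5-2)\times 3\,\textup{TB}=9\,\textup{TB}\;\Rightarrow\;\$0.075/\textup{GB}
5\times\$180=\$900\;\textup{for}\;(5-2)\times 4\,\textup{TB}=12\,\textup{TB}\;\Rightarrow\;\$0.075/\textup{GB}
6\times\$135=\$810\;\textup{for}\;(6-2)\times 3\,\textup{TB}=12\,\textup{TB}\;\Rightarrow\;\$0.0675/\textup{GB}

Five 3 TB and five 4 TB drives both come out to $0.075 per usable GB, but six 3 TB drives hit the same 12 TB of usable space at $0.0675 per usable GB.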

Given that I was trying to keep things small, and some reviews indicated there are some possible manufacturing issues with the 3 TB WD Red drives, I decided to splurge a bit6 and go for the 4 TB drives.

Also, the cost per GB has, for the last 30+ years, decreased by half every 14 months. This corresponds to roughly an order of magnitude every 5 years (i.e. if it costs $0.045/GB today, five years ago it would have cost about $0.45/GB and ten years ago it would have cost about $4.50/GB). If we wait 14 months, presumably it would cost $450 to purchase five new 4TB drives. If we wait 28 months, the cost should halve again and it would presumably cost about $225 to purchase five new 4TB drives.

However, since we need drives now, whatever we spend becomes a sunk cost. The difference between buying five 2TB drives or five 4TB drives now is $181. However, if we buy them in 28 months, we would have to spend close to $225…or 24% more than we would have to pay now.

Since we will need the additional space sooner than 2.3 years from now, it actually makes financial sense to buy the 4TB drives now.

The Rest of the System

With the hard drives figured out, it’s time to figure out the rest of the system. There are basically two routes: build your own or buy an appliance.

Build your own NAS

My preliminary research quickly pointed to HP’s ProLiant MicroServer as an ideal candidate: it was small, reasonably powerful, and a great price. Since I’ve built up computers before, I also wanted to price out what it would cost to build a system from scratch.

I was able to design a pretty slick system:
[Image: BitFenix-based build]

Buy an Appliance

After careful review, Synology is the only company that I believe builds an appliance worth considering. Their DiskStation Manager operating system seemed solid when I tried it, there was an easy and known method to get CrashPlan working on their x86-based system, and their system stability has garnered lots of praise.

Initially, I was looking at:

  • DS412+
  • DS414
  • DS1513+
  • DS1813+

However, the DS41x units only hold 4 drives and that was not going to be enough to have at least 10TB of RAID6 usable storage.

System Trade Study

                      HP G7   HP G8   DS1513+  DS1813+  Homebuilt
x86-based             Yes     Yes     Yes      Yes      Yes
> 2GB RAM             2GB     2GB     2GB      2GB      4GB
— Max RAM             16GB    16GB    4GB      4GB      16GB
> 10TB Usable Space   12 TB   12 TB   12 TB    24 TB    12 TB
> 100MBit/s NIC       1GBit   1GBit   1GBit    1GBit    1GBit
Cost7                 $415    $515    $800     $1000    $449

The main differences between the G7 and the G8 are:

  • G8 uses an Intel Celeron G1610T Dual Core 2.3 GHz instead of the AMD Turion II Model Neo N54L 2.2GHz…no real benefit
  • G8 has a second ethernet plug, however this is no real benefit since our configuration would not use it
  • G8 has USB 3.0, which would be nice but can be added to the G7 for $30.
  • G8 has only one PCI Express slot, which is a downgrade since the G7 version has two slots.
  • G8 has an updated RAID controller, however this is no real benefit since it would not be used in our configuration
  • G8 has the iLO Management Engine, however this is no real benefit for our configuration
  • The G8 HP BIOS is digitally signed, “reducing accidental programming and preventing malicious efforts to corrupt system ROM.” It also means I cannot use a modified BIOS…which is bad.
  • The G8 supports SATA III, which is faster than the G7’s SATA II…but probably not a differentiator for our configuration.

Conclusion

Perhaps the most important element is getting buy-in from your wife. All of this analysis is fun, but at the end of the day, can I convince my wife to spend over $1000 on a data storage system that will sit in the closet – my side of the closet?

We selected the HP ProLiant MicroServer G7, which I think is a good choice.

I really wanted to build a server from scratch, but it can be a risky endeavour. I tried to pick good quality parts (those with good ratings, lots of reviews, and from vendors I know), but it can be a crapshoot.

For a first time major NAS system like this, I wanted something more reliable. I believe the HP ProLiant MicroServer G7 will be a reliable system and will meet our needs; lots of NAS enthusiasts use it, which is a big plus because it means that it works well and there are lots of people to ask questions of.

For next time (in five years or so), I want to do some more analysis of our data storage over time, which I will be able to track.

I’m also curious what the bottlenecks will be. We currently use a mix of 802.11n over 2.4 GHz and 5 GHz, but I’ve thought about putting in a GigE CAT5 cable.

RAID 6 still has the write hole issue, and I hope it doesn’t cause a problem.

I’m not terribly thrilled with the efficiency of 3+2 (three storage disks plus two parity disks), but there’s not really a better way to slice it unless I add more disks. And it may be that adding more, smaller disks actually does make a difference.

Resources



  1. J. Yang and F.-B. Sun. A comprehensive review of hard-disk drive reliability. In Proc. of the Annual Reliability and Maintainability Symposium, 1999. 

  2. Bianca Schroeder and Garth A. Gibson. 2007. Disk failures in the real world: what does an MTTF of 1,000,000 hours mean to you?. In Proceedings of the 5th USENIX conference on File and Storage Technologies (FAST ’07)  

  3. Bianca Schroeder and Garth A. Gibson. 2007. Disk failures in the real world: what does an MTTF of 1,000,000 hours mean to you?. In Proceedings of the 5th USENIX conference on File and Storage Technologies (FAST ’07)  

  4. Bianca Schroeder and Garth A. Gibson. 2007. Disk failures in the real world: what does an MTTF of 1,000,000 hours mean to you?. In Proceedings of the 5th USENIX conference on File and Storage Technologies (FAST ’07)  

  5. NAS4Free and FreeNAS both support ZFS RAID-Z, but they run FreeBSD which does not have native support for CrashPlan 

  6. for the capacity, it’s an 11% increase in per GB cost 

  7. Not including hard drives 

Setting up OpenMediaVault

I hope everyone had a merry Thanksgiving! I spent some of my time setting up OpenMediaVault on an Acer Aspire 3610 that Kolby gave me. It’s a pretty small machine, running an Intel Atom 330 1.6 GHz with 2 GB of RAM1, but I think it will be perfect for running my new NAS!

I’ve been dreaming of a NAS for some time; I’ve contemplated building one for at least two years, but could never justify the cost. What makes this different is that it doesn’t require any new outlay for equipment–I’m literally using what I have already!

I settled on OpenMediaVault because it was based on Debian, which I have more experience with2.

Here are some configuration tricks I needed to use in order to get it to work How I Like It™:

CrashPlan

I use CrashPlan on my laptop and it’s great3! If you don’t have a backup plan, you need to stop reading and get one now. Seriously. What would you do if your computer was stolen, or the hard drive went kaput, or you accidentally deleted something? I want to make sure that the data my NAS is storing is just as safe as the data on my laptop.

There’s a guide over on the OpenMediaVault forums which basically echoes the official CrashPlan Linux installation guide. Everything went okay until I tried to launch the desktop client and couldn’t get X11 forwarding to work. I was eventually able to get a headless client to run from my laptop connected over a tunneled SSH, but I didn’t want to have to muck with the ui.properties files every time I wanted to check on things. I also wanted to be able to run both my client and the OMV client simultaneously. So I went back and did some more work on the X11 issue and here’s what I found needed to happen:

For the purposes of this, the IP address of the OpenMediaVault server is 172.16.131.130

Log in to your terminal via SSH:

ssh root@172.16.131.130

Note: if you get ssh: connect to host 172.16.131.130 port 22: Connection refused, you need to enable SSH via the OMV online console first!

Prerequisites:

apt-get update
apt-get install xorg
echo "X11UseLocalHost no" >> /etc/ssh/sshd_config
/etc/init.d/ssh restart
apt-get install openjdk-6-jre

Install CrashPlan

cd /tmp
wget http://download.crashplan.com/installs/linux/install/CrashPlan/CrashPlan_3.4.1_Linux.tgz
tar -zxvf CrashPlan_3.4.1_Linux.tgz
cd CrashPlan-install
./install.sh

Answer yes to installing Java, and answer all the other questions as required. If you just press return, the defaults will work just fine (and that’s what I used).

Log out and back in with X11 forwarding enabled, then run CrashPlan:

exit
ssh -X root@172.16.131.130
/usr/local/bin/CrashPlanDesktop

Give it a few seconds and you’ll see that familiar CrashPlan green.

Other notes:

  • It was helpful to debug with ssh -v
  • Looking through /usr/local/crashplan/log/ui_error.log was the key to understanding that the version of Java downloaded by CrashPlan was throwing errors (such as java.lang.UnsatisfiedLinkError: no swt-pi-gtk-3448 or swt-pi-gtk in swt.library.path) and needed to be updated.

HFS+

I have a couple of drives that are formatted in HFS+ that I wanted to use without having to reformat them. As a side note, I think NTFS is probably the best bet for multisystem compatibility when there’s the potential for dealing with files larger than 4GB. A comment on the OMV blog by norse laid the basic groundwork, but I also had to pull some information from Raam Dev’s blog about configuring HFS for Debian.

Note: hfsprogs 332.25-9 and below has a bug where “[f]ormatting a partition as HFSPLUS does not provide the partition with a UUID.” The workaround is to boot to OS X and use the disk utility to format the partition, but this doesn’t work as well when you’re using a VM. The solution is to use the unstable 332.25-10 release of hfsprogs.

echo "deb http://ftp.debian.org/debian testing main contrib" >> /etc/apt/sources.list
apt-get update
apt-get install hfsplus hfsprogs hfsutils
sed -i '$ d' /etc/apt/sources.list
apt-get update
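To confirm you actually ended up with the newer hfsprogs, check the installed version:

dpkg -s hfsprogs | grep Version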

Then modify /var/www/openmediavault/rpcfilesystemmgmt.inc to be able to handle and mount HFS+ disks:

48c48
< 					  '"jfs","xfs","hfsplus"]},
---
> 					  '"jfs","xfs"]},
118c118
< 			  "umsdos", "vfat", "ufs", "reiserfs", "btrfs","hfsplus"))) {
---
> 			  "umsdos", "vfat", "ufs", "reiserfs", "btrfs"))) {
664,667d663
< 			break;
< 		case "hfsplus":
< 			$fsName = $fs->getUuid();
< 			$opts = "defaults,force"; //force,rw,exec,auto,users,uid=501,gid=20";

Finally, you may need to fsck your HFS+ disk if it’s being stubborn and mounting in read-only mode. With the partition unmounted:

fsck.hfsplus -f /dev/sdaX

WiFi

Getting WiFi to work took me down a rabbit hole that ended up being unnecessary. First, verify which wireless card you have. The easiest way to do this is using lspci:

apt-get install pciutils
lspci | grep -i network

You should see a line like:
05:00.0 Network controller: RaLink RT3090 Wireless 802.11n 1T/1R PCIe

Installing the RT3090 firmware is pretty straightforward:

echo "deb http://ftp.us.debian.org/debian squeeze main contrib non-free" >> /etc/apt/sources.list
apt-get update
aptitude install firmware-ralink wireless-tools wpasupplicant
sed -i '$ d' /etc/apt/sources.list
apt-get update

Edit /etc/network/interfaces to add the following:

auto wlan0
iface wlan0 inet dhcp
    wpa-ssid mynetworkname
    wpa-psk mysecretpassphrase

Note: the Debian guide recommends restricting the permissions of /etc/network/interfaces to prevent disclosure of the passphrase.
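For example, something along these lines should do it (assuming the file is owned by root):

chmod 0600 /etc/network/interfaces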

Then run:

ifup wlan0
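If everything worked, the interface should show an association and an IP address:

iwconfig wlan0
ifconfig wlan0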

That’s all I have for now. I’m working on some methods for backing up other data to the NAS (such as my web site and GMail), which I’ll write up later.


  1. we tried using it to stream the Olympics and it wouldn’t even do that very well, though I think that was due to the nVidia chipset not playing well with Ubuntu 

  2. FreeNAS and NAS4Free are both based on FreeBSD 

  3. I previously used Mozy, but they started charging by the GB, which wasn’t going to work for me