I’m an explorer, ok? I get curious about everything and I want to investigate all kinds of stuff.
Budapest, Hungary
19 July 2009
18.0 mm || 1/1250 || f/6.3 || ISO200 || NIKON D70
Budapest, Budapest, Magyarország
18.0 mm || 1/640 || f/6.3 || ISO200 || NIKON D70
Budapest, Budapest, Magyarország
18.0 mm || 1/40 || f/6.3 || ISO1600 || NIKON D70
34.0 mm || 1/50 || f/6.3 || ISO800 || NIKON D70
18.0 mm || 1/800 || f/6.3 || ISO200 || NIKON D70
Budapest, Budapest, Magyarország
18.0 mm || 0.6 || f/6.3 || ISO200 || NIKON D70
Budapest, Budapest, Magyarország
18.0 mm || 1/10 || f/8.0 || ISO800 || NIKON D70
Budapest, Budapest, Magyarország
18.0 mm || 1/10 || f/8.0 || ISO800 || NIKON D70
Budapest, Budapest, Magyarország
18.0 mm || 1/13 || f/8.0 || ISO1600 || NIKON D70
Budapest, Budapest, Magyarország
70.0 mm || 1/2000 || f/4.5 || ISO200 || NIKON D70
Budapest, Budapest, Magyarország
70.0 mm || 1/1600 || f/4.5 || ISO200 || NIKON D70
Budapest, Budapest, Magyarország
22.0 mm || 1/500 || f/3.8 || ISO200 || NIKON D70
Budapest, Budapest, Magyarország
18.0 mm || 1/640 || f/3.5 || ISO200 || NIKON D70
Budapest, Budapest, Magyarország
18.0 mm || 1/200 || f/3.5 || ISO200 || NIKON D70
Budapest, Budapest, Magyarország
18.0 mm || 1/60 || f/3.5 || ISO200 || NIKON D70
Budapest, Budapest, Magyarország
27.0 mm || 1/200 || f/3.8 || ISO200 || NIKON D70
Budapest, Budapest, Magyarország
50.0 mm || 1/40 || f/1.8 || ISO400 || NIKON D70
Budapest, Budapest, Magyarország
Samos, Greece
30 June 2009
Batteries fully recharged, we (Cecilie and the Three Musketeers) started the day early. The plan was to rent either two mopeds or one car, with the former preferred because it was cheaper (by just a couple of Euros). We walked into town, and the consensus between Anthony (from ISTA), my understanding of Greek and European Union laws, and, most importantly, the rental agency was that we would have to rent a car because I did not have an International Driving Permit.
So it was. We got some groceries for breakfast and lunch and were on our way to explore Samos. Pedro, who was the most flamboyant man I have ever met, had recommended some places to visit, and, with marked-up map in hand, we were off.
18.0 mm || 1/200 || f/3.5 || ISO200 || NIKON D70
The first stop was to get some petrol, as rental cars, at least in Greece, come with an empty tank. We drove along the coast, eventually coming to the river where we could walk up to some waterfalls.
The walk quickly turned into a river excursion. Getting past the first waterfall with everyone’s bags was an interesting feat. The deep water at the bottom made positioning difficult. Charlie was able to climb to the top of the waterfall and, with some careful positioning, the rest of us were able to pass our bags up to him before climbing up ourselves.
The second waterfall proved a bit more difficult and everyone, save me, dumped their bags on a small, dry rock outcropping. I slipped the rain-jacket over my camera bag and prayed that I didn’t fall.
We wandered up the river more, passing three more waterfalls. We were able to jump off a couple of them, but the others were too dangerous to jump from, either because of proximity to other rocks or lack of depth at the bottom.
50.0 mm || 1/400 || f/4.5 || ISO200 || NIKON D70
Karlovassi, North Aegean, Ελλάδα
18.0 mm || 1/250 || f/4.0 || ISO200 || NIKON D70
Karlovassi, North Aegean, Ελλάδα
52.0 mm || 1/50 || f/4.5 || ISO200 || NIKON D70
Karlovassi, North Aegean, Ελλάδα
18.0 mm || 1/60 || f/4.0 || ISO200 || NIKON D70
Karlovassi, North Aegean, Ελλάδα
38.0 mm || 1/60 || f/4.2 || ISO200 || NIKON D70
Karlovassi, North Aegean, Ελλάδα
34.0 mm || 1/60 || f/4.2 || ISO200 || NIKON D70
Karlovassi, North Aegean, Ελλάδα
46.0 mm || 1/80 || f/4.5 || ISO200 || NIKON D70
Karlovassi, North Aegean, Ελλάδα
When we got back to the bottom, we were famished. We drove in search of a place to eat near the sea, such as a beach, and ended up driving to the end of a commercial pier, which was amazing! No one seemed to care (just try doing that in the US of A), so we sat on the ledge of the breakwater, eating warm homemade sandwiches and throwing bits of ham, cheese, and bread to the schools of fish below our dangling feet. Afterwards, we dove off the end of the pier into the warm Aegean Sea.
18.0 mm || 1/5000 || f/3.5 || ISO200 || NIKON D70
Karlovasi, North Aegean, Ελλάδα
18.0 mm || 1/1000 || f/3.5 || ISO200 || NIKON D70
Karlovasi, North Aegean, Ελλάδα
18.0 mm || 1/2000 || f/3.5 || ISO200 || NIKON D70
Karlovasi, North Aegean, Ελλάδα
60.0 mm || 1/800 || f/4.5 || ISO200 || NIKON D70
Karlovasi, North Aegean, Ελλάδα
It was picturesque.
18.0 mm || 1/2500 || f/3.5 || ISO200 || NIKON D70
Karlovasi, North Aegean, Ελλάδα
We continued to explore the island of Samos. We drove up and over the island to the back side, stopping in a small town along the way to get some dessert.
18.0 mm || 1/640 || f/3.5 || ISO200 || NIKON D70
Ayios Konstandinos, North Aegean, Ελλάδα
29.0 mm || 1/2000 || f/4.0 || ISO200 || NIKON D70
Ayios Konstandinos, North Aegean, Ελλάδα
Once on the backside of the island, I settled down for a nap on the beach while everyone else walked around town.
27.0 mm || 1/2000 || f/3.8 || ISO200 || NIKON D70
Pythagoreio, North Aegean, Ελλάδα
Not wanting to miss another amazing sunset, we headed back for our side of the island and returned the car. We walked alongside the pier as the sun set in the distance. Another amazing day capped by another amazing sunset.
70.0 mm || 1/400 || f/4.5 || ISO200 || NIKON D70
Samos, North Aegean, Ελλάδα
18.0 mm || 1/500 || f/3.5 || ISO200 || NIKON D70
Samos, North Aegean, Ελλάδα
Back at the hotel, we all hung out – Cecilie and the Three Musketeers – one last time before heading off to bed. Charlie and I would have to get up early to make our sailing for Paros.
There’s a very super serious absolutely critical patch for Internet Explorer that you need to download right away. I usually (never?) blog about this type of thing, but this is a rather serious exploit (they disabled Internet Explorer on all of our campus computers, which they’ve never done before). Anyway, Microsoft issued a patch today and I’m pleading with you to go and download it:
http://www.microsoft.com/technet/security/bulletin/ms08-078.mspx
Alternatively, you may consider downloading and using:
Mozilla Firefox: http://www.mozilla.com/en-US/firefox/
Google Chrome: http://www.google.com/chrome
If you already use Firefox or Chrome, you still should install the patch, just in case.
If you’re using a Mac, you’re okay and don’t need to do anything.
Read more at http://gadgetwise.blogs.nytimes.com/2008/12/17/microsoft-issues-critical-update-for-its-browser/
The 3rd annual Space Exploration Conference and Exhibition was in Denver this year and we were invited to attend this invitation-only event. One might think that invitation-only events would be rather dull, however I can easily say this was one of the best events I’ve ever been to.
NASA tasked Boeing with getting together the best of the best when it comes to space systems. And that’s what Boeing did.
When was the last time you stood next to America’s first liquid hydrogen fueled rocket engine, a Pratt and Whitney RL-10?
In fact, Boeing still uses the RL-10 in their Delta IV. And of the three major rocket engines used in America (Boeing’s Delta IV, Lockheed Martin’s Atlas V and NASA’s Space Shuttle Main Engines), all of them are made by Pratt and Whitney.
Lockheed Martin had a robot there, Sprocket D. Rocket. Now originally, I thought it was just a simple AI robot. But then I listened to it talk and interact with other people and I thought it was just a remote controlled robot with a human behind it all. Later, someone told me that people would ask it esoteric questions in foreign languages and it would respond. If this is the case, then it fooled me and successfully passed my Turing test.
The conference concluded with a panel of persons from all different aspects of the space industry, including a gentleman by the name of Pat Schondel, the Vice President of Business Development for Boeing NASA Systems, a part of Boeing Integrated Defense Systems. After the panel was over, I went over and talked with him for a few minutes and picked his brain a bit about Boeing, what’s going on in the space sector, and internship opportunities at Boeing.
Talking to Mr. Schondel turned out to be one of the highlights of my time since I’ve been trying to find out about Boeing’s space interests for some time now, but Seattle really isn’t the place to do that. Mr. Schondel was able to fill in some gaps for me and give me the ever so slightest glimpse of what goes on down in Houston.
We drove the entire Seattle-Broomfield, CO trip in just under 24 hours. It was quite a feat of driving; we drove essentially nonstop the entire way. I went to my college orientation today and learned quite a few new things. The day was long and I was really tired when I got back. I’m glad I only have to do this once. I checked out the Greek scene and decided to go Greek, then changed my mind: I won’t go Greek, but I’ll at least check out the Greek thing the first week of school (Rush).
On my latest flight from Seattle to Heathrow I took a stab at making a time lapse.
We were scheduled to leave Seattle at 7:20pm1, so I was hoping to capture the aurora borealis during the night portion of the flight, since we would be flying at a pretty high latitude, even dipping into the Arctic Circle for a bit.
Unfortunately, I failed to account for the fact that darkness is at a premium during the summer, which I should have remembered given my prior travels to high northern latitudes. So we never reached night and I didn’t capture any auroras.
It was still a good test and I’ve learned some things to refine for next time2. I’ll be getting a larger SD card for sure and will probably use a slightly different mounting technique so I don’t have to shoot through the GoPro case. I also want to figure out a window cover I can put over it so A) my reflection doesn’t show up; and B) I’m not blasting the entire cabin with light while everyone tries to sleep (sorry guys!).
Gear list:
One year ago, I spent my Thanksgiving setting up OpenMediaVault on a computer I had just hanging around. It has served me faithfully, but several things became clear, the most important being that external hard drives are not designed to be continuously powered.
I had two drives fail and growing concerns about the remaining disks. I use CrashPlan to back up the data, so I wasn’t concerned with losing the data, but I was concerned with having it available when I needed it.
I also had a huge increase in storage requirements, due mostly to my video archiving project from last Christmas (which I still need to write up).
I also got married this year, and Rachel had several external drives I was hoping to consolidate. Ironically, her computer also died last week…good thing we had a backup!
The need was clear: a more robust NAS with serious storage requirements.
Minimum Requirements:
I performed a trade study based on four major options:
| | Internal | External | Cloud | Network |
|---|---|---|---|---|
| Multiple User Access | 2 | 3 | 3 | 3 |
| Simultaneous User Access | 2 | 2 | 3 | 3 |
| File Sharing | 3 | 3 | 3 | 3 |
| Media Sharing | 2 | 2 | 1 | 3 |
| Access Control List | 3 | 2 | 3 | 3 |
| > 99% Up Time | 0 | 0 | 3 | 3 |
| Remote Backup | 3 | 3 | 2 | 3 |
| > 10TB Usable Space | 1 | 1 | 3 | 3 |
| > 100MBit/s bandwidth | 3 | 3 | 1 | 3 |
| Minimal Single Point of Failure | 3 | 1 | 2 | 3 |
| Secure System | 3 | 3 | 1 | 3 |
| > 5 Years of Usage | 3 | 3 | 3 | 3 |
| **Total** | **28** | **26** | **28** | **36** |
From this trade study, the differentiators pop out pretty quickly: accessibility and security.
Accessibility covers multiple and simultaneous user access, as well as bandwidth of data.
While increasing the internal local storage is often the best option for a single user, we are in a multi-user environment, and the requirement for simultaneous access requires some sort of network connection. This requirement eliminates both per-user options of increasing either the internal or external disk space. Increasing the internal disk space would have been infeasible anyway, given that Rachel and I both use laptops.
Storing and sharing data on the Internet has become incredibly easy thanks to the likes of DropBox, Google Drive, Microsoft Spaces, Microsoft Azure, RackSpace Cloud Storage, Amazon S3, SpiderOak, and the like. In fact, many consumer Cloud storage solutions (such as DropBox) use enterprise systems (such as Amazon S3) to store their data. Because it’s provided as a network service, simultaneous data access with multiple users is possible.
The challenge of Cloud Storage is getting access to the data, which requires a working Internet connection and sufficient bandwidth to transport the data. Our current bandwidth with Comcast is typically limited to no more than ~48MBit/s, which is less than 50% of the 100MBit/s requirement. While higher data rates are possible, they are cost prohibitive at this time.
Network Attached Storage devices are not a new thing and have been around for decades. Within the last 10 years, though, their popularity in home and home-office environments has grown as the costs of implementation and maintenance have decreased. At its core, a NAS is a computer with lots of internal storage that is shared with users over the home network. While more costly than simply increasing internal/external local storage, it provides significantly better access to the data.
Because the NAS is primarily accessed over the home network, the speed of access is limited to the connection speed of the NAS to the network and of the network to the end system. Directly connected systems (using an ethernet cable) can reach speeds of 1000 MBit/s, or about 300 MBit/s over wireless. This is significantly slower than directly attached internal drives, but faster than externally connected USB 2.0 drives and Cloud Storage. Most files would open in less than one second and all video files would stream immediately with no buffering.
Securing data is the other challenge.
Because the data is stored by a third-party there are considerable concerns about data safety, as well as the right to data privacy from allegedly lawful (but arguably constitutionally illegal) search and seizures by government agencies.
I ran into similar issues with securing my Linode VPS, and ended up not taking any extraordinary steps because the bottom line is: without physical control of the data, the data is not secure.
The data I’m looking to store for this project is certainly more sensitive than whatever I host on the web. There are many ways to implement asymmetric encryption to store files, but each end-user would also need the decryption keys. Key management gets very complicated very quickly (trust me) and also throws out any hope of streaming media.
Since the NAS is local to the premises, physical control of the data is maintained, and such items in your control are also given the superior protection of the 4th Amendment.
Additionally, the system is behind several layers of security that would make remote extraction of data highly difficult and improbable.
With a NAS selected, I had to figure out which one. But first, a short primer on the 10TB of usable space and what that means.
I arrived at the 10TB requirement by examining the amount of storage we were currently using and then extrapolating what we might need over the next five years, which is generally considered the useful-life period1:
While the “bathtub curve” has been widely used as a benchmark for life expectancy:
Changes in disk replacement rates during the first five years of the lifecycle were more dramatic than often assumed. While replacement rates are often expected to be in steady state in year 2-5 of operation (bottom of the “bathtub curve”), we observed a continuous increase in replacement rates, starting as early as in the second year of operation.2
Practically speaking, the data show that:
For drives less than five years old, field replacement rates were larger than what the datasheet MTTF suggested by a factor of 2-10. For five to eight year old drives, field replacement rates were a factor of 30 higher than what the datasheet MTTF suggested.3
Something to keep in mind if you’re building larger systems.
Unfortunately, there is no physical 10TB drive one can buy, but a series of smaller drives can be logically arranged to appear as 10TB. However, the danger of logically arranging these drives is that typically if any single drive fails, you would lose all the data. To prevent this, a redundancy system is employed that allows at least one drive to fail, but still have access to all the data.
Using a RAID array is the de facto way to do this, and RAID 5 has been the preferred implementation because it has one of the best storage efficiencies and only “requires that all drives but one be present to operate. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost.”
Failure rates of hard drives are generally given as a Mean Time Between Failures (MTBF), although Seagate has started to use Annualized Failure Rate (AFR), which is seen as a better measure.
A common MTBF for hard drives is about 1,000,000 hours, which can be converted to AFR:
Assuming the drives are powered all the time, the Annual Operating Hours is 8760, which gives an AFR of 0.872%. Over five years, it can be expected that 4.36% of the drives will fail.
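As a sanity check, that MTBF-to-AFR conversion can be sketched in a few lines, using the standard exponential-failure assumption and the 1,000,000-hour MTBF and 8760 annual operating hours from above (the function name is my own):

```python
import math

def mtbf_to_afr(mtbf_hours, annual_operating_hours=8760):
    """Convert Mean Time Between Failures to an Annualized Failure Rate,
    assuming an exponential failure distribution."""
    return 1 - math.exp(-annual_operating_hours / mtbf_hours)

afr = mtbf_to_afr(1_000_000)        # drives powered 24/7
print(f"AFR: {afr:.3%}")            # AFR: 0.872%

# Five-year expectation, using the simple linear approximation above:
print(f"Five years: {5 * afr:.2%}") # Five years: 4.36%
```

(Strictly compounding the annual rate over five years gives a slightly lower number, about 4.3%, but the linear approximation is close enough here.)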
The AFR for the entire RAID array (not just a given disk) can be generally approximated as a Bernoulli trial.
For a RAID 5 array:
For a RAID 6 array:
Using a five year failure rate of 4.36%, the data show that RAID 6 is significantly more tolerant to failure than RAID 5, which should not be a surprise: RAID 6 can lose two disks while RAID 5 can only lose one.
What was more impressive to me is how quickly RAID 5 failure rates grow (as a function of number of disks), especially when compared to RAID 6 failure rates.
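A minimal sketch of that Bernoulli-trial approximation, using the 4.36% five-year per-disk failure probability derived above (`array_failure_prob` is my own helper name; RAID 5 tolerates one failed disk, RAID 6 tolerates two):

```python
from math import comb

def array_failure_prob(n_disks, p_disk, fault_tolerance):
    """Probability the array is lost, i.e. more than `fault_tolerance`
    disks fail, treating disk failures as independent Bernoulli trials."""
    survive = sum(comb(n_disks, k) * p_disk**k * (1 - p_disk)**(n_disks - k)
                  for k in range(fault_tolerance + 1))
    return 1 - survive

p = 0.0436  # five-year per-disk failure probability
for n in (5, 8, 12):
    raid5 = array_failure_prob(n, p, fault_tolerance=1)
    raid6 = array_failure_prob(n, p, fault_tolerance=2)
    print(f"{n} disks: RAID 5 {raid5:.2%} vs RAID 6 {raid6:.3%}")
```

For five disks this works out to roughly a 1.7% chance of losing a RAID 5 array over five years versus well under 0.1% for RAID 6, and the RAID 5 number grows much faster as disks are added.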
Technically a Bernoulli trial requires the disk failures to be statistically independent, however there is strong evidence4 for the existence of correlations between disk replacement interarrivals; in short, once a disk fails there is actually a higher chance that another disk will fail within a short period of time. However, I believe the Bernoulli trial is still helpful to illustrate the relative failure rate differences between RAID 5 and RAID 6.
Even if you ignore the data behind AFR, single disk fault tolerance is still no longer good enough due to non-recoverable read errors – the bit error rate (BER). For most drives, the BER is less than 1 in 10^14, “which means that once every 100,000,000,000,000 bits, the disk will very politely tell you that, so sorry, but I really, truly can’t read that sector back to you.”
One hundred trillion bits is about 12 terabytes (which is roughly the capacity of the planned system), and “when a disk fails in a RAID 5 array and it has to rebuild there is a significant chance of a non-recoverable read error during the rebuild (BER / UER). As there is no longer any redundancy the RAID array cannot rebuild, this is not dependent on whether you are running Windows or Linux, hardware or software RAID 5, it is simple mathematics.”
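That “simple mathematics” can be sketched as the chance of hitting at least one unrecoverable read error while reading the whole array back during a rebuild. This is a rough model that treats every bit read as an independent trial; the function name is my own:

```python
import math

def ure_during_rebuild(read_bytes, ber=1e-14):
    """Probability of at least one unrecoverable read error (URE)
    while reading `read_bytes` back, at bit error rate `ber`."""
    bits = read_bytes * 8
    # 1 - (1 - ber)^bits, computed stably in log space
    return 1 - math.exp(bits * math.log1p(-ber))

# Rebuilding a ~12 TB array means reading roughly 12e12 bytes
p = ure_during_rebuild(12e12)
print(f"Chance of a URE during a full rebuild: {p:.0%}")
```

At this system’s size the model puts the odds of a rebuild-killing read error over 60%, which is why relying on a single parity disk is so risky.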
The answer is dual disk fault tolerance, such as RAID 6, with one to guard against a whole disk failure and the other to, essentially, guard against the inevitable bit error that will occur.
I originally wanted to use ZFS RAID-Z2, which is a dual disk fault tolerant file system. While it offers features similar to RAID 6, RAID 6 still needs a file system (such as ext4) put on top of it. ZFS RAID-Z2 is a combined system, which is important because:
“RAID-5 (and other data/parity schemes such as RAID-4, RAID-6, even-odd, and Row Diagonal Parity) never quite delivered on the RAID promise — and can’t — due to a fatal flaw known as the RAID-5 write hole. Whenever you update the data in a RAID stripe you must also update the parity, so that all disks XOR to zero — it’s that equation that allows you to reconstruct data when a disk fails. The problem is that there’s no way to update two or more disks atomically, so RAID stripes can become damaged during a crash or power outage.
…
RAID-Z is a data/parity scheme like RAID-5, but it uses dynamic stripe width. Every block is its own RAID-Z stripe, regardless of blocksize. This means that every RAID-Z write is a full-stripe write. This, when combined with the copy-on-write transactional semantics of ZFS, completely eliminates the RAID write hole. RAID-Z is also faster than traditional RAID because it never has to do read-modify-write.
Whoa, whoa, whoa — that’s it? Variable stripe width? Geez, that seems pretty obvious. If it’s such a good idea, why doesn’t everybody do it?
Well, the tricky bit here is RAID-Z reconstruction. Because the stripes are all different sizes, there’s no simple formula like “all the disks XOR to zero.” You have to traverse the filesystem metadata to determine the RAID-Z geometry. Note that this would be impossible if the filesystem and the RAID array were separate products, which is why there’s nothing like RAID-Z in the storage market today. You really need an integrated view of the logical and physical structure of the data to pull it off.”
However it’s not quite ready for primetime, and more importantly OpenMediaVault does not support it yet5.
So RAID 6 it is.
RAID 6 is pretty straightforward and provides (n-2)×capacity of storage. To provide at least 10 TB, I would need five 4 TB drives (or six 3 TB drives, or seven 2 TB drives, or twelve 1 TB drives, etc).
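A quick sketch of that capacity arithmetic for the candidate configurations (helper name is my own):

```python
def raid6_usable_tb(n_disks, disk_tb):
    """RAID 6 usable capacity: two disks' worth of space goes to parity."""
    return (n_disks - 2) * disk_tb

# Configurations that clear the 10 TB requirement:
for n, size in [(5, 4), (6, 3), (7, 2), (12, 1)]:
    print(f"{n} x {size} TB -> {raid6_usable_tb(n, size)} TB usable")
# 5 x 4 TB -> 12, 6 x 3 TB -> 12, 7 x 2 TB -> 10, 12 x 1 TB -> 10
```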
Western Digital’s Red NAS drives are designed for 24×7 operation (versus other drives which are geared toward 8 hours of daily operation) and are widely regarded as the best drives to use for a NAS.
Their cost structure breaks out as such:
| Capacity | Cost/Disk | Cost/GB |
|---|---|---|
| 1 TB | $70 | $0.0700 |
| 2 TB | $99 | $0.0495 |
| 3 TB | $135 | $0.0450 |
| 4 TB | $180 | $0.0450 |
At first glance, it appears there’s no cost/GB difference between the 3 TB and 4 TB drives, but for a given storage target, smaller drives can be more cost-effective: the cost of the two redundant disks is amortized over more total disks, which brings the cost per usable GB down faster.
However, for a given number of disks, the actual cost per usable GB is the same between 3 TB and 4 TB drives; you simply get more usable space from five 4 TB drives than from five 3 TB drives.
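A rough check of both claims, using the per-disk prices from the table above and RAID 6 usable capacity (the function name is my own):

```python
PRICES = {1: 70, 2: 99, 3: 135, 4: 180}  # WD Red cost per disk, USD

def cost_per_usable_tb(n_disks, disk_tb):
    """Total cost divided by RAID 6 usable capacity (two disks to parity)."""
    usable_tb = (n_disks - 2) * disk_tb
    return n_disks * PRICES[disk_tb] / usable_tb

for n, size in [(5, 3), (5, 4), (7, 2)]:
    print(f"{n} x {size} TB: ${cost_per_usable_tb(n, size):.2f} per usable TB")
```

Five 3 TB and five 4 TB drives both come out to $75.00 per usable TB, while seven 2 TB drives come out cheaper per usable TB (about $69), at the cost of more drive bays and more points of failure.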
Given that I was trying to keep things small, and some reviews indicated there are some possible manufacturing issues with the 3 TB WD Red drives, I decided to splurge a bit6 and go for the 4 TB drives.
Also, the cost per GB has, for the last 30+ years, decreased by half every 14 months. This corresponds to an order of magnitude every 5 years (i.e. if it costs $0.045/GB today, five years ago it would have cost about $0.45/GB and ten years ago it would have cost about $4.50/GB). If we wait 14 months, presumably it would cost $450 to purchase five new 4TB drives. If we wait 28 months, the cost should half again and it would presumably cost about $225 to purchase five new 4TB drives.
However, since we need drives now, whatever we spend becomes a sunk cost. The difference between buying five 2TB drives and five 4TB drives now is $181. If we instead wait 28 months to buy five 4TB drives, they would cost close to $225…or 24% more than the $181 difference we would pay now.
Since we will need the additional space sooner than 2.3 years from now, it actually makes financial sense to buy the 4TB drives now.
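The halving arithmetic above can be sketched as follows, assuming the 14-month cost-halving trend holds (function name is my own):

```python
def future_cost(cost_today, months, halving_months=14):
    """Project storage cost, assuming cost/GB halves every ~14 months."""
    return cost_today * 0.5 ** (months / halving_months)

today = 5 * 180  # five 4 TB WD Reds at $180 each
print(f"Now: ${today}")                                  # Now: $900
print(f"In 14 months: ~${future_cost(today, 14):.0f}")   # ~$450
print(f"In 28 months: ~${future_cost(today, 28):.0f}")   # ~$225
```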
With the hard drives figured out, it’s time to figure out the rest of the system. There are basically two routes: build your own or buy an appliance.
My preliminary research quickly pointed to HP’s ProLiant MicroServer as an ideal candidate: it was small, reasonably powerful, and a great price. Since I’ve built up computers before, I also wanted to price out what it would cost to build a system from scratch.
I was able to design a pretty slick system:
After careful review, Synology is the only company that I believe builds an appliance worth considering. Their DiskStation Manager operating system seemed solid when I tried it, there was an easy and known method to get CrashPlan working on their x86-based system, and their system stability has garnered lots of praise.
Initially, I was looking at:
However, the DS41x units only hold 4 drives and that was not going to be enough to have at least 10TB of RAID6 usable storage.
| | HP G7 | HP G8 | DS1513+ | DS1813+ | Homebuilt |
|---|---|---|---|---|---|
| x86-based | Yes | Yes | Yes | Yes | Yes |
| > 2GB RAM | 2GB | 2GB | 2GB | 2GB | 4GB |
| — Max RAM | 16GB | 16GB | 4GB | 4GB | 16GB |
| > 10TB Usable Space | 12 TB | 12 TB | 12 TB | 24 TB | 12 TB |
| > 100MBit/s NIC | 1GBit | 1GBit | 1GBit | 1GBit | 1GBit |
| Cost7 | $415 | $515 | $800 | $1000 | $449 |
The main differences between the G7 and the G8 are:
Perhaps the most important element is getting buy-in from your wife. All of this analysis is fun, but at the end of the day, can I convince my wife to spend over $1000 on a data storage system that will sit in the closet – my side of the closet?
We selected the HP ProLiant MicroServer G7, which I think is a good choice.
I really wanted to build a server from scratch, but it can be a risky endeavour. I tried to pick good quality parts (those with good ratings, lots of reviews, and from vendors I know), but it can be a crapshoot.
For a first time major NAS system like this, I wanted something more reliable. I believe the HP ProLiant MicroServer G7 will be a reliable system and will meet our needs; lots of NAS enthusiasts use it, which is a big plus because it means that it works well and there are lots of people to ask questions of.
For next time (in five years or so), I want to do some more analysis of our data storage over time, which I will be able to track.
I’m also curious what the bottlenecks will be. We currently use a mix of 802.11n over 2.4 GHz and 5 GHz, but I’ve thought about putting in a GigE CAT5 cable.
RAID 6 still has the write hole issue, and I hope it doesn’t cause a problem.
I’m not terribly thrilled with the efficiency of 3+2 (three storage disks plus two parity disks), but there’s not really a better way to slice it unless I add more disks. It may be that using more, smaller disks actually does make a difference.
J. Yang and F.-B. Sun. A comprehensive review of hard-disk drive reliability. In Proc. of the Annual Reliability and Maintainability Symposium, 1999. ↩
Bianca Schroeder and Garth A. Gibson. 2007. Disk failures in the real world: what does an MTTF of 1,000,000 hours mean to you?. In Proceedings of the 5th USENIX conference on File and Storage Technologies (FAST ’07) ↩
Bianca Schroeder and Garth A. Gibson. 2007. Disk failures in the real world: what does an MTTF of 1,000,000 hours mean to you?. In Proceedings of the 5th USENIX conference on File and Storage Technologies (FAST ’07) ↩
Bianca Schroeder and Garth A. Gibson. 2007. Disk failures in the real world: what does an MTTF of 1,000,000 hours mean to you?. In Proceedings of the 5th USENIX conference on File and Storage Technologies (FAST ’07) ↩
NAS4Free and FreeNAS both support ZFS RAID-Z, but they run FreeBSD which does not have native support for CrashPlan ↩
for the capacity, it’s an 11% increase in per GB cost ↩
Not including hard drives ↩
I’m leaving for a short trip to San Francisco tonight for — gasp — pleasure! Rachel will be in Bozeman for wedding planning and family time, and I’ve owed Matt a visit for a long time. I also have a handful of other friends (Audrey/Griffin, Shanan) to see. Question is, what should I do? I get in late tonight (Thursday) and leave early Sunday afternoon.
I wanted to go to the Exploratorium1, but they’re closed while they move in to their new place.
The Bay Model Visitor Center2 also looks interesting.
I’ll be based out of the Haight neighborhood, what’s a nerd to do on vacation in San Fran?
Also, what’s the best way to get around? I have a free car rental I could use, but will parking be disastrous?
I’m cleaning up my room and came across this note. I wrote it a few months ago as I was doing my planning for 2012.
For 2011, my word was “Pace”. This year, I picked four words for how I’m going to approach the year: Explore, Focus, Finish, Fun.
I didn’t just pick these four words out of thin air; there was a process to get there, and this piece of paper was my thought process. Click to embiggen.