As we’ve been doing more video and audio conferencing lately, I’ve been experimenting with temporally hyper-near servers to see if they result in a better experience. TL;DR…not really, for most purposes.
Temporally hyper-near servers differ from geographically near servers in that it doesn’t matter how physically close the server is in miles, just the packet transit time in milliseconds…basically, low latency.
AWS calls these Local Zones and they’re designed so that “you can easily run latency-sensitive portions of applications local to end-users and resources in a specific geography, delivering single-digit millisecond latency for use cases such as media & entertainment content creation, real-time gaming…”, but they only have them in the Los Angeles region for now.
Azure calls them Edge Zones, but they aren’t available yet.
Google doesn’t have a specific offering, but instead provides a list of facilities within each region you can choose from, though none of them are near Seattle.
I went back to my notes from when I was looking at deploying some servers that I knew would generally only be accessed from the Seattle area, and I found that Vultr could be a good solution1.
With Vultr (in Seattle), I’m getting an average round-trip time (RTT) of 3.221 ms (stddev 0.244 ms)2
Compare to AWS (US West 2), which was an average RTT of 10.820 ms (stddev 0.815 ms)3
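Figures like these are what plain ping reports; a quick way to gather comparable numbers, using a placeholder address in place of your instance’s IP:

# 100 ICMP echo requests; the summary line reports min/avg/max and stddev (mdev on Linux)
ping -c 100 203.0.113.10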
After doing some traceroutes and poking around various peering databases, I think that Vultr is based at the Cyxtera SEA2 datacenter in Seattle and shares interconnections with CenturyLink, Comcast, and AT&T (among others).
I set up a Jitsi server, but didn’t notice anything perceptibly different between using my server and a standard Jitsi public server (the nearest of which is on an AWS US West 2 instance).
However, for Jamulus (software that enables musicians to perform real-time jam sessions over the internet) there does appear to be a huge difference, and I’ve received several emails about the setup I have, so here goes:
Jamulus on Vultr
Deploy a new server on Vultr4; here’s the configuration I used:
The Jamulus wiki has a pretty decent set of instructions (which have only gotten better in the last few months) on how to download, compile, and run a headless Jamulus instance: https://github.com/corrados/jamulus/wiki/Server—Linux
Here’s the TL;DR (which assumes you are working as root):
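(Sketched from the wiki as it stood in mid-2020; the package names and qmake flag are from memory, so defer to the wiki if they’ve changed.)

# Build prerequisites (Debian/Ubuntu package names)
apt-get update
apt-get install -y build-essential git qt5-qmake qtdeclarative5-dev qt5-default

# Fetch and compile a headless, no-sound-card server build
git clone https://github.com/corrados/jamulus.git
cd jamulus
qmake "CONFIG+=nosound" Jamulus.pro
make clean && make
cp Jamulus /usr/local/bin/

# Unprivileged user and recording directory referenced by the unit file below
adduser --system --no-create-home jamulus
mkdir -p /var/jamulus/recording
chown -R jamulus /var/jamulus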
Two of the options in the unit file below (saved as /etc/systemd/system/jamulus.service) are worth customizing:
--serverinfo: update with your location as [name];[city];[[country as QLocale ID]];5
--welcomemessage: if you want one
[Unit]
Description=Jamulus-Server
After=network.target
[Service]
Type=simple
User=jamulus
Group=nogroup
NoNewPrivileges=true
ProtectSystem=true
ProtectHome=true
Nice=-20
IOSchedulingClass=realtime
IOSchedulingPriority=0
#### Change this to set genre, location and other parameters.
#### See https://github.com/corrados/jamulus/wiki/Command-Line-Options ####
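#### Note: systemd does not perform shell command substitution, so $(uname -n) below is passed ####
#### to Jamulus literally; the %H specifier is the systemd-native way to insert the hostname.  ####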
ExecStart=/usr/local/bin/Jamulus --server --nogui --recording /var/jamulus/recording/ --servername $(uname -n) --centralserver jamulusallgenres.fischvolk.de:22224 --serverinfo "NW WA;Seattle, WA;225" -g --welcomemessage "This is an experimental service and support is not guaranteed. Please contact andrew@fergcorp.com with questions" --licence
Restart=on-failure
RestartSec=30
StandardOutput=journal
StandardError=inherit
SyslogIdentifier=jamulus
[Install]
WantedBy=multi-user.target
Give the unit file the correct permissions:
chmod 644 /etc/systemd/system/jamulus.service
Start and verify Jamulus:
systemctl start jamulus
systemctl status jamulus
You should get something like:
● jamulus.service - Jamulus-Server
Loaded: loaded (/etc/systemd/system/jamulus.service; disabled; vendor preset: enabled)
Active: active (running) since Wed 2020-07-08 10:57:09 PDT; 4s ago
Main PID: 14220 (Jamulus)
Tasks: 3 (limit: 1149)
Memory: 13.5M
CGroup: /system.slice/jamulus.service
└─14220 /usr/local/bin/Jamulus --server --nogui --recording /var/jamulus/recording/ --servername -n) --centralserver jamulusallgenres.fischvolk.de:22224 --serverinfo N
Jul 08 10:57:09 myserver.example.com jamulus[14220]: - central server: jamulusallgenres.fischvolk.de:22224
Jul 08 10:57:09 myserver.example.com jamulus[14220]: - server info: NW WA;Seattle, WA;225
Jul 08 10:57:09 myserver.example.com jamulus[14220]: - ping servers in slave server list
Jul 08 10:57:09 myserver.example.com jamulus[14220]: - welcome message: This is an experimental service and support is not guaranteed. Please contact andrew@fergcorp.com with questions
Jul 08 10:57:09 myserver.example.com jamulus[14220]: - licence required
Jul 08 10:57:09 myserver.example.com jamulus[14220]: *** Jamulus, Version 3.5.8git
Jul 08 10:57:09 myserver.example.com jamulus[14220]: *** Internet Jam Session Software
Jul 08 10:57:09 myserver.example.com jamulus[14220]: *** Released under the GNU General Public License (GPL)
Jul 08 10:57:09 myserver.example.com jamulus[14220]: Server Registration Status update: Registration requested
Jul 08 10:57:09 myserver.example.com jamulus[14220]: Server Registration Status update: Registered
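Since the unit shows as disabled in the status output, you’ll probably also want it to start on boot:

systemctl enable jamulus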
And that’s it! Enjoy the server and let me know how it goes!
9 July 2020 Update:
If you update the jamulus.service unit file, reload systemd and restart the service:
systemctl daemon-reload
service jamulus restart
Also, thanks to Brian Pratt for testing, feedback, catching a couple of typos, and suggesting the --fastupdate command-line option paired with Vultr’s High Frequency Compute (instead of regular Compute) for even better performance.
After having some fun hosting this on a Linode VPS, I’ve decided I don’t really want to be in the server maintenance business. So I’ve moved everything over to SiteGround over the last couple of months. It feels good to have one less thing to worry about, and SiteGround supports Let’s Encrypt! Win-win!
New Theme!
Sadly, Alex King (of Crowd Favorite) passed away in 2015. Unfortunately, his theme, FavePersonal, hasn’t been updated since, and things were starting to break. So: new theme.
Unfortunately, this also means that the social media interoperability has changed. Comments on Facebook and Twitter used to automatically be aggregated on this blog as well. My thinking on this continues to evolve, and while I believe it would be best to have a single commenting ecosystem, I’m more at ease with allowing separate systems to exist.
Fortunately, I’m still pushing blog posts to Facebook and Twitter since I know that’s a primary news source for many people (for better or for worse).
Email “Newsletters”
Ugh…I’ve hated this. I hate posting something and having it go out via email, only to find I made a typo or something. Or wanting to post multiple times in a day and worrying that people would hate all the email. This has honestly been a big mental block for me. Also, the plugin I was using1 was overly complicated and often didn’t render things correctly. I thought hard about getting rid of email subscriptions entirely, but instead I’m going to try something else. You’re welcome, relatives 😉
First, I’ve switched to a new system: MailPoet. We’ll see how this works…it seems to tick all the boxes I need for what I want to do.
Second, everyone who was on the old mailing list has been migrated to the new weekly digest list. If there have been blog posts from the past week, you will get an email on Monday morning with them — in theory.0
I had two drives fail and growing concerns about the remaining disks. I use CrashPlan to back up the data, so I wasn’t concerned with losing the data, but I was concerned with having it available when I needed it.
I also had a huge increase in storage requirements, due mostly to my video archiving project from last Christmas (which I still need to write up).
I also got married this year, and Rachel had several external drives I was hoping to consolidate. Ironically, her computer also died last week…good thing we had a backup!
The need was clear: a more robust NAS with serious storage requirements.
High availability (99% uptime, or roughly 3.65 days of downtime/year) for all local users
Remote backup (prefer CrashPlan)
10TB of usable space
Minimum 100 Mbit/s access rate
Minimal single points of failure (e.g. RAID 5, ZFS, or BTRFS)
Secure system
Minimum of five years of viable usage
Cost effective
Trade Study
I performed a trade study based on four major options:
Upgrading the internal drives in our existing systems
Continuing to use external hard drives
Using cloud storage
Using a NAS
| Criterion | Internal | External | Cloud | Network |
| --- | --- | --- | --- | --- |
| Multiple User Access | 2 | 3 | 3 | 3 |
| Simultaneous User Access | 2 | 2 | 3 | 3 |
| File Sharing | 3 | 3 | 3 | 3 |
| Media Sharing | 2 | 2 | 1 | 3 |
| Access Control List | 3 | 2 | 3 | 3 |
| > 99% Up Time | 0 | 0 | 3 | 3 |
| Remote Backup | 3 | 3 | 2 | 3 |
| > 10 TB Usable Space | 1 | 1 | 3 | 3 |
| > 100 Mbit/s Bandwidth | 3 | 3 | 1 | 3 |
| Minimal Single Point of Failure | 3 | 1 | 2 | 3 |
| Secure System | 3 | 3 | 1 | 3 |
| > 5 Years of Usage | 3 | 3 | 3 | 3 |
| Total | 28 | 26 | 28 | 36 |
From this trade study, the differentiators pop out pretty quickly: accessibility and security.
Accessibility
Accessibility covers multiple and simultaneous user access, as well as bandwidth of data.
Single user storage
While increasing internal local storage is often the best option for a single user, we are in a multi-user environment, and the requirement for simultaneous access requires some sort of network connection. This requirement eliminates both per-user options of increasing either the internal or external disk space. Increasing the internal disk space would also have been impractical given that Rachel and I both use laptops.
Cloud Storage
Storing and sharing data on the Internet has become incredibly easy thanks to the likes of DropBox, Google Drive, Microsoft Spaces, Microsoft Azure, RackSpace Cloud Storage, Amazon S3, SpiderOak, and the like. In fact, many consumer cloud storage solutions (such as DropBox) use enterprise systems (such as Amazon S3) to store their data. Because it’s provided as a network service, simultaneous data access by multiple users is possible.
The challenge with cloud storage is getting access to the data, which requires a working Internet connection and sufficient bandwidth to transport the data. Current bandwidth with Comcast is typically limited to no more than ~48 Mbit/s, which is less than half of the 100 Mbit/s requirement. While higher data rates are possible, they are cost prohibitive at this time.
NAS
Network attached storage devices are not a new thing and have been around for decades. Within the last 10 years, though, their popularity in home and home-office environments has grown as the costs of implementation and maintenance have decreased. At its core, a NAS is a computer with lots of internal storage that is shared with users over the home network. While more costly than simply increasing internal/external local storage, it provides significantly better access to the data.
Because the NAS is primarily accessed over the home network, the speed of access is limited by the connection of the NAS to the network and of the network to the end system. Directly connected systems (using an Ethernet cable) can reach speeds of 1000 Mbit/s, and about 300 Mbit/s over wireless. This is slower than directly attached internal drives, but faster than externally connected USB 2.0 drives and cloud storage. Most files would open in less than one second, and all video files would be able to stream immediately with no buffering.
System Security
Securing data is the other challenge.
Cloud Storage
Because the data is stored by a third-party there are considerable concerns about data safety, as well as the right to data privacy from allegedly lawful (but arguably constitutionally illegal) search and seizures by government agencies.
I ran into similar issues with securing my Linode VPS, and ended up not taking any extraordinary steps because the bottom line is: without physical control of the data, the data is not secure.
The data I’m looking to store for this project is certainly more sensitive than whatever I host on the web. There are many ways to implement asymmetric encryption to store files, but it would also require that each end user have the decryption keys. Key management gets very complicated very quickly (trust me) and also throws out any hope of streaming media.
NAS
Since the NAS is local to the premises, physical control of the data is maintained, which also affords it the superior protection the 4th Amendment gives items in your control.
Additionally, the system is behind several layers of security that would make remote extraction of data highly difficult and improbable.
Designing a NAS
With a NAS selected, I had to figure out which one. But first, a short primer on the 10TB of usable space and what that means.
Hard Drives
Capacity
I arrived at the 10 TB requirement by examining the amount of storage we were currently using and then extrapolating what we might need over the next five years, which is generally considered the useful-life period1.
While the “bathtub curve” has been widely used as a benchmark for life expectancy:
Changes in disk replacement rates during the first five years of the lifecycle were more dramatic than often assumed. While replacement rates are often expected to be in steady state in year 2-5 of operation (bottom of the “bathtub curve”), we observed a continuous increase in replacement rates, starting as early as in the second year of operation.2
Practically speaking, the data show that:
For drives less than five years old, field replacement rates were larger than what the datasheet MTTF suggested by a factor of 2-10. For five to eight year old drives, field replacement rates were a factor of 30 higher than what the datasheet MTTF suggested.3
Something to keep in mind if you’re building larger systems.
Redundancy
Unfortunately, there is no physical 10TB drive one can buy, but a series of smaller drives can be logically arranged to appear as 10TB. However, the danger of logically arranging these drives is that typically if any single drive fails, you would lose all the data. To prevent this, a redundancy system is employed that allows at least one drive to fail, but still have access to all the data.
Using a RAID array is the de facto way to do this, and RAID 5 has been the preferred implementation because it has one of the best storage efficiencies and only “requires that all drives but one be present to operate. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost.”
A common MTBF for hard drives is about 1,000,000 hours, which can be converted to AFR:
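A sketch of the usual conversion, assuming exponentially distributed failures (AOH = Annual Operating Hours):

\[ \mathrm{AFR} = 1 - e^{-\mathrm{AOH}/\mathrm{MTBF}} \]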
Assuming the drives are powered all the time, the Annual Operating Hours is 8760, which gives an AFR of 0.872%. Over five years, it can be expected that 4.36% of the drives will fail.
The AFR for the entire RAID array (not just a given disk) can be generally approximated as a Bernoulli trial: each of the n disks fails independently with probability p, and the array is lost only if more disks fail than it can tolerate (one for RAID 5, two for RAID 6).
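The resulting binomial expressions (my sketch of the approximation):

\[ P_{\text{RAID 5 loss}} = 1 - (1-p)^n - n\,p\,(1-p)^{n-1} \]

\[ P_{\text{RAID 6 loss}} = 1 - (1-p)^n - n\,p\,(1-p)^{n-1} - \binom{n}{2} p^2 (1-p)^{n-2} \]

For five disks and p = 4.36%, these work out to roughly 1.7% for RAID 5 versus under 0.1% for RAID 6 over five years.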
Using a five year failure rate of 4.36%, the data show that RAID 6 is significantly more tolerant to failure than RAID 5, which should not be a surprise: RAID 6 can lose two disks while RAID 5 can only lose one.
What was more impressive to me is how quickly RAID 5 failure rates grow (as a function of number of disks), especially when compared to RAID 6 failure rates.
Technically a Bernoulli trial requires the disk failures to be statistically independent, however there is strong evidence4 for the existence of correlations between disk replacement interarrivals; in short, once a disk fails there is actually a higher chance that another disk will fail within a short period of time. However, I believe the Bernoulli trial is still helpful to illustrate the relative failure rate differences between RAID 5 and RAID 6.
Bit Error Rate
Even if you ignore the data behind AFR, single disk fault tolerance is still no longer good enough due to non-recoverable read errors – the bit error rate (BER). For most drives, the BER is <1 in 10^14 “which means that once every 100,000,000,000,000 bits, the disk will very politely tell you that, so sorry, but I really, truly can’t read that sector back to you.”
One hundred trillion bits is about 12 terabytes (which is roughly the capacity of the planned system), and “when a disk fails in a RAID 5 array and it has to rebuild there is a significant chance of a non-recoverable read error during the rebuild (BER / UER). As there is no longer any redundancy the RAID array cannot rebuild, this is not dependent on whether you are running Windows or Linux, hardware or software RAID 5, it is simple mathematics.”
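The conversion behind that “about 12 terabytes” figure:

\[ 10^{14}\ \text{bits} \times \frac{1\ \text{byte}}{8\ \text{bits}} = 1.25 \times 10^{13}\ \text{bytes} \approx 12.5\ \text{TB} \]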
The answer is dual disk fault tolerance, such as RAID 6, with one to guard against a whole disk failure and the other to, essentially, guard against the inevitable bit error that will occur.
RAID or ZFS
I originally wanted to use ZFS RAID-Z2, which is a dual-disk fault tolerant file system. While it offers similar features to RAID 6, RAID 6 still needs a file system (such as ext4) put on top of it; ZFS RAID-Z2 is a combined system, which is important because:
“RAID-5 (and other data/parity schemes such as RAID-4, RAID-6, even-odd, and Row Diagonal Parity) never quite delivered on the RAID promise — and can’t — due to a fatal flaw known as the RAID-5 write hole. Whenever you update the data in a RAID stripe you must also update the parity, so that all disks XOR to zero — it’s that equation that allows you to reconstruct data when a disk fails. The problem is that there’s no way to update two or more disks atomically, so RAID stripes can become damaged during a crash or power outage.
…
RAID-Z is a data/parity scheme like RAID-5, but it uses dynamic stripe width. Every block is its own RAID-Z stripe, regardless of blocksize. This means that every RAID-Z write is a full-stripe write. This, when combined with the copy-on-write transactional semantics of ZFS, completely eliminates the RAID write hole. RAID-Z is also faster than traditional RAID because it never has to do read-modify-write.
Whoa, whoa, whoa — that’s it? Variable stripe width? Geez, that seems pretty obvious. If it’s such a good idea, why doesn’t everybody do it?
Well, the tricky bit here is RAID-Z reconstruction. Because the stripes are all different sizes, there’s no simple formula like “all the disks XOR to zero.” You have to traverse the filesystem metadata to determine the RAID-Z geometry. Note that this would be impossible if the filesystem and the RAID array were separate products, which is why there’s nothing like RAID-Z in the storage market today. You really need an integrated view of the logical and physical structure of the data to pull it off.”
However, it’s not quite ready for prime time and, more importantly, OpenMediaVault does not support it yet5.
So RAID 6 it is.
Cost
RAID 6 is pretty straightforward and provides (n-2) × capacity of storage. To provide at least 10 TB, I would need five 4 TB drives (or six 3 TB drives, or seven 2 TB drives, or twelve 1 TB drives, etc.).
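As a quick check of that arithmetic for the 4 TB case:

\[ \text{usable} = (n-2) \times c \;\Rightarrow\; n \ge \left\lceil \tfrac{10\ \text{TB}}{4\ \text{TB}} \right\rceil + 2 = 5 \]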
Western Digital’s Red NAS drives are designed for 24×7 operation (versus other drives which are geared toward 8 hours of daily operation) and are widely regarded as the best drives to use for a NAS.
Their cost structure breaks out as such:
| Capacity | Cost/Disk | Cost/GB |
| --- | --- | --- |
| 1 TB | $70 | $0.0700 |
| 2 TB | $99 | $0.0495 |
| 3 TB | $135 | $0.0450 |
| 4 TB | $180 | $0.0450 |
At first glance, it appears that there’s no cost/GB difference between the 3 TB and 4 TB drives. But for a given usable capacity, using smaller drives is more cost-effective because the two redundant disks are amortized over more total disks, which brings the cost per usable GB down.
However, for a given number of disks, the actual cost per GB is the same (between 3 TB and 4 TB); you just get more usable space when using five 4 TB drives versus five 3 TB drives.
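To put numbers on the amortization point, the cost per usable GB in a RAID 6 array is the raw cost divided by the usable fraction (n-2)/n:

\[ \text{Six 3 TB: } \frac{6 \times \$135}{4 \times 3000\ \text{GB}} \approx \$0.0675/\text{GB} \qquad \text{Five 4 TB: } \frac{5 \times \$180}{3 \times 4000\ \text{GB}} = \$0.075/\text{GB} \]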
Given that I was trying to keep things small, and some reviews indicated there are some possible manufacturing issues with the 3 TB WD Red drives, I decided to splurge a bit6 and go for the 4 TB drives.
Also, the cost per GB has, for the last 30+ years, decreased by half every 14 months. This corresponds to an order of magnitude every 5 years (i.e. if it costs $0.045/GB today, five years ago it would have cost about $0.45/GB and ten years ago it would have cost about $4.50/GB). If we wait 14 months, presumably it would cost $450 to purchase five new 4TB drives. If we wait 28 months, the cost should half again and it would presumably cost about $225 to purchase five new 4TB drives.
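The arithmetic behind those projections, assuming the 14-month halving holds and using the $900 price of five 4 TB drives today:

\[ \text{cost}(t) \approx \text{cost}(0) \cdot 2^{-t/14\ \text{mo}}, \qquad \$900 \times 2^{-14/14} = \$450, \qquad \$900 \times 2^{-28/14} = \$225 \]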
However, since we need drives now, whatever we spend becomes a sunk cost. The difference between buying five 2TB drives or five 4TB drives now is $181. However, if we buy them in 28 months, we would have to spend close to $225…or 24% more than we would have to pay now.
Since we will need the additional space sooner than 2.3 years from now, it actually makes financial sense to buy the 4TB drives now.
The Rest of the System
With the hard drives figured out, it’s time to figure out the rest of the system. There are basically two routes: build your own or buy an appliance.
Build your own NAS
My preliminary research quickly pointed to HP’s ProLiant MicroServer as an ideal candidate: it was small, reasonably powerful, and a great price. Since I’ve built up computers before, I also wanted to price out what it would cost to build a system from scratch.
After careful review, Synology is the only company that I believe builds an appliance worth considering. Their DiskStation Manager operating system seemed solid when I tried it, there was an easy and known method to get CrashPlan working on their x86-based system, and their system stability has garnered lots of praise.
Initially, I was looking at:
DS412+
DS414
DS1513+
DS1813+
However, the DS41x units only hold 4 drives, and that was not going to be enough to get at least 10 TB of usable RAID 6 storage.
The main differences between the G7 and the G8 are:
G8 uses an Intel Celeron G1610T dual-core 2.3 GHz instead of the AMD Turion II Model Neo N54L 2.2 GHz…no real benefit
G8 has a second Ethernet port; however, this is no real benefit since our configuration would not use it
G8 has USB 3.0, which would be nice but can be added to the G7 for $30.
G8 has only one PCI Express slot, which is a downgrade since the G7 version has two slots.
G8 has an updated RAID controller; however, this is no real benefit since it would not be used in our configuration
G8 has the iLO Management Engine; however, this is no real benefit for our configuration
The G8 HP BIOS is digitally signed, “reducing accidental programming and preventing malicious efforts to corrupt system ROM.” It also means I cannot use a modified BIOS…which is bad.
The G8 supports SATA III, which is faster than the G7’s SATA II…but probably not a differentiator for our configuration.
Conclusion
Perhaps the most important element is getting buy-in from your wife. All of this analysis is fun, but at the end of the day, can I convince my wife to spend over $1,000 on a data storage system that will sit in the closet (my side of the closet)?
We selected the HP ProLiant MicroServer G7, which I think is a good choice.
I really wanted to build a server from scratch, but it can be a risky endeavour. I tried to pick good quality parts (those with good ratings, lots of reviews, and from vendors I know), but it can be a crapshoot.
For a first time major NAS system like this, I wanted something more reliable. I believe the HP ProLiant MicroServer G7 will be a reliable system and will meet our needs; lots of NAS enthusiasts use it, which is a big plus because it means that it works well and there are lots of people to ask questions of.
For next time (in five years or so), I want to do some more analysis of our data storage over time, which I will be able to track.
I’m also curious what the bottlenecks will be. We currently use a mix of 802.11n over 2.4 GHz and 5 GHz, but I’ve thought about putting in a GigE CAT5 cable.
RAID 6 still has the write-hole issue, and I hope it doesn’t cause a problem.
I’m not terribly thrilled with the efficiency of 3+2 (three storage disks plus two parity disks), but there’s not really a better way to slice it unless I add more disks. And it may be that using more, smaller disks actually does make a difference.
J. Yang and F.-B. Sun. A comprehensive review of hard-disk drive reliability. In Proc. of the Annual Reliability and Maintainability Symposium, 1999. ↩
Bianca Schroeder and Garth A. Gibson. 2007. Disk failures in the real world: what does an MTTF of 1,000,000 hours mean to you?. In Proceedings of the 5th USENIX conference on File and Storage Technologies (FAST ’07) ↩
Bianca Schroeder and Garth A. Gibson. 2007. Disk failures in the real world: what does an MTTF of 1,000,000 hours mean to you?. In Proceedings of the 5th USENIX conference on File and Storage Technologies (FAST ’07) ↩
Bianca Schroeder and Garth A. Gibson. 2007. Disk failures in the real world: what does an MTTF of 1,000,000 hours mean to you?. In Proceedings of the 5th USENIX conference on File and Storage Technologies (FAST ’07) ↩
NAS4Free and FreeNAS both support ZFS RAID-Z, but they run FreeBSD which does not have native support for CrashPlan ↩
for the capacity, it’s an 11% increase in per GB cost ↩
I’ve been able to get all the files moved over1, set everything up, and fine-tune the new system.
It’s not that I wasn’t happy with Bluehost, just that I had grown out of Bluehost, which makes sense: Bluehost really is targeted at people new to web hosting. I’ve had a web site since I was 11.
I’ve heard rumors that Bluehost has over 500 users on each one of their boxes; upgrading to their Pro Package a couple of years ago put me on a box with “80% less accounts per server,” but it still wasn’t cutting it. I needed more!
From a hardware standpoint, fremont is a NextGen 1GB Linode Virtual Private Server (VPS), powered by dual Intel Sandy Bridge E5-2670 processors each of which “enjoys 20 MB of cache and has 8 cores running at 2.6 Ghz” and is shared with, on average, 39 other Linodes.
Linux
I’ve chosen to run Debian 7 (64 bit); it’s a Linux distribution I trust, has a good security focus, and I’m also very familiar with it.
Setting up the Linode was easy. I decided against using StackScripts because I wanted to know exactly what was going into my system, and I wanted the experience in case something goes wrong down the line.
I took a fresh copy of Wheezy (Debian 7) and then worked through a few setup and hardening guides.
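Roughly, those guides boil down to the usual first-boot hardening; a sketch, with a hypothetical admin user and the stock Debian tools:

# Update packages and install basics
apt-get update && apt-get upgrade -y
apt-get install -y sudo fail2ban

# Create a non-root admin user (the name is a placeholder)
adduser deploy
adduser deploy sudo

# Disable root SSH logins, then restart sshd
sed -i 's/^PermitRootLogin yes/PermitRootLogin no/' /etc/ssh/sshd_config
service ssh restart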
I very seriously considered encrypting the entire server, but decided against it because, ultimately, the hardware was still going to be out of my physical control, and thus encrypting the system was not an appropriate solution for the attack vector I was concerned with.
Nginx
I’ve always used Apache to do the actual web serving, but I’ve heard great things about Nginx and wanted to try it. Since I was already going down the foolish path, I figured I had nothing to lose by trying a new web server as well.
MariaDB
The standard tool for web-based SQL databases in my book has always been MySQL. But just like Nginx, I’ve heard some good things about MariaDB and figured, why not? The great thing is, MariaDB is essentially a drop-in replacement for MySQL. Installing from the repo was a piece of cake and there really is no practical difference in operation…it just works, but better (in theory).
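The install itself is just a couple of commands, assuming the MariaDB apt repository has already been added (the exact repo line depends on the mirror you pick):

apt-get update
apt-get install -y mariadb-server mariadb-client
mysql_secure_installation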
PHP-FPM
PHP FastCGI Process Manager (FPM) is an alternative to the regular PHP FastCGI implementation. In particular, it includes adaptive process spawning, among other things, and seems to be the de facto PHP implementation for Nginx. Installing from the repo was a piece of cake and required only minimal configuration.
I originally used the TCP Sockets, but found that UNIX Sockets gave better performance.
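The switch is a one-line change on each side; hypothetical paths for a Debian PHP 5 setup, with the FPM pool listening on a socket and Nginx pointed at it:

; /etc/php5/fpm/pool.d/www.conf
listen = /var/run/php5-fpm.sock

# Nginx server block
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}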
Most of the fine tuning was the little things, like better matching the threads to the number of cores I had available. I also enabled IPv6 support, which means that AFdN is IPv6 compliant:
I’m actually kind of excited by this. It’s sort of like being back in high school and running my own server from my parents house. Except I’m ten years wiser…and married.
Anyway, after some minor toiling about whether I should install nginx from the Debian repository or compile it from source, I ended up going with option C and am trying the dotdeb repo.
This has been predominantly driven by my continuous desire to push Bluehost to the boundaries of what shared hosting can do. I upgraded to the Pro account last year, but it’s still a bit sluggish, and I consistently find myself having to scrape together horrid workarounds for things I want to do on the server. I probably should have gotten a VPS a year ago, but I wasn’t sure I wanted to take that task on…I’m still not sure.
The server is named Fremont, because it’s located in Fremont, California.
I’m going to move some of the sites I run off of BlueHost to see how fremont (along with nginx, MariaDB, and PHP-FPM) handles everything — and to see if BlueHost gets any snappier.
If all goes well, there’s a good chance I’ll move all the sites to fremont.
For now though, I just have the basic “Hey, it works” page up and running, including an SSL certificate, at https://fremont.fergcorp.com.0