Technology’s Infestation of my Life

Examples of how technology has permeated every single bit of my life.

Welcome to fremont.fergcorp.com

For whatever foolish reason, I’ve decided to take the plunge into Virtual Private Servers and sprung for a 1GB Linode.

I’m actually kind of excited by this. It’s sort of like being back in high school and running my own server from my parents’ house. Except I’m ten years wiser…and married.

Anyway, after some minor toiling about whether I should install nginx from the Debian repository or compile it from source, I ended up going with option C and am trying the dotdeb repo.
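
For the curious, going the dotdeb route looks roughly like this (a sketch from memory; check dotdeb.org for the current repository line and signing key):

# add the dotdeb repository and its signing key, then install nginx
echo "deb http://packages.dotdeb.org squeeze all" >> /etc/apt/sources.list
wget -O - http://www.dotdeb.org/dotdeb.gpg | apt-key add -
apt-get update
apt-get install nginx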

This move has been predominantly driven by my continuing desire to push BlueHost to the boundaries of what shared hosting can do. I upgraded to the Pro account last year, but it’s still a bit sluggish and I consistently find myself having to scrape together horrid workarounds for things I want to do on the server. I probably should have gotten a VPS a year ago, but I wasn’t sure I wanted to take that task on…I’m still not sure.

The server is named Fremont, because it’s located in Fremont, California.

I’m going to move some of the sites I run off of BlueHost to see how fremont (along with nginx, MariaDB, and PHP-FPM) handles everything — and to see if BlueHost gets any snappier.

If all goes well, there’s a good chance I’ll move all the sites to fremont.

For now though, I just have the basic “Hey, it works” page up and running, including an SSL certificate, at https://fremont.fergcorp.com.
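
For the record, serving that page boils down to a minimal nginx server block; a sketch (the certificate paths and web root are placeholders, not my actual config):

server {
    listen 443 ssl;                            # HTTPS listener
    server_name fremont.fergcorp.com;
    ssl_certificate     /etc/ssl/fremont.crt;  # placeholder certificate path
    ssl_certificate_key /etc/ssl/fremont.key;  # placeholder key path
    root /var/www/fremont;                     # the "Hey, it works" page
}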


CryptoLocker: An Insidious Virus That Ransoms Your Files

There’s a new virus out there that I want to raise awareness of: CryptoLocker. I don’t normally post warnings or alerts about viruses, but this is probably one of the most insidious viruses I’ve ever seen.


Basically, the virus locates all your important files on any drive attached to your computer (hard drives, flash drives, USB sticks, network drives/shares) and then encrypts the files it finds.

The only way to unlock the files again is to pay $300 to get the key used for the encryption. The encryption method is RSA with a 2048-bit key, which makes it extremely hard to crack (effectively impossible within the ransom window using today’s computers).

Once infected, you will be presented with a ransom note stating that you have 72 hours before the perpetrators destroy the key, making it impossible for you to ever get your data back.

Let me put that another way: if you are infected and want your data back, you will have to pay the $300. There is no way around it, and most cases I’ve read about report that once the ransom was paid, the files were successfully recovered.

Nevertheless, this can be extremely devastating if you are running a business and all your files are gone, or if all your family pictures disappear.

If you sync your files to the cloud, you’re still not safe: the encrypted files get synced as well. If you are able to restore previous versions of your files in the cloud, you could be okay.

And if your backup files are directly accessible, they can be held for ransom too.

Let your friends, family and co-workers know about this.

Here are some simple ways to avoid getting a virus in general:

  1. Don’t open e-mails from people you don’t know
  2. Don’t open attachments in e-mails unless you were waiting for the attachment
  3. Don’t go to websites/click links that you don’t fully trust
  4. Don’t download and execute files that you don’t fully trust

Avoiding the above might seem obvious to most of us, but to a lot of friends, family, and co-workers it might not be.

Imagine waking up and having to pay $300 to get your data back. However, in at least one case the police tracked down one of the servers that serves up the keys and shut it down, which meant the keys were never delivered and the data was lost. In other words, even if you do pay the $300, there is no guarantee that you will get your data back.

Raise awareness of this and avoid having your files lost.


Pressgram: Just Another Instagram…Lame Sauce.

I was really excited when Pressgr.am first came out. It was supposed to cut out the Instagram middleman.

I’ve had issues getting it to work with my site; in particular, it would upload the image but would often not create the post. I started digging around, running tcpdump on my router to capture the XML-RPC requests that should have been going between my iPhone and my web server.
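
The capture was along these lines (a sketch; the interface name and the phone’s IP are examples, not my actual setup):

# watch for the phone talking to the web server, full packets, saved for later
tcpdump -i br0 -s 0 -w pressgram.pcap host 192.168.1.20 and port 80

But I could never capture the traffic I was expecting. As it turns out, there’s a reason: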

From stephanis.info:

It seems that, unlike the WordPress Mobile Apps, the password that you enter in Pressgram isn’t kept private on your own device. Without noting it on a Privacy Policy or in any way notifying you that Pressgram is doing it, your password is stored in plaintext on their server.

So what does this all mean?

Well, it means that Pressgram is storing your credentials in plaintext (or potentially encrypted alongside a decryption key) on your behalf, without notifying you or doing anything publicly to indicate that this is the case. No matter how high entropy your passwords may be, if you hand it to someone and they get hacked, it doesn’t matter. You are vulnerable – doubly so if you use that password for other accounts as well.

To some folks, this may be a worthwhile tradeoff. But as I look at it, I don’t see it as a necessary tradeoff. Your credentials could just as easily be kept private between the app on your phone, and your WordPress site. Just have your phone upload the photo directly to your WordPress install. It wouldn’t be difficult to do, it’s already making XMLRPC requests to the server. And it fulfills the initial Kickstarter promise of “your filtered photos published directly to your WordPress-powered blog”. It also would provide the added security that if Pressgram is eventually shut down or sold off, the app would still function, as it’s not needlessly dependent on the Pressgram Servers.

To protect yourself, you may want to consider making a separate account for your WordPress site with the Author role, and using those credentials with Pressgram, and make sure you’re using a distinct password – as well as with any service that you provide a password to.
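
If you want to verify what your own install sees, the WordPress XML-RPC endpoint is easy to poke at directly; a minimal sketch using curl (example.com and the credentials are placeholders):

# list the blogs the account can access; a fault response means bad credentials
curl -s -X POST https://example.com/xmlrpc.php --data \
'<?xml version="1.0"?>
<methodCall>
  <methodName>wp.getUsersBlogs</methodName>
  <params>
    <param><value><string>username</string></value></param>
    <param><value><string>password</string></value></param>
  </params>
</methodCall>'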

My data should be going directly to my server. But it’s not. And that’s, honestly, troubling for an app that promised “complete creative control and publishing freedom with the ability to publish filtered photos directly to your WordPress blog!”1

For the time being, I’ve deleted Pressgram and changed my password. On to looking for a better solution.

  1. Source: http://www.kickstarter.com/projects/tentblogger/pressgram-an-image-sharing-app-built-for-an-indepe

CrashPlan Keeps Crashing

I had an issue where CrashPlan kept crashing on my Mac (OS X 10.8.4). The CrashPlan menu bar app would also fail to show, and even when I started it manually, it would stay active for no more than about 60 seconds before crashing again.

I thought the issue was related to my version of Java, but upgrading to the latest version did not solve the issue.

I finally came across a helpdesk article from Pro Backup, which uses the CrashPlan engine:

In some cases a large file selection (>1TiB or 1 million files) can cause CrashPlan to crash. This can be noticed by continuous stopping and starting. You may also see the CrashPlan application and System Tray icon disappearing in the middle of a backup. Or the main CrashPlan program will run for about 30 seconds then close down with no error message.

The CrashPlan engine by default is limited to 512MB of main memory. The CrashPlan engine won’t use more memory than that, even if the CrashPlan engine needs more working memory and the computer has memory available. When the CrashPlan engine is running out of memory it crashes.

The issue was that, as a heavy user, I back up more than 1 TB of data. However, CrashPlan only allocates 512 MB of memory in Java, which is insufficient for my large backup size.

  1. Stop the CrashPlan daemon:
    sudo launchctl unload /Library/LaunchDaemons/com.crashplan.engine.plist
  2. Edit /Library/LaunchDaemons/com.crashplan.engine.plist and change -Xmx512m to -Xmx1024m (or whatever is needed); see the one-liner below.
  3. Restart the CrashPlan daemon:
    sudo launchctl load /Library/LaunchDaemons/com.crashplan.engine.plist
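
Step 2 can also be done non-interactively; a one-liner sketch (assuming the stock plist still contains the -Xmx512m argument):

sudo sed -i '' 's/-Xmx512m/-Xmx1024m/' /Library/LaunchDaemons/com.crashplan.engine.plist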

Problem solved!


No More Facebook

If you’re missing me on Facebook, don’t worry, I didn’t unfriend you. About two weeks ago, I left Facebook. This is something that I’ve been wanting to do for a while, but could never bring myself to do — until two weeks ago.

I’m not sure if this is a forever move, or just short-term.

I’ll still be here and there.


Introducing AFdN.me

Spurred on by the coolness1 that Viper007Bond (aka Alex Mills) enabled with his v007.me site-specific URL shortener, I have implemented similar functionality, also using YOURLS.

a-n-d-r-e-w-f-e-r-g-u-s-o-n-.-n-e-t is 18 characters. I’ve managed to reduce that down to a mere seven (a 61% reduction in effort). http://afdn.me is designed to serve as the short URL home and branding for AFdN. It will primarily show up in social media, such as Twitter, but could also appear in print, where space may be limited.

[Screenshot of a tweet showing a wp.me short link]

Instead of showing a link similar to wp.me/p4tPz-2cP (as shown above), links to AFdN will now appear similar to afdn.me/mg3sm. All previous short URLs will, of course, continue to function. Only new posts (and old posts that have been edited) will get the new shortened URLs.
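
Under the hood, YOURLS exposes a simple HTTP API for minting these links; a sketch (the signature token and target URL are placeholders, and I’m assuming the API endpoint sits at the root of afdn.me):

# ask YOURLS for a short URL, getting the result back as JSON
curl "http://afdn.me/yourls-api.php?signature=TOKEN&action=shorturl&format=json&url=http://andrewferguson.net/some-post/"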

The self-referential nature of using AFdN.net did occur to me, but the domain was already taken.

This has also been rolled out for AndrewAndRachel.com, which uses http://aandr.us as its shortened URL.

  1. Honestly, the real reason I implemented this is that http://andrewandrachel.com/wedding/photo-booth is too long to put on a photo strip branding logo 

Setting up OpenMediaVault

I hope everyone had a merry Thanksgiving! I spent some of my time setting up OpenMediaVault on an Acer Aspire 3610 that Kolby gave me. It’s a pretty small machine, running an Intel Atom 330 at 1.6 GHz with 2 GB of RAM1, but I think it will be perfect for running my new NAS!

I’ve been dreaming of a NAS for some time and have contemplated building one for at least two years, but I could never justify the cost. What makes this different is that it doesn’t require any new outlay for equipment; I’m literally using what I already have!

I settled on OpenMediaVault because it was based on Debian, which I have more experience with2.

Here are some configuration tricks I needed in order to get it to work How I Like It™:

CrashPlan

I use CrashPlan on my laptop and it’s great3! If you don’t have a backup plan, you need to stop reading and get one now. Seriously. What would you do if your computer were stolen, or the hard drive went kaput, or you accidentally deleted something? I want to make sure the data my NAS is storing is just as safe as the data on my laptop.

There’s a guide over on the OpenMediaVault forums which basically echoes the official CrashPlan Linux installation guide. Everything went okay until I tried to launch the desktop client and couldn’t get X11 forwarding to work. I was eventually able to get a headless client running from my laptop over an SSH tunnel, but I didn’t want to have to muck with the ui.properties file every time I wanted to check on things. I also wanted to be able to run both my laptop’s client and the OMV client simultaneously.
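
For reference, the headless setup I had been using boils down to an SSH tunnel (a sketch; 4243 is CrashPlan’s default service port and 4200 is an arbitrary local port):

# on the laptop: forward a local port to the CrashPlan engine on the OMV box
ssh -L 4200:localhost:4243 root@172.16.131.130
# then set servicePort=4200 in the local ui.properties and launch the client

So I went back and did some more work on the X11 issue, and here’s what I found needed to happen: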

For the purposes of this, the IP address of the OpenMediaVault server is 172.16.131.130

Log in to the server via SSH:

ssh root@172.16.131.130

Note: if you get ssh: connect to host 172.16.131.130 port 22: Connection refused, you need to enable SSH via the OMV online console first!

Prerequisites:

apt-get update
apt-get install xorg
# let forwarded X11 connections bind beyond the loopback interface
echo "X11UseLocalHost no" >> /etc/ssh/sshd_config
/etc/init.d/ssh restart
apt-get install openjdk-6-jre

Install CrashPlan

cd /tmp
wget http://download.crashplan.com/installs/linux/install/CrashPlan/CrashPlan_3.4.1_Linux.tgz
tar -zxvf CrashPlan_3.4.1_Linux.tgz
cd CrashPlan-install
./install.sh

Answer yes to installing Java, and answer all the other questions as required. If you just press return, the defaults will work just fine (and that’s what I used).

Log out and back in with X11 forwarding enabled, then run CrashPlan:

exit
ssh -X root@172.16.131.130
/usr/local/bin/CrashPlanDesktop

Give it a few seconds and you’ll see that familiar CrashPlan green.

Other notes:

  • It was helpful to debug with ssh -v
  • Looking through /usr/local/crashplan/log/ui_error.log was the key to understanding that the version of Java downloaded by CrashPlan was throwing errors (such as java.lang.UnsatisfiedLinkError: no swt-pi-gtk-3448 or swt-pi-gtk in swt.library.path) and needed to be updated.

HFS+

I have a couple of drives formatted in HFS+ that I wanted to use without having to reformat them. As a side note, I think NTFS is probably the best bet for multisystem compatibility when there’s the potential for dealing with files larger than 4 GB. A comment on the OMV blog by norse laid the basic groundwork, but I also had to pull some information from Raam Dev’s blog about configuring HFS for Debian.

Note: hfsprogs 332.25-9 and below has a bug where “[f]ormatting a partition as HFSPLUS does not provide the partition with a UUID.” The workaround is to boot into OS X and use Disk Utility to format the partition, but this doesn’t work as well when you’re using a VM. The solution is to use the unstable 332.25-10 release of hfsprogs.

echo "deb http://ftp.debian.org/debian testing main contrib" >> /etc/apt/sources.list
apt-get update
apt-get install hfsplus hfsprogs hfsutils
sed -i '$ d' /etc/apt/sources.list
apt-get update

Then modify /var/www/openmediavault/rpcfilesystemmgmt.inc to be able to handle and mount HFS+ disks:

48c48
< 					  '"jfs","xfs","hfsplus"]},
---
> 					  '"jfs","xfs"]},
118c118
< 			  "umsdos", "vfat", "ufs", "reiserfs", "btrfs","hfsplus"))) {
---
> 			  "umsdos", "vfat", "ufs", "reiserfs", "btrfs"))) {
664,667d663
< 			break;
< 		case "hfsplus":
< 			$fsName = $fs->getUuid();
< 			$opts = "defaults,force"; //force,rw,exec,auto,users,uid=501,gid=20";

Finally, you may need to fsck your HFS+ disk if it’s being stubborn and mounting in read-only mode. With the partition unmounted:

fsck.hfsplus -f /dev/sdaX

WiFi

Getting WiFi to work took me down a rabbit hole that ended up being unnecessary. First, verify which wireless card you have. The easiest way to do this is with lspci:

apt-get install pciutils
lspci | grep -i network

You should see a line like:
05:00.0 Network controller: RaLink RT3090 Wireless 802.11n 1T/1R PCIe

Installing the driver for the RT3090 is pretty straightforward:

echo "deb http://ftp.us.debian.org/debian squeeze main contrib non-free" >> /etc/apt/sources.list
apt-get update
aptitude install firmware-ralink wireless-tools wpasupplicant
sed -i '$ d' /etc/apt/sources.list
apt-get update

Edit /etc/network/interfaces to add the following:

auto wlan0
iface wlan0 inet dhcp
    wpa-ssid mynetworkname
    wpa-psk mysecretpassphrase

Note: the Debian guide recommends restricting the permissions of /etc/network/interfaces to prevent disclosure of the passphrase.
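
Something like this will do it (readable and writable by root only):

chmod 0600 /etc/network/interfaces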

Then run:

ifup wlan0

That’s all I have for now, I’m working on some methods for backing up other data to the NAS (such as my web site and GMail) which I’ll write up later.

  1. We tried using it to stream the Olympics, but it wouldn’t even do that very well; I think that was due to the nVidia chipset not playing well with Ubuntu 

  2. FreeNAS and NAS4Free both being based on FreeBSD 

  3. I previously used Mozy, but they started charging by the GB, which wasn’t going to work for me 

Physical Face Cloning

The most interesting thing was how they used physics simulation of the materials to determine and optimize the material geometry and the actuation parameters for the servos.

The result: cloning a real human’s face onto an animatronic figure.

The top comment is perfect as well:

mas8705: So Disney is Skynet. You would think we would have seen this sooner…


Not All Pixels Are Created Equal

Whenever I help friends and family buy a new camera, they almost always turn to pixel count as the deciding factor. The reality is that it’s probably not the most appropriate measure of “bestness,” and here’s why:

The metric most often used by camera manufacturers and marketers to tout their products has been pixel count. That’s a shame, but it was probably inevitable — it’s easy to measure, and consumers are used to the idea that more is better. However, the number of pixels is a measure of quantity, not quality.

This is a great article explaining in a mostly non-technical way why pixels aren’t all they’re cracked up to be.

Case in point: I can print (and have printed) a 30″ × 20″ from my eight-year-old 6.1 MP Nikon D70 that looks great, because it has a 23.7 mm × 15.6 mm1 sensor. If I were to print a picture at the same size from my year-old iPhone 4S, with its 8 MP 4.54 mm × 3.42 mm2 sensor, it would look very noisy.
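
To put rough numbers on it, compare the approximate pixel pitch, i.e. the square root of the sensor area divided by the pixel count; a back-of-the-envelope sketch using the sensor areas from the footnotes:

# approximate pixel pitch: sqrt(sensor area / pixel count), in micrometers
awk 'BEGIN {
  printf "D70:       %.1f um per pixel\n", 1000 * sqrt(369.72 / 6.1e6)
  printf "iPhone 4S: %.1f um per pixel\n", 1000 * sqrt(15.52 / 8.0e6)
}'

That works out to roughly 7.8 µm versus 1.4 µm, meaning each D70 pixel has about 31 times the area with which to gather light.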

  1. 369.72 mm² 

  2. 15.52 mm²