
Plex Proxmox VM with NVIDIA GPU passthrough

Editor’s note: Last updated 12/26/2022

I’ve seen various parts of this documented on the internet, but I don’t think I’ve seen all the steps written down in one place, so in the interest of sharing and not banging my head next time I need to re-create my Plex VM: here’s how I was able to get my NVIDIA Quadro K620 GPU to work with my Plex VM running in Proxmox.

Here’s my setup:

  • Proxmox (7.2-4 No-Subscription Repository) using the Linux 5.15.35-2-pve kernel
  • Plex VM on Debian 10 using the Linux 4.19.0-20-amd64 kernel
  • NVIDIA Quadro K620 GPU. Note: I’m only using this in headless mode for Plex transcoding.

The first part of the steps is based on https://3os.org/infrastructure/proxmox/gpu-passthrough/pgu-passthrough-to-vm/#proxmox-configuration-for-gpu-passthrough. However, installing the driver from the Debian repositories didn’t work. Maybe that’s because I’m using Debian 10 and the packaged NVIDIA v470 driver has a bug? There are also some repo differences between Ubuntu and Debian.

Proxmox Configuration for GPU Passthrough

Find GPU Bus Address and Device ID

Find the PCI address of the GPU Device:

lspci -nnv | grep VGA

We get:

0b:00.0 VGA compatible controller [0300]: Matrox Electronics Systems Ltd. G200eR2 [102b:0534] (prog-if 00 [VGA controller])
42:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM107GL [Quadro K620] [10de:13bb] (rev a2) (prog-if 00 [VGA controller])

What we are looking for is the PCI address of the NVIDIA GPU device. In this case it’s 42:00.0.
42:00.0 is only one part of a group of PCI devices on the GPU.
We can list all the devices in the group 42:00 by using the following command:

lspci -s 42:00

The output will usually include a VGA device and an audio device, which is what I have:

42:00.0 VGA compatible controller: NVIDIA Corporation GM107GL [Quadro K620] (rev a2)
42:00.1 Audio device: NVIDIA Corporation GM107 High Definition Audio Controller [GeForce 940MX] (rev a1)

Now we need to get the device IDs of those devices, using the following command:

lspci -s 42:00 -n

The output should look similar to this:

42:00.0 0300: 10de:13bb (rev a2)
42:00.1 0403: 10de:0fbc (rev a1)

What we are looking for are the vendor:device ID pairs; we will use those IDs to split the PCI group into separate devices: 10de:13bb,10de:0fbc

Edit grub

Edit the grub configuration file at /etc/default/grub

Find the line that starts with GRUB_CMDLINE_LINUX_DEFAULT and update it to look like this (Intel CPU example), replacing the vfio-pci.ids= values with the IDs for the GPU you want to pass through:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream,multifunction video=efifb:off video=vesa:off vfio-pci.ids=10de:13bb,10de:0fbc vfio_iommu_type1.allow_unsafe_interrupts=1 kvm.ignore_msrs=1 modprobe.blacklist=radeon,nouveau,nvidia,nvidiafb,nvidia-gpu"

Note: if you have an AMD CPU, then use amd_iommu=on instead of intel_iommu=on.

Save the config changes and then update GRUB:

update-grub

Update modules

Next we need to add the vfio modules to allow PCI passthrough. Edit the /etc/modules file and add the following lines to the end of the file:

# Modules required for PCI passthrough
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Apply the configuration changes made in your /etc filesystem:

update-initramfs -u -k all

Reboot and verify

Reboot Proxmox to apply the changes

Verify that IOMMU is enabled

dmesg | grep -e DMAR -e IOMMU

There should be a line that looks like DMAR: IOMMU enabled. If there is no output, something is wrong.
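As an extra check after the reboot (not part of the original steps, but it can’t hurt), you can confirm that the GPU is now bound to the vfio-pci driver rather than nouveau or nvidia:

lspci -nnk -s 42:00

Each device in the group should show a line like Kernel driver in use: vfio-pci.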

Your Proxmox host should be ready for GPU passthrough!

Debian 10 VM GPU Passthrough Configuration

VM Setup

I’m using Debian 10 as my VM.

For best performance, the VM’s Machine type should be set to q35. This allows the VM to utilize PCI-Express passthrough.

Add a PCI device to the VM (Add > PCI Device). Select the GPU device, which you can find using its PCI address from before (42:00, in this example). Note: this list uses a different format for the PCI addresses; 42:00.0 is listed as 0000:42:00.0.

Select:

  • All Functions
  • ROM-Bar
  • PCI-Express

…and then select Add. Note: Do not select Primary GPU.
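If you prefer the command line over the web UI, the equivalent should be something along these lines (a sketch only; 100 is a placeholder VM ID, and the options mirror the checkboxes above):

qm set 100 -hostpci0 0000:42:00,pcie=1,rombar=1

Passing 0000:42:00 without a function suffix passes all functions of the device, which matches the All Functions checkbox.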

Boot the VM. To test the GPU passthrough was successful, you can use the following command in the VM:

 sudo lspci -nnv | grep VGA

The output should include the GPU:

00:01.0 VGA compatible controller [0300]: Device [1234:1111] (rev 02) (prog-if 00 [VGA controller])
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM107GL [Quadro K620] [10de:13bb] (rev a2) (prog-if 00 [VGA controller])

Download NVIDIA Driver

Before we install the NVIDIA driver, we need to install the Linux header files and DKMS so that when the Linux kernel is updated, the NVIDIA driver will automatically be recompiled:

sudo apt-get install dkms linux-headers-$(uname -r)

Now we need to install the GPU Driver, and this is where things diverge from the instructions on 3os.org.

Go to https://www.nvidia.com/Download/index.aspx?lang=en-us and search for the driver you need:

Your search will come up with a file you can download to your Plex VM (right-click the download link to copy the download URL)

I used the following file: https://www.nvidia.com/content/DriverDownload-March2009/confirmation.php?url=/XFree86/Linux-x86_64/515.48.07/NVIDIA-Linux-x86_64-515.48.07.run&lang=us&type=TITAN

It doesn’t matter where you save it, since you can delete it once you’re done installing the driver.

Install NVIDIA Driver

You will need to chmod +x the file so you can execute it. Then execute the file with the DKMS flag (e.g. sudo sh ./NVIDIA-Linux-x86_64-515.48.07.run --dkms).
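For example (assuming the installer was saved to the current directory):

chmod +x NVIDIA-Linux-x86_64-515.48.07.run
sudo sh ./NVIDIA-Linux-x86_64-515.48.07.run --dkms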

Building the kernel modules

You might get a warning, which you can acknowledge. It didn’t seem to cause an issue for me.

Warning about guessing the X library path. Select OK to dismiss; we’re running headless, so we don’t care about the X.Org stuff.

I chose to install the 32-bit compatibility library. I’m not sure if it’s actually needed, but it didn’t seem to cause any problems.

Install NVIDIA’s 32-bit compatibility libraries? Select Yes.
When asked to run the nvidia-xconfig utility, select No. Again, we’re running headless, so we don’t care about the X.Org stuff.

And we’re done!

NVIDIA driver installation complete!

Verify Driver Installation

You should be able to verify that the system recognizes the GPU by running nvidia-smi.
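For example:

nvidia-smi

If you want it to refresh on its own, you can wrap it in watch, e.g. watch -n 2 nvidia-smi.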

Output of nvidia-smi when no GPU processes are running

Configure Plex

If you’ve not done so already, you’ll want to enable hardware acceleration for Plex under Settings > Transcoder:

Select “Save Changes”.

To verify that Plex is actually offloading the transcoding, start playing a TV show or movie from your Plex library (make sure that the Quality is any setting except Original). Then you can run nvidia-smi again and you’ll see that it lists Plex Transcoder under the process name (instead of “No running processes found”).

Output of nvidia-smi when Plex is transcoding

You can also install nvtop (apt-get install nvtop), which is a nice way to view transcoding activity over time:

Using nvtop

Other Notes

Debian vs Ubuntu

There are a lot of guides and questions where Ubuntu is the OS. I’m not sure why, but none of those worked for Debian when it came to NVIDIA drivers. The NVIDIA Driver Installation Quickstart Guide (see the list of links below) lists Ubuntu as supported, but not Debian. I’m not sure if that’s just because there’s no Debian package manager support, or if Debian is technically not supported by NVIDIA at all.

Removing the driver and upgrading

As far as I can tell, you can upgrade from one .run driver release to another on top of an existing installation. First, run:

/usr/bin/nvidia-uninstall

…to uninstall the existing driver. Then reboot the VM, and you can install the new driver.
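Putting that together, the upgrade sequence looks roughly like this (substitute the actual .run file name for the new driver):

sudo /usr/bin/nvidia-uninstall
sudo reboot
# after the reboot:
sudo sh ./NVIDIA-Linux-x86_64-<new-version>.run --dkms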

Somewhat related links

Other links that probably aren’t useful, but which I stumbled upon as part of my research. Sometimes these are things that don’t work (or don’t work for what I wanted to do). Regardless, they’re worth mentioning:

How do you typeset NVIDIA?

I thought it was supposed to be nVidia, but Wikipedia says Nvidia with a footnote saying it’s “Officially written as NVIDIA.” Or maybe it’s supposed to be nVIDIA?


Jamulus and Temporally Hyper-Near Servers

Temporally Hyper-Near Servers

As we’ve been doing more video and audio conferencing lately, I’ve been experimenting with temporally hyper-near servers to see if it results in a better experience. TL;DR…not really for most purposes.

Temporally hyper-near servers differ from geographically near servers in that what matters isn’t how close the server is physically in miles, just the packet transit time in milliseconds…basically, low latency.

AWS calls these Local Zones and they’re designed so that “you can easily run latency-sensitive portions of applications local to end-users and resources in a specific geography, delivering single-digit millisecond latency for use cases such as media & entertainment content creation, real-time gaming…”, but they only have them in the Los Angeles region for now.

Azure calls them Edge Zones, but they aren’t available yet.

Google doesn’t have a specific offering, but instead provides a list of facilities within each region you can choose from, though none of them are near Seattle.

I went back to my notes from when I was looking at deploying some servers that I knew would generally only be accessed from the Seattle area, and I found that Vultr could be a good solution1.

With Vultr (in Seattle), I’m getting an average round-trip time (RTT) of 3.221 ms (stddev 0.244 ms)2.

Compare that to AWS (US West 2), which had an average RTT of 10.820 ms (stddev 0.815 ms)3.
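Both numbers are just from plain ping with ten packets (see the footnotes for the exact targets), along the lines of:

ping -c10 -W50 108.61.194.105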

After doing some traceroutes and poking around various peering databases, I think that Vultr is based at the Cyxtera SEA2 datacenter in Seattle and shares interconnections with CenturyLink, Comcast, and AT&T (among others).

I set up a Jitsi server, but didn’t notice anything perceptibly different between using my server and a standard Jitsi public server (the nearest of which is on an AWS US West 2 instance).

However, for Jamulus (software that enables musicians to perform real-time jam sessions over the internet) there does appear to be a huge difference, and I’ve received several emails about my setup, so here goes:

Jamulus on Vultr

Deploy a new server on Vultr4; here’s the configuration I used:

  • Choose Server: Cloud Compute (see update at the end for High Frequency Compute)
  • Server Location: Seattle
  • Server Type: Debian 10 x64
  • Server Size: $5/mo
    • 25 GB SSD
    • 1 CPU
    • 1024 MB Memory
    • 1000GB Bandwidth
  • SSH Keys: as desired (and beyond the scope of this)
  • Firewall Group: No Firewall (we’ll use UFW on the host for this)
  • Server Hostname & Label: as desired…we’ll call it myserver for the sake of this post

Once you deploy the server, it will take a few minutes to be ready. Once it is, SSH to it:

ssh root@myserver

Update the Linux distribution:

apt-get update
apt-get -y dist-upgrade

Install and configure the UFW firewall:

apt-get install ufw
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw allow 22124/udp
ufw enable
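To double-check that the rules took effect:

ufw status verbose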

DigitalOcean has a good tutorial on how to setup UFW as well.

You’re now ready to install Jamulus!

The Jamulus wiki has a pretty decent set of instructions (which have only gotten better in the last few months) on how to download, compile, and run a headless Jamulus instance: https://github.com/corrados/jamulus/wiki/Server—Linux

Here’s the TL;DR (which assumes you are working as root):

Install dependencies:

apt-get -y install git build-essential qtdeclarative5-dev qt5-default qttools5-dev-tools libjack-jackd2-dev

Download source code:

cd /tmp/
git clone https://github.com/corrados/jamulus.git

Compile:

cd jamulus
qmake "CONFIG+=nosound headless" Jamulus.pro
make clean
make
make install
mv Jamulus /usr/local/bin/

Create a user to run Jamulus:

adduser --system --no-create-home jamulus

Create a directory to record files to:

mkdir -p /var/jamulus/recording
chown jamulus /var/jamulus/recording

Create systemd unit file:

nano /etc/systemd/system/jamulus.service

Paste the following into the file above, making changes to the Jamulus command-line options as needed (note that systemd won’t expand $(uname -n), so you’ll probably want to replace the --servername value with your actual hostname; see the update at the end for using --fastupdate):

[Unit]
Description=Jamulus-Server
After=network.target

[Service]
Type=simple
User=jamulus
Group=nogroup
NoNewPrivileges=true
ProtectSystem=true
ProtectHome=true
Nice=-20
IOSchedulingClass=realtime
IOSchedulingPriority=0

#### Change this to set genre, location and other parameters.
#### See https://github.com/corrados/jamulus/wiki/Command-Line-Options ####
ExecStart=/usr/local/bin/Jamulus --server --nogui --recording /var/jamulus/recording/ --servername $(uname -n) --centralserver jamulusallgenres.fischvolk.de:22224 --serverinfo "NW WA;Seattle, WA;225" -g --welcomemessage "This is an experimental service and support is not guaranteed. Please contact andrew@fergcorp.com with questions" --licence
     
Restart=on-failure
RestartSec=30
StandardOutput=journal
StandardError=inherit
SyslogIdentifier=jamulus

[Install]
WantedBy=multi-user.target

Give the unit file the correct permissions:

chmod 644 /etc/systemd/system/jamulus.service
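Since this is a brand-new unit file, reload systemd before starting it (the same command as in the update at the end):

systemctl daemon-reload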

Start and verify Jamulus:

systemctl start jamulus
systemctl status jamulus

You should get something like:

 jamulus.service - Jamulus-Server
   Loaded: loaded (/etc/systemd/system/jamulus.service; disabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-07-08 10:57:09 PDT; 4s ago
 Main PID: 14220 (Jamulus)
    Tasks: 3 (limit: 1149)
   Memory: 13.5M
   CGroup: /system.slice/jamulus.service
           └─14220 /usr/local/bin/Jamulus --server --nogui --recording /var/jamulus/recording/ --servername -n) --centralserver jamulusallgenres.fischvolk.de:22224 --serverinfo N

Jul 08 10:57:09 myserver.example.com jamulus[14220]: - central server: jamulusallgenres.fischvolk.de:22224
Jul 08 10:57:09 myserver.example.com jamulus[14220]: - server info: NW WA;Seattle, WA;225
Jul 08 10:57:09 myserver.example.com jamulus[14220]: - ping servers in slave server list
Jul 08 10:57:09 myserver.example.com jamulus[14220]: - welcome message: This is an experimental service and support is not guaranteed. Please contact andrew@fergcorp.com with questions
Jul 08 10:57:09 myserver.example.com jamulus[14220]: - licence required
Jul 08 10:57:09 myserver.example.com jamulus[14220]:  *** Jamulus, Version 3.5.8git
Jul 08 10:57:09 myserver.example.com jamulus[14220]:  *** Internet Jam Session Software
Jul 08 10:57:09 myserver.example.com jamulus[14220]:  *** Released under the GNU General Public License (GPL)
Jul 08 10:57:09 myserver.example.com jamulus[14220]: Server Registration Status update: Registration requested
Jul 08 10:57:09 myserver.example.com jamulus[14220]: Server Registration Status update: Registered
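If the status looks good, you’ll probably also want the service to start automatically on boot:

systemctl enable jamulus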

And that’s it! Enjoy the server and let me know how it goes!

9 July 2020 Update:

If you update the jamulus.service unit file, then run this:

systemctl daemon-reload
service jamulus restart

Also, thanks to Brian Pratt for testing, feedback, catching a couple of typos, and suggesting the --fastupdate command-line option paired with Vultr’s High Frequency Compute (instead of regular Cloud Compute) for even better performance.

  1. Neither DigitalOcean nor Linode have data centers in Seattle 

  2. ping -c10 -W50 108.61.194.105 

  3. ping -c10 -W50 ec2.us-west-2.amazonaws.com 

  4. Get $100 free credit with that affiliate link; note: you must use credit within 30 days 

  5. USA is 225 

Transition to LEMP

If you’re reading this, it means you are using the new AFdN server! As part of my for-whatever-foolish-reason plunge into Virtual Private Servers, I’ve been able to get all the files moved over1, set up, and fine-tune the new system.

It’s not that I wasn’t happy with Bluehost, just that I had grown out of it, which makes sense: Bluehost really is targeted at people new to web hosting, and I’ve had a web site since I was 11.

I’ve heard rumors that Bluehost has over 500 users on each one of their boxes. Upgrading to their Pro Package a couple of years ago put me on a box with “80% less accounts per server”, but it still wasn’t cutting it. I needed more!

The LEMP setup: Linux, Nginx2, MariaDB, PHP-FPM.

From a hardware standpoint, fremont is a NextGen 1GB Linode Virtual Private Server (VPS), powered by dual Intel Sandy Bridge E5-2670 processors each of which “enjoys 20 MB of cache and has 8 cores running at 2.6 Ghz” and is shared with, on average, 39 other Linodes.

Linux

I’ve chosen to run Debian 7 (64-bit); it’s a Linux distribution I trust, it has a good security focus, and I’m also very familiar with it.

Setting up the Linode was easy. I decided against using StackScripts because I wanted to know exactly what was going into my system, and I wanted to have the experience in case something goes wrong down the line.

I took a fresh copy of Wheezy (Debian 7) and then used the following guides:

I very seriously considered encrypting the entire server, but decided against it because ultimately the hardware was still going to be out of my physical control, and thus encrypting the system was not an appropriate solution for the attack vector I was concerned with.

Nginx

I’ve always used Apache to do the actual web serving, but I’ve heard great things about Nginx and I wanted to try it. Since I was already going down the foolish path, I figured that I had nothing to lose by trying a new web server as well.

To make things easier, I installed Nginx from the repo instead of from source and then configured it using the (more or less) standard approach.

It’s really simple to install; I probably overthought it.

rtCamp has a really great tutorial on setting up fastcgi_cache_purge, which allows Nginx to cache WordPress data and then purge and rebuild the cached content after you edit a post/page from the WordPress dashboard or approve a comment on an article.

MariaDB

The standard tool for web-based SQL databases in my book has always been MySQL. But just as with Nginx, I’ve heard some good things about MariaDB and figured, why not? The great thing is, MariaDB is essentially a drop-in replacement for MySQL. Installing from the repo was a piece of cake and there really is no practical difference in operation…it just works, but better (in theory).

PHP-FPM

PHP FastCGI Process Manager (FPM) is an alternative to the regular PHP FastCGI implementation. It includes adaptive process spawning, among other things, and seems to be the de facto PHP implementation for Nginx. Installing from the repo was a piece of cake and required only minimal configuration.

I originally used TCP sockets, but found that UNIX sockets gave better performance.
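If you’re curious which one a pool is currently using, the listen directive in the pool configuration tells you (the path assumes Debian 7’s php5-fpm packaging):

grep -n '^listen' /etc/php5/fpm/pool.d/*.conf

A socket path (e.g. /var/run/php5-fpm.sock) means UNIX sockets; an address:port pair means TCP.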

Fine-tuning

Getting everything moved over was pretty easy. I did some benchmarking using Google Chrome’s Network DevTools panel and the Plugin Performance Profiler from GoDaddy3.

Most of the fine-tuning was the little things, like better matching the number of threads to the number of cores I had available. I also enabled IPv6 support, which means that AFdN is IPv6 compliant:

ipv6 ready

Enjoy faster and better access to AFdN!


  1. at least for AFdN, there are other sites I run that are still in migration 

  2. pronounced engine-x, the “e” is invisible 

  3. I know, I’m just as shocked as you 

Welcome to fremont.fergcorp.com

For whatever foolish reason, I’ve decided to take the plunge into Virtual Private Servers and have sprung for a 1GB Linode.

I’m actually kind of excited by this. It’s sort of like being back in high school and running my own server from my parents’ house. Except I’m ten years wiser…and married.

Anyway, after some minor toiling about whether I should install nginx from the Debian repository or compile it from source, I ended up going with option C and am trying the dotdeb repo.

This has been predominantly driven by my continual desire to push Bluehost to the boundaries of what shared hosting means. I upgraded to the Pro account last year, but it’s still a bit sluggish, and I still consistently find myself having to scrape together horrid workarounds for things I want to do on the server. I probably should have gotten a VPS a year ago, but I wasn’t sure I wanted to take that task on…I’m still not sure.

The server is named Fremont, because it’s located in Fremont, California.

I’m going to move some of the sites I run off of BlueHost to see how fremont (along with nginx, MariaDB, and PHP-FPM) handles everything — and to see if BlueHost gets any snappier.

If all goes well, there’s a good chance I’ll move all the sites to fremont.

For now though, I just have the basic “Hey, it works” page up and running, including an SSL certificate, at https://fremont.fergcorp.com.

Setting up OpenMediaVault

I hope everyone had a merry Thanksgiving! I spent some of my time setting up OpenMediaVault on an Acer Aspire 3610 that my Kolby gave me. It’s a pretty small machine, running an Intel Atom 330 1.6 GHz with 2 GB of RAM1, but I think it will be perfect for running my new NAS!

I’ve been dreaming of a NAS for some time; I’ve contemplated building one for at least two years, but could never justify the cost. What makes this different is that it doesn’t require any new outlay for equipment–I’m literally using what I already have!

I settled on OpenMediaVault because it was based on Debian, which I have more experience with2.

Here are some configuration tricks I needed to use in order to get it to work How I Like It™:

CrashPlan

I use CrashPlan on my laptop and it’s great3! If you don’t have a backup plan, you need to stop reading and get one now. Seriously. What would you do if your computer was stolen, or the hard drive went kaput, or you accidentally deleted something? I want to make sure that the data my NAS is storing is just as safe as the data on my laptop.

There’s a guide over on the OpenMediaVault forums which basically echoes the official CrashPlan Linux installation guide. Everything went okay until I tried to launch the desktop client and couldn’t get X11 forwarding to work. I was eventually able to get a headless client to run from my laptop connected over a tunneled SSH, but I didn’t want to have to muck with the ui.properties file every time I wanted to check on things. I also wanted to be able to run both my client and the OMV client simultaneously. So I went back and did some more work on the X11 issue, and here’s what I found needed to happen:

For the purposes of this, the IP address of the OpenMediaVault server is 172.16.131.130

Log in to your terminal via SSH:

ssh root@172.16.131.130

Note: if you get ssh: connect to host 172.16.131.130 port 22: Connection refused, you need to enable SSH via the OMV online console first!

Prerequisites:

apt-get update
apt-get install xorg
echo "X11UseLocalHost no" >> /etc/ssh/sshd_config
/etc/init.d/ssh restart
apt-get install openjdk-6-jre

Install CrashPlan

cd /tmp
wget http://download.crashplan.com/installs/linux/install/CrashPlan/CrashPlan_3.4.1_Linux.tgz
tar -zxvf CrashPlan_3.4.1_Linux.tgz
cd CrashPlan-install
./install.sh

Answer yes to installing Java, and answer all the other questions as required. If you just press return, the defaults will work just fine (and that’s what I used).

Log out and back in with X11 forwarding enabled, then run CrashPlan:

exit
ssh -X root@172.16.131.130
/usr/local/bin/CrashPlanDesktop

Give it a few seconds and you’ll see that familiar CrashPlan green.

Other notes:

  • It was helpful to debug with ssh -v
  • Looking through /usr/local/crashplan/log/ui_error.log was the key to understanding that the version of Java downloaded by CrashPlan was throwing errors (such as java.lang.UnsatisfiedLinkError: no swt-pi-gtk-3448 or swt-pi-gtk in swt.library.path) and needed to be updated.

HFS+

I have a couple of drives that are formatted in HFS+ that I wanted to use without having to reformat them. As a side note, I think NTFS is probably the best bet for multi-system compatibility when there’s the potential for dealing with files larger than 4 GB. A comment on the OMV blog by norse laid the basic groundwork, but I also had to pull some information from Raam Dev’s blog about configuring HFS for Debian.

Note: hfsprogs 332.25-9 and below has a bug where “[f]ormatting a partition as HFSPLUS does not provide the partition with a UUID.” The workaround is to boot to OS X and use Disk Utility to format the partition, but this doesn’t work as well when you’re using a VM. The solution is to use the unstable 332.25-10 release of hfsprogs.

echo "deb http://ftp.debian.org/debian testing main contrib" >> /etc/apt/sources.list
apt-get update
apt-get install hfsplus hfsprogs hfsutils
sed -i '$ d' /etc/apt/sources.list
apt-get update

Then modify /var/www/openmediavault/rpcfilesystemmgmt.inc to be able to handle and mount HFS+ disks:

48c48
< 					  '"jfs","xfs","hfsplus"]},
---
> 					  '"jfs","xfs"]},
118c118
< 			  "umsdos", "vfat", "ufs", "reiserfs", "btrfs","hfsplus"))) {
---
> 			  "umsdos", "vfat", "ufs", "reiserfs", "btrfs"))) {
664,667d663
< 			break;
< 		case "hfsplus":
< 			$fsName = $fs->getUuid();
< 			$opts = "defaults,force"; //force,rw,exec,auto,users,uid=501,gid=20";

Finally, you may need to fsck your HFS+ disk if it’s being stubborn and mounting in read-only mode. With the partition unmounted:

fsck.hfsplus -f /dev/sdaX

WiFi

Getting WiFi to work took me down a rabbit hole that ended up being unnecessary. First, verify which wireless card you have. The easiest way to do this is with lspci:

apt-get install pciutils
lspci | grep -i network

You should see a line like:
05:00.0 Network controller: RaLink RT3090 Wireless 802.11n 1T/1R PCIe

Getting the RT3090 working is pretty straightforward:

echo "deb http://ftp.us.debian.org/debian squeeze main contrib non-free" >> /etc/apt/sources.list
apt-get update
aptitude install firmware-ralink wireless-tools wpasupplicant
sed -i '$ d' /etc/apt/sources.list
apt-get update

Edit /etc/network/interfaces to add the following:

auto wlan0
iface wlan0 inet dhcp
    wpa-ssid mynetworkname
    wpa-psk mysecretpassphrase

Note: the Debian guide recommends restricting the permissions of /etc/network/interfaces to prevent disclosure of the passphrase.
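For example:

chmod 0600 /etc/network/interfaces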

Then run:

ifup wlan0

That’s all I have for now. I’m working on some methods for backing up other data to the NAS (such as my web site and Gmail), which I’ll write up later.


  1. we tried using it to stream the Olympics, but it wouldn’t even do that very well; I think that was due to the nVidia chipset not playing well with Ubuntu 

  2. FreeNAS and NAS4Free both being based on FreeBSD 

  3. I previously used Mozy, but they started charging by the GB, which wasn’t going to work for me