Technology’s Infestation of my Life

Examples of how technology has permeated every single bit of my life.

Feit SHOP/4/HO/CCT/AG Teardown

The SHOP/4/HO/CCT/AG shop light is a 10,000-lumen light with tunable white (3000 K – 6500 K) and a NEMA 5-15P connector that plugs into a standard three-prong outlet.

It can be found in the US on Amazon and at retailers such as Costco for about $60 (as of the time of publishing). It’s an interesting design (in my opinion) because it’s a smart light that supports on/off, dimming, and CCT (color temperature) adjustment. But at the flip of a switch you can use it just like a standard, dumb LED light.

Lights

The light module contains 6 rows of LEDs (84 LEDs/row), each row alternating between cold and warm, for a total of 504 LEDs.

Major Integrated Circuits

In addition to your usual collection of resistors, capacitors, and diodes, there are three inductors that I couldn’t figure out P/N information for. I think there’s also a thermistor or fuse of some kind (magenta-circled item on the top side of the PCB). I’ve highlighted the major ICs below and provided some information based on what I could find:

WB2L (BK7231T chip)

Manufacturer: Tuya

P/N: WB2L (BK7231T chip)

Function: Low-power embedded Wi-Fi and Bluetooth LE module. The chip is on a daughter board, which is soldered to the main board perpendicularly through a through-hole. There’s a switch that overrides the WB2L (and removes power from it!) and sets the light to a cool white (not cold) color.

Pin No. | Symbol | I/O type | BK7231T Function | Use
1 | PWM2 | I/O | Hardware PWM pin; P8/BT_ACTIVE/PWM2 (Pin24) | Dimmer
2 | PWM1 | I/O | Hardware PWM pin; P7/WIFI_ACTIVE/PWM1 (Pin23) | Color Temperature
3 | PWM0 | I/O | Hardware PWM pin; P6/CLK13M/PWM0 (Pin22) | No Connect
4 | PWM5 | I/O | Hardware PWM pin; P26/IRDA/PWM5 (Pin15) | Unknown
5 | PWM4 | I/O | Hardware PWM pin; P24/LPO_CLK/PWM4 (Pin16) | No Connect
6 | GND | P | Ground pin | GND
7 | 3V3 | P | Power supply pin (3.3 V) | 3.3 V

For this light, here’s what my probing found:

  • PWM1: 0% duty cycle is full Cold and 100% duty cycle is full Warm
  • PWM2: 0% duty cycle is off and 100% duty cycle is full brightness.
  • PWM5: Unknown if this is an input or an output. With the stock Feit firmware:
    • Logic 0: when the light is on
    • Logic 1: when the light is off
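
Putting the two duty-cycle observations together, you can rough out what duty cycle PWM1 would need for a given color temperature. This is only a sketch: it assumes the cold/warm mix is linear across the light’s 3000 K–6500 K range, which I haven’t verified.

# Hypothetical example: PWM1 duty cycle for a target CCT,
# assuming 0% duty = full cold (6500 K) and 100% duty = full warm (3000 K)
cct=4000
echo "scale=3; (6500 - $cct) / (6500 - 3000) * 100" | bc   # prints 71.400, i.e. ~71% duty for 4000 K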

BP2525

Manufacturer: Bright Power Semiconductor

P/N: BP2525

Function: Non-isolated step-down AC/DC constant-voltage chip. It provides power for the LEDs, which is then current controlled by the BP5929 (for color temperature) and BP5012 (for dimming).

BP5929

Manufacturer: Bright Power Semiconductor

P/N: BP5929

Function: Dual-channel PWM color matching chip1

This is an interesting chip, and I’m not quite sure why they’re using it. Normally, the microcontroller would output two PWM signals, one for Cool White and one for Warm White, each driving a MOSFET, and dimming would be handled algorithmically by the microcontroller. This chip seems to offload that functionality (sort of; this chip doesn’t actually handle dimming itself, but rather works in combination with a separate current control method — which I believe is the BP5012 in this design), but I’m not sure why…the microcontroller can do all of this easily. My one thought is that maybe this method reduces flicker because it relies on constant voltage instead of using PWM to vary the voltage.

Pin Number | Name | Description
1 | VH | High-voltage side power supply terminal
2 | VS | High-voltage side floating
3 | OUT2 | Output GATE signal 2
4 | OUT1 | Output GATE signal 1
5 | NC | Not Connected
6 | PWM | PWM Signal Input
7 | VCC | Low voltage power supply
8 | GND | Ground

Google translation of datasheet…so may not be exactly accurate

BP5012

Manufacturer: Bright Power Semiconductor

P/N: BP5012 or BP501 (maybe)

Function: PWM dimming interface converter (probably)

I couldn’t find a datasheet for BP5012 or BP501. There is a BP5011 though, which is a 2 channel PWM dimming controller. So I suspect it’s related to that.

JW1606

Manufacturer: Joulwatt

P/N: JW1602

Function: Non-isolated switching regulator with dimming

The package is marked JW1606, but JW1606 doesn’t exist. I’m assuming this is related to the JW1602, which I was able to find a datasheet for.

The JW1602 has a dimming function, so I’m not sure why there’s duplicate functionality with the BP5012. The BP2525 is also a non-isolated switching regulator. So my theory is that there are two completely separate control circuits on the PCB: one that provides variable control of color temperature and intensity when the microcontroller is active, and another that provides a fixed color temperature (probably just 50/50) and a fixed dimming level (set by a voltage divider).

GBU 4J

Manufacturer: onSemi

P/N: GBU4J

Function: Bridge Rectifier. Probably for input to both of the non-isolated switching regulators (BP2525 and JW1606).

CRJF380N65G2

Manufacturer: CR Micro

P/N: CRJF380N65G2

Function: SJ-MOS N-MOSFET. I think this is used only when the unit is in “dumb” mode and provides the variable current control for all 504 LEDs.

SVF4N65F

Manufacturer: Silan Microelectronics

P/N: SVF4N65F

Function: N-channel MOSFET. There are two of these directly connected to the LEDs themselves, one for each color temperature. These are probably driven by the BP5929 to control the current for each set of LEDs.

7002

Manufacturer: CT Micro

P/N: CT2N7002E-R3

Function: N-Channel Enhancement MOSFET. I think this is somehow related to the mysterious function of PWM5.

Disassembly

Getting to the BK7231T is pretty straightforward:

  1. Remove two screws from the cord side (red circles)
  2. Remove the cord clamp (green circle)
  3. Slide the PCB out from its slot
    Note: this might require scraping some of the potting compound used to secure the PCB into the slot. Pushing gently from the end with the switch usually pops it out.

Flashing

I flashed the unit with OpenBK7231T/OpenBeken, which now supports this light. I followed the UART instructions, which requires using https://github.com/OpenBekenIOT/hid_download_py.

I used a CH340G USB-to-TTL serial adapter to program the chip. Once the PCB is removed, make the following connections:

  • Connect computer TX to U1_RXD (Top side: red arrow)
  • Connect computer RX to U1_TXD (Top side: green arrow)
  • Connect a wire to the RST (Top side: magenta arrow); when you’re ready to program, you’ll briefly connect this to ground to reset the chip.
  • Connect 3.3 volts to 3V3 (Bottom side: red arrow)
  • Connect ground to GND (Bottom side: black arrow)
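
For what it’s worth, the actual flash ends up being a single command from the hid_download_py repo. The sketch below is from memory and is an assumption, not a recipe: check the hid_download_py README and the OpenBeken flashing guide for the exact script name, flags, and start address, and substitute your own serial port and firmware file name.

# hypothetical invocation; verify flags against the hid_download_py README before running
python uartprogram OpenBK7231T_UA_X.Y.Z.bin -d /dev/ttyUSB0 --unprotect --startaddr 0x11000 -w

When the script starts waiting, briefly short RST to ground so the chip resets into its bootloader and the transfer begins.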

Outstanding Questions

  • What does PWM5 do?
  • Does power get cut to WB2L when switch is activated?
  • What are the other chips?
  • What chip is used for LED control?
  1. This is a Google Translation…so may not be exactly accurate 

Plex Proxmox VM with NVIDIA GPU passthrough

Editor’s note: Last updated 12/26/2022

I’ve seen various parts of this documented on the internet, but I don’t think I’ve seen all the steps written down in one place, so in the interest of sharing and not banging my head next time I need to re-create my Plex VM: here’s how I was able to get my NVIDIA Quadro K620 GPU to work with my Plex VM running in Proxmox.

Here’s my setup:

  • Proxmox (7.2-4 No-Subscription Repository) using the Linux 5.15.35-2-pve kernel
  • Plex VM on Debian 10 using the Linux 4.19.0-20-amd64 kernel
  • NVIDIA Quadro K620 GPU. Note: I’m only using this in headless mode for Plex transcoding.

The first part of the steps is based on https://3os.org/infrastructure/proxmox/gpu-passthrough/pgu-passthrough-to-vm/#proxmox-configuration-for-gpu-passthrough. However, installing the Debian driver didn’t work. Maybe because I’m using Debian 10, or perhaps the NVIDIA v470 driver has a bug? I think there are some repo differences between Ubuntu and Debian.

Proxmox Configuration for GPU Passthrough

Find GPU Bus Address and Device ID

Find the PCI address of the GPU Device:

lspci -nnv | grep VGA

We get:

0b:00.0 VGA compatible controller [0300]: Matrox Electronics Systems Ltd. G200eR2 [102b:0534] (prog-if 00 [VGA controller])
42:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM107GL [Quadro K620] [10de:13bb] (rev a2) (prog-if 00 [VGA controller]

What we are looking for is the PCI address of the NVIDIA GPU device. In this case it’s 42:00.0.
42:00.0 is only one part of a group of PCI devices on the GPU.
We can list all the devices in the group 42:00 by using the following command:

lspci -s 42:00

The usual output will include VGA Device and Audio Device, which is what I have:

42:00.0 VGA compatible controller: NVIDIA Corporation GM107GL [Quadro K620] (rev a2)
42:00.1 Audio device: NVIDIA Corporation GM107 High Definition Audio Controller [GeForce 940MX] (rev a1)

Now we need to get the device IDs of those devices. We can do this by using the following command:

lspci -s 42:00 -n

The output should look similar to this:

42:00.0 0300: 10de:13bb (rev a2)
42:00.1 0403: 10de:0fbc (rev a1)

What we are looking for are the vendor:device ID pairs; we will use those IDs to split the PCI group into separate devices: 10de:13bb,10de:0fbc

Edit grub

Edit the grub configuration file at /etc/default/grub

Find the line that starts with GRUB_CMDLINE_LINUX_DEFAULT and update it to look like this (Intel CPU example) — replace vfio-pci.ids= with the IDs for the GPU you want to pass through:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream,multifunction video=efifb:off video=vesa:off vfio-pci.ids=10de:13bb,10de:0fbc vfio_iommu_type1.allow_unsafe_interrupts=1 kvm.ignore_msrs=1 modprobe.blacklist=radeon,nouveau,nvidia,nvidiafb,nvidia-gpu"

Note: if you have an AMD CPU, then use amd_iommu=on instead of intel_iommu=on.

Save the config changes and then update GRUB:

update-grub

Update modules

Next we need to add vfio modules to allow PCI passthrough. Edit the /etc/modules file and add the following lines to the end of the file:

# Modules required for PCI passthrough
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Update the initramfs to apply the configuration changes made in your /etc filesystem:

update-initramfs -u -k all

Reboot and verify

Reboot Proxmox to apply the changes

Verify that IOMMU is enabled

dmesg | grep -e DMAR -e IOMMU

There should be a line that looks like DMAR: IOMMU enabled. If there is no output, something is wrong.
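
As an extra sanity check (not part of the original guide), you can also list the IOMMU groups and confirm that the GPU and its audio function show up:

find /sys/kernel/iommu_groups/ -type l | grep 42:00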

Your Proxmox host should now be ready for GPU passthrough!

Debian 10 VM GPU Passthrough Configuration

VM Setup

I’m using Debian 10 as my VM.

For best performance, the VM’s Machine type should be set to q35. This will allow the VM to utilize PCI-Express passthrough.

Add a PCI device to the VM (Add > PCI Device). Select the GPU device, which you can find using its PCI address from before (42:00, in this example). Note: this list uses a different format for the PCI address; 42:00.0 is listed as 0000:42:00.0.

Select:

  • All Functions
  • ROM-Bar
  • PCI-Express

…and then select Add. Note: Do not select Primary GPU.
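
If you prefer the command line over the GUI, the same settings can be applied with qm set on the Proxmox host. A sketch, assuming the VM ID is 100:

# set the machine type to q35 and pass through all functions of the GPU as a PCIe device
qm set 100 --machine q35
qm set 100 --hostpci0 0000:42:00,pcie=1,rombar=1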

Boot the VM. To test the GPU passthrough was successful, you can use the following command in the VM:

 sudo lspci -nnv | grep VGA

The output should include the GPU:

00:01.0 VGA compatible controller [0300]: Device [1234:1111] (rev 02) (prog-if 00 [VGA controller])
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM107GL [Quadro K620] [10de:13bb] (rev a2) (prog-if 00 [VGA controller])

Download NVIDIA Driver

Before we install the NVIDIA driver, we need to install the Linux header files and DKMS so that when the Linux kernel is updated, the NVIDIA driver will automatically recompile:

sudo apt-get install dkms linux-headers-$(uname -r)

Now we need to install the GPU Driver, and this is where things diverge from the instructions on 3os.org.

Go to https://www.nvidia.com/Download/index.aspx?lang=en-us and search for the driver you need:

Your search will come up with a file you can download to your Plex VM (right-click the download link to copy the download URL)

I used the following file: https://www.nvidia.com/content/DriverDownload-March2009/confirmation.php?url=/XFree86/Linux-x86_64/515.48.07/NVIDIA-Linux-x86_64-515.48.07.run&lang=us&type=TITAN

It doesn’t matter where you save it, since you can delete it once you’re done installing the driver.

Install NVIDIA Driver

You will need to chmod +x the file so you can execute it. Then execute the file with the DKMS flag (e.g. sudo sh ./NVIDIA-Linux-x86_64-515.48.07.run --dkms).
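
In other words, for the file I downloaded:

chmod +x NVIDIA-Linux-x86_64-515.48.07.run
sudo sh ./NVIDIA-Linux-x86_64-515.48.07.run --dkms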

Building the kernel modules

You might get a warning, which you can acknowledge. It didn’t seem to cause an issue for me.

Warning about guessing X library path. Select OK to dismiss. We’re using this headless, so don’t care about x.org stuff.

I chose to install the 32-bit compatibility library. I’m not sure if it’s actually needed, but it didn’t seem to cause any problems.

Install NVIDIA’s 32-bit compatibility libraries? Select Yes
Asking to run nvidia-xconfig utility. Select No. We’re using this headless, so don’t care about x.org stuff.

And we’re done!

NVIDIA driver installation complete!

Verify Driver Installation

You should be able to verify that the system recognizes the GPU by running nvidia-smi

Output of nvidia-smi when no GPU processes are running

Configure Plex

If you’ve not done so already, you’ll want to enable hardware acceleration for Plex under Settings > Transcoder:

Select “Save Changes”.

To verify that Plex is actually offloading the transcoding, start playing a TV or movie from your Plex library (make sure that the Quality is any setting except Original). Then you can run nvidia-smi again and you’ll see that it lists Plex Transcoder under the process name (instead of “No running processes found”).

Output of nvidia-smi when Plex is transcoding
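
If you want to watch the transcode sessions come and go without re-running the command, watch works fine:

watch -n 2 nvidia-smi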

You can also install nvtop (apt-get install nvtop) which is a nice way to view transcoding efforts over time:

Using nvtop

Other Notes

Debian vs Ubuntu

There’s a lot of guides and questions where Ubuntu is the OS. I’m not sure why, but none of those work for Debian when it comes to NVIDIA drivers. The NVIDIA Driver Installation Quickstart Guide (see below list of links) lists Ubuntu as supported, but not Debian. I’m not sure if that’s just because there’s no Debian package manager support, or if Debian is technically not supported by NVIDIA at all.

Removing the driver and upgrading

As far as I can tell, you can’t simply install another .run file on top of an existing NVIDIA driver installation. You’ll need to run:

/usr/bin/nvidia-uninstall

…to uninstall the existing driver. You’ll need to reboot the VM and then you can install the new driver.

Somewhat related links

Other links that probably aren’t useful, but which I have stumbled on as part of my research. Sometimes these are things that don’t work (or don’t work for what I was wanting to do). Regardless, they are worth mentioning:

How do you typeset NVIDIA?

I thought it was supposed to be nVidia, but Wikipedia says Nvidia with a footnote saying it’s “Officially written as NVIDIA.” Or maybe it’s supposed to be nVIDIA?


2010 Prius Microphone

This is an update to Notes on Installing Sony XAV-AX100 in a 2010 Prius, specifically trying to integrate the factory microphone.

Unfortunately, the microphone is pre-amplified with what I believe to be a New Japan Radio Co. 2140 op-amp (the black IC in the middle of the front side; pin 1 is in the upper left corner).

When connected to the factory head unit, the output signal of the factory microphone module was about 500 mV peak-to-peak and centered on 0.0 V.

Factory microphone output signal when connected to factory head unit

I wasn’t able to get a great oscilloscope capture of the factory microphone output when connected to the Sony head unit, but it ended up being biased by about 3.3v and about 400mv peak-to-peak.

I could have re-wired the factory microphone circuit to remove the amplification, or even made a new circuit to de-amplify it and put that inline. But I decided that was too much work and just installed an external 3.5mm microphone instead:


Notes on Installing Sony XAV-AX100 in a 2010 Prius

Parts

Sony XAV-AX100 — $240

Metra 95-8226B Dash Kit for Toyota Prius 2010 Double DIN (Black) — $15

Metra TYTO-01 JBL Amplifier Interface Harness — $40

Axxess AX-TOYCAM2-6V Toyota Back-Up Camera Retain/Add-On with 6 Volt Turn on — $11

Metra 70-8114 Steering Wheel Control Wire Harness with RCA for 2003-Up Select Toyota/Scion/Lexus Vehicles — $7

Note: You do not need the ASWC-1 module; we’re just going to cannibalize this harness for its connectors

DC/DC Converter 12 to 5V — $7

3.5mm Tip/Sleeve (Mono) adapter/pigtail — $8

Cllena High Speed Dual Port USB Car Charger with Audio Socket for Toyota Series — $15

Note: The 2010 Prius did not include a USB adapter/plug — even for the Trim V and/or Nav package

Note: For the 2010 Prius you’ll need the one that is 22mm x 33mm (0.87in x 1.3in). This is also the same size as the 2015 RAV4.

Steering Wheel Control

I originally thought I needed the ASWC-1, but as it turns out there are two ways Steering Wheel Control (SWC) signals can be sent: either as voltage-based1 analog or as digital signals over the CAN bus.

Fortunately for me, both my Prius and the Sony XAV-AX100 use the voltage-based analog SWC signaling. I lopped off the black connector of the Metra 70-8114 (which normally would plug into the ASWC-1) and soldered to the red/white wires of the 3.5mm connector:

  • Green/Orange to thin White (SW1)
  • Green/Black to thin Red (SW2)

Terminal | Condition | Specification*
L41-7 (SW1) | Seek+ switch pushed | < 0.8 V
L41-7 (SW1) | Seek- switch pushed | 0.9 to 1.3 V
L41-7 (SW1) | Volume+ switch pushed | 1.65 to 1.9 V
L41-7 (SW1) | Volume- switch pushed | 2.45 to 2.6 V
L41-7 (SW1) | Steering pad switch not operated | 3.28 to 3.5 V
L41-8 (SW2) | MODE switch pushed | < 0.8 V
L41-8 (SW2) | On hook switch pushed | 0.9 to 1.3 V
L41-8 (SW2) | Off hook switch pushed | 1.65 to 1.9 V
L41-8 (SW2) | Voice switch pushed | 2.45 to 2.6 V
L41-8 (SW2) | Steering pad switch not operated | 3.28 to 3.5 V

* With respect to L40-20 (GND)

On the XAV-AX100, there’s an option to program the SWC buttons, so I did that and everything works as expected.

Microphone

The Prius has a microphone (at least mine does) and I wanted to keep that microphone instead of adding a new one.

The microphone in the Prius requires 5V for its built-in amp, which appears to be always powered when the car is on.

  • DC/DC Converter 5V to Prius L37-17 (MACC, Telephone microphone assembly power supply, 5V)
  • Sony XAV-AX100 Black to Prius L37-18 (SGND, Shield ground)
  • Sony XAV-AX100 Mic Tip to Prius L37-19 (MIN+, Microphone voice signal)
  • Sony XAV-AX100 Mic Sleeve to Prius L37-20 (MIN-, Microphone voice signal)

Note: I think this works…though I’m having some call quality issues. Not sure if it’s related to this, CarPlay, or something else.

I added an external microphone: 2010 Prius Microphone

Rear Camera Hookup Options

This assumes a factory backup camera; remember to plug the yellow RCA cable into the radio as well.

You can buy the L42 connector with a pigtail from https://autoharnesshouse.com/49914.html, or just do what I did and stick a wire in the female connector and tape it.

Normal (only on when in reverse)

  • AX-CAM6 Blue/White (Reverse trigger) to Sony XAV-AX100 Purple/white (Reverse In) and to Prius L42-5 (Reverse Signal)
  • AX-CAM6 Black (Ground) to Sony XAV-AX100 Black (Ground)
  • AX-CAM6 Blue/Red (Camera power, 6V) to Prius L37-24 (CA+, Television camera power supply, 5.5 to 7V)

Always available

  • AX-CAM6 Blue/White (Reverse trigger) to Sony XAV-AX100 Red (Accessory Power, 12V)
  • AX-CAM6 Black (Ground) to Sony XAV-AX100 Black (Ground)
  • AX-CAM6 Blue/Red (Camera power, 6V) to Prius L37-24 (CA+, Television camera power supply, 5.5 to 7V)
  • Prius L42-5 (Reverse Signal) to Sony XAV-AX100 Purple/white (Reverse In)

Other Notes:

  • The color and texture of the dash kit definitely does not match, but I’m not sure there is one that does.
  • It’s been a while since I’ve replaced a factory radio, and this one took me some time to figure out the SWC, microphone, and backup camera. I was used to the Old Days™ where the radio comes with a pigtail connector, you buy a pigtail connector for your vehicle, solder the two together and that’s it.
  • I installed the Dual port USB adapter, but only the USB connector for the audio is currently hooked up. I still need to hook up the second USB for charging. My current plan is to run the wire to the 12V cigarette adapter in the center console.
  • There’s an adjustment for the volume on the Metra TYTO-01, I think I set mine too low because A) I have the radio cranked up pretty high when driving (the Prius is notorious for road noise), and B) when I’m playing Spotify through CarPlay it sounds like the audio is clipping. So I’ll need to take the dash apart again and adjust that.
  • While I did remember to remove all the music CDs from the factory radio before I uninstalled it, I forgot to clear the Oil Maintenance reminder message (which is set and controlled through the factory radio)…so I’ll probably need to hook it back up to clear it *facepalm*

Resources

  1. I think there also may be a resistive-based analog format as well…so three ways 

Jamulus and Temporally Hyper-Near Servers

Temporally Hyper-Near Servers

As we’ve been doing more video and audio conferencing lately, I’ve been experimenting with temporally hyper-near servers to see if it results in a better experience. TL;DR…not really for most purposes.

Temporally hyper-near servers differ from geographically near servers in that what matters isn’t how physically close the server is in miles, but the packet transit time in milliseconds…basically low latency.

AWS calls these Local Zones and they’re designed so that “you can easily run latency-sensitive portions of applications local to end-users and resources in a specific geography, delivering single-digit millisecond latency for use cases such as media & entertainment content creation, real-time gaming…”, but they only have them in the Los Angeles region for now.

Azure calls them Edge Zones, but they aren’t available yet.

Google doesn’t have a specific offering, but instead provides a list of facilities within each region you can choose from, though none of them are near Seattle.

I went back to my notes from when I was looking at deploying some servers that I knew would generally only be accessed from the Seattle area, and I found that Vultr could be a good solution1.

With Vultr (in Seattle), I’m getting an average round-trip time (RTT) of 3.221 ms (stddev 0.244 ms)2

Compare that to AWS (US West 2), which had an average RTT of 10.820 ms (stddev 0.815 ms)3

After doing some traceroutes and poking around various peering databases, I think that Vultr is based at the Cyxtera SEA2 datacenter in Seattle and shares interconnections with CenturyLink, Comcast, and AT&T (among others).

I set up a Jitsi server, but didn’t notice anything perceptibly different between using my server and a standard public Jitsi server (the nearest of which is on an AWS US West 2 instance).

However, for Jamulus (which is software that enables musicians to perform real-time jam sessions over the internet) there does appear to be a huge difference, and I’ve received several emails about the setup I have, so here goes:

Jamulus on Vultr

Deploy a new server on Vultr4; here’s the configuration I used:

  • Choose Server: Cloud Compute (see update at the end for High Frequency Compute)
  • Server Location: Seattle
  • Server Type: Debian 10 x64
  • Server Size: $5/mo
    • 25 GB SSD
    • 1 CPU
    • 1024 MB Memory
    • 1000GB Bandwidth
  • SSH Keys: as desired (and beyond the scope of this)
  • Firewall Group: No Firewall (we’ll use UFW on the host for this)
  • Server Hostname & Label: as desired…we’ll call it myserver for the sake of this post

Once you deploy the server, it will take a few minutes for it to be ready. Once it is, SSH to it:

ssh root@myserver

Update the Linux distribution:

apt-get update
apt-get -y dist-upgrade

Install and configure the UFW firewall:

apt-get install ufw
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw allow 22124/udp
ufw enable
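
You can confirm the rules took effect with:

ufw status verbose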

DigitalOcean has a good tutorial on how to setup UFW as well.

You’re now ready to install Jamulus!

The Jamulus wiki has a pretty decent set of instructions (which have only gotten better in the last few months) on how to download, compile, and run a headless Jamulus instance: https://github.com/corrados/jamulus/wiki/Server—Linux

Here’s the TL;DR (which assumes you are working as root):

Install dependencies:

apt-get -y install git build-essential qtdeclarative5-dev qt5-default qttools5-dev-tools libjack-jackd2-dev

Download source code:

cd /tmp/
git clone https://github.com/corrados/jamulus.git

Compile:

cd jamulus
qmake "CONFIG+=nosound headless" Jamulus.pro
make clean
make
make install
mv Jamulus /usr/local/bin/

Create a user to run Jamulus:

adduser --system --no-create-home jamulus

Create a directory to record files to:

mkdir -p /var/jamulus/recording
chown jamulus /var/jamulus/recording

Create systemd unit file:

nano /etc/systemd/system/jamulus.service

Paste the following into the file above, making the needed changes to the Jamulus command line options (see the update at the end for using --fastupdate):

[Unit]
Description=Jamulus-Server
After=network.target

[Service]
Type=simple
User=jamulus
Group=nogroup
NoNewPrivileges=true
ProtectSystem=true
ProtectHome=true
Nice=-20
IOSchedulingClass=realtime
IOSchedulingPriority=0

#### Change this to set genre, location and other parameters.
#### See https://github.com/corrados/jamulus/wiki/Command-Line-Options ####
ExecStart=/usr/local/bin/Jamulus --server --nogui --recording /var/jamulus/recording/ --servername $(uname -n) --centralserver jamulusallgenres.fischvolk.de:22224 --serverinfo "NW WA;Seattle, WA;225" -g --welcomemessage "This is an experimental service and support is not guaranteed. Please contact andrew@fergcorp.com with questions" --licence
     
Restart=on-failure
RestartSec=30
StandardOutput=journal
StandardError=inherit
SyslogIdentifier=jamulus

[Install]
WantedBy=multi-user.target

Give the unit file the correct permissions:

chmod 644 /etc/systemd/system/jamulus.service

Start and verify Jamulus:

systemctl start jamulus
systemctl status jamulus

You should get something like:

 jamulus.service - Jamulus-Server
   Loaded: loaded (/etc/systemd/system/jamulus.service; disabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-07-08 10:57:09 PDT; 4s ago
 Main PID: 14220 (Jamulus)
    Tasks: 3 (limit: 1149)
   Memory: 13.5M
   CGroup: /system.slice/jamulus.service
           └─14220 /usr/local/bin/Jamulus --server --nogui --recording /var/jamulus/recording/ --servername -n) --centralserver jamulusallgenres.fischvolk.de:22224 --serverinfo N

Jul 08 10:57:09 myserver.example.com jamulus[14220]: - central server: jamulusallgenres.fischvolk.de:22224
Jul 08 10:57:09 myserver.example.com jamulus[14220]: - server info: NW WA;Seattle, WA;225
Jul 08 10:57:09 myserver.example.com jamulus[14220]: - ping servers in slave server list
Jul 08 10:57:09 myserver.example.com jamulus[14220]: - welcome message: This is an experimental service and support is not guaranteed. Please contact andrew@fergcorp.com with questions
Jul 08 10:57:09 myserver.example.com jamulus[14220]: - licence required
Jul 08 10:57:09 myserver.example.com jamulus[14220]:  *** Jamulus, Version 3.5.8git
Jul 08 10:57:09 myserver.example.com jamulus[14220]:  *** Internet Jam Session Software
Jul 08 10:57:09 myserver.example.com jamulus[14220]:  *** Released under the GNU General Public License (GPL)
Jul 08 10:57:09 myserver.example.com jamulus[14220]: Server Registration Status update: Registration requested
Jul 08 10:57:09 myserver.example.com jamulus[14220]: Server Registration Status update: Registered
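
If you want to keep following the log output after that (for example, to watch clients connect), journalctl works well since the unit logs to the journal:

journalctl -u jamulus -f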

And that’s it! Enjoy the server and let me know how it goes!

9 July 2020 Update:

If you update jamulus.service unit file then run this:

systemctl daemon-reload
service jamulus restart

Also, thanks to Brian Pratt for testing, feedback, catching a couple of typos, and suggesting the --fastupdate command line option paired with Vultr’s High Frequency Compute (instead of regular Compute) for even better performance.

  1. Neither DigitalOcean nor Linode have data centers in Seattle 

  2. ping -c10 -W50 108.61.194.105 

  3. ping -c10 -W50 ec2.us-west-2.amazonaws.com 

  4. Get $100 free credit with that affiliate link; note: you must use credit within 30 days 

  5. USA is 225 

Deleted Facebook

Yesterday was my last day on Facebook. Today I deleted my account.

I may write more later, but fundamentally I don’t trust Facebook with my data or their motives.

I have similar concerns with Google as well and I don’t use GMail (I use FastMail), I don’t use Google Search (I use DuckDuckGo), and I don’t use an Android device (I use an iPhone).

Facebook (similar to Google) has repeatedly demonstrated they want to ingest all possible information they can about me, my family, my friends, my coworkers, and my acquaintances…damn the consequences.

They do this in overt and obvious ways, such as on the Facebook site itself when I provide them information, as well as offsite via the use of embedded “Like” buttons across the web. I used Firefox’s ‘Facebook Container’ and EFF’s ‘Privacy Badger’ plugins in an attempt to segregate Facebook from the rest of my online digital presence.

Facebook also does this in more covert ways, such as creating social graphs to see how people are related and interact with each other, scanning photos to identify people (even people who aren’t users of Facebook)[1], and even creating ‘shadow profiles’ for people who don’t have accounts [2].

Facebook desires to be at the intersection of every kind of interaction they can be — social groups, personal communication, advertisement, sales, currency, etc — and to profit off it…to profit off of me.

This is a dangerous desire, in my opinion, and one I do not want to be involved in or exploited to achieve.

I also don’t like what Facebook does to my brain in terms of the intermittent reinforcement (similar to what happens at casinos) with new posts and updates from friends as well as the comparing (and glamorizing) of idealized existences.

I also hate the polarization that occurs with Facebook, and is in part driven by Facebook. Through their algorithms, Facebook encourages echo chambers and the spread of (dis)information thereof.

This is incredibly scary in our current social and geopolitical climate…we seem to have lost the ability to have rational debate…something that is very urgently needed.

But know that I’d still love to keep in touch, so please call, text, email, or visit the blogs: andrewferguson.net (more tech and politics) and andrewandrachel.com (more life events and pictures…you’ll need to create a login because it’s private, you can also get email updates if you want too!)

[1] https://chicago.suntimes.com/metro-state/2020/1/29/21114569/facebook-could-pay-550-million-to-illinois-users-in-privacy-settlement

[2] https://www.theverge.com/2018/4/11/17225482/facebook-shadow-profiles-zuckerberg-congress-data-privacy1

Backing Up All The Things

Having a backup of your data is important, and for me it’s taken several different forms over the years — morphing as my needs have changed, as I’ve gotten better at doing backups, and as my curiosity has compelled me.

For various reasons that will become clear, I’ve iterated through yet another backup system/strategy which I think would be useful to share.

The Backup System That Was

The most recent incarnation of my backup strategy was centered around CrashPlan and looked something like this:

Atlas is my NAS and where a bulk of the data I care about is located. It backs up its data to CrashPlan Cloud.

Andrew and Rachel are the laptops we have. I also care about that data, and they also back up to CrashPlan Cloud. Additionally, they back up to Atlas using CrashPlan’s handy peer-to-peer system.

Brother and Mom are extended family members’ laptops that just back up to CrashPlan Cloud.

Fremont is the web server (recently decommissioned); it used to back up to CrashPlan as well.

This all worked great because CrashPlan offered a (frankly) unbelievably good CrashPlan+ Family Plan deal that allowed up to ten computers and “unlimited” data — which CrashPlan took to mean somewhere around 20TB of total backups1 — for $150/year. In terms of pure data storage cost this was $0.000625/GB/month2, which is an order of magnitude less than Amazon Glacier’s cost of $0.004/GB/month3.

And then one year ago CrashPlan announced:

 

we have shifted our business strategy to focus on the enterprise and small business segments. This means that over the next 14 months we will be exiting the consumer market and you must choose another option for data backup before your subscription expires.


To allow you time to transition to a new backup solution, we’ve extended your subscription (at no cost to you) by 60 days. Your new subscription expiration date is 09/28/2018.

 

Important Things In A Backup System

3-2-1-Bang

First, a quick refresher on how to back up. Arguably the best method is the 3-2-1-bang strategy: “three total copies of your data of which two are local but on different mediums (read: devices), and at least one copy offsite.” Bang represents the inevitable scenario where you have to use your backup.

This can be as simple as backing up your computer to two external hard drives — one you keep at home and back up to weekly, and one you leave at a friend’s house and back up to monthly.

Of course, it can also be more complex.

Considerations

Replacing CrashPlan was hard because it has so many features for its price point, especially:

  • Encryption
  • Snapshots
  • Deduplication
  • Incremental backup
  • Recentness

…these would become my core requirements, in addition to also needing to understand how the backup software works (because of this I strongly prefer open-source).

I also had additional considerations I needed to keep in mind:

  • How much data I needed to backup:
    • Atlas: While I have 12TB of usable space (of which I’m using 10TB), I only had about 7TB of data to back up.
    • My Laptop: < 1 TB
    • Wife’s Laptop: < 0.250 TB
    • Extended family: <500 GB each
    • Fremont: decommissioned in 2017, but < 20 GB at the time
  • How recent I wanted the backups to be (put another way, how much time/effort was I willing to lose):
    • I was willing to lose up to one hour of data
  • What kind of disasters was I looking to mitigate:
    • Hyper localized incident (e.g. hard drive failure, stupidity, file corruption, theft, etc)
      • This could impact a single device
    • Localized incident (e.g. fire, burglary, etc)
      • This could impact all devices within a given structure ( < ~ 1000 m radius)
    • Regionalized incident (e.g. earthquake, flood, etc)
      • This could impact all devices in the region (~ 1000 km radius)
  • How much touch-time did I want to put in to maintain the system:
    •  As little as possible (< 10 hours/year)

The New Backup System

There’s no single key to the system and this is probably the way it should be. Instead, it’s a series of smaller, modular elements that work together and can be replaced as needed.

My biggest concern was cost, and the primary driver for cost was going to be where to store the backups.

Where to put the data?

I did look at off-the-shelf options, and my first consideration was just staying with CrashPlan and moving to their Small Business plan, but at $120/device/year I was looking at $360/year just to back up Atlas, Andrew, and Rachel.

Carbonite, a CrashPlan competitor (and also the company CrashPlan has partnered with to transition their home users to), has a “Safe” plan for $72/device/year, but it was a non-starter because they don’t support Linux, have a 30-day limit on file restoration, and do silly things like not automatically backing up files over 4GB and not backing up video files.

Backblaze, The Wirecutter’s Best Pick, comes in at $50/device/year for unlimited data with no weird file restrictions, but there’s some wonkiness about file permissions and time stamps, and it also only retains old file versions/deleted files for 30 days.

I decided I could live with Backblaze Backups to handle the off-site copies for the laptops, at least for now. I was back to the drawing board for Atlas though.

The most challenging part was how to create a cost-effective solution for highly-recent off-site data backup. I looked at various cloud storage options4, setting up a server at a friend’s house (high initial costs, hands-on maintenance would be challenging, not enough bandwidth), and using external hard drives (backup recentness would be too prolonged).

I was dreading how much data I had as it looked like backing up to the cloud was going to be the only viable option, even if it was expensive.

In an attempt to reduce my overall amount of data hoarding, I looked at the different kinds of data I had and noticed that only a relatively small amount changed on a regular basis — 2.20% within the last year, and 4.70% within the last three years.

The majority5 was “archive data” that I still want to have immediate (read-only) access to, but was not going to change, either because they are the digital originals (e.g. DV, RAW, PDF) or other files I keep for historic reasons — by the way, if I’ve ever worked on a project for you and you want a copy because you lost yours there’s a good chance I still have it.

Since archive data wasn’t changing, recentness would not be an issue and I could easily store external hard drives offsite. The significantly smaller amount of active data I could now backup in the cloud for a reasonable cost.

Backblaze’s B2 has the lowest overall costs for cloud storage: $0.005/GB/month with a retrieval fee of $0.01/GB6.

Assuming I’m only backing up the active data (~300GB) and I have a 20% data change rate over a year (i.e. 20% of the data will change over the year which I will also need to backup) results in roughly $21.60/year worth of costs. Combined with two external WD 8TB hard drives for rotating through off-site storage and the back-of-the-envelope calculations were now in the ballpark of just $85/year when amortized over five years.

How to put the data?

I looked at, tested, and eventually passed on several different programs:

  • borg/attic…requires server-side software
  • duplicity…does not deduplicate
  • Arq…does not have a Linux version
  • duplicacy…doesn’t support restoring files directly to a directory outside of the repository7

To be clear: these are all very good programs and in another scenario I would likely use one of them.

Also, deduplication was probably the biggest issue for me, not so much because I thought I had a lot of files that were identical (or even parts of files) — I don’t — but because I knew I was going to be re-organizing lots of files and when you move a file to a different folder the backup program (without deduplication capability) doesn’t know that it’s the same file8.

I eventually settled on Duplicati — not to be confused with duplicity or duplicacy — because it ticks all the right boxes for me:

  • open source (with a good track record and actively maintained)
  • client side (e.g. does not require a server-side software)
  • incremental
  • block-level deduplication
  • snapshots
  • deletion
  • supports B2 and local storage destinations
  • multiple retention policies
  • encryption (including ability to use asymmetric keys with GPG!)

Fortunately, OpenMediaVault (OMV) supports Duplicati through the OMVExtras plugin, so installing and managing it was very easy.

The default settings appear to be pretty good and I didn’t change anything except for:

Adding SSL encryption for the web-based interface

Duplicati uses a web-based interface9 that is only designed to be used on the local computer — it’s not designed to be run on a server with the GUI accessed remotely through a browser. Because it was only designed to be accessed from localhost, it sends passwords in the clear, which is a concern, but one that has already been filed as an issue and can be mitigated by using HTTPS.

Unfortunately, the OMV Duplicati plugin doesn’t support enabling HTTPS as one of its options.

Fortunately, I’m working on a patch to fix that: https://github.com/fergbrain/openmediavault-duplicati/tree/ssl

Somewhat frustratingly, Duplicati requires using the PKCS 12 certificate format. Thus I did have to repackage Atlas’ SSL key:

openssl pkcs12 -export -out certificate.pfx -inkey private_key.key -in server_certificate.crt -certfile CAChain.crt
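
You can sanity-check the resulting bundle before handing it to Duplicati (it will prompt for the password you set when exporting):

openssl pkcs12 -info -in certificate.pfx -noout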

Asymmetric keys

Normally Duplicati uses symmetric keys. However, when doing some testing with duplicity I was turned on to the idea of using asymmetric keys.

If you generated the GPG key on your server then you’re all set. However, if you generated them elsewhere you’ll need to move them over to the server and then import them:

gpg --import private.key
gpg --edit-key {KEY} trust quit
# enter 5<RETURN>
# enter y<RETURN>
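
For completeness, the private.key file used above can be produced on the machine where the key was generated with something like this ({KEY} being whatever ID gpg --list-secret-keys reports):

gpg --export-secret-keys --armor {KEY} > private.key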

Once you have your GPG key on the server you can then configure Duplicati to use them. This is not intuitive but has been documented:

--encryption-module=gpg
--gpg-encryption-command=--encrypt
--gpg-encryption-switches=--recipient "andrew@example.com"
--gpg-decryption-command=--decrypt
--passphrase=unused

Note: the recipient can either be an email address (e.g. andrew@example.com) or it can be a GPG Key ID (e.g. 9C7F1D46).

The last piece of the puzzle was how to manage my local backups for the laptops. I’m currently using Arq and Time Machine to make nightly backups to Atlas on a trial basis.

Final Result

The resulting setup actually ends up being very similar to what I had with CrashPlan, with the exception of adding two rotating external drives which brings me into compliance with the “3 total copies” rule — something that was lacking.

Each external hard drive will spend a year off-site (as the off-site copy) and then a year on-site, where it will serve as the “second” copy of the data (first is the “live” version, second is the on-site backup, and third is the off-site backup).

Overall, this system should be usable for at least the next five years — at least in terms of data capacity and wear/tear. Total costs should be under $285/year. However, I’m going to work on getting that down even more over the next year by looking at alternatives to the relatively high per-device cost of Backblaze Backup, which only makes sense if a device is backing up close to 1TB of data — which I’m not.

Update: Edits based on feedback

  1. “While there is no current limitation for CrashPlan Unlimited subscribers on the amount of User Data backed up to the Public Cloud, Code 42 reserves the right in the future, in its sole discretion, to set commercially reasonable data storage limits (i.e. 20 TB) on all CrashPlan+ Family accounts.” Source 

  2. my actual usage was closer to 8TB, so my actual rate was ~$0.0015/GB/month…still an amazingly good deal 

  3. which also has additional costs associated with retrieval processing that could run up to near $2000 if you actually had to restore 20TB worth of data 

  4. very expensive – on the order of $500 to $2500/year for 10 TB 

  5. 95.30% had not been modified within the last three years 

  6. however there is also a trial program where they ship you a hard drive for free…you just pay return postage. 

  7. though the more I’ve thought about it, the more I question whether this would actually be a problem 

  8. it’s basically the same operation as making a copy of a file and then deleting the original version 

  9. you can also use the CLI 

I Bought a 3D Printer

Guys! I bought a 3D printer! It hasn’t even arrived yet, but I already feel like I should have done this ages ago! I ended up going with the Wanhao Duplicator i3 v2.1. It’s an upgraded version of the v2.0, which is effectively the same model that MonoPrice rebrands and sells as the Maker Select 3D Printer v2.

All around it seems to hit the sweet spot between price and capability. For me, the big selling points are:

  • Sufficiently large build envelope: 200 mm x 200 mm x 180 mm
  • Sufficient build resolution: 0.1 mm, but can go down to 0.05 mm!
  • Multiple-material filament capabilities
  • Good community support
  • Easy to make your own improvements/repairs

I had to pay a bit of a premium since I’m in the UK, but I think it will be worth it. Printer arrives tomorrow, and I hope to have a report out soon thereafter.

You Can’t Always Get What You Want

Jeffrey Goldberg at AgileBits, who make 1Password, has a great primer on why law enforcement back doors are bad for security architecture. The entire article is worth a read and presents a solid yet easily understood technical discussion — but I think it really can be distilled down to this:

From blog.agilebits.com:

Just because something would be useful for law enforcement doesn’t mean that they should have it. There is no doubt that law enforcement would be able to catch more criminals if they weren’t bound by various rules. If they could search any place or anybody any time they wished (instead of being bound by various rules about when they can), they would clearly be able to solve and prevent more crimes. That is just one of many examples of where we deny to law enforcement tools that would obviously be useful to them.

Quite simply, non-tyrannical societies don’t give every power to law enforcement that law enforcement would find useful. Instead we make choices based on a whole complex array of factors. Obviously the value of some power is one factor that plays a role in such a decision, and so it is important to hear from law enforcement about what they would find useful. But that isn’t where the conversation ends, it is where it begins.

Whenever that conversation does take place, it is essential that all the participants understand the nature of the technology: There are some things that we simply can’t do without deeply undermining the security of the systems that we all rely on to keep us safe.
