The SHOP/4/HO/CCT/AG shop light is a 10,000 lumen light with tunable white (3000K–6500K) and a NEMA 5-15P connector that plugs into a standard 3-prong outlet.
It can be found in the US on Amazon and in retailers such as Costco for about $60 (as of the time of publishing). It’s an interesting design (in my opinion) because it’s a smart light that supports on/off, dimming, and CCT color temperature control. But at the flip of a switch you can use it just like a standard, dumb LED light.
The light module contains 6 rows of LEDs (84 LEDs/row), each row alternating between cold and warm, for a total of 504 LEDs.
Major Integrated Circuits
In addition to your usual collection of resistors, capacitors, and diodes, there are three inductors which I couldn’t figure out P/N information for. I think there’s also a thermistor or fuse of some kind (magenta circled item on top side of the PCB). I’ve highlighted the major ICs below and provided some information based on what I could find:
Function: Low-power embedded Wi-Fi and Bluetooth LE module. The chip is on a daughter board which is soldered to the main board perpendicularly through a thru-hole. There’s a switch that overrides the WB2L (and removes power from it!) and sets the light to a cool white (not cold) color.
Hardware PWM pin; P8/BT_ACTIVE/PWM2 (Pin24)
Hardware PWM pin; P7/WIFI_ACTIVE/PWM1 (Pin23)
Hardware PWM pin; P6/CLK13M/PWM0 (Pin22)
Hardware PWM pin; P26/IRDA/PWM5 (Pin15)
Hardware PWM pin; P24/LPO_CLK/PWM4 (Pin16)
Power supply pin (3.3 V)
For this light, here’s what my probing found:
PWM1: 0% duty cycle is full Cold and 100% duty cycle is full Warm
PWM2: 0% duty cycle is off and 100% duty cycle is full brightness.
PWM5: Unknown if this is an input or an output. With the stock Feit firmware:
Logic 0: when the light is on
Logic 1: when the light is off
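The PWM1/PWM2 behavior above suggests a straightforward mapping if you were to drive these pins from your own firmware. Here's a minimal sketch (my own illustration, assuming a linear mix; this is not Feit's actual firmware code):

```python
# Sketch of the observed PWM semantics (linearity is an assumption).
# PWM1 mixes color: 0% duty = full cold (6500K), 100% = full warm (3000K).
# PWM2 sets brightness: 0% duty = off, 100% = full brightness.

def cct_to_pwm1_duty(cct_kelvin):
    """Map a requested CCT (3000K-6500K) to a PWM1 duty cycle (0-100)."""
    cct = min(max(cct_kelvin, 3000), 6500)
    return round((6500 - cct) / (6500 - 3000) * 100)

def brightness_to_pwm2_duty(fraction):
    """Map a 0.0-1.0 brightness fraction to a PWM2 duty cycle (0-100)."""
    return round(min(max(fraction, 0.0), 1.0) * 100)
```

So a request for 4750K at half brightness would come out as PWM1 at 50% and PWM2 at 50%.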
Manufacturer: Bright Power Semiconductor
Function: Non-isolated step-down type AC/DC constant-voltage chip…it provides power for the LEDs that is then current controlled by the BP2929 (for color temperature) and BP5012 (for dimming).
This is an interesting chip, and I’m not quite sure why they’re using it. Normally, the microcontroller would output two PWM signals, one for Cool White and one for Warm White, each driving a MOSFET, and dimming would be handled algorithmically by the microcontroller. This chip seems to offload that functionality (sort of; this chip doesn’t actually handle dimming itself, but rather works in combination with a separate current control method — which I believe is the BP5012 in this design), but I’m not sure why…the microcontroller can do all of this easily. My one thought is that maybe this method reduces flicker because it relies on constant voltage instead of using PWM to vary the voltage.
High-voltage side power supply terminal
High-voltage side floating supply
Output GATE signal 2
Output GATE signal 1
PWM Signal Input
Low voltage power supply
Google translation of datasheet…so may not be exactly accurate
Function: Non-isolated switching regulator with dimming
The package is marked JW1606, but JW1606 doesn’t exist. I’m assuming this is related to the JW1602, which I was able to find a datasheet for.
The JW1602 has a dimming function, so I’m not sure why there’s duplicate functionality with the BP5012. The BP2525 is also a non-isolated switching regulator. So my theory is that there are two completely separate control circuits on the PCB: one to provide variable control of color temperature and intensity when the microcontroller is active, and another circuit that provides a fixed color temperature (probably just 50/50) and a fixed dimming level (set by a voltage divider).
Function: N-channel MOSFET. There are two of these directly connected to the LEDs themselves, one for each color temperature. These are probably driven by the BP5929 to control the current for each set of LEDs.
Slide the PCB out from its slot. Note: this might require scraping away some of the potting compound used to secure the PCB in the slot. Pushing gently from the end with the switch usually pops it out.
I’ve seen various parts of this documented on the internet, but I don’t think I’ve seen all the steps written down in one place, so in the interest of sharing and not banging my head next time I need to re-create my Plex VM: here’s how I was able to get my NVIDIA Quadro K620 GPU to work with my Plex VM running in Proxmox.
Here’s my setup:
Proxmox (7.2-4 No-Subscription Repository) using the Linux 5.15.35-2-pve kernel
Plex VM on Debian 10 using the Linux 4.19.0-20-amd64 kernel
NVIDIA Quadro K620 GPU. Note: I’m only using this in headless mode for Plex transcoding.
What we are looking for is the PCI address of the NVIDIA GPU device. In this case it’s 42:00.0. 42:00.0 is only one part of a group of PCI devices on the GPU. We can list all the devices in the group 42:00 by using the following command:
lspci -s 42:00
The usual output will include VGA Device and Audio Device, which is what I have:
Note: if you have an AMD CPU, then use amd_iommu=on instead of intel_iommu=on.
Save the config changes and then update GRUB.
Next we need to add the vfio modules to allow PCI passthrough. Edit the /etc/modules file and add the following lines to the end of the file (these are the standard modules from the Proxmox PCI passthrough documentation):
# Modules required for PCI passthrough
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
Update configuration changes made in your /etc filesystem
update-initramfs -u -k all
Reboot and verify
Reboot Proxmox to apply the changes
Verify that IOMMU is enabled
dmesg | grep -e DMAR -e IOMMU
There should be a line that looks like DMAR: IOMMU enabled. If there is no output, something is wrong.
Your Proxmox host should be ready for GPU passthrough!
Debian 10 VM GPU Passthrough Configuration
I’m using Debian 10 as my VM.
For best performance, the VM’s Machine type should be set to q35. This will allow the VM to utilize PCI-Express passthrough.
Add a PCI device to the VM (Add > PCI Device). Select the GPU device, which you can find using its PCI address from before (42:00, in this example). Note: this list uses a different format for the PCI address; 42:00.0 is listed as 0000:42:00.0
…and then select Add. Note: Do not select Primary GPU.
Boot the VM. To test the GPU passthrough was successful, you can use the following command in the VM:
It doesn’t matter where you save it, since you can delete it once you’re done installing the driver.
Install NVIDIA Driver
You will need to chmod +x the file so you can execute it. Then execute the file (e.g. ./NVIDIA-Linux-x86_64-515.48.07.run).
You might get a warning, which you can acknowledge. It didn’t seem to cause an issue for me.
I chose to install the 32-bit compatibility library. I’m not sure if it’s actually needed, but it didn’t seem to cause any problems.
And we’re done!
Verify Driver Installation
You should be able to verify that the system recognizes the GPU by running nvidia-smi
If you’ve not done so already, you’ll want to enable hardware acceleration for Plex under Settings > Transcoder:
Select “Save Changes”.
To verify that Plex is actually offloading the transcoding, start playing a TV or movie from your Plex library (make sure that the Quality is any setting except Original). Then you can run nvidia-smi again and you’ll see that it lists Plex Transcoder under the process name (instead of “No running processes found”).
You can also install nvtop (apt-get install nvtop) which is a nice way to view transcoding efforts over time:
Debian vs Ubuntu
There are a lot of guides and questions where Ubuntu is the OS. I’m not sure why, but none of those work for Debian when it comes to NVIDIA drivers. The NVIDIA Driver Installation Quickstart Guide (see below list of links) lists Ubuntu as supported, but not Debian. I’m not sure if that’s just because there’s no Debian package manager support, or if Debian is technically not supported by NVIDIA at all.
Removing the driver and upgrading
You can upgrade by installing another .run file on top of an existing NVIDIA driver installation (as far as I can tell). First, you’ll need to run:
…to uninstall the existing driver. You’ll need to reboot the VM and then you can install the new driver.
Somewhat related links
Other links that probably aren’t useful, but which I have stumbled on as part of my research. Sometimes these are things that don’t work (or don’t work for what I was wanting to do). Regardless, they are worth mentioning:
I randomly stumbled1 onto a FREE (!) class teaching Intro to Printed Circuit Board (PCB) design. It may shock you to know that even as an Electrical Engineer I never learned how to make PCBs and it’s always been something I’ve wanted to do.
Astute readers may remember that I tried making a PCB several years ago, but it never worked2 and I abandoned it.
I can’t rave enough about the class, it’s been a wonderful experience, I learned a lot, made some new friends, and I look forward to taking some other class offerings through TeachMePCB.com!
The goal of the class was to learn about PCB design (including designing footprints and symbols), design the PCB itself (including how to use hierarchical layouts), learn about design rule checks (DRC) and design for manufacture (DFM), have the PCB fabricated, and then assemble the PCB.
Everyone in the course made a macro keyboard of some sort. The core “requirements” were:
Raspberry Pi Pico microcontroller
10 MX-style key switches
2 Rotary Encoders with RGB LEDs
VEML7700 Ambient Light Sensor (I2C)
NLSF595 LED Driver (SPI)
10 NeoPixel lights (1-wire)
Don’t make a rectangular PCB
I made a couple of changes:
Swapped PCA9745B for NLSF595
Added ePaper display (because why not make my first project harder)
I actually jumped the gun and started laying out my board in Week 3 because I thought this was the best way to figure out what items I needed for my Bill of Materials (BOM). Creating the schematic this way was an — interesting — exercise, and it turned out I was overcomplicating that week’s assignment.
Note for example how I’ve repeated the NeoPixel LEDs ten times (grid ref: A1 to A5), as well as the pushbutton switches ten times (located just below the LEDs). As I would later learn, there’s no need to do that; just make the switch (or NeoPixel) once and then duplicate it (pointing each copy at the same schematic so updates propagate correctly). Et voilà!
Once I stopped over complicating it, schematic layout was relatively simple for this particular board.
I think the hardest part was making sure I got the eInk circuit right3. The design seems to be a prototypical DC-DC Boost Converter and was the same circuit design used by both the manufacturer (Pervasive Displays) as well as Adafruit’s eInk Breakout Friend.
After I put together my schematics, I would discover (and have corrected in the latest design files) several mistakes:
I initially had my pins reversed (Pin 1 was Pin 24, Pin 2 was Pin 23, etc.); fortunately, I caught that before I sent my PCB off to be fabricated.
I was less fortunate with the following issues, which I didn’t catch until after the PCB had been fabricated and I started assembly:
Not shown (and fixed via software):
S3001 (Bottom Rotary Encoder) Pin 01 (red LED) and Pin 04 (blue LED) are swapped, so they got mapped as BGR instead of RGB.
Flipped GPIO pin assignments on U2801:
Pin 15 should be labeled as GPIO11 and connect to J2901 Pin 09 (BUSY signal for eInk)
Pin 16 should be labeled as GPIO12 and connect to J2901 Pin 11 (Command/Data signal for eInk)
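The BGR swap on the bottom rotary encoder is the kind of mistake that can be papered over entirely in software with a channel remap before writing to the LED driver. A minimal sketch (the function name is my own illustration, not from the project code):

```python
# S3001's red and blue LED pins are swapped on the PCB, so colors written
# in RGB order come out BGR on that encoder. Swap R and B in software
# before sending the value to the LED driver.
def fix_swapped_encoder_color(rgb):
    r, g, b = rgb
    return (b, g, r)
```

Calling this on just the affected encoder's color keeps the rest of the code thinking in RGB.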
I also ended up removing the rotary encoders I had gotten from SparkFun (P/N COM-15141) and replacing them with Bourns P/N PEL12T-4225S-S1024 (which I got from Mouser, since they had them in stock) because the red LEDs weren’t really working (they were really, really dim) and the rotary function was very finicky.
Not so much an error, per se, but after I printed my paper doll (a scaled 1:1 print) I switched to a different Flat Flexible Cable (FFC) connector so I would have a little more space to hand solder.
Coming up with a design was challenging and I went through several design ideas:
Several of my earlier designs had fewer keys and rotary encoders since I wasn’t sure I really had a use for a 10-key and two rotary encoders. But I eventually got to a design I liked and then expanded it to be a 10-key with two encoders.
I also went from a guarded momentary switch to more of an e-stop button with a deflector shroud. Originally I was trying to source one, but then realized I could probably just as easily design one and have it 3D printed—so I did that.
I designed the button assembly in FreeCAD and had it printed in nylon by JLCPCB. The files are up on Thingiverse so go grab it! I originally was going to paint it, but then realized I could let the NeoPixel shine through the nylon—and I didn’t have the paint I needed.
Getting the PCB outline from my head and into KiCAD was a rather difficult process, in my opinion. I thought about using Inkscape, but I’m not super familiar with it, so I went with FreeCAD and ended up spending almost an entire day on it.
I was able to export my outline as a DXF, import it into KiCAD, and then lay everything out.
Layout was incredibly fun and it was hard to stop—there’s always something that can be tweaked. I did a two-layer board. I probably spent way too much time adding fun little things. See if you can spot them all!
Most of the parts I used had footprints (either provided by the class, from the vendor, or through Ultralibrarian). I did have to make a couple footprints, such as for the Amphenol connector:
Going from the schematic to the layout might be confusing, so I put together this graphic showing how the NeoPixel schematic and the switch schematic end up looking on the laid-out PCB:
Also potentially confusing is that most of the time I’m working without the filled-in areas shown. In reality though, all the empty space is actually filled in as part of the ground plane.
The final design looks something like this:
One of the nice things about KiCAD is that it does a pretty decent render, which I think is helpful for doing spacing checks on components.
I ordered most of my component parts through DigiKey, but some from Mouser and SparkFun as well. In total, it was about $51.87 + tax (in reality the total is a bit more because I bought some extras of certain components). I also had to make several orders because of course I forgot things.
The PCB was ordered from JLCPCB4. It’s amazing how inexpensive PCBs can be! I opted for Electroless nickel immersion gold (ENIG) plating because of aesthetics, so my board was $27.90 + S&H — and that includes 5 copies of the PCB (their minimum order). However, if I did the basic PCB manufacturing it would have been about $9.20 + S&H!
I also had JLCPCB print the e-stop button and deflector shroud since they could do it in nylon and it only cost $1/each + S&H.
Assembly was pretty straightforward: I printed out my BOM and started soldering. My approach was to solder in stages, doing the most difficult soldering first, and then go by sub-system (starting with the NeoPixels) so I could do integration testing as I progressed.
While this was my first PCB design, this was not my first PCB assembly, and I feel pretty comfortable with a soldering iron. The passive components are Surface Mount Device (SMD) size 2012 (0805 imperial), which means they are 2.0 mm (not cm) long and 1.2 mm wide. That is very tiny. Ideally I would use solder paste and a reflow oven, but I don’t have one…yet.
I had a flux pen, but I found getting some liquid flux was extremely helpful — especially with the PCA9745B IC and FPC connector, which have a lot of fine pitch legs. I spent a lot of time trying to get the FPC connector soldered without the liquid flux and it was a nightmare.
I also ended up getting a stereo inspection microscope (which has been on my wishlist for ages) and that was helpful in inspecting some of my solder joints for any bridging. Having a good and bright light source is key though.
Board Bring Up
Bringing up the board was pretty easy for the most part.
This was my first foray into CircuitPython and had I RTFM, I would have known that while “some of the CircuitPython compatible boards come with CircuitPython installed. Others [such as the Pico] are CircuitPython-ready, but need to have it installed.”
The two most challenging things were the eInk display and the PCA9745B LED driver (which I used for the RGB LEDs on the rotary encoders). The biggest issue: there was no CircuitPython module for either of those.
I ended up writing a module for both, which you can find on GitHub for now and hopefully PyPI in the future:
The PCA9745B was probably the easiest and least complex. I was also able to find a micropython library for the PCA9745B written by Mirko Vogt at Sensorberg GmbH that was helpful to validate some of my assumptions with.
The Pervasive Display eInk display was more challenging and involved a fair amount of integration hell. Between flip-flopped Busy and Command/Data lines, not having a RESET line (which I’m still not convinced I need), and using the wrong resistor on the current sense line for the DC-DC booster, I was never really sure if my problems were software or hardware or both.
The eInk display was a reach goal and I had to step away from it many times because of the often head-banging frustration.
I was eventually able to find an Arduino library from Pervasive Display, and so I was able to use that to verify if my hardware was correct — it wasn’t.
I actually ended up ordering their development breakout kit (which has all the hardware needed to run the board, including—crucially—the DC-DC booster) and another Pico so I could do development with known-good hardware (best $20 spent).
Adafruit/CircuitPython (by way of micropython) has a displayio.EPaperDisplay class that I was able to extend, so I didn’t have to write much of the code from scratch.
One of the interesting things about the Pervasive Display eInk display I was using is that it’s part of their Spectra line which has a chip on glass (CoG) and internal timing circuitry instead of a separate controller (such as the IL0373 or SSD1608). In theory it’s easier to drive since you don’t have to deal with complicated look up tables (LUT), you just send it the bitmap data you want and it handles the rest.
I have an MSO-19, which is a USB oscilloscope and logic analyzer and that was incredibly helpful as well.
The code running on the keypad is CircuitPython, an open source version of Python for tiny, inexpensive computers called microcontrollers.
I may be new to CircuitPython, but not Python. So it’s a pretty natural fit for my programming.
As a learning exercise in making my own PCB this has been a resounding success for me and I can’t thank the TeachMePCB facilitators (Mark and Jesse) enough! I managed to get everything working (despite my several snafus) and I’m pretty pleased with it.
Adding the eInk was a good challenge and I’m glad I got it working.
I have lots of ideas for other things I’d like to make, so who knows what will show up next in a Show and Tell.
As a bonus, I also got to show off an almost completed version a couple weeks ago (26 January 2022) at Adafruit’s Show and Tell (I’m at the very end, start at 22m24s):
Unfortunately, the microphone is pre-amplified with what I believe to be a New Japan Radio Co. 2140 Op-Amp (the black IC in the middle on front side, pin 1 is upper left corner).
When connected to the factory head unit, the output signal of the factory microphone module was about 500 mV peak-to-peak and centered on 0.0 V.
I wasn’t able to get a great oscilloscope capture of the factory microphone output when connected to the Sony head unit, but it ended up being biased by about 3.3 V and was about 400 mV peak-to-peak.
I could have re-wired the factory microphone circuit to remove the amplification, or even made a new circuit to de-amplify it and put that inline. But I decided that was too much work and just installed an external 3.5mm microphone instead:
Note: The 2010 Prius did not include a USB adapter/plug — even for the Trim V and/or Nav package
Note: For the 2010 Prius you’ll need the one that is 22mm x 33mm (0.87in x 1.3in). This is also the same size as the 2015 RAV4.
Steering Wheel Control
I originally thought I needed the ASWC-1, but as it turns out there’s two ways Steering Wheel Control (SWC) signals could be sent: either as voltage-based1 analog or as digital signals over the CANBUS.
Fortunately for me, both my Prius and the Sony XAV-AX100 use the voltage-based analog SWC signaling. I lopped off the black connector of the Metra 70-8114 (which normally would plug into the ASWC-1) and soldered to the red/white wires of the 3.5mm connector:
Green/Orange to thin White (SW1)
Green/Black to thin Red (SW2)
Seek+ switch pushed: < 0.8 V
Seek- switch pushed: 0.9 to 1.3 V
Volume+ switch pushed: 1.65 to 1.9 V
Volume- switch pushed: 2.45 to 2.6 V
Steering pad switch not operated: 3.28 to 3.5 V

MODE switch pushed: < 0.8 V
On hook switch pushed: 0.9 to 1.3 V
Off hook switch pushed: 1.65 to 1.9 V
Voice switch pushed: 2.45 to 2.6 V
Steering pad switch not operated: 3.28 to 3.5 V
* With respect to L40-20 (GND)
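To see how a head unit (or an adapter like the ASWC-1) can use these levels, here's a sketch that decodes a measured voltage against the first table above (my own illustration; real hardware uses comparator thresholds rather than exact ranges):

```python
# Decode a measured steering-wheel-control voltage (with respect to
# L40-20 GND) using the first voltage table above. Voltages that fall
# between the published ranges return None (indeterminate).
SWC_RANGES = [
    ("Seek-", 0.9, 1.3),
    ("Volume+", 1.65, 1.9),
    ("Volume-", 2.45, 2.6),
    ("Not operated", 3.28, 3.5),
]

def decode_swc(volts):
    if volts < 0.8:
        return "Seek+"
    for name, low, high in SWC_RANGES:
        if low <= volts <= high:
            return name
    return None
```

The same structure works for the MODE/hook/voice table by swapping the labels.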
On the XAV-AX100, there’s an option to program the SWC buttons, so I did that and everything works as expected.
The Prius has a microphone (at least mine does) and I wanted to keep that microphone instead of adding a new one.
The microphone in the Prius requires 5V for its built-in amp, which it looks like is always powered when the car is on.
DC/DC Converter 5V to Prius L37-17 (MACC, Telephone microphone assembly power supply, 5V)
Sony XAV-AX100 Black to Prius L37-18(SGND, Shield ground)
Sony XAV-AX100 Mic Tip to Prius L37-19(MIN+, Microphone voice signal)
Sony XAV-AX100 Mic Sleeve to Prius L37-20(MIN-, Microphone voice signal)
Note: I think this works…though I’m having some call quality issues. Not sure if it’s related to this, CarPlay, or something else.
Rear Camera Hookup Options
Assumes factory backup camera, remember to plug the Yellow RCA cable into the Radio as well.
You can buy the L42 connector with a pigtail from https://autoharnesshouse.com/49914.html, or just do what I did and stick a wire in the female connector and tape it.
Normal (only on when in reverse)
AX-CAM6 Blue/White(Reverse trigger) to Sony XAV-AX100 Purple/white (Reverse In) and to Prius L42-5 (Reverse Signal)
AX-CAM6 Black(Ground) to Sony XAV-AX100 Black (Ground)
AX-CAM6 Blue/Red (Camera power, 6V) to Prius L37-24(CA+, Television camera power supply, 5.5 to 7V)
AX-CAM6 Blue/White(Reverse trigger) to Sony XAV-AX100 Red (Accessory Power, 12V)
AX-CAM6 Black(Ground) to Sony XAV-AX100 Black (Ground)
AX-CAM6 Blue/Red (Camera power, 6V) to Prius L37-24(CA+, Television camera power supply, 5.5 to 7V)
Prius L42-5 (Reverse Signal) to Sony XAV-AX100 Purple/white (Reverse In)
The color and texture of the dash kit definitely does not match, but I’m not sure there is one that does.
It’s been a while since I’ve replaced a factory radio, and this one took me some time to figure out the SWC, microphone, and backup camera. I was used to the Old Days™ where the radio comes with a pigtail connector, you buy a pigtail connector for your vehicle, solder the two together and that’s it.
I installed the Dual port USB adapter, but only the USB connector for the audio is currently hooked up. I still need to hook up the second USB for charging. My current plan is to run the wire to the 12V cigarette adapter in the center console.
There’s an adjustment for the volume on the Metra TYTO-01, I think I set mine too low because A) I have the radio cranked up pretty high when driving (the Prius is notorious for road noise), and B) when I’m playing Spotify through CarPlay it sounds like the audio is clipping. So I’ll need to take the dash apart again and adjust that.
While I did remember to remove all the music CD’s from the factory radio before I uninstalled it, I forgot to clear the Oil Maintenance reminder message (which is set and controlled through the factory radio)…so I’ll probably need to hook it back up to clear it *facepalm*
As we’ve been doing more video and audio conferencing lately, I’ve been experimenting with temporally hyper-near servers to see if it results in a better experience. TL;DR…not really for most purposes.
Temporally hyper-near servers differ from geographically near servers in that it doesn’t matter how close the server is physically in miles, just the packet transit time in milliseconds…basically low latency.
AWS calls these Local Zones and they’re designed so that “you can easily run latency-sensitive portions of applications local to end-users and resources in a specific geography, delivering single-digit millisecond latency for use cases such as media & entertainment content creation, real-time gaming…”, but they only have them in the Los Angeles region for now.
Azure calls them Edge Zones, but they aren’t available yet.
Google doesn’t have a specific offering, but instead provides a list of facilities within each region you can choose from, though none of them are near Seattle.
I went back to my notes from when I was looking at deploying some servers that I knew would generally only be accessed from the Seattle area, and I found that Vultr could be a good solution1.
With Vultr (in Seattle), I’m getting an average round-trip time (RTT) of 3.221 ms (stddev 0.244 ms)2
Compare to AWS (US West 2), which was an average RTT of 10.820 ms (stddev 0.815 ms)3
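For reference, the mean and standard deviation quoted here can be reproduced from raw ping samples with the Python standard library (the sample values in the example are made up for illustration):

```python
import statistics

def rtt_summary(samples_ms):
    """Return (mean, sample standard deviation) of ping RTTs in ms."""
    return statistics.mean(samples_ms), statistics.stdev(samples_ms)

# Example with made-up samples from a hypothetical ping run:
mean_ms, stddev_ms = rtt_summary([3.1, 3.0, 3.4, 3.2, 3.5])
```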
After doing some traceroutes and poking around various peering databases, I think that Vultr is based at the Cyxtera SEA2 datacenter in Seattle and shares interconnections with CenturyLink, Comcast, and AT&T (among others).
I set up a Jitsi server, but didn’t notice anything perceptibly different between using my server and a standard Jitsi public server (the nearest of which is on an AWS US West 2 instance).
However, for Jamulus (which is software that enables musicians to perform real-time jam sessions over the internet) there does appear to be a huge difference, and I’ve received several emails about the setup I have, so here goes:
Jamulus on Vultr
Deploy a new server on Vultr4; here’s the configuration I used:
–serverinfo: update with your location as [name];[city];[[country as QLocale ID]];5
–welcomemessage: if you want one
#### Change this to set genre, location and other parameters.
#### See https://github.com/corrados/jamulus/wiki/Command-Line-Options ####
ExecStart=/usr/local/bin/Jamulus --server --nogui --recording /var/jamulus/recording/ --servername $(uname -n) --centralserver jamulusallgenres.fischvolk.de:22224 --serverinfo "NW WA;Seattle, WA;225" -g --welcomemessage "This is an experimental service and support is not guaranteed. Please contact firstname.lastname@example.org with questions" --licence
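The --serverinfo string is easy to get wrong since it's positional and semicolon-delimited. As a sketch, the three fields just get joined in order (the helper name is mine; the example value is the one used in the unit file above):

```python
# Build the Jamulus --serverinfo value in the documented
# [name];[city];[country as QLocale ID] format.
def serverinfo_arg(name, city, qlocale_country_id):
    return f"{name};{city};{qlocale_country_id}"
```

For example, serverinfo_arg("NW WA", "Seattle, WA", 225) produces the "NW WA;Seattle, WA;225" value shown in the ExecStart line.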
Give the unit file the correct permissions:
chmod 644 /etc/systemd/system/jamulus.service
Start and verify Jamulus:
systemctl start jamulus
systemctl status jamulus
You should get something like:
● jamulus.service - Jamulus-Server
Loaded: loaded (/etc/systemd/system/jamulus.service; disabled; vendor preset: enabled)
Active: active (running) since Wed 2020-07-08 10:57:09 PDT; 4s ago
Main PID: 14220 (Jamulus)
Tasks: 3 (limit: 1149)
└─14220 /usr/local/bin/Jamulus --server --nogui --recording /var/jamulus/recording/ --servername -n) --centralserver jamulusallgenres.fischvolk.de:22224 --serverinfo N
Jul 08 10:57:09 myserver.example.com jamulus: - central server: jamulusallgenres.fischvolk.de:22224
Jul 08 10:57:09 myserver.example.com jamulus: - server info: NW WA;Seattle, WA;225
Jul 08 10:57:09 myserver.example.com jamulus: - ping servers in slave server list
Jul 08 10:57:09 myserver.example.com jamulus: - welcome message: This is an experimental service and support is not guaranteed. Please contact email@example.com with questions
Jul 08 10:57:09 myserver.example.com jamulus: - licence required
Jul 08 10:57:09 myserver.example.com jamulus: *** Jamulus, Version 3.5.8git
Jul 08 10:57:09 myserver.example.com jamulus: *** Internet Jam Session Software
Jul 08 10:57:09 myserver.example.com jamulus: *** Released under the GNU General Public License (GPL)
Jul 08 10:57:09 myserver.example.com jamulus: Server Registration Status update: Registration requested
Jul 08 10:57:09 myserver.example.com jamulus: Server Registration Status update: Registered
And that’s it! Enjoy the server and let me know how it goes!
9 July 2020 Update:
If you update the jamulus.service unit file, reload systemd and then restart the service:
systemctl daemon-reload
service jamulus restart
Also, thanks to Brian Pratt for testing, feedback, catching a couple typos, and suggesting the --fastupdate command line option paired with Vultr’s High Frequency Compute (instead of regular Compute) for even better performance.
Pikler Ladders are expensive. Building one seemed like a good idea. There’s many different designs out there, but none that I was terribly thrilled with. So I designed my own. Then I roped my friend Charlie into helping me build one (spoiler alert: other friends wanted one too…so we made four).
Safe to use
As low-cost as practical
Easy to store when not in use
Varied angles of use
Easy to make
The original concept was this folding design that had two climbing positions, but could also be folded up. I originally was going to use ¾” diameter dowels, but wood is a rather vexing material in terms of strength — it’s what’s called an anisotropic material, which means that it has different material properties in different directions. This is in addition to the varied strength tree-to-tree. I wasn’t confident that ¾” diameter dowels would be sufficient (“safe to use” requirement) and so I upped it to 1″ during Version 3 of the design. However, this design was ultimately scrapped because the board along the bottom side was unnecessary, a bit unwieldy, and wouldn’t fulfill the “as low-cost as practical” requirement.
This used a removable bar that could be moved up or down a rung to vary the angle. I think this is actually version 2.5, which introduced the “scalloped” edges on the one side to allow the ladder to fold together all the way.
This design was ultimately scrapped because using 1″ diameter dowels didn’t leave sufficient edge margin (“safe to use” requirement) without going to a 1″x6″ board (which would have increased the cost — “as low-cost as practical” requirement). I experimented with offsetting the rungs, but decided that would make it harder to manufacture (“easy to make” requirement). Also cutting all the “scallops” would have been time consuming (also “easy to make” requirement).
This is the design we ended up making (see the build notes for deviations and such) and the one I made the drawings for that you can download. I originally discounted this option because there’s no good way to get a 10″x20″x¾” piece of wood without buying an unnecessarily large sheet (“as low-cost as practical” requirement), but building several ladders at once helped make this more cost effective. This design also uses 1″ dowels.
This was designed such that you should be able to buy everything at your local major hardware store (and probably most local stores as well). Poplar is recommended as a good compromise of quality, strength, and cost.
This was not sponsored by Lowe’s, but I did end up buying everything from there because they had Poplar dowels and Home Depot did not.
Charlie and I built a total of four of these at first go and it took roughly 15 hours over five (I think) build sessions. So factor in setup and tear-down time as well.
You probably don’t need to secure the rungs with screws (though you will still need them to secure the Plate to the Long and Short Leg Assemblies). We ended up only using screws for the first of the four we built (the rest just used wood glue). If you decide to use screws, it might be a good idea to use a shorter length for those that don’t go through the Plates — it’s a bit harrowing making sure the screws are sufficiently aligned so they don’t split out the dowels.
With the cabinet screws we used, you don’t have to drill a pilot hole for the dowels.
We broke the sharp edges on the boards using 120 grit sandpaper.
We sanded the dowels with 220 grit sandpaper to help give a good finish for little hands.
We put a small chamfer on the dowels to help them seat properly during assembly.
We used an edge-glued spruce board for the Plate; in retrospect, we should have used a plywood with a veneer.
The Plate Assembly is a somewhat complex design to manually make. Because I needed to make eight of them, I did some math and made a jig of sorts. However, I also designed a paper template that you can just adhere to your plywood.
The Storage Position hole is waaaay too close to the edge and will blow out. I’ve left it in the design because I like the idea of being able to keep the bolt with the ladder when it’s folded. If you want to include it, then do what I did: preemptively blow out the hole and sand it so it looks nice-ish — otherwise don’t drill it.
If you’re building lots of these, maybe call ahead to make sure they have enough dowels. I ended up buying every single 1″x48″ dowel that Lowe’s had on the shelf.
Yesterday was my last day on Facebook. Today I deleted my account.
I may write more later, but fundamentally I don’t trust Facebook with my data or their motives.
I have similar concerns with Google as well and I don’t use GMail (I use FastMail), I don’t use Google Search (I use DuckDuckGo), and I don’t use an Android device (I use an iPhone).
Facebook (similar to Google) has repeatedly demonstrated they want to ingest all possible information they can about me, my family, my friends, my coworkers, and my acquaintances…damn the consequences.
They do this in overt and obvious ways, such as on the Facebook site itself when I provide them information, as well as offsite via the use of embedded “Like” buttons across the web. I used Firefox’s ‘Facebook Container’ and EFF’s ‘Privacy Badger’ plugins in an attempt to segregate Facebook from the rest of my online digital presence.
Facebook also does this in more covert ways, such as creating social graphs to see how people are related and interact with each other, scanning photos to identify people (even people who aren’t users of Facebook), and even creating ‘shadow profiles’ for people who don’t have accounts.
Facebook desires to be at the intersection of every kind of interaction they can be — social groups, personal communication, advertisement, sales, currency, etc — and to profit off it…to profit off of me.
This is a dangerous desire, in my opinion, and one I do not want to be involved in or exploited to achieve.
I also don’t like what Facebook does to my brain in terms of the intermittent reinforcement (similar to what happens at casinos) with new posts and updates from friends as well as the comparing (and glamorizing) of idealized existences.
I also hate the polarization that occurs with Facebook, and is in part driven by Facebook. Through their algorithms, Facebook encourages echo chambers and the spread of (dis)information thereof.
This is incredibly scary in our current social and geopolitical climate…we seem to have lost the ability to have rational debate…something that is very urgently needed.
But know that I’d still love to keep in touch, so please call, text, email, or visit the blogs: andrewferguson.net (more tech and politics) and andrewandrachel.com (more life events and pictures…you’ll need to create a login because it’s private; you can also get email updates if you want, too!)
Can anyone explain why, when my internet download speed is testing around 20mbps, if I go to download a file, the actual speed result is more like 1-2mbps?
There’s a couple things going on, but first a primer on the internet:
For our purposes, think of the internet as several independent networks that are joined together through interconnection points. For our sake, let’s assume that each independent network is physically restricted to a city; so there’s a Seattle Network and a Denver Network and a Minneapolis Network, etc.
Also, each network is only connected to its closest *major* city. So, Seattle and Denver don’t actually connect to each other but instead both connect to the Salt Lake City Network…each of these connections is called a hop, and it takes two hops to get from the core of the Seattle Network to the core of the Denver Network (Hop 1: Core Seattle -> Core SLC; Hop 2: Core SLC -> Core Denver).
There are also other ways the Seattle Network could connect to the Denver Network…it could go down the west coast and then back up, but that would take more hops (through Portland, San Fran, LA, etc). Each hop takes time so there’s benefit to keeping the number of hops as low as possible. Also, the connections between any two cities are not infinitely big, but some are bigger than others.
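The hub-and-spoke picture above can be sketched as a tiny graph. Here’s a minimal Python example (the cities and links are just the hypothetical ones from this post, not real internet topology) that counts the fewest hops between two city cores using a breadth-first search:

```python
from collections import deque

# Hypothetical network-of-networks from the example above:
# each city core only links to its nearest major city.
links = {
    "Seattle": ["Portland", "Salt Lake City"],
    "Denver": ["Salt Lake City"],
    "Salt Lake City": ["Seattle", "Denver"],
    "Portland": ["Seattle", "San Francisco"],
    "San Francisco": ["Portland", "Los Angeles"],
    "Los Angeles": ["San Francisco"],
}

def hops(src, dst):
    """Fewest hops between two city cores (breadth-first search)."""
    seen = {src}
    queue = deque([(src, 0)])
    while queue:
        city, n = queue.popleft()
        if city == dst:
            return n
        for nxt in links[city]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, n + 1))
    return None

print(hops("Seattle", "Denver"))  # 2: Seattle -> SLC -> Denver
```

Note that the west-coast detour (Seattle -> Portland -> San Francisco -> …) exists in the graph too, but the search naturally prefers the two-hop route through Salt Lake City.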
Web servers are located throughout the world, but generally congregate near large cities since they offer the best chance of serving the most people with the fewest hops. If a web site has customers in many different cities they will probably have web servers in each of those cities to try to reduce the number of hops each visitor has to make to get to their server.
As a general internet user you and I are on the outer fringes of one of these networks. If I want to connect to a server in a different city, I first must get to the core of my network before I can transit across other city Networks to get to my destination. This could take several hops just within my city to get to the core of Seattle and then several more hops to get to a different city if it’s not physically located nearby and potentially even more hops if the server I want isn’t near a large city.
Okay, that’s the primer, and hopefully it makes sense. To answer your specific question:
When you do a speed test, you are generally checking it against a server that’s run by your own ISP. If you look at Speedtest, you have an option to pick a server and you can see that there are servers run by Frontier, AT&T, CenturyLink, Comcast, Sprint, and a whole bunch of other internet service providers. What you are testing is the connection between you and a point *near* the core of your city network. It probably takes about 6-10 hops. This is also the part of the network that is generally the most underutilized (which is also why ISPs oversubscribe and you get the dreaded 7pm slowdown when everyone is binging Netflix). This is rarely representative of real-world situations.
When you go to download your file, it’s probably hosted across the country and has to make 20+ hops. Any one of those hops may be subject to limits for all sorts of reasons that ultimately result in a slower download speed.
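To put some numbers on what that difference feels like, here’s a quick back-of-the-envelope calculation (the 100 MB file size is a made-up example) comparing the advertised speed-test rate with the observed download rate:

```python
def download_seconds(file_mb, speed_mbps):
    """Time to fetch a file: megabytes -> megabits (x8), divided by link speed in Mbps."""
    return file_mb * 8 / speed_mbps

file_mb = 100  # hypothetical 100 MB file
print(f"at 20 Mbps: {download_seconds(file_mb, 20):.0f} s")  # 40 s
print(f"at  2 Mbps: {download_seconds(file_mb, 2):.0f} s")   # 400 s
```

Same file, ten times the wait — which is why a speed test that never leaves your ISP’s network can feel so disconnected from real downloads.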
If you want a true test of your download speeds, you need to check it using a site that better represents a real-world situation. I’d suggest trying http://speedtest.fremont.linode.com/ and see how that compares.
If you’re interested I can go waaaay more in depth too and we can even look at exactly what routes your data is taking (it’s actually really fascinating) and maybe even figure out where the bottleneck is happening (though it will be tough, but not impossible, to do anything about it).
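As a taste of that route-level digging: tools like traceroute report a round-trip time for each hop, and a big jump between consecutive hops is often a hint at where the slowdown lives. Here’s a minimal sketch (the hop names and timings below are fabricated sample data, not a real trace) that flags the largest per-hop latency increase:

```python
# Fabricated (hop, round-trip ms) pairs, like you'd read off a traceroute.
trace = [
    ("home router", 1.0),
    ("ISP edge", 9.0),
    ("city core", 14.0),
    ("interconnect", 18.0),
    ("distant city core", 95.0),  # big jump: likely a congested link
    ("web server", 98.0),
]

# Latency increase between each pair of consecutive hops.
jumps = [(trace[i][0], trace[i][1] - trace[i - 1][1]) for i in range(1, len(trace))]
worst = max(jumps, key=lambda j: j[1])
print(f"largest jump arriving at: {worst[0]} (+{worst[1]:.0f} ms)")
```

Real traces are noisier than this (routers deprioritize the probe replies, so a single slow hop isn’t proof of congestion), but it’s a decent first pass at spotting where things bog down.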