Jamulus and Temporally Hyper-Near Servers

Temporally Hyper-Near Servers

As we’ve been doing more video and audio conferencing lately, I’ve been experimenting with temporally hyper-near servers to see if it results in a better experience. TL;DR…not really for most purposes.

Temporally hyper-near servers differ from geographically near servers in that it doesn’t matter how physically close the server is in miles, just the packet transit time in milliseconds…basically, low latency.

AWS calls these Local Zones and they’re designed so that “you can easily run latency-sensitive portions of applications local to end-users and resources in a specific geography, delivering single-digit millisecond latency for use cases such as media & entertainment content creation, real-time gaming…”, but they only have them in the Los Angeles region for now.

Azure calls them Edge Zones, but they aren’t available yet.

Google doesn’t have a specific offering, but instead provides a list of facilities within each region you can choose from, though none of them are near Seattle.

I went back to my notes from when I was looking at deploying some servers that I knew would generally only be accessed from the Seattle area, and found that Vultr could be a good solution1.

With Vultr (in Seattle), I’m getting an average round-trip time (RTT) of 3.221 ms (stddev 0.244 ms)2

Compare that to AWS (US West 2), which had an average RTT of 10.820 ms (stddev 0.815 ms)3
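If you want to reproduce these numbers yourself, you can pipe ping’s summary line through a small awk helper. This is just a convenience sketch; the `rtt_stats` name and the `myserver` host are placeholders of mine, not anything from Vultr or AWS:

```shell
# Hypothetical helper: pull avg and stddev out of ping's summary line.
# Handles both the BSD/macOS format ("round-trip min/avg/max/stddev = ...")
# and the Linux format ("rtt min/avg/max/mdev = ...").
rtt_stats() {
  awk -F' = ' '/min\/avg\/max/ {
    split($2, f, "/")        # f[2] = avg, f[4] = stddev (with trailing " ms")
    sub(/ .*/, "", f[4])     # strip the " ms" suffix
    printf "avg=%s stddev=%s\n", f[2], f[4]
  }'
}

# Usage (myserver is a placeholder for your instance):
#   ping -c10 -W50 myserver | rtt_stats
#   → e.g. "avg=3.221 stddev=0.244"
```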

After doing some traceroutes and poking around various peering databases, I think that Vultr is based at the Cyxtera SEA2 datacenter in Seattle and shares interconnections with CenturyLink, Comcast, and AT&T (among others).

I set up a Jitsi server, but didn’t notice anything perceptibly different between using my server and a standard public Jitsi server (the nearest of which runs on an AWS US West 2 instance).

However, for Jamulus (software that enables musicians to perform real-time jam sessions over the internet), there does appear to be a huge difference, and I’ve received several emails about my setup, so here goes:

Jamulus on Vultr

Deploy a new server on Vultr4; here’s the configuration I used:

  • Choose Server: Cloud Compute (see update at the end for High Frequency Compute)
  • Server Location: Seattle
  • Server Type: Debian 10 x64
  • Server Size: $5/mo
    • 25 GB SSD
    • 1 CPU
    • 1024 MB Memory
    • 1000GB Bandwidth
  • SSH Keys: as desired (and beyond the scope of this)
  • Firewall Group: No Firewall (we’ll use UFW on the host for this)
  • Server Hostname & Label: as desired…we’ll call it myserver for the sake of this post

Once you deploy the server, it will take a few minutes to be ready. Once it is, SSH to it:

ssh root@myserver

Update the linux distribution:

apt-get update
apt-get -y dist-upgrade

Install and configure the UFW firewall:

apt-get install ufw
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw allow 22124/udp
ufw enable

DigitalOcean has a good tutorial on how to set up UFW as well.

You’re now ready to install Jamulus!

The Jamulus wiki has a pretty decent set of instructions (which have only gotten better in the last few months) on how to download, compile, and run a headless Jamulus instance: https://github.com/corrados/jamulus/wiki/Server---Linux

Here’s the TL;DR (which assumes you are working as root):

Install dependencies:

apt-get -y install git build-essential qtdeclarative5-dev qt5-default qttools5-dev-tools libjack-jackd2-dev

Download source code:

cd /tmp/
git clone https://github.com/corrados/jamulus.git


cd jamulus
qmake "CONFIG+=nosound headless" Jamulus.pro
make clean
make install
mv Jamulus /usr/local/bin/

Create a user to run Jamulus:

adduser --system --no-create-home jamulus

Create a directory to record files to:

mkdir -p /var/jamulus/recording
chown jamulus /var/jamulus/recording

Create systemd unit file:

nano /etc/systemd/system/jamulus.service

Paste the following into the file above, making any needed changes to the Jamulus command line options (see the update at the end for using --fastupdate):



#### Change this to set genre, location, and other parameters. ####
#### See https://github.com/corrados/jamulus/wiki/Command-Line-Options ####
ExecStart=/usr/local/bin/Jamulus --server --nogui --recording /var/jamulus/recording/ --servername $(uname -n) --centralserver jamulusallgenres.fischvolk.de:22224 --serverinfo "NW WA;Seattle, WA;225" -g --welcomemessage "This is an experimental service and support is not guaranteed. Please contact andrew@fergcorp.com with questions" --licence


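The fragment above only shows the ExecStart line; a complete unit file needs [Unit], [Service], and [Install] sections around it. Here’s a minimal sketch of what that might look like; the [Unit] and [Install] sections, the Restart= line, and User=jamulus are my assumptions about a typical setup, not text from the original fragment:

```ini
[Unit]
Description=Jamulus-Server
After=network.target

[Service]
Type=simple
User=jamulus
#### Change this to set genre, location, and other parameters. ####
#### See https://github.com/corrados/jamulus/wiki/Command-Line-Options ####
ExecStart=/usr/local/bin/Jamulus --server --nogui --recording /var/jamulus/recording/ --servername $(uname -n) --centralserver jamulusallgenres.fischvolk.de:22224 --serverinfo "NW WA;Seattle, WA;225" -g --welcomemessage "This is an experimental service and support is not guaranteed. Please contact andrew@fergcorp.com with questions" --licence
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

One caveat: systemd does not perform shell command substitution, so $(uname -n) is not expanded the way it would be in a shell. If the server name comes out wrong, systemd’s %H specifier is the native way to insert the hostname into a unit file. Also, an [Install] section lets you run systemctl enable jamulus so the server starts on boot; without enabling it, the unit will show as "disabled" in systemctl status, meaning it won’t auto-start after a reboot.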
Give the unit file the correct permissions:

chmod 644 /etc/systemd/system/jamulus.service

Start and verify Jamulus:

systemctl start jamulus
systemctl status jamulus

You should get something like:

● jamulus.service - Jamulus-Server
   Loaded: loaded (/etc/systemd/system/jamulus.service; disabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-07-08 10:57:09 PDT; 4s ago
 Main PID: 14220 (Jamulus)
    Tasks: 3 (limit: 1149)
   Memory: 13.5M
   CGroup: /system.slice/jamulus.service
           └─14220 /usr/local/bin/Jamulus --server --nogui --recording /var/jamulus/recording/ --servername -n) --centralserver jamulusallgenres.fischvolk.de:22224 --serverinfo N

Jul 08 10:57:09 myserver.example.com jamulus[14220]: - central server: jamulusallgenres.fischvolk.de:22224
Jul 08 10:57:09 myserver.example.com jamulus[14220]: - server info: NW WA;Seattle, WA;225
Jul 08 10:57:09 myserver.example.com jamulus[14220]: - ping servers in slave server list
Jul 08 10:57:09 myserver.example.com jamulus[14220]: - welcome message: This is an experimental service and support is not guaranteed. Please contact andrew@fergcorp.com with questions
Jul 08 10:57:09 myserver.example.com jamulus[14220]: - licence required
Jul 08 10:57:09 myserver.example.com jamulus[14220]:  *** Jamulus, Version 3.5.8git
Jul 08 10:57:09 myserver.example.com jamulus[14220]:  *** Internet Jam Session Software
Jul 08 10:57:09 myserver.example.com jamulus[14220]:  *** Released under the GNU General Public License (GPL)
Jul 08 10:57:09 myserver.example.com jamulus[14220]: Server Registration Status update: Registration requested
Jul 08 10:57:09 myserver.example.com jamulus[14220]: Server Registration Status update: Registered

And that’s it! Enjoy the server and let me know how it goes!

9 July 2020 Update:

If you update the jamulus.service unit file, run the following afterwards:

systemctl daemon-reload
service jamulus restart

Also, thanks to Brian Pratt for testing, feedback, catching a couple of typos, and suggesting the --fastupdate command line option paired with Vultr’s High Frequency Compute (instead of regular Cloud Compute) for even better performance.

  1. Neither DigitalOcean nor Linode has a data center in Seattle 

  2. ping -c10 -W50 

  3. ping -c10 -W50 ec2.us-west-2.amazonaws.com 

  4. Get $100 free credit with that affiliate link; note: you must use credit within 30 days 

  5. USA is 225 

7 thoughts on “Jamulus and Temporally Hyper-Near Servers”

  1. Hi Andrew. Thank you for this! I was able to install a Jamulus server on Vultr with little trouble. I’d like to upgrade my Jamulus server to the latest v3.5.9 (using the Debian 10 x64 High Frequency Compute option). What steps do I take to update via SSH?

    Any help is appreciated.

    1. Add --fastupdate at the end of line 18 of the jamulus.service file. It should now look like:

      ExecStart=/usr/local/bin/Jamulus --server --nogui --recording /var/jamulus/recording/ --servername $(uname -n) --centralserver jamulusallgenres.fischvolk.de:22224 --serverinfo "NW WA;Seattle, WA;225" -g --welcomemessage "This is an experimental service and support is not guaranteed. Please contact andrew@fergcorp.com with questions" --licence --fastupdate
      1. Hi, Andrew!

        I live in Los Angeles and would like to try the aws local zone. Do you think this would be better than Vultr? I have a Vultr server running now following your directions (got a tech friend to help) and it is working wonderfully. Do you have any instructions for an aws local zone or how to get going with that? Most people out here had ISPs with terrible ping time, so if AWS local zone is faster than Vultr, I would go for it. Also, I would like to set-up a private central server so that I can have several different rooms (using different ports I believe) on the same instance so that our music teachers and students may access several private rooms.

        1. Hello Matthew:

          Vultr does have a datacenter in LA, so if you picked that one then I would suspect it would be pretty close to being on par with AWS. The easiest way to check is to test the ping times from your location to Vultr LA and AWS US-West-2-LAX-1:

          Vultr LA: ping lax-ca-us-ping.vultr.com
          Amazon LAX: ping (IP from http://ec2-reachability.amazonaws.com/)

          It’s been a while since I’ve spun up an EC2 instance on AWS, so I don’t have any instructions specific for that.

          With regard to hosting multiple rooms on a single server, you’ll want to carefully monitor your processor utilization to make sure you’re not topping out.

          If needed, I do offer technical consulting services…I’ll send you a separate email so you have my contact info.


  2. Hi – just to say that the line “mv Jamulus /usr/local/bin/” after make install on the compilation instructions is redundant. Make install puts the binary in there anyway 🙂

    Also, you may want to avoid using git clone as that will give you the “bleeding edge” sources. Better to get the latest official release really. See the wiki:


    Oh and also – we intend to move the wiki to Github Pages at some point as it’s not indexed by Google. You will need to update any links in this article at that point.

    1. Thanks for pointing those changes out. I’ve moved on from this project — it really was more about experimenting with temporally hyper-near servers, and I never really intended this to be a living document of things.

      Feel free to leave a comment here once you do migrate to GitHub Pages though! I’m always happy to make sure people have a breadcrumb somewhere.
