UKNOF held a Virtual UKNOF for September. Here are my notes on two of the presentations I found interesting.
VyOS is something I’ve nearly used before. I recall the announcement a few years ago that some smart people were forking Vyatta into something properly free and open - it’s good to see they’re still going. I’m really not a fan of the IOS-style shell-based configuration Vyatta had, which I assume VyOS has carried forward. en, conf t, and all that. I actually pushed to move away from Quagga on the basis that BIRD’s flat config file was easier to work with - at least for our limited requirements. NIC.cz’s involvement with BIRD also gave me a measure of confidence.
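To illustrate what I mean by a flat config file: a minimal BGP setup in BIRD (roughly 1.x syntax; the router ID, AS numbers and neighbour address below are placeholders) is just a few declarative blocks in one file, with no interactive shell in sight:

```
# bird.conf - minimal kernel + BGP setup (all addresses/ASNs are placeholders)
router id 192.0.2.1;

protocol kernel {
  export all;    # push routes BIRD learns into the kernel routing table
}

protocol device {
}

protocol bgp upstream {
  local as 64512;
  neighbor 192.0.2.254 as 64511;
  import all;
  export where source = RTS_STATIC;   # only announce our static routes
}
```

The whole thing lives in version control and diffs cleanly, which was most of the appeal for our limited requirements.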
I had no clue Quagga had been forked. If I were forced to stop using BIRD, I’d still pick Quagga over FRR on the grounds of its strong track record for stability. Routing is one area where I’m not hurting for new features.
I really only know about Salt from my cursory investigation into configuration automation/deployment a few years ago - when I passed it over in favour of Ansible, because Salt requires an agent running on the target.
Side note: here’s a tip, SaltStack - Ansible has a very prominent “How Ansible Works” link on its homepage that takes me to a quickstart guide covering the key points of an Ansible setup and what it requires. Your ‘Learn How SaltStack Works’ button takes me to a marketing page that tells me nothing, but has a ‘Try it now’ button. Except that button takes me to a form to request access to a hosted product. I still don’t know how Salt works, and I really don’t care enough to find out.
Annika Wickert and Matthias Kesler gave a presentation on a non-profit videoconferencing solution. The bulk of the talk wasn’t of any particular note, as a lot of the challenges they faced are ones I’ve seen at Pebbletree. Case in point: browsers preferring the first TURN server you provide. Two things they mentioned stuck out:
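For context on that TURN quirk: the server list is just an ordered array handed to the browser’s RTCPeerConnection, so list order quietly becomes your load-balancing policy. A sketch of what that looks like (hostnames and credentials here are made up):

```javascript
// ICE server list as passed to `new RTCPeerConnection(rtcConfig)` in the
// browser. Several browsers tend to favour the first TURN server that
// answers, so the order of this array matters more than you'd expect -
// put turn1 first and it soaks up most of the relayed traffic.
const rtcConfig = {
  iceServers: [
    { urls: "turn:turn1.example.org:3478", username: "user", credential: "secret" },
    { urls: "turn:turn2.example.org:3478", username: "user", credential: "secret" },
  ],
};
```

If you want traffic spread across relays, you end up shuffling or rotating this list server-side before it reaches the client, rather than trusting the browser to balance for you.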
The first was that they ended up running a mixed-“cloud” network - that is to say, virtual private servers from multiple different providers. Now that was interesting, particularly given how they ended up deploying an overlay network. My experience dealing with colocated servers (behind our own BGP ranges, paying for independent transit) led me to the same conclusion.
Whereas my experience of overlay networks is with (primarily) Calico and (a little bit of) Cilium, ffmuc instead went with Nebula. I’ve never touched Nebula, and our requirements (in relation to Kubernetes) mean I likely won’t. But I do wonder why Slack found the various existing CNIs available for Kubernetes lacking (given most do not require Kubernetes).
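As noted, I haven’t used Nebula myself, but for reference each host is driven by a single YAML file along the lines of the upstream examples - a certificate identity plus one or more “lighthouse” discovery nodes, which maps neatly onto the mixed-cloud case where hosts sit behind different providers. A sketch (paths, names and addresses are placeholders):

```yaml
# /etc/nebula/config.yml - minimal Nebula host (placeholder values throughout)
pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/host.crt
  key: /etc/nebula/host.key

# Map overlay IPs of the lighthouse nodes to their real-world addresses
static_host_map:
  "192.168.100.1": ["lighthouse.example.org:4242"]

lighthouse:
  am_lighthouse: false
  hosts:
    - "192.168.100.1"

listen:
  host: 0.0.0.0
  port: 4242
```

No Kubernetes anywhere in that picture, which presumably is part of the point for a fleet of VPSes from several providers.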