---
title: First attempt on Debian
author: reynir
---

# Running Mirage unikernels on Debian

For a couple of years I have had a cheap-ish dedicated server with Online.net. I started renting it with the purpose of running Mirage unikernels on it. Specifically, I chose them for their IPv6 support with a /56 subnet. At the time, though, IPv6 in Mirage wasn't so well-tested, and I ran into issues (which may very well have been due to my lack of experience with IPv6, peculiarities of the network setup, or something else). Recently, I gave it another good try. Below I document what I attempted and what issues I came across. It probably shows that I'm not strong with (Linux) networking.

## Bridging eth0

My first thought was that the VM should be out on the internet, so making a bridge to the internet-connected eth0 seemed natural to me. This was not the way to go for me, since I only get one IPv4 address from my ISP. I could pay to get more, but I didn't feel like paying the price they charge. Another potential issue is VMs sending DHCP messages on the LAN - my ISP doesn't like that and will shut down the ethernet port.

In the end I did configure the bridge! Maybe I can use it in the future for unikernels with IPv6 addresses. Another problem I had was that I often forgot that you configure the network on the *bridge* and not on the *enslaved interface*. I had to spend a lot of time in the poor web-based serial console, to my own frustration.

## Bridging the VPN

I already run a VPN which the host machine is part of. My next idea was to bridge with the VPN; then I could at least run unikernels that only I use. However, the VPN uses a tun device, which operates at layer 3 (IP packets), while the way I want to run unikernels uses tap devices, which operate at layer 2 (ethernet frames). Bridges also work at layer 2, so they don't work with tun devices either.
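To make the layer mismatch concrete, here is a quick sketch with iproute2 (device and bridge names are illustrative, and these commands need root):

```shell
# A bridge forwards ethernet frames (layer 2)
ip link add br0 type bridge

# A tap device also carries ethernet frames, so it can be enslaved to the bridge
ip tuntap add dev tap0 mode tap
ip link set tap0 master br0

# A tun device carries raw IP packets (layer 3); the kernel refuses to
# enslave it to a bridge, since there are no ethernet headers to forward on
ip tuntap add dev tun0 mode tun
ip link set tun0 master br0   # fails with an RTNETLINK error
```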
It is possible to run my VPN at layer 2, but that would require reconfiguring it on all hosts in the VPN, and I'm dubious about wanting it that way regardless.

## Private host-local virtual network

Finally, I decided to just create a host-local virtual network for the VMs and worry about NAT and forwarding later. At some point prior I had switched to using systemd-networkd. I created a `.netdev` file for creating a bridge. Then I made a `.network` file with a `[Match]` clause matching on interface names starting with `mir1`, which adds those interfaces to the bridge. Finally, I configured an address on the bridge so we can communicate with the VMs from the host.

This seemed to work, but the unikernel I was testing with, [Canopy](https://github.com/Engil/Canopy), fetches data from an online git repository. Since I hadn't done anything to give it access to the internet, it of course failed and crashed.
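For reference, the systemd-networkd units described above might look something like this (the bridge name and address are illustrative, not the exact ones I used):

```ini
# /etc/systemd/network/mirbr0.netdev - create the bridge
[NetDev]
Name=mirbr0
Kind=bridge
```

```ini
# /etc/systemd/network/mir1.network - enslave tap interfaces named mir1* to the bridge
[Match]
Name=mir1*

[Network]
Bridge=mirbr0
```

```ini
# /etc/systemd/network/mirbr0.network - give the host an address on the bridge
[Match]
Name=mirbr0

[Network]
Address=10.0.0.1/24
```

With this in place, a tap device whose name starts with `mir1` is added to the bridge automatically as soon as it appears.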