---
title: First attempt on Debian
author: reynir
---
# Running Mirage unikernels on Debian
For a couple of years I have had a cheap-ish dedicated server with Online.net.
I started renting it with the intention of running Mirage unikernels on it.
Specifically, I chose them for their IPv6 support with a /56 subnet.
At the time, though, IPv6 in Mirage wasn't so well-tested, and I ran into issues (which may well have been due to my lack of experience with IPv6, peculiarities of the network setup, or something else).
Recently, I gave it another good try.
Below I document what I attempted and what issues I came across.
It probably shows that I'm not strong with (Linux) networking.
## Bridging eth0
My first thought was that the VMs should be directly on the internet, so bridging with the internet-connected eth0 seemed natural to me.
This was not the way to go for me, since I only get one IPv4 address from my ISP.
I could spend money to get more, but I didn't feel like paying the price they charge.
Another potential issue is VMs sending DHCP messages on the LAN; my ISP doesn't like that and will shut down the Ethernet port.
In the end I did configure the bridge! Maybe I can use it in the future for unikernels with IPv6 addresses.
Another problem I had was that I kept forgetting that you configure the network on the *bridge* and not on the *enslaved interface*.
I had to spend a lot of time in the clunky web-based serial console, much to my frustration.
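To illustrate the point about the bridge and the enslaved interface, a minimal systemd-networkd sketch of such a setup could look like the following. The file names, the bridge name `br0`, and the addresses are placeholders, not my actual configuration.

```ini
# /etc/systemd/network/10-br0.netdev: create the bridge device
[NetDev]
Name=br0
Kind=bridge

# /etc/systemd/network/11-eth0.network: enslave eth0 (note: no address here)
[Match]
Name=eth0

[Network]
Bridge=br0

# /etc/systemd/network/12-br0.network: the address and gateway go on the bridge
[Match]
Name=br0

[Network]
Address=192.0.2.10/24
Gateway=192.0.2.1
```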
## Bridging the VPN
I already run a VPN which the host machine is a part of.
My next idea was to bridge with the VPN.
Then I could at least run unikernels that only I use.
The VPN I run uses a tun device, which operates at layer 3 (IP packets).
The way I want to run unikernels uses tap devices, which operate at layer 2 (Ethernet frames).
A tun device can't be added to a bridge either, since bridges also work at layer 2.
It is possible to run my VPN at layer 2, but that would require reconfiguring it on all the hosts in the VPN, and I'm not sure I want it that way anyway.
## Private host-local virtual network
Finally, I decided to just create a host-local virtual network for the VMs and worry about NAT and forwarding later.
At some point prior I had switched to using systemd-networkd.
I created a `.netdev` file for creating a bridge.
Then I made a `.network` file with a `[Match]` clause matching on names starting with `mir1` that adds the interface to the bridge.
Finally, I configured an address on the bridge so we can communicate with the VMs from the host.
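Roughly, the three pieces could look something like this; the bridge name `mirbr0` and the subnet are placeholders for illustration.

```ini
# /etc/systemd/network/20-mirbr0.netdev: host-local bridge for the unikernels
[NetDev]
Name=mirbr0
Kind=bridge

# /etc/systemd/network/21-mir-taps.network: enslave tap devices named mir1*
[Match]
Name=mir1*

[Network]
Bridge=mirbr0

# /etc/systemd/network/22-mirbr0.network: address on the bridge itself
[Match]
Name=mirbr0

[Network]
Address=10.0.42.1/24
```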
This seemed to work, but the unikernel I was testing with, [Canopy](https://github.com/Engil/Canopy), fetches data from an online git repository.
Since I hadn't done anything to give it access to the internet, it of course failed and crashed.
### Getting Canopy on the internet
I found that the options `IPForward=true` and `IPMasquerade=true` on the bridge did the trick.
One thing to note is that both `IPForward=true` and `IPForward=ipv4` globally set the sysctl option `net.ipv4.ip_forward` and never unset it.
The systemd documentation says to use a firewall to work around this.
`IPForward` tells the kernel to route packets across interfaces.
`IPMasquerade` basically enables NAT, so packets from the VM to the internet look like they come from the host.
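Put together, the bridge's `.network` file could look something like this, continuing the placeholder names and addresses from the sketch above.

```ini
# /etc/systemd/network/22-mirbr0.network: forward and masquerade traffic from the VMs
[Match]
Name=mirbr0

[Network]
Address=10.0.42.1/24
IPForward=true
IPMasquerade=true
```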
### Using DHCP
So far I had been using statically configured addresses and routes.
I got it working to some degree.
Canopy seems to work fine with the ISP's DNS servers, but less so with anycast.censurfridns.dk or Google's 8.8.8.8.
You can read more about it in my [next post](Using%20DHCP).