vm foo.bar is active with 32mb
add_policy bar --mem 16 <- failed :/
what is checked on add_policy <id> <new-policy>?
- for all policies above <id>: <new-policy> is a sub-policy of each of them
- for all policies below <id>: each is a sub-policy of <new-policy>
- resource usage of vms below <id> is within <new-policy> limits (number of vms, memory, network access, cpuids)
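presumably the example above trips the third check: foo.bar is active with 32mb,
which the proposed 16mb limit would not cover. a tiny sketch of that check
(plain arithmetic, names hypothetical):

  let fits ~used_mem ~policy_mem = used_mem <= policy_mem
  let () = assert (not (fits ~used_mem:32 ~policy_mem:16))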
- Vmm_console/log/stats do not read multiple times
console_add loops
console_subscribe terminates (a stream of messages is sent)
log data stream loops
log_subscribe terminates (a stream of data is sent)
stat_add loops
stat_remove loops
stat_subscribe terminates (a stream of stats is sent)
terminates means: reads once more, and closes socket after second read returned
loops means: further incoming data is processed
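a minimal Lwt sketch of the two behaviours (read_command is a hypothetical
helper, not the actual API):

  open Lwt.Infix

  (* loops: keep reading and processing further incoming data *)
  let rec loop fd read_command handle =
    read_command fd >>= fun cmd ->
    handle cmd >>= fun () ->
    loop fd read_command handle

  (* terminates: read once more, close the socket once that read returned *)
  let terminate fd read_command =
    read_command fd >>= fun _ ->
    Lwt_unix.close fd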
vmmc now has more subcommands
- policy [-n name] returns all policies at name and below
- add_policy [-n name] [--cpu cpuid] [--mem mem] [--bridge bridge] [--block size] adds a policy
- remove [-n name] removes policy at name
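example invocations (hypothetical names and values):

  vmmc policy -n foo
  vmmc add_policy -n foo.bar --mem 1024 --cpu 1 --bridge service
  vmmc remove -n foo.bar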
the policy type is just the same as the one in vmm_req_delegation, and vmm_resources now checks them:
- you cannot insert a subpolicy violating the prefix
- you cannot insert a policy which would forbid current resource usage
- you cannot insert a policy with which any subpolicy would be invalid
- you can adjust (increase/decrease) a policy if the above invariants are kept
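a minimal sketch of the sub-policy relation these checks rely on (hypothetical
record fields; the real policy from vmm_req_delegation differs):

  type policy = { vms : int ; mem : int ; cpuids : int list ; bridges : string list }

  let subset a b = List.for_all (fun x -> List.mem x b) a

  (* sub is a sub-policy of super iff it does not allow more than super *)
  let is_sub ~super ~sub =
    sub.vms <= super.vms && sub.mem <= super.mem
    && subset sub.cpuids super.cpuids && subset sub.bridges super.bridges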
implement "force create" directly in vmmd: much nicer to
- check resource constraints,
- potentially kill the vm,
- and create a new vm,
all as a single transaction.
don't use /tmp anymore, but /var/run/albatross for fifos + sockets + vm images,
and /var/db/albatross for ukvm-bin and crls, and /var/log/albatross for logging
vmm_console/vmm_log/vmm_stats_lwt: delete socket on startup if it exists
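the startup cleanup is roughly (plain Unix sketch, hypothetical socket path):

  let prepare_socket path =
    (if Sys.file_exists path then Unix.unlink path) ;
    let s = Unix.socket Unix.PF_UNIX Unix.SOCK_STREAM 0 in
    Unix.bind s (Unix.ADDR_UNIX path) ;
    Unix.listen s 1 ;
    s

  (* e.g. prepare_socket "/var/run/albatross/console.sock" *)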
vmm_influxdb_stats: connects to the vmm_stats socket and every interval pushes
the statistics in influxdb line format via tcp to the specified host and port
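a minimal sketch of the line format it emits (hypothetical measurement and field
names):

  let line ~vm ~bridge ~rx ~tx =
    Printf.sprintf "interface,vm=%s,bridge=%s rx_bytes=%Lui,tx_bytes=%Lui" vm bridge rx tx

  (* line ~vm:"foo.bar" ~bridge:"service" ~rx:1024L ~tx:2048L =
     "interface,vm=foo.bar,bridge=service rx_bytes=1024i,tx_bytes=2048i" *)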
allows cleaning up various hacks, such as checking for the pid in vmm_resources
or temporarily removing the allocated resources from the resource map in vmm_engine.
semantics are now slightly different, but certainly enhanced:
- each VM has a Lwt.wait () task attached in Vmm_engine.t (tasks : 'c String.Map.t)
- normal create shouldn't be much different, apart from memoizing the sleeper
- after waitpid is done in vmmd, and vmm_engine.shutdown succeeded, Lwt.wakeup is called for the sleeper
- force create now:
- checks static policies
- looks for existing VM (and task), if present: kill and wait for task in vmmd
    - continues with checking presence of the vm name and the dynamic policies, and allocates resources (tap, img, fifo); see the sketch below
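a minimal sketch of the waiter bookkeeping and the force create flow
(hypothetical names; the real Vmm_engine.t and handlers differ):

  open Lwt.Infix

  module SM = Map.Make (String)

  (* each running VM gets a Lwt.wait () pair: the sleeper stays in the map so a
     later force create can wait for the old VM to be gone, the waker is resolved
     once waitpid returned and shutdown succeeded *)
  let tasks : (unit Lwt.t * unit Lwt.u) SM.t ref = ref SM.empty

  let register name =
    let sleeper, waker = Lwt.wait () in
    tasks := SM.add name (sleeper, waker) !tasks

  let vm_stopped name =
    match SM.find_opt name !tasks with
    | None -> ()
    | Some (_, waker) -> tasks := SM.remove name !tasks ; Lwt.wakeup waker ()

  let force_create name ~kill ~create =
    (match SM.find_opt name !tasks with
     | None -> Lwt.return_unit
     | Some (sleeper, _) -> kill name ; sleeper) >>= fun () ->
    create name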
this means the whole randomness in filenames can be removed, and the
communication between vmm_console and vmm_client is working again (attach/detach
could not work since vmm_console knew only about "albatross.AAA.BBB.RANDOM",
whereas vmm_client insisted on "AAA.BBB")
resource overcommitment (and races in e.g. block device closing + opening) are
gone now: only once the old vm is cleaned up are resources for the new one
allocated and the new vm executed
a client certificate may contain either a `Create or a `Force_create permission. If
the latter is used (vmm_req_vm --force) and a VM with the same name already
exists, it is destroyed (provided the dynamic resources, not counting the
existing VM, would allow the new one to be deployed) and the new one is started.
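the decision is roughly (hypothetical sketch, not the actual engine code):

  let may_start permission ~exists =
    match permission, exists with
    | `Create, false -> Ok `Start
    | `Create, true -> Error (`Msg "VM already exists")
    | `Force_create, true -> Ok `Destroy_then_start  (* only if remaining resources allow it *)
    | `Force_create, false -> Ok `Start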
I had this concrete deployment scenario where kill ; create takes some minutes,
since 10MB of data needs to be transferred from my laptop to a remote server
(me being behind dialup).
- renamed `Image to `Create
- renamed `Destroy_image to `Destroy_vm
- fix fd leak (always close socket)
- send first message (login) after renegotiation
vmm_stats:
- remove unneeded functionality (keeping old statistics around)
- translate internal tap names to bridge names
- gather statistics from vmmapi as well
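the tap -> bridge translation is essentially a lookup (hypothetical names; the
real mapping is kept per VM):

  let bridge_name mapping tap =
    match List.assoc_opt tap mapping with
    | Some bridge -> bridge
    | None -> tap

  (* bridge_name [ "vmmtap0", "service" ] "vmmtap0" = "service" *)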
vmm_prometheus_stats:
- new exporter of statistics to prometheus
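the exporter serves metrics in the prometheus text exposition format, roughly
(hypothetical metric and label names):

  let metric ~name ~vm ~value =
    Printf.sprintf "# TYPE %s gauge\n%s{vm=\"%s\"} %Lu\n" name name vm value

  (* metric ~name:"albatross_memory_bytes" ~vm:"foo.bar" ~value:33554432L *)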
*:
- fix typo in README
- style