- Vmm_console/log/stats do not read multiple times in every case; behaviour depends on the command:
console_add loops
console_subscribe terminates (a stream of messages is sent)
log data stream loops
log_subscribe terminates (a stream of data is sent)
stat_add loops
stat_remove loops
stat_subscribe terminates (a stream of stats is sent)
"terminates" means: the daemon reads once more, and closes the socket after that second read returned
"loop" means: the daemon keeps processing further incoming data
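a minimal Lwt sketch of the two behaviours; read and process are stand-ins for the real Vmm_lwt readers:

  open Lwt.Infix

  (* "loop": keep reading and processing further incoming commands *)
  let rec handle_looping ~read ~process fd =
    read fd >>= fun data ->
    process data >>= fun () ->
    handle_looping ~read ~process fd

  (* "terminates": after the stream has been sent, read once more and
     close the socket once that second read returns *)
  let handle_terminating ~read fd =
    read fd >>= fun _ ->
    Lwt_unix.close fd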
vmmc now has more subcommands
- policy [-n name] returns all policies at name and below
- add_policy [-n name] [--cpu cpuid] [--mem mem] [--bridge bridge] [--block size] adds a policy
- remove [-n name] removes the policy at name
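hypothetical example invocations (the name and resource values are made up; only the subcommands and flags above are from the implementation):

  vmmc add_policy -n foo --cpu 1 --mem 1024 --bridge service --block 10
  vmmc policy -n foo
  vmmc remove -n foo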
the policy is just the same as the one in vmm_req_delegation, and vmm_resources now checks them:
- you cannot insert a subpolicy violating the prefix
- you cannot insert a policy which would forbid the current resource usage
- you cannot insert a policy under which any subpolicy would become invalid
- you can adjust (increase/decrease) a policy as long as the above invariants hold
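a minimal sketch of what the insertion checks amount to; the record and helper names are hypothetical, the real checks live in Vmm_resources:

  (* hypothetical policy record; the real one carries more fields *)
  type policy = { vms : int ; cpuids : int list ; memory : int }

  (* is [sub] within the bounds of [super]? *)
  let subset ~super ~sub =
    sub.vms <= super.vms
    && List.for_all (fun c -> List.mem c super.cpuids) sub.cpuids
    && sub.memory <= super.memory

  (* inserting [p] is only allowed if it stays within its parent (the
     prefix), does not forbid what is already in use (current usage is
     represented as a policy-shaped record here), and keeps all
     existing subpolicies valid *)
  let insertion_allowed ~parent ~current_usage ~subpolicies p =
    subset ~super:parent ~sub:p
    && subset ~super:p ~sub:current_usage
    && List.for_all (fun s -> subset ~super:p ~sub:s) subpolicies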
implement "force create" directly in vmmd: much nicer to
- check resource constraints,
- kill vm potentially,
- and create a new vm,
all as single transaction.
don't use /tmp anymore, but /var/run/albatross for fifos + sockets + vm images,
and /var/db/albatross for ukvm-bin and crls, and /var/log/albatross for logging
vmm_console/vmm_log/vmm_stats_lwt: delete socket on startup if it exists
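a minimal sketch of that startup step, using plain Unix calls for brevity:

  (* remove a stale socket file (ignore the case where none exists),
     then bind and listen on a fresh one *)
  let prepare_socket path =
    (try Unix.unlink path with Unix.Unix_error (Unix.ENOENT, _, _) -> ()) ;
    let s = Unix.socket Unix.PF_UNIX Unix.SOCK_STREAM 0 in
    Unix.bind s (Unix.ADDR_UNIX path) ;
    Unix.listen s 1 ;
    s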
vmm_influxdb_stats: connects to the vmm_stats socket and pushes, every interval,
the statistics in influxdb line format via tcp to the specified host and port
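for reference, a hypothetical line as it might be pushed (the measurement, tag, and field names here are made up; the general shape of influxdb line format is measurement,tag=value field=value[,field=value] timestamp):

  vm,vm_name=foo.bar cpu_time=123456i,memory=24i 1536755940000000000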
this allows cleaning up various hacks, such as checking for the pid in vmm_resources
or temporarily removing the allocated resources from the resource map in vmm_engine.
the semantics are now slightly different, but certainly improved:
- each VM has a Lwt.wait () task attached in Vmm_engine.t (tasks : 'c String.Map.t)
- normal create shouldn't be much different, apart from memoizing the sleeper
- once waitpid has returned in vmmd and Vmm_engine.shutdown succeeded, Lwt.wakeup is called on the sleeper
- force create now:
- checks static policies
  - looks for an existing VM (and task); if present, kills it and waits for the task in vmmd
  - continues with the presence check of the vm name, dynamic policies, and resource allocation (tap, img, fifo)
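a minimal sketch of the task bookkeeping and the force create path; the state layout and the check_static/kill/allocate_and_exec parameters are stand-ins for the real vmmd / Vmm_engine code:

  open Lwt.Infix

  module SM = Map.Make (String)

  (* simplified engine state: one sleeping task per running VM *)
  type state = { mutable tasks : (unit Lwt.t * unit Lwt.u) SM.t }

  (* on create, memoize a fresh sleeper for the VM *)
  let register state name =
    let task, waker = Lwt.wait () in
    state.tasks <- SM.add name (task, waker) state.tasks

  (* in vmmd, once waitpid returned and shutdown succeeded *)
  let vm_exited state name =
    match SM.find_opt name state.tasks with
    | None -> ()
    | Some (_, waker) ->
      state.tasks <- SM.remove name state.tasks ;
      Lwt.wakeup waker ()

  (* force create: check static policies, kill and await any old VM of
     the same name, only then run the normal creation path *)
  let force_create ~check_static ~kill ~allocate_and_exec state name config =
    check_static config >>= fun () ->
    (match SM.find_opt name state.tasks with
     | None -> Lwt.return_unit
     | Some (task, _) -> kill name >>= fun () -> task)
    >>= fun () ->
    allocate_and_exec name config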
this means the whole randomness in filenames can be removed, and the
communication between vmm_console and vmm_client works again (attach/detach
could not work since vmm_console only knew about "albatross.AAA.BBB.RANDOM",
whereas vmm_client insisted on "AAA.BBB")
resource overcommitment (and races in e.g. block device closing + opening) is
gone now: only once the old vm is cleaned up are the resources for the new one
allocated and the new vm executed