Commit graph

27 commits

Author SHA1 Message Date
Hannes Mehnert 65693ea188 revise the "--net=yyy" argument to (optionally) contain a service:bridge
it used to contain only the service name, and used the same string for the bridge. This
is not flexible enough to run off-the-shelf unikernels (configured for bridge
"service" and "management" on multi-homed servers). The old behaviour is the
new default (i.e. "--net=service" creates and attaches a tap device to bridge
"service", and passes "--net:service=tapYY" to the solo5 tender). But it is more
flexible now: "--net=service:other-bridge" will create a tap device attached to
"other-bridge" and pass "--net:service=tapYY" to the tender. This way, there's
no need to match bridge names on the actual server with network device names of
the unikernels.
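
As an illustration, a minimal OCaml sketch of this split (a standalone helper with assumed names, not the actual albatross argument parser):

    (* sketch: split a "--net" argument into (network device name, bridge name);
       without a ':' the bridge defaults to the device name, which is the old
       behaviour *)
    let parse_net arg =
      match String.index_opt arg ':' with
      | None -> (arg, arg)
      | Some i ->
        let name = String.sub arg 0 i
        and bridge = String.sub arg (i + 1) (String.length arg - i - 1) in
        (name, bridge)

    let () =
      assert (parse_net "service" = ("service", "service"));
      assert (parse_net "service:other-bridge" = ("service", "other-bridge"))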

NB: this is (mostly) backwards-compatible: the on-disk data structures are
versioned (and the version is bumped with this PR), so an old albatross client
can send "create" commands to a new server. But a new client will get a parse
error from an old server - which is fine, considering the deployment base.
2020-03-25 16:09:23 +01:00
Hannes Mehnert a579a8e143 root name is "." instead of "" 2019-10-13 13:40:17 +02:00
Hannes Mehnert f81a12bc4d initial metrics 2019-10-12 02:06:38 +02:00
Hannes Mehnert 94912c21e4 changes for solo5 0.6
-- this is a breaking change in the wire protocol
2019-10-12 02:06:27 +02:00
Hannes Mehnert a9c32d7801 vmmd: actually, first check resources, then exec VM, then insert VM
in case the insertion fails, raise Invalid_argument

this leads to saner failure behaviour, and also cleans up resources in case
vmm_resources.insert_vm fails (or cpuset, opening the fifo, or create_process
does)
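
A minimal OCaml sketch of that ordering, with the helpers passed in as parameters since their real names and signatures in vmmd are not shown here:

    (* sketch only: check resources first, then start the VM process, then
       insert it into the resource map; if insertion fails, tear the process
       down again and raise Invalid_argument *)
    let create_vm ~check_resources ~exec_vm ~insert_vm ~destroy_vm t name config =
      if not (check_resources t name config) then
        Error (`Msg "insufficient resources")
      else
        let vm = exec_vm name config in   (* cpuset, fifo, create_process *)
        match insert_vm t name vm with
        | Ok t' -> Ok (t', vm)
        | Error _ -> destroy_vm vm; invalid_arg "couldn't insert vm"
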
2019-01-27 17:20:24 +01:00
Hannes Mehnert c8f1030403 rename Vm to Unikernel 2018-11-13 01:02:05 +01:00
Hannes Mehnert 85372b0c7e rework resources: now block, vms, and policies are in separate tries 2018-11-13 00:06:43 +01:00
Hannes Mehnert b5c9cdea6a cleanups 2018-11-12 22:19:39 +01:00
Hannes Mehnert 8ccda0e410 refactor bridge: use a string instead of a complicated thing 2018-11-12 22:07:45 +01:00
Hannes Mehnert 2e7f2730a2 move Vm to submodule 2018-11-11 03:24:50 +01:00
Hannes Mehnert 561ba5c5df put Policy in a submodule 2018-11-11 03:09:37 +01:00
Hannes Mehnert 43379d6d9d rename Vmm_core.id to Vmm_core.Name.t and make it private - also check in the constructors that labels fit into 20 characters ldh (letters, digits, hyphen), and in Vmm_tls that the maximum depth is 10 2018-11-11 01:44:31 +01:00
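
For illustration only, a hypothetical sketch of such a label check (the function names are assumptions, not the actual Vmm_core.Name constructors):

    (* sketch: a label is valid if it is non-empty, at most 20 characters, and
       consists only of letters, digits and hyphens (ldh) *)
    let valid_label s =
      let ldh = function 'a'..'z' | 'A'..'Z' | '0'..'9' | '-' -> true | _ -> false in
      String.length s > 0 && String.length s <= 20 && String.for_all ldh s

    (* a name is a list of such labels, at most 10 deep *)
    let valid_name labels =
      List.length labels <= 10 && List.for_all valid_label labels
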
Hannes Mehnert 6dcde8eb68 block device support 2018-11-11 00:01:56 +01:00
Hannes Mehnert 75372a792f fix resource policies. it was checking too many vms:
vm foo.bar is active with 32mb
add_policy bar --mem 16 <- failed :/

what is checked on add_policy <id> <new-policy>?
- for all policies above <id>: <new-policy> is a sub-policy of each
- for all policies below <id>: each is a sub-policy of <new-policy>
- the resource usage of VMs below <id> is within the <new-policy> limits (number of VMs, memory, network access, cpuids)
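
A hypothetical sketch of that check, with the trie lookups and the sub-policy relation passed in as parameters since the real vmm_resources API is not shown here:

    (* sketch: [is_sub a b] is assumed to hold when [a] fits within [b]; the
       three lookup functions stand in for walks over the policy and VM tries *)
    let check_add_policy ~is_sub ~policies_above ~policies_below ~usage_below id new_policy =
      List.for_all (fun super -> is_sub new_policy super) (policies_above id)
      && List.for_all (fun sub -> is_sub sub new_policy) (policies_below id)
      && is_sub (usage_below id) new_policy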
2018-11-03 00:05:10 +01:00
Hannes Mehnert c669be8e02 address most of @cfcs comments 2018-10-29 17:14:51 +01:00
Hannes Mehnert 8ab37d6b3b resources: remove_vm and remove_policy - no need to intertwine into a single remove 2018-10-28 19:50:48 +01:00
Hannes Mehnert 7b8f2cf802 add policy does nothing when received policy is equal to stored one 2018-10-28 19:41:06 +01:00
Hannes Mehnert 5e921d7345 skip empty common names in vmm_tls 2018-10-28 19:04:24 +01:00
Hannes Mehnert ce0c42fa77 more cleanups 2018-10-26 21:29:59 +02:00
Hannes Mehnert 811f3abc50 adjustments 2018-10-26 21:29:59 +02:00
Hannes Mehnert c399501a18 get rid of vm_config.vname 2018-10-26 21:29:59 +02:00
Hannes Mehnert 182e2ae10c policies:
vmmc now has more subcommands
  - policy [-n name] returns all policies in name and below
  - add_policy [-n name] [--cpu cpuid] [--mem mem] [--bridge bridge] [--block size] adds a policy
  - remove [-n name] removes policy at name

the policy is just the same as the one in vmm_req_delegation, and vmm_resources now checks them:
- you cannot insert a subpolicy violating the prefix
- you cannot insert a policy which would forbid current resource usage
- you cannot insert a policy with which any subpolicy would be invalid
- you can adjust (increase/decrease) a policy if the above invariants are kept
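
A hypothetical OCaml sketch of the sub-policy relation behind these invariants (the record fields are assumptions, not the actual vmm_core definitions):

    (* sketch of a policy record and the "sub-policy" relation: a policy lower
       in the name tree must not allow more than its parent *)
    type policy = {
      vms : int;                (* maximum number of VMs *)
      cpuids : int list;        (* allowed cpuids *)
      memory : int;             (* MB *)
      block : int option;       (* MB of block storage, if any *)
      bridges : string list;    (* allowed network bridges *)
    }

    let subset a b = List.for_all (fun x -> List.mem x b) a

    (* [is_sub ~super p] holds if [p] does not allow more than [super] *)
    let is_sub ~super p =
      p.vms <= super.vms
      && p.memory <= super.memory
      && subset p.cpuids super.cpuids
      && subset p.bridges super.bridges
      && (match p.block, super.block with
          | None, _ -> true
          | Some _, None -> false
          | Some b, Some b' -> b <= b')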

implement "force create" directly in vmmd: much nicer to
 - check resource constraints,
 - kill vm potentially,
 - and create a new vm,
all as single transaction.
2018-10-26 21:29:59 +02:00
Hannes Mehnert ea83013068 delegation -> policy 2018-10-26 21:29:59 +02:00
Hannes Mehnert 9696953cd7 revise force-restart: now with wait for kill and resource cleanup before start
this allows cleaning up various hacks, such as checking for the pid in vmm_resources
or temporarily removing the allocated resources from the resource map in vmm_engine

the semantics are now slightly different, but certainly improved:
- each VM has a Lwt.wait () task attached in Vmm_engine.t (tasks : 'c String.Map.t)
- normal create shouldn't be much different, apart from memoizing the sleeper
- after waitpid is done in vmmd, and vmm_engine.shutdown succeeded, Lwt.wakeup is called for the sleeper
- force create now:
 - checks static policies
 - looks for an existing VM (and task); if present: kill it and wait for the task in vmmd
 - continues with presence checking of the vm name, dynamic policies, and allocating resources (tap, img, fifo)
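
A minimal sketch of that waiter mechanism, using Lwt but otherwise hypothetical names:

    (* sketch: every running VM gets a Lwt.wait pair; the waitpid handler wakes
       it once shutdown has finished, and force create waits on the task of an
       existing VM before allocating resources for the new one *)
    let tasks : (string, unit Lwt.t * unit Lwt.u) Hashtbl.t = Hashtbl.create 7

    let register name =
      let task, waiter = Lwt.wait () in
      Hashtbl.replace tasks name (task, waiter);
      task

    (* called after waitpid returned and the engine shutdown succeeded *)
    let stopped name =
      match Hashtbl.find_opt tasks name with
      | Some (_, waiter) -> Hashtbl.remove tasks name; Lwt.wakeup waiter ()
      | None -> ()

    let force_create name ~kill ~create =
      match Hashtbl.find_opt tasks name with
      | None -> create ()
      | Some (task, _) ->
        kill ();                 (* destroy the old VM *)
        Lwt.bind task create     (* create only once it is gone *)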

this means the whole randomness in filenames can be removed, and the
communication between vmm_console and vmm_client works again (attach/detach
could not work since vmm_console knew only about "albatross.AAA.BBB.RANDOM",
whereas vmm_client insisted on "AAA.BBB")

resource overcommitment (and races in e.g. block device closing + opening) is
gone now: resources for the new VM are allocated and it is executed only once
the old one has been cleaned up
2018-04-05 01:02:45 +02:00
Hannes Mehnert 7a4661b2e1 style: require lwt 3.0.0, fix warnings, disable 4 (fragile pattern matching) and 48 (implicit elimination of optional argument) 2018-04-03 22:58:31 +02:00
Hannes Mehnert bb61388cfc new permission: force_create
a client certificate may contain either the `Create or the `Force_create
permission. If the latter is used (vmm_req_vm --force) and a VM with the same
name already exists, it is destroyed (provided the dynamic resources, not
counting the existing VM, would allow the new one to be deployed) and the new
one is started.
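
For illustration, a hypothetical sketch of the resulting check (the variant names follow this commit, the function is an assumption):

    (* sketch: only `Force_create may replace a VM that already exists under
       the same name; `Create succeeds only if the name is still free *)
    type permission = [ `Create | `Force_create | `Destroy_vm ]

    let may_create (p : permission) ~exists =
      match p, exists with
      | `Force_create, _ -> true
      | `Create, false -> true
      | _, _ -> false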

I had a concrete deployment scenario where kill ; create takes some minutes,
since 10MB of data needs to be transferred from my laptop to a remote server
(me being behind dialup).

- renamed `Image to `Create
- renamed `Destroy_image to `Destroy_vm
2018-03-22 17:00:08 +01:00
Hannes Mehnert 02be3f4528 initial 2017-07-10 10:38:25 +01:00