Is State
Currently, Kiosc runs on a single server equipped with 120 GB of memory, using Docker Compose to start the individual services of Kiosc in separate Docker containers. Users can start their own containers in a different Docker network and without any resource restriction, i.e., a single container can potentially consume all resources on the host.
Planned Change
As with all other services, we want to migrate Kiosc from one large server to multiple small ones. In the long term, we also need to find a way to limit the resources of the users' containers.
Issues
A good target is 32 GB of memory per server. Running the service containers of Kiosc and the users' containers on the same machine would therefore most likely lead to a race for memory, potential swapping, and eventually Kiosc being stopped and containers crashing.
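Independent of the migration, per-container memory caps would already reduce this risk. A minimal sketch, assuming user containers are started via `docker run` (the container name, network name, image, and 4 GB limit are illustrative assumptions, not the current setup):

```shell
# Cap a user container at 4 GB of RAM and disable additional swap,
# so one runaway container cannot starve the Kiosc service containers.
docker run -d --name user-app \
  --memory 4g --memory-swap 4g \
  --network kiosc-user-net \
  nginx:alpine
```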
Potential Solutions
Systems like Kubernetes and Docker Swarm can schedule containers and distribute them intelligently across attached machines. As we already use Docker Compose, Docker Swarm is the most straightforward candidate to evaluate: we expect the smallest addition of third-party software as well as a low learning curve.
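Since Docker Swarm is built into the Docker Engine, the evaluation could start from a few standard commands; a sketch, where the addresses, tokens, and file names are placeholders:

```shell
# On the designated manager node: initialize the swarm
docker swarm init --advertise-addr <manager-ip>

# On each additional small server: join the swarm as a worker
# (docker swarm init prints the exact join command and token)
docker swarm join --token <worker-token> <manager-ip>:2377

# Deploy the existing Compose file as a swarm stack
docker stack deploy -c docker-compose.yml kiosc

# Inspect the services and where their tasks were scheduled
docker service ls
docker service ps kiosc_<service-name>
```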
Should State
Kiosc uses multiple smaller servers. We use Docker Swarm to distribute the users' containers among the attached virtual machines.
Tasks
Get to know Docker Swarm, especially regarding:
Assigning specific resource limits to a container
Running a container in a specific Docker network
How to ensure distribution of containers/"services" throughout the cluster of smaller machines
If a container dies, e.g., via the OOM killer, can it be restarted on another server?
Swarm uses services instead of containers. How do state handling and logging differ?
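Most of the points above map onto the `deploy` section of a swarm stack file. A hedged sketch of what a user service might look like, where the service name, network name, image, and limits are illustrative assumptions:

```yaml
version: "3.8"

services:
  user-container:            # hypothetical user-launched service
    image: nginx:alpine      # placeholder image
    networks:
      - kiosc-user-net       # dedicated user network; name is an assumption
    deploy:
      resources:
        limits:
          memory: 4G         # hard memory cap per task
          cpus: "1.0"        # CPU cap per task
      restart_policy:
        condition: on-failure  # swarm reschedules failed tasks, possibly on another node
      placement:
        constraints:
          - node.role == worker  # keep user containers off the manager node

networks:
  kiosc-user-net:
    driver: overlay          # overlay networks span all nodes of the swarm
```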
holtgrewe changed the title from "Migrate Kiosc to OpenStack" to "Implement launching containers with docker engine swarm mode" on Jan 25, 2022.