diff --git a/docs/conf.py b/docs/conf.py index 5a92403..47ff893 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -14,7 +14,8 @@ # Enable MathJax for Math extensions = ['sphinx.ext.mathjax', 'sphinx.ext.intersphinx', - 'sphinx_copybutton'] + 'sphinx_copybutton', + 'myst_parser'] # The master toctree document. master_doc = 'index' diff --git a/docs/configuration/cull.md b/docs/configuration/cull.md new file mode 100644 index 0000000..4cde59f --- /dev/null +++ b/docs/configuration/cull.md @@ -0,0 +1,7 @@ +# Culling idle servers + +Plasma uses the [same defaults as The Littlest JupyterHub](http://tljh.jupyter.org/en/latest/topic/idle-culler.html#default-settings) +for culling idle servers. + +It overrides the `timeout` value to `3600`, which means that the user servers will be shut down if they have +been idle for more than one hour. diff --git a/docs/configuration/cull.rst b/docs/configuration/cull.rst deleted file mode 100644 index e25e32f..0000000 --- a/docs/configuration/cull.rst +++ /dev/null @@ -1,8 +0,0 @@ -Culling idle servers -==================== - -Plasma uses the `same defaults as The Littlest JupyterHub `_ -for culling idle servers. - -It overrides the ``timeout`` value to ``3600``, which means that the user servers will be shut down if they have -been idle for more than one hour. diff --git a/docs/configuration/index.md b/docs/configuration/index.md new file mode 100644 index 0000000..7040d9c --- /dev/null +++ b/docs/configuration/index.md @@ -0,0 +1,11 @@ +# Configuration + +```{toctree} +:maxdepth: 3 + +monitoring +persistence +resources +cull +namedservers +``` diff --git a/docs/configuration/index.rst b/docs/configuration/index.rst deleted file mode 100644 index 1a32666..0000000 --- a/docs/configuration/index.rst +++ /dev/null @@ -1,11 +0,0 @@ -Configuration -============= - -.. 
toctree:: - :maxdepth: 3 - - monitoring - persistence - resources - cull - namedservers diff --git a/docs/configuration/monitoring.md b/docs/configuration/monitoring.md new file mode 100644 index 0000000..3362389 --- /dev/null +++ b/docs/configuration/monitoring.md @@ -0,0 +1,98 @@ +# Monitoring + +```{warning} +HTTPS must be enabled to be able to access Cockpit. Refer to {ref}`install-https` for more info. +``` + +## Installing Cockpit + +`cockpit`, a monitoring tool for the server, is not installed by default. + +First make sure HTTPS is enabled and the `name_server` variable is specified in the `hosts` file. +See {ref}`install-https` for more info. + +Then execute the `cockpit.yml` playbook: + +```bash +ansible-playbook cockpit.yml -i hosts -u ubuntu +``` + +The Plasma TLJH plugin registers `cockpit` as a JupyterHub service. This means that +Cockpit is accessible to JupyterHub admin users via the JupyterHub interface: + +```{image} ../images/configuration/cockpit-navbar.png +:align: center +:alt: Accessing cockpit from the nav bar +:width: 100% +``` + +Users will be asked to log in with their system credentials. They can then access the Cockpit dashboard: + +```{image} ../images/configuration/cockpit.png +:align: center +:alt: Cockpit +:width: 100% +``` + +## Monitoring user servers with Cockpit + +````{note} +Access to Docker Containers requires access to `docker`. + +Make sure your user can access docker on the machine with: + +```bash +sudo docker info +``` + +Your user should also be able to log in with a password. If the user doesn't have a password yet, you can +create a new one with: + +```bash +sudo passwd +``` + +For example, if your user is `ubuntu`: + +```bash +sudo passwd ubuntu +``` + +To add more users as admin or change permissions from the Cockpit UI, see {ref}`monitoring-permissions`. 
+
````

+Since user servers are started as Docker containers, they will be displayed in the Cockpit interface in the +`Docker Containers` section: + +```{image} ../images/configuration/cockpit-docker.png +:align: center +:alt: Docker Containers from Cockpit +:width: 100% +``` + +The Cockpit interface shows: + +- The username as part of the name of the Docker container +- The resources they are currently using +- The environment currently in use + +It is also possible to stop the user server by clicking on the "Stop" button. + +(monitoring-permissions)= + +## Changing user permissions from the Cockpit UI + +```{note} +You first need to be logged in with a user that has the `sudo` permission. +``` + +Cockpit makes it easy to add a specific user to a certain group. + +For example a user can be given the "Container Administrator" role via the UI to be able to manage Docker containers +and images on the machine: + +```{image} ../images/configuration/cockpit-roles.png +:align: center +:alt: Manage user roles from the Cockpit UI +:width: 100% +``` diff --git a/docs/configuration/monitoring.rst b/docs/configuration/monitoring.rst deleted file mode 100644 index 455488e..0000000 --- a/docs/configuration/monitoring.rst +++ /dev/null @@ -1,100 +0,0 @@ -Monitoring -========== - -.. warning:: - - HTTPS must be enabled to be able to access Cockpit. Refer to :ref:`install/https` for more info. -Installing Cockpit ------------------- - -``cockpit`` is not installed by default as a monitoring tool for the server. - -First make sure HTTPS is enabled and the ``name_server`` variable is specified in the ``hosts`` file. -See :ref:`install/https` for more info. - -Then execute the ``cockpit.yml`` playbook: - -.. code-block:: bash - - ansible-playbook cockpit.yml -i hosts -u ubuntu - -The Plasma TLJH plugin registers ``cockpit`` as a JupyterHub service. This means that -Cockpit is accessible to JupyterHub admin users via the JupyterHub interface: - -.. 
image:: ../images/configuration/cockpit-navbar.png - :alt: Accessing cockpit from the nav bar - :width: 100% - :align: center - -Users will be asked to login with their system credentials. They can then access the Cockpit dashboard: - -.. image:: ../images/configuration/cockpit.png - :alt: Cockpit - :width: 100% - :align: center - -Monitoring user servers with Cockpit ------------------------------------- - -.. note:: - - Access to Docker Containers requires access to ``docker``. - - Make sure your user can access docker on the machine with: - - .. code-block:: bash - - sudo docker info - - Your user should also be able to login with a password. If the user doesn't have a password yet, you can - create a new one with: - - .. code-block:: bash - - sudo passwd - - For example if your user is ``ubuntu``: - - .. code-block:: bash - - sudo passwd ubuntu - - To add more users as admin or change permissions from the Cockpit UI, see :ref:`monitoring/permissions`. - - -Since user servers are started as Docker containers, they will be displayed in the Cockpit interface in the -``Docker Containers`` section: - -.. image:: ../images/configuration/cockpit-docker.png - :alt: Docker Containers from Cockpit - :width: 100% - :align: center - -The Cockpit interface shows: - -- The username as part of the name of the Docker container -- The resources they are currently using -- The environment currently in use - -It is also possible to stop the user server by clicking on the "Stop" button. - - -.. _monitoring/permissions: - -Changing user permissions from the Cockpit UI ---------------------------------------------- - -.. note:: - - You first need to be logged in with a user that has the ``sudo`` permission. - -Cockpit makes it easy to add a specific user to a certain group. - -For example a user can be given the "Container Administrator" role via the UI to be able to manage Docker containers -and images on the machine: - -.. 
image:: ../images/configuration/cockpit-roles.png - :alt: Manage user roles from the Cockpit UI - :width: 100% - :align: center diff --git a/docs/configuration/namedservers.md b/docs/configuration/namedservers.md new file mode 100644 index 0000000..72d4ca2 --- /dev/null +++ b/docs/configuration/namedservers.md @@ -0,0 +1,23 @@ +# Named Servers + +By default, users can run only one server at a time. + +[Named servers functionality](https://jupyterhub.readthedocs.io/en/stable/reference/config-user-env.html#named-servers) in JupyterHub +can be activated to let users run several servers. + +To allow up to 2 simultaneous named servers (in addition to the default one), create the file `named_servers_config.py` +in the directory `/opt/tljh/config/jupyterhub_config.d` with the following content: + +```python +c.JupyterHub.allow_named_servers = True +c.JupyterHub.named_server_limit_per_user = 2 +``` + +Then, reload tljh: + +```bash +sudo tljh-config reload +``` + +Have a look at the [named servers documentation](https://jupyterhub.readthedocs.io/en/stable/reference/config-user-env.html#named-servers) +for more details. diff --git a/docs/configuration/namedservers.rst b/docs/configuration/namedservers.rst deleted file mode 100644 index 2bb166c..0000000 --- a/docs/configuration/namedservers.rst +++ /dev/null @@ -1,24 +0,0 @@ -Named Servers -============= - -By default, users can run only one server at once. - -`Named servers functionality `_ in JupyterHub -can be activated to let the user run several servers. - -To allow up to 2 simultaneous named servers (in addition to the default one), create the file ``named_servers_config.py`` -in the directory ``/opt/tljh/config/jupyterhub_config.d`` with the following content: - -.. code-block:: text - - c.JupyterHub.allow_named_servers = True - c.JupyterHub.named_server_limit_per_user = 2 - -Then, reload tljh: - -.. code-block:: text - - sudo tljh-config reload - -Have a look at the `named servers documentation `_ -for more details. 
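For scripted management, named servers can also be started and stopped through the JupyterHub REST API. The endpoint shape below follows JupyterHub's documented API (`/users/{name}/servers/{server_name}` for named servers, `/users/{name}/server` for the default one); the base URL, user, and server names are illustrative:

```python
# Build the Hub REST API endpoint for a user's server.
# JupyterHub accepts POST (start) and DELETE (stop) on these paths,
# authenticated with an API token.
def named_server_endpoint(base_url: str, user: str, server_name: str = "") -> str:
    if server_name:
        path = f"/hub/api/users/{user}/servers/{server_name}"
    else:
        path = f"/hub/api/users/{user}/server"
    return base_url.rstrip("/") + path

print(named_server_endpoint("http://localhost:8000", "foo", "gpu"))
# http://localhost:8000/hub/api/users/foo/servers/gpu
```

A POST to that URL with a valid API token starts the named server; a DELETE stops it.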
diff --git a/docs/configuration/persistence.md b/docs/configuration/persistence.md new file mode 100644 index 0000000..d4d6bb7 --- /dev/null +++ b/docs/configuration/persistence.md @@ -0,0 +1,91 @@ +# Data Persistence + +(persistence-user-data)= + +## User Data + +The user servers are started using JupyterHub's [SystemUserSpawner](https://github.com/jupyterhub/dockerspawner#systemuserspawner). + +This spawner is based on the [DockerSpawner](https://github.com/jupyterhub/dockerspawner#dockerspawner), but makes it possible +to use the host users to start the notebook servers. + +Concretely this means that the user inside the container corresponds to a real user that exists on the host. +Processes will be started by that user, instead of the default `jovyan` user that is usually found in the regular +Jupyter Docker images and on Binder. + +For example when the user `foo` starts their server, the list of processes looks like the following: + +```bash +foo@9cf23d669647:~$ ps aux +USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND +root 1 1.1 0.0 50944 3408 ? Ss 11:17 0:00 su - foo -m -c "$0" "$@" -- /srv/conda/envs/notebook/bin/jupyterhub-singleuser --ip=0.0.0.0 --port=8888 --NotebookApp.default_url=/lab --ResourceUseDisplay.track_cpu_percent=True +foo 32 5.4 0.8 399044 70528 ? Ssl 11:17 0:01 /srv/conda/envs/notebook/bin/python /srv/conda/envs/notebook/bin/jupyterhub-singleuser --ip=0.0.0.0 --port=8888 --NotebookApp.default_url=/lab --ResourceUseDisplay.track_cpu_percent=True +foo 84 0.0 0.0 20312 4036 pts/0 Ss 11:17 0:00 /bin/bash -l +foo 112 29.0 0.5 458560 46448 ? 
Ssl 11:17 0:00 /srv/conda/envs/notebook/bin/python -m bash_kernel -f /home/foo/.local/share/jupyter/runtime/kernel-9a7c8ad3-4ac2-4754-88cc-ef746d1be83e.json +foo 126 0.5 0.0 20180 3884 pts/1 Ss+ 11:17 0:00 /bin/bash --rcfile /srv/conda/envs/notebook/lib/python3.8/site-packages/pexpect/bashrc.sh +foo 140 0.0 0.0 36076 3368 pts/0 R+ 11:17 0:00 ps aux +``` + +The following steps happen when a user starts their server: + +1. The user home directory on the host is mounted into the container. This means that the file structure in the container reflects what is on the host. +2. A new directory is created in the user home directory for each new environment (i.e. for each Docker image). + For example, if a user starts the `2020-python-course` environment, there will be a new folder created under `/home/user/2020-python-course`. + This folder is then persisted to disk in the user home directory on the host. Any files and notebooks created from the notebook interface are also persisted to disk. +3. On server startup, the entrypoint script copies the files from the base image that are initially in `/home/jovyan` to `/home/user/2020-python-course` in the container. + They are then persisted in `/home/user/2020-python-course` on the host. + +```{image} ../images/configuration/persistence.png +:align: center +:alt: Mounting user's home directories +:width: 80% +``` + +- The files highlighted in blue correspond to the files initially bundled in the environment. These files are copied to the environment subdirectory in the user home directory on startup. +- The other files are examples of files created by the user. + +## User server startup + +The user server is started from the environment directory: + +```{image} ../images/configuration/user-server-rootdir.png +:align: center +:alt: User servers are started in the environment directory +:width: 50% +``` + +The rest of the user's files are mounted into the container; see {ref}`persistence-user-data`. 
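The per-environment directory convention described above can be sketched as follows (the helper name is hypothetical; only the path convention comes from the steps above):

```python
from pathlib import PurePosixPath

def environment_workdir(home: str, environment: str) -> str:
    # Each environment gets its own subdirectory of the user's home
    # directory, which is bind-mounted from the host and therefore persisted.
    return str(PurePosixPath(home) / environment)

print(environment_workdir("/home/foo", "2020-python-course"))
# /home/foo/2020-python-course
```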
+ +A user can for example open a terminal and access their files by typing `cd`. + +They can then inspect their files: + +```text +foo@3e29b2297563:/home/foo$ ls -lisah +total 56K + 262882 4.0K drwxr-xr-x 9 foo foo 4.0K Apr 21 16:53 . +6205024 4.0K drwxr-xr-x 1 root root 4.0K Apr 21 16:50 .. + 266730 4.0K -rw------- 1 foo foo 228 Apr 21 14:41 .bash_history + 262927 4.0K -rw-r--r-- 1 foo foo 220 May 5 2019 .bash_logout + 262928 4.0K -rw-r--r-- 1 foo foo 3.7K May 5 2019 .bashrc +1043206 4.0K drwx------ 3 foo foo 4.0K Apr 21 09:26 .cache + 528378 4.0K drwx------ 3 foo foo 4.0K Apr 17 17:36 .gnupg +1565895 4.0K drwxrwxr-x 2 foo foo 4.0K Apr 21 09:55 .ipynb_checkpoints +1565898 4.0K drwxr-xr-x 5 foo foo 4.0K Apr 21 09:27 .ipython +1565880 4.0K drwxrwxr-x 3 foo foo 4.0K Apr 21 09:26 .local + 262926 4.0K -rw-r--r-- 1 foo foo 807 May 5 2019 .profile +1050223 4.0K drwxrwxr-x 12 foo foo 4.0K Apr 20 10:44 2020-python-course +1043222 4.0K drwxrwxr-x 13 foo foo 4.0K Apr 20 17:07 r-intro + 258193 4.0K -rw-rw-r-- 1 foo foo 843 Apr 21 09:56 Untitled.ipynb +``` + +## Shared Data + +In addition to the user data, the plugin also mounts a shared data volume for all users. + +The shared data is available under `/srv/data` inside the user server, as pictured in the diagram above. + +On the host machine, the shared data should be placed under `/srv/data` as recommended in the +[TLJH documentation](http://tljh.jupyter.org/en/latest/howto/content/share-data.html#option-2-create-a-read-only-shared-folder-for-data). + +The shared data is **read-only**. diff --git a/docs/configuration/persistence.rst b/docs/configuration/persistence.rst deleted file mode 100644 index 5f4848b..0000000 --- a/docs/configuration/persistence.rst +++ /dev/null @@ -1,94 +0,0 @@ -Data Persistence -================ - -.. _persistence/user-data: - -User Data ---------- - -The user servers are started using JupyterHub's `SystemUserSpawner `_. 
- -This spawner is based on the `DockerSpawner `_, but makes it possible -to use the host users to start the notebook servers. - -Concretely this means that the user inside the container corresponds to a real user that exists on the host. -Processes will be started by that user, instead of the default ``jovyan`` user that is usually found in the regular -Jupyter Docker images and on Binder. - -For example when the user ``foo`` starts their server, the list of processes looks like the following: - -.. code-block:: bash - - foo@9cf23d669647:~$ ps aux - USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND - root 1 1.1 0.0 50944 3408 ? Ss 11:17 0:00 su - foo -m -c "$0" "$@" -- /srv/conda/envs/notebook/bin/jupyterhub-singleuser --ip=0.0.0.0 --port=8888 --NotebookApp.default_url=/lab --ResourceUseDisplay.track_cpu_percent=True - foo 32 5.4 0.8 399044 70528 ? Ssl 11:17 0:01 /srv/conda/envs/notebook/bin/python /srv/conda/envs/notebook/bin/jupyterhub-singleuser --ip=0.0.0.0 --port=8888 --NotebookApp.default_url=/lab --ResourceUseDisplay.track_cpu_percent=True - foo 84 0.0 0.0 20312 4036 pts/0 Ss 11:17 0:00 /bin/bash -l - foo 112 29.0 0.5 458560 46448 ? Ssl 11:17 0:00 /srv/conda/envs/notebook/bin/python -m bash_kernel -f /home/foo/.local/share/jupyter/runtime/kernel-9a7c8ad3-4ac2-4754-88cc-ef746d1be83e.json - foo 126 0.5 0.0 20180 3884 pts/1 Ss+ 11:17 0:00 /bin/bash --rcfile /srv/conda/envs/notebook/lib/python3.8/site-packages/pexpect/bashrc.sh - foo 140 0.0 0.0 36076 3368 pts/0 R+ 11:17 0:00 ps aux - - -The following steps happen when a user starts their server: - -1. Mount the user home directory on the host into the container. This means that the file structure in the container reflects what is on the host. -2. A new directory is created in the user home directory for each new environment (i.e for each Docker image). - For example if a user starts the ``2020-python-course`` environment, there will be a new folder created under ``/home/user/2020-python-course``. 
- This folder is then persisted to disk in the user home directory on the host. Any file and notebook created from the notebook interface are also persisted to disk. -3. On server startup, the entrypoint script copies the files from the base image that are initially in ``/home/jovyan`` to ``/home/user/2020-python-course`` in the container. - They are then persisted in ``/home/user/2020-python-course`` on the host. - -.. image:: ../images/configuration/persistence.png - :alt: Mounting user's home directories - :width: 80% - :align: center - -- The files highlighted in blue correspond to the files initially bundled in the environment. These files are copied to the environment subdirectory in the user home directory on startup. -- The other files are examples of files created by the user. - -User server startup -------------------- - -The user server is started from the environment directory: - -.. image:: ../images/configuration/user-server-rootdir.png - :alt: User servers are started in the environment directory - :width: 50% - :align: center - -The rest of the user files are mounted into the container, see :ref:`persistence/user-data`. - -A user can for example open a terminal and access their files by typing ``cd``. - -They can then inspect their files: - -.. code-block:: text - - foo@3e29b2297563:/home/foo$ ls -lisah - total 56K - 262882 4.0K drwxr-xr-x 9 foo foo 4.0K Apr 21 16:53 . - 6205024 4.0K drwxr-xr-x 1 root root 4.0K Apr 21 16:50 .. 
266730 4.0K -rw------- 1 foo foo 228 Apr 21 14:41 .bash_history - 262927 4.0K -rw-r--r-- 1 foo foo 220 May 5 2019 .bash_logout - 262928 4.0K -rw-r--r-- 1 foo foo 3.7K May 5 2019 .bashrc - 1043206 4.0K drwx------ 3 foo foo 4.0K Apr 21 09:26 .cache - 528378 4.0K drwx------ 3 foo foo 4.0K Apr 17 17:36 .gnupg - 1565895 4.0K drwxrwxr-x 2 foo foo 4.0K Apr 21 09:55 .ipynb_checkpoints - 1565898 4.0K drwxr-xr-x 5 foo foo 4.0K Apr 21 09:27 .ipython - 1565880 4.0K drwxrwxr-x 3 foo foo 4.0K Apr 21 09:26 .local - 262926 4.0K -rw-r--r-- 1 foo foo 807 May 5 2019 .profile - 1050223 4.0K drwxrwxr-x 12 foo foo 4.0K Apr 20 10:44 2020-python-course - 1043222 4.0K drwxrwxr-x 13 foo foo 4.0K Apr 20 17:07 r-intro - 258193 4.0K -rw-rw-r-- 1 foo foo 843 Apr 21 09:56 Untitled.ipynb - -Shared Data ----------- - -In addition to the user data, the plugin also mounts a shared data volume for all users. - -The shared data is available under ``/srv/data`` inside the user server, as pictured in the diagram above. - -On the host machine, the shared data should be placed under ``/srv/data`` as recommended in the -`TLJH documentation `_. - -The shared data is **read-only**. diff --git a/docs/configuration/resources.rst b/docs/configuration/resources.md similarity index 51% rename from docs/configuration/resources.rst rename to docs/configuration/resources.md index 75dde29..358f916 100644 --- a/docs/configuration/resources.rst +++ b/docs/configuration/resources.md @@ -1,52 +1,49 @@ -Resources -========= +# Resources Plasma provides default values to limit the Memory and CPU usage. -Memory ------ +## Memory -By default Plasma sets a limit of ``2GB`` for each user server. +By default Plasma sets a limit of `2GB` for each user server. This limit is enforced by the operating system, which kills the process if the memory consumption goes above this threshold. 
Users can monitor their memory usage using the indicator in the top bar area if the environment has these dependencies -(see the :ref:`resources/display` section below). +(see the {ref}`resources-display` section below). -.. image:: ../images/configuration/memory-usage.png - :alt: Memory indicator in the top bar area - :width: 50% - :align: center +```{image} ../images/configuration/memory-usage.png +:align: center +:alt: Memory indicator in the top bar area +:width: 50% +``` -CPU ---- +## CPU -By default Plasma sets a limit of ``2 cpus`` for each user server. +By default Plasma sets a limit of `2 cpus` for each user server. This limit is enforced by the operating system, which throttles access to the CPU by the processes running in the Docker container. Users can monitor their CPU usage using the indicator in the top bar area if the environment has these dependencies -(see the :ref:`resources/display` section below). +(see the {ref}`resources-display` section below). -.. image:: ../images/configuration/cpu-usage.png - :alt: CPU indicator in the top bar area - :width: 50% - :align: center +```{image} ../images/configuration/cpu-usage.png +:align: center +:alt: CPU indicator in the top bar area +:width: 50% +``` +(resources-display)= -.. 
_resources/display: - -Displaying the indicators ------------------------- +## Displaying the indicators To enable the Memory and CPU indicators as shown above, the following dependencies must be added to the user environment: -- ``nbresuse`` -- ``jupyterlab-topbar-extension`` -- ``jupyterlab-system-monitor`` +- `nbresuse` +- `jupyterlab-topbar-extension` +- `jupyterlab-system-monitor` As an example, check out the following two links: -- `Adding nbresuse `_ -- `Adding the JupyterLab extensions `_ +- [Adding nbresuse](https://github.com/plasmabio/template-python/blob/a4edf334c6b4b16be3a184d0d6e8196137ee1b06/environment.yml#L9) +- [Adding the JupyterLab extensions](https://github.com/plasmabio/template-python/blob/a4edf334c6b4b16be3a184d0d6e8196137ee1b06/postBuild#L4-L5) diff --git a/docs/contributing/documentation.md b/docs/contributing/documentation.md new file mode 100644 index 0000000..0592fa0 --- /dev/null +++ b/docs/contributing/documentation.md @@ -0,0 +1,25 @@ +# Writing Documentation + +The documentation is available at [docs.plasmabio.org](https://docs.plasmabio.org) and is written +with [Sphinx](https://sphinx-doc.org/). + +The Littlest JupyterHub has a good [overview page](https://the-littlest-jupyterhub.readthedocs.io/en/latest/contributing/docs.html) +on writing documentation, with external links to the tools used to generate it. + +First, create a new environment: + +```bash +conda create -n plasma-docs -c conda-forge python +conda activate plasma-docs +``` + +In the `docs` folder, run: + +```bash +python -m pip install -r requirements.txt +make html +``` + +Open `docs/_build/index.html` in a browser to start browsing the documentation. + +Rerun `make html` after making any changes to the source. 
diff --git a/docs/contributing/documentation.rst b/docs/contributing/documentation.rst deleted file mode 100644 index 68bd006..0000000 --- a/docs/contributing/documentation.rst +++ /dev/null @@ -1,26 +0,0 @@ -Writing Documentation -===================== - -The documentation is available at `docs.plasmabio.org `_ and is written -with `Sphinx `_. - -The Littlest JupyterHub has a good `overview page `_ -on writing documentation, with externals links to the tools used to generate it. - -First, create a new environment: - -.. code-block:: bash - - conda create -n plasma-docs -c conda-forge python - conda activate plasma-docs - -In the ``docs`` folder, run: - -.. code-block:: bash - - python -m pip install -r requirements.txt - make html - -Open ``docs/_build/index.html`` in a browser to start browsing the documentation. - -Rerun ``make html`` after making any changes to the source. \ No newline at end of file diff --git a/docs/contributing/index.md b/docs/contributing/index.md new file mode 100644 index 0000000..91e6c2c --- /dev/null +++ b/docs/contributing/index.md @@ -0,0 +1,10 @@ +# Contributing + +Thanks for your interest in contributing to the project! + +```{toctree} +:maxdepth: 3 + +documentation +local +``` diff --git a/docs/contributing/index.rst b/docs/contributing/index.rst deleted file mode 100644 index f1496b9..0000000 --- a/docs/contributing/index.rst +++ /dev/null @@ -1,10 +0,0 @@ -Contributing -============ - -Thanks for your interest in contributing to the project! - -.. toctree:: - :maxdepth: 3 - - documentation - local diff --git a/docs/contributing/local.md b/docs/contributing/local.md new file mode 100644 index 0000000..67f22c9 --- /dev/null +++ b/docs/contributing/local.md @@ -0,0 +1,68 @@ +# Setting up a dev environment + +It is possible to test the project locally without installing TLJH. Instead we use the `jupyterhub` Python package. 
+
+## Requirements + +`Docker` is used as a `Spawner` to start the user servers, and is therefore required to run the project locally. + +Check out the official Docker documentation to learn how to install Docker on your machine: +<https://docs.docker.com/install/linux/docker-ce/ubuntu/> + +## Create a virtual environment + +Using `conda`: + +```bash +conda create -n plasma -c conda-forge python nodejs +conda activate plasma +``` + +Alternatively, with Python's built-in `venv` module, you can create a virtual environment with: + +```bash +python3 -m venv . +source bin/activate +``` + +## Install the development requirements + +```bash +pip install -r dev-requirements.txt + +# dev install of the plasma package +pip install -e tljh-plasma + +# Install configurable-http-proxy (https://github.com/jupyterhub/configurable-http-proxy) +npm -g install configurable-http-proxy +``` + +## Pull the repo2docker Docker image + +User environments are built with `repo2docker` running in a Docker container. To pull the Docker image: + +```bash +docker pull quay.io/jupyterhub/repo2docker:main +``` + +## Create a config file for the group list + +Create a `config.yaml` file at the root of the repository with a list of groups your user belongs to. +For example: + +```yaml +plasma: + groups: + - docker + - adm +``` + +## Run + +Finally, start `jupyterhub` with the config in `debug` mode: + +```bash +python3 -m jupyterhub -f jupyterhub_config.py --debug +``` + +Open [http://localhost:8000](http://localhost:8000) in a web browser. diff --git a/docs/contributing/local.rst b/docs/contributing/local.rst deleted file mode 100644 index 297a80b..0000000 --- a/docs/contributing/local.rst +++ /dev/null @@ -1,75 +0,0 @@ -Setting up a dev environment -============================ - -It is possible to test the project locally without installing TLJH. Instead we use the ``jupyterhub`` Python package. - -Requirements ------------- - -``Docker`` is used as a ``Spawner`` to start the user servers, and is then required to run the project locally. 
- -Check out the official Docker documentation to know how to install Docker on your machine: -https://docs.docker.com/install/linux/docker-ce/ubuntu/ - -Create a virtual environment ----------------------------- - -Using ``conda``: - -.. code-block:: bash - - conda create -n plasma -c conda-forge python nodejs - conda activate plasma - -Alternatively, with Python's built in ``venv`` module, you can create a virtual environment with: - -.. code-block:: bash - - python3 -m venv . - source bin/activate - -Install the development requirements ------------------------------------- - -.. code-block:: bash - - pip install -r dev-requirements.txt - - # dev install of the plasma package - pip install -e tljh-plasma - - # Install (https://github.com/jupyterhub/configurable-http-proxy) - npm -g install configurable-http-proxy - -Pull the repo2docker Docker image ---------------------------------- - -User environments are built with ``repo2docker`` running in a Docker container. To pull the Docker image: - -.. code-block:: bash - - docker pull quay.io/jupyterhub/repo2docker:main - -Create a config file for the group list ---------------------------------------- - -Create a ``config.yaml`` file at the root of the repository with a list of groups your user belongs to. -For example: - -.. code-block:: yaml - - plasma: - groups: - - docker - - adm - -Run ---- - -Finally, start ``jupyterhub`` with the config in ``debug`` mode: - -.. code-block:: bash - - python3 -m jupyterhub -f jupyterhub_config.py --debug - -Open `http://localhost:8000 `_ in a web browser. diff --git a/docs/environments/add.md b/docs/environments/add.md new file mode 100644 index 0000000..a35a17d --- /dev/null +++ b/docs/environments/add.md @@ -0,0 +1,34 @@ +(environments-add)= + +# Adding a new environment + +Now that the repository is ready, we can add it to the JupyterHub via the user interface. 
+
+To add the new user environment, click on the `Add New` button and provide the following information: + +- `Repository URL`: the URL of the repository to build the environment from +- `Reference (git commit)`: the git commit hash to use +- `Name of the environment`: the display name of the environment. If left empty, it will be automatically generated from the repository URL. +- `Memory Limit (GB)`: the memory limit to apply to the user server. + Float values are allowed (for example a value of `3.5` corresponds to a limit of 3.5GB) +- `CPU Limit`: the number of CPUs the user server is allowed to use. + See the [JupyterHub documentation](https://jupyterhub.readthedocs.io/en/stable/api/spawner.html#jupyterhub.spawner.Spawner.cpu_limit) for more info. + +As an example: + +```{image} ../images/environments/add-new.png +:align: center +:alt: Adding a new image +:width: 100% +``` + +After clicking on the `Add Image` button, the page will automatically reload and show the list of built environments, +as well as the ones currently being built: + +```{image} ../images/environments/environments.png +:align: center +:alt: Listing the environments being built +:width: 100% +``` + +Building a new environment can take a few minutes. You can reload the page to refresh the status. diff --git a/docs/environments/add.rst b/docs/environments/add.rst deleted file mode 100644 index 4f968f8..0000000 --- a/docs/environments/add.rst +++ /dev/null @@ -1,38 +0,0 @@ -.. _environments/add: - -Adding a new environment -======================== - -Now that the repository is ready, we can add it to the JupyterHub via the user interface. - -To add the new user environment, click on the ``Add New`` button and provide the following information: - -- ``Repository URL``: the URL to the repository to build the environment from -- ``Reference (git commit)``: the git commit hash to use -- ``Name of the environment``: the display name of the environment. 
If left empty, it will be automatically generated from the repository URL. -- ``Memory Limit (GB)``: the memory limit to apply to the user server. - Float values are allowed (for example a value of ``3.5`` corresponds to a limit of 3.5GB) -- ``CPU Limit``: the number of cpus the user server is allowed to used. - See the `JupyterHub documentation `_ for more info. - - -As an example: - - -.. image:: ../images/environments/add-new.png - :alt: Adding a new image - :width: 100% - :align: center - - -After clicking on the ``Add Image`` button, the page will automatically reload and show the list of built environments, -as well as the ones currently being built: - - -.. image:: ../images/environments/environments.png - :alt: Listing the environments being built - :width: 100% - :align: center - - -Building a new environment can take a few minutes. You can reload the page to refresh the status. diff --git a/docs/environments/index.md b/docs/environments/index.md new file mode 100644 index 0000000..4a23d2b --- /dev/null +++ b/docs/environments/index.md @@ -0,0 +1,40 @@ +# User Environments + +User environments are built as immutable [Docker images](https://docs.docker.com/engine/docker-overview). +The Docker images bundle the dependencies, extensions, and predefined notebooks that should be available to all users. + +Plasma relies on the [tljh-repo2docker](https://github.com/plasmabio/tljh-repo2docker) plugin to manage environments. +The `tljh-repo2docker` uses [jupyter-repo2docker](https://repo2docker.readthedocs.io) to build the Docker images. + +Environments can be managed by admin users by clicking on `Environments` in the navigation bar: + +```{image} ../images/environments/services-navbar.png +:align: center +:alt: Manage the list of environments +:width: 50% +``` + +```{note} +The user must be an **admin** to be able to access and manage the list of environments. 
+``` + +The page will show the list of environments currently available: + +```{image} ../images/environments/environments.png +:align: center +:alt: List of built environments +:width: 100% +``` + +After a fresh install, this list will be empty. + +## Managing User Environments + +```{toctree} +:maxdepth: 3 + +prepare +add +remove +update +``` diff --git a/docs/environments/index.rst b/docs/environments/index.rst deleted file mode 100644 index 60d2b90..0000000 --- a/docs/environments/index.rst +++ /dev/null @@ -1,43 +0,0 @@ -User Environments -================= - -User environments are built as immutable `Docker images `_. -The Docker images bundle the dependencies, extensions, and predefined notebooks that should be available to all users. - -Plasma relies on the `tljh-repo2docker `_ plugin to manage environments. -The ``tljh-repo2docker`` uses `jupyter-repo2docker `_ to build the Docker images. - -Environments can be managed by admin users by clicking on ``Environments`` in the navigation bar: - -.. image:: ../images/environments/services-navbar.png - :alt: Manage the list of environments - :width: 50% - :align: center - -.. note:: - - The user must be an **admin** to be able to access and manage the list of environments. - - -The page will show the list of environments currently available: - -.. image:: ../images/environments/environments.png - :alt: List of built environments - :width: 100% - :align: center - - -After a fresh install, this list will be empty. - - -Managing User Environments --------------------------- - - -.. toctree:: - :maxdepth: 3 - - prepare - add - remove - update diff --git a/docs/environments/prepare.md b/docs/environments/prepare.md new file mode 100644 index 0000000..4163d78 --- /dev/null +++ b/docs/environments/prepare.md @@ -0,0 +1,64 @@ +# Preparing the environment + +An `environment` is defined as an immutable set of dependencies and files. 
+
+Since Plasma uses [jupyter-repo2docker](https://repo2docker.readthedocs.io), it relies on the same set of rules
+and patterns as `repo2docker` to create the environments.
+
+## Create a new repository
+
+Plasma fetches the environments from publicly accessible Git repositories hosted on code sharing platforms such as [GitHub](https://github.com).
+
+To create a new environment with its own set of dependencies, it is recommended to create a new repository on GitHub.
+
+The [plasmabio](https://github.com/plasmabio) organization defines several template repositories that can be used to bootstrap new ones:
+
+- For Python: <https://github.com/plasmabio/template-python>
+- For R: <https://github.com/plasmabio/template-r>
+- For Bash: <https://github.com/plasmabio/template-bash>
+
+To create a new repository using one of these templates, go to the organization and click on `New`.
+
+Then select the template from the `Repository Template` dropdown:
+
+```{image} ../images/environments/github-templates.png
+:align: center
+:alt: Creating a new repository from a template
+:width: 100%
+```
+
+## How to specify the dependencies
+
+`repo2docker` relies on a specific set of files to know which dependencies to install and how
+to build the Docker image.
+
+These files are listed on the [Configuration Files page](https://repo2docker.readthedocs.io/en/latest/config_files.html) of the documentation.
+
+In the case of the [Python Template](https://github.com/plasmabio/template-python), they consist of `environment.yml` and `postBuild` files:
+
+```{image} ../images/environments/configuration-files.png
+:align: center
+:alt: Configuration files in the Python template
+:width: 50%
+```
+
+(environments-prepare-binder)=
+
+## Testing on Binder
+
+Since both Plasma and Binder use `repo2docker` to build the images, it is possible to try an
+environment on Binder first to make sure it is working correctly before adding it to the JupyterHub server.
+
+The template repository has a Binder button in the `README.md` file.
This button will redirect to the +public facing instance of BinderHub, [mybinder.org](https://mybinder.org), and will build a Binder using the +configuration files in the repository. + +You can use the same approach for the other environments, and update the Binder link to point to your repository. + +Make sure to check out the documentation below for more details. + +## Extra documentation + +To learn more about `repo2docker`, check out the [Documentation](https://repo2docker.readthedocs.io). + +To learn more about `Binder`, check out the [Binder User Guide](https://mybinder.readthedocs.io/en/latest/index.html). diff --git a/docs/environments/prepare.rst b/docs/environments/prepare.rst deleted file mode 100644 index 2f6ec7c..0000000 --- a/docs/environments/prepare.rst +++ /dev/null @@ -1,70 +0,0 @@ -Preparing the environment -========================= - -An `environment` is defined as an immutable set of dependencies and files. - -Since Plasma uses `jupyter-repo2docker `_, it relies on the same set of rules -and patterns as ``repo2docker`` to create the environments. - -Create a new repository -....................... - -Plasma fetches the environments from publicly accessible Git repositories from code sharing platforms such as `GitHub `_. - -To create a new environment with its own set of dependencies, it is recommended to create a new repository on GitHub. - -The `plasmabio `_ organization defines a couple of template repositories that can be used to bootstrap new ones: - -- For Python: https://github.com/plasmabio/template-python -- For R: https://github.com/plasmabio/template-r -- For Bash: https://github.com/plasmabio/template-bash - -To create a new repository using one of these templates, go to the organization and click on ``New``. - -Then select the template from the ``Repository Template`` dropdown: - -.. 
image:: ../images/environments/github-templates.png - :alt: Creating a new repository from a template - :width: 100% - :align: center - - -How to specify the dependencies -............................... - -``repo2docker`` relies on a specific set of files to know which dependencies to install and how -to build the Docker image. - -These files are listed on the `Configuration Files page `_ in the documentation. - -In the case of the `Python Template `_, they consist of an ``environment.yml`` and ``postBuild`` files: - -.. image:: ../images/environments/configuration-files.png - :alt: Creating a new repository from a template - :width: 50% - :align: center - - - -.. _environments/prepare/binder: - -Testing on Binder -................. - -Since both Plasma and Binder use ``repo2docker`` to build the images, it is possible to try the -environment on Binder first to make sure they are working correctly before adding theme to the JupyterHub server. - -The template repository has a Binder button in the ``README.md`` file. This button will redirect to the -public facing instance of BinderHub, `mybinder.org `_, and will build a Binder using the -configuration files in the repository. - -You can use the same approach for the other environments, and update the Binder link to point to your repository. - -Make sure to check out the documentation below for more details. - -Extra documentation -................... - -To learn more about ``repo2docker``, check out the `Documentation `_. - -To learn more about ``Binder``, check out the `Binder User Guide `_. \ No newline at end of file diff --git a/docs/environments/remove.md b/docs/environments/remove.md new file mode 100644 index 0000000..2d26b55 --- /dev/null +++ b/docs/environments/remove.md @@ -0,0 +1,40 @@ +# Removing an environment + +To remove an environment, click on the `Remove` button. 
This will bring up the following confirmation dialog:
+
+```{image} ../images/environments/remove-dialog.png
+:align: center
+:alt: Removing an environment
+:width: 100%
+```
+
+After clicking on `Remove`, a spinner will be shown and the page will reload shortly after:
+
+```{image} ../images/environments/remove-spinner.png
+:align: center
+:alt: Removing an environment - spinner
+:width: 100%
+```
+
+(remove-error)=
+
+## Removing an environment returns an error
+
+It is possible that removing an environment returns an error such as the following:
+
+```{image} ../images/environments/remove-image-error.png
+:align: center
+:alt: Removing an environment - error
+:width: 100%
+```
+
+This is most likely because the environment is currently being used. We recommend asking the users to stop their server
+before attempting to remove the environment again.
+
+The environment (image) that a user is currently using is also displayed in the Admin panel:
+
+```{image} ../images/environments/admin-panel-images.png
+:align: center
+:alt: Admin panel with the image name
+:width: 100%
+```
diff --git a/docs/environments/remove.rst b/docs/environments/remove.rst
deleted file mode 100644
index 27f20d9..0000000
--- a/docs/environments/remove.rst
+++ /dev/null
@@ -1,38 +0,0 @@
-Removing an environment
-=======================
-
-To remove an environment, click on the ``Remove`` button. This will bring the following confirmation dialog:
-
-.. image:: ../images/environments/remove-dialog.png
-   :alt: Removing an environment
-   :width: 100%
-   :align: center
-
-After clicking on ``Remove``, a spinner will be shown and the page will reload shortly after:
-
-.. image:: ../images/environments/remove-spinner.png
-   :alt: Removing an environment - spinner
-   :width: 100%
-   :align: center
-
-.. _remove/error:
-
-Removing an environment returns an error
-----------------------------------------
-
-It is possible that removing an environment returns an error such as the following:
-
-.. 
image:: ../images/environments/remove-image-error.png
-   :alt: Removing an environment - error
-   :width: 100%
-   :align: center
-
-This is most likely because the environment is currently being used. We recommend asking the users to stop their server
-before attempting to remove the environment one more time.
-
-The environment (image) that a user is currently using is also displayed in the Admin panel:
-
-.. image:: ../images/environments/admin-panel-images.png
-   :alt: Admin panel with the image name
-   :width: 100%
-   :align: center
diff --git a/docs/environments/update.md b/docs/environments/update.md
new file mode 100644
index 0000000..9dacdee
--- /dev/null
+++ b/docs/environments/update.md
@@ -0,0 +1,8 @@
+# Updating an environment
+
+Since the environments are built as Docker images, they are immutable.
+
+Instead of updating an environment, it is recommended to:
+
+1. Add a new one with the new `Reference`
+2. Remove the previous one by clicking on the `Remove` button (see previous section)
diff --git a/docs/environments/update.rst b/docs/environments/update.rst
deleted file mode 100644
index ba3fd4f..0000000
--- a/docs/environments/update.rst
+++ /dev/null
@@ -1,9 +0,0 @@
-Updating an environment
-=======================
-
-Since the environments are built as Docker images, they are immutable.
-
-Instead of updating an environment, it is recommended to:
-
-1. Add a new one with the new ``Reference``
-2. Remove the previous one by clicking on the ``Remove`` button (see previous section)
diff --git a/docs/index.md b/docs/index.md
new file mode 100644
index 0000000..a842b84
--- /dev/null
+++ b/docs/index.md
@@ -0,0 +1,35 @@
+```{image} images/logo/full-logo.png
+:align: center
+:alt: Plasma Logo
+:width: 100%
+```
+
+Plasma, short for the French “Plateforme d’e-Learning pour l’Analyse de données Scientifiques MAssives”, aims at creating
+an interactive tool to teach computational analysis of massive scientific data.
+
+Plasma was born out of the need to offer a reproducible and high-performance analysis environment to our students.
+
+Based on the Jupyter ecosystem, Plasma allows the creation and the management of isolated and highly customizable environments,
+with an easy deployment on bare-metal servers or virtual machines.
+
+Plasma utilizes [tljh-repo2docker](https://github.com/plasmabio/tljh-repo2docker),
+a [repo2docker](https://github.com/jupyterhub/repo2docker) plugin for [The Littlest JupyterHub](https://tljh.jupyter.org/en/latest/).
+
+For more details, have a look at:
+
+- The web page of the [Plasma project](https://plasmabio.org/).
+- Plasma public announcement: [Plasma: A learning platform powered by Jupyter](https://blog.jupyter.org/plasma-a-learning-platform-powered-by-jupyter-1b850fcd8624) (May 2020, by Jérémy Tuloup).
+
+# Documentation
+
+```{toctree}
+:maxdepth: 2
+:titlesonly: true
+
+overview/index
+install/index
+environments/index
+permissions/index
+configuration/index
+troubleshooting/index
+contributing/index
+```
diff --git a/docs/index.rst b/docs/index.rst
deleted file mode 100644
index 4ac731d..0000000
--- a/docs/index.rst
+++ /dev/null
@@ -1,36 +0,0 @@
-.. image:: images/logo/full-logo.png
-   :alt: Plasma Logo
-   :width: 100%
-   :align: center
-
-Plasma, aka in French “Plateforme d’e-Learning pour l’Analyse de données Scientifiques MAssives”, aims at creating
-an interactive tool to teach computational analysis of massive scientific data.
-Plasma was born out of the need to offer a reproducible and high-performance analysis environment to our students.
-
-Based on the Jupyter ecosystem, Plasma allows the creation and the management of isolated and highly customizable environments,
-with an easy deployement on bare-metal servers or virtual machines.
-
-Plasma utilizes `tljh-repo2docker <https://github.com/plasmabio/tljh-repo2docker>`_,
-a `repo2docker <https://github.com/jupyterhub/repo2docker>`_ plugin for `The Littlest JupyterHub <https://tljh.jupyter.org/en/latest/>`_.
-
-For more details, have a look at:
-
-* The web page of the `Plasma project <https://plasmabio.org/>`_.
-* Plasma public announcement: `Plasma: A learning platform powered by Jupyter <https://blog.jupyter.org/plasma-a-learning-platform-powered-by-jupyter-1b850fcd8624>`_ (May 2020, by Jérémy Tuloup).
-
-Documentation
-=============
-
-.. toctree::
-   :titlesonly:
-   :maxdepth: 2
-
-   overview/index
-   install/index
-   environments/index
-   permissions/index
-   configuration/index
-   troubleshooting/index
-   contributing/index
diff --git a/docs/install/admins.md b/docs/install/admins.md
new file mode 100644
index 0000000..f9f3a8c
--- /dev/null
+++ b/docs/install/admins.md
@@ -0,0 +1,32 @@
+(install-admins)=
+
+# Adding Admin Users to JupyterHub
+
+By default the `site.yml` playbook does not add admin users to JupyterHub.
+
+New admin users can be added by adding `admin: true` to the `users-config.yml` file
+from the previous section:
+
+```yaml
+users:
+  - name: foo
+    password: PLAIN_TEXT_PASSWORD
+    groups:
+      - group_1
+      - group_2
+    admin: true
+```
+
+And re-running the `users.yml` playbook:
+
+```bash
+ansible-playbook users.yml -i hosts -u ubuntu -e @users-config.yml
+```
+
+```{warning}
+The list of existing admin users is first reset before adding the new admin users.
+```
+
+Alternatively it is also possible to use the `tljh-config` command on the server directly.
+Please refer to [the Littlest JupyterHub documentation](http://tljh.jupyter.org/en/latest/howto/admin/admin-users.html#adding-admin-users-from-the-command-line)
+for more info.
diff --git a/docs/install/admins.rst b/docs/install/admins.rst
deleted file mode 100644
index a23bebe..0000000
--- a/docs/install/admins.rst
+++ /dev/null
@@ -1,33 +0,0 @@
-.. _install/admins:
-
-Adding Admin Users to JupyterHub
-================================
-
-By default the ``site.yml`` playbook does not add admin users to JupyterHub.
-
-New admin users can be added by adding ``admin: true`` to the ``users-config.yml`` file
-from the previous section:
-
-.. 
code-block:: yaml
-
-   users:
-     - name: foo
-       password: PLAIN_TEXT_PASSWORD
-       groups:
-         - group_1
-         - group_2
-       admin: true
-
-And re-running the ``users.yml`` playbook:
-
-.. code-block:: bash
-
-   ansible-playbook users.yml -i hosts -u ubuntu -e @users-config.yml
-
-.. warning::
-
-   The list of existing admin users is first reset before adding the new admin users.
-
-Alternatively it is also possible to use the ``tljh-config`` command on the server directly.
-Please refer to `the Littlest JupyterHub documentation <http://tljh.jupyter.org/en/latest/howto/admin/admin-users.html#adding-admin-users-from-the-command-line>`_
-for more info.
diff --git a/docs/install/ansible.md b/docs/install/ansible.md
new file mode 100644
index 0000000..ce64c02
--- /dev/null
+++ b/docs/install/ansible.md
@@ -0,0 +1,292 @@
+(install-ansible)=
+
+# Deploying with Ansible
+
+## What is Ansible?
+
+Ansible is an open-source tool to automate the provisioning of servers, configuration management,
+and application deployment.
+
+Playbooks can be used to define the list of tasks that should be executed and to declare the desired
+state of the server.
+
+Check out the [How Ansible Works](https://www.ansible.com/overview/how-ansible-works) guide on the
+official Ansible documentation website for more information.
+
+## Installing Ansible
+
+Plasma comes with several `Ansible Playbooks` to automatically provision the machine with
+the system requirements, as well as installing Plasma and starting up the services.
+
+````{note}
+We recommend creating a new virtual environment to install Python packages.
+
+Using the built-in `venv` module:
+
+```bash
+python -m venv .
+source bin/activate
+```
+
+Using `conda`:
+
+```bash
+conda create -n plasma -c conda-forge python nodejs
+conda activate plasma
+```
+````
+
+Make sure [Ansible](https://docs.ansible.com/ansible/latest/index.html) is installed (the version specifier is quoted so the shell does not interpret `>=` as a redirection):
+
+```bash
+python -m pip install "ansible>=2.9"
+```
+
+```{note}
+We recommend `ansible>=2.9` to discard the warning messages
+regarding the use of `aptitude`.
+```
+
+To verify the installation, run:
+
+```bash
+which ansible
+```
+
+This should return the path to the ansible CLI tool in the virtual environment.
+For example: `/home/myuser/miniconda/envs/plasma/bin/ansible`
+
+## Running the Playbooks
+
+Check out the repository, and go to the `plasma/ansible/` directory:
+
+```bash
+git clone https://github.com/plasmabio/plasma
+cd plasma/ansible
+```
+
+Create a `hosts` file with the following content:
+
+```text
+[server]
+51.178.95.237
+
+[server:vars]
+ansible_python_interpreter=/usr/bin/python3
+```
+
+Replace the IP with the address of your server. If you already defined the hostname (see {ref}`install-https`),
+you can also specify the domain name:
+
+```text
+[server]
+dev.plasmabio.org
+
+[server:vars]
+ansible_python_interpreter=/usr/bin/python3
+```
+
+If you have multiple servers, the `hosts` file will look like the following:
+
+```text
+[server1]
+51.178.95.237
+
+[server2]
+51.178.95.238
+
+[server1:vars]
+ansible_python_interpreter=/usr/bin/python3
+
+[server2:vars]
+ansible_python_interpreter=/usr/bin/python3
+```
+
+Then run the following command, replacing `<user>` with your user on the remote machine:
+
+```bash
+ansible-playbook site.yml -i hosts -u <user>
+```
+
+Many Ubuntu systems running on cloud virtual machines have the default `ubuntu` user.
In this case, the command becomes: + +```bash +ansible-playbook site.yml -i hosts -u ubuntu +``` + +Ansible will log the progress in the terminal, and will indicate which components have changed in the process of running the playbook: + +```text +PLAY [all] ********************************************************************************************** + +TASK [Gathering Facts] ********************************************************************************** +Tuesday 07 July 2020 11:34:43 +0200 (0:00:00.043) 0:00:00.043 ********** +ok: [51.83.15.159] + +TASK [Install required system packages] ***************************************************************** +Tuesday 07 July 2020 11:34:44 +0200 (0:00:01.428) 0:00:01.472 ********** +changed: [51.83.15.159] => (item=apt-transport-https) +changed: [51.83.15.159] => (item=ca-certificates) +changed: [51.83.15.159] => (item=curl) +changed: [51.83.15.159] => (item=software-properties-common) +changed: [51.83.15.159] => (item=python3-pip) +changed: [51.83.15.159] => (item=virtualenv) +ok: [51.83.15.159] => (item=python3-setuptools) + +TASK [Add Docker GPG apt Key] *************************************************************************** +Tuesday 07 July 2020 11:37:36 +0200 (0:02:51.590) 0:02:53.062 ********** +changed: [51.83.15.159] + +TASK [Add Docker Repository] **************************************************************************** +Tuesday 07 July 2020 11:37:38 +0200 (0:00:02.577) 0:02:55.640 ********** +changed: [51.83.15.159] + +TASK [Update apt and install docker-ce] ***************************************************************** +Tuesday 07 July 2020 11:37:45 +0200 (0:00:06.394) 0:03:02.035 ********** +changed: [51.83.15.159] + +TASK [Install Docker Module for Python] ***************************************************************** +Tuesday 07 July 2020 11:38:13 +0200 (0:00:27.878) 0:03:29.914 ********** +changed: [51.83.15.159] + +PLAY [all] 
********************************************************************************************** + +TASK [Gathering Facts] ********************************************************************************** +Tuesday 07 July 2020 11:38:16 +0200 (0:00:03.123) 0:03:33.038 ********** +ok: [51.83.15.159] + +TASK [Install extra system packages] ******************************************************************** +Tuesday 07 July 2020 11:38:17 +0200 (0:00:01.295) 0:03:34.333 ********** +changed: [51.83.15.159] => (item=jq) +changed: [51.83.15.159] => (item=tree) + +TASK [Install ctop] ************************************************************************************* +Tuesday 07 July 2020 11:38:31 +0200 (0:00:13.419) 0:03:47.752 ********** +changed: [51.83.15.159] + +PLAY [all] ********************************************************************************************** + +TASK [Gathering Facts] ********************************************************************************** +Tuesday 07 July 2020 11:38:33 +0200 (0:00:02.825) 0:03:50.578 ********** +ok: [51.83.15.159] + +TASK [Install required system packages] ***************************************************************** +Tuesday 07 July 2020 11:38:35 +0200 (0:00:01.304) 0:03:51.883 ********** +ok: [51.83.15.159] => (item=curl) +ok: [51.83.15.159] => (item=python3) +ok: [51.83.15.159] => (item=python3-dev) +ok: [51.83.15.159] => (item=python3-pip) + +TASK [Download the TLJH installer] ********************************************************************** +Tuesday 07 July 2020 11:38:48 +0200 (0:00:13.532) 0:04:05.415 ********** +changed: [51.83.15.159] + +TASK [Check if the tljh-plasma is already installed] **************************************************** +Tuesday 07 July 2020 11:38:49 +0200 (0:00:00.999) 0:04:06.414 ********** +ok: [51.83.15.159] + +TASK [Upgrade the tljh-plasma plugin first if it is already installed] ********************************** +Tuesday 07 July 2020 11:38:50 +0200 (0:00:00.728) 
0:04:07.143 ********** +skipping: [51.83.15.159] + +TASK [Run the TLJH installer] *************************************************************************** +Tuesday 07 July 2020 11:38:50 +0200 (0:00:00.040) 0:04:07.183 ********** +changed: [51.83.15.159] + +TASK [Set the idle culler timeout to 1 hour] ************************************************************ +Tuesday 07 July 2020 11:40:00 +0200 (0:01:09.668) 0:05:16.852 ********** +changed: [51.83.15.159] + +TASK [Set the default memory and cpu limits] ************************************************************ +Tuesday 07 July 2020 11:40:01 +0200 (0:00:01.053) 0:05:17.905 ********** +changed: [51.83.15.159] + +TASK [Reload the hub] *********************************************************************************** +Tuesday 07 July 2020 11:40:02 +0200 (0:00:01.555) 0:05:19.461 ********** +changed: [51.83.15.159] + +TASK [Pull jupyter/repo2docker] ************************************************************************* +Tuesday 07 July 2020 11:40:06 +0200 (0:00:03.571) 0:05:23.032 ********** +changed: [51.83.15.159] + +PLAY RECAP ********************************************************************************************** +51.83.15.159 : ok=18 changed=13 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 + +Tuesday 07 July 2020 11:40:16 +0200 (0:00:10.626) 0:05:33.658 ********** +=============================================================================== +Install required system packages --------------------------------------------------------------- 171.59s +Run the TLJH installer -------------------------------------------------------------------------- 69.67s +Update apt and install docker-ce ---------------------------------------------------------------- 27.88s +Install required system packages ---------------------------------------------------------------- 13.53s +Install extra system packages ------------------------------------------------------------------- 13.42s +Pull jupyter/repo2docker 
------------------------------------------------------------------------ 10.63s +Add Docker Repository ---------------------------------------------------------------------------- 6.40s +Reload the hub ----------------------------------------------------------------------------------- 3.57s +Install Docker Module for Python ----------------------------------------------------------------- 3.12s +Install ctop ------------------------------------------------------------------------------------- 2.83s +Add Docker GPG apt Key --------------------------------------------------------------------------- 2.58s +Set the default memory and cpu limits ------------------------------------------------------------ 1.56s +Gathering Facts ---------------------------------------------------------------------------------- 1.43s +Gathering Facts ---------------------------------------------------------------------------------- 1.30s +Gathering Facts ---------------------------------------------------------------------------------- 1.30s +Set the idle culler timeout to 1 hour ------------------------------------------------------------ 1.05s +Download the TLJH installer ---------------------------------------------------------------------- 1.00s +Check if the tljh-plasma is already installed ---------------------------------------------------- 0.73s +Upgrade the tljh-plasma plugin first if it is already installed ---------------------------------- 0.04s +``` + +(install-individual-playbook)= + +## Running individual playbooks + +The `site.yml` Ansible playbook includes all the playbooks and will process them in order. + +It is however possible to run the playbooks individually. 
For example to run the `tljh.yml` playbook only (to install +and update The Littlest JupyterHub): + +```bash +ansible-playbook tljh.yml -i hosts -u ubuntu +``` + +For more in-depth details about the Ansible playbook, check out the +[official documentation](https://docs.ansible.com/ansible/latest/user_guide/playbooks.html). + +## Using a specific version of Plasma + +By default the Ansible playbooks use the latest version from the `master` branch. + +This is specified in the `ansible/vars/default.yml` file: + +```yaml +tljh_plasma: git+https://github.com/plasmabio/plasma@master#"egg=tljh-plasma&subdirectory=tljh-plasma" +``` + +But it is also possible to use a specific git commit hash, branch or tag. For example to use the version of Plasma +tagged as `v0.1`: + +```yaml +tljh_plasma: git+https://github.com/plasmabio/plasma@v0.1#"egg=tljh-plasma&subdirectory=tljh-plasma" +``` + +## List of available playbooks + +The Ansible playbooks are located in the `ansible/` directory: + +- `docker.yml`: install Docker CE on the host +- `utils.yml`: install extra system packages useful for debugging and system administration +- `users.yml`: create the tests users on the host +- `quotas.yml`: enable quotas on the host to limit disk usage +- `include-groups.yml`: add user groups to JupyterHub +- `cockpit.yml`: install Cockpit on the host as a monitoring tool +- `tljh.yml`: install TLJH and the Plasma TLJH plugin +- `https.yml`: enable HTTPS for TLJH +- `uninstall.yml`: uninstall TLJH only +- `site.yml`: the main playbook that references some of the other playbooks + +## Running playbook on a given server + +If you have multiple servers defined in the `hosts` file, you can run a playbook on a single server with the `--limit` option: + +```bash +ansible-playbook site.yml -i hosts -u ubuntu --limit server1 +``` diff --git a/docs/install/ansible.rst b/docs/install/ansible.rst deleted file mode 100644 index 1b086fc..0000000 --- a/docs/install/ansible.rst +++ /dev/null @@ -1,306 +0,0 @@ 
-.. _install/ansible: - -Deploying with Ansible -====================== - -What is Ansible? ----------------- - -Ansible is an open-source tool to automate the provisioning of servers, configuration management, -and application deployment. - -Playbooks can be used to define the list of tasks that should be executed and to declare the desired -state of the server. - -Check out the `How Ansible Works `_ guide on the Ansible -official documentation website for more information. - -Installing Ansible ------------------- - -Plasma comes with several `Ansible Playbooks` to automatically provision the machine with -the system requirements, as well as installing Plasma and starting up the services. - -.. note:: - - We recommend creating a new virtual environment to install Python packages. - - Using the built-in ``venv`` module: - - .. code-block:: bash - - python -m venv . - source bin/activate - - Using ``conda``: - - .. code-block:: bash - - conda create -n plasma -c conda-forge python nodejs - conda activate plasma - - -Make sure `Ansible `_ is installed: - -.. code-block:: bash - - python -m pip install ansible>=2.9 - -.. note:: - - We recommend ``ansible>=2.9`` to discard the warning messages - regarding the use of ``aptitude``. - - -To verify the installation, run: - -.. code-block:: bash - - which ansible - -This should return the path to the ansible CLI tool in the virtual environment. -For example: ``/home/myuser/miniconda/envs/plasma/bin/ansible`` - -Running the Playbooks ---------------------- - -Check out the repository, and go to the ``plasma/ansible/`` directory: - -.. code-block:: bash - - git clone https://github.com/plasmabio/plasma - cd plasma/ansible - -Create a ``hosts`` file with the following content: - -.. code-block:: text - - [server] - 51.178.95.237 - - [server:vars] - ansible_python_interpreter=/usr/bin/python3 - -Replace the IP corresponds to your server. 
If you already defined the hostname (see :ref:`install/https`), -you can also specify the domain name: - -.. code-block:: text - - [server] - dev.plasmabio.org - - [server:vars] - ansible_python_interpreter=/usr/bin/python3 - -If you have multiple servers, the ``hosts`` file will look like the following: - -.. code-block:: text - - [server1] - 51.178.95.237 - - [server2] - 51.178.95.238 - - [server1:vars] - ansible_python_interpreter=/usr/bin/python3 - - [server2:vars] - ansible_python_interpreter=/usr/bin/python3 - -Then run the following command after replacing ```` by your user on the remote machine: - -.. code-block:: bash - - ansible-playbook site.yml -i hosts -u - -Many Ubuntu systems running on cloud virtual machines have the default ``ubuntu`` user. In this case, the command becomes: - -.. code-block:: bash - - ansible-playbook site.yml -i hosts -u ubuntu - -Ansible will log the progress in the terminal, and will indicate which components have changed in the process of running the playbook: - -.. 
code-block:: text - - PLAY [all] ********************************************************************************************** - - TASK [Gathering Facts] ********************************************************************************** - Tuesday 07 July 2020 11:34:43 +0200 (0:00:00.043) 0:00:00.043 ********** - ok: [51.83.15.159] - - TASK [Install required system packages] ***************************************************************** - Tuesday 07 July 2020 11:34:44 +0200 (0:00:01.428) 0:00:01.472 ********** - changed: [51.83.15.159] => (item=apt-transport-https) - changed: [51.83.15.159] => (item=ca-certificates) - changed: [51.83.15.159] => (item=curl) - changed: [51.83.15.159] => (item=software-properties-common) - changed: [51.83.15.159] => (item=python3-pip) - changed: [51.83.15.159] => (item=virtualenv) - ok: [51.83.15.159] => (item=python3-setuptools) - - TASK [Add Docker GPG apt Key] *************************************************************************** - Tuesday 07 July 2020 11:37:36 +0200 (0:02:51.590) 0:02:53.062 ********** - changed: [51.83.15.159] - - TASK [Add Docker Repository] **************************************************************************** - Tuesday 07 July 2020 11:37:38 +0200 (0:00:02.577) 0:02:55.640 ********** - changed: [51.83.15.159] - - TASK [Update apt and install docker-ce] ***************************************************************** - Tuesday 07 July 2020 11:37:45 +0200 (0:00:06.394) 0:03:02.035 ********** - changed: [51.83.15.159] - - TASK [Install Docker Module for Python] ***************************************************************** - Tuesday 07 July 2020 11:38:13 +0200 (0:00:27.878) 0:03:29.914 ********** - changed: [51.83.15.159] - - PLAY [all] ********************************************************************************************** - - TASK [Gathering Facts] ********************************************************************************** - Tuesday 07 July 2020 11:38:16 +0200 (0:00:03.123) 
0:03:33.038 ********** - ok: [51.83.15.159] - - TASK [Install extra system packages] ******************************************************************** - Tuesday 07 July 2020 11:38:17 +0200 (0:00:01.295) 0:03:34.333 ********** - changed: [51.83.15.159] => (item=jq) - changed: [51.83.15.159] => (item=tree) - - TASK [Install ctop] ************************************************************************************* - Tuesday 07 July 2020 11:38:31 +0200 (0:00:13.419) 0:03:47.752 ********** - changed: [51.83.15.159] - - PLAY [all] ********************************************************************************************** - - TASK [Gathering Facts] ********************************************************************************** - Tuesday 07 July 2020 11:38:33 +0200 (0:00:02.825) 0:03:50.578 ********** - ok: [51.83.15.159] - - TASK [Install required system packages] ***************************************************************** - Tuesday 07 July 2020 11:38:35 +0200 (0:00:01.304) 0:03:51.883 ********** - ok: [51.83.15.159] => (item=curl) - ok: [51.83.15.159] => (item=python3) - ok: [51.83.15.159] => (item=python3-dev) - ok: [51.83.15.159] => (item=python3-pip) - - TASK [Download the TLJH installer] ********************************************************************** - Tuesday 07 July 2020 11:38:48 +0200 (0:00:13.532) 0:04:05.415 ********** - changed: [51.83.15.159] - - TASK [Check if the tljh-plasma is already installed] **************************************************** - Tuesday 07 July 2020 11:38:49 +0200 (0:00:00.999) 0:04:06.414 ********** - ok: [51.83.15.159] - - TASK [Upgrade the tljh-plasma plugin first if it is already installed] ********************************** - Tuesday 07 July 2020 11:38:50 +0200 (0:00:00.728) 0:04:07.143 ********** - skipping: [51.83.15.159] - - TASK [Run the TLJH installer] *************************************************************************** - Tuesday 07 July 2020 11:38:50 +0200 (0:00:00.040) 0:04:07.183 ********** - 
changed: [51.83.15.159] - - TASK [Set the idle culler timeout to 1 hour] ************************************************************ - Tuesday 07 July 2020 11:40:00 +0200 (0:01:09.668) 0:05:16.852 ********** - changed: [51.83.15.159] - - TASK [Set the default memory and cpu limits] ************************************************************ - Tuesday 07 July 2020 11:40:01 +0200 (0:00:01.053) 0:05:17.905 ********** - changed: [51.83.15.159] - - TASK [Reload the hub] *********************************************************************************** - Tuesday 07 July 2020 11:40:02 +0200 (0:00:01.555) 0:05:19.461 ********** - changed: [51.83.15.159] - - TASK [Pull jupyter/repo2docker] ************************************************************************* - Tuesday 07 July 2020 11:40:06 +0200 (0:00:03.571) 0:05:23.032 ********** - changed: [51.83.15.159] - - PLAY RECAP ********************************************************************************************** - 51.83.15.159 : ok=18 changed=13 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 - - Tuesday 07 July 2020 11:40:16 +0200 (0:00:10.626) 0:05:33.658 ********** - =============================================================================== - Install required system packages --------------------------------------------------------------- 171.59s - Run the TLJH installer -------------------------------------------------------------------------- 69.67s - Update apt and install docker-ce ---------------------------------------------------------------- 27.88s - Install required system packages ---------------------------------------------------------------- 13.53s - Install extra system packages ------------------------------------------------------------------- 13.42s - Pull jupyter/repo2docker ------------------------------------------------------------------------ 10.63s - Add Docker Repository ---------------------------------------------------------------------------- 6.40s - Reload the hub 
----------------------------------------------------------------------------------- 3.57s - Install Docker Module for Python ----------------------------------------------------------------- 3.12s - Install ctop ------------------------------------------------------------------------------------- 2.83s - Add Docker GPG apt Key --------------------------------------------------------------------------- 2.58s - Set the default memory and cpu limits ------------------------------------------------------------ 1.56s - Gathering Facts ---------------------------------------------------------------------------------- 1.43s - Gathering Facts ---------------------------------------------------------------------------------- 1.30s - Gathering Facts ---------------------------------------------------------------------------------- 1.30s - Set the idle culler timeout to 1 hour ------------------------------------------------------------ 1.05s - Download the TLJH installer ---------------------------------------------------------------------- 1.00s - Check if the tljh-plasma is already installed ---------------------------------------------------- 0.73s - Upgrade the tljh-plasma plugin first if it is already installed ---------------------------------- 0.04s - -.. _install/individual-playbook: - - -Running individual playbooks ----------------------------- - -The ``site.yml`` Ansible playbook includes all the playbooks and will process them in order. - -It is however possible to run the playbooks individually. For example to run the ``tljh.yml`` playbook only (to install -and update The Littlest JupyterHub): - -.. code-block:: bash - - ansible-playbook tljh.yml -i hosts -u ubuntu - -For more in-depth details about the Ansible playbook, check out the -`official documentation `_. - - -Using a specific version of Plasma ----------------------------------- - -By default the Ansible playbooks use the latest version from the ``master`` branch. 
- -This is specified in the ``ansible/vars/default.yml`` file: - -.. code-block:: yaml - - tljh_plasma: git+https://github.com/plasmabio/plasma@master#"egg=tljh-plasma&subdirectory=tljh-plasma" - -But it is also possible to use a specific git commit hash, branch or tag. For example to use the version of Plasma -tagged as ``v0.1``: - -.. code-block:: yaml - - tljh_plasma: git+https://github.com/plasmabio/plasma@v0.1#"egg=tljh-plasma&subdirectory=tljh-plasma" - - -List of available playbooks ---------------------------- - -The Ansible playbooks are located in the ``ansible/`` directory: - -- ``docker.yml``: install Docker CE on the host -- ``utils.yml``: install extra system packages useful for debugging and system administration -- ``users.yml``: create the tests users on the host -- ``quotas.yml``: enable quotas on the host to limit disk usage -- ``include-groups.yml``: add user groups to JupyterHub -- ``cockpit.yml``: install Cockpit on the host as a monitoring tool -- ``tljh.yml``: install TLJH and the Plasma TLJH plugin -- ``https.yml``: enable HTTPS for TLJH -- ``uninstall.yml``: uninstall TLJH only -- ``site.yml``: the main playbook that references some of the other playbooks - - -Running playbook on a given server ----------------------------------- - -If you have multiple servers defined in the ``hosts`` file, you can run a playbook on a single server with the ``--limit`` option: - -.. code-block:: bash - - ansible-playbook site.yml -i hosts -u ubuntu --limit server1 diff --git a/docs/install/https.md b/docs/install/https.md new file mode 100644 index 0000000..28f44e1 --- /dev/null +++ b/docs/install/https.md @@ -0,0 +1,79 @@ +(install-https)= + +# HTTPS + +```{warning} +HTTPS is **not** enabled by default. + +**We do not recommend deploying JupyterHub without HTTPS for production use.** + +However in some situations it can be handy to do so, for example when testing the setup. 
+``` + +## Enable HTTPS + +Support for HTTPS is handled automatically thanks to [Let's Encrypt](https://letsencrypt.org), which also +handles the automatic renewal of the certificates when they are about to expire. + +In your `hosts` file, add the `name_server` and `letsencrypt_email` variables: + +```text +[server] +51.178.95.237 + +[server:vars] +ansible_python_interpreter=/usr/bin/python3 +name_server=dev.plasmabio.org +letsencrypt_email=contact@plasmabio.org +``` + +If you have multiple servers, the `hosts` file will look like the following: + +```text +[server1] +51.178.95.237 + +[server2] +51.178.95.238 + +[server1:vars] +ansible_python_interpreter=/usr/bin/python3 +name_server=dev1.plasmabio.org +letsencrypt_email=contact@plasmabio.org + +[server2:vars] +ansible_python_interpreter=/usr/bin/python3 +name_server=dev2.plasmabio.org +letsencrypt_email=contact@plasmabio.org +``` + +Replace these values with the ones you want to use. + +Then, run the `https.yml` playbook: + +```bash +ansible-playbook https.yml -i hosts -u ubuntu +``` + +This will reload the proxy to take the changes into account. + +It might take a few minutes for the certificates to be set up and the changes to take effect. + +## How to make the domain point to the IP of the server + +The domain used in the playbook variables (for example `dev.plasmabio.org`) should also point to the IP of the +server running JupyterHub. + +This is typically done by logging in to the registrar website and adding a new entry to the DNS records. + +You can refer to the [documentation for The Littlest JupyterHub on how to enable HTTPS](http://tljh.jupyter.org/en/latest/howto/admin/https.html#enable-https) +for more details. + +## Manual HTTPS + +To use an existing SSL key and certificate, you can refer to the +[Manual HTTPS with existing key and certificate](http://tljh.jupyter.org/en/latest/howto/admin/https.html#manual-https-with-existing-key-and-certificate) +documentation for TLJH.
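For reference, the manual setup described in that documentation comes down to pointing TLJH at the existing key and certificate with `tljh-config` (a sketch based on the TLJH documentation; the paths under `/etc/mycerts/` are examples and should be replaced with the real locations):

```bash
# Point TLJH at an existing key/certificate pair and enable HTTPS
sudo tljh-config set https.enabled true
sudo tljh-config set https.tls.key /etc/mycerts/mydomain.key
sudo tljh-config set https.tls.cert /etc/mycerts/mydomain.cert

# Reload the proxy so the new settings take effect
sudo tljh-config reload proxy
```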
+ +This can also be integrated in the `https.yml` playbook by replacing the `tljh-config` commands with the ones mentioned +in the documentation. diff --git a/docs/install/https.rst b/docs/install/https.rst deleted file mode 100644 index 562fdcd..0000000 --- a/docs/install/https.rst +++ /dev/null @@ -1,83 +0,0 @@ -.. _install/https: - -HTTPS -===== - -.. warning:: - - HTTPS is **not** enabled by default. - - **We do not recommend deploying JupyterHub without HTTPS for production use.** - - However in some situations it can be handy to do so, for example when testing the setup. - -Enable HTTPS ------------- - -Support for HTTPS is handled automatically thanks to `Let's Encrypt `_, which also -handles the automatic renewal of the certificates when they are about to expire. - -In your ``hosts`` file, add the ``name_server`` and ``letsencrypt_email`` variables: - -.. code-block:: text - - [server] - 51.178.95.237 - - [server:vars] - ansible_python_interpreter=/usr/bin/python3 - name_server=dev.plasmabio.org - letsencrypt_email=contact@plasmabio.org - -If you have multiple servers, the ``hosts`` file will look like the following: - -.. code-block:: text - - [server1] - 51.178.95.237 - - [server2] - 51.178.95.238 - - [server1:vars] - ansible_python_interpreter=/usr/bin/python3 - name_server=dev1.plasmabio.org - letsencrypt_email=contact@plasmabio.org - - [server2:vars] - ansible_python_interpreter=/usr/bin/python3 - name_server=dev2.plasmabio.org - letsencrypt_email=contact@plasmabio.org - -Modify these values to the ones you want to use. - -Then, run the ``https.yml`` playbook: - -.. code-block:: bash - - ansible-playbook https.yml -i hosts -u ubuntu - -This will reload the proxy to take the changes into account. - -It might take a few minutes for the certificates to be setup and the changes to take effect.
- -How to make the domain point to the IP of the server ----------------------------------------------------- - -The domain used in the playbook variables (for example ``dev.plasmabio.org``), should also point to the IP of the -server running JupyterHub. - -This is typically done by logging in to the registrar website and adding a new entry to the DNS records. - -You can refer to the `documentation for The Littlest JupyterHub on how to enable HTTPS `_ -for more details. - -Manual HTTPS ------------- - -To use an existing SSL key and certificate, you can refer to the -`Manual HTTPS with existing key and certificate `_ -documentation for TLJH. - -This can also be integrated in the ``https.yml`` playbook by replacing the ``tljh-config`` commands to the ones mentioned -in the documentation. diff --git a/docs/install/index.md b/docs/install/index.md new file mode 100644 index 0000000..dbb7496 --- /dev/null +++ b/docs/install/index.md @@ -0,0 +1,15 @@ +# Installation + +This guide will walk you through the steps to install Plasma on your own server. + +```{toctree} +:maxdepth: 3 + +requirements +ansible +https +users +admins +upgrade +uninstall +``` diff --git a/docs/install/index.rst b/docs/install/index.rst deleted file mode 100644 index b0f7e3a..0000000 --- a/docs/install/index.rst +++ /dev/null @@ -1,15 +0,0 @@ -Installation -============ - -This guide will walk you through the steps to install Plasma on your own server. - -.. 
toctree:: - :maxdepth: 3 - - requirements - ansible - https - users - admins - upgrade - uninstall diff --git a/docs/install/requirements.md b/docs/install/requirements.md new file mode 100644 index 0000000..8e0e105 --- /dev/null +++ b/docs/install/requirements.md @@ -0,0 +1,94 @@ +(install-requirements)= + +# Requirements + +Before installing Plasma, you will need: + +- A server running at least **Ubuntu 18.04** +- The public IP of the server +- SSH access to the machine +- A `privileged user` on the remote machine that can issue commands using `sudo` + +(install-ssh-key)= + +## Adding the public SSH key to the server + +To deploy Plasma, you need to be able to access the server via SSH. + +This is typically done by copying the key to the remote server using the `ssh-copy-id` command, or +by providing the key during the creation of the server (see section below). + +To copy the SSH key to the server: + +```bash +ssh-copy-id ubuntu@51.178.95.143 +``` + +Alternatively, the SSH key can be copied from `~/.ssh/id_rsa.pub`, and looks like the following: + +```bash +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCeeTSTvuZ4KzWBwUj2yIKNhX9Jw+LLdNfjOaVONfnYrlVYywRLexRcKJVcUOL8ofK/RXW2xuRQzUu4Kpa0eKMM+iUPEKFF+RtLQGxn3aCVctvXprzrugm69unWot+rc2aBosX99j64U74KkEaLquuBZDd/hmqxxbCr9DRYqb/aFIjfhBS8V0QdKVln1jPoy/nPCY6HMnovicExjB/E5s5lTj/2qUoNXWF5r4zHQlXuc6CY0NN11F2/5n0KfSD3eunBd26zrhzpOJbcyftUV9YOICjJXWOLLOPFn2mqXsPa0k/xRCjCiLv/aiU8xF5mJvYDEJ2jigqGihzfgPz4UEwH0bqQRsq9LrFYVcFLQprCknxxt9F2WgO6nv/V5kgRSi3WOzRt12NcWjg1um/C2TTK9bSqFTEMXlPlsLxDa7Js/kUMZh6N3rIzTsQpXuhKjQLxZ5TReUUdsGyAtU0eQv5rrJBr6ML02C9EMZ5NvduPs1w44+39WONCmoQoKBkiFIYfN0EV7Ps6kM6okzT7Cu8n4DOlsrdLT1b4gSK891461EjIHsfQsD+m53tKZx3Q2FTPJkPofUISzUXzRnXoPflWPbvwLl42qEjWJ4eZv0LHDtJhyr1RvRCXi7P24DdbLbjTjWy3kpNWTdO3b0Zto90ekHNElriHlM1BeqFo+6ABnw== your_email@example.com +``` + +It can then be manually added to `~/.ssh/authorized_keys` on the server.
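If `ssh-copy-id` is not available, the same append can be done over a plain SSH session (a sketch; it assumes the key is in `~/.ssh/id_rsa.pub` locally, the remote user is `ubuntu`, and password authentication still works):

```bash
# Append the local public key to the server's authorized_keys,
# creating ~/.ssh with the expected permissions if needed
cat ~/.ssh/id_rsa.pub | ssh ubuntu@51.178.95.143 \
  'mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'
```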
+ +For more information, check out [this tutorial on DigitalOcean to set up SSH Keys on Ubuntu 18.04](https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys-on-ubuntu-1804). + +(requirements-server)= + +## Creating a new server (optional) + +If you don't already have a server (or want to test the setup from scratch), you can create a new one using a cloud provider. + +[The Littlest JupyterHub documentation](https://the-littlest-jupyterhub.readthedocs.io/en/latest/install/index.html) +provides detailed guides for different cloud providers. + +You can pick one of them, and stop at the point where the TLJH script (starting with `#!/bin/bash`) should be provided +(this part is covered in the next section). + +During the installation steps, you will be able to specify the SSH key to use to connect to the server. + +The key must first be added to the list of available keys by using the cloud provider interface: + +```{image} ../images/install/add-ssh-key.png +:align: center +:alt: Add a new SSH key +:width: 75% +``` + +When asked to choose an SSH key, select the one you just added: + +```{image} ../images/install/select-ssh-key.png +:align: center +:alt: Select the SSH key +:width: 75% +``` + +## Testing the connection + +For a server with an `ubuntu` user, validate that you have access to it with: + +```bash
ssh -t ubuntu@51.178.95.143 echo "test" +``` + +Which should output the following: + +```bash
test +Connection to 51.178.95.143 closed. +``` + +## Updating the local SSH config (optional) + +Depending on the server used for the deployment, see {ref}`requirements-server`, you might want to add the +following to your local SSH config located in `~/.ssh/config`: + +```bash
Host * +  ServerAliveInterval 60 +  ServerAliveCountMax 10 +``` + +These settings help keep the connection to the server alive while the deployment is happening, +or if you have an open SSH connection to the server.
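To check that these options are actually picked up, `ssh -G` (available in OpenSSH 6.8 and later) prints the effective configuration ssh would use for a given host:

```bash
# Show the keep-alive settings that apply to this host
ssh -G 51.178.95.143 | grep -i serveralive
```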
diff --git a/docs/install/requirements.rst b/docs/install/requirements.rst deleted file mode 100644 index 8eae3b2..0000000 --- a/docs/install/requirements.rst +++ /dev/null @@ -1,98 +0,0 @@ -.. _install/requirements: - -Requirements -============ - -Before installing Plasma, you will need: - -* A server running at least **Ubuntu 18.04** -* The public IP of the server -* SSH access to the machine -* A `priviledged user` on the remote machine that can issue commands using ``sudo`` - - -.. _install/ssh-key: - -Adding the public SSH key to the server ---------------------------------------- - -To deploy Plasma, you need to be able to access the server via SSH. - -This is typically done by copying the key to the remote server using the ``ssh-copy-id`` command, or -by providing the key during the creation of the server (see section below). - -To copy the SSH key to the server: - -.. code-block:: bash - - ssh-copy-id ubuntu@51.178.95.143 - -Alternatively, the SSH key can be copied from ``~/.ssh/id_rsa.pub``, and looks like the following: - -.. code-block:: bash - - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCeeTSTvuZ4KzWBwUj2yIKNhX9Jw+LLdNfjOaVONfnYrlVYywRLexRcKJVcUOL8ofK/RXW2xuRQzUu4Kpa0eKMM+iUPEKFF+RtLQGxn3aCVctvXprzrugm69unWot+rc2aBosX99j64U74KkEaLquuBZDd/hmqxxbCr9DRYqb/aFIjfhBS8V0QdKVln1jPoy/nPCY6HMnovicExjB/E5s5lTj/2qUoNXWF5r4zHQlXuc6CY0NN11F2/5n0KfSD3eunBd26zrhzpOJbcyftUV9YOICjJXWOLLOPFn2mqXsPa0k/xRCjCiLv/aiU8xF5mJvYDEJ2jigqGihzfgPz4UEwH0bqQRsq9LrFYVcFLQprCknxxt9F2WgO6nv/V5kgRSi3WOzRt12NcWjg1um/C2TTK9bSqFTEMXlPlsLxDa7Js/kUMZh6N3rIzTsQpXuhKjQLxZ5TReUUdsGyAtU0eQv5rrJBr6ML02C9EMZ5NvduPs1w44+39WONCmoQoKBkiFIYfN0EV7Ps6kM6okzT7Cu8n4DOlsrdLT1b4gSK891461EjIHsfQsD+m53tKZx3Q2FTPJkPofUISzUXzRnXoPflWPbvwLl42qEjWJ4eZv0LHDtJhyr1RvRCXi7P24DdbLbjTjWy3kpNWTdO3b0Zto90ekHNElriHlM1BeqFo+6ABnw== your_email@example.com - -It can then be manually added to ``~/.ssh/authorized_keys`` on the server. 
- -For more information, checkout `this tutorial on DigitalOcean to set up SSH Keys on Ubuntu 18.04 `_. - -.. _requirements/server: - -Creating a new server (optional) --------------------------------- - -If you don't already have a server (or want to test the setup from scratch) you can create a new one using a cloud provider. - -`The Littlest JupyterHub documentation `_ -provides detailed guides for different cloud providers. - -You can pick one of them, and stop at the point where the TLJH script (starting with ``#!/bin/bash``) should be provided -(this part is covered in the next section). - -During the installation steps, you will be able to specify the SSH key to use to connect to the server. - -The key must first be added to the list of available keys by using the cloud provider interface: - -.. image:: ../images/install/add-ssh-key.png - :alt: Add a new SSH key - :width: 75% - :align: center - -When asked to choose an SSH key, select the one you just added: - -.. image:: ../images/install/select-ssh-key.png - :alt: Select the SSH key - :width: 75% - :align: center - -Testing the connection ----------------------- - -For a server with an ``ubuntu`` user, validate that you have access to it with: - -.. code-block:: bash - - ssh -t ubuntu@51.178.95.143 echo "test" - -Which should output the following: - -.. code-block:: bash - - test - Connection to 51.178.95.143 closed. - -Updating the local SSH config (optional) ----------------------------------------- - -Depending on the server used for the deployment, see :ref:`requirements/server`, you might want to add the -following to your local SSH config located in ``~/.ssh/config``: - -.. code-block:: bash - - Host * - ServerAliveInterval 60 - ServerAliveCountMax 10 - -These settings help keep the connection to server alive while the deployment is happening, -or if you have an open SSH connection to the server. 
diff --git a/docs/install/uninstall.md b/docs/install/uninstall.md new file mode 100644 index 0000000..2a83399 --- /dev/null +++ b/docs/install/uninstall.md @@ -0,0 +1,21 @@ +# Uninstalling + +If you want to uninstall The Littlest JupyterHub from the machine, you can: + +- Destroy the VM: this is the recommended way as it is easier to start fresh +- Run the `uninstall.yml` Ansible playbook if destroying the VM is not an option + +To run the playbook: + +```bash +ansible-playbook uninstall.yml -i hosts -u ubuntu +``` + +```{note} +The playbook will **only** uninstall TLJH from the server. + +It will **not**: + +- delete user data +- remove environments and Docker images +``` diff --git a/docs/install/uninstall.rst b/docs/install/uninstall.rst deleted file mode 100644 index e555101..0000000 --- a/docs/install/uninstall.rst +++ /dev/null @@ -1,22 +0,0 @@ -Uninstalling -============ - -If you want to uninstall The Littlest JupyterHub from the machine, you can: - -- Destroy the VM: this is the recommended way as it is easier to start fresh -- Run the ``uninstall.yml`` Ansible playbook if destroying the VM is not an option - -To run the playbook: - -.. code-block:: bash - - ansible-playbook uninstall.yml -i hosts -u ubuntu - -.. note:: - - The playbook will **only** uninstall TLJH from the server. - - It will **not**: - - - delete user data - - remove environments and Docker images diff --git a/docs/install/upgrade.rst b/docs/install/upgrade.md similarity index 50% rename from docs/install/upgrade.rst rename to docs/install/upgrade.md index 0c096af..ffd8865 100644 --- a/docs/install/upgrade.rst +++ b/docs/install/upgrade.md @@ -1,53 +1,45 @@ -Upgrading -========= +# Upgrading -Backup ------- +## Backup Before performing an upgrade, you might want to back up some components of the stack. -Database -........ +### Database JupyterHub keeps the state in a sqlite database, with information such as the last login and whether a user is an admin or not. 
-TLJH keeps the database in the ``/opt/tljh/state`` directory on the server. The full path to the database is ``/opt/tljh/state/jupyterhub.sqlite``. +TLJH keeps the database in the `/opt/tljh/state` directory on the server. The full path to the database is `/opt/tljh/state/jupyterhub.sqlite`. To know more about backing up the database please refer to: -- `The JupyterHub documentation `_ -- `The TLJH documentation on the state files `_ +- [The JupyterHub documentation](https://jupyterhub.readthedocs.io/en/stable/admin/upgrading.html#backup-database-config) +- [The TLJH documentation on the state files](http://tljh.jupyter.org/en/latest/topic/installer-actions.html#state-files) -For more info on where TLJH is installed: `What does the installer do? `_ +For more info on where TLJH is installed: [What does the installer do?](http://tljh.jupyter.org/en/latest/topic/installer-actions.html) -Plasma TLJH Plugin -.................. +### Plasma TLJH Plugin This TLJH plugin is a regular Python package. -It is installed in ``/opt/tljh/hub/lib/python3.6/site-packages/tljh_plasma``, and doesn't need to be backed up +It is installed in `/opt/tljh/hub/lib/python3.6/site-packages/tljh_plasma`, and doesn't need to be backed up as it doesn't hold any state. -User Environments -................. +### User Environments The user environments correspond to Docker images on the host. There is no need to back them up as they will stay untouched if not removed manually. -User Data -......... +### User Data It is generally recommended to have a backup strategy for important data such as user data. This can be achieved by setting up tools that for example sync the user home directories to another machine on a regular basis. -Check out the :ref:`persistence/user-data` section to know more about user data. +Check out the {ref}`persistence-user-data` section to know more about user data. 
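As a concrete illustration of the database step above, a minimal sketch of a pre-upgrade backup (the `/opt/tljh/state` path comes from the TLJH documentation; the timestamped destination name is only an example):

```bash
# Copy the JupyterHub state database to a dated backup file
sudo cp /opt/tljh/state/jupyterhub.sqlite \
  "/opt/tljh/state/jupyterhub.sqlite.$(date +%Y-%m-%d).bak"
```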
+## Running the playbook -Running the playbook --------------------- - -To perform an upgrade of the setup, you can re-run the playbooks as explained in :ref:`install/ansible`. +To perform an upgrade of the setup, you can re-run the playbooks as explained in {ref}`install-ansible`. Re-running the playbooks will: @@ -61,6 +53,6 @@ However, performing an upgrade does not: - Remove user environments (Docker images) - Delete user data -In most cases, it is enough to only run the ``tljh.yml`` playbook to perform the upgrade. +In most cases, it is enough to only run the `tljh.yml` playbook to perform the upgrade. -Refer to :ref:`install/individual-playbook` for more info. +Refer to {ref}`install-individual-playbook` for more info. diff --git a/docs/install/users.md b/docs/install/users.md new file mode 100644 index 0000000..0a53b03 --- /dev/null +++ b/docs/install/users.md @@ -0,0 +1,233 @@ +(install-users)= + +# Creating users and user groups on the host + +```{note} +By default the `site.yml` playbook does not create any users nor user groups on the host machine. + +This step is optional because in some scenarios users and user groups might already exist on the host machine +and don't need to be created. +``` + +(install-users-playbook)= + +## Using the users playbook + +The `ansible/` directory contains a `users.yml` playbook that makes it easier to create new users and user groups on the host in batches. + +First you need to create a new `users-config.yml` with the following content: + +```yaml +plasma_groups: + - group_1 + - group_2 + - group_3 + +users: + - name: foo + password: PLAIN_TEXT_PASSWORD + groups: + - group_1 + - group_2 + + - name: bar + password: PLAIN_TEXT_PASSWORD + groups: + - group_3 +``` + +Replace the `groups`, `name` and `password` entries by the real values. + +User groups will be later used to adjust permissions to access environments (see {ref}`permissions-groups`). + +`password` should correspond to the plain text value of the user password. 
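For reference, a crypt-style SHA-512 hash of the kind Ansible's `user` module accepts can be produced with OpenSSL (a sketch; it requires OpenSSL 1.1.1 or later, and the salt and password shown are placeholders):

```bash
# Generate an SHA-512 crypt hash (-6 selects SHA-512, the format used
# in /etc/shadow); replace the salt and password with real values
openssl passwd -6 -salt examplesalt 'PLAIN_TEXT_PASSWORD'
```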
+ +For more info about password hashing, please refer to the +[Ansible Documentation](http://docs.ansible.com/ansible/latest/reference_appendices/faq.html#how-do-i-generate-encrypted-passwords-for-the-user-module) +to learn how to generate the encrypted passwords. + +When the user file is ready, execute the `users.yml` playbook with the following command: + +```bash
ansible-playbook users.yml -i hosts -u ubuntu -e @users-config.yml +``` + +By default the user home directory is created in `/home`. A custom home directory can be configured by setting the variable `home_path` in the `hosts` file. +For instance: + +```text +[server] +51.178.95.237 + +[server:vars] +ansible_python_interpreter=/usr/bin/python3 +name_server=dev.plasmabio.org +letsencrypt_email=contact@plasmabio.org +home_path=/srv/home +``` + +```{note} +The first time you run this playbook, it will fail with the error message `setquota: not found`. +This is expected, as quotas are not yet enforced at that point. +``` + +## Handling secrets + +```{warning} +Passwords are sensitive data. The `users.yml` playbook mentioned in the previous section +automatically encrypts the password from a plain text file. + +For production use, you should consider protecting the passwords using the +[Ansible Vault](https://docs.ansible.com/ansible/latest/user_guide/playbooks_vault.html#playbooks-vault). +``` + +This `users.yml` playbook is mostly provided as a convenience script to quickly bootstrap the host machine with +a predefined set of users. + +You are free to choose a different approach for managing users that suits your needs. + +## Set Disk Quotas + +Users can save their files on the host machine in their home directory. More details in {ref}`persistence-user-data`. + +If you would like to enable quotas for users to limit how much disk space they can use, you can use the `quotas.yml` Ansible playbook.
+ +The playbook is heavily inspired by the excellent [DigitalOcean tutorial on user quotas](https://www.digitalocean.com/community/tutorials/how-to-set-filesystem-quotas-on-ubuntu-18-04). +Check it out for more info on user and group quotas. + +```{warning} +It is recommended to do the initial quota setup **before** letting users connect to the hub. +``` + +### Finding the source device + +Run the `quotas.yml` playbook with the `discover` tag to find out the device and path on which to apply quotas: + +```bash
ansible-playbook quotas.yml -i hosts -u ubuntu --tags discover +``` + +The output will be similar to: + +```text +msg: |- +  LABEL=cloudimg-rootfs / ext4 defaults 0 0 +  LABEL=UEFI /boot/efi vfat defaults 0 0 +``` + +or + +```text +msg: |- +  /dev/disk/by-uuid/55fe8be8-0e4e-46cd-a643-d74284eae15a / ext4 defaults 0 0 +  /dev/disk/by-uuid/ecae1a6e-f240-4f3c-adda-56d22691f159 /srv ext4 defaults 0 0 +``` + +In our case, we want to apply quotas on device `LABEL=cloudimg-rootfs`, which is mounted on path `/`. +Copy these values into the `hosts` file: + +```text +[server] +51.178.95.237 + +[server:vars] +ansible_python_interpreter=/usr/bin/python3 +name_server=dev.plasmabio.org +letsencrypt_email=contact@plasmabio.org +quota_device_name=LABEL=cloudimg-rootfs +quota_device_path=/ +``` + +```{warning} +Be extra cautious when entering the device name and path in the `hosts` file. +A typo could prevent your device from mounting and require physical intervention on the server (or a reset if it is a virtual machine). +``` + +### Enabling quotas + +To enable quotas on the machine, execute the `quotas.yml` playbook (this time without the `discover` tag): + +```bash
ansible-playbook quotas.yml -i hosts -u ubuntu +``` + +### Setting the user quotas + +The `users.yml` playbook can also be used to set the user quotas.
+In `users-config.yml` you can define quotas as follows:
+
+```yaml
+# default quotas for all users
+quota:
+  soft: 10G
+  hard: 12G
+
+plasma_groups:
+  - group_1
+  - group_2
+  - group_3
+
+users:
+  - name: foo
+    password: foo
+    groups:
+      - group_1
+      - group_2
+    # override quota for a specific user
+    quota:
+      soft: 5G
+      hard: 10G
+
+  - name: bar
+    password: bar
+    groups:
+      - group_3
+```
+
+Then re-run the `users.yml` playbook as mentioned in {ref}`install-users-playbook`.
+
+For example, if a user exceeds their quota when creating a file from the terminal inside the container, they will be shown the following message:
+
+```text
+foo@549539d386e5:~/plasmabio-template-python-master$ fallocate -l 12G test.img
+fallocate: fallocate failed: Disk quota exceeded
+```
+
+On the host machine, a user can check their quota by running the following command:
+
+```text
+foo@test-server:~$ quota -vs
+Disk quotas for user foo (uid 1006):
+     Filesystem   space   quota   limit   grace   files   quota   limit   grace
+      /dev/sda1     16K   5120M  10240M
+```
+
+If the quota is exceeded and the user tries to create a new notebook from the interface, they will be shown an error dialog:
+
+```{image} ../images/install/quota-exceeded.png
+:align: center
+:alt: User quota exceeded
+:width: 80%
+```
+
+On the host machine, an admin can check user quotas by running the following command:
+
+```text
+ubuntu@plasmabio-pierrepo:~$ sudo repquota -as
+*** Report for user quotas on device /dev/sda1
+Block grace time: 7days; Inode grace time: 7days
+                        Space limits                File limits
+User            used    soft    hard  grace    used  soft  hard  grace
+----------------------------------------------------------------------
+root      --   3668M      0K      0K            160k     0     0
+daemon    --     64K      0K      0K               4     0     0
+man       --   1652K      0K      0K             141     0     0
+syslog    --   1328K      0K      0K              11     0     0
+_apt      --     24K      0K      0K               4     0     0
+lxd       --      4K      0K      0K               1     0     0
+landscape --      8K      0K      0K               3     0     0
+pollinate --      4K      0K      0K               2     0     0
+ubuntu    --     84K      0K      0K              16     0     0
+foo       --     16K   5120M  10240M               4     0     0
+bar       --     16K  10240M  12288M               4     0     0
+#62583    --      4K      0K      0K               2     0     0
+```
diff --git a/docs/install/users.rst b/docs/install/users.rst
deleted file mode 100644
index 7012d90..0000000
--- a/docs/install/users.rst
+++ /dev/null
@@ -1,245 +0,0 @@
-.. _install/users:
-
-Creating users and user groups on the host
-==========================================
-
-.. note::
-    By default the ``site.yml`` playbook does not create any users or user groups on the host machine.
-
-    This step is optional because in some scenarios users and user groups might already exist on the host machine
-    and don't need to be created.
-
-.. _install/users-playbook:
-
-Using the users playbook
-------------------------
-
-The ``ansible/`` directory contains a ``users.yml`` playbook that makes it easier to create new users and user groups on the host in batches.
-
-First you need to create a new ``users-config.yml`` with the following content:
-
-.. code-block:: yaml
-
-    plasma_groups:
-      - group_1
-      - group_2
-      - group_3
-
-    users:
-      - name: foo
-        password: PLAIN_TEXT_PASSWORD
-        groups:
-          - group_1
-          - group_2
-
-      - name: bar
-        password: PLAIN_TEXT_PASSWORD
-        groups:
-          - group_3
-
-Replace the ``groups``, ``name`` and ``password`` entries with the real values.
-
-User groups will later be used to adjust permissions to access environments (see :ref:`permissions/groups`).
-
-``password`` should correspond to the plain text value of the user password.
-
-For more info about password hashing, please refer to the
-`Ansible Documentation `_
-to learn how to generate the encrypted passwords.
-
-When the user file is ready, execute the ``users.yml`` playbook with the following command:
-
-.. code-block:: bash
-
-    ansible-playbook users.yml -i hosts -u ubuntu -e @users-config.yml
-
-By default the user home directory is created in ``/home``. A custom home directory can be configured by setting the variable ``home_path`` in the ``hosts`` file.
-For instance:
-
-.. code-block:: text
-
-    [server]
-    51.178.95.237
-
-    [server:vars]
-    ansible_python_interpreter=/usr/bin/python3
-    name_server=dev.plasmabio.org
-    letsencrypt_email=contact@plasmabio.org
-    home_path=/srv/home
-
-
-.. note::
-
-    The first time, this playbook will fail, complaining with the error message ``setquota: not found``.
-    This is expected because quotas are not yet enforced.
-
-
-Handling secrets
-----------------
-
-.. warning::
-
-    Passwords are sensitive data. The ``users.yml`` playbook mentioned in the previous section
-    automatically encrypts the password from a plain text file.
-
-    For production use, you should consider protecting the passwords using the
-    `Ansible Vault `_.
-
-This ``users.yml`` playbook is mostly provided as a convenience script to quickly bootstrap the host machine with
-a predefined set of users.
-
-You are free to choose a different approach for managing users that suits your needs.
-
-Set Disk Quotas
----------------
-
-Users can save their files on the host machine in their home directory. More details in :ref:`persistence/user-data`.
-
-If you would like to enable quotas for users to limit how much disk space they can use, you can use the ``quotas.yml`` Ansible playbook.
-
-The playbook is heavily inspired by the excellent `DigitalOcean tutorial on user quotas `_.
-Check it out for more info on user and group quotas.
-
-.. warning::
-
-    It is recommended to do the initial quota setup **before** letting users connect to the hub.
-
-
-Finding the source device
-.........................
-
-Run the ``quotas.yml`` playbook with the ``discover`` tag to find out the device and path on which to apply quotas:
-
-.. code-block:: bash
-
-    ansible-playbook quotas.yml -i hosts -u ubuntu --tags discover
-
-
-The output will be similar to:
-
-.. code-block:: text
-
-    msg: |-
-      LABEL=cloudimg-rootfs / ext4 defaults 0 0
-      LABEL=UEFI /boot/efi vfat defaults 0 0
-
-or
-
-.. code-block:: text
-
-    msg: |-
-      /dev/disk/by-uuid/55fe8be8-0e4e-46cd-a643-d74284eae15a / ext4 defaults 0 0
-      /dev/disk/by-uuid/ecae1a6e-f240-4f3c-adda-56d22691f159 /srv ext4 defaults 0 0
-
-
-In our case, we want to apply quotas on device ``LABEL=cloudimg-rootfs`` that is mounted on path ``/``.
-Copy these values in the ``hosts`` file:
-
-.. code-block:: text
-
-    [server]
-    51.178.95.237
-
-    [server:vars]
-    ansible_python_interpreter=/usr/bin/python3
-    name_server=dev.plasmabio.org
-    letsencrypt_email=contact@plasmabio.org
-    quota_device_name=LABEL=cloudimg-rootfs
-    quota_device_path=/
-
-.. warning::
-
-    Be extra cautious when reporting the device name and path in the ``hosts`` file.
-    A typo could prevent your device from mounting and require a physical intervention on the server (or a reset if it's a virtual machine).
-
-
-Enabling quotas
-...............
-
-To enable quotas on the machine, execute the ``quotas.yml`` playbook (this time without the ``discover`` tag):
-
-.. code-block:: bash
-
-    ansible-playbook quotas.yml -i hosts -u ubuntu
-
-
-Setting the user quotas
-.......................
-
-The ``users.yml`` playbook can also be used to set the user quotas. In ``users-config.yml`` you can define quotas as follows:
-
-.. code-block:: yaml
-
-    # default quotas for all users
-    quota:
-      soft: 10G
-      hard: 12G
-
-    plasma_groups:
-      - group_1
-      - group_2
-      - group_3
-
-    users:
-      - name: foo
-        password: foo
-        groups:
-          - group_1
-          - group_2
-        # override quota for a specific user
-        quota:
-          soft: 5G
-          hard: 10G
-
-      - name: bar
-        password: bar
-        groups:
-          - group_3
-
-Then re-run the ``users.yml`` playbook as mentioned in :ref:`install/users-playbook`.
-
-For example, if a user exceeds their quota when creating a file from the terminal inside the container, they will be shown the following message:
-
-.. code-block:: text
-
-    foo@549539d386e5:~/plasmabio-template-python-master$ fallocate -l 12G test.img
-    fallocate: fallocate failed: Disk quota exceeded
-
-On the host machine, a user can check their quota by running the following command:
-
-.. code-block:: text
-
-    foo@test-server:~$ quota -vs
-    Disk quotas for user foo (uid 1006):
-         Filesystem   space   quota   limit   grace   files   quota   limit   grace
-          /dev/sda1     16K   5120M  10240M
-
-If the quota is exceeded and the user tries to create a new notebook from the interface, they will be shown an error dialog:
-
-.. image:: ../images/install/quota-exceeded.png
-    :alt: User quota exceeded
-    :width: 80%
-    :align: center
-
-On the host machine, an admin can check user quotas by running the following command:
-
-.. code-block:: text
-
-    ubuntu@plasmabio-pierrepo:~$ sudo repquota -as
-    *** Report for user quotas on device /dev/sda1
-    Block grace time: 7days; Inode grace time: 7days
-                            Space limits                File limits
-    User            used    soft    hard  grace    used  soft  hard  grace
-    ----------------------------------------------------------------------
-    root      --   3668M      0K      0K            160k     0     0
-    daemon    --     64K      0K      0K               4     0     0
-    man       --   1652K      0K      0K             141     0     0
-    syslog    --   1328K      0K      0K              11     0     0
-    _apt      --     24K      0K      0K               4     0     0
-    lxd       --      4K      0K      0K               1     0     0
-    landscape --      8K      0K      0K               3     0     0
-    pollinate --      4K      0K      0K               2     0     0
-    ubuntu    --     84K      0K      0K              16     0     0
-    foo       --     16K   5120M  10240M               4     0     0
-    bar       --     16K  10240M  12288M               4     0     0
-    #62583    --      4K      0K      0K               2     0     0
\ No newline at end of file
diff --git a/docs/overview/index.rst b/docs/overview/index.md
similarity index 66%
rename from docs/overview/index.rst
rename to docs/overview/index.md
index b9a56d2..749429d 100644
--- a/docs/overview/index.rst
+++ b/docs/overview/index.md
@@ -1,9 +1,8 @@
-.. _overview/overview:
+(overview-overview)=
 
-Overview
-========
+# Overview
 
-Plasma is built with `The Littlest JupyterHub `_ (TLJH)
+Plasma is built with [The Littlest JupyterHub](https://the-littlest-jupyterhub.readthedocs.io/en/latest/) (TLJH)
 and uses Docker containers to start the user servers.
 
 The project provides:
@@ -16,25 +15,24 @@ Plasma can be seen as an **opinionated TLJH distribution**:
 
 - It gives admin users the possibility to configure multiple user environments backed by Docker images
 - It provides an interface to build the user environments, accessible from the JupyterHub panel, using
-  `tljh-repo2docker `_
+  [tljh-repo2docker](https://github.com/plasmabio/tljh-repo2docker)
 - It uses PAM as the authenticator, and relies on system users for data persistence (home directories) and authentication
 - It provides additional Ansible Playbooks to provision the server with extra monitoring tools
 
 Here is an overview of all the different components and their interactions after Plasma has been
 deployed on a new server:
 
-.. image:: ../images/overview.png
-    :alt: Overview Diagram
-    :width: 100%
-    :align: center
+```{image} ../images/overview.png
+:align: center
+:alt: Overview Diagram
+:width: 100%
+```
 
-
-The JupyterHub Documentation
-----------------------------
+## The JupyterHub Documentation
 
 Since Plasma is built on top of JupyterHub and The Littlest JupyterHub distribution, it benefits
 from its community and high quality documentation.
 For more information on these projects:
 
-- `JupyterHub Documentation `_
-- `The Littlest JupyterHub Documentation `_
+- [JupyterHub Documentation](https://jupyterhub.readthedocs.io)
+- [The Littlest JupyterHub Documentation](https://the-littlest-jupyterhub.readthedocs.io)
diff --git a/docs/permissions/edit.md b/docs/permissions/edit.md
new file mode 100644
index 0000000..c607c73
--- /dev/null
+++ b/docs/permissions/edit.md
@@ -0,0 +1,42 @@
+# Editing permissions
+
+The page to edit permissions is accessible via the navigation bar:
+
+```{image} ../images/permissions/permissions-navbar.png
+:align: center
+:alt: Manage the permissions
+:width: 100%
+```
+
+## Mapping new groups to user environments
+
+By default, users don't have access to any environment:
+
+```{image} ../images/permissions/permissions-empty.png
+:align: center
+:alt: No access to user environments
+:width: 100%
+```
+
+Admins can prepare a list of environments before assigning them to user groups.
+
+Environments can be added on a group level by clicking on the `Add Group` button and selecting
+the group using the dropdown menu:
+
+```{image} ../images/permissions/permissions-group-dropdown.png
+:align: center
+:alt: Choosing a group for an image
+:width: 100%
+```
+
+## Saving the changes
+
+Save the changes by clicking on the `Submit` button.
+
+This will reload the page and show the updated list of permissions.
+
+```{image} ../images/permissions/permissions-page.png
+:align: center
+:alt: The permissions page
+:width: 100%
+```
diff --git a/docs/permissions/edit.rst b/docs/permissions/edit.rst
deleted file mode 100644
index 4134903..0000000
--- a/docs/permissions/edit.rst
+++ /dev/null
@@ -1,41 +0,0 @@
-Editing permissions
-===================
-
-The page to edit permissions is accessible via the navigation bar:
-
-.. image:: ../images/permissions/permissions-navbar.png
-    :alt: Manage the permissions
-    :width: 100%
-    :align: center
-
-Mapping new groups to user environments
----------------------------------------
-
-By default, users don't have access to any environment:
-
-.. image:: ../images/permissions/permissions-empty.png
-    :alt: No access to user environments
-    :width: 100%
-    :align: center
-
-Admins can prepare a list of environments before assigning them to user groups.
-
-Environments can be added on a group level by clicking on the ``Add Group`` button and selecting
-the group using the dropdown menu:
-
-.. image:: ../images/permissions/permissions-group-dropdown.png
-    :alt: Choosing a group for an image
-    :width: 100%
-    :align: center
-
-Saving the changes
-------------------
-
-Save the changes by clicking on the ``Submit`` button.
-
-This will reload the page and show the updated list of permissions.
-
-.. image:: ../images/permissions/permissions-page.png
-    :alt: The permissions page
-    :width: 100%
-    :align: center
\ No newline at end of file
diff --git a/docs/permissions/groups.md b/docs/permissions/groups.md
new file mode 100644
index 0000000..0dbb774
--- /dev/null
+++ b/docs/permissions/groups.md
@@ -0,0 +1,65 @@
+# Managing UNIX groups
+
+(permissions-groups)=
+
+## The group include list
+
+By default Plasma users don't have access to any environments.
+
+Users must be assigned to UNIX groups, and the included groups must be defined in the Plasma configuration.
+
+Unix groups are defined in the config file `users-config.yml` already used for user creation (see {ref}`install-users-playbook`).
+
+```yaml
+plasma_groups:
+  - python-course
+  - bash-intro
+```
+
+To add these groups to the list of groups allowed to access environments, execute the `ansible/include-groups.yml` playbook:
+
+```bash
+cd ansible/
+ansible-playbook include-groups.yml -i hosts -u ubuntu -e @users-config.yml
+```
+
+The playbook creates the groups on the host machine if they don't already exist, and defines the list
+of included groups in the TLJH config.
+
+## Managing user groups via the command line
+
+To create a new group `test`:
+
+```bash
+groupadd test
+```
+
+To add a user `alice` to the `test` group:
+
+```bash
+usermod -a -G test alice
+```
+
+To remove the user `alice` from the `test` group:
+
+```bash
+deluser alice test
+```
+
+Groups can be listed using the following command:
+
+```bash
+$ cat /etc/group
+root:x:0:
+daemon:x:1:
+bin:x:2:
+sys:x:3:
+adm:x:4:syslog,ubuntu
+tty:x:5:
+disk:x:6:
+lp:x:7:
+mail:x:8:
+...
+```
+
+There are also plenty of good resources online to learn more about UNIX user and group management.
diff --git a/docs/permissions/groups.rst b/docs/permissions/groups.rst
deleted file mode 100644
index b3e35c5..0000000
--- a/docs/permissions/groups.rst
+++ /dev/null
@@ -1,69 +0,0 @@
-Managing UNIX groups
-====================
-
-.. _permissions/groups:
-
-The group include list
-----------------------
-
-By default Plasma users don't have access to any environments.
-
-Users must be assigned to UNIX groups, and the included groups must be defined in the Plasma configuration.
-
-Unix groups are defined in the config file `users-config.yml` already used for user creation (see :ref:`install/users-playbook`).
-
-.. code-block:: yaml
-
-    plasma_groups:
-      - python-course
-      - bash-intro
-
-To add these groups to the list of groups allowed to access environments, execute the ``ansible/include-groups.yml`` playbook:
-
-.. code-block:: bash
-
-    cd ansible/
-    ansible-playbook include-groups.yml -i hosts -u ubuntu -e @users-config.yml
-
-The playbook creates the groups on the host machine if they don't already exist, and defines the list
-of included groups in the TLJH config.
-
-Managing user groups via the command line
------------------------------------------
-
-To create a new group ``test``:
-
-.. code-block:: bash
-
-    groupadd test
-
-To add a user ``alice`` to the ``test`` group:
-
-.. code-block:: bash
-
-    usermod -a -G test alice
-
-To remove the user ``alice`` from the ``test`` group:
-
-.. code-block:: bash
-
-    deluser alice test
-
-Groups can be listed using the following command:
-
-.. code-block:: bash
-
-    $ cat /etc/group
-    root:x:0:
-    daemon:x:1:
-    bin:x:2:
-    sys:x:3:
-    adm:x:4:syslog,ubuntu
-    tty:x:5:
-    disk:x:6:
-    lp:x:7:
-    mail:x:8:
-    ...
-
-
-There are also plenty of good resources online to learn more about UNIX user and group management.
diff --git a/docs/permissions/index.rst b/docs/permissions/index.md
similarity index 50%
rename from docs/permissions/index.rst
rename to docs/permissions/index.md
index 7b3f516..878ff49 100644
--- a/docs/permissions/index.rst
+++ b/docs/permissions/index.md
@@ -1,22 +1,22 @@
-Permissions
-===========
+# Permissions
 
 Since Plasma relies on UNIX system users that exist on the host machine, it can leverage
 `UNIX groups` to enable permission management.
 
 The Permissions page lets admin users configure which user groups have access to user environments.
 
-.. image:: ../images/permissions/permissions-page.png
-    :alt: Manage the permissions
-    :width: 100%
-    :align: center
+```{image} ../images/permissions/permissions-page.png
+:align: center
+:alt: Manage the permissions
+:width: 100%
+```
 
-Managing Permissions
---------------------
+## Managing Permissions
 
-.. toctree::
-    :maxdepth: 3
+```{toctree}
+:maxdepth: 3
 
-    groups
-    edit
-    spawn
+groups
+edit
+spawn
+```
diff --git a/docs/permissions/spawn.md b/docs/permissions/spawn.md
new file mode 100644
index 0000000..4efc562
--- /dev/null
+++ b/docs/permissions/spawn.md
@@ -0,0 +1,10 @@
+# Choosing the environment on the spawn page
+
+The environment will be listed on the spawn page if it is assigned to a group a user
+belongs to:
+
+```{image} ../images/permissions/permissions-spawn.png
+:align: center
+:alt: Choosing an environment
+:width: 100%
+```
diff --git a/docs/permissions/spawn.rst b/docs/permissions/spawn.rst
deleted file mode 100644
index 23d37b6..0000000
--- a/docs/permissions/spawn.rst
+++ /dev/null
@@ -1,11 +0,0 @@
-Choosing the environment on the spawn page
-==========================================
-
-The environment will be listed on the spawn page if it is assigned to a group a user
-belongs to:
-
-.. image:: ../images/permissions/permissions-spawn.png
-    :alt: Choosing an environment
-    :width: 100%
-    :align: center
-
diff --git a/docs/requirements.txt b/docs/requirements.txt
index ed6543d..9468517 100644
--- a/docs/requirements.txt
+++ b/docs/requirements.txt
@@ -1,3 +1,4 @@
 sphinx>=1.4, !=1.5.4
 sphinx_copybutton
 pydata-sphinx-theme
+myst-parser
\ No newline at end of file
diff --git a/docs/troubleshooting/index.md b/docs/troubleshooting/index.md
new file mode 100644
index 0000000..a3d8536
--- /dev/null
+++ b/docs/troubleshooting/index.md
@@ -0,0 +1,137 @@
+(troubleshooting-troubleshooting)=
+
+# Troubleshooting
+
+```{contents} Table of contents
+:depth: 1
+:local: true
+```
+
+## How to SSH to the machine
+
+First make sure your SSH key has been deployed to the server. See {ref}`install-ssh-key` for more details.
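If you are unsure which key was deployed, comparing fingerprints helps: `ssh-keygen -lf` prints the fingerprint of a public key so you can match it against the entry in the server's `~/.ssh/authorized_keys`. A sketch using a throwaway key in a temporary directory (with a real key you would point `-lf` at your own `~/.ssh/*.pub` file):

```shell
# Create a throwaway ed25519 key pair in a temp dir, purely to demonstrate
# the commands; substitute your real key path in practice.
tmpdir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmpdir/demo_key"

# Print the fingerprint of the public key; compare it with the key
# installed on the server.
ssh-keygen -lf "$tmpdir/demo_key.pub"
```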
+
+Once the key is set up, connect to the machine over SSH using the following command:
+
+```bash
+ssh ubuntu@51.178.95.143
+```
+
+## Looking at logs
+
+See: [The Littlest JupyterHub documentation](https://the-littlest-jupyterhub.readthedocs.io/en/latest/troubleshooting/logs.html).
+
+## Why is my environment not building?
+
+If for some reason an environment does not appear after {ref}`environments-add`, it is possible that
+there are some issues building it and installing the dependencies.
+
+We recommend building the environment either locally with `repo2docker` (next section) or on Binder.
+
+See {ref}`environments-prepare-binder` and the [repo2docker FAQ](https://repo2docker.readthedocs.io/en/latest/faq.html)
+for more details.
+
+### Accessing the `repo2docker` container
+
+In Plasma, `repo2docker` runs in a Docker container, based on the Docker image available at
+`quay.io/jupyterhub/repo2docker:main`.
+
+If you are not able to run `repo2docker` manually to investigate a build failure (see section below), you can try to access the
+logs of the Docker container.
+
+On the machine running TLJH, run the `docker ps` command. The output should look like the following:
+
+```bash
+CONTAINER ID   IMAGE                                 COMMAND                  CREATED          STATUS          PORTS       NAMES
+146b4d335215   quay.io/jupyterhub/repo2docker:main   "/usr/local/bin/entr…"   31 seconds ago   Up 30 seconds   52000/tcp   naughty_thompson
+```
+
+You can then access the logs of the container with:
+
+```bash
+docker logs 146b4d335215
+# or with the generated name
+docker logs naughty_thompson
+```
+
+If the `repo2docker` container has stopped, you can use `docker ps -a` to display all the containers.
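On a busy host that listing can be long; a small `awk` filter keeps only the IDs of containers whose status is `Exited`. A sketch, run here on a captured sample line rather than live `docker` output (in practice, pipe `docker ps -a` straight into the filter):

```shell
# Captured sample line from a `docker ps -a` run; with a live daemon you
# would instead run: docker ps -a | awk '/Exited/ {print $1}'
sample='146b4d335215   quay.io/jupyterhub/repo2docker:main   "/usr/local/bin/entr…"   4 minutes ago   Exited (0)'
printf '%s\n' "$sample" | awk '/Exited/ {print $1}'
# prints: 146b4d335215
```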
+The output will show `Exited` as part of the `STATUS`:
+
+```bash
+CONTAINER ID   IMAGE                                 COMMAND                  CREATED         STATUS                 PORTS   NAMES
+146b4d335215   quay.io/jupyterhub/repo2docker:main   "/usr/local/bin/entr…"   4 minutes ago   Exited (0) About a m
+```
+
+## Running the environments on my local machine
+
+To run the same environments on a local machine, you can use `jupyter-repo2docker` with the following parameters:
+
+```bash
+jupyter-repo2docker --ref a4edf334c6b4b16be3a184d0d6e8196137ee1b06 https://github.com/plasmabio/template-python
+```
+
+Update the parameters based on the image you would like to build.
+
+This will create a Docker image and start it automatically once the build is complete.
+
+Refer to the [repo2docker documentation](https://repo2docker.readthedocs.io/en/latest/usage.html) for more details.
+
+## My extension and / or dependency does not seem to be installed
+
+See the two previous sections to investigate why they are missing.
+
+The logs might contain silent errors that did not cause the build to fail.
+
+## The name of the environment is not displayed in the top bar
+
+This functionality requires the `jupyter-topbar-text` extension to be installed in the environment.
+
+This extension must be added to the `postBuild` file of the repository.
+See this [commit](https://github.com/plasmabio/template-python/commit/b3dd6c4b525ed4584e79175d4ae340a8b2395682) as an example.
+
+The name of the environment will then be displayed as follows:
+
+```{image} ../images/troubleshooting/topbar-env-name.png
+:align: center
+:alt: The name of the environment in the top bar
+:width: 75%
+```
+
+## The environment is very slow to build
+
+Since the environments are built as Docker images, they can
+[leverage the Docker cache](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#leverage-build-cache)
+to make the builds faster.
+
+In some cases Docker will not be able to leverage the cache, for example when building a Python or R environment for the first time.
+
+Another reason for the build to be slow could be the number of dependencies specified in files such as `environment.yml` or
+`requirements.txt`.
+
+Check out the previous section for more info on how to troubleshoot it.
+
+## Finding the source for an environment
+
+If you are managing the environments, you can click on the `Reference` link in the UI,
+which will open a new tab to the repository pointing to the commit hash:
+
+```{image} ../images/troubleshooting/git-commit-hash.png
+:align: center
+:alt: The git commit hash on GitHub
+:width: 50%
+```
+
+If you are using the environments, the name contains the information about the repository
+and the reference used to build the environment.
+
+On the repository page, enter the reference in the search input box:
+
+```{image} ../images/troubleshooting/search-github-repo.png
+:align: center
+:alt: Searching for a commit hash on GitHub
+:width: 100%
+```
+
+## Removing an environment returns an error
+
+See {ref}`remove-error` for more info.
diff --git a/docs/troubleshooting/index.rst b/docs/troubleshooting/index.rst
deleted file mode 100644
index 78dff3b..0000000
--- a/docs/troubleshooting/index.rst
+++ /dev/null
@@ -1,147 +0,0 @@
-.. _troubleshooting/troubleshooting:
-
-Troubleshooting
-===============
-
-.. contents:: Table of contents
-    :local:
-    :depth: 1
-
-How to SSH to the machine
--------------------------
-
-First make sure your SSH key has been deployed to the server. See :ref:`install/ssh-key` for more details.
-
-Once the key is set up, connect to the machine over SSH using the following command:
-
-.. code-block:: bash
-
-    ssh ubuntu@51.178.95.143
-
-Looking at logs
----------------
-
-See: `The Littlest JupyterHub documentation `_.
-
-Why is my environment not building?
------------------------------------
-
-If for some reason an environment does not appear after :ref:`environments/add`, it is possible that
-there are some issues building it and installing the dependencies.
-
-We recommend building the environment either locally with ``repo2docker`` (next section) or on Binder.
-
-See :ref:`environments/prepare/binder` and the `repo2docker FAQ `_
-for more details.
-
-Accessing the ``repo2docker`` container
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-In Plasma, ``repo2docker`` runs in a Docker container, based on the Docker image available at
-``quay.io/jupyterhub/repo2docker:main``.
-
-If you are not able to run ``repo2docker`` manually to investigate a build failure (see section below), you can try to access the
-logs of the Docker container.
-
-On the machine running TLJH, run the ``docker ps`` command. The output should look like the following:
-
-.. code-block:: bash
-
-    CONTAINER ID   IMAGE                                 COMMAND                  CREATED          STATUS          PORTS       NAMES
-    146b4d335215   quay.io/jupyterhub/repo2docker:main   "/usr/local/bin/entr…"   31 seconds ago   Up 30 seconds   52000/tcp   naughty_thompson
-
-You can then access the logs of the container with:
-
-.. code-block:: bash
-
-    docker logs 146b4d335215
-    # or with the generated name
-    docker logs naughty_thompson
-
-If the ``repo2docker`` container has stopped, you can use ``docker ps -a`` to display all the containers.
-The output will show ``Exited`` as part of the ``STATUS``:
-
-.. code-block:: bash
-
-    CONTAINER ID   IMAGE                                 COMMAND                  CREATED         STATUS                 PORTS   NAMES
-    146b4d335215   quay.io/jupyterhub/repo2docker:main   "/usr/local/bin/entr…"   4 minutes ago   Exited (0) About a m
-
-Running the environments on my local machine
---------------------------------------------
-
-To run the same environments on a local machine, you can use ``jupyter-repo2docker`` with the following parameters:
-
-.. code-block:: bash
-
-    jupyter-repo2docker --ref a4edf334c6b4b16be3a184d0d6e8196137ee1b06 https://github.com/plasmabio/template-python
-
-Update the parameters based on the image you would like to build.
-
-This will create a Docker image and start it automatically once the build is complete.
-
-Refer to the `repo2docker documentation `_ for more details.
-
-My extension and / or dependency does not seem to be installed
---------------------------------------------------------------
-
-See the two previous sections to investigate why they are missing.
-
-The logs might contain silent errors that did not cause the build to fail.
-
-The name of the environment is not displayed in the top bar
------------------------------------------------------------
-
-This functionality requires the ``jupyter-topbar-text`` extension to be installed in the environment.
-
-This extension must be added to the ``postBuild`` file of the repository.
-See this `commit `_ as an example.
-
-The name of the environment will then be displayed as follows:
-
-.. image:: ../images/troubleshooting/topbar-env-name.png
-    :alt: The name of the environment in the top bar
-    :width: 75%
-    :align: center
-
-The environment is very slow to build
--------------------------------------
-
-Since the environments are built as Docker images, they can
-`leverage the Docker cache `_
-to make the builds faster.
-
-In some cases Docker will not be able to leverage the cache, for example when building a Python or R environment for the first time.
-
-Another reason for the build to be slow could be the number of dependencies specified in files such as ``environment.yml`` or
-``requirements.txt``.
-
-Check out the previous section for more info on how to troubleshoot it.
-
-Finding the source for an environment
--------------------------------------
-
-If you are managing the environments, you can click on the ``Reference`` link in the UI,
-which will open a new tab to the repository pointing to the commit hash:
-
-
-.. image:: ../images/troubleshooting/git-commit-hash.png
-    :alt: The git commit hash on GitHub
-    :width: 50%
-    :align: center
-
-
-If you are using the environments, the name contains the information about the repository
-and the reference used to build the environment.
-
-On the repository page, enter the reference in the search input box:
-
-
-.. image:: ../images/troubleshooting/search-github-repo.png
-    :alt: Searching for a commit hash on GitHub
-    :width: 100%
-    :align: center
-
-Removing an environment returns an error
-----------------------------------------
-
-See :ref:`remove/error` for more info.
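Since the environment name embeds the repository and the reference, it can be split with plain shell parameter expansion. A sketch, assuming a `<owner>-<repo>-<ref>` naming scheme like the `plasmabio-template-python-master` example seen above; this is an illustration only, and it breaks if the reference itself contains a dash:

```shell
# Hypothetical environment name following the scheme used in the examples above
env_name='plasmabio-template-python-master'

# The reference is the part after the last dash; the repository is the rest
ref="${env_name##*-}"
repo="${env_name%-"$ref"}"

echo "repo=$repo ref=$ref"
# prints: repo=plasmabio-template-python ref=master
```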