Convert docs to MyST #203

Merged (2 commits, Mar 10, 2022)
3 changes: 2 additions & 1 deletion docs/conf.py
@@ -14,7 +14,8 @@
# Enable MathJax for Math
extensions = ['sphinx.ext.mathjax',
'sphinx.ext.intersphinx',
'sphinx_copybutton']
'sphinx_copybutton',
'myst_parser']

# The master toctree document.
master_doc = 'index'
7 changes: 7 additions & 0 deletions docs/configuration/cull.md
@@ -0,0 +1,7 @@
# Culling idle servers

Plasma uses the [same defaults as The Littlest JupyterHub](http://tljh.jupyter.org/en/latest/topic/idle-culler.html#default-settings)
for culling idle servers.

It overrides the `timeout` value, setting it to `3600` seconds, which means that user servers are shut down once
they have been idle for more than one hour.
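
For reference, the same value could be adjusted manually with `tljh-config`, assuming the standard TLJH idle-culler settings (a sketch, not something the plugin requires you to run):

```bash
# Sketch: set the idle-culler timeout to one hour (3600 seconds) and apply it
sudo tljh-config set services.cull.timeout 3600
sudo tljh-config reload
```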
8 changes: 0 additions & 8 deletions docs/configuration/cull.rst

This file was deleted.

11 changes: 11 additions & 0 deletions docs/configuration/index.md
@@ -0,0 +1,11 @@
# Configuration

```{toctree}
:maxdepth: 3
monitoring
persistence
resources
cull
namedservers
```
11 changes: 0 additions & 11 deletions docs/configuration/index.rst

This file was deleted.

98 changes: 98 additions & 0 deletions docs/configuration/monitoring.md
@@ -0,0 +1,98 @@
# Monitoring

:::{warning}
HTTPS must be enabled to be able to access Cockpit. Refer to {ref}`install-https` for more info.
:::

## Installing Cockpit

`cockpit` is a monitoring tool for the server; it is not installed by default.

First make sure HTTPS is enabled and the `name_server` variable is specified in the `hosts` file.
See {ref}`install-https` for more info.
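
As a purely illustrative sketch (the actual inventory format is described in {ref}`install-https`), a `hosts` entry carrying the `name_server` variable might look like:

```text
# Hypothetical Ansible inventory entry; group, host address and domain are placeholders
[server]
203.0.113.10 name_server=jupyter.example.org ansible_user=ubuntu
```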

Then execute the `cockpit.yml` playbook:

```bash
ansible-playbook cockpit.yml -i hosts -u ubuntu
```

The Plasma TLJH plugin registers `cockpit` as a JupyterHub service. This means that
Cockpit is accessible to JupyterHub admin users via the JupyterHub interface:

```{image} ../images/configuration/cockpit-navbar.png
:align: center
:alt: Accessing cockpit from the nav bar
:width: 100%
```
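
To illustrate what registering a JupyterHub service involves, here is a minimal sketch of such an entry (the plugin handles this automatically; the URL is a placeholder):

```python
# Sketch: register an externally-managed service with JupyterHub so it is
# proxied under /services/cockpit (the URL below is a placeholder assumption)
c.JupyterHub.services = [
    {
        'name': 'cockpit',
        'url': 'http://127.0.0.1:9090',
    }
]
```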

Users will be asked to log in with their system credentials. They can then access the Cockpit dashboard:

```{image} ../images/configuration/cockpit.png
:align: center
:alt: Cockpit
:width: 100%
```

## Monitoring user servers with Cockpit

:::{note}
Viewing Docker containers in Cockpit requires access to `docker`.

Make sure your user can access Docker on the machine with:

```bash
sudo docker info
```

Your user should also be able to log in with a password. If the user doesn't have a password yet, you can
set one with:

```bash
sudo passwd <username>
```

For example, if your user is `ubuntu`:

```bash
sudo passwd ubuntu
```

To add more users as admin or change permissions from the Cockpit UI, see {ref}`monitoring-permissions`.
:::

Since user servers are started as Docker containers, they will be displayed in the Cockpit interface in the
`Docker Containers` section:

```{image} ../images/configuration/cockpit-docker.png
:align: center
:alt: Docker Containers from Cockpit
:width: 100%
```

The Cockpit interface shows:

- The username as part of the name of the Docker container
- The resources each container is currently using
- The environment currently in use

It is also possible to stop a user server by clicking the "Stop" button.

(monitoring-permissions)=

## Changing user permissions from the Cockpit UI

:::{note}
You first need to be logged in as a user with `sudo` permissions.
:::

Cockpit makes it easy to add a user to a specific group.

For example, a user can be given the "Container Administrator" role via the UI, which allows them to manage Docker containers
and images on the machine:

```{image} ../images/configuration/cockpit-roles.png
:align: center
:alt: Manage user roles from the Cockpit UI
:width: 100%
```
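
For reference, a similar change can be made from the command line. A sketch, assuming the "Container Administrator" role corresponds to membership in the `docker` group:

```bash
# Sketch: add a user to the docker group from the command line
# (assumes the Cockpit role maps to the docker group)
sudo usermod -aG docker <username>
```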
100 changes: 0 additions & 100 deletions docs/configuration/monitoring.rst

This file was deleted.

23 changes: 23 additions & 0 deletions docs/configuration/namedservers.md
@@ -0,0 +1,23 @@
# Named Servers

By default, users can run only one server at a time.

[Named servers functionality](https://jupyterhub.readthedocs.io/en/stable/reference/config-user-env.html#named-servers) in JupyterHub
can be activated to let the user run several servers.

To allow up to 2 simultaneous named servers (in addition to the default one), create the file `named_servers_config.py`
in the directory `/opt/tljh/config/jupyterhub_config.d` with the following content:

```python
c.JupyterHub.allow_named_servers = True
c.JupyterHub.named_server_limit_per_user = 2
```

Then, reload TLJH:

```bash
sudo tljh-config reload
```

Have a look at the [named servers documentation](https://jupyterhub.readthedocs.io/en/stable/reference/config-user-env.html#named-servers)
for more details.
24 changes: 0 additions & 24 deletions docs/configuration/namedservers.rst

This file was deleted.

91 changes: 91 additions & 0 deletions docs/configuration/persistence.md
@@ -0,0 +1,91 @@
# Data Persistence

(persistence-user-data)=

## User Data

The user servers are started using JupyterHub's [SystemUserSpawner](https://github.com/jupyterhub/dockerspawner#systemuserspawner).

This spawner is based on the [DockerSpawner](https://github.com/jupyterhub/dockerspawner#dockerspawner), but makes it possible
to use the host users to start the notebook servers.

Concretely, this means that the user inside the container corresponds to a real user that exists on the host.
Processes are started by that user instead of the default `jovyan` user usually found in regular
Jupyter Docker images and on Binder.

For example, when the user `foo` starts their server, the list of processes looks like the following:

```bash
foo@9cf23d669647:~$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 1.1 0.0 50944 3408 ? Ss 11:17 0:00 su - foo -m -c "$0" "$@" -- /srv/conda/envs/notebook/bin/jupyterhub-singleuser --ip=0.0.0.0 --port=8888 --NotebookApp.default_url=/lab --ResourceUseDisplay.track_cpu_percent=True
foo 32 5.4 0.8 399044 70528 ? Ssl 11:17 0:01 /srv/conda/envs/notebook/bin/python /srv/conda/envs/notebook/bin/jupyterhub-singleuser --ip=0.0.0.0 --port=8888 --NotebookApp.default_url=/lab --ResourceUseDisplay.track_cpu_percent=True
foo 84 0.0 0.0 20312 4036 pts/0 Ss 11:17 0:00 /bin/bash -l
foo 112 29.0 0.5 458560 46448 ? Ssl 11:17 0:00 /srv/conda/envs/notebook/bin/python -m bash_kernel -f /home/foo/.local/share/jupyter/runtime/kernel-9a7c8ad3-4ac2-4754-88cc-ef746d1be83e.json
foo 126 0.5 0.0 20180 3884 pts/1 Ss+ 11:17 0:00 /bin/bash --rcfile /srv/conda/envs/notebook/lib/python3.8/site-packages/pexpect/bashrc.sh
foo 140 0.0 0.0 36076 3368 pts/0 R+ 11:17 0:00 ps aux
```

The following steps happen when a user starts their server:

1. The user home directory on the host is mounted into the container, so the file structure in the container reflects what is on the host (a configuration sketch follows the diagram below).
2. A new directory is created in the user home directory for each new environment (i.e. for each Docker image).
   For example, if a user starts the `2020-python-course` environment, a new folder is created under `/home/user/2020-python-course`.
   This folder is persisted to disk in the user home directory on the host. Any files and notebooks created from the notebook interface are also persisted to disk.
3. On server startup, the entrypoint script copies the files initially bundled in `/home/jovyan` in the base image to `/home/user/2020-python-course` in the container.
   They are then persisted in `/home/user/2020-python-course` on the host.

```{image} ../images/configuration/persistence.png
:align: center
:alt: Mounting user's home directories
:width: 80%
```

- The files highlighted in blue correspond to the files initially bundled in the environment. These files are copied to the environment subdirectory in the user home directory on startup.
- The other files are examples of files created by the user.
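
As a rough sketch of how this home-directory mount is expressed with `SystemUserSpawner` (Plasma's plugin configures this for you; the format strings below are illustrative assumptions):

```python
# Sketch: map the host home directory into the container with SystemUserSpawner
# (illustrative values; the actual plugin sets this up automatically)
c.JupyterHub.spawner_class = 'dockerspawner.SystemUserSpawner'
# Path of the user's home directory on the host
c.SystemUserSpawner.host_homedir_format_string = '/home/{username}'
# Path where that directory is mounted inside the container
c.SystemUserSpawner.image_homedir_format_string = '/home/{username}'
```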

## User server startup

The user server is started from the environment directory:

```{image} ../images/configuration/user-server-rootdir.png
:align: center
:alt: User servers are started in the environment directory
:width: 50%
```

The rest of the user's files are mounted into the container; see {ref}`persistence-user-data`.

A user can, for example, open a terminal and go to their home directory by typing `cd`.

They can then inspect their files:

```text
foo@3e29b2297563:/home/foo$ ls -lisah
total 56K
262882 4.0K drwxr-xr-x 9 foo foo 4.0K Apr 21 16:53 .
6205024 4.0K drwxr-xr-x 1 root root 4.0K Apr 21 16:50 ..
266730 4.0K -rw------- 1 foo foo 228 Apr 21 14:41 .bash_history
262927 4.0K -rw-r--r-- 1 foo foo 220 May 5 2019 .bash_logout
262928 4.0K -rw-r--r-- 1 foo foo 3.7K May 5 2019 .bashrc
1043206 4.0K drwx------ 3 foo foo 4.0K Apr 21 09:26 .cache
528378 4.0K drwx------ 3 foo foo 4.0K Apr 17 17:36 .gnupg
1565895 4.0K drwxrwxr-x 2 foo foo 4.0K Apr 21 09:55 .ipynb_checkpoints
1565898 4.0K drwxr-xr-x 5 foo foo 4.0K Apr 21 09:27 .ipython
1565880 4.0K drwxrwxr-x 3 foo foo 4.0K Apr 21 09:26 .local
262926 4.0K -rw-r--r-- 1 foo foo 807 May 5 2019 .profile
1050223 4.0K drwxrwxr-x 12 foo foo 4.0K Apr 20 10:44 2020-python-course
1043222 4.0K drwxrwxr-x 13 foo foo 4.0K Apr 20 17:07 r-intro
258193 4.0K -rw-rw-r-- 1 foo foo 843 Apr 21 09:56 Untitled.ipynb
```

## Shared Data

In addition to the user data, the plugin also mounts a shared data volume for all users.

The shared data is available under `/srv/data` inside the user server, as pictured in the diagram above.

On the host machine, the shared data should be placed under `/srv/data` as recommended in the
[TLJH documentation](http://tljh.jupyter.org/en/latest/howto/content/share-data.html#option-2-create-a-read-only-shared-folder-for-data).

The shared data is mounted **read-only** inside the user servers.
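
As a sketch of how a read-only shared mount like this can be declared for the spawner (the plugin already sets this up; shown only to illustrate the mechanism):

```python
# Sketch: mount the host folder /srv/data read-only at /srv/data in the container
c.SystemUserSpawner.volumes = {
    '/srv/data': {'bind': '/srv/data', 'mode': 'ro'},
}
```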