diff --git a/notebooks/README.md b/notebooks/README.md
index ad6c04cc463..3f1cdbaf2a1 100644
--- a/notebooks/README.md
+++ b/notebooks/README.md
@@ -61,7 +61,39 @@ Running the example in these notebooks requires:
 
 * CUDA 11.4+
 * NVIDIA driver 450.51+
 
+### QuickStart
+
+The easiest way to run the notebooks is to pull the latest [rapidsai/notebooks](https://hub.docker.com/r/rapidsai/notebooks) Docker image with a matching CUDA version and run a container based on it.
+
+For example, pull the latest (as of this writing) nightly image (an `a` after the version number indicates a nightly image) with CUDA 12.0:
+
+```sh
+docker pull rapidsai/notebooks:24.04a-cuda12.0-py3.9
+```
+
+Then run a container based on the image:
+
+```sh
+docker run --rm -it --pull always --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p 8888:8888 rapidsai/notebooks:24.04a-cuda12.0-py3.9
+```
+
+You are all set. Run and edit the cuGraph notebooks from a browser at http://127.0.0.1:8888/lab/tree/cugraph/cugraph_benchmarks
+
+If you want to run the container on a remote machine that has access to GPUs, you can use `ssh` tunneling to run and edit the notebooks locally as explained above.
+
+Log in to your remote machine with ssh tunneling (port forwarding) enabled:
+
+```sh
+ssh -L 127.0.0.1:8888:127.0.0.1:8888 [USER_NAME@][REMOTE_HOST_NAME or REMOTE_HOST_IP]
+```
+
+Then pull the image and run the container on the remote machine:
+
+```sh
+docker pull rapidsai/notebooks:24.04a-cuda12.0-py3.9
+docker run --rm -it --pull always --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p 8888:8888 rapidsai/notebooks:24.04a-cuda12.0-py3.9
+```
+
+You can run and edit the cuGraph notebooks at http://127.0.0.1:8888/lab/tree/cugraph/cugraph_benchmarks as if they were running locally.
 
 ## Additional Notebooks
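
Once JupyterLab is up, a quick way to confirm the container can actually see the GPU is to run a tiny cuGraph workload from a notebook cell. The following is a minimal sketch, assuming the `rapidsai/notebooks` image ships `cudf` and `cugraph` (which the RAPIDS images do):

```python
# Minimal GPU sanity check, run from a notebook cell inside the container.
import cudf
import cugraph

# A tiny directed graph with two edges: 0 -> 1 -> 2.
edges = cudf.DataFrame({"src": [0, 1], "dst": [1, 2]})

G = cugraph.Graph(directed=True)
G.from_cudf_edgelist(edges, source="src", destination="dst")

# PageRank executes on the GPU; seeing per-vertex scores printed confirms
# the container was started with working GPU access (--gpus all).
print(cugraph.pagerank(G))
```

If this fails with a CUDA initialization error, check that the NVIDIA Container Toolkit is installed on the host and that the container was started with `--gpus all`.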