From a8436b2d2c8c56257dd1c2bc0860b388baf99327 Mon Sep 17 00:00:00 2001 From: Kyle Gerard Felker Date: Thu, 27 Jul 2023 23:34:30 -0500 Subject: [PATCH] Fix links --- .../allocation-management.md | 2 +- .../project-management/starting-alcf-award.md | 2 +- .../graphcore/unused/Scaling-ResNet50.md | 8 +- .../{ => unused}/files/benchmarks.yml | 0 .../graphcore/unused/profiling-mnist.md | 8 +- .../graphcore/unused/profiling-resnet50.md | 6 +- .../sambanova_gen1/unused/sambanova.md | 2 +- .../example-multi-node-programs.md | 4 +- .../sambanova_gen2/unused/sambanova.md | 110 ------------------ .../compiling-and-linking-overview.md | 2 +- docs/running-jobs/example-job-scripts.md | 2 +- docs/running-jobs/job-and-queue-scheduling.md | 2 +- docs/services/jenkins.md | 2 +- docs/stylesheets/alcf-extra.css | 9 ++ .../building-python-packages.md | 2 +- .../job-and-queue-scheduling.md | 2 +- docs/theta/data-science-workflows/keras.md | 28 ++--- docs/theta/performance-tools/intel-advisor.md | 2 +- docs/theta/programming-models/openmp-theta.md | 8 +- .../affinity-theta.md | 2 +- mkdocs.yml | 4 + 21 files changed, 54 insertions(+), 153 deletions(-) rename docs/ai-testbed/graphcore/{ => unused}/files/benchmarks.yml (100%) delete mode 100644 docs/ai-testbed/sambanova_gen2/unused/sambanova.md diff --git a/docs/account-project-management/allocation-management/allocation-management.md b/docs/account-project-management/allocation-management/allocation-management.md index d7450ef86..64f7c5252 100644 --- a/docs/account-project-management/allocation-management/allocation-management.md +++ b/docs/account-project-management/allocation-management/allocation-management.md @@ -2,7 +2,7 @@ Allocations require management – balance checks, resource allocation, requesting more time, etc. ## Checking for an Active Allocation -To determine if there is an active allocation, check [Job Submision](../../../theta/queueing-and-running-jobs/job-and-queue-scheduling/#submit-a-job). +To determine if there is an active allocation, check [Job Submission](../../theta/queueing-and-running-jobs/job-and-queue-scheduling.md#submit-a-job). For information on how to run the query, look at our documentation on our [sbank Allocations Accounting System](sbank-allocation-accounting-system.md) or email [support@alcf.anl.gov](mailto:support@alcf.anl.gov) and ask for all active allocations. diff --git a/docs/account-project-management/project-management/starting-alcf-award.md b/docs/account-project-management/project-management/starting-alcf-award.md index 0ef6b2543..59fc9f991 100644 --- a/docs/account-project-management/project-management/starting-alcf-award.md +++ b/docs/account-project-management/project-management/starting-alcf-award.md @@ -106,7 +106,7 @@ The ALCF will send you a report template at the end of each quarter. Please comp Please be aware that we will periodically monitor, and could potentially adjust, your project allocation if a large portion of it goes unused. 
You may view: [Pullback Policy](../../policies/queue-scheduling/pullback-policy.md) ### Allocation Overburn Policy -Please see this page for overburn/overuse eligibility for INCITE projects that have exhausted their allocation in the first 11 months of its allocation year: [Allocation Overburn](../../../policies/queue-scheduling/queue-and-scheduling-policy/#incitealcc-overburn-policy) +Please see this page for overburn/overuse eligibility for INCITE projects that have exhausted their allocation in the first 11 months of its allocation year: [Allocation Overburn](../../policies/queue-scheduling/queue-and-scheduling-policy.md#incitealcc-overburn-policy) ### Acknowledgment In Publications Please follow the guidelines provided on the [ALCF Acknowledgement Policy page](../../policies/alcf-acknowledgement-policy.md) to properly acknowledge the use of ALCF resources in all of your publications, both online and print. diff --git a/docs/ai-testbed/graphcore/unused/Scaling-ResNet50.md b/docs/ai-testbed/graphcore/unused/Scaling-ResNet50.md index f462c7081..39eae342d 100644 --- a/docs/ai-testbed/graphcore/unused/Scaling-ResNet50.md +++ b/docs/ai-testbed/graphcore/unused/Scaling-ResNet50.md @@ -1,6 +1,6 @@ # Scaling ResNet50 -Follow all the instructions in [Getting Started](/docs/graphcore/Getting-Started) to log into a Graphcore node. +Follow all the instructions in [Getting Started](../getting-started.md) to log into a Graphcore node. ## Examples Repo @@ -131,12 +131,12 @@ You should see: # gc-poplar-04:22 SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.5 ``` -## Benchmarks.yml +## `benchmarks.yml` Update **${HOME}/graphcore/examples/vision/cnns/pytorch/train/benchmarks.yml** -with your favorite editor to match [benchmarks.yml](/docs/graphcore/benchmarks.yml). +with your favorite editor to match [benchmarks.yml](./files/benchmarks.yml). -## Configs.yml +## `configs.yml` Update **${HOME}/graphcore/examples/vision/cnns/pytorch/train/configs.yml** with your favorite editor. At about line 30, change **use_bbox_info: true** to diff --git a/docs/ai-testbed/graphcore/files/benchmarks.yml b/docs/ai-testbed/graphcore/unused/files/benchmarks.yml similarity index 100% rename from docs/ai-testbed/graphcore/files/benchmarks.yml rename to docs/ai-testbed/graphcore/unused/files/benchmarks.yml diff --git a/docs/ai-testbed/graphcore/unused/profiling-mnist.md b/docs/ai-testbed/graphcore/unused/profiling-mnist.md index 26fdb4169..5496c9b2f 100644 --- a/docs/ai-testbed/graphcore/unused/profiling-mnist.md +++ b/docs/ai-testbed/graphcore/unused/profiling-mnist.md @@ -1,10 +1,10 @@ # Profiling MNIST -Follow all the instructions in [Getting Started](/docs/graphcore/Getting-Started) to log into a Graphcore node. +Follow all the instructions in [Getting Started](../getting-started.md) to log into a Graphcore node. -Follow the instructions in [Virtual Environments](/docs/graphcore/Virtual-Environments) up to and including **PopART Environment Setup**. +Follow the instructions in [Virtual Environments](../virtual-environments.md) up to and including **PopART Environment Setup**. -Following the instructions in [Example Programs](/docs/graphcore/Example-Programs) up to and including +Following the instructions in [Example Programs](../example-programs.md) up to and including **MNIST, Install Requirements**. ## Change Directory @@ -33,4 +33,4 @@ Do so by running the following command: python mnist_poptorch.py ``` -When MNIST has finished running, see [Profiling](/docs/graphcore/Profiling) to use **Graph Analyser**. 
+When MNIST has finished running, see [Profiling](./profiling.md) to use **Graph Analyser**. diff --git a/docs/ai-testbed/graphcore/unused/profiling-resnet50.md b/docs/ai-testbed/graphcore/unused/profiling-resnet50.md index 201ae1a1c..1eca7f505 100644 --- a/docs/ai-testbed/graphcore/unused/profiling-resnet50.md +++ b/docs/ai-testbed/graphcore/unused/profiling-resnet50.md @@ -1,8 +1,8 @@ # Profiling ResNet50 -Follow all the instructions in [Getting Started](/docs/graphcore/Getting-Started) to log into a Graphcore node. +Follow all the instructions in [Getting Started](../getting-started.md) to log into a Graphcore node. -Follow the instructions in [Virtual Environments](/docs/graphcore/Virtual-Environments) up to and including **PopART Environment Setup**. +Follow the instructions in [Virtual Environments](../virtual-environments.md) up to and including **PopART Environment Setup**. ## Examples Repo @@ -58,4 +58,4 @@ python3 -m examples_utils benchmark --spec benchmarks.yml --benchmark pytorch_re ## Profile Results -When ResNet50 has finished running, see [Profiling](/docs/graphcore/Profiling) to use **Graph Analyser**. +When ResNet50 has finished running, see [Profiling](./profiling.md) to use **Graph Analyser**. diff --git a/docs/ai-testbed/sambanova_gen1/unused/sambanova.md b/docs/ai-testbed/sambanova_gen1/unused/sambanova.md index 4114d6d7a..1d7257119 100644 --- a/docs/ai-testbed/sambanova_gen1/unused/sambanova.md +++ b/docs/ai-testbed/sambanova_gen1/unused/sambanova.md @@ -36,7 +36,7 @@ broken (503 errors). ## Further Information -[Human Decisions Files notes](/display/AI/Human+Decisions+Files+notes) + ## Creating a SambaNova Portal Account to access the documentation portal diff --git a/docs/ai-testbed/sambanova_gen2/example-multi-node-programs.md b/docs/ai-testbed/sambanova_gen2/example-multi-node-programs.md index 1cd3a6c06..d1c8f82c2 100644 --- a/docs/ai-testbed/sambanova_gen2/example-multi-node-programs.md +++ b/docs/ai-testbed/sambanova_gen2/example-multi-node-programs.md @@ -1,8 +1,6 @@ # Example Multi-Node Programs -In this section we will learn how to extend the UNet2d and Gpt1.5B applications scripts that we introduced in the [Example Programs](/docs/ai-testbed/sambanova_gen2/example-programs.md) to compile and run multiple instances of the model in a data parallel fashion across multiple tiles or across multiple nodes. - - +In this section we will learn how to extend the UNet2d and Gpt1.5B applications scripts that we introduced in the [Example Programs](./example-programs.md) to compile and run multiple instances of the model in a data parallel fashion across multiple tiles or across multiple nodes. ## UNet2d diff --git a/docs/ai-testbed/sambanova_gen2/unused/sambanova.md b/docs/ai-testbed/sambanova_gen2/unused/sambanova.md deleted file mode 100644 index 4114d6d7a..000000000 --- a/docs/ai-testbed/sambanova_gen2/unused/sambanova.md +++ /dev/null @@ -1,110 +0,0 @@ -# SambaNova - -## PyTorch Mirrors - -See . - -There are two mirrors (in the python docs) used for downloading the -mnist dataset. - -mirrors = [ - 'http://yann.lecun.com/exdb/mnist/', - 'https://ossci-datasets.s3.amazonaws.com/mnist/'] - -[yann.lecun.com](http://yann.lecun.com) appears to be intermittently -broken (503 errors). - -## Resources - -- - -- [Argonne SambaNova Training - 11/20](https://anl.app.box.com/s/bqc101mvt3r7rpxbd2yxjsf623ea3gpe) - -- [https://docs.sambanova.ai](https://docs.sambanova.ai/) Create a - SambaNova account if you do not have one. 
- -- [Getting Started with - SambaFlow](https://docs.sambanova.ai/sambanova-docs/1.6/developer/getting-started.html) - Skip this one. - -- [Tutorial: Creating Models with - SambaFlow](https://docs.sambanova.ai/sambanova-docs/1.6/developer/intro-tutorial.html) - -- Administrators --- @ryade - -## Further Information - -[Human Decisions Files notes](/display/AI/Human+Decisions+Files+notes) - -## Creating a SambaNova Portal Account to access the documentation portal - -1. Go to [login.sambanova.ai](http://login.sambanova.ai/); - -2. Select the "Sign up" link at the bottom; - -3. Enter your information - - 1. Your ANL email address; - - 2. A password that you choose to access the site; - - 3. First name; - - 4. Last name; - - 5. Alternate email address; - - 6. Use 64693137 for the CLOUD ID; - - 7. Select "Register" button; - - 8. Note: The new web page may be displaying a QR code. Do not navigate away from it. Please edit this page to describe what -happenes for you. - -4. Verify your email address - - 1. Open your ANL email; - - 2. Open the email from Okta; - - 3. Select the "Activate Account" button; - - 4. Select the "Configure factor" button on the displayed web page; - - 5. Select either iPhone or Android for the device time on the new web page; - - 6. Install Okta Verify from the App Store/Google Play Store onto your mobile device.; - - 7. Select "Next" button on the web page; - -5. On your phone - - 1. Open Okta Verify app; - - 2. Select "Get Started" button; - - 3. Select "Next" button; - - 4. Select "Add Account" button; - - 5. Select "Organization" for Account Type; - - 6. Scan the QR Code shown in the browser; - -6. Sign in to the SambaNova web site - - 1. Select the "SambaNova Documentation" button. - -Authorization for sections of the SambaNova site uses the tuple (email -address, cloud id). For ANL users, th*ese **should** be an anl email -address and the cloud id specified above (64693137). (Note: the cloud -id can be changed in the SambaNova user settings.) -**If you are not at Argonne, please send us an email (ai@alcf.anl.gov) -for access. ** - -If you plan to publish, say to a conference, workshop or journal, we -have a review process wherein you share the draft with us -(pre-submission) at and we -will work with SambaNova for the requisite approvals. diff --git a/docs/polaris/compiling-and-linking/compiling-and-linking-overview.md b/docs/polaris/compiling-and-linking/compiling-and-linking-overview.md index 4e607cd2f..e756aa315 100644 --- a/docs/polaris/compiling-and-linking/compiling-and-linking-overview.md +++ b/docs/polaris/compiling-and-linking/compiling-and-linking-overview.md @@ -2,7 +2,7 @@ ## Compiling on Polaris Login and Compute Nodes -If your build system does not require GPUs for the build process, as is usually the case, compilation of GPU-accelerated codes is generally expected to work well on the Polaris login nodes. If your build system _does_ require GPUs, you cannot yet compile on the Polaris login nodes, as they do not currently have GPUs installed. You may in this case compile your applications on the Polaris compute nodes. Do this by submitting an [interactive single-node job](/polaris/running-jobs#Interactive-Jobs-on-Compute-Nodes), or running your build system in a batch job. +If your build system does not require GPUs for the build process, as is usually the case, compilation of GPU-accelerated codes is generally expected to work well on the Polaris login nodes. 
If your build system _does_ require GPUs, you cannot yet compile on the Polaris login nodes, as they do not currently have GPUs installed. In this case, you may compile your applications on the Polaris compute nodes. Do this by submitting an [interactive single-node job](../running-jobs.md#interactive-jobs-on-compute-nodes), or by running your build system in a batch job.
diff --git a/docs/running-jobs/example-job-scripts.md b/docs/running-jobs/example-job-scripts.md
index 29c8111a0..e48dd4cda 100644
--- a/docs/running-jobs/example-job-scripts.md
+++ b/docs/running-jobs/example-job-scripts.md
@@ -7,7 +7,7 @@ A simple example using a similar script on Polaris is available in the

 ## CPU MPI-OpenMP Examples

-The following `submit.sh` example submits a 1-node job to Polaris with 16 MPI ranks per node and 2 OpenMP threads per rank. See [Queues](./job-and-queue-scheduling/#queues) for details on practical limits to node counts and job times for different sizes of jobs.
+The following `submit.sh` example submits a 1-node job to Polaris with 16 MPI ranks per node and 2 OpenMP threads per rank. See [Queues](./job-and-queue-scheduling.md#queues) for details on practical limits to node counts and job times for different sizes of jobs.

 The [`hello_affinity`](https://github.com/argonne-lcf/GettingStarted/tree/master/Examples/Polaris/affinity_gpu) program is a compiled C++ code, which is built via `make -f Makefile.nvhpc` in the linked directory after cloning the [Getting Started](https://github.com/argonne-lcf/GettingStarted) repository.
diff --git a/docs/running-jobs/job-and-queue-scheduling.md b/docs/running-jobs/job-and-queue-scheduling.md
index 65545595d..9c9902203 100644
--- a/docs/running-jobs/job-and-queue-scheduling.md
+++ b/docs/running-jobs/job-and-queue-scheduling.md
@@ -89,7 +89,7 @@ Where:
 * `walltime=HH:MM:SS` specifying a wall time is mandatory at the ALCF. Valid wall times depend on the queue you are using. There is a table with the queues for each machine at the end of this section and in the machine specific documentation.
 * `filesystems=fs1:fs2:...` Specifying which filesystems your application uses is mandatory at ALCF. The reason for this is if a filesystem goes down, we have a way of making PBS aware of that and it won't run jobs that need that filesystem. If you don't specify filesystems you will receive the following error: `qsub: Resource: filesystems is required to be set.`
 * `place=scatter` is telling PBS you want each of your chunks on a separate vnode. By default, PBS will pack your chunks to get maximum utilization. If you requested `ncpus=1` and `chunks=64` **without** `place=scatter` on a system with `ncpus=64`, all your chunks would end up on one node.
-* Your job script: See [Example Job Scripts](../example-job-scripts) for more information about how to build your job script. For options that wont change, you do have the option of taking things off the command line and putting them in your job script. For instance the above command line could be simplified to `qsub -l select=<#> ` if you added the following to the top (the PBS directives have to be before any executable line) of your job script:
+* Your job script: See [Example Job Scripts](./example-job-scripts.md) for more information about how to build your job script. For options that won't change, you do have the option of taking things off the command line and putting them in your job script. 
For instance the above command line could be simplified to `qsub -l select=<#> ` if you added the following to the top (the PBS directives have to be before any executable line) of your job script: ```bash #PBS -A diff --git a/docs/services/jenkins.md b/docs/services/jenkins.md index 41f2343bb..5e98e0f16 100644 --- a/docs/services/jenkins.md +++ b/docs/services/jenkins.md @@ -1,7 +1,7 @@ # Jenkins on Theta ## Jenkins to be decommissioned -New projects should request access to use our GitLab-CI-based service. You can learn how to request access in our documentation found [here](/services/gitlab-ci/#quickstart). +New projects should request access to use our GitLab-CI-based service. You can learn how to request access in our documentation found [here](./gitlab-ci.md#quickstart). Existing projects can continue to use Jenkins. We will notify projects when we have the date it will be retired. Projects will have ample notice to migrate their work to our GitLab-CI service. diff --git a/docs/stylesheets/alcf-extra.css b/docs/stylesheets/alcf-extra.css index 589064f0e..6c4b23aed 100644 --- a/docs/stylesheets/alcf-extra.css +++ b/docs/stylesheets/alcf-extra.css @@ -913,3 +913,12 @@ footer a:hover { .js-dropdown-hidden { display: none; } + +table { + table-layout: fixed; + max-width: 100%; +} + +.md-typeset code { + overflow-wrap: break-word; +} diff --git a/docs/theta-gpu/data-science-workflows/building-python-packages.md b/docs/theta-gpu/data-science-workflows/building-python-packages.md index 6fb47e406..df42771c2 100644 --- a/docs/theta-gpu/data-science-workflows/building-python-packages.md +++ b/docs/theta-gpu/data-science-workflows/building-python-packages.md @@ -4,7 +4,7 @@ To build Python packages for ThetaGPU, there are two options: build on top of a ## Build on ThetaGPU compute using Conda To build on ThetaGPU compute and install your own packages, login to theta and then submit an interactive job to log on to ThetaGPU compute node. -Please see [Running PyTorch with Conda](/dl-frameworks/running-pytorch-conda.md) or [Running TensorFlow with Conda](/dl-frameworks/running-tensorflow-conda/index.html) for more information. +Please see [Running PyTorch with Conda](./dl-frameworks/running-pytorch-conda.md) or [Running TensorFlow with Conda](./dl-frameworks/running-tensorflow-conda.md) for more information. ## Building on top of a container At the moment, you will need two shells to do this: have one open on a login node (for example, ```thetaloginN```, and one open on a compute node (```thetagpuN```). First, start the container in interactive mode: diff --git a/docs/theta-gpu/queueing-and-running-jobs/job-and-queue-scheduling.md b/docs/theta-gpu/queueing-and-running-jobs/job-and-queue-scheduling.md index 1471a22ce..bc6d76b5d 100644 --- a/docs/theta-gpu/queueing-and-running-jobs/job-and-queue-scheduling.md +++ b/docs/theta-gpu/queueing-and-running-jobs/job-and-queue-scheduling.md @@ -13,7 +13,7 @@ As with all Argonne Leadership Computing Facility production systems, job priori * job duration - shorter duration jobs will accumulate priority more quickly, so it is best to specify the job run time as accurately as possible ### Reservations and Scheduling Policy -Some work will require use of Theta that requires deviation from regular policy. On such occasions, normal reservation policy applies. Please send the [regular form](/docs/theta/queueing-and-running-jobs/machine-reservations.md) no fewer than five (5) business days in advance. 
+Some work will require use of Theta in ways that deviate from regular policy. On such occasions, normal reservation policy applies. Please send the [regular form](../../theta/queueing-and-running-jobs/machine-reservations.md) no fewer than five (5) business days in advance.

 ### Monday Maintenance
 When the ALCF is on a regular business schedule, preventive maintenance is typically scheduled on alternate Mondays. The showres command may be used to view pending and active maintenance reservations.
diff --git a/docs/theta/data-science-workflows/keras.md b/docs/theta/data-science-workflows/keras.md
index ba0806350..f9ba8cee4 100644
--- a/docs/theta/data-science-workflows/keras.md
+++ b/docs/theta/data-science-workflows/keras.md
@@ -7,26 +7,26 @@
 On Theta, we support TensorFlow backend for Keras. To use the datascience Keras module on Theta, please load the following two modules:
 ```
 module load datascience/keras-2.2.4
-
+
 module load datascience/tensorflow-1.12
 ```
-Notice that the datascience/tensorflow-* modules were compiled with AVX512 extension on Theta. Therefore, it could not run on login node, otherwise it will issue an “illegal instruction” fault. One has to submit the job to KNL nodes (see TensorFlow documentation for details).
+Notice that the `datascience/tensorflow-*` modules were compiled with the AVX512 extension on Theta. Therefore, they cannot run on the login nodes; they will issue an "illegal instruction" fault. One has to submit the job to the KNL nodes (see the TensorFlow documentation for details).

-Since we use TensorFlow as the backend, all the optimal environmental setups (Threading + affinity) are applicable here. Please visit the [Tensorflow documentation page](tensorflow) for the optimal setting.
+Since we use TensorFlow as the backend, all the optimal environment settings (threading + affinity) apply here. Please visit the [TensorFlow documentation page](tensorflow.md) for the optimal settings.

-We do not see any incompatibility issues in using different versions of keras and tensorflow as those specified above. Feel free to change other versions of keras or TensorFlow. Currently, we support version 2.2.2 and 2.2.4.
+We have not seen incompatibility issues when using versions of Keras and TensorFlow other than those specified above; feel free to use other versions. Currently, we support versions 2.2.2 and 2.2.4.

 ## Distributed learning using Horovod
 We support distributed learning using Horovod. To use it please load datascience/horovod-0.15.2 module. Please change your python script accordingly
-### Initialize Horovod by adding the following lines to the beginning of your python script.
+### Initialize Horovod by adding the following lines to the beginning of your Python script.
 ```
 import horovod.keras as hvd
-
+
 hvd.init()
 ```
-After this initialization, the total number of ranks and the rank id could be access through hvd.rank(), hvd.size() functions.
+After this initialization, the total number of ranks and the rank ID can be accessed through the hvd.rank() and hvd.size() functions.

 ## Scale the learning rate.
 Typically, since we use multiple workers, the global batch is usually increased n times (n is the number of workers). The learning rate should increase proportionally as follows (assuming that the learning rate initially is 0.01). 
@@ -34,19 +34,19 @@
 ```
 opt = keras.optimizers.Adadelta(1.0 * hvd.size())
 ```
-In some case, 0.01*hvd.size() might be too large, one might want to have some warming up steps with smaller learning rate.
+In some cases, `0.01*hvd.size()` might be too large; one might want some warm-up steps with a smaller learning rate.

 ### Wrap the optimizer with Distributed Optimizer
 ```
 opt = hvd.DistributedOptimizer(opt)
 ```
-In such case, opt will automatically average the loss and gradients among all the workers and then perform update.
+In this case, `opt` will automatically average the gradients across all the workers and then apply the update.

 ### Broadcast the model from rank 0, so that all the workers will have the same starting point
 ```
 callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]
 ```
-Notice that by default, TensorFlow will initialize the parameters randomly. Therefore, by default, different workers will have different parameters. So it is crucial to broadcast the model from rank 0 to other ranks.
+Notice that by default, TensorFlow initializes the parameters randomly, so different workers will start with different parameters. It is therefore crucial to broadcast the model from rank 0 to the other ranks.

 ### Letting only rank 0 write checkpoints
 ```
 if hvd.rank() == 0:
     callbacks.append(keras.callbacks.ModelCheckpoint('./checkpoint-{epoch}.h5'))
 ```

 ### Loading data according to rank ID
-Since we are using data parallel scheme. Different ranks shall process different data. One has to change the data loader part of the python script to ensure different ranks read different mini batches of data.
+Since we are using a data-parallel scheme, different ranks must process different data. One has to change the data loader part of the Python script to ensure different ranks read different mini-batches of data.

 #### Example
 A simple example for doing linear regression using Keras + Horovod is put in the following directory on Theta:
-/projects/SDL_Workshop/hzheng/examples/keras/linreg
-
-linreg_keras.py is the python script, and qsub.sc is the COBALT submission script.
+/projects/SDL_Workshop/hzheng/examples/keras/linreg
+
+linreg_keras.py is the Python script, and qsub.sc is the COBALT submission script.
diff --git a/docs/theta/performance-tools/intel-advisor.md b/docs/theta/performance-tools/intel-advisor.md
index 0a9a5aca9..ba7808674 100644
--- a/docs/theta/performance-tools/intel-advisor.md
+++ b/docs/theta/performance-tools/intel-advisor.md
@@ -98,6 +98,6 @@ There are three other types of collections that can be performed with Advisor fo
 ## Additional Information:
 There are many command line options. See [2] for more details on all of the options, and its more comprehensive user guide, also available on Intel’s website.
 - [1] Williams, Samuel, Andrew Waterman, and David Patterson. "Roofline: an insightful visual performance model for multicore architectures." Communications of the ACM 52.4 (2009): 65-76.
-- [2] Intel. “Get Started with Intel Advisor.” Intel® Software, Intel, 18 Oct. 2018, [software.intel.com/en-us/get-started-with-advisor-for-more-information](software.intel.com/en-us/get-started-with-advisor-for-more-information)
+- [2] Intel. “Get Started with Intel Advisor.” Intel® Software, Intel, 18 Oct.
2018
diff --git a/docs/theta/programming-models/openmp-theta.md b/docs/theta/programming-models/openmp-theta.md
index bc16757f3..25dcd29a3 100644
--- a/docs/theta/programming-models/openmp-theta.md
+++ b/docs/theta/programming-models/openmp-theta.md
@@ -21,7 +21,7 @@ To enable OpenMP, use the following flags in your compile line, depending on the
 | LLVM | PrgEnv-llvm | "-fopenmp" |

 ## Running Jobs with OpenMP on Theta
-To run jobs on Theta with OpenMP threads, the OpenMP environment variable OMP_NUM_THREADS will need to be set to the desired number of threads per MPI rank, and certain flags in the aprun command will need to be set. Some examples are given below, and more information about running is here: [Affinity on Theta](/docs/theta/queueing-and-running-jobs/affinity-theta.md).
+To run jobs on Theta with OpenMP threads, the OpenMP environment variable `OMP_NUM_THREADS` will need to be set to the desired number of threads per MPI rank, and certain flags in the aprun command will need to be set. Some examples are given below, and more information about running is here: [Affinity on Theta](../queueing-and-running-jobs/affinity-theta.md).

 ### Source code for xthi.c:
 ```
@@ -94,11 +94,11 @@ int main(int argc, char *argv[])
 $ cc -qopenmp xthi.c -o xthi # PrgEnv-intel
 ```

-2. Run with aprun (either in a batch script that is submitted to the job scheduler a or on the command line as part of an interactive session. See [job scheduling](/docs/theta/queueing-and-running-jobs/job-and-queue-scheduling.md) for more details about how to run.
+2. Run with `aprun`, either in a batch script that is submitted to the job scheduler or on the command line as part of an interactive session. See [job scheduling](../queueing-and-running-jobs/job-and-queue-scheduling.md) for more details about how to run.

 Mapping of OpenMP threads to hardware threads on a KNL node can be achieved with the `--cc` option in aprun.

-One common option described in more detail on [Affinity on Theta](/docs/theta/queueing-and-running-jobs/affinity-theta.md) is to use --cc depth with the -d and -j flags:
+One common option described in more detail on [Affinity on Theta](../queueing-and-running-jobs/affinity-theta.md) is to use `--cc depth` with the `-d` and `-j` flags:

 ```
 $ aprun -n 1 -N 1 -d 8 -j 1 -cc depth -e OMP_NUM_THREADS=8 ./a.out
 Hello from rank 0, thread 0, on nid03554. (core affinity = 0)
 ...
 Hello from rank 0, thread 6, on nid03554. (core affinity = 6)
 Application 19165961 resources: utime ~1s, stime ~1s, Rss ~6284, inblocks ~0, outblocks ~8
 ```

-Another option is to use --cc none with OpenMP affinity controls:
+Another option is to use `--cc none` with OpenMP affinity controls:

 ```
 $ aprun -n 1 -N 1 -cc none -e OMP_NUM_THREADS=8 -e OMP_PROC_BIND=spread -e OMP_PLACES=cores ./a.out
diff --git a/docs/theta/queueing-and-running-jobs/affinity-theta.md b/docs/theta/queueing-and-running-jobs/affinity-theta.md
index a7566c782..d6f58ae7e 100644
--- a/docs/theta/queueing-and-running-jobs/affinity-theta.md
+++ b/docs/theta/queueing-and-running-jobs/affinity-theta.md
@@ -12,7 +12,7 @@ The numbers inside the quadrants identify the specific hardware threads in the c
Visual representation of two tiles in a KNL
-Using the -j, -d, and --cc arguments to aprun and environment variables, MPI ranks and threads can be assigned to run on specific hardware threads. For more information about the flags to aprun, see [Running Jobs on Theta](/docs/theta/queueing-and-running-jobs/job-and-queue-scheduling.md). Four examples of using aprun are given below followed by descriptions of two methods for displaying the mapping produced.
+Using the `-j`, `-d`, and `--cc` arguments to aprun, together with environment variables, MPI ranks and threads can be assigned to run on specific hardware threads. For more information about the flags to aprun, see [Running Jobs on Theta](../queueing-and-running-jobs/job-and-queue-scheduling.md). Four examples of using aprun are given below, followed by descriptions of two methods for displaying the mapping produced.

 **Note:** Logical core and hardware thread are used interchangeably below.
diff --git a/mkdocs.yml b/mkdocs.yml
index 9ba17bb3f..90a6f2ed0 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -344,6 +344,10 @@ markdown_extensions:
   - pymdownx.tasklist:
       custom_checkbox: true
+validation:
+  omitted_files: warn
+  absolute_links: warn
+  unrecognized_links: warn

 repo_name: 'argonne-lcf/alcf-userguide'
 repo_url: 'https://github.com/argonne-lcf/alcf-userguide'
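---

Note for reviewers: the `validation` block added to `mkdocs.yml` above only *warns* about omitted files, absolute links, and unrecognized links. To keep the links fixed in this patch from regressing, those warnings can be promoted to build failures by building in strict mode. A minimal sketch, assuming MkDocs 1.5+ (where the `validation` setting exists) and that the site's theme and plugins are already installed in the environment:

```bash
# Build the site with warnings treated as errors; any remaining broken
# relative link or absolute /docs/... link will fail the build.
mkdocs build --strict
```

Running this locally (or in CI) before merging link-fix patches like this one makes the cleanup self-enforcing.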