From feda43599c6f626ec9b89c165f444a27bebe21f7 Mon Sep 17 00:00:00 2001
From: Yan Feng
Date: Wed, 4 Dec 2024 12:49:35 +0800
Subject: [PATCH 1/2] Fix content error in documentation

Signed-off-by: Yan Feng
---
 CONTRIBUTING.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 1bb9fdf6b..baea78380 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -235,12 +235,12 @@ and build inside WSL2, e.g.
 ### Testing
 Java tests are in the `src/test` directory and c++ tests are in the `src/main/cpp/tests` directory.
 The c++ tests are built with the `-DBUILD_TESTS` command line option and will build into the
-`target/cmake-build/gtests/` directory. Due to building inside the docker container, it is possible
+`target/jni/cmake-build/gtests/` directory. Due to building inside the docker container, it is possible
 that the host environment does not match the container well enough to run these executables,
 resulting in errors finding libraries. The script `build/run-in-docker` was created to help with
 this situation. A test can be run directly using this script or the script can be run without any
 arguments to get into an interactive shell inside the container.
-```build/run-in-docker target/cmake-build/gtests/ROW_CONVERSION```
+```build/run-in-docker target/jni/cmake-build/gtests/ROW_CONVERSION```
 
 #### Testing with Compute Sanitizer
 [Compute Sanitizer](https://docs.nvidia.com/compute-sanitizer/ComputeSanitizer/index.html) is a
@@ -311,12 +311,12 @@ in the cuDF [CONTRIBUTING](thirdparty/cudf/CONTRIBUTING.md) guide.
 ### Benchmarks
 Benchmarks exist for c++ benchmarks using NVBench and are in the `src/main/cpp/benchmarks` directory.
 To build these benchmarks requires the `-DBUILD_BENCHMARKS` build option. Once built, the benchmarks
-can be found in the `target/cmake-build/benchmarks/` directory. Due to building inside the docker
+can be found in the `target/jni/cmake-build/benchmarks/` directory. Due to building inside the docker
 container, it is possible that the host environment does not match the container well enough to run
 these executables, resulting in errors finding libraries. The script `build/run-in-docker` was
 created to help with this situation. A benchmark can be run directly using this script or the
 script can be run without any arguments to get into an interactive shell inside the container.
-```build/run-in-docker target/cmake-build/benchmarks/ROW_CONVERSION_BENCH```
+```build/run-in-docker target/jni/cmake-build/benchmarks/ROW_CONVERSION_BENCH```
 
 ## Code contributions
 ### Your first issue

From 8536fc83a65f2846b4b1f2ff1dd6a66c8db4b207 Mon Sep 17 00:00:00 2001
From: Yan Feng
Date: Wed, 4 Dec 2024 15:59:09 +0800
Subject: [PATCH 2/2] Fix content error in documentation

Signed-off-by: Yan Feng
---
 src/main/cpp/faultinj/README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/main/cpp/faultinj/README.md b/src/main/cpp/faultinj/README.md
index 08ac1e028..a22748eac 100644
--- a/src/main/cpp/faultinj/README.md
+++ b/src/main/cpp/faultinj/README.md
@@ -33,7 +33,7 @@ Spark local mode is a single CUDA process. We can test it as any standalone
 single-process application.
 
 ```bash
```bash -CUDA_INJECTION64_PATH=$PWD/target/cmake-build/faultinj/libcufaultinj.so \ +CUDA_INJECTION64_PATH=$PWD/target/jni/cmake-build/faultinj/libcufaultinj.so \ FAULT_INJECTOR_CONFIG_PATH=src/test/cpp/faultinj/test_faultinj.json \ $SPARK_HOME/bin/pyspark \ --jars $SPARK_RAPIDS_REPO/dist/target/rapids-4-spark_2.12-22.08.0-SNAPSHOT-cuda11.jar \ @@ -44,7 +44,7 @@ $SPARK_HOME/bin/pyspark \ $SPARK_HOME/bin/spark-shell \ --jars $SPARK_RAPIDS_REPO/dist/target/rapids-4-spark_2.12-22.08.0-SNAPSHOT-cuda11.jar \ --conf spark.plugins=com.nvidia.spark.SQLPlugin \ - --files ./target/cmake-build/faultinj/libcufaultinj.so,./src/test/cpp/faultinj/test_faultinj.json \ + --files ./target/jni/cmake-build/faultinj/libcufaultinj.so,./src/test/cpp/faultinj/test_faultinj.json \ --conf spark.executorEnv.CUDA_INJECTION64_PATH=./libcufaultinj.so \ --conf spark.executorEnv.FAULT_INJECTOR_CONFIG_PATH=test_faultinj.json \ --conf spark.rapids.memory.gpu.minAllocFraction=0 \