Add docs for how to add debug symbols [skip ci] #1575

Merged · 2 commits · Nov 21, 2023
33 changes: 33 additions & 0 deletions CONTRIBUTING.md
@@ -259,6 +259,39 @@ class NormalCaseTest {
}
```

### Debugging
You can selectively add debug symbols to C++ files in spark-rapids-jni by modifying the appropriate
`CMakeLists.txt` files. The flag to add depends on the kind of code you are debugging. For CUDA
code, add the `-G` flag to generate device debug symbols:

```cmake
set_source_files_properties(src/row_conversion.cu PROPERTIES COMPILE_OPTIONS "-G")
```

For C++ code, add the `-g` flag to generate host debug symbols:

```cmake
set_source_files_properties(row_conversion.cpp PROPERTIES COMPILE_OPTIONS "-g")
```

For debugging C++ tests, you need to add both device debug symbols to the CUDA kernel files exercised
by the test (in `src/main/cpp/CMakeLists.txt`) **and** host debug symbols to the C++ files that
implement the test (in `src/main/cpp/tests/CMakeLists.txt`), as sketched below.
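
For example, a minimal sketch of the two changes, assuming the test exercises the kernels in
`src/row_conversion.cu` and is implemented in a hypothetical `row_conversion_test.cpp`:

```cmake
# In src/main/cpp/CMakeLists.txt: device debug symbols for the kernels under test
set_source_files_properties(src/row_conversion.cu PROPERTIES COMPILE_OPTIONS "-G")

# In src/main/cpp/tests/CMakeLists.txt: host debug symbols for the test sources
# (the file name below is illustrative; use the actual test source file)
set_source_files_properties(row_conversion_test.cpp PROPERTIES COMPILE_OPTIONS "-g")
```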

You can then use `cuda-gdb` to debug the gtest (NOTE: if you build in Docker, start an interactive
shell first and then run `cuda-gdb` inside it; once the test is built, you do not necessarily need to
run `cuda-gdb` in Docker):

```bash
./build/run-in-docker
bash-4.2$ cuda-gdb target/cmake-build/gtests/ROW_CONVERSION
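# Inside cuda-gdb, standard GDB commands apply. The breakpoint location and
# gtest filter below are illustrative examples only, not required values.
(cuda-gdb) break row_conversion.cu:100
(cuda-gdb) run --gtest_filter='*RowConversion*'
(cuda-gdb) backtrace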
```

Comment on lines +285 to +286 (Collaborator):

nit: once it is built we do not necessarily have to run it inside docker.

Let us also mention the Nsight VSCode extension https://docs.nvidia.com/nsight-visual-studio-code-edition/cuda-debugger/index.html

You can also use the [NVIDIA Nsight Visual Studio Code Edition](https://docs.nvidia.com/nsight-visual-studio-code-edition/cuda-debugger/index.html)
extension to debug within Visual Studio Code.
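
As a starting point, here is a minimal `launch.json` sketch for that extension, following the pattern
in its documentation; the `program` path simply reuses the gtest binary path from the example above
and may need adjusting for your build layout:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "CUDA C++: debug ROW_CONVERSION gtest",
            "type": "cuda-gdb",
            "request": "launch",
            "program": "${workspaceFolder}/target/cmake-build/gtests/ROW_CONVERSION"
        }
    ]
}
```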

To debug libcudf code, please see [Debugging cuDF](thirdparty/cudf/CONTRIBUTING.md#debugging-cudf)
in the cuDF [CONTRIBUTING](thirdparty/cudf/CONTRIBUTING.md) guide.

### Benchmarks
C++ benchmarks using NVBench live in the `src/main/cpp/benchmarks` directory.
Building these benchmarks requires the `-DBUILD_BENCHMARKS` build option. Once built, the benchmarks