
Question about prefetch implementation. #31

Open

zzhbrr opened this issue Nov 15, 2024 · 1 comment

zzhbrr commented Nov 15, 2024

Hi.

I have a question regarding the prefetch implementation in your framework.

As I understand it, prefetching and inference should ideally run concurrently on separate CUDA streams, and I noticed that there is some code related to CUDA streams in your framework.
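For context, this is the kind of overlap I have in mind, as a minimal sketch (the tensor names and sizes are placeholders, not taken from your code):

```python
import torch

# Minimal sketch of prefetch/compute overlap (placeholder names and sizes,
# not taken from the repo). The next layer's weight sits in pinned host
# memory so the host-to-device copy can be asynchronous.
copy_stream = torch.cuda.Stream()

cpu_weight = torch.randn(4096, 4096, pin_memory=True)
gpu_weight = torch.empty(4096, 4096, device="cuda")
x = torch.randn(4096, 4096, device="cuda")

with torch.cuda.stream(copy_stream):
    # Async H2D copy on the side stream: it shows up as a separate stream in
    # the profiler and can overlap with compute on the default stream.
    gpu_weight.copy_(cpu_weight, non_blocking=True)

# Compute for the current layer proceeds on the default stream meanwhile.
y = x @ x

# Before using the prefetched weight, the default stream waits for the copy.
torch.cuda.current_stream().wait_stream(copy_stream)
out = y @ gpu_weight
```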

I used the PyTorch profiler to profile readme_example.py and found that there is only one stream (stream7) and that the cudaMemcpy operations are blocking.
(screenshot: profiler trace showing a single stream, stream7, with blocking cudaMemcpy calls)
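In case it helps to reproduce, this is roughly how I collected the trace (the helper below is just a stand-in for whatever readme_example.py actually runs):

```python
import torch
from torch.profiler import profile, ProfilerActivity

def run_readme_example():
    # Placeholder workload standing in for the actual model call in readme_example.py.
    a = torch.randn(2048, 2048, device="cuda")
    for _ in range(10):
        a = a @ a
    torch.cuda.synchronize()

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    run_readme_example()

prof.export_chrome_trace("trace.json")  # streams are visible in chrome://tracing / Perfetto
```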

Could you please clarify how prefetching is implemented in your framework? (Apologies if I haven't fully grasped the code yet.) Additionally, could it be that I'm misunderstanding some basic concepts?

Thanks a lot!

drunkcoding (Contributor) commented

Thanks for pointing this out. There is some misalignment between the torch stream API and the CUDA stream API; at the end of the day, everything should go through the stream pool. Will update shortly.
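Roughly the direction of the fix, as an illustrative sketch only (the actual patch may look different): make sure any raw CUDA work uses the same stream handle that torch is tracking, rather than a stream created separately on the C++ side.

```python
import torch

# Illustrative sketch only, not the repo's actual code: keep torch ops and any
# raw CUDA calls on the same torch-managed stream instead of creating a second
# stream with cudaStreamCreate in extension code.
s = torch.cuda.Stream()
src = torch.randn(1024, 1024, pin_memory=True)
dst = torch.empty(1024, 1024, device="cuda")

with torch.cuda.stream(s):
    dst.copy_(src, non_blocking=True)  # queued on `s` by torch
    raw_handle = torch.cuda.current_stream().cuda_stream
    # `raw_handle` is the underlying cudaStream_t; a C++/CUDA extension that
    # issues cudaMemcpyAsync should be handed this handle so its copies land
    # on the stream torch is recording and synchronizing.
```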

drunkcoding self-assigned this Nov 26, 2024