
[FEA] Pre-allocated input and output buffer #18

Open
oyilmaz-nvidia opened this issue May 21, 2021 · 0 comments

Comments

@oyilmaz-nvidia
Contributor

Input and output buffers are currently requested through the Triton APIs on every new request and disposed of once the output is sent. If we kept a pre-allocated buffer for the input and output, we could shorten the query response time. The difficulty is knowing how large these buffers need to be: if we know the max batch size, we can allocate based on that, but if that batch size is large, the pre-allocation ties up a considerable amount of memory.

Maybe, for small batches, we can keep a pre-allocated buffer for input and output, and for larger batch sizes request new space as we do today. A rough sketch of that idea is below.
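To make the proposal concrete, here is a minimal sketch in plain C++ (not the actual Triton allocation APIs): a buffer sized for small batches is allocated once and reused across requests, while anything larger falls back to a fresh allocation. The class name, threshold, and allocation strategy are hypothetical placeholders.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical helper illustrating the proposed scheme: reuse one
// pre-allocated buffer for requests up to a "small batch" size, and
// fall back to an on-demand allocation for larger requests.
class ReusableBuffer {
 public:
  explicit ReusableBuffer(std::size_t small_batch_bytes)
      : preallocated_(small_batch_bytes) {}

  // Returns a pointer to a buffer of at least `bytes` bytes,
  // valid until the next call to Acquire().
  void* Acquire(std::size_t bytes) {
    if (bytes <= preallocated_.size()) {
      return preallocated_.data();   // small batch: reuse, no allocation
    }
    overflow_.resize(bytes);         // large batch: allocate new space
    return overflow_.data();
  }

 private:
  std::vector<std::byte> preallocated_;  // lives for the model's lifetime
  std::vector<std::byte> overflow_;      // grown only for large requests
};
```

One such buffer could be created per model instance, sized from the configured max batch size up to some cap; whether the threshold should be tied to the batch size or a fixed byte count is an open question here.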
