
batch inference #24

Open
orkunozturk opened this issue Jan 15, 2019 · 1 comment

@orkunozturk

Hi Ghustwb,

For batch processing of images I changed BATCH_SIZE = 9 and const size_t size = width * height * sizeof(float3) * BATCH_SIZE, and filled uniform_data accordingly (uniform_data[volImg + line_offset + j] = ..., where volImg changes with the batch index). I also changed the first dim to 9 in the .prototxt. To obtain the detections I used indexing like output[(batch_idx * 100 + k) * 7 + 1]. There is only a person (class 3) in my frames. For batch_idx = 0 I get correct results (one person), but for the other batches the detection data looks like:

1, 0.80123, 0.257755, 0.764545, 0.86875, 0.909765
2, 7.80123, 0.364645, 0.26875, 0.809765, 0.654343

What am I doing wrong? Can you help me? Thanks!
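To make the layout concrete, here is a minimal sketch of what I am doing (the 300x300 input size, keep_top_k = 100, and the 7-value detection_out format are assumptions of this sketch; fillBatch, readDetections, and images are just illustrative names):

```cpp
#include <cstddef>

const int BATCH_SIZE = 9;
const int INPUT_W    = 300;  // assumed SSD input width
const int INPUT_H    = 300;  // assumed SSD input height
const int KEEP_TOP_K = 100;  // detections kept per image (assumed)
const int DET_SIZE   = 7;    // [image_id, label, conf, xmin, ymin, xmax, ymax]

// Copy each preprocessed CHW image into its slot of the batched input buffer.
void fillBatch(float* uniform_data, float* const* images)
{
    const size_t volChl = (size_t)INPUT_W * INPUT_H;  // one channel plane
    const size_t volImg = 3 * volChl;                 // one full image
    for (int b = 0; b < BATCH_SIZE; ++b)
        for (size_t j = 0; j < volImg; ++j)
            uniform_data[b * volImg + j] = images[b][j];  // b * volImg moves between images
}

// Walk the detections of every image in the batch.
void readDetections(const float* output)
{
    for (int b = 0; b < BATCH_SIZE; ++b) {
        for (int k = 0; k < KEEP_TOP_K; ++k) {
            const float* det = &output[(b * KEEP_TOP_K + k) * DET_SIZE];
            float label = det[1];  // same index as output[(batch_idx*100 + k)*7 + 1]
            float conf  = det[2];
            if (conf > 0.5f && label == 3.0f) {
                // det[3..6] should be the normalized box of a person
            }
        }
    }
}
```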

@dmonterom

dmonterom commented Feb 6, 2019

Hey, I had the same problem and I finally solved it:

  • In createConcatPlugin set the second param to false (so it will not ignore the batch size).
  • In cudaSoftmax (in the enqueue function of the softmax plugin) you need to change the first parameter to 1917 * 5 * batchSize.

I hope it helps!
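In code, the two changes look roughly like this (the createConcatPlugin(concatAxis, ignoreBatch) factory is the legacy TensorRT plugin API; the cudaSoftmax signature shown is an assumption about this repo's kernel, and only its first argument changes):

```cpp
#include <NvInfer.h>
#include <NvInferPlugin.h>

// Declared in this repo's softmax plugin sources; the exact signature
// here (count, channels, input, output) is an assumption.
void cudaSoftmax(int count, int channels, float* input, float* output);

// Change 1: pass ignoreBatch = false so the concat plugin keeps the
// real batch dimension instead of only handling one image.
nvinfer1::plugin::INvPlugin* concat =
    nvinfer1::plugin::createConcatPlugin(/*concatAxis=*/1,
                                         /*ignoreBatch=*/false);

// Change 2: the body of SoftmaxPlugin::enqueue, with the element count
// scaled by the runtime batch size. 1917 * 5 is the per-image count
// from the original code (1917 prior boxes).
int softmaxEnqueue(int batchSize, const void* const* inputs, void** outputs)
{
    cudaSoftmax(1917 * 5 * batchSize, 5,
                (float*)inputs[0], (float*)outputs[0]);
    return 0;
}
```

That would also explain the symptom above: with the old arguments those plugins only processed the first image, so batch 0 looked right and the remaining slots held garbage.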
