Tracking more than 8 frames per sequence #10
Yes, the released model weights are for S=8. For longer tracking, you need to chain the model over time. There is code for this in chain_demo.py
Hi @aharley, thanks for your reply! I have another question: how can I obtain dense optical flow output similar to RAFT? In your method, in both examples, we need to choose a small set of points to track. In my case, I need the trajectories of every pixel in the starting frame.
In the part where you specify the start locations: change it to a dense grid, like this:
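A minimal sketch of such a dense grid, using numpy (the variable names `H`, `W`, `stride`, and `xy0` are illustrative, not the repo's exact ones; in the actual demo the result would be converted to a torch tensor of shape `(1, N, 2)` before being passed to the model):

```python
import numpy as np

# Dense grid of start coordinates for an H x W starting frame.
H, W = 360, 640
stride = 1  # 1 = every pixel; raise this to subsample the grid

ys, xs = np.meshgrid(np.arange(0, H, stride),
                     np.arange(0, W, stride),
                     indexing='ij')
# Flatten to an (N, 2) array of (x, y) query points, x first.
xy0 = np.stack([xs.reshape(-1), ys.reshape(-1)], axis=1)
```

With `stride = 1` this produces one query point per pixel, so N = H*W; coarser strides trade density for memory.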
If you run out of memory when trying to run the model on this many particles, split the list into batches of a size that suits your GPU, like this: https://github.com/aharley/pips/blob/main/test_on_davis.py#L111-L125
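The batching idea can be sketched like this; `run_model` here is a stand-in for the PIPs forward pass (a hypothetical interface, not the repo's exact signature), and the linked lines in test_on_davis.py show the real version:

```python
import numpy as np

def run_in_batches(points, run_model, batch_size=1024):
    """Split an (N, 2) array of query points into chunks that fit in
    GPU memory, run the tracker on each chunk, and concatenate the
    per-chunk trajectories along the point axis."""
    outs = []
    for i in range(0, len(points), batch_size):
        outs.append(run_model(points[i:i + batch_size]))
    return np.concatenate(outs, axis=0)
```

The output is identical to a single full-batch call; only the peak memory changes with `batch_size`.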
Thanks for the pointer! I will close the issue now.
Hi @aharley, I have a further question: in the DAVIS code, I see that you predict flows for the entire image using a sequence of 8 frames. However, in chain_demo.py, you show an example of a single tracked pixel. Have you tried extending chain_demo.py to dense predictions? I assume the confidence-score thresholding is important here to make sure the trajectory stays correct.
Tracking all pixels is generally very memory-heavy, and pairing that with the chaining is tricky but doable. The tricky part is that the chaining technique allows each target to choose a variable-length step size for re-initializing the tracker (within the 8-frame window).
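The per-target variable-step chaining can be sketched for a single target as follows. Here `track_8(xy, t)` is a hypothetical stand-in for running the S-frame model starting at frame `t` from position `xy`, returning an `(S, 2)` trajectory and `(S,)` per-step confidences; the real logic lives in chain_demo.py:

```python
import numpy as np

def chain_track(track_8, xy0, num_frames, conf_thresh=0.5):
    """Chain a fixed-window (S=8) tracker over a long video for one
    target. At each iteration we re-initialize from the last confident
    position inside the current window, so the effective step size
    varies per target, as described above."""
    traj = [np.asarray(xy0, dtype=float)]
    t = 0
    while t < num_frames - 1:
        coords, confs = track_8(traj[-1], t)
        # Step to the last confident frame in this window (at least 1).
        good = np.nonzero(confs[1:] > conf_thresh)[0]
        step = (good[-1] + 1) if len(good) else 1
        traj.extend(coords[1:step + 1])
        t += step
    return np.stack(traj[:num_frames])
```

Doing this densely means each of the H*W targets may re-initialize at a different frame, which is what makes batching the dense-plus-chained case awkward.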
Hi, I also want to figure out how to track multiple points in a frame over a longer duration. I've run demo.py and chain_demo.py, and as I understand it, demo.py takes a grid of points while chain_demo.py takes only one point. I would like to run it on longer sequences with different data to make sense of the outputs. Can either of these files be changed to do that?
Hi @phongnhhn92, have you had any luck running chain_demo.py with multiple points simultaneously and creating a single gif?
Hi,
In the demo.py file, when I tried changing S = 8 to S = 10, the model did not work. Did you hard-code the model to only work with 8 input frames at a time?