Hi,

We're currently using MAIA through your web interface to analyze videos and detect novelties. However, we've hit a limitation: the WebUI does not support direct video processing. This feature would be very useful for our workflow. Are there any plans to implement it in future updates?

To work around this limitation, we have developed a Python script that splits videos into individual images, which we then upload to MAIA. While this method works, it's not very efficient, so I'm curious whether better workarounds are available. For example, do you offer an API that could streamline this process and automate video analysis more effectively?

Additionally, I wanted to ask whether your models are publicly available for integration on local systems. We are interested in potentially fine-tuning them to better suit our specific use cases.

Thank you for your attention and assistance!
-
Hi!
Currently, we don't have any plans to directly support videos in MAIA. Videos will likely require a different workflow and user interface, as objects often occur in many consecutive video frames and you don't want to count them multiple times. For actual video annotation assistance, we will probably focus on enhancing BIIGLE's automatic object tracking in the video annotation tool first. If you want to use a MAIA-like workflow, extracting video frames as images, as you did, is the way to go.
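For reference, a minimal sketch of that kind of frame extraction (this assumes OpenCV; the file names and the frame interval are illustrative and should be tuned to your footage):

```python
import cv2  # pip install opencv-python
from pathlib import Path

def extract_frames(video_path, out_dir, every_nth=25):
    """Save every n-th frame of a video as a JPEG image."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(str(video_path))
    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video (or read error)
            break
        if index % every_nth == 0:
            cv2.imwrite(str(out / f"frame_{index:06d}.jpg"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# At 25 fps, every_nth=25 keeps roughly one image per second.
extract_frames("dive.mp4", "frames", every_nth=25)
```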
Yes, you can do everything via the API. If you upload the files as a storage request, you still have to wait for us to approve it, so this cannot be completely automated in a single pipeline. It could be, if you use your own storage location. Once the files are available, you can create a new volume and start a new MAIA job from your Python script.
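A rough sketch of what such a pipeline could look like with the `requests` library; the endpoint paths and payload fields below are illustrative only, so check the BIIGLE API documentation for the exact routes and parameters:

```python
import requests

BASE = "https://biigle.de/api/v1"
# The BIIGLE API uses HTTP basic auth with your e-mail and an API token.
auth = ("you@example.com", "YOUR_API_TOKEN")

# Hypothetical: create a volume for the extracted frames.
volume = requests.post(
    f"{BASE}/projects/1/volumes",
    auth=auth,
    json={
        "name": "Extracted video frames",
        "url": "https://your-storage.example.com/frames",  # approved storage
        "media_type": "image",
        "files": ["frame_000000.jpg", "frame_000025.jpg"],
    },
)
volume.raise_for_status()

# Hypothetical: start a new MAIA job on the new volume.
job = requests.post(
    f"{BASE}/volumes/{volume.json()['id']}/maia-jobs",
    auth=auth,
    json={"training_data_method": "novelty_detection"},
)
job.raise_for_status()
```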
MAIA in BIIGLE uses a standard Faster R-CNN model trained on COCO by default. The exact checkpoint file is configured here. This model is then fine-tuned on the training data of each MAIA job. The fine-tuned model is not stored and cannot be used for another MAIA job. Instead, you can use existing annotations as training data for new MAIA jobs. So the more annotations you accumulate, the more training data you have for new MAIA jobs.

This workflow still has a few gaps, though, as you currently cannot select training data from other volumes without scaling metadata (biigle/maia#126) or from multiple other volumes (biigle/maia#88). We'll be working on that in the future.
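If you want to experiment with a similar model locally, a generic starting point is the COCO-pretrained Faster R-CNN from torchvision. This is only a sketch of the general fine-tuning pattern, not MAIA's exact checkpoint or training code:

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# COCO-pretrained Faster R-CNN from torchvision (illustrative; MAIA's
# actual checkpoint and training setup may differ).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Swap the classification head for your own label set
# (here: background + one object class of interest).
num_classes = 2
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# From here, fine-tune on your own annotations with a standard
# torchvision detection training loop.
```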