Fix excessive inference memory usage and add a PyTorch SuperPoint version #23
Hello, thank you for your hard work!
I suggest a minor code correction.
Improved GPU selection
Currently, many users either lack GPU memory or hit out-of-memory errors because inference requires too much of it, so the model fails to run. 😭
To mitigate this, I added a get_device_framework function to utils.py. It is not a perfect solution, but it selects a device more flexibly than before (see the sketch below).
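The PR description does not show the implementation, so the following is only a minimal sketch of what such a helper could look like. The `framework` argument, the memory threshold, and the fallback logic are assumptions for illustration, not the actual signature added in this PR:

```python
# Hypothetical sketch of a device-selection helper in the spirit of the
# proposed utils.get_device_framework; names and behaviour are assumptions.
import torch
import tensorflow as tf


def get_device_framework(framework: str = "torch", min_free_gb: float = 2.0):
    """Return a device for the given framework, preferring the GPU only
    when enough free memory is available and falling back to CPU otherwise."""
    if framework == "torch":
        if torch.cuda.is_available():
            free_bytes, _total = torch.cuda.mem_get_info()
            if free_bytes / 1024 ** 3 >= min_free_gb:
                return torch.device("cuda")
        return torch.device("cpu")
    # TensorFlow path: use the first visible GPU if any, otherwise the CPU.
    gpus = tf.config.list_physical_devices("GPU")
    return "/GPU:0" if gpus else "/CPU:0"
```

A caller could then do `device = get_device_framework("torch")` and move the SuperPoint model and tensors onto that device, instead of unconditionally assuming a GPU is present.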
Add PyTorch SuperPoint
Added the PyTorch version of SuperPoint along with a download path for the SuperPoint weights.
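The actual download path is not quoted in this description, so the snippet below is only a hypothetical sketch of how a weights path might be fetched and loaded; the URL, cache location, and helper name are placeholders, not values from this PR:

```python
# Hypothetical weight-loading sketch; URL and file names are placeholders.
import os
import torch

SUPERPOINT_WEIGHTS_URL = "https://example.com/superpoint_v1.pth"  # placeholder


def load_superpoint_weights(cache_path: str = "./models/superpoint_v1.pth") -> dict:
    """Download the SuperPoint checkpoint if it is missing and return its state dict."""
    if not os.path.exists(cache_path):
        os.makedirs(os.path.dirname(cache_path), exist_ok=True)
        torch.hub.download_url_to_file(SUPERPOINT_WEIGHTS_URL, cache_path)
    return torch.load(cache_path, map_location="cpu")
```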
Reformatted omniglue_extract.py to PEP 8 style while fixing an indentation error in that file.
Let me know what you think 🙂