When using the PARSeq model for inference, a blank input image produces hallucinated output (text made up out of thin air).
To work around this, I generated 10,000 blank images of varying lengths and labeled each with a single space ‘ ’, but these samples were ignored when fine-tuning the model:
# We filter out samples which don't contain any supported characters
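A minimal sketch of what that comment implies (a hypothetical simplification, not the repo's actual code; `normalize_label`, the charset string, and the sample tuples are all illustrative assumptions): the label is stripped of whitespace and restricted to the supported charset, and if nothing remains the sample is dropped, which is why a label consisting of a single space disappears.

```python
# Hypothetical sketch of charset-based label filtering (simplified,
# not the actual parseq dataset code). With whitespace removal on,
# a label that is only a space normalizes to '' and gets dropped.
def normalize_label(label: str, charset: str, remove_whitespace: bool = True) -> str:
    if remove_whitespace:
        label = ''.join(label.split())  # strip all whitespace
    # keep only characters present in the supported charset
    return ''.join(c for c in label if c in charset)

charset = "0123456789abcdefghijklmnopqrstuvwxyz"
samples = [("img_0001.png", "hello"), ("img_0002.png", " ")]

# Samples whose normalized label is empty are filtered out,
# so the blank image labeled with a single space is discarded.
kept = [(f, lbl) for f, lbl in samples if normalize_label(lbl, charset)]
print(kept)
```

Under this reading, simply setting `remove_whitespace` to false would not be enough on its own unless the space character is also part of the supported charset.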
How can I solve this problem?
Should I simply set remove_whitespace to false?
For blank images of varying lengths, should I set their labels to a single space or to different numbers of spaces depending on the length of the image?
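If length-dependent labels turn out to be the better option, one way to derive them is to scale the number of spaces with the image width (a hypothetical sketch; `avg_char_width`, `max_len`, and the scaling rule are assumptions, not anything from parseq):

```python
# Hypothetical sketch: derive a whitespace label whose length scales
# with image width, assuming a fixed average character width in pixels.
def blank_label(image_width: int, avg_char_width: int = 16, max_len: int = 25) -> str:
    # at least one space, capped at the model's maximum label length
    n_spaces = max(1, min(max_len, image_width // avg_char_width))
    return ' ' * n_spaces

for width in (32, 128, 512):
    print(repr(blank_label(width)))
```

This keeps very wide blank images from being labeled identically to narrow ones, at the cost of choosing an arbitrary pixels-per-character constant.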
parseq/strhub/data/dataset.py
Line 115 in 1902db0