Evaluation always produces mAP of 0.0 when using backbones other than Resnet50 #647
Comments
Densenet is a community contribution and I have never really used it. If you find out what the problem is then a PR will be welcome :) |
I have the same problem when I changed the backbone to ShuffleNet. The loss is decreasing, but the mAP is always zero. However, when I trained the model with the resnet50 backbone, everything was okay. I still have not found the problem. Can anyone give me some advice? |
Do you evaluate during training, or after training using the evaluate tool? For densenet and mobilenet I noticed there is a bug when preprocessing the images in the evaluate tool: those backbones require a different preprocessing mode ('tf' instead of the default 'caffe'). |
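For reference, a minimal sketch of the two preprocessing conventions, assuming your keras-retinanet version exposes `preprocess_image(x, mode=...)` in `keras_retinanet.utils.image`:

```python
# Sketch of the two ImageNet preprocessing conventions. Assumption: your
# keras-retinanet version exposes preprocess_image(x, mode=...) in
# keras_retinanet.utils.image.
import numpy as np
from keras_retinanet.utils.image import preprocess_image

image = np.random.randint(0, 256, (800, 800, 3)).astype(np.float32)

# 'caffe' mode (the resnet50 default): BGR order + per-channel mean subtraction.
caffe_input = preprocess_image(image.copy(), mode='caffe')

# 'tf' mode (what the Keras mobilenet/densenet weights expect):
# scales pixel values to the range [-1, 1].
tf_input = preprocess_image(image.copy(), mode='tf')
```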
Yeah, I evaluate the mAP during training. The dataset I used is Pascal VOC 2007. I did not use a pretrained model, because I could not find pretrained ImageNet weights for ShuffleNet-V2. |
@songhuizhong I'm not sure what you mean by ShuffleNet-V2; we don't have that backbone in our repository. |
It is a model I built myself. Here is the paper introducing ShuffleNet-V2: https://arxiv.org/abs/1807.11164 |
In that case I can't help. I only have experience with ResNet backbones. If you find out the solution to this then a PR is welcome. |
Well, I found the problem; it was my fault. The reason the mAP is 0 is that the feature maps fed into the feature pyramid network were in the wrong order. |
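For anyone adding a custom backbone: the pyramid features must be passed in increasing-stride order. A minimal sketch, assuming the `retinanet` constructor from `keras_retinanet.models.retinanet`; the `build_my_backbone` helper and the layer names are hypothetical:

```python
# Sketch of wiring a custom backbone into the FPN. Assumption: the
# keras_retinanet.models.retinanet.retinanet constructor takes
# backbone_layers=[C3, C4, C5]; build_my_backbone and the layer names
# below are hypothetical placeholders for your own model.
import keras
from keras_retinanet.models.retinanet import retinanet

def custom_retinanet(num_classes, inputs=None):
    if inputs is None:
        inputs = keras.layers.Input(shape=(None, None, 3))
    backbone = build_my_backbone(inputs)  # hypothetical helper

    # Order matters: C3 (stride 8), C4 (stride 16), C5 (stride 32).
    # Passing these in the wrong order silently misaligns the anchors,
    # which shows up as a decreasing loss but an mAP stuck at 0.
    C3 = backbone.get_layer('stage3_out').output
    C4 = backbone.get_layer('stage4_out').output
    C5 = backbone.get_layer('stage5_out').output
    return retinanet(inputs=inputs, num_classes=num_classes,
                     backbone_layers=[C3, C4, C5])
```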
Alright, I'll assume this issue resolved then. |
Actually, the original issue was something else entirely, DenseNet, so I'm reopening. |
I ran into the same issue using resnet50 as the backbone (training and using evaluate.py). I fixed it by specifying image-min-side and image-max-side; otherwise the default values of these args don't match my image dimensions (256x256). |
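For small images, passing the flags explicitly looks something like this (flag names from the evaluate script; check --help for your version):
retinanet-evaluate --convert-model --image-min-side 256 --image-max-side 256 ./model/resnet50_csv_100.h5 csv ./val.csv ./class.csv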
Above all, thanks for your awesome work @hgaiser. I met the same problem as @yyannikb. With 'densenet121' as the backbone, I got massive detection results and all the box scores are 1; there is only that single value for the score. Consequently, it leads to a zero mAP. My project uses mammography images from the DDSM dataset. For training, I set |
When I trained with mobilenet224, I got the same issue. Has anyone resolved this problem? I would appreciate it if you could share your experience. Thanks a lot. |
This was the solution for mobilenet. You have to hack the 'evaluate_coco' method in coco_eval.py. |
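For reference, the change is along these lines (a sketch; the exact call site in coco_eval.py may differ between versions):

```python
# Sketch of the hack in keras_retinanet/utils/coco_eval.py (evaluate_coco).
# Assumption: your version calls generator.preprocess_image(image) at this
# point; image and generator come from the surrounding function.
from keras_retinanet.utils.image import preprocess_image

# before:
#   image = generator.preprocess_image(image)
# after, forcing the 'tf' scaling that the mobilenet/densenet weights expect:
image = preprocess_image(image, mode='tf')
```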
Hi, I also got the same problem when trying to run inference with a densenet121 backbone. Does someone already have an idea how to solve that? |
It's the same issue with mobilenet, just change the same place as @ozyilmaz commented. |
@ozyilmaz when I do this, it throws an error that `preprocess_image()` got an unexpected keyword argument 'mode'. |
@tonmoyborah , it is hard to guess but it seems like the generator object does not have the correct "preprocess_image" method. |
The same happened to me when using mobilenetv1/v2. Can anyone explain this strange behaviour? |
@tonmoyborah @ozyilmaz Hey, did you solve the error? I am trying to run RetinaNet with a mobilenet224_1.0 backbone and I got an mAP of 0. When I train with the change in eval.py (_get_detections), replacing image = generator.preprocess_image(image) with image = generator.preprocess_image(image, mode='tf') as mentioned by @ozyilmaz, I get the same 'unexpected keyword mode' error. |
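The TypeError happens because in some versions the generator's `preprocess_image` does not accept a `mode` argument. A workaround sketch that avoids editing the method signature (assuming the generator stores `preprocess_image` as an instance attribute, as recent versions do):

```python
# Sketch: patch the evaluation generator so every code path uses 'tf'
# preprocessing. Assumption: your Generator stores preprocess_image as an
# instance attribute (true for recent keras-retinanet versions).
from functools import partial
from keras_retinanet.utils.image import preprocess_image

generator.preprocess_image = partial(preprocess_image, mode='tf')
```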
When I train normally I get 0 mAP, as shown below. Can anyone help me with this? 10000/10000 [==============================] - 3158s 316ms/step - loss: 5.4151 - regression_loss: 2.5757 - classification_loss: 2.8393 - val_loss: 5.3902 - val_regression_loss: 2.5642 - val_classification_loss: 2.8260 |
Please use the keras-retinanet slack channel for usage questions, or read the readme to find out possible issues. |
I have the same issue. Backbone densenet201, weights downloaded from the Keras GitHub. Training with freeze-backbone and a custom CSV that worked well with every TensorFlow object detection model. Batch size 16, dataset of 27,000 images, one single class. Up to epoch 3 (81,000 iterations), retinanet-evaluate produces NO predicted bounding box on any of the 3,000 evaluation images! Can someone help, please? Just to add, MobileNet224_2 also does NOT produce any mAP at all. Very tiring ... :( ... Steve |
For mobilenet, I saw keras-retinanet is used in vehicle detection: |
Dear all, I came upon the same issue with DenseNet-121. While training, |
The reply below is still valid:
You could use resnet50, that should work. |
Could the issue be related to this? |
Yes, I already use ResNet-50 as the backbone and wanted to make a comparison with DenseNet-121. |
Hi, |
Hi @Uttaran-IITH, please see this, where with the guidance of @hgaiser and @ikerodl96 I found a way around this issue. |
@mariaculman18, Thank you for the suggestion. Unfortunately, the solution works only for Densenet but not for Resnet101 or Resnet152. In the case of Densenet121, the mAP is very low even on the training data. I assume that the model you have used in the code is the inference model.
|
@Uttaran-IITH yes, I only tried the solution for DenseNet-121. I cannot give you any suggestions for other backbones, sorry :( In my case, I got a |
Hi, I use resnet101, but the loss is always around 1. How can I reduce it? |
I only have one class |
retinanet-evaluate --convert-model ./model/resnet50_csv_100.h5 csv ./train.csv ./class.csv |
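For a non-default backbone, the evaluate call presumably also needs the --backbone flag, along the lines of (the model filename here is illustrative):
retinanet-evaluate --backbone resnet101 --convert-model ./model/resnet101_csv_100.h5 csv ./train.csv ./class.csv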
@MAGI003769 Hello, I met the same problem as you. With 'densenet201' as the backbone, I got strange detection results and all the scores are 1. Has your problem been solved? Thanks a lot. |
I would also like to know if this problem still persists. |
I have solved this problem.
|
I'm also facing a very strange problem. I get non-zero mAP value when evaluating during training but when I use convert_model.py and evaluate.py, I get zero mAP values. I'm facing this issue with the efficientnet backbones. |
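One plausible cause: during training the validation generator is created with the backbone's own `preprocess_image`, while a standalone evaluation may build the generator with the default. A sketch of constructing the evaluation generator the same way (assuming a CSV dataset and a `CSVGenerator` that accepts a `preprocess_image` kwarg, as recent versions do; the backbone name and paths are illustrative):

```python
# Sketch: build the evaluation generator with the same preprocessing that
# the backbone used during training. Assumptions: your CSVGenerator accepts
# a preprocess_image kwarg (recent keras-retinanet versions do); the
# backbone name and file paths below are illustrative.
from keras_retinanet import models
from keras_retinanet.preprocessing.csv_generator import CSVGenerator

backbone = models.backbone('resnet50')  # replace with your backbone name
generator = CSVGenerator(
    'val_annotations.csv',  # illustrative paths
    'classes.csv',
    preprocess_image=backbone.preprocess_image,
)
```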
I also encountered the same problem. I changed the backbone to detnet59, basically a modification of resnet50. I can see the loss decrease while training; however, the per-epoch evaluation on the test set is always 0. The resnet50 backbone works well. I was wondering if there is an error in the evaluation function. |
Your solution helped me as well. Why not make a PR? (And use **common_args in all generators.) |
Happy to know it worked for you :) I guess the contributors are aware of the problem. It would be better if they made the PR; I don't know how to do that :/ |
Well, I dared to create the PR: #1290. Let's see if it suits. |
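For context, the pattern is roughly this (a sketch of the train.py side; details may differ from the merged code):

```python
# Sketch of the train.py pattern: collect shared generator settings,
# including the backbone-specific preprocessing, and forward them to every
# generator via **common_args. `args` and `backbone` come from the
# surrounding script.
from keras_retinanet.preprocessing.csv_generator import CSVGenerator

common_args = {
    'batch_size':       args.batch_size,
    'image_min_side':   args.image_min_side,
    'image_max_side':   args.image_max_side,
    'preprocess_image': backbone.preprocess_image,
}

train_generator      = CSVGenerator(args.annotations, args.classes, **common_args)
validation_generator = CSVGenerator(args.val_annotations, args.classes, **common_args)
```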
I have the same problem. Have you found the solution? |
Use the solution here for Densenet. |
@mngata Have you managed to fix this mAP 0.0 issue? I changed the backbone to detnet59 and experienced the same issue. |
Thanks, it worked. |
First and foremost, thank you for the awesome package! The dataset I am using consists of satellite images with 29 different classes. I have been able to train and evaluate a retinanet model on this dataset using the default 'resnet50' backbone on a subset of the 29 classes.
However, when I switch over to training and evaluating a model with a different backbone network such as 'densenet121', all of the mAP scores for each class are zero. I see no errors during training (I am also using the random-transform flag for each epoch) or when converting the model (I also supply the --backbone='densenet121' flag), and it converts successfully. I can also see the losses being optimized during training, so it's definitely detecting and classifying the objects in the images.
I even tried using the original resnet50 model trained on a subset of classes to see if it would pick up those classes on the full dataset with 29 classes, and it still produces an output of zero. I looked at the validation_annotations.csv file for both cases and the formatting is identical, so I don't think it has to do with the annotation files.
I have attached the validation_annotations.csv file, the classes.csv file (converted to .txt files in order to attach them here)
common_classes.txt
common_validation_annotations.txt
Any ideas what could be going on?
EDIT: I just did a comparison of a Resnet50 model and a Densenet121 model, both trained on the same dataset that I know for sure works, and the problem is definitely with the densenet121 implementation, because the Resnet50 model is producing output during evaluation.