Problems on Mis-Recognitions on Non-Living Objects #81

Open
TMaysGGS opened this issue Mar 2, 2020 · 2 comments


TMaysGGS commented Mar 2, 2020

Hi @ipazc,

Thanks very much for sharing your work; this version of mtcnn is really accurate compared with some other implementations.

However, I found some cases where the detector consistently detects non-living objects as human faces, such as mugs with patterns, down jackets, parts of coats with metal buttons, and hands holding a cellphone. I tried adding some preprocessing steps before the image goes into the network, but the results did not improve significantly. I also extracted the confidence scores for those pictures with non-living objects: they can reach 0.9+ after passing through the ONet, so adjusting the ONet threshold does not help. Has anyone met similar situations, and what modifications could be applied here to eliminate the errors?
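For reference, here is a minimal sketch of how I read the per-detection confidences out of the detector (the image path is a placeholder, not one of the actual samples):

from mtcnn import MTCNN
import cv2

detector = MTCNN()
# Load and convert to RGB, as the detector expects RGB input.
img = cv2.cvtColor(cv2.imread("false_positive_sample.jpg"), cv2.COLOR_BGR2RGB)

# detect_faces returns a list of dicts with 'box', 'confidence' and 'keypoints';
# on these non-face objects the 'confidence' values still come out around 0.9+.
for det in detector.detect_faces(img):
    print(det["box"], det["confidence"])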

Below are some of the mis-recognized samples.
[three example images attached]

Any advice would help. Thanks.

ipazc (Owner) commented Mar 2, 2020

Hi, you could try adjusting the steps_threshold parameter in the MTCNN constructor (which by default is [0.6, 0.7, 0.7]) so that it better fits your image distribution.

from mtcnn import MTCNN

# One threshold per stage: [PNet, RNet, ONet]
detector = MTCNN(steps_threshold=[0.6, 0.7, 0.7])

Those are the default values given by the MTCNN authors, but they might not be the best for every scenario.
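For example, a stricter last-stage threshold could be tried like this (the 0.85 value and the image path are only illustrative, not recommended settings):

from mtcnn import MTCNN
import cv2

# Keep the PNet/RNet defaults but require a higher ONet score before a box is accepted.
detector = MTCNN(steps_threshold=[0.6, 0.7, 0.85])

img = cv2.cvtColor(cv2.imread("test.jpg"), cv2.COLOR_BGR2RGB)
faces = detector.detect_faces(img)
print(len(faces), "detections kept")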

TMaysGGS (Author) commented Mar 2, 2020


Hi @ipazc,

Thank you for your reply.

I tried changing the threshold values, but many of those false positives get confidences as high as 0.8–0.9 from both RNet and ONet, so raising the thresholds does not work.

I also tried freezing your network parameters and adding a "Dense - PReLU - Dense - Classifier" branch at the end of the main branch, which gives ONet four parallel outputs, one of them for filtering those FPs. It is still not working (maybe because I do not have enough samples).
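For concreteness, here is a minimal Keras sketch of that kind of frozen-trunk-plus-head setup; the trunk below is only a stand-in for the pretrained ONet features, the layer sizes are placeholders, and the original three ONet outputs are omitted:

from tensorflow.keras import layers, Model

# Stand-in for the pretrained ONet trunk; in practice this would be the existing
# ONet feature layers loaded with the released weights and then frozen.
inp = layers.Input(shape=(48, 48, 3))
x = layers.Conv2D(32, 3, activation="relu")(inp)
x = layers.Flatten()(x)
features = layers.Dense(256, name="trunk_features")(x)
trunk = Model(inp, features)
trunk.trainable = False  # freeze the pretrained weights

# Extra parallel branch: Dense - PReLU - Dense - Classifier, trained to separate
# real faces from the non-living false positives.
h = layers.Dense(128)(trunk.output)
h = layers.PReLU()(h)
h = layers.Dense(64)(h)
fp_prob = layers.Dense(2, activation="softmax", name="fp_classifier")(h)

model = Model(trunk.input, fp_prob)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")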

PS: I did not retrain your network as a whole, because I wrote the training code myself and for now I still cannot get a relatively good PNet, so I am afraid of breaking the well-balanced weights already in your network. Here is my training code: https://github.com/TMaysGGS/MTCNN-Keras.
