
How to get face embedding / encoding? #102

Open
josephernest opened this issue Oct 30, 2020 · 6 comments

josephernest commented Oct 30, 2020

Congrats on this nice project, @ipazc!
I see the output of your algorithm is something like:

[
    {
        'box': [277, 90, 48, 63],
        'keypoints':
        {
            'nose': (303, 131),
            'mouth_right': (313, 141),
            'right_eye': (314, 114),
            'left_eye': (291, 117),
            'mouth_left': (296, 143)
        },
        'confidence': 0.99851983785629272
    }
]

i.e. it gives the bounding box and the keypoints (nose, mouth, eyes, etc.).

But how do I get a face embedding / face encoding, so that I can do face identification?
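
For context, here is a minimal sketch of how the output above is produced, assuming the classic `detect_faces` API of this package and an image loaded as an RGB NumPy array ("photo.jpg" is a placeholder file name):

```python
import cv2
from mtcnn import MTCNN

# MTCNN expects an RGB image; OpenCV loads BGR, so convert.
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)

detector = MTCNN()
detections = detector.detect_faces(image)

for det in detections:
    x, y, width, height = det["box"]   # top-left corner plus width/height
    print(det["confidence"], det["keypoints"])
```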

@MattyB95
Contributor

I may have misunderstood what you are asking, but this project isn't about getting face encodings for recognition purposes. For that, you will need something else, such as https://github.com/ageitgey/face_recognition.
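
As a rough example of what that library provides (the file name is a placeholder):

```python
import face_recognition

image = face_recognition.load_image_file("person.jpg")
encodings = face_recognition.face_encodings(image)  # one 128-D vector per detected face

if encodings:
    print(len(encodings[0]))  # -> 128
```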

@josephernest
Author

@MattyB95 Maybe I misunderstood this project, but I thought MTCNN + FaceNet allowed not only face detection but also recognition / identification, by assigning a 128-D embedding vector to each face.

https://github.com/davidsandberg/facenet

Is that correct, @ipazc?

@MattyB95
Contributor

@josephernest I'm not completely familiar with FaceNet, but wouldn't that be what provides the face recognition / identification encodings? This project would be more about cropping the image to the facial region for that purpose, but I will let @ipazc give his verdict :)


imnimn commented Nov 3, 2020

MTCNN is only used to detect faces in an image.
The values in "box" are the coordinates of the face bounding box in the image (top-left corner plus width and height).
To get the face encoding, you need to crop the image to that box and pass the resulting pixels to one of the models used for face encoding, such as FaceNet, DeepFace, etc. (see the sketch below).
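
A hedged sketch of that workflow, assuming keras-facenet is installed for the embedding step (any FaceNet/DeepFace wrapper would do; the file name is a placeholder):

```python
import cv2
from mtcnn import MTCNN
from keras_facenet import FaceNet  # assumed embedding backend (pip install keras-facenet)

image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)

# 1. Detect the face and take its bounding box.
detection = MTCNN().detect_faces(image)[0]
x, y, w, h = detection["box"]
x, y = max(x, 0), max(y, 0)          # the box can extend slightly outside the image
face = image[y:y + h, x:x + w]

# 2. Pass the cropped pixels to the embedding model.
embedder = FaceNet()
embedding = embedder.embeddings([face])[0]
print(embedding.shape)               # a single embedding vector for this face
```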


josephernest commented Nov 8, 2020

@imnimn Is there a Python implementation that packs together MTCNN to get the face box and FaceNet or DeepFace for the encoding?


imnimn commented Nov 9, 2020

> @imnimn Is there a Python implementation that packs together MTCNN to get the face box and FaceNet or DeepFace for the encoding?

@josephernest please check this article:
https://arsfutura.com/magazine/face-recognition-with-facenet-and-mtcnn/
Its implementation is in this repository:
https://github.com/arsfutura/face-recognition
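
For the identification step itself, comparing two embeddings reduces to a distance check; a small illustrative helper (the 0.7 threshold is a placeholder, not a calibrated value):

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(emb_a, emb_b, threshold=0.7):
    # The threshold is illustrative; tune it on your own data and embedding model.
    return cosine_distance(emb_a, emb_b) < threshold
```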

ipazc pushed a commit that referenced this issue Oct 7, 2024
…tch processing support

- Completely refactored the MTCNN implementation following best coding practices.
- Optimized code by removing unnecessary transpositions, resulting in faster computation. Fixes #22.
- Transposed convolutional layer weights to eliminate the need for additional transpositions during preprocessing and postprocessing, improving overall efficiency.
- Converted preprocessing and postprocessing functions into matrix operations to accelerate computation. Fixes #14, #110.
- Added batch processing support to enhance performance for multiple input images. Fixes #9, #71.
- Migrated network architecture to TensorFlow >= 2.12 for improved compatibility and performance. Fixes #80, #82, #90, #91, #93, #98, #104, #112, #114, #115, #116.
- Extensively documented the project with detailed explanations of thresholds and parameters. Fixes #12, #41, #52, #57, #99, #122, #117.
- Added support for selecting computation backends (CPU, GPU, etc.) with the `device` parameter (see the sketch after this list). Fixes #23.
- Added new parameters to control the result format (support for x1, y1, x2, y2 instead of x1, y1, width, height) and the ability to return tensors instead of dictionaries. Fixes #72.
- Configured PyLint support to ensure code quality and style adherence.
- Organized functions into specific modules (`mtcnn.utils.*` and `mtcnn.stages.*`) for better modularity.
- Created Jupyter notebooks for visualization and ablation studies of each stage, allowing detailed exploration of layers, weights, and intermediate results. Fixes #88, #102.
- Added a comprehensive training guide for the model. Fixes #35, #39.
- Updated README with information on the new version, including the complete Read the Docs documentation that describes the process, theoretical background, and usage examples. Fixes #53, #73.
- Configured GitHub Actions for continuous integration and delivery (CI/CD).
- Fixed memory leak by switching to a more efficient TensorFlow method (`model(tensor)` instead of `model.predict(tensor)`). Fixes #87, #109, #121, #125, #128.
- Made TensorFlow an optional dependency to prevent conflicts with user-installed versions. Fixes #95.
- Added comprehensive unit tests for increased reliability and coverage.
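
Based only on the changelog above, a hedged sketch of how the refactored version might be called: the `device` parameter is listed there, while passing a list of images for batch processing is an assumed calling convention (check the linked Read the Docs documentation):

```python
import cv2
from mtcnn import MTCNN

# Placeholder file names for a small batch of RGB images.
images = [cv2.cvtColor(cv2.imread(p), cv2.COLOR_BGR2RGB) for p in ("a.jpg", "b.jpg")]

detector = MTCNN(device="CPU:0")         # backend selection via the `device` parameter
results = detector.detect_faces(images)  # batch input; exact signature is an assumption
```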
@ipazc ipazc mentioned this issue Oct 8, 2024
ipazc pushed a commit that referenced this issue Oct 8, 2024
…tch processing support