
Face detection not working on front camera (iOS) #570

Open
giordy16 opened this issue Jan 16, 2024 · 15 comments
Labels
Face Detection Issues corresponding to Face Detection API

Comments

@giordy16

I am using the example app, and when I try to take a picture using the front camera, the number of faces found is always 0. If I use the back camera, everything works.

Steps to reproduce the behavior:

  1. Go to 'Face detection'
  2. Click on the bottom left icon, then on 'Take a picture'
  3. Take a selfie with the front camera
  4. Observe that no faces are detected

Platform:

  • Device: iPhone 12 Pro & iPhone 14
  • OS: iOS 17.1.1
  • Flutter Version: 3.16.7
  • Plugin version: 0.9.0
@giordy16
Author

EDIT: it actually works sometimes. The front-camera UI has a small button to toggle a slight zoom in/out. If I take the picture zoomed out, faces are detected; if the camera has that slight zoom in (which is the default setting), detection fails.

@fbernaly fbernaly added the Face Detection Issues corresponding to Face Detection API label Jan 16, 2024
@fbernaly
Collaborator

For face recognition, you should use an image with dimensions of at least 480x360 pixels. For ML Kit to accurately detect faces, input images must contain faces that are represented by sufficient pixel data. In general, each face you want to detect in an image should be at least 100x100 pixels.
source: https://developers.google.com/ml-kit/vision/face-detection/android#input-image-guidelines
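As a quick sanity check against these guidelines, the captured file can be decoded and its dimensions verified before running detection. A minimal sketch, assuming the `package:image` library is available; the thresholds come straight from the quoted guidelines and the function name is hypothetical:

```dart
import 'dart:io';

import 'package:image/image.dart' as imglib;

/// Sketch: pre-check a captured photo against the ML Kit input-image
/// guidelines quoted above (image of at least 480x360 pixels).
/// Faces themselves should additionally span roughly 100x100 pixels.
Future<bool> meetsInputGuidelines(String imagePath) async {
  final bytes = await File(imagePath).readAsBytes();
  final decoded = imglib.decodeImage(bytes);
  if (decoded == null) return false; // not a decodable image

  // Accept either orientation (portrait or landscape).
  final w = decoded.width, h = decoded.height;
  return (w >= 480 && h >= 360) || (w >= 360 && h >= 480);
}
```

This only validates the overall image size; whether a face covers enough pixels can only be judged after detection, e.g. by checking `face.boundingBox` against the ~100x100 guideline.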

@giordy16
Author

When the front camera is in zoom-in mode, the picture has a dimension of 2316×3088, and my face is NOT detected. When the camera is in zoom-out mode, the picture has a dimension of 3024×4032, and my face is detected.

@henseljahja

Have you found a solution for this besides zooming out every time? Zooming out makes the process get stuck. @giordy16

@giordy16
Author

> Have you found a solution for this besides zooming out every time? Zooming out makes the process get stuck. @giordy16

no


This issue is stale because it has been open for 30 days with no activity.

@github-actions github-actions bot added the stale label Apr 18, 2024
@TecHaxter

So I have narrowed the issue down to these points:

  1. Real-time face detection works on iOS when the InputImage uses the bgra8888 image format group and the InputImage.fromBytes factory constructor (as in the example app).
  2. Face detection works fine with an image selected from the iOS Photos app, using the InputImage.fromFilePath factory constructor.
  3. Face detection does not work with an image captured on iOS via the camera plugin's .takePicture function, using the InputImage.fromFilePath factory constructor.
  4. Face detection works when the image captured on iOS via the camera plugin's .takePicture function is first saved to the Photos app and the Photos app path is then passed to the InputImage.fromFilePath factory constructor.

Overall, this issue is related to file paths on iOS: the library may be unable to load the UIImage in Swift code when given the path of an image captured with the camera plugin, but it can load the UIImage when given a Photos app path.
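Given point 1, one way to sidestep the file-path route on iOS is to feed the detector raw bgra8888 bytes via InputImage.fromBytes. A rough sketch, assuming the `camera` and `google_mlkit_face_detection` plugins and that the camera controller was started with `ImageFormatGroup.bgra8888`; the helper name is an assumption:

```dart
import 'dart:ui' show Size;

import 'package:camera/camera.dart';
import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';

/// Sketch: build an InputImage directly from a bgra8888 CameraImage,
/// avoiding InputImage.fromFilePath entirely (see point 1 above).
InputImage inputImageFromCameraImage(
    CameraImage image, InputImageRotation rotation) {
  // On iOS with ImageFormatGroup.bgra8888 there is a single plane.
  final plane = image.planes.first;
  return InputImage.fromBytes(
    bytes: plane.bytes,
    metadata: InputImageMetadata(
      size: Size(image.width.toDouble(), image.height.toDouble()),
      rotation: rotation,
      format: InputImageFormat.bgra8888,
      bytesPerRow: plane.bytesPerRow,
    ),
  );
}
```

This mirrors what the plugin's example app does for its live-camera mode, which is the configuration reported to work.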

@TecHaxter

@fbernaly can you remove the stale label from this issue and look into it?

@fbernaly fbernaly removed the stale label Apr 29, 2024
@fbernaly
Collaborator

@TecHaxter: I have removed the stale label, but I do not have the bandwidth to work on this. Feel free to fork the repo and submit your contribution. I will review your PR and release a new version ASAP.

@itzmail

itzmail commented May 17, 2024

Try this:

```dart
import 'dart:io';

import 'package:flutter/material.dart';
import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';
import 'package:image/image.dart' as imglib;
import 'package:image_picker/image_picker.dart';
import 'package:path_provider/path_provider.dart';

// `context` and `faceDetector` come from the surrounding State class.
Future<bool> getImageAndDetectFaces(XFile imageFile) async {
  try {
    // Give iOS a moment to finish writing the captured file to disk.
    if (Platform.isIOS) {
      await Future.delayed(const Duration(milliseconds: 1000));
    }

    final List<Face> faces = await processPickedFile(imageFile);

    if (faces.isEmpty) {
      return false;
    }

    // Accept only faces that overlap the circular overlay on screen.
    final double screenWidth = MediaQuery.of(context).size.width;
    final double screenHeight = MediaQuery.of(context).size.height;
    final radius = screenWidth * 0.35;
    final Rect rectOverlay = Rect.fromLTRB(
      screenWidth / 2 - radius,
      screenHeight / 3.5 - radius,
      screenWidth / 2 + radius,
      screenHeight / 2.5 + radius,
    );

    for (final Face face in faces) {
      final Rect boundingBox = face.boundingBox;
      if (boundingBox.bottom < rectOverlay.top ||
          boundingBox.top > rectOverlay.bottom ||
          boundingBox.right < rectOverlay.left ||
          boundingBox.left > rectOverlay.right) {
        return false;
      }
    }

    return true;
  } catch (e) {
    return false;
  }
}

Future<List<Face>> processPickedFile(XFile pickedFile) async {
  final path = pickedFile.path;

  InputImage inputImage;
  if (Platform.isIOS) {
    // On iOS, re-encode the image with its EXIF orientation baked in first.
    final File? iosImageProcessed = await bakeImageOrientation(pickedFile);
    if (iosImageProcessed == null) {
      return [];
    }
    inputImage = InputImage.fromFilePath(iosImageProcessed.path);
  } else {
    inputImage = InputImage.fromFilePath(path);
  }
  print('INPUT IMAGE PROCESSED: ${inputImage.filePath}');

  final List<Face> faces = await faceDetector.processImage(inputImage);
  print('Found ${faces.length} faces for picked file');
  return faces;
}

Future<File?> bakeImageOrientation(XFile pickedFile) async {
  if (Platform.isIOS) {
    final directory = await getApplicationDocumentsDirectory();
    final path = directory.path;
    final filename = DateTime.now().millisecondsSinceEpoch.toString();

    final imglib.Image? capturedImage =
        imglib.decodeImage(await File(pickedFile.path).readAsBytes());

    if (capturedImage == null) {
      return null;
    }

    // bakeOrientation applies the EXIF rotation to the pixel data itself.
    final imglib.Image orientedImage = imglib.bakeOrientation(capturedImage);

    final File imageToBeProcessed = await File('$path/$filename')
        .writeAsBytes(imglib.encodeJpg(orientedImage));

    return imageToBeProcessed;
  }
  return null;
}
```

Don't forget this import:

```dart
import 'package:image/image.dart' as imglib;
```

@CoderJava

> try this … *(full code snippet quoted from @itzmail above)*

Thanks. It's working for me.

@tsukifell

> try this … *(full code snippet quoted from @itzmail above)*

Thank you, this does work for me

@husnain067

husnain067 commented Sep 12, 2024

Thank you, this works on iOS @itzmail

```dart
InputImage inputImage;
if (Platform.isIOS) {
  final File? iosImageProcessed = await bakeImageOrientation(pickedFile);
  if (iosImageProcessed == null) {
    return [];
  }
  inputImage = InputImage.fromFilePath(iosImageProcessed.path);
} else {
  inputImage = InputImage.fromFilePath(path);
}
print('INPUT IMAGE PROCESSED: ${inputImage.filePath}');

List<Face> faces = await faceDetector.processImage(inputImage);
print('Found ${faces.length} faces for picked file');
return faces;
```

@akwa-peter

akwa-peter commented Oct 29, 2024

Thanks @itzmail, this worked like magic; face detection is now very smooth.

```dart
Future<File?> bakeImageOrientation(XFile pickedFile) async {
  if (Platform.isIOS) {
    final directory = await getApplicationDocumentsDirectory();
    final path = directory.path;
    final filename = DateTime.now().millisecondsSinceEpoch.toString();

    final imglib.Image? capturedImage =
        imglib.decodeImage(await File(pickedFile.path).readAsBytes());

    if (capturedImage == null) {
      return null;
    }

    final imglib.Image orientedImage = imglib.bakeOrientation(capturedImage);

    File imageToBeProcessed = await File('$path/$filename')
        .writeAsBytes(imglib.encodeJpg(orientedImage));

    return imageToBeProcessed;
  }
  return null;
}
```

@roman-khattak

roman-khattak commented Nov 18, 2024

@giordy16 This solution worked for me. I'll leave it here in case someone else has the same issue. Thanks a lot for reporting it.

```dart
import 'dart:io';

import 'package:google_ml_kit/google_ml_kit.dart';
import 'package:image/image.dart' as imglib;
import 'package:image_picker/image_picker.dart';
import 'package:path_provider/path_provider.dart';

// `_imagePicker` and `_faceDetector` are fields on the surrounding class.
Future<List<Face>> getImageAndDetectFaces() async {
  try {
    final XFile? imageFile = await _imagePicker.pickImage(
      source: ImageSource.camera,
      preferredCameraDevice: CameraDevice.front,
    );
    if (imageFile == null) return [];

    // Give iOS a moment to finish writing the captured file to disk.
    if (Platform.isIOS) {
      await Future.delayed(const Duration(milliseconds: 1000));
    }

    final List<Face> faces = await processPickedFile(imageFile);

    if (faces.isEmpty) {
      throw Exception('No face was detected in the image!');
    }
    return faces;
  } catch (e) {
    throw Exception('$e');
  }
}

Future<List<Face>> processPickedFile(XFile pickedFile) async {
  final path = pickedFile.path;

  InputImage inputImage;
  if (Platform.isIOS) {
    final File? iosImageProcessed = await bakeImageOrientation(pickedFile);
    if (iosImageProcessed == null) {
      throw Exception('Image not found!');
    }
    inputImage = InputImage.fromFilePath(iosImageProcessed.path);
  } else {
    inputImage = InputImage.fromFilePath(path);
  }
  print('INPUT IMAGE PROCESSED: ${inputImage.filePath}');

  final List<Face> faces = await _faceDetector.processImage(inputImage);
  print('Found ${faces.length} faces for picked file');
  return faces;
}

Future<File?> bakeImageOrientation(XFile pickedFile) async {
  if (Platform.isIOS) {
    final directory = await getApplicationDocumentsDirectory();
    final path = directory.path;
    final filename = DateTime.now().millisecondsSinceEpoch.toString();

    final imglib.Image? capturedImage =
        imglib.decodeImage(await File(pickedFile.path).readAsBytes());
    if (capturedImage == null) return null;

    final imglib.Image orientedImage = imglib.bakeOrientation(capturedImage);

    final File imageToBeProcessed = await File('$path/$filename')
        .writeAsBytes(imglib.encodeJpg(orientedImage));

    return imageToBeProcessed;
  }
  return null;
}
```

The above solution has been incredibly helpful in resolving the zoomed-in front-camera selfie issue on iOS devices and improving overall face detection accuracy. I implemented this approach with some modifications for my use case, and I want to share how it benefited my project in case it helps others:

  1. Zoomed-out selfies: The bakeImageOrientation function effectively addressed the iOS front camera's default zoom-in issue, resulting in properly framed selfies.

  2. Improved face detection: Ensuring correct image orientation significantly improved face detection accuracy for both selfies and uploaded images.

  3. Consistency in image processing: I applied the same image processing to both selfie capture and gallery uploads. This consistency was crucial for our facial recognition feature, where we compare uploaded photos with the user's selfie to verify that users upload authentic pictures.

Here's a snippet of how I achieved this:

```dart
Future<XFile> _processIOSImage(XFile pickedFile) async {
  // ... [Same implementation as bakeImageOrientation] ...
}

// In selfie capture
if (Platform.isIOS && byCamera) {
  file = await _processIOSImage(file);
}

// In image upload from gallery
if (Platform.isIOS) {
  file = await _processIOSImage(file);
}
```
