MLKit Firebase android - How to convert FirebaseVisionFace to Image Object (like Bitmap)?
I have integrated ML Kit FaceDetection into my Android application, following the guide at the URL below:
https://firebase.google.com/docs/ml-kit/android/detect-faces
The face detection processor class is:
// Imports for the Firebase ML Kit (firebase-ml-vision) APIs used below.
// FrameMetadata, GraphicOverlay, FaceGraphic and VisionProcessorBase are
// classes from the quickstart sample itself.
import android.support.annotation.NonNull;
import android.util.Log;
import com.google.android.gms.tasks.Task;
import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;
import com.google.firebase.ml.vision.face.FirebaseVisionFace;
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetector;
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetectorOptions;
import java.io.IOException;
import java.util.List;

/** Face Detector Demo. */
public class FaceDetectionProcessor extends VisionProcessorBase<List<FirebaseVisionFace>> {

    private static final String TAG = "FaceDetectionProcessor";

    private final FirebaseVisionFaceDetector detector;

    public FaceDetectionProcessor() {
        FirebaseVisionFaceDetectorOptions options =
                new FirebaseVisionFaceDetectorOptions.Builder()
                        .setClassificationType(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
                        .setLandmarkType(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
                        .setTrackingEnabled(true)
                        .build();
        detector = FirebaseVision.getInstance().getVisionFaceDetector(options);
    }

    @Override
    public void stop() {
        try {
            detector.close();
        } catch (IOException e) {
            Log.e(TAG, "Exception thrown while trying to close Face Detector: " + e);
        }
    }

    @Override
    protected Task<List<FirebaseVisionFace>> detectInImage(FirebaseVisionImage image) {
        return detector.detectInImage(image);
    }

    @Override
    protected void onSuccess(
            @NonNull List<FirebaseVisionFace> faces,
            @NonNull FrameMetadata frameMetadata,
            @NonNull GraphicOverlay graphicOverlay) {
        graphicOverlay.clear();
        for (int i = 0; i < faces.size(); ++i) {
            FirebaseVisionFace face = faces.get(i);
            FaceGraphic faceGraphic = new FaceGraphic(graphicOverlay);
            graphicOverlay.add(faceGraphic);
            faceGraphic.updateFace(face, frameMetadata.getCameraFacing());
        }
    }

    @Override
    protected void onFailure(@NonNull Exception e) {
        Log.e(TAG, "Face detection failed " + e);
    }
}
在 "onSuccess" 侦听器中,我们将获得 "FirebaseVisionFace" class 个对象的数组,这些对象将具有 "Bounding Box" 个面孔。
@Override
protected void onSuccess(
        @NonNull List<FirebaseVisionFace> faces,
        @NonNull FrameMetadata frameMetadata,
        @NonNull GraphicOverlay graphicOverlay) {
    graphicOverlay.clear();
    for (int i = 0; i < faces.size(); ++i) {
        FirebaseVisionFace face = faces.get(i);
        FaceGraphic faceGraphic = new FaceGraphic(graphicOverlay);
        graphicOverlay.add(faceGraphic);
        faceGraphic.updateFace(face, frameMetadata.getCameraFacing());
    }
}
I want to know how to convert this FirebaseVisionFace object into a Bitmap.
I want to extract the face image and display it in an ImageView. Can anyone help me? Thanks in advance.
Note: I have downloaded the ML Kit Android sample source code from the URL below:
https://github.com/firebase/quickstart-android/tree/master/mlkit
You created the FirebaseVisionImage from a bitmap. After detection returns, each FirebaseVisionFace describes its bounding box as a Rect, which you can use to extract the detected face from your original bitmap, e.g. using Bitmap.createBitmap().
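For example, a minimal sketch (originalBitmap is an assumed name for the Bitmap the FirebaseVisionImage was built from):

Rect box = face.getBoundingBox();
// Clamp the box to the bitmap bounds: detections can extend past the frame
// edges, and Bitmap.createBitmap() throws if the region leaves the source.
int left = Math.max(box.left, 0);
int top = Math.max(box.top, 0);
int width = Math.min(box.width(), originalBitmap.getWidth() - left);
int height = Math.min(box.height(), originalBitmap.getHeight() - top);
Bitmap faceBitmap = Bitmap.createBitmap(originalBitmap, left, top, width, height);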
Actually you can just read the ByteBuffer, get the byte array out of it, and then write it to whatever destination file you want with an OutputStream. And of course you can also get the face region from getBoundingBox().
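A rough sketch of that idea (buffer and targetFile are hypothetical names; buffer would be the ByteBuffer the camera frame was captured into, e.g. the one passed to FirebaseVisionImage.fromByteBuffer()):

byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes); // copy the frame data out of the ByteBuffer
try (OutputStream out = new FileOutputStream(targetFile)) {
    out.write(bytes); // dump the raw frame bytes to the target file
}

Keep in mind the raw camera bytes are typically NV21, not an encoded image, so they would still need a conversion step (e.g. android.graphics.YuvImage.compressToJpeg()) before they can be displayed.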
This may help you if you're trying to use ML Kit to detect faces and OpenCV to perform image processing on the detected face. Note that in this particular example you need the original camera bitmap inside onSuccess.
I haven't found a way to do this without a bitmap and, honestly, I'm still searching.
@Override
protected void onSuccess(@NonNull List<FirebaseVisionFace> faces, @NonNull FrameMetadata frameMetadata, @NonNull GraphicOverlay graphicOverlay) {
    graphicOverlay.clear();
    for (int i = 0; i < faces.size(); ++i) {
        FirebaseVisionFace face = faces.get(i);

        /* Original implementation has original image. Original Image represents the camera preview from the live camera */

        // Create a Mat representing the live camera frame
        Mat rgba = new Mat(originalCameraImage.getHeight(), originalCameraImage.getWidth(), CvType.CV_8UC4);

        // The box with an Imgproc effect made by OpenCV
        Mat rgbaInnerWindow;
        Mat mIntermediateMat = new Mat();

        // Make the box for Imgproc the size of the detected face
        int rows = face.getBoundingBox().height();
        int cols = face.getBoundingBox().width();

        int left = cols / 8;
        int top = rows / 8;
        int width = cols * 3 / 4;
        int height = rows * 3 / 4;

        // Create a new bitmap based on the live preview,
        // which will show the actual image processing
        Bitmap newBitmap = Bitmap.createBitmap(originalCameraImage);

        // Bitmap to Mat
        Utils.bitmapToMat(newBitmap, rgba);

        // Imgproc stuff. In this example I'm doing edge detection.
        rgbaInnerWindow = rgba.submat(top, top + height, left, left + width);
        Imgproc.Canny(rgbaInnerWindow, mIntermediateMat, 80, 90);
        Imgproc.cvtColor(mIntermediateMat, rgbaInnerWindow, Imgproc.COLOR_GRAY2BGRA, 4);
        rgbaInnerWindow.release();

        // After processing the image, back to bitmap
        Utils.matToBitmap(rgba, newBitmap);

        // Load the bitmap
        CameraImageGraphic imageGraphic = new CameraImageGraphic(graphicOverlay, newBitmap);
        graphicOverlay.add(imageGraphic);

        FaceGraphic faceGraphic = new FaceGraphic(graphicOverlay, face, null);
        graphicOverlay.add(faceGraphic);

        // I can't speak for this
        faceGraphic.updateFace(face, frameMetadata.getCameraFacing());
    }
}
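As a side note, the OpenCV Java bindings must be initialized before any Mat or Imgproc call. A minimal sketch, assuming the OpenCV Android SDK module is bundled with the app:

// Load the bundled OpenCV native library once, e.g. in a static initializer
// of the processor class.
static {
    if (!OpenCVLoader.initDebug()) {
        Log.e(TAG, "OpenCV initialization failed");
    }
}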
Since the accepted answer was not specific enough, I will try to explain what I did.
1.- Create an ImageView in LivePreviewActivity like this:
private ImageView imageViewTest;
2.- Create it in the Activity xml and link it to the java file (see the linking sketch below). I placed it before the views the sample already had, so it is visible on top of the camera feed.
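The linking might look like this (imageViewTest and R.id.imageViewTest are assumed names, shown for illustration):

// in LivePreviewActivity.onCreate(), after setContentView():
imageViewTest = (ImageView) findViewById(R.id.imageViewTest);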
3.- When the FaceDetectionProcessor is created, pass in the instance of the imageView so the source image can be set inside the object:
FaceDetectionProcessor processor = new FaceDetectionProcessor(imageViewTest);
4.- Change the constructor of FaceDetectionProcessor so it can receive an ImageView as a parameter, and create a global variable that saves the instance:

private final ImageView imageView; // holds the view the cropped face will be shown in

public FaceDetectionProcessor(ImageView imageView) {
    FirebaseVisionFaceDetectorOptions options =
            new FirebaseVisionFaceDetectorOptions.Builder()
                    .setClassificationType(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
                    .setTrackingEnabled(true)
                    .build();
    detector = FirebaseVision.getInstance().getVisionFaceDetector(options);
    this.imageView = imageView;
}
5.- I created a crop method that takes a bitmap and a Rect to focus only on the face. So go ahead and do the same:

public static Bitmap cropBitmap(Bitmap bitmap, Rect rect) {
    int w = rect.right - rect.left;
    int h = rect.bottom - rect.top;
    Bitmap ret = Bitmap.createBitmap(w, h, bitmap.getConfig());
    Canvas canvas = new Canvas(ret);
    // Shift the source so that only the rect region lands on the new bitmap
    canvas.drawBitmap(bitmap, -rect.left, -rect.top, null);
    return ret;
}
6.- Modify the detectInImage method to save an instance of the bitmap being detected into a global variable:

private Bitmap imageBitmap; // last frame handed to the detector

@Override
protected Task<List<FirebaseVisionFace>> detectInImage(FirebaseVisionImage image) {
    imageBitmap = image.getBitmapForDebugging();
    return detector.detectInImage(image);
}
7.- Finally, modify the onSuccess method to call the crop method and assign the result to the imageView:

@Override
protected void onSuccess(
        @NonNull List<FirebaseVisionFace> faces,
        @NonNull FrameMetadata frameMetadata,
        @NonNull GraphicOverlay graphicOverlay) {
    graphicOverlay.clear();
    Bitmap croppedImage = null; // will hold the last detected face
    for (int i = 0; i < faces.size(); ++i) {
        FirebaseVisionFace face = faces.get(i);
        FaceGraphic faceGraphic = new FaceGraphic(graphicOverlay);
        graphicOverlay.add(faceGraphic);
        faceGraphic.updateFace(face, frameMetadata.getCameraFacing());
        croppedImage = cropBitmap(imageBitmap, face.getBoundingBox());
    }
    if (croppedImage != null) {
        imageView.setImageBitmap(croppedImage);
    }
}