SceneKit 3D Marker Augmented Reality iOS

For the past few weeks I have been working on a simple proof-of-concept application in which a 3D model is projected over an AR marker in iOS (Swift and Objective-C).

I calibrated an iPad camera with a specific fixed lens position and used it to estimate the pose of AR markers, which, judging from my debug analysis, is very accurate. The problem appears (surprise, surprise) when I try to use a SceneKit scene to project the model over the marker.

I am aware that the axes in OpenCV and SceneKit differ (Y and Z) and have already applied that correction, along with handling the row-order/column-order difference between the two libraries.

After building the projection matrix, I apply the same transformation to the 3D model, and from my debug analysis the object appears to be translated to the desired position with the desired rotation. The problem is that it never overlaps the specific image pixel position of the marker. I am using an AVCaptureVideoPreviewLayer to put the video in the background, with the same bounds as my SceneKit view.

Does anyone have any idea why this happens? I tried playing with the camera's FOV, but it had no real effect on the results.

Thank you all for your time.

EDIT1: I will post some code here to show what I am currently doing.

I have two subviews inside my main view: one is a background AVCaptureVideoPreviewLayer and the other is an SCNView. Both have the same bounds as the main view.
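For reference, that view setup can look roughly like this (a sketch only; `captureSession` is assumed to be configured elsewhere, and the property names are illustrative):

    import AVFoundation
    import SceneKit
    import UIKit

    class ARViewController: UIViewController {
        let captureSession = AVCaptureSession() // assumed configured elsewhere
        let sceneKitScene = SCNScene()

        override func viewDidLoad() {
            super.viewDidLoad()

            // Background layer showing the raw camera feed.
            let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            previewLayer.frame = view.bounds
            view.layer.addSublayer(previewLayer)

            // Transparent SceneKit view on top, with the exact same bounds,
            // so rendered points can line up with video pixels.
            let sceneKitView = SCNView(frame: view.bounds)
            sceneKitView.scene = sceneKitScene
            sceneKitView.backgroundColor = .clear
            view.addSubview(sceneKitView)
        }
    }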

On every frame I use an OpenCV wrapper to output the pose of each marker:

    std::vector<int> ids;
    std::vector<std::vector<cv::Point2f>> corners, rejected;

    cv::aruco::detectMarkers(frame, _dictionary, corners, ids, _detectorParams, rejected);
    if (ids.size() > 0){
        cv::aruco::drawDetectedMarkers(frame, corners, ids);
        cv::Mat rvecs, tvecs;
        cv::aruco::estimatePoseSingleMarkers(corners, 2.6, _intrinsicMatrix, _distCoeffs, rvecs, tvecs);

        // Let's protect ourselves against multiple markers
        if (rvecs.total() > 1)
            return;
        _markerFound = true;

        cv::Rodrigues(rvecs, _currentR);

        _currentT = tvecs;

        // Pack R and t into a 4x4 homogeneous matrix
        // (_currentExtrinsics is assumed to start as a 4x4 zero matrix).
        for (int row = 0; row < _currentR.rows; row++){
            for (int col = 0; col < _currentR.cols; col++){
                _currentExtrinsics.at<double>(row, col) = _currentR.at<double>(row, col);
            }
            _currentExtrinsics.at<double>(row, 3) = _currentT.at<double>(row);
        }
        _currentExtrinsics.at<double>(3,3) = 1;
        std::cout << tvecs << std::endl;

        // Convert the coordinate system of OpenCV to OpenGL (SceneKit).
        // Note that in OpenCV, z points away from the camera (in OpenGL it points into the camera)
        // and y points down, while in OpenGL it points up.
        // Another note: OpenCV uses a column-order matrix representation, while SceneKit
        // uses a row-order matrix, but we'll take care of that later.
        cv::Mat cvToGl = cv::Mat::zeros(4, 4, CV_64F);
        cvToGl.at<double>(0,0) = 1.0f;
        cvToGl.at<double>(1,1) = -1.0f; // invert the y axis
        cvToGl.at<double>(2,2) = -1.0f; // invert the z axis
        cvToGl.at<double>(3,3) = 1.0f;
        _currentExtrinsics = cvToGl * _currentExtrinsics;
        cv::aruco::drawAxis(frame, _intrinsicMatrix, _distCoeffs, rvecs, tvecs, 5);
    }

Then on every frame I convert the OpenCV matrix to an SCNMatrix4. Because OpenCV transforms column vectors while SCNMatrix4 is laid out for row vectors, the matrix is transposed first:

    - (SCNMatrix4) transformToSceneKit:(cv::Mat&) openCVTransformation{
        SCNMatrix4 mat = SCNMatrix4Identity;
        // Transpose (OpenCV column-vector layout -> SceneKit row-vector layout)
        openCVTransformation = openCVTransformation.t();

        // copy the rotation rows
        mat.m11 = (float) openCVTransformation.at<double>(0, 0);
        mat.m12 = (float) openCVTransformation.at<double>(0, 1);
        mat.m13 = (float) openCVTransformation.at<double>(0, 2);
        mat.m14 = (float) openCVTransformation.at<double>(0, 3);

        mat.m21 = (float) openCVTransformation.at<double>(1, 0);
        mat.m22 = (float) openCVTransformation.at<double>(1, 1);
        mat.m23 = (float) openCVTransformation.at<double>(1, 2);
        mat.m24 = (float) openCVTransformation.at<double>(1, 3);

        mat.m31 = (float) openCVTransformation.at<double>(2, 0);
        mat.m32 = (float) openCVTransformation.at<double>(2, 1);
        mat.m33 = (float) openCVTransformation.at<double>(2, 2);
        mat.m34 = (float) openCVTransformation.at<double>(2, 3);

        // copy the translation row
        mat.m41 = (float) openCVTransformation.at<double>(3, 0);
        mat.m42 = (float) openCVTransformation.at<double>(3, 1) + 2.5; // manual y offset
        mat.m43 = (float) openCVTransformation.at<double>(3, 2);
        mat.m44 = (float) openCVTransformation.at<double>(3, 3);

        return mat;
    }
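As a quick sanity check on that layout: in SceneKit the translation lives in the fourth row of the matrix (m41..m43), not the fourth column as in OpenCV, which is why the transpose above is needed. A small Swift illustration with hypothetical values:

    import SceneKit

    // A pure translation in SceneKit's row-vector layout:
    let t = SCNMatrix4MakeTranslation(-1, 5, 20)
    print(t.m41, t.m42, t.m43) // -1.0 5.0 20.0 -> translation is in row 4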

On every frame in which the AR marker is found, I add a box to the scene and apply the transformation to the object node:

    SCNBox *box = [SCNBox boxWithWidth:5.0 height:5.0 length:5.0 chamferRadius:0.0];
    _boxNode = [SCNNode nodeWithGeometry:box];
    if (found){
        [self.delegate returnExtrinsicsMat:extrinsicMatrixOfTheMarker];
        Mat R, T;
        [self.delegate returnRotationMat:R];
        [self.delegate returnTranslationMat:T];
        SCNMatrix4 Transformation;
        Transformation = [self transformToSceneKit:extrinsicMatrixOfTheMarker];
        //_cameraNode.transform = SCNMatrix4Invert(Transformation);
        [_sceneKitScene.rootNode addChildNode:_cameraNode];
        //_cameraNode.camera.projectionTransform = SCNMatrix4Identity;
        //_cameraNode.camera.zNear = 0.0;
        _sceneKitView.pointOfView = _cameraNode;
        _boxNode.transform = Transformation;

        [_sceneKitScene.rootNode addChildNode:_boxNode];
        //_boxNode.position = SCNVector3Make(Transformation.m41, Transformation.m42, Transformation.m43);

        std::cout << (_boxNode.position.x) << " " << (_boxNode.position.y) << " " << (_boxNode.position.z) << std::endl << std::endl;
    }

For example, if the translation vector is (-1, 5, 20), the object appears in the scene at position (-1, -5, -20), consistent with the y and z flip applied above, and the rotation is also correct. The problem is that it never appears in the correct position in the background image. I will add some images to show the result.

Does anyone have any idea why this happens?

Found the solution. Instead of applying the transformation to the object's node, I applied the inverted transformation matrix to the camera node. Then, for the camera's perspective projection matrix, I applied the following matrix:

    var projection = SCNMatrix4Identity
    projection.m11 = (2 * Float(cameraMatrix[0])) / -(ImageWidth * 0.5)                  // fx
    projection.m12 = (-2 * Float(cameraMatrix[1])) / (ImageWidth * 0.5)                  // skew
    projection.m13 = (ImageWidth - (2 * Float(cameraMatrix[2]))) / (ImageWidth * 0.5)    // cx
    projection.m22 = (2 * Float(cameraMatrix[4])) / (ImageHeight * 0.5)                  // fy
    projection.m23 = (-ImageHeight + (2 * Float(cameraMatrix[5]))) / (ImageHeight * 0.5) // cy
    projection.m33 = (-far - near) / (far - near)
    projection.m34 = (-2 * far * near) / (far - near)
    projection.m43 = -1
    projection.m44 = 0

Here far and near are the z clipping planes, and cameraMatrix holds the calibrated intrinsics flattened in row-major order (fx, skew, cx, fy, cy at indices 0, 1, 2, 4, 5).
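Putting the two pieces together, the per-frame camera update can look roughly like this (a sketch only; `markerTransform` is the SCNMatrix4 produced by transformToSceneKit above, and the function and parameter names are illustrative):

    import SceneKit

    func updateCamera(cameraNode: SCNNode, markerTransform: SCNMatrix4,
                      projection: SCNMatrix4) {
        // Keep the model fixed at the origin and move the camera instead:
        // the camera pose is the inverse of the marker-to-camera extrinsics.
        cameraNode.transform = SCNMatrix4Invert(markerTransform)

        // Replace SceneKit's default perspective with the projection built
        // from the calibrated intrinsics, so rendered points land on the
        // pixels OpenCV predicts.
        cameraNode.camera?.projectionTransform = projection
    }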

I also had to correct the initial position of the box to center it on the marker.
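One way to do that centering (a sketch only; SCNBox geometry is centered on its node's origin, and the sign of the offset depends on your axis conventions after the OpenCV-to-SceneKit flip):

    import SceneKit

    let box = SCNBox(width: 5.0, height: 5.0, length: 5.0, chamferRadius: 0.0)
    let boxNode = SCNNode(geometry: box)
    // Shift the geometry by half its length so that a face of the box,
    // rather than its center, sits on the marker plane.
    boxNode.pivot = SCNMatrix4MakeTranslation(0, 0, Float(box.length) / 2)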