How to Extract SceneKit Depth Buffer at runtime in AR scene?
How do I extract the SceneKit depth buffer? I've built an AR-based app that runs on Metal, and I'm really struggling to find any information on how to extract the 2D depth buffer so that I can render out fancy 3D photos of my scenes. Any help is greatly appreciated.
Your question is unclear, but I'll try to answer it.
Depth Pass from a VR view
If you need to render a depth pass from SceneKit's 3D environment, you should use, for instance, the SCNGeometrySource.Semantic structure. It has vertex, normal, texcoord, color and tangent type properties. Let's see what the vertex type property is:
static let vertex: SCNGeometrySource.Semantic
This semantic identifies data containing the positions of each vertex in the geometry. For a custom shader program, you use this semantic to bind SceneKit’s vertex position data to an input attribute of the shader. Vertex position data is typically an array of three- or four-component vectors.
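Before the full excerpt below, here is a minimal sketch of what that semantic means in its simplest form (the triangle data is hypothetical, not part of the original answer); the SCNGeometrySource(vertices:) convenience initializer tags the source with the .vertex semantic for you:

import SceneKit

// Three hypothetical points forming one triangle.
let triangleVertices: [SCNVector3] = [SCNVector3(0, 0, 0),
                                      SCNVector3(1, 0, 0),
                                      SCNVector3(0, 1, 0)]
let triangleSource = SCNGeometrySource(vertices: triangleVertices)   // semantic == .vertex
let triangleIndices: [Int32] = [0, 1, 2]
let triangleElement = SCNGeometryElement(indices: triangleIndices, primitiveType: .triangles)
let triangleGeometry = SCNGeometry(sources: [triangleSource], elements: [triangleElement])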
Here's a code excerpt from the iOS Depth Sample project.
UPDATE: Using this code you can get the position of every point in an SCNScene and assign a color to those points (that's what a zDepth channel really is):
import SceneKit

struct PointCloudVertex {
    var x: Float, y: Float, z: Float
    var r: Float, g: Float, b: Float
}

@objc class PointCloud: NSObject {

    var pointCloud: [SCNVector3] = []
    var colors: [UInt8] = []          // interleaved RGBA, 4 bytes per point

    public func pointCloudNode() -> SCNNode {
        let points = self.pointCloud
        var vertices = Array(repeating: PointCloudVertex(x: 0, y: 0, z: 0,
                                                         r: 0, g: 0, b: 0),
                             count: points.count)

        for i in 0..<points.count {
            let p = points[i]
            vertices[i].x = Float(p.x)
            vertices[i].y = Float(p.y)
            vertices[i].z = Float(p.z)
            vertices[i].r = Float(colors[i * 4]) / 255.0
            vertices[i].g = Float(colors[i * 4 + 1]) / 255.0
            vertices[i].b = Float(colors[i * 4 + 2]) / 255.0
        }

        return buildNode(points: vertices)
    }

    private func buildNode(points: [PointCloudVertex]) -> SCNNode {
        // One interleaved buffer holds both position (x, y, z) and color (r, g, b).
        let vertexData = NSData(
            bytes: points,
            length: MemoryLayout<PointCloudVertex>.size * points.count
        )
        let positionSource = SCNGeometrySource(
            data: vertexData as Data,
            semantic: SCNGeometrySource.Semantic.vertex,
            vectorCount: points.count,
            usesFloatComponents: true,
            componentsPerVector: 3,
            bytesPerComponent: MemoryLayout<Float>.size,
            dataOffset: 0,
            dataStride: MemoryLayout<PointCloudVertex>.size
        )
        let colorSource = SCNGeometrySource(
            data: vertexData as Data,
            semantic: SCNGeometrySource.Semantic.color,
            vectorCount: points.count,
            usesFloatComponents: true,
            componentsPerVector: 3,
            bytesPerComponent: MemoryLayout<Float>.size,
            dataOffset: MemoryLayout<Float>.size * 3,     // colors follow the 3 position floats
            dataStride: MemoryLayout<PointCloudVertex>.size
        )
        let element = SCNGeometryElement(
            data: nil,
            primitiveType: .point,
            primitiveCount: points.count,
            bytesPerIndex: MemoryLayout<Int>.size
        )
        element.pointSize = 1
        element.minimumPointScreenSpaceRadius = 1
        element.maximumPointScreenSpaceRadius = 5

        let pointsGeometry = SCNGeometry(sources: [positionSource, colorSource],
                                         elements: [element])
        return SCNNode(geometry: pointsGeometry)
    }
}
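A minimal usage sketch, not part of the excerpt above (the sample coordinates, colors and the sceneView outlet are assumptions, with sceneView taken to be an ARSCNView): fill the two arrays, then attach the generated node to your scene.

let cloud = PointCloud()
cloud.pointCloud = [SCNVector3(0, 0, -1), SCNVector3(0.1, 0, -1)]   // sample positions
cloud.colors = [255, 0, 0, 255,                                     // RGBA for point 0
                0, 255, 0, 255]                                     // RGBA for point 1
sceneView.scene.rootNode.addChildNode(cloud.pointCloudNode())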
Depth Pass from an AR view
If you need to render a depth pass from ARSCNView, it is possible only if you're using ARFaceTrackingConfiguration for the front-facing camera. If so, you can employ the capturedDepthData instance property, which brings you a depth map captured along with the video frame.
var capturedDepthData: AVDepthData? { get }
But this depth map image is captured at only 15 fps and at a lower resolution than the corresponding RGB image at 60 fps.
Face-based AR uses the front-facing, depth-sensing camera on compatible devices. When running such a configuration, frames vended by the session contain a depth map captured by the depth camera in addition to the color pixel buffer (see capturedImage) captured by the color camera. This property’s value is always nil when running other AR configurations.
Real-world code could look like this:
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        DispatchQueue.global().async {
            guard let frame = self.sceneView.session.currentFrame else { return }
            // capturedDepthData is AVDepthData; its depthDataMap is the CVPixelBuffer you want.
            if let depthData = frame.capturedDepthData {
                self.depthImage = depthData.depthDataMap
            }
        }
    }
}
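As noted above, capturedDepthData is non-nil only while a face-tracking session is running, so the delegate sketch assumes a setup along these lines (the sceneView outlet is an assumption):

// Run a front-camera face-tracking session so capturedDepthData gets populated.
guard ARFaceTrackingConfiguration.isSupported else { return }   // TrueDepth-equipped devices only
let configuration = ARFaceTrackingConfiguration()
sceneView.session.run(configuration)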
Depth Pass from a Video view
Also, you can extract a true depth channel using the two rear cameras and the AVFoundation framework. Take a look at the Image Depth Map tutorial, where the Disparity concept will be introduced to you. A hedged sketch of that route follows below.
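The following is a minimal sketch of the usual AVFoundation setup, not code from the tutorial (the class and method names are assumptions, and error handling is omitted): enable depth data delivery on an AVCapturePhotoOutput driven by the dual rear camera, then read photo.depthData in the capture delegate.

import AVFoundation

final class DepthCapture: NSObject, AVCapturePhotoCaptureDelegate {

    let session = AVCaptureSession()
    let photoOutput = AVCapturePhotoOutput()

    func configure() throws {
        // The dual (wide + tele) rear camera produces disparity-based depth.
        guard let device = AVCaptureDevice.default(.builtInDualCamera,
                                                   for: .video,
                                                   position: .back) else { return }
        session.beginConfiguration()
        session.sessionPreset = .photo
        session.addInput(try AVCaptureDeviceInput(device: device))
        session.addOutput(photoOutput)
        // Must be set after the output joins a session with a depth-capable input.
        photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
        session.commitConfiguration()
        session.startRunning()
    }

    func capture() {
        let settings = AVCapturePhotoSettings()
        settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
        photoOutput.capturePhoto(with: settings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // AVDepthData here is disparity/depth computed from the two rear cameras.
        if let depthData = photo.depthData {
            let depthMap: CVPixelBuffer = depthData.depthDataMap
            print(depthMap)
        }
    }
}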