Swift SpriteKit Detect TouchesBegan on SKSpriteNode with SKPhysicsBody from Sprite's Texture

I have a SpriteKit scene containing a sprite. The sprite has a physics body derived from its texture's alpha channel, to get an accurate physics shape, like so:

let texture_bottle = SKTexture(imageNamed:"Bottle")
let sprite_bottle = SKSpriteNode(texture: texture_bottle)
let physicsBody_bottle = SKPhysicsBody(texture: texture_bottle, size: sprite_bottle.size)
physicsBody_bottle.affectedByGravity = false
sprite_bottle.physicsBody = physicsBody_bottle
root.addChild(sprite_bottle)

....

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {

    guard let touchLocation = touches.first?.location(in: self) else { return }

    let hitNodes = self.nodes(at: touchLocation)

}

When the user taps the screen, how can I detect whether the touch actually landed inside the physics body's shape, rather than just inside the sprite's bounding rectangle?

You "can't" (not easily)

UITouch hit detection is based on CGRects, so let hitNodes = self.nodes(at: touchLocation) will pick up every node whose frame intersects that touch.

There is no way around that, so the next step is to work out pixel accuracy for the nodes that registered as a "hit". The first thing you should do is convert the touch location into your sprite's local coordinate space.

for node in hitNodes
{
    // assuming touchLocation is in scene coordinates
    let localLocation = node.convert(touchLocation, from: node.scene!)
}

From this point on you need to decide which approach you want to take.

If you need speed, I would recommend building a 2D boolean array that acts as a mask: false for the transparent areas of the texture, true for the opaque ones. You can then use localLocation to index into that array (remember to add anchorPoint * width and anchorPoint * height to your x and y values, then convert to Int):

func isHit(node: SKSpriteNode, mask: [[Bool]], position: CGPoint) -> Bool
{
    return mask[Int(node.size.height * node.anchorPoint.y + position.y)][Int(node.size.width * node.anchorPoint.x + position.x)]
}
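One way to build that mask up front (a sketch of my own, not from the original answer) is to draw the texture's CGImage into a bitmap context once and record where the alpha channel is non-zero. Note the vertical flip: bitmap memory runs top-to-bottom, while the isHit lookup above expects row 0 to be the sprite's bottom row (y-up local coordinates).

```swift
import SpriteKit

// Build a [row][column] opacity mask from a texture's alpha channel.
// Row 0 of the result is the sprite's BOTTOM row, matching y-up coordinates.
func alphaMask(from texture: SKTexture) -> [[Bool]] {
    let cgImage = texture.cgImage()
    let width = cgImage.width
    let height = cgImage.height

    // Render the image into a raw RGBA buffer we can inspect
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    guard let context = CGContext(data: &pixels, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: width * 4,
                                  space: colorSpace,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return [] }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    var mask = [[Bool]](repeating: [Bool](repeating: false, count: width),
                        count: height)
    for row in 0..<height {
        for col in 0..<width {
            let alpha = pixels[(row * width + col) * 4 + 3]
            // buffer row 0 is the image's top, so flip it into y-up order
            mask[height - 1 - row][col] = alpha > 0
        }
    }
    return mask
}
```

You would build this once per texture (for example when the sprite is created) and keep it around, since scanning every pixel on every touch would defeat the point of the fast path.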

If speed is not a concern, you can create a CGContext, fill it with your texture, and then check whether the point in that context is transparent.

Something like this could help you:

//: Playground - noun: a place where people can play

import UIKit
import PlaygroundSupport

extension CALayer {

    func colorOfPoint(point: CGPoint) -> UIColor
    {
        var pixel: [CUnsignedChar] = [0, 0, 0, 0]

        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)

        let context = CGContext(data: &pixel, width: 1, height: 1, bitsPerComponent: 8, bytesPerRow: 4, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)!

        context.translateBy(x: -point.x, y: -point.y)

        self.render(in: context)

        let red: CGFloat = CGFloat(pixel[0]) / 255.0
        let green: CGFloat = CGFloat(pixel[1]) / 255.0
        let blue: CGFloat = CGFloat(pixel[2]) / 255.0
        let alpha: CGFloat = CGFloat(pixel[3]) / 255.0

        //print("point color - red:\(red) green:\(green) blue:\(blue)")

        let color = UIColor(red: red, green: green, blue: blue, alpha: alpha)

        return color
    }
}

extension UIColor {
    var components:(red: CGFloat, green: CGFloat, blue: CGFloat, alpha: CGFloat) {
        var r:CGFloat = 0
        var g:CGFloat = 0
        var b:CGFloat = 0
        var a:CGFloat = 0
        getRed(&r, green: &g, blue: &b, alpha: &a)
        return (r,g,b,a)
    }
}


//get an image we can work on
//get an image we can work on
var imageFromURL = UIImage(data: try! Data(contentsOf: URL(string: "https://www.gravatar.com/avatar/ba4178644a33a51e928ffd820269347c?s=328&d=identicon&r=PG&f=1")!))
//only use a small area of that image - 50 x 50 square
let imageSliceArea = CGRect(x: 0, y: 0, width: 50, height: 50)
let imageSlice = imageFromURL?.cgImage?.cropping(to: imageSliceArea)
//we'll work on this image
var image = UIImage(cgImage: imageSlice!)


let imageView = UIImageView(image: image)
//test out the extension above on the point (0,0) - returns r 0.541 g 0.78 b 0.227 a 1.0
var pointColor = imageView.layer.colorOfPoint(point: CGPoint(x: 0, y: 0))



let imageRect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)

UIGraphicsBeginImageContext(image.size)
let context = UIGraphicsGetCurrentContext()!

context.saveGState()
context.draw(image.cgImage!, in: imageRect)

for x in 0...Int(image.size.width) {
    for y in 0...Int(image.size.height) {
        let pointColor = imageView.layer.colorOfPoint(point: CGPoint(x: x, y: y))
        //I used my own creativity here - change this to whatever logic you want
        if y % 2 == 0 {
            context.setFillColor(red: pointColor.components.red, green: 0.5, blue: 0.5, alpha: 1)
        }
        else {
            context.setFillColor(red: 1, green: 0.5, blue: 0.5, alpha: 1)
        }

        context.fill(CGRect(x: CGFloat(x), y: CGFloat(y), width: 1, height: 1))
    }
}
context.restoreGState()
image = UIGraphicsGetImageFromCurrentImageContext()!

Eventually you would call colorOfPoint(point: localLocation).cgColor.alpha > 0 to determine whether you are touching a node.

Now I would recommend making colorOfPoint an extension of SKSpriteNode, so get creative with the code posted above.

func isHit(node: SKSpriteNode, position: CGPoint) -> Bool
{
    return node.colorOfPoint(point: position).cgColor.alpha > 0
}
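A possible shape for that SKSpriteNode extension (a sketch of my own, not from the original answer): it assumes the texture is displayed unscaled, and it renders the whole texture into a 1x1 bitmap on every query, so it is slow but simple.

```swift
import SpriteKit

extension SKSpriteNode {
    // point is in this node's local coordinate space
    // (y-up, origin determined by anchorPoint)
    func colorOfPoint(point: CGPoint) -> UIColor {
        guard let cgImage = texture?.cgImage() else { return .clear }

        var pixel: [UInt8] = [0, 0, 0, 0]
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        guard let context = CGContext(data: &pixel, width: 1, height: 1,
                                      bitsPerComponent: 8, bytesPerRow: 4,
                                      space: colorSpace,
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return .clear }

        // Shift the texture so the requested pixel lands on the context's
        // single (0,0) pixel; both CG drawing and SpriteKit are y-up here
        let offsetX = point.x + size.width * anchorPoint.x
        let offsetY = point.y + size.height * anchorPoint.y
        context.draw(cgImage, in: CGRect(x: -offsetX, y: -offsetY,
                                         width: size.width, height: size.height))

        return UIColor(red: CGFloat(pixel[0]) / 255,
                       green: CGFloat(pixel[1]) / 255,
                       blue: CGFloat(pixel[2]) / 255,
                       alpha: CGFloat(pixel[3]) / 255)
    }
}
```

If your sprites get tapped often, it would be worth caching the result of this per texture, or falling back to the mask approach above.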

Your final code would look something like this:

// mask approach
hitNodes = hitNodes.filter
           {
               node in
               guard let sprite = node as? SKSpriteNode else { return false }
               // assuming touchLocation is in scene coordinates
               let localLocation = sprite.convert(touchLocation, from: sprite.scene!)
               return isHit(node: sprite, mask: mask, position: localLocation)
           }

// color-sampling approach
hitNodes = hitNodes.filter
           {
               node in
               guard let sprite = node as? SKSpriteNode else { return false }
               // assuming touchLocation is in scene coordinates
               let localLocation = sprite.convert(touchLocation, from: sprite.scene!)
               return isHit(node: sprite, position: localLocation)
           }

This essentially filters out all of the nodes that the frame comparison detected, leaving only the nodes that were touched with pixel accuracy.

Note: the playground code above comes from a separate SO link and may need converting to Swift 4.