What would be a good method to variably adjust line thickness after finding edges in an image?
CIEdge (and other CIFilters) no longer work consistently across devices and iOS versions, so I decided to find edges with a convolution filter instead. Success: I can get a nice black-and-white mask of the image that lets a gradient or solid color show through.
The problem is that I can't come up with a good way to variably adjust the line thickness. One approach I tried is to downscale a version of the source image, apply the convolution, and upscale it on top of the black-and-white camera image. That sort of works, but the lines come out jagged and rough. Any ideas? Here is my code:
func sketch(with ciImage: CIImage) -> CIImage {
    var sourceCore = ciImage

    let convolutionValue_A: CGFloat = -0.0925937220454216
    let convolutionValue_B: CGFloat = -0.4166666567325592
    let convolutionValue_C: CGFloat = -1.8518532514572144
    let convolutionValue_D: CGFloat = 0.23148006200790405
    let convolutionValue_E: CGFloat = 4.5833334922790527
    let convolutionValue_F: CGFloat = 14.166666984558105

    let brightnessVal: CGFloat = 1.1041666269302368
    let contrastVal: CGFloat = 3.0555555820465088

    // radially symmetrical convolution weights:
    let weightsArr: [CGFloat] = [
        convolutionValue_A, convolutionValue_A, convolutionValue_B, convolutionValue_B, convolutionValue_B, convolutionValue_A, convolutionValue_A,
        convolutionValue_A, convolutionValue_B, convolutionValue_C, convolutionValue_C, convolutionValue_C, convolutionValue_B, convolutionValue_A,
        convolutionValue_B, convolutionValue_C, convolutionValue_D, convolutionValue_E, convolutionValue_D, convolutionValue_C, convolutionValue_B,
        convolutionValue_B, convolutionValue_C, convolutionValue_E, convolutionValue_F, convolutionValue_E, convolutionValue_C, convolutionValue_B,
        convolutionValue_B, convolutionValue_C, convolutionValue_D, convolutionValue_E, convolutionValue_D, convolutionValue_C, convolutionValue_B,
        convolutionValue_A, convolutionValue_B, convolutionValue_C, convolutionValue_C, convolutionValue_C, convolutionValue_B, convolutionValue_A,
        convolutionValue_A, convolutionValue_A, convolutionValue_B, convolutionValue_B, convolutionValue_B, convolutionValue_A, convolutionValue_A
    ]
    let inputWeights = CIVector(values: weightsArr, count: weightsArr.count)

    // desaturate and boost brightness/contrast before edge detection
    sourceCore = sourceCore
        .applyingFilter("CIColorControls", parameters: [kCIInputSaturationKey: 0.0,
                                                        kCIInputBrightnessKey: brightnessVal,
                                                        kCIInputContrastKey: contrastVal])

    // transforms image to only show edges in black and white
    sourceCore = sourceCore
        .applyingFilter("CIConvolution7X7", parameters: [kCIInputWeightsKey: inputWeights])
        .cropped(to: sourceCore.extent)

    // for some reason, I need to blend the black and white mask with the color white.
    // either CIColorDodgeBlendMode or CILinearDodgeBlendMode seems to work fine here:
    let whiteCIColor = CIColor.white
    let whiteColor = CIImage(color: whiteCIColor).cropped(to: ciImage.extent)
    sourceCore = sourceCore
        .applyingFilter("CIColorDodgeBlendMode", parameters: [kCIInputBackgroundImageKey: whiteColor])

    // give camera image a black and white Noir effect
    var ciImage = ciImage
        .applyingFilter("CIPhotoEffectNoir")

    // make solid color
    let color = CIColor(red: 0.819, green: 0.309, blue: 0.309)
    let colFilter = CIFilter(name: "CIConstantColorGenerator")!
    colFilter.setValue(color, forKey: kCIInputColorKey)
    var solidColor = colFilter.outputImage!
    solidColor = solidColor.cropped(to: ciImage.extent)

    // color is shown through outlines correctly,
    // and image is black and white
    sourceCore = sourceCore
        .applyingFilter("CIBlendWithMask", parameters: [
            kCIInputImageKey: ciImage,              // black and white image
            kCIInputBackgroundImageKey: solidColor, // solid color
            kCIInputMaskImageKey: sourceCore])      // edge work image

    ciImage = sourceCore
    return ciImage
}
The thickness of the edges depends on the width of the convolution kernel, or on the image resolution.
To (dynamically) increase the width of the convolution, you would probably need to implement your own custom CIKernel, since Core Image only supports convolution kernels up to 9x9. A wider kernel should give smoother results, but it is also more expensive.
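For illustration, here is a rough, untested sketch of such a kernel, written in the legacy Core Image Kernel Language (on recent SDKs Apple recommends writing CIKernels in Metal instead). The weighting (center pixel minus the local 21x21 mean) is just a simple stand-in for real convolution weights, and the function name wideEdges is made up:

import CoreImage

let wideEdgeSource = """
kernel vec4 wideEdge(sampler image) {
    vec2 dc = destCoord();
    vec4 sum = vec4(0.0);
    // accumulate a 21x21 neighborhood, wider than the built-in convolution filters allow
    for (int y = -10; y <= 10; y++) {
        for (int x = -10; x <= 10; x++) {
            sum += sample(image, samplerTransform(image, dc + vec2(float(x), float(y))));
        }
    }
    vec4 center = sample(image, samplerTransform(image, dc));
    vec4 edge = center - sum / 441.0;   // 441 = 21 * 21 taps
    return vec4(edge.rgb, 1.0);
}
"""

func wideEdges(of image: CIImage) -> CIImage? {
    guard let kernel = CIKernel(source: wideEdgeSource) else { return nil }
    // the kernel reads 10 pixels in every direction around each output pixel,
    // so the region of interest must be outset by the same amount
    return kernel.apply(extent: image.extent,
                        roiCallback: { _, rect in rect.insetBy(dx: -10, dy: -10) },
                        arguments: [image])
}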
You can achieve a similar effect by downscaling the image before applying the kernel (as you already do) and scaling it back up again afterward. You can try a "smart" upscaling filter such as CILanczosScaleTransform or CIBicubicScaleTransform and play with their parameters. With the bicubic filter's inputC, you should be able to control the softness of the result.
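As a minimal, untested sketch of that approach, reusing the sketch(with:) function from the question (adjustableSketch and its lineScale parameter are made-up names, and the inputC value is only a starting point to experiment with):

func adjustableSketch(with ciImage: CIImage, lineScale: CGFloat) -> CIImage {
    // downscale first so the fixed 7x7 kernel covers a larger area of the
    // original image; a smaller lineScale means thicker lines
    let downscaled = ciImage.applyingFilter("CILanczosScaleTransform", parameters: [
        kCIInputScaleKey: lineScale,
        kCIInputAspectRatioKey: 1.0
    ])

    let edges = sketch(with: downscaled)

    // upscale back with the bicubic filter; inputC influences how soft or
    // sharp the interpolation looks, so try values in [0, 1]
    return edges.applyingFilter("CIBicubicScaleTransform", parameters: [
        kCIInputScaleKey: 1.0 / lineScale,
        kCIInputAspectRatioKey: 1.0,
        "inputC": 0.75
    ])
}

Since the convolution then runs on far fewer pixels, this is also cheaper than widening the kernel itself; the trade-off is interpolation artifacts, which the choice of scaling filter and its parameters should let you soften.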