Is there no library for selecting an image and choosing a square crop from that image?

As a fledgling programmer, I've been told I'll know I've reached an intermediate level when I'm regularly comfortable using libraries. I'm facing the problem of letting my users choose a profile picture and, in the case where it isn't square, letting them move a square around the image to crop it.

This is such a common feature (Tinder, GitHub, Facebook, etc.) that I assumed I'd be able to find a Swift 3-compatible solution online, but I've had no luck. I looked into WDImagePicker at length, only to find it doesn't quite match what users are used to seeing.

If no solution exists I'll write the code myself and, if it works out, share it online, but I find that hard to believe. Does anyone know of any solutions?

To clarify my exact intent: the user should be able to pick an image to upload. If that image isn't square, a square with side length equal to min(imageWidth, imageHeight) should be displayed. That square should overlap the picture, scroll-view style, so the user can move the image around to fit it to the square, framing exactly the square portion he or she wants to upload. That portion of the image should then be uploaded. Essentially, I want an aspect fill where the user decides which part of the image gets cut off. Is there really no API that handles this?
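
For what it's worth, the non-interactive part is easy enough. Here is a rough Swift 3 sketch of cropping a UIImage to a centred square of side min(imageWidth, imageHeight) (squareCrop is just my own illustration, not from any library, and it glosses over EXIF orientation edge cases). What I can't find is the interactive part where the user positions the image inside that square.

import UIKit

// Crop a UIImage to its largest centred square, i.e. a square whose side is
// min(width, height). Purely illustrative; does not handle user positioning.
func squareCrop(_ image: UIImage) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }

    // Work in the pixel coordinates of the underlying CGImage.
    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    let side = min(width, height)
    let cropRect = CGRect(x: (width - side) / 2,
                          y: (height - side) / 2,
                          width: side,
                          height: side)

    guard let cropped = cgImage.cropping(to: cropRect) else { return nil }
    return UIImage(cgImage: cropped, scale: image.scale, orientation: image.imageOrientation)
}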

OK, first let me apologise for not getting this to you sooner, but here is my completely custom implementation of an image cropper, and it actually works (unlike Apple's). It's very simple, but unfortunately I (foolishly) didn't build it to be easily reusable, so you'll have to try to recreate what's in the storyboard. I'll do my best to illustrate it with screenshots.

First, note that this is all Swift 2.3 code, since it comes from an old project. If you're working in Swift 2.3, great; if not, you'll need to update it.

This is the view controller file you need to add to your project; note the comments explaining how the various parts work:

import Foundation
import UIKit
import AVFoundation

class EditProfilePictureViewController : UIViewController {

    var imageView: UIImageView?
    var image : UIImage!
    var center: CGPoint!

    @IBOutlet var indicatorView: UIView!
    @IBOutlet var spaceView: UIView!

    // Set these to the desired width and height of your output image.
    let desiredWidth:CGFloat = 75
    let desiredHeight: CGFloat = 100

    override func viewDidLoad() {
        super.viewDidLoad()

        // Set up UI and gesture recognisers.
        indicatorView.layer.borderColor = UIColor.whiteColor().CGColor
        indicatorView.layer.borderWidth = 8

        let pan = UIPanGestureRecognizer(target: self, action: "didPan:")
        let pinch = UIPinchGestureRecognizer(target: self, action: "didPinch:")
        indicatorView.addGestureRecognizer(pan)
        indicatorView.addGestureRecognizer(pinch)
    }

    override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)

        // Set up the image view in relation to the storyboard views.
        imageView = UIImageView()
        imageView?.image = image
        imageView!.frame = AVMakeRectWithAspectRatioInsideRect(image.size, indicatorView.frame)
        center = imageView!.center
        spaceView.insertSubview(imageView!, belowSubview: indicatorView)
    }


    // Invoked when the user pans across the screen. The logic here basically just checks whether the pan would move the image outside the cropping square and, if so, doesn't allow the pan to happen.
    func didPan(recognizer: UIPanGestureRecognizer) {

        let translation = recognizer.translationInView(spaceView)

        if (imageView!.frame.minX + translation.x >= indicatorView.frame.minX &&
            imageView!.frame.maxX + translation.x <= indicatorView.frame.maxX) ||
           ((imageView!.frame.size.width >= indicatorView.frame.size.width) &&
            (imageView!.frame.minX + translation.x <= indicatorView.frame.minX &&
             imageView!.frame.maxX + translation.x >= indicatorView.frame.maxX)) {
            imageView!.center.x += translation.x
        }

        if (imageView!.frame.minY + translation.y >= indicatorView.frame.minY &&
            imageView!.frame.maxY + translation.y <= indicatorView.frame.maxY) ||
           ((imageView!.frame.size.height >= indicatorView.frame.size.height) &&
            (imageView!.frame.minY + translation.y <= indicatorView.frame.minY &&
             imageView!.frame.maxY + translation.y >= indicatorView.frame.maxY)) {
            imageView!.center.y += translation.y
        }

        recognizer.setTranslation(CGPointZero, inView: spaceView)
    }

    // Invoked when the user pinches the screen. The logic here only applies the zoom while the scaled image would still be at least as large as the cropping square in at least one dimension, and when the gesture ends it animates the image back into place if it has drifted outside or shrunk below the crop area.
    func didPinch(recognizer: UIPinchGestureRecognizer) {

        let view = UIView(frame: imageView!.frame)

        view.transform = CGAffineTransformScale(imageView!.transform, recognizer.scale, recognizer.scale)

        if view.frame.size.width >= indicatorView.frame.size.width || view.frame.size.height >= indicatorView.frame.size.height {

            imageView!.transform = CGAffineTransformScale(imageView!.transform, recognizer.scale, recognizer.scale)
            recognizer.scale = 1
        }

        if recognizer.state == UIGestureRecognizerState.Ended {

            if imageView!.frame.minX > indicatorView.frame.minX || imageView!.frame.maxX < indicatorView.frame.maxX {

                UIView.animateWithDuration(0.3, animations: { () -> Void in
                    self.imageView!.center = self.indicatorView.center
                })
            }

            if imageView!.frame.size.height < indicatorView.frame.size.height && imageView!.frame.size.width < indicatorView.frame.size.width {

                UIView.animateWithDuration(0.3, animations: { () -> Void in
                    self.imageView!.frame = AVMakeRectWithAspectRatioInsideRect(self.image.size, self.indicatorView.frame)
                })
            }
        }
    }

    // Action for the cancel button.
    @IBAction func cancelButtonPressed(sender: AnyObject) {
        dismissViewControllerAnimated(true, completion: nil)
    }

    // Action for the save button. The logic here scales the output image down to the desired size.
    @IBAction func saveButtonPressed(sender: AnyObject) {

        let croppedImage = grabIndicatedImage()
        UIGraphicsBeginImageContext(CGSizeMake(desiredWidth, desiredHeight))
        CGContextSetFillColorWithColor(UIGraphicsGetCurrentContext(), UIColor.blackColor().CGColor)

        if desiredWidth / desiredHeight == croppedImage.size.width / croppedImage.size.height  {
            croppedImage.drawInRect(CGRect(x: 0, y: 0, width: desiredWidth, height: desiredHeight))
        } else {
            let croppedImageSize : CGRect = AVMakeRectWithAspectRatioInsideRect(croppedImage.size, CGRectMake(0, 0, desiredWidth, desiredHeight))
            croppedImage.drawInRect(croppedImageSize)
        }

        let resizedCroppedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        let data = UIImagePNGRepresentation(resizedCroppedImage)
        // At this point you now have an image cropped to your desired size, as well as data representation of it should you want to send to an API.
    }

    // When you call this method it basically takes a screenshot of the crop area and gets the UIImage object from it.
    func grabIndicatedImage() -> UIImage  {

        // Note: UIGraphicsBeginImageContext renders at scale 1, so the output is not retina resolution.
        UIGraphicsBeginImageContext(self.view.layer.frame.size)
        let context : CGContextRef = UIGraphicsGetCurrentContext()!
        self.view.layer.renderInContext(context)
        let screenshot : UIImage = UIGraphicsGetImageFromCurrentImageContext();

        // The 8pt/16pt offsets compensate for the indicator view's border width set in viewDidLoad; the 72pt y offset comes from this particular storyboard layout, so adjust these values to match your own layout.
        let rectToCrop = CGRectMake(indicatorView.frame.minX + 8, indicatorView.frame.minY + 72, indicatorView.frame.width - 16, indicatorView.frame.height - 16)

        let imageRef : CGImageRef = CGImageCreateWithImageInRect(screenshot.CGImage, rectToCrop)!
        let croppedImage = UIImage(CGImage: imageRef)


        UIGraphicsEndImageContext();
        return croppedImage
    }

    // MARK: The following methods relate to re-laying out the view if the user changes the device orientation.

    override func didRotateFromInterfaceOrientation(fromInterfaceOrientation: UIInterfaceOrientation) {
        if (floor(NSFoundationVersionNumber) <= NSFoundationVersionNumber_iOS_7_1)
        {
            UIView.animateWithDuration(0.3, animations: { () -> Void in
                self.imageView!.center = self.indicatorView.center
                self.imageView!.frame = AVMakeRectWithAspectRatioInsideRect(self.image.size, self.indicatorView.frame)
            })
        }
    }

    override func viewWillTransitionToSize(size: CGSize, withTransitionCoordinator coordinator: UIViewControllerTransitionCoordinator) {
        super.viewWillTransitionToSize(size, withTransitionCoordinator: coordinator)

        coordinator.animateAlongsideTransition({ (context) -> Void in

            if UIDevice.currentDevice().userInterfaceIdiom == .Pad
            {
                UIView.animateWithDuration(0.3, animations: { () -> Void in
                    self.imageView!.center = self.indicatorView.center
                    self.imageView!.frame = AVMakeRectWithAspectRatioInsideRect(self.image.size, self.indicatorView.frame)
                })
            }
        }, completion: { (context) -> Void in
        })
    }
}

Next you need to set up the storyboard/nib and connect the outlets. The UI in the storyboard looks like this:

Not particularly helpful, I know. The view hierarchy is a little more insightful and looks like this:

As you can see, there's not much to set up in the storyboard. The Space View is really just a subview of the main view. It has constraints on all four edges so it matches the size of the root view. Very easy to replicate. It has a black background, but that can be any colour you choose.

The Indicator View is slightly more involved. It's a subview of the Space View, as you can see from the view-hierarchy screenshot. As far as constraints go, the most important one is the aspect ratio. This needs to be the aspect ratio of the crop you want. In my case it's 4:3, but for you it will most likely be 1:1 since you want a square crop. You can change this easily, but note that desiredWidth and desiredHeight must be set to reflect the aspect ratio.

The constraints may look complicated, but they're actually quite simple. Let me break them down:

As I mentioned, set the aspect ratio. Next, centre the view horizontally and vertically in the space view. Then create a pair of equal-width and equal-height constraints to the space view and make both of them 'less than or equal to'. After that, create another pair of equal-width and equal-height constraints and set the priority of both to 750.
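
If you'd rather not fiddle with the storyboard, the same constraints can be set up in code. Here's a rough, untested sketch (same Swift 2.3 style, using layout anchors, assuming spaceView and indicatorView have already been created and added as subviews, and using the 1:1 ratio you want rather than my 4:3) that you could drop into viewDidLoad:

// Space View: pinned to all four edges of the root view.
spaceView.translatesAutoresizingMaskIntoConstraints = false
spaceView.topAnchor.constraintEqualToAnchor(view.topAnchor).active = true
spaceView.bottomAnchor.constraintEqualToAnchor(view.bottomAnchor).active = true
spaceView.leadingAnchor.constraintEqualToAnchor(view.leadingAnchor).active = true
spaceView.trailingAnchor.constraintEqualToAnchor(view.trailingAnchor).active = true

// Indicator View: 1:1 aspect ratio, centred in the space view.
indicatorView.translatesAutoresizingMaskIntoConstraints = false
indicatorView.widthAnchor.constraintEqualToAnchor(indicatorView.heightAnchor).active = true
indicatorView.centerXAnchor.constraintEqualToAnchor(spaceView.centerXAnchor).active = true
indicatorView.centerYAnchor.constraintEqualToAnchor(spaceView.centerYAnchor).active = true

// Never let it exceed the space view...
indicatorView.widthAnchor.constraintLessThanOrEqualToAnchor(spaceView.widthAnchor).active = true
indicatorView.heightAnchor.constraintLessThanOrEqualToAnchor(spaceView.heightAnchor).active = true

// ...but prefer to fill it, at priority 750 so the 'less than or equal' constraints win.
let fillWidth = indicatorView.widthAnchor.constraintEqualToAnchor(spaceView.widthAnchor)
fillWidth.priority = 750
fillWidth.active = true
let fillHeight = indicatorView.heightAnchor.constraintEqualToAnchor(spaceView.heightAnchor)
fillHeight.priority = 750
fillHeight.active = true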

Right, that's the UI laid out; now you just need to connect the outlets. What to connect to the spaceView and indicatorView outlets should be pretty obvious, so go ahead and do that. Don't forget to hook the cancel and save buttons up to their actions as well.

Finally, I'll explain how to use it. Create a segue to this new view controller and override the prepareForSegue method on the presenting view controller. Grab a reference to the destination view controller by whatever means you choose and set its image property to the image you want to crop. Note that this isn't a replacement for UIImagePickerController but a complement to it: you still need UIImagePickerController to get an image from the camera roll or the camera, and this handles the editing. For example:

override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
    if let editVC = segue.destinationViewController as? EditProfilePictureViewController {
        editVC.image = self.pickedImage // Set image to the image you want to crop.
    }
}

The editor will then pop up and you can position the image however you like before tapping save. You may want to implement a delegate to get the cropped image back out of the editor, but I'll leave that to you.
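
If you do go the delegate route, a rough sketch (the names here are just suggestions, again in Swift 2.3 style) could look like this:

// A minimal delegate protocol for handing the cropped image back.
protocol EditProfilePictureDelegate: class {
    func editProfilePictureViewController(controller: EditProfilePictureViewController, didFinishCroppingImage image: UIImage)
}

// In EditProfilePictureViewController you would add:
//     weak var delegate: EditProfilePictureDelegate?
// and, at the end of saveButtonPressed, call something like:
//     delegate?.editProfilePictureViewController(self, didFinishCroppingImage: resizedCroppedImage)
//     dismissViewControllerAnimated(true, completion: nil)
// The presenting view controller then adopts the protocol and sets itself as the delegate in prepareForSegue.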

Again, sorry for not getting this to you sooner, but I hope you'll find it was worth it! Good luck with your app, and please post a link when it goes live so I can check it out!