dispatch_barrier_async doesn't wait for CoreImage to finish processing

Before I ask my question, I should say that I have read a lot about this and tried many approaches, but none of them worked. I'm doing dozens of Core Image processing operations on a concurrent queue, and I need to use dispatch_barrier_async to wait until they are finished so that I can do the final render and go to the next view controller. Ironically, though, dispatch_barrier_async does not wait for my concurrent queue to finish. Why is that? Is it because I'm doing the Core Image processing on the wrong thread?

//This is my concurrent queue.

dispatch_queue_t concurrentQueue = 
dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);

Here is an example of processing one of the effects:

-(void)setupEffects{
    //It's one of my effects as an example, which renders for previewing the effect.

case Effect4:{
              dispatch_async(concurrentQueue, ^{
                  //BG
                  self.firstCIFilter = [CIFilter filterWithName:@"CIHexagonalPixellate"
                                            withInputParameters:@{@"inputImage": [self getFirstCIImage],@"inputScale":@26}];
                  self.lastSelectedInputImgforBG =[self applyCIEffectWithCrop];

                  //FG
                  self.firstCIFilter = [CIFilter filterWithName:@"CIPhotoEffectProcess"
                                            withInputParameters:@{@"inputImage":[self getFirstCIImage]}];
                  self.fgImgWithEffect = [self applyCIEffect];

                  dispatch_async(dispatch_get_main_queue(), ^{
                      self.lastSelectedInputImgforFG= [self cropAndFadeAndRenderFGImage];
                      [self saveEffect];
                      [self loadEffectsWithIndex:effectIndex];
                  });
              });
 }

//Once user is done, it renders the image once again
-(UIImage *)applyCIEffectWithCrop{
    __weak typeof(self) weakSelf = self;
    @autoreleasepool{
        weakSelf.firstCIContext =nil;
        weakSelf.firstResultCIImage=nil;
        weakSelf.croppingCIImage=nil;

        weakSelf.firstCIContext = [CIContext contextWithOptions:nil];
        weakSelf.firstResultCIImage = [weakSelf.firstCIFilter valueForKey:kCIOutputImageKey];
        weakSelf.croppingCIImage=[weakSelf.firstResultCIImage imageByCroppingToRect:CGRectMake(0,0, weakSelf.affineClampImage1.size.width*scale , weakSelf.affineClampImage1.size.height*scale)];
        return  [UIImage imageFromCIImage:weakSelf.croppingCIImage scale:1.0 orientation:weakSelf.scaledDownInputImage.imageOrientation cropped:YES withFirstCIImage:[weakSelf getFirstCIImage]];
    }
}

Then, for my final render, this method needs to wait until my setupEffects is finished and then perform the segue, but it doesn't.

- (void)doneButtonAction {
    _finalRender =YES;
    CGFloat max=MAX(self.originalSizeInputImage.size.width,self.originalSizeInputImage.size.height);
    if (max<=1700){
        //Do nothing for Final Render
        self.scaledDownInputImage= self.originalSizeInputImage;
    }else{
        CGSize scaledDownSize = [self getScalingSizeForFinalRenderForImage: self.originalSizeInputImage];
        self.scaledDownInputImage = [self scaleThisImage:self.originalSizeInputImage scaledToFillSize:scaledDownSize];
    }
    imageRect = AVMakeRectWithAspectRatioInsideRect(self.scaledDownInputImage.size, self.viewWithLoadedImages.bounds);

    //Preparation for high quality render with high resolution input image.
    self.affineClampImage1 = [self affineClampImage];
    self.selectionCropAndBlurredImage = [self croppedFGtoGetBlurred];
    [self.imgData appendData:UIImagePNGRepresentation(self.scaledDownInputImage)];
    [self.effectClass getimageWithImageData:self.imgData];

    if (_effectMode) {
        //Applying effects again for the high resolution input image.
        [self setupEffects];
    }else{
        [self setupFilters];
    }

    dispatch_async(concurrentQueue, ^{
        //Rendering the high quality Images in full resolution here.
        CGRect frame = CGRectMake(0.0, 0.0,
                                  self.lastSelectedInputImgforBG.size.width  *self.lastSelectedInputImgforBG.scale,
                                  self.lastSelectedInputImgforBG.size.height *self.lastSelectedInputImgforBG.scale);
        UIGraphicsBeginImageContextWithOptions(frame.size, NO, 1.0);
        // Draw transparent images on top of each other
        [self.lastSelectedInputImgforBG drawInRect:frame];
        [self.lastSelectedInputImgforFG drawInRect:frame];
        self.tempImage=nil;
        self.tempImage = UIGraphicsGetImageFromCurrentImageContext();        
        UIGraphicsEndImageContext();
    });

    dispatch_barrier_async(concurrentQueue, ^{
        // Getting the full resolution rendered image and going to the next
        // view controller once setupEffects and the render are finished...
        // which it doesn't wait for...
        self.finalHightqualityRenderedImage = self.tempImage;        
        [self performSegueWithIdentifier:@"showShareVC" sender:self];
    });
}

I should mention that my code works without any problem when I don't use my concurrent queue, but of course it then blocks the UI until it's finished, which is not my goal. Thanks a lot for your help.

I think the explanation is at the bottom of the documentation for dispatch_barrier_async:

The queue you specify should be a concurrent queue that you create yourself using the dispatch_queue_create function. If the queue you pass to this function is a serial queue or one of the global concurrent queues, this function behaves like the dispatch_async function.

So instead of grabbing DISPATCH_QUEUE_PRIORITY_BACKGROUND as you do in your first line of code, create concurrentQueue yourself with dispatch_queue_create, as sketched below.
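
A minimal sketch of what that could look like (the queue label "com.example.effects" is just a placeholder; the rest of your code can stay as it is):

// A private concurrent queue created with dispatch_queue_create;
// barriers only take effect on queues created this way, not on the
// global queues.
dispatch_queue_t concurrentQueue =
    dispatch_queue_create("com.example.effects", DISPATCH_QUEUE_CONCURRENT);

// Submit the effect work to it as before...
dispatch_async(concurrentQueue, ^{
    // ...Core Image processing...
});

// ...and the barrier block now runs only after every block that was
// submitted to concurrentQueue before it has finished.
dispatch_barrier_async(concurrentQueue, ^{
    dispatch_async(dispatch_get_main_queue(), ^{
        // Back on the main queue for UI work such as performing the segue.
    });
});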