iOS-Capture Screenshot in Background

On iOS there are plenty of ways to take a screenshot while the app is running in the foreground, but once the app is switched to the background or the screen is locked, those approaches start to fail.

The solution breaks down as follows:

Background Execution

For the ways an iOS app can keep running in the background, see iOS-Background Execution.
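As one example, a minimal sketch of a finite-length background task that buys time to keep capturing after the app resigns active (the placement of the capture work and the taskId variable name are illustrative assumptions, not taken from the referenced post):

__block UIBackgroundTaskIdentifier taskId = UIBackgroundTaskInvalid;
taskId = [[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:^{
    // Time is up: end the task so the system does not terminate the app.
    [[UIApplication sharedApplication] endBackgroundTask:taskId];
    taskId = UIBackgroundTaskInvalid;
}];

// ... take snapshots while the task is alive ...

[[UIApplication sharedApplication] endBackgroundTask:taskId];
taskId = UIBackgroundTaskInvalid;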

Taking the Screenshot

Under normal circumstances, the following approach works best:

- (UIImage *)snapshot
{
    CGSize videoSize = CGSizeMake(150, 100);
    UIImage *captured = nil;

    // Draw the key window's view hierarchy into a small bitmap context.
    UIGraphicsBeginImageContextWithOptions(videoSize, NO, 1);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetInterpolationQuality(context, kCGInterpolationLow);
    UIWindow *window = [[UIApplication sharedApplication] delegate].window;
    [window drawViewHierarchyInRect:CGRectMake(0, 0, videoSize.width, videoSize.height) afterScreenUpdates:NO];
    captured = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return captured;
}

However, when the app is in the background or the screen is locked, this approach produces a blank image. In that case we render the layer tree directly through CALayer to generate the image. The image produced this way matches the size of the device screen, so to get the size we actually want, the context has to be scaled first with CGContextScaleCTM:

CGContextScaleCTM(UIGraphicsGetCurrentContext(), videoSize.width / SCREEN_WIDTH, videoSize.height / SCREEN_HEIGHT);
[window.layer renderInContext:context];
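Putting the two paths together, a minimal sketch of a background-safe snapshot could look like the following (the applicationState check and the name backgroundSafeSnapshot are my assumptions, not part of the original code; SCREEN_WIDTH and SCREEN_HEIGHT are assumed to be macros for the screen size in points):

- (UIImage *)backgroundSafeSnapshot
{
    CGSize videoSize = CGSizeMake(150, 100);
    UIGraphicsBeginImageContextWithOptions(videoSize, NO, 1);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetInterpolationQuality(context, kCGInterpolationLow);
    UIWindow *window = [[UIApplication sharedApplication] delegate].window;

    if ([UIApplication sharedApplication].applicationState == UIApplicationStateActive) {
        // Foreground: drawViewHierarchyInRect: gives the most faithful result.
        [window drawViewHierarchyInRect:CGRectMake(0, 0, videoSize.width, videoSize.height)
                     afterScreenUpdates:NO];
    } else {
        // Background or locked screen: render the layer tree directly, scaling the
        // screen-sized output down to the target size.
        CGContextScaleCTM(context, videoSize.width / SCREEN_WIDTH, videoSize.height / SCREEN_HEIGHT);
        [window.layer renderInContext:context];
    }

    UIImage *captured = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return captured;
}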

OpenGL

When capturing the screen this way, the content rendered by OpenGL cannot be captured because of how OpenGL draws, so a further change is needed:

Create a subclass of CAEAGLLayer.

CaptureCAEAGLLayer.h:

#import <QuartzCore/QuartzCore.h>

@protocol CaptureDelegate <NSObject>
- (void)renderInContext:(CGContextRef)context;
@end

@interface CaptureCAEAGLLayer : CAEAGLLayer
@property (nonatomic, assign) id<CaptureDelegate> captureDelegate;
@end

CaptureCAEAGLLayer.m:

#import "CaptureCAEAGLLayer.h"

@implementation CaptureCAEAGLLayer
- (void)renderInContext:(CGContextRef)ctx
{
[super renderInContext:ctx];
[self.captureDelegate renderInContext:ctx];
}
@end

Then modify the view that uses the CAEAGLLayer:

// In the view's initializer (for example -initWithFrame:), point the layer's
// capture delegate at the view itself:
CaptureCAEAGLLayer *eaglLayer = (CaptureCAEAGLLayer *)self.layer;
eaglLayer.captureDelegate = self;

+ (Class)layerClass
{
    return [CaptureCAEAGLLayer class];
}

- (void)renderInContext:(CGContextRef)context
{
    GLint backingWidth, backingHeight;

    // Bind the color renderbuffer used to render the OpenGL ES view.
    // If your application only creates a single color renderbuffer which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note: replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read the pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data.
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel;
    // otherwise, use kCGImageAlphaPremultipliedLast.
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS;
    // create a graphics context with the target size measured in POINTS.
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration.
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0.
        CGFloat scale = self.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
    }
    else {
        // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
    }

    // The UIKit coordinate system is upside down relative to the GL/Quartz coordinate system.
    // Flip the CGImage by rendering it into the flipped bitmap context.
    // The size of the destination area is measured in POINTS.
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextDrawImage(context, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);
}
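With this in place, a window-level snapshot also picks up the OpenGL ES content: [window.layer renderInContext:context] recurses into the CaptureCAEAGLLayer, whose override forwards the call to the view's -renderInContext:, which reads the pixels back from the renderbuffer. One thing to keep in mind is that those GL calls only work if the view's EAGLContext is current on the thread doing the snapshot. A hypothetical call site (glView and its context property are assumed names, and backgroundSafeSnapshot is the sketch from earlier):

// Make the GL view's context current before snapshotting, since
// -renderInContext: issues OpenGL ES commands on this thread.
[EAGLContext setCurrentContext:self.glView.context];
UIImage *frame = [self backgroundSafeSnapshot];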