Getting the pixel color of a UIImage

How do I get the RGB value of a specific pixel in a UIImage?

You can't access the raw data directly, but you can get at it by obtaining the image's CGImage. Here is a link to another question that answers your question, and others you may have about image processing in more detail: CGImage

Try this very simple code:

I used it in my maze game to detect walls (the only information I needed was the alpha channel, but I've included the code for reading the other channels as well):

    - (BOOL)isWallPixel:(UIImage *)image xCoordinate:(int)x yCoordinate:(int)y {
        CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
        const UInt8 *data = CFDataGetBytePtr(pixelData);

        int pixelInfo = ((image.size.width * y) + x) * 4; // The image is png

        //UInt8 red = data[pixelInfo];        // If you need this info, enable it
        //UInt8 green = data[pixelInfo + 1];  // If you need this info, enable it
        //UInt8 blue = data[pixelInfo + 2];   // If you need this info, enable it
        UInt8 alpha = data[pixelInfo + 3];    // I need only this info for my maze game
        CFRelease(pixelData);

        //UIColor *color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f]; // The pixel color info

        if (alpha) return YES;
        else return NO;
    }

OnTouch

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        UITouch *touch = [[touches allObjects] objectAtIndex:0];
        CGPoint point1 = [touch locationInView:self.view];
        touch = [[event allTouches] anyObject];
        if ([touch view] == imgZoneWheel) {
            CGPoint location = [touch locationInView:imgZoneWheel];
            [self getPixelColorAtLocation:location];
            if (alpha == 255) {
                NSLog(@"In Image Touch view alpha %d", alpha);
                [self translateCurrentTouchPoint:point1.x :point1.y];
                [imgZoneWheel setImage:[UIImage imageNamed:[NSString stringWithFormat:@"blue%d.png", GrndFild]]];
            }
        }
    }

    - (UIColor *)getPixelColorAtLocation:(CGPoint)point {
        UIColor *color = nil;
        CGImageRef inImage = imgZoneWheel.image.CGImage;

        // Create an off-screen bitmap context to draw the image into.
        // Format ARGB is 4 bytes per pixel: Alpha, Red, Green, Blue.
        CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
        if (cgctx == NULL) {
            return nil; /* error */
        }

        size_t w = CGImageGetWidth(inImage);
        size_t h = CGImageGetHeight(inImage);
        CGRect rect = {{0, 0}, {w, h}};

        // Draw the image into the bitmap context. Once we draw, the memory
        // allocated for the context will contain the raw image data in the
        // specified color space.
        CGContextDrawImage(cgctx, rect, inImage);

        // Now we can get a pointer to the image data associated with the bitmap context.
        unsigned char *data = CGBitmapContextGetData(cgctx);
        if (data != NULL) {
            // offset locates the pixel in the data from x,y.
            // 4 for 4 bytes of data per pixel, w is the width of one row of data.
            int offset = 4 * ((w * round(point.y)) + round(point.x));
            alpha = data[offset]; // alpha is an instance variable, also checked in -touchesBegan:
            int red = data[offset + 1];
            int green = data[offset + 2];
            int blue = data[offset + 3];
            color = [UIColor colorWithRed:(red / 255.0f) green:(green / 255.0f) blue:(blue / 255.0f) alpha:(alpha / 255.0f)];
        }

        // When finished, release the context
        CGContextRelease(cgctx);

        // Free the image data memory for the context
        if (data) {
            free(data);
        }

        return color;
    }

    - (CGContextRef)createARGBBitmapContextFromImage:(CGImageRef)inImage {
        CGContextRef context = NULL;
        CGColorSpaceRef colorSpace;
        void *bitmapData;
        int bitmapByteCount;
        int bitmapBytesPerRow;

        // Get the image width and height. We'll use the entire image.
        size_t pixelsWide = CGImageGetWidth(inImage);
        size_t pixelsHigh = CGImageGetHeight(inImage);

        // Declare the number of bytes per row. Each pixel in the bitmap in this
        // example is represented by 4 bytes: 8 bits each of red, green, blue, and alpha.
        bitmapBytesPerRow = (pixelsWide * 4);
        bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);

        // Use the device RGB color space.
        colorSpace = CGColorSpaceCreateDeviceRGB();
        if (colorSpace == NULL) {
            fprintf(stderr, "Error allocating color space\n");
            return NULL;
        }

        // Allocate memory for the image data. This is the destination in memory
        // where any drawing to the bitmap context will be rendered.
        bitmapData = malloc(bitmapByteCount);
        if (bitmapData == NULL) {
            fprintf(stderr, "Memory not allocated!");
            CGColorSpaceRelease(colorSpace);
            return NULL;
        }

        // Create the bitmap context. We want pre-multiplied ARGB, 8 bits
        // per component. Regardless of the source image format (CMYK,
        // grayscale, and so on) it will be converted to the format
        // specified here by CGBitmapContextCreate.
        context = CGBitmapContextCreate(bitmapData, pixelsWide, pixelsHigh,
                                        8, // bits per component
                                        bitmapBytesPerRow, colorSpace,
                                        kCGImageAlphaPremultipliedFirst);
        if (context == NULL) {
            free(bitmapData);
            fprintf(stderr, "Context not created!");
        }

        // Make sure to release the color space before returning.
        CGColorSpaceRelease(colorSpace);

        return context;
    }

Here is a generic method for getting the color of a pixel in a UIImage, based on Minas Petterson's answer:

    - (UIColor *)pixelColorInImage:(UIImage *)image atX:(int)x atY:(int)y {
        CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
        const UInt8 *data = CFDataGetBytePtr(pixelData);

        int pixelInfo = ((image.size.width * y) + x) * 4; // 4 bytes per pixel

        UInt8 red   = data[pixelInfo + 0];
        UInt8 green = data[pixelInfo + 1];
        UInt8 blue  = data[pixelInfo + 2];
        UInt8 alpha = data[pixelInfo + 3];
        CFRelease(pixelData);

        return [UIColor colorWithRed:red   / 255.0f
                               green:green / 255.0f
                                blue:blue  / 255.0f
                               alpha:alpha / 255.0f];
    }

Note that X and Y may be swapped; this function accesses the underlying bitmap directly and does not take into account any rotation (imageOrientation) that may be part of the UIImage.
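If orientation does matter, one workaround (not from the original answers) is to redraw the image into a fresh context first, so the pixel buffer matches what is actually displayed. A minimal Swift sketch, assuming iOS 10+ for UIGraphicsImageRenderer:

    import UIKit

    // Minimal sketch: bake the orientation into the pixel data by redrawing.
    // Assumes iOS 10+ (UIGraphicsImageRenderer).
    func normalizedImage(from image: UIImage) -> UIImage {
        guard image.imageOrientation != .up else { return image }
        let renderer = UIGraphicsImageRenderer(size: image.size)
        return renderer.image { _ in
            image.draw(in: CGRect(origin: .zero, size: image.size))
        }
    }

The normalized image can then be passed to any of the sampling helpers in this thread.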

Another approach is to draw just the pixel of interest into a 1x1 bitmap context and read it back:

    - (UIColor *)colorAtPixel:(CGPoint)point inImage:(UIImage *)image {
        if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), point)) {
            return nil;
        }

        // Create a 1x1 pixel byte array and bitmap context to draw the pixel into.
        NSInteger pointX = trunc(point.x);
        NSInteger pointY = trunc(point.y);
        CGImageRef cgImage = image.CGImage;
        NSUInteger width = image.size.width;
        NSUInteger height = image.size.height;
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        int bytesPerPixel = 4;
        int bytesPerRow = bytesPerPixel * 1;
        NSUInteger bitsPerComponent = 8;
        unsigned char pixelData[4] = { 0, 0, 0, 0 };
        CGContextRef context = CGBitmapContextCreate(pixelData,
                                                     1,
                                                     1,
                                                     bitsPerComponent,
                                                     bytesPerRow,
                                                     colorSpace,
                                                     kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGColorSpaceRelease(colorSpace);
        CGContextSetBlendMode(context, kCGBlendModeCopy);

        // Draw the pixel we are interested in onto the bitmap context
        CGContextTranslateCTM(context, -pointX, pointY - (CGFloat)height);
        CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
        CGContextRelease(context);

        // Convert color values [0..255] to floats [0.0..1.0]
        CGFloat red   = (CGFloat)pixelData[0] / 255.0f;
        CGFloat green = (CGFloat)pixelData[1] / 255.0f;
        CGFloat blue  = (CGFloat)pixelData[2] / 255.0f;
        CGFloat alpha = (CGFloat)pixelData[3] / 255.0f;
        return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
    }

Some Swift code based on Minas' answer. I've extended UIImage so it's accessible anywhere, and I've added some simple logic to guess the image format based on the pixel stride (1, 3 or 4 bytes).

Swift 3:

    public extension UIImage {
        func getPixelColor(point: CGPoint) -> UIColor {
            guard let cgImage = self.cgImage,
                  let pixelData = cgImage.dataProvider?.data,
                  let data = CFDataGetBytePtr(pixelData) else {
                return UIColor.clear
            }

            let x = Int(point.x)
            let y = Int(point.y)
            let index = Int(self.size.width) * y + x

            let expectedLengthA = Int(self.size.width * self.size.height)
            let expectedLengthRGB = 3 * expectedLengthA
            let expectedLengthRGBA = 4 * expectedLengthA
            let numBytes = CFDataGetLength(pixelData)
            switch numBytes {
            case expectedLengthA:
                return UIColor(red: 0, green: 0, blue: 0,
                               alpha: CGFloat(data[index]) / 255.0)
            case expectedLengthRGB:
                return UIColor(red: CGFloat(data[3 * index]) / 255.0,
                               green: CGFloat(data[3 * index + 1]) / 255.0,
                               blue: CGFloat(data[3 * index + 2]) / 255.0,
                               alpha: 1.0)
            case expectedLengthRGBA:
                return UIColor(red: CGFloat(data[4 * index]) / 255.0,
                               green: CGFloat(data[4 * index + 1]) / 255.0,
                               blue: CGFloat(data[4 * index + 2]) / 255.0,
                               alpha: CGFloat(data[4 * index + 3]) / 255.0)
            default:
                // Unsupported pixel format
                return UIColor.clear
            }
        }
    }
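As a quick usage sketch (the asset name and coordinates are placeholders, not from the original answer):

    // Hypothetical usage of the extension above; "maze" is a placeholder asset name.
    if let image = UIImage(named: "maze") {
        let color = image.getPixelColor(point: CGPoint(x: 10, y: 20))
        print("alpha at (10, 20): \(color.cgColor.alpha)")
    }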

Updated for Swift 4:

    func getPixelColor(_ image: UIImage, _ point: CGPoint) -> UIColor {
        let cgImage: CGImage = image.cgImage!
        guard let pixelData = CGDataProvider(data: (cgImage.dataProvider?.data)!)?.data else {
            return UIColor.clear
        }
        let data = CFDataGetBytePtr(pixelData)!

        let x = Int(point.x)
        let y = Int(point.y)
        let index = Int(image.size.width) * y + x

        let expectedLengthA = Int(image.size.width * image.size.height)
        let expectedLengthRGB = 3 * expectedLengthA
        let expectedLengthRGBA = 4 * expectedLengthA
        let numBytes = CFDataGetLength(pixelData)
        switch numBytes {
        case expectedLengthA:
            return UIColor(red: 0, green: 0, blue: 0,
                           alpha: CGFloat(data[index]) / 255.0)
        case expectedLengthRGB:
            return UIColor(red: CGFloat(data[3 * index]) / 255.0,
                           green: CGFloat(data[3 * index + 1]) / 255.0,
                           blue: CGFloat(data[3 * index + 2]) / 255.0,
                           alpha: 1.0)
        case expectedLengthRGBA:
            return UIColor(red: CGFloat(data[4 * index]) / 255.0,
                           green: CGFloat(data[4 * index + 1]) / 255.0,
                           blue: CGFloat(data[4 * index + 2]) / 255.0,
                           alpha: CGFloat(data[4 * index + 3]) / 255.0)
        default:
            // Unsupported pixel format
            return UIColor.clear
        }
    }
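Note that both Swift helpers index the buffer using size.width, which is measured in points; for @2x/@3x images the backing CGImage is larger, and rows may also be padded. A hedged variant (an assumption on my part, not from the original answers) that indexes by the CGImage's own pixel dimensions and bytesPerRow:

    // Sketch: sample by pixel coordinates of the backing CGImage.
    // Assumes a 32-bit RGBA-style layout; the actual channel order depends
    // on the image's bitmapInfo and may be BGRA/ARGB for some sources.
    func pixelColor(in image: UIImage, atPixelX x: Int, y: Int) -> UIColor? {
        guard let cgImage = image.cgImage,
              let pixelData = cgImage.dataProvider?.data,
              let data = CFDataGetBytePtr(pixelData),
              x >= 0, y >= 0, x < cgImage.width, y < cgImage.height else {
            return nil
        }
        let bytesPerPixel = cgImage.bitsPerPixel / 8
        let offset = y * cgImage.bytesPerRow + x * bytesPerPixel
        return UIColor(red: CGFloat(data[offset]) / 255.0,
                       green: CGFloat(data[offset + 1]) / 255.0,
                       blue: CGFloat(data[offset + 2]) / 255.0,
                       alpha: CGFloat(data[offset + 3]) / 255.0)
    }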

First, create and attach a tap gesture recognizer to allow user interaction:

    UITapGestureRecognizer *tapRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tapGesture:)];
    [self.label addGestureRecognizer:tapRecognizer];
    self.label.userInteractionEnabled = YES;

Now implement the -tapGesture: action:

    - (void)tapGesture:(UITapGestureRecognizer *)recognizer {
        CGPoint point = [recognizer locationInView:self.label];

        UIGraphicsBeginImageContext(self.label.bounds.size);
        CGContextRef context = UIGraphicsGetCurrentContext();
        [self.label.layer renderInContext:context];

        int bpr = CGBitmapContextGetBytesPerRow(context);
        unsigned char *data = CGBitmapContextGetData(context);
        if (data != NULL) {
            int offset = bpr * round(point.y) + 4 * round(point.x);
            int blue = data[offset + 0];
            int green = data[offset + 1];
            int red = data[offset + 2];
            int alpha = data[offset + 3];
            NSLog(@"%d %d %d %d", alpha, red, green, blue);

            if (alpha == 0) {
                // Here the tap is outside the text (transparent pixel)
            } else {
                // Here the tap is right on the text
            }
        }
        UIGraphicsEndImageContext();
    }

This works for a UILabel with a transparent background; if that is not what you have, you can instead compare the alpha, red, green and blue values with those of self.label.backgroundColor.
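For reference, a minimal Swift sketch of that comparison (the tolerance value and function name are my own assumptions, not part of the original answer):

    // Sketch: compare a sampled color against a known background color,
    // allowing a small tolerance for rounding and premultiplied alpha.
    func matchesBackground(_ sampled: UIColor, _ background: UIColor, tolerance: CGFloat = 0.02) -> Bool {
        var r1: CGFloat = 0, g1: CGFloat = 0, b1: CGFloat = 0, a1: CGFloat = 0
        var r2: CGFloat = 0, g2: CGFloat = 0, b2: CGFloat = 0, a2: CGFloat = 0
        guard sampled.getRed(&r1, green: &g1, blue: &b1, alpha: &a1),
              background.getRed(&r2, green: &g2, blue: &b2, alpha: &a2) else {
            return false
        }
        return abs(r1 - r2) <= tolerance && abs(g1 - g2) <= tolerance
            && abs(b1 - b2) <= tolerance && abs(a1 - a2) <= tolerance
    }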