Based on the concepts in my question here and the problems with PNG files outlined in my other question here, I’m going to try loading an RGBA image from two JPEGs. One will contain the RGB and the other only the alpha. I can either save the second as a greyscale JPEG, or save it as RGB and pull the alpha data from the red component.
In a second step I’ll save the raw image data out to a file in the cache. I’ll then run a test to determine whether it’s faster to load that raw data or to decompress the JPEGs and rebuild it. If the raw file turns out to be faster, on subsequent loads I can check for its existence in the cache; if it isn’t, I’ll skip that file save.
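For the caching part, a minimal sketch of the save/reload path (the names mergedRGBA, texWidth, texHeight and myTexture.rgba are placeholders for illustration, not my real code) could look like this:

// Hypothetical cache of the merged raw RGBA bytes; names are placeholders.
NSString *cacheDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *rawPath = [cacheDir stringByAppendingPathComponent:@"myTexture.rgba"];

if ([[NSFileManager defaultManager] fileExistsAtPath:rawPath]) {
    // Fast path: the raw RGBA is already cached, so skip both JPEG decodes.
    NSData *raw = [NSData dataWithContentsOfFile:rawPath];
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, [raw bytes]);
} else {
    // Slow path: build mergedRGBA from the two JPEGs as below, then cache it for next launch.
    NSData *raw = [NSData dataWithBytes:mergedRGBA length:texWidth * texHeight * 4];
    [raw writeToFile:rawPath atomically:YES];
}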
I know how to load the two JPEGs into two UIImages. What I’m not sure about is the fastest or most efficient way of interleaving the RGB from one UIImage with whichever channel of the other UIImage I use for the alpha.
I see two possibilities. One would be at comment B below: iterate through all the pixels and copy the red from the “alpha JPEG” into the alpha of the imageData stream.
The other is that maybe there’s some magic UIImage command to copy a channel from one image into a channel of another. If I did that, it would be somewhere around comment A.
Any ideas?
EDIT: Also, the process can’t destroy any RGB information. The whole reason I need this process is that PNGs from Photoshop premultiply the RGB with the alpha and thus destroy the RGB information. I’m using the alpha for something other than alpha in a custom OpenGL shader, so I’m looking for raw RGBA data whose alpha I can set to anything, to be used as a specular map, an illumination map, a height map, or anything other than transparency.
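For illustration, a fragment shader along these lines (the uniform and varying names below are invented for this sketch, not taken from my real shader) would read the texture's alpha as a specular factor rather than as opacity, which is why the RGB has to survive un-premultiplied:

// Hypothetical GLSL ES fragment shader, embedded as a C string; names are placeholders.
static const char *kSpecularMapFragmentShader =
    "precision mediump float;                                        \n"
    "uniform sampler2D u_texture;                                    \n"
    "varying vec2 v_texCoord;                                        \n"
    "varying float v_specular;  // per-vertex specular term          \n"
    "void main() {                                                   \n"
    "    vec4 texel = texture2D(u_texture, v_texCoord);              \n"
    "    // texel.a is a specular map here, not opacity              \n"
    "    gl_FragColor = vec4(texel.rgb + texel.a * v_specular, 1.0); \n"
    "}                                                               \n";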
Here’s my starter code minus my error checking and other proprietary crap. I have an array of textures that I use to manage everything about textures:
if (textureInfo[texIndex].generated == NO) {
    glGenTextures(1, &textureInfo[texIndex].texture);
    textureInfo[texIndex].generated = YES;
}
glBindTexture(GL_TEXTURE_2D, textureInfo[texIndex].texture);
// glTexParameteri commands are here based on options for this texture
// Load the RGB JPEG and the alpha JPEG into UIImages
NSString *path = [[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"%@_RGB",name] ofType:type];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *imageRGB = [[UIImage alloc] initWithData:texData];

path = [[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"%@_A",name] ofType:type];
texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *imageAlpha = [[UIImage alloc] initWithData:texData];

// Dimensions come from the decoded images
size_t widthRGB = CGImageGetWidth(imageRGB.CGImage);
size_t heightRGB = CGImageGetHeight(imageRGB.CGImage);
size_t widthA = CGImageGetWidth(imageAlpha.CGImage);
size_t heightA = CGImageGetHeight(imageAlpha.CGImage);

// One RGBA bitmap context per image so I can get at the raw bytes
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageDataRGB = malloc( heightRGB * widthRGB * 4 );
void *imageDataAlpha = malloc( heightA * widthA * 4 );
CGContextRef thisContextRGB = CGBitmapContextCreate( imageDataRGB, widthRGB, heightRGB, 8, 4 * widthRGB, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
CGContextRef thisContextA = CGBitmapContextCreate( imageDataAlpha, widthA, heightA, 8, 4 * widthA, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
// **** A. In here I want to mix the two: take the R of imageAlpha and stick it in the alpha of imageRGB.
CGColorSpaceRelease( colorSpace );
CGContextClearRect( thisContextRGB, CGRectMake( 0, 0, widthRGB, heightRGB ) );
CGContextDrawImage( thisContextRGB, CGRectMake( 0, 0, widthRGB, heightRGB ), imageRGB.CGImage );
// **** B. OR maybe repeat the above 3 lines for imageAlpha.CGImage and then
// **** iterate through the data in here, copying the R byte of imageDataAlpha onto the A byte of imageDataRGB
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, widthRGB, heightRGB, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageDataRGB);
// **** In here I could save off the merged imageData to a binary file and load that later if it's faster
glBindTexture(GL_TEXTURE_2D, textureInfo[texIndex].texture);
// Generates a full mipmap chain for the currently bound texture.
if (useMipmap) {
    glGenerateMipmap(GL_TEXTURE_2D);
}
CGContextRelease(thisContextRGB);
CGContextRelease(thisContextA);
free(imageDataRGB);
free(imageDataAlpha);
2 Answers
It is a pretty simple copy of the alpha over to the version that has the clean RGB. Despite the debate as to whether or not my shader is faster calling texture2D 2 or 4 times per fragment, the method below worked as a way to get un-premultiplied RGBA into my glTexImage2D(GL_TEXTURE_2D...) call.

I did try using a TIF instead of a PNG, and no matter what I tried, somewhere in the process the RGB was getting premultiplied with the alpha, thus destroying the RGB.

This method might be considered ugly, but it works on many levels for me, and it’s the only way I’ve been able to get full, un-premultiplied RGBA8888 images into OpenGL.
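A minimal sketch of that copy (a reconstruction of the approach described above, not the original listing), reusing the buffer names from the question's starter code and assuming both JPEGs have identical dimensions:

// Draw the alpha JPEG into its own context, just like the RGB one.
// Both source JPEGs are fully opaque, so the premultiplied context cannot
// alter their RGB values (alpha is 255 everywhere at this point).
CGContextClearRect( thisContextA, CGRectMake( 0, 0, widthA, heightA ) );
CGContextDrawImage( thisContextA, CGRectMake( 0, 0, widthA, heightA ), imageAlpha.CGImage );

// Copy the R byte of each alpha-image pixel into the A byte of the RGB buffer.
unsigned char *rgba = (unsigned char *)imageDataRGB;
unsigned char *alphaSrc = (unsigned char *)imageDataAlpha;
NSUInteger pixelCount = widthRGB * heightRGB;   // assumes both images match in size
for (NSUInteger i = 0; i < pixelCount; i++) {
    rgba[i * 4 + 3] = alphaSrc[i * 4];          // A <- R of the alpha image
}

// imageDataRGB now holds straight (un-premultiplied) RGBA, ready for glTexImage2D.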
I second the suggestion made in a comment above to use two separate textures and combine them in a shader. I would, however, like to explain why that is likely to be faster.
The number of texture2D calls in itself should not have much to do with speed. The important factors affecting speed are: (1) how much data needs to be copied from CPU to GPU (you can test this easily: uploading with two glTexImage2D calls of N/2 pixels each is almost exactly as fast as one call with N pixels, everything else being equal), and (2) whether the implementation needs to rearrange the data in CPU memory before the upload (if it does, the call can be extremely slow). Some texture formats need a rearrange and some don’t; usually at least RGBA, RGB565 and some variants of YUV420 or YUYV do not. The stride, and whether the width and height are powers of two, may also matter.
I think that, if there is no need to rearrange the data, one call with RGB and one call with A will be approximately as fast as a single call with RGBA.

Since a rearrange is much slower than a plain copy, it would probably even be faster to copy RGBX (ignoring the fourth channel) and then A, rather than rearranging RGB and A into RGBA on the CPU and then copying.
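To illustrate, a sketch of that two-texture upload (the texture IDs and pixel buffers here are placeholders) with no CPU-side rearrangement of either buffer:

// rgbxPixels: width*height*4 bytes from the decoded RGB JPEG (4th byte ignored).
// alphaPixels: width*height*1 bytes from the decoded alpha JPEG as a single plane.
glBindTexture(GL_TEXTURE_2D, rgbTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, rgbxPixels);        // RGBX uploaded as-is

glBindTexture(GL_TEXTURE_2D, alphaTexture);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);                      // rows are tightly packed single bytes
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, alphaPixels);  // the A plane as its own texture

// The fragment shader samples both textures and uses the luminance value
// wherever the merged texture's alpha channel would have been used.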
P.S. Regarding “I’ll save the raw image data out to a file in the cache. I’ll then run a test to determine whether it’s faster to load that raw data or to decompress the JPEGs”: reading raw data from anything other than memory is likely to be a lot slower than decompressing. A 1-megapixel image takes a few tens of milliseconds to decompress from JPEG, but hundreds of milliseconds to read as raw data from flash storage.
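If you want to verify that on your own device, a rough timing sketch (jpegPath and rawPath are placeholders; the draw forces UIImage to actually decode, since decompression is otherwise deferred):

// Requires QuartzCore for CACurrentMediaTime().
double t0 = CACurrentMediaTime();
UIImage *decoded = [UIImage imageWithContentsOfFile:jpegPath];
UIGraphicsBeginImageContext(decoded.size);      // drawing forces the real JPEG decode
[decoded drawAtPoint:CGPointZero];
UIGraphicsEndImageContext();
double t1 = CACurrentMediaTime();

NSData *raw = [NSData dataWithContentsOfFile:rawPath];
double t2 = CACurrentMediaTime();

NSLog(@"jpeg decode: %.1f ms, raw read (%lu bytes): %.1f ms",
      (t1 - t0) * 1000.0, (unsigned long)raw.length, (t2 - t1) * 1000.0);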