I have two images of different sizes that I want to composite together using a CIFilter.
- ImageA size is 2400x1800
- ImageB size is 1200x900

As the two images are different sizes, when they are composited together, ImageB is positioned at the bottom left and is a quarter of the size of ImageA.

This makes sense, but it is not what I intended. I would like ImageB to be resized to fill the same full size as ImageA.
I use an extension to resize ImageB, but the performance is very slow, and the resizing doesn’t seem to have any effect on the composited output.
Questions
1. How do I efficiently resize ImageB to fit the same full size as ImageA, keeping both images center aligned and maintaining the aspect ratio?
2. Does CIFilter have an inbuilt option to resize images before applying a filter?
Note: As a side thought, I considered an unconventional approach: using a UIImageView with the size of ImageA, loading in ImageB, and then taking a snapshot. That would seem to guarantee the correct size and aspect ratio with good performance.
Code
// Resize image extension
extension UIImage {
    func resized(to size: CGSize) -> UIImage {
        return UIGraphicsImageRenderer(size: size).image { _ in
            draw(in: CGRect(origin: .zero, size: size))
        }
    }
}
// Resize image usage
let ImageA = UIImage()
var ImageB = UIImage() // must be var, since it is reassigned below
let resizedImage = ImageB.resized(to: CGSize(width: ImageA.size.width, height: ImageA.size.height))
ImageB = resizedImage
// Composite filter
let addCIFilter = CIFilter(name: "CIColorDodgeBlendMode")!
addCIFilter.setValue(CIImage(image: ImageA), forKey: kCIInputImageKey)
addCIFilter.setValue(CIImage(image: ImageB), forKey: kCIInputBackgroundImageKey)
let outputImage = addCIFilter.outputImage
2 Answers
Found a way. It’s possible to directly resize or scale the input image, which offers good performance.
Example
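A sketch of how this could look, assuming the scale-and-center transforms are applied directly to the CIImage inputs via CIImage.transformed(by:) before compositing (ImageA and ImageB are the UIImages from the question; the exact transform code is an assumption, not the answerer's original example):

```swift
import CoreImage
import UIKit

let ciA = CIImage(image: ImageA)!   // 2400x1800
var ciB = CIImage(image: ImageB)!   // 1200x900

// Scale ImageB to fit ImageA's extent while maintaining its aspect ratio.
let scale = min(ciA.extent.width / ciB.extent.width,
                ciA.extent.height / ciB.extent.height)
ciB = ciB.transformed(by: CGAffineTransform(scaleX: scale, y: scale))

// Translate ImageB so both images are center aligned.
let dx = ciA.extent.midX - ciB.extent.midX
let dy = ciA.extent.midY - ciB.extent.midY
ciB = ciB.transformed(by: CGAffineTransform(translationX: dx, y: dy))

// Composite as before, using the transformed CIImage directly.
let addCIFilter = CIFilter(name: "CIColorDodgeBlendMode")!
addCIFilter.setValue(ciA, forKey: kCIInputImageKey)
addCIFilter.setValue(ciB, forKey: kCIInputBackgroundImageKey)
let outputImage = addCIFilter.outputImage
```

Because the transforms operate on the CIImage itself, no intermediate UIImage bitmap is rendered, which is where the performance win comes from.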
Reference
.resized(to:)
This resizes an image to fit within a given size.
.scaled(to:)
This scales an image to fit within a given size while maintaining its aspect ratio.
Credit
Thanks to this Objective-C CGAffineTransform answer for the hint: https://stackoverflow.com/a/19778622
Yes, Core Image has built-in support for applying arbitrary transformations to an image at any point in the processing pipeline.
This is what a resized helper for a CIImage could look like. I highly recommend that you use CI’s mechanisms for transforming images, as this will yield better performance. When you use the UIImage APIs, a new resized image needs to be created in memory, whereas Core Image would still operate on the original image, just with transformed pixel sampling.
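A minimal sketch of such a helper, assuming an implementation built on CIImage.transformed(by:) (the method name resized(to:) and its exact semantics are assumptions):

```swift
import CoreImage

extension CIImage {
    /// Scales the receiver so its extent matches `size`. Core Image only
    /// records the transform and samples the original pixels lazily;
    /// no new bitmap is allocated up front.
    func resized(to size: CGSize) -> CIImage {
        let scaleX = size.width / extent.width
        let scaleY = size.height / extent.height
        return transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))
    }
}
```

Usage would then be, for example, CIImage(image: ImageB)?.resized(to: CGSize(width: 2400, height: 1800)) before handing the image to the blend filter.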