I’m trying to recreate Apple’s festival lights image in SwiftUI (screenshot from Apple India’s website). Expected result:

[Image: Apple India Diwali logo]

Here’s what I’ve managed to achieve so far:

[Image: my current result]

MY UNDERSTANDING SO FAR: Images are not Shapes, so we can't stroke their borders, but I found that the shadow() modifier places a shadow along an image's visible edges just fine. So I need a way to customize that shadow somehow and understand how it works.
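For reference, a minimal example of the shadow() behavior I mean (the shadow follows the symbol's opaque pixels, not its rectangular frame):

    Image(systemName: "applelogo")
        .font(.system(size: 120))
        .foregroundColor(.black)
        .shadow(color: .orange, radius: 6)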

WHAT I'VE TRIED SO FAR: Besides the code above, I tried, unsuccessfully, to convert a given SF Symbol to a Shape using the Vision framework's contour detection, based on my understanding of this article: https://www.iosdevie.com/p/new-in-ios-14-vision-contour-detection

Can someone please guide me on how I would go about doing this, preferably using SF Symbols only?

2 Answers


  1. Chosen as BEST ANSWER

    Looks like the Vision contour detection isn't a bad approach after all. I was just missing a few things, as helpfully pointed out by @DonMag. Here's my final answer using SwiftUI, in case someone's interested.

    First, we create an InsettableShape:

import SwiftUI
import UIKit
import Vision

struct MKSymbolShape: InsettableShape {
        var insetAmount = 0.0
        let systemName: String
        
        var trimmedImage: UIImage {
            let cfg = UIImage.SymbolConfiguration(pointSize: 256.0)
            // get the symbol
            guard let imgA = UIImage(systemName: systemName, withConfiguration: cfg)?.withTintColor(.black, renderingMode: .alwaysOriginal) else {
                fatalError("Could not load SF Symbol: (systemName)!")
            }
            
            // we want to "strip" the bounding box empty space
            // get a cgRef from imgA
            guard let cgRef = imgA.cgImage else {
                fatalError("Could not get cgImage!")
            }
            // create imgB from the cgRef
            let imgB = UIImage(cgImage: cgRef, scale: imgA.scale, orientation: imgA.imageOrientation)
                .withTintColor(.black, renderingMode: .alwaysOriginal)
            
            // now render it on a white background
            let resultImage = UIGraphicsImageRenderer(size: imgB.size).image { ctx in
                UIColor.white.setFill()
                ctx.fill(CGRect(origin: .zero, size: imgB.size))
                imgB.draw(at: .zero)
            }
            
            return resultImage
        }
        
        func path(in rect: CGRect) -> Path {
            // cgPath returned from Vision will be in rect 0,0 1.0,1.0 coordinates
            //  so we want to scale the path to our view bounds
            
            let inputImage = self.trimmedImage
            guard let cgPath = detectVisionContours(from: inputImage) else { return Path() }
            let scW: CGFloat = (rect.width - CGFloat(insetAmount)) / cgPath.boundingBox.width
            let scH: CGFloat = (rect.height - CGFloat(insetAmount)) / cgPath.boundingBox.height
            
            // we need to invert the Y-coordinate space
            var transform = CGAffineTransform.identity
                .scaledBy(x: scW, y: -scH)
                .translatedBy(x: 0.0, y: -cgPath.boundingBox.height)
            
            if let imagePath = cgPath.copy(using: &transform) {
                return Path(imagePath)
            } else {
                return Path()
            }
        }
        
        func inset(by amount: CGFloat) -> some InsettableShape {
            var shape = self
            shape.insetAmount += amount
            return shape
        }
        
    func detectVisionContours(from sourceImage: UIImage) -> CGPath? {
        guard let cgImage = sourceImage.cgImage else { return nil }
        let inputImage = CIImage(cgImage: cgImage)
        
        let contourRequest = VNDetectContoursRequest()
        contourRequest.revision = VNDetectContourRequestRevision1
        contourRequest.contrastAdjustment = 1.0
        contourRequest.maximumImageDimension = 512
        
        let requestHandler = VNImageRequestHandler(ciImage: inputImage, options: [:])
        do {
            try requestHandler.perform([contourRequest])
        } catch {
            return nil
        }
        return contourRequest.results?.first?.normalizedPath
    }
    }
    

    Then we create our main view:

    struct PreviewView: View {
        var body: some View {
            ZStack {
                LinearGradient(colors: [.black, .purple], startPoint: .top, endPoint: .bottom)
                    .edgesIgnoringSafeArea(.all)
                MKSymbolShape(systemName: "applelogo")
                    .stroke(LinearGradient(colors: [.yellow, .orange, .pink, .red], startPoint: .top, endPoint: .bottom), style: StrokeStyle(lineWidth: 8, lineCap: .round, dash: [2.0, 21.0]))
                    .aspectRatio(CGSize(width: 30, height: 36), contentMode: .fit)
                    .padding()
            }
        }
    }
    

    Final look: [Image: final result, dotted gradient Apple logo]
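    Side note: since MKSymbolShape conforms to InsettableShape, you can also stroke with .strokeBorder to keep the dashed line fully inside the view's bounds (an untested sketch, using the same style parameters as above):

    MKSymbolShape(systemName: "applelogo")
        .strokeBorder(Color.yellow, style: StrokeStyle(lineWidth: 8, lineCap: .round, dash: [2.0, 21.0]))
        .aspectRatio(CGSize(width: 30, height: 36), contentMode: .fit)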


  2. We can use the Vision framework with VNDetectContourRequestRevision1 to get a cgPath:

    func detectVisionContours(from sourceImage: UIImage) -> CGPath? {
        guard let cgImage = sourceImage.cgImage else { return nil }
        let inputImage = CIImage(cgImage: cgImage)
        
        let contourRequest = VNDetectContoursRequest()
        contourRequest.revision = VNDetectContourRequestRevision1
        contourRequest.contrastAdjustment = 1.0
        contourRequest.maximumImageDimension = 512
        
        let requestHandler = VNImageRequestHandler(ciImage: inputImage, options: [:])
        do {
            try requestHandler.perform([contourRequest])
        } catch {
            return nil
        }
        return contourRequest.results?.first?.normalizedPath
    }
    

    The path will be in a normalized (0.0 to 1.0) coordinate space, so to use it we need to scale the path to our desired size. It also uses an inverted Y-coordinate space, so we'll need to flip it as well:

        // cgPath returned from Vision will be in rect 0,0 1.0,1.0 coordinates
        //  so we want to scale the path to our view bounds
        let scW: CGFloat = targetRect.width / cgPth.boundingBox.width
        let scH: CGFloat = targetRect.height / cgPth.boundingBox.height
        
        // we need to invert the Y-coordinate space
        var transform = CGAffineTransform.identity
            .scaledBy(x: scW, y: -scH)
            .translatedBy(x: 0.0, y: -cgPth.boundingBox.height)
        
        return cgPth.copy(using: &transform)
    

    A couple of notes…

    When using UIImage(systemName: "applelogo"), we get an image with "font" characteristics – namely, empty space around the glyph. See https://stackoverflow.com/a/71743787/6257435 and https://stackoverflow.com/a/66293917/6257435 for some discussion.

    We could use that default image directly, but it makes the path scaling and translation a bit more complex.

    So, instead of this "default":

    [Image: default "applelogo" symbol, with empty bounding-box space]

    we can use a little code to "trim" the space for a more usable image:

    [Image: trimmed "applelogo" symbol]
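    In isolation, the "trim" is just re-wrapping the symbol's cgImage, which drops the font-style padding. A minimal sketch (the helper name is mine):

    import UIKit
    
    // Hypothetical helper: rebuild the symbol image from its cgImage,
    // keeping only the glyph's pixel bounds (no font-style padding)
    func trimmedSymbolImage(systemName: String, pointSize: CGFloat) -> UIImage? {
        let cfg = UIImage.SymbolConfiguration(pointSize: pointSize)
        guard let symbol = UIImage(systemName: systemName, withConfiguration: cfg)?
                .withTintColor(.black, renderingMode: .alwaysOriginal),
              let cgRef = symbol.cgImage
        else { return nil }
        return UIImage(cgImage: cgRef, scale: symbol.scale, orientation: symbol.imageOrientation)
            .withTintColor(.black, renderingMode: .alwaysOriginal)
    }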

    Then we can use the path from Vision as the path of a CAShapeLayer, along with these layer properties: .lineCap = .round / .lineWidth = 8 / .lineDashPattern = [2.0, 20.0] (for example) to get a "dotted line" stroke:

    [Image: dotted-outline stroke over the trimmed symbol]
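    As a standalone sketch, assuming someView is the host view and visionPath holds the scaled CGPath produced by the snippet above (both names are mine):

    let shapeLayer = CAShapeLayer()
    shapeLayer.path = visionPath                  // scaled path from Vision
    shapeLayer.fillColor = UIColor.clear.cgColor
    shapeLayer.strokeColor = UIColor.red.cgColor
    shapeLayer.lineCap = .round                   // round dots
    shapeLayer.lineWidth = 8
    shapeLayer.lineDashPattern = [2.0, 20.0]      // 2-pt dash, 20-pt gap
    someView.layer.addSublayer(shapeLayer)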

    Then we can use that same path on a shape layer as a mask on a gradient layer:

    [Image: dotted outline used as a gradient-layer mask, over the image]
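    And the mask step, reusing the same dashed layer style (again, someView and visionPath are assumed from the sketch above):

    let gradientLayer = CAGradientLayer()
    gradientLayer.colors = [UIColor.yellow.cgColor, UIColor.red.cgColor]
    gradientLayer.frame = someView.bounds
    
    // a mask only cares about alpha, so any opaque stroke color works
    let maskLayer = CAShapeLayer()
    maskLayer.path = visionPath
    maskLayer.fillColor = UIColor.clear.cgColor
    maskLayer.strokeColor = UIColor.white.cgColor
    maskLayer.lineCap = .round
    maskLayer.lineWidth = 8
    maskLayer.lineDashPattern = [2.0, 20.0]
    
    gradientLayer.mask = maskLayer
    someView.layer.addSublayer(gradientLayer)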

    and finally remove the image view so we see only the view with the masked gradient layer:

    [Image: masked gradient only, image view removed]

    Here’s example code to produce that:

    import UIKit
    import Vision
    
    class ViewController: UIViewController {
        
        let myOutlineView = UIView()
        let myGradientView = UIView()
        let shapeLayer = CAShapeLayer()
        let gradientLayer = CAGradientLayer()
    
        let defaultImageView = UIImageView()
        let trimmedImageView = UIImageView()
    
        var defaultImage: UIImage!
        var trimmedImage: UIImage!
        
        var visionPath: CGPath!
    
        // an information label
        let infoLabel: UILabel = {
            let v = UILabel()
            v.backgroundColor = UIColor(white: 0.95, alpha: 1.0)
            v.textAlignment = .center
            v.numberOfLines = 0
            return v
        }()
    
        override func viewDidLoad() {
            super.viewDidLoad()
            
            view.backgroundColor = .systemBlue
            
            // get the system image at 240-points (so we can get a good path from Vision)
            //  experiment with different sizes if the path doesn't appear smooth
            let cfg = UIImage.SymbolConfiguration(pointSize: 240.0)
            
            // get "applelogo" symbol
            guard let imgA = UIImage(systemName: "applelogo", withConfiguration: cfg)?.withTintColor(.black, renderingMode: .alwaysOriginal) else {
                fatalError("Could not load SF Symbol: applelogo!")
            }
            // now render it on a white background
            self.defaultImage = UIGraphicsImageRenderer(size: imgA.size).image { ctx in
                UIColor.white.setFill()
                ctx.fill(CGRect(origin: .zero, size: imgA.size))
                imgA.draw(at: .zero)
            }
    
            // we want to "strip" the bounding box empty space
            // get a cgRef from imgA
            guard let cgRef = imgA.cgImage else {
                fatalError("Could not get cgImage!")
            }
            // create imgB from the cgRef
            let imgB = UIImage(cgImage: cgRef, scale: imgA.scale, orientation: imgA.imageOrientation)
                .withTintColor(.black, renderingMode: .alwaysOriginal)
            
            // now render it on a white background
            self.trimmedImage = UIGraphicsImageRenderer(size: imgB.size).image { ctx in
                UIColor.white.setFill()
                ctx.fill(CGRect(origin: .zero, size: imgB.size))
                imgB.draw(at: .zero)
            }
    
            defaultImageView.image = defaultImage
            defaultImageView.translatesAutoresizingMaskIntoConstraints = false
            view.addSubview(defaultImageView)
    
            trimmedImageView.image = trimmedImage
            trimmedImageView.translatesAutoresizingMaskIntoConstraints = false
            view.addSubview(trimmedImageView)
            
            myOutlineView.translatesAutoresizingMaskIntoConstraints = false
            view.addSubview(myOutlineView)
            
            myGradientView.translatesAutoresizingMaskIntoConstraints = false
            view.addSubview(myGradientView)
            
            // next step button
            let btn = UIButton()
            btn.setTitle("Next Step", for: [])
            btn.setTitleColor(.white, for: .normal)
            btn.setTitleColor(.lightGray, for: .highlighted)
            btn.backgroundColor = .systemRed
            btn.layer.cornerRadius = 8
            
            btn.translatesAutoresizingMaskIntoConstraints = false
            view.addSubview(btn)
            
            infoLabel.translatesAutoresizingMaskIntoConstraints = false
            view.addSubview(infoLabel)
            
            let g = view.safeAreaLayoutGuide
            NSLayoutConstraint.activate([
                
                // inset default image view 20-points on each side
                //  height proportional to the image
                //  near the top
                defaultImageView.topAnchor.constraint(equalTo: g.topAnchor, constant: 20.0),
                defaultImageView.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 20.0),
                defaultImageView.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: -20.0),
                defaultImageView.heightAnchor.constraint(equalTo: defaultImageView.widthAnchor, multiplier: defaultImage.size.height / defaultImage.size.width),
                
                // inset trimmed image view 40-points on each side
                //  height proportional to the image
                //  centered vertically
                trimmedImageView.topAnchor.constraint(equalTo: g.topAnchor, constant: 40.0),
                trimmedImageView.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 40.0),
                trimmedImageView.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: -40.0),
                trimmedImageView.heightAnchor.constraint(equalTo: trimmedImageView.widthAnchor, multiplier: self.trimmedImage.size.height / self.trimmedImage.size.width),
                
                // add outline view on top of trimmed image view
                myOutlineView.topAnchor.constraint(equalTo: trimmedImageView.topAnchor, constant: 0.0),
                myOutlineView.leadingAnchor.constraint(equalTo: trimmedImageView.leadingAnchor, constant: 0.0),
                myOutlineView.trailingAnchor.constraint(equalTo: trimmedImageView.trailingAnchor, constant: 0.0),
                myOutlineView.bottomAnchor.constraint(equalTo: trimmedImageView.bottomAnchor, constant: 0.0),
                
                // add gradient view on top of trimmed image view
                myGradientView.topAnchor.constraint(equalTo: trimmedImageView.topAnchor, constant: 0.0),
                myGradientView.leadingAnchor.constraint(equalTo: trimmedImageView.leadingAnchor, constant: 0.0),
                myGradientView.trailingAnchor.constraint(equalTo: trimmedImageView.trailingAnchor, constant: 0.0),
                myGradientView.bottomAnchor.constraint(equalTo: trimmedImageView.bottomAnchor, constant: 0.0),
                
                // button and info label below
                btn.topAnchor.constraint(equalTo: defaultImageView.bottomAnchor, constant: 20.0),
                btn.leadingAnchor.constraint(equalTo: trimmedImageView.leadingAnchor, constant: 0.0),
                btn.trailingAnchor.constraint(equalTo: trimmedImageView.trailingAnchor, constant: 0.0),
    
                infoLabel.topAnchor.constraint(equalTo: btn.bottomAnchor, constant: 20.0),
                infoLabel.leadingAnchor.constraint(equalTo: trimmedImageView.leadingAnchor, constant: 0.0),
                infoLabel.trailingAnchor.constraint(equalTo: trimmedImageView.trailingAnchor, constant: 0.0),
                infoLabel.heightAnchor.constraint(greaterThanOrEqualToConstant: 60.0),
                
            ])
            
            // setup the shape layer
            shapeLayer.strokeColor = UIColor.red.cgColor
            shapeLayer.fillColor = UIColor.clear.cgColor
            
        // this will give us round dots for the shape layer's stroke
            shapeLayer.lineCap = .round
            shapeLayer.lineWidth = 8
            shapeLayer.lineDashPattern = [2.0, 20.0]
            
            // setup the gradient layer
            let c1: UIColor = .init(red: 0.95, green: 0.73, blue: 0.32, alpha: 1.0)
            let c2: UIColor = .init(red: 0.95, green: 0.25, blue: 0.45, alpha: 1.0)
            gradientLayer.colors = [c1.cgColor, c2.cgColor]
            
            myOutlineView.layer.addSublayer(shapeLayer)
            myGradientView.layer.addSublayer(gradientLayer)
    
            btn.addTarget(self, action: #selector(nextStep), for: .touchUpInside)
        }
        
        override func viewDidAppear(_ animated: Bool) {
            super.viewDidAppear(animated)
        
            guard let pth = pathSetup()
            else {
                fatalError("Vision could not create path")
            }
            self.visionPath = pth
            
            shapeLayer.path = pth
            
            gradientLayer.frame = myGradientView.bounds.insetBy(dx: -8.0, dy: -8.0)
            let gradMask = CAShapeLayer()
            gradMask.strokeColor = UIColor.red.cgColor
            gradMask.fillColor = UIColor.clear.cgColor
            gradMask.lineCap = .round
            gradMask.lineWidth = 8
            gradMask.lineDashPattern = [2.0, 20.0]
            
            gradMask.path = pth
            gradMask.position.x += 8.0
            gradMask.position.y += 8.0
            gradientLayer.mask = gradMask
            
            nextStep()
        }
        
        var idx: Int = -1
        
        @objc func nextStep() {
            idx += 1
            switch idx % 5 {
            case 1:
                defaultImageView.isHidden = true
                trimmedImageView.isHidden = false
                infoLabel.text = ""applelogo" system image - with trimmed empty-space bounding-box."
            case 2:
                myOutlineView.isHidden = false
                shapeLayer.opacity = 1.0
                infoLabel.text = "Dotted outline shape using Vision detected path."
            case 3:
                myOutlineView.isHidden = true
                myGradientView.isHidden = false
                infoLabel.text = "Use Dotted outline shape as a gradient layer mask."
            case 4:
                trimmedImageView.isHidden = true
                view.backgroundColor = .black
                infoLabel.text = "View by itself with Dotted outline shape as a gradient layer mask."
            default:
                view.backgroundColor = .systemBlue
                defaultImageView.isHidden = false
                trimmedImageView.isHidden = true
                myOutlineView.isHidden = true
                myGradientView.isHidden = true
                shapeLayer.opacity = 0.0
                infoLabel.text = "Default "applelogo" system image - note empty-space bounding-box."
            }
        }
    
        func pathSetup() -> CGPath? {
            // get the cgPath from the image
            guard let cgPth = detectVisionContours(from: self.trimmedImage)
            else {
                print("Failed to get path!")
                return nil
            }
            
            // cgPath returned from Vision will be in rect 0,0 1.0,1.0 coordinates
            //  so we want to scale the path to our view bounds
            let scW: CGFloat = myOutlineView.bounds.width / cgPth.boundingBox.width
            let scH: CGFloat = myOutlineView.bounds.height / cgPth.boundingBox.height
            
            // we need to invert the Y-coordinate space
            var transform = CGAffineTransform.identity
                .scaledBy(x: scW, y: -scH)
                .translatedBy(x: 0.0, y: -cgPth.boundingBox.height)
            
            return cgPth.copy(using: &transform)
        }
        
    func detectVisionContours(from sourceImage: UIImage) -> CGPath? {
        guard let cgImage = sourceImage.cgImage else { return nil }
        let inputImage = CIImage(cgImage: cgImage)
        
        let contourRequest = VNDetectContoursRequest()
        contourRequest.revision = VNDetectContourRequestRevision1
        contourRequest.contrastAdjustment = 1.0
        contourRequest.maximumImageDimension = 512
        
        let requestHandler = VNImageRequestHandler(ciImage: inputImage, options: [:])
        do {
            try requestHandler.perform([contourRequest])
        } catch {
            return nil
        }
        return contourRequest.results?.first?.normalizedPath
    }
    }
    