Sometimes we need to create images on the fly.

These images can’t be put in our Assets Catalog because, for example, they’re tailored to the current user, or for similar reasons.

If these images are compute-intensive (that is, the user would notice a delay before the picture is shown), it's better to create them once, as early as possible, and store them on the device.

Let’s walk through the whole process!

Storing a UIImage ⬇️

The storing part is quite easy.

First, we declare where we want to store the image. To do so, we create a URL that ends with the image name and format:

let imageName = "" // your image name here
let imagePath: String = "\(NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0])/\(imageName).png"
let imageUrl: URL = URL(fileURLWithPath: imagePath)

Obviously, we will need to make sure each one of our pictures gets a unique name.
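One simple way to guarantee uniqueness is to derive the name from a stable identifier. A minimal sketch (the `userID` value here is a hypothetical example, not part of the original post):

```swift
// Sketch: build a unique, filesystem-safe image name
// from a stable identifier such as a user ID.
let userID = "42" // hypothetical identifier
let imageName = "avatar-\(userID)"
// produces files like "avatar-42.png", one per user
```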

Second, we do the actual storing:

let newImage: UIImage = // create your UIImage here
try? newImage.pngData()?.write(to: imageUrl)

Loading a UIImage ⬆️

This is also super easy. Again, we first declare where the image is stored (exactly the same code snippet as above):

let imageName = "" // your image name here
let imagePath: String = "\(NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0])/\(imageName).png"
let imageUrl: URL = URL(fileURLWithPath: imagePath)

Second, we do the actual loading:

guard
  FileManager.default.fileExists(atPath: imagePath),
  let imageData: Data = try? Data(contentsOf: imageUrl),
  let image: UIImage = UIImage(data: imageData)
else {
  return // No image found!
}

The Catch 😱

If our device were a non-retina one, the code above would work as promised.

However, if you run it on an iPhone 7, for example, the loaded image will be twice as big as the original (on both axes)!

How can that be? Well… we're working with UIImage!

We’re storing UIImages, which are made out of points, as PNGs, which are made out of pixels!

Quick reminder: points are a resolution-independent measurement.

To make our code work regardless of the device pixel density, we must handle each possible correspondence (a.k.a. scale factor) between points and pixels. So how do we handle all of this variety?
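To see the mismatch concretely, here is a sketch (assuming UIKit and a 2x device) that renders a 100×100-point image and reloads it from its PNG data:

```swift
import UIKit

// Render a 100×100-point image; UIGraphicsImageRenderer
// defaults to the main screen's scale (2.0 on a 2x device).
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 100, height: 100))
let original = renderer.image { context in
  UIColor.red.setFill()
  context.fill(CGRect(x: 0, y: 0, width: 100, height: 100))
}
// original.size == 100×100 points, original.scale == 2.0

let data = original.pngData()!      // a 200×200-pixel PNG on a 2x device
let reloaded = UIImage(data: data)! // scale defaults to 1.0
// reloaded.size == 200×200 points: twice as big on each axis!
```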

The solution couldn’t be any simpler: load the stored image with a scale based on the current device screen scale factor.

let image: UIImage = UIImage(data: imageData, scale: UIScreen.main.scale)

That’s it: now we can create, store, and load as many images as we want, and no matter which device they’re displayed on, they will always render properly! 🎉

Code Snippet

As with every Swift post, it wouldn’t be right to end without introducing at least one extension that makes everybody’s life simpler. Without further ado:

extension UIImage {
  static func load(image imageName: String) -> UIImage {
    // declare the image location
    let imagePath: String = "\(NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0])/\(imageName).png"
    let imageUrl: URL = URL(fileURLWithPath: imagePath)

    // check whether the image is stored already
    if FileManager.default.fileExists(atPath: imagePath),
       let imageData: Data = try? Data(contentsOf: imageUrl),
       let image: UIImage = UIImage(data: imageData, scale: UIScreen.main.scale) {
      return image
    }

    // the image has not been created yet: create it, store it, return it
    let newImage: UIImage = // create your UIImage here
    try? newImage.pngData()?.write(to: imageUrl)
    return newImage
  }
}

Thanks to this extension we don’t have to worry about creating, storing, or loading images: we just call UIImage.load(image:) and we’re done!
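For example, displaying a stored picture becomes a one-liner (the "user-avatar" name and the `imageView` outlet here are hypothetical, for illustration only):

```swift
// Load (or create-and-store, on first run) the image, then display it.
let avatar = UIImage.load(image: "user-avatar")
imageView.image = avatar
```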

Conclusions

Working with UIImage instances and image files is easy; however, we must pay close attention when our pictures move from one world to the other!

Happy coding!

⭐⭐⭐⭐⭐