iOS Quickstart
Getting started with the Perfectly Clear SDK in an iOS application
This guide walks you through setting up PFC_iOS_Framework.xcframework in an Xcode project and correcting your first image using AI scene detection.
Add the framework
Option A — CocoaPods (recommended)
Add the pod to your Podfile and run pod install:
```ruby
target 'MyApp' do
  use_frameworks!
  pod 'PFC_iOS_Framework', :path => './PFC_iOS_Framework.xcframework'
end
```

Option B — Manual xcframework embed
Drag PFC_iOS_Framework.xcframework into your Xcode project and select Embed & Sign in the target's Frameworks, Libraries, and Embedded Content settings.
Add model and resource files
The engine requires several model files (.pnn) and a creative looks bundle (.looks). Copy them into your Xcode target so they are included in the app bundle.
Bundle with the app (static, never change at runtime):
| File | Purpose |
|---|---|
| aicolor_coreml.pnn | AI color correction (CoreML) |
| fdfront_tflite.pnn | Front-facing face detection (TFLite) |
| fdback_tflite.pnn | Back-facing face detection (TFLite) |
| facemesh_tflite.pnn | Face mesh model |
| faceshape_tflite.pnn | Face blend-shape model |
| creative_filters.looks | Creative look definitions |
| sd_universal.preset | Scene presets for the Universal workflow |
Downloaded at runtime (stored in Documents/):
| File | Purpose |
|---|---|
| sd_universal.pnn | Scene detection model (Universal) |
| skintone_universal.pnn | Skin tone model (Universal) |
Contact EyeQ to receive the model files and download URLs for your distribution.
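Once you have the download URLs, the runtime models can be fetched once and cached in Documents/. The sketch below is one way to do this with URLSession; the function name and the baseURL parameter are illustrative placeholders, not part of the SDK:

```swift
import Foundation

/// Downloads a runtime model into Documents/ unless a cached copy exists.
/// `baseURL` stands in for the download URL EyeQ provides.
func ensureModelDownloaded(named fileName: String,
                           from baseURL: URL,
                           completion: @escaping (Result<URL, Error>) -> Void) {
    let documentsDir = FileManager.default.urls(for: .documentDirectory,
                                                in: .userDomainMask)[0]
    let destination = documentsDir.appendingPathComponent(fileName)

    // Reuse the cached copy if it is already on disk.
    if FileManager.default.fileExists(atPath: destination.path) {
        completion(.success(destination))
        return
    }

    let remoteUrl = baseURL.appendingPathComponent(fileName)
    let task = URLSession.shared.downloadTask(with: remoteUrl) { tempUrl, _, error in
        if let error = error {
            completion(.failure(error))
            return
        }
        guard let tempUrl = tempUrl else { return }
        do {
            // Move the temporary file into Documents/ where the engine expects it.
            try FileManager.default.moveItem(at: tempUrl, to: destination)
            completion(.success(destination))
        } catch {
            completion(.failure(error))
        }
    }
    task.resume()
}
```

Call this for both sd_universal.pnn and skintone_universal.pnn, and construct the engine only after both completions succeed.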
Initialize the engine
Create a PCSEngine instance during app startup or before the first image correction. The engine is expensive to construct — create it once and reuse it.
```swift
import PFC_iOS_Framework

// Bundle model URLs (shipped with the app)
let aiColorModelUrl = Bundle.main.url(forResource: "aicolor_coreml", withExtension: "pnn")!
let fdFrontModelUrl = Bundle.main.url(forResource: "fdfront_tflite", withExtension: "pnn")!
let fdBackModelUrl = Bundle.main.url(forResource: "fdback_tflite", withExtension: "pnn")!
let facemeshModelUrl = Bundle.main.url(forResource: "facemesh_tflite", withExtension: "pnn")!
let blendshapeModelUrl = Bundle.main.url(forResource: "faceshape_tflite", withExtension: "pnn")!
let looksUrl = Bundle.main.url(forResource: "creative_filters", withExtension: "looks")!
let presetsUrl = Bundle.main.url(forResource: "sd_universal", withExtension: "preset")!

// Runtime-downloaded model URLs
let documentsDir = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
let sceneModelUrl = documentsDir.appendingPathComponent("sd_universal.pnn")
let skintoneModelUrl = documentsDir.appendingPathComponent("skintone_universal.pnn")

let engine = PCSEngine(
    apiKey: "YOUR_API_KEY",
    certificate: "YOUR_CERTIFICATE_STRING",
    sceneDetectionModelUrl: sceneModelUrl,
    skintoneModelUrl: skintoneModelUrl,
    aiColorModelUrl: aiColorModelUrl,
    aiFDFrontModel: fdFrontModelUrl,
    aiFDBackModel: fdBackModelUrl,
    aiFacemeshModel: facemeshModelUrl,
    aiBlendshapeModel: blendshapeModelUrl,
    scenePresetsUrl: presetsUrl,
    addonLooksUrl: looksUrl
)
```

PCSEngine is not thread-safe. Serialize all calls with a lock (e.g., NSLock) if the engine is shared across threads.
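One way to serialize access is a small wrapper that takes the lock around every engine call. A minimal sketch, assuming the engine instance created above (the wrapper type is illustrative, not part of the SDK):

```swift
import Foundation

/// Illustrative wrapper that serializes all access to a shared PCSEngine.
final class SerializedEngine {
    private let engine: PCSEngine
    private let lock = NSLock()

    init(engine: PCSEngine) {
        self.engine = engine
    }

    /// Runs `work` with exclusive access to the engine.
    func withEngine<T>(_ work: (PCSEngine) throws -> T) rethrows -> T {
        lock.lock()
        defer { lock.unlock() }
        return try work(engine)
    }
}
```

Callers then route every engine operation through the wrapper, e.g. `shared.withEngine { $0.detectSceneForImage(pcsImage) }`, so detection and correction from different queues never overlap.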
Prepare the image
The SDK requires images in 8-bit sRGB format (bitmapInfo == 16385, i.e. premultipliedLast | byteOrder32Big). Always re-render through UIGraphicsImageRenderer before creating PCSImage to guarantee the correct format, especially for HDR or Photos-library images.
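When PCSImage creation fails, inspecting the backing CGImage usually shows which part of the requirement is violated. A small diagnostic sketch (the function name is illustrative):

```swift
import UIKit

/// Logs the pixel-format properties that determine SDK compatibility.
func logImageFormat(_ image: UIImage) {
    guard let cgImage = image.cgImage else {
        print("No backing CGImage; CIImage-backed images must be rendered first")
        return
    }
    print("bitsPerComponent:", cgImage.bitsPerComponent)  // expected: 8
    print("bitmapInfo:", cgImage.bitmapInfo.rawValue)     // expected: 16385
    if let name = cgImage.colorSpace?.name {
        print("colorSpace:", name)                        // expected: sRGB
    }
}
```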
```swift
func prepareSRGBImage(_ source: UIImage, maxDimension: CGFloat = 2048) -> UIImage {
    // Compute target size, capping at maxDimension
    let size = source.size
    let scale = min(1.0, maxDimension / max(size.width, size.height))
    let targetSize = CGSize(width: (size.width * scale).rounded(),
                            height: (size.height * scale).rounded())
    // Force 8-bit sRGB — required by the SDK
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1
    format.preferredRange = .standard
    return UIGraphicsImageRenderer(size: targetSize, format: format).image { _ in
        source.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}
```

Then wrap the result in PCSImage:
```swift
let workImage = prepareSRGBImage(originalUIImage)
guard let pcsImage = PCSImage(image: workImage) else {
    // Image format not supported — check bitmapInfo and colorSpace
    return
}
```

Detect scene and apply correction
Run scene detection to get an optimized PCSParam for the image content, then apply the correction.
```swift
// Scene detection — run on a background thread
DispatchQueue.global(qos: .userInitiated).async {
    guard let result = engine.detectSceneForImage(pcsImage) else {
        // Detection failed — fall back to a default PCSParam
        return
    }
    // Build a param from the detected scene
    let param = PCSParam(sceneLabel: result.sceneLabel, usingEngine: engine)
    // Always pass a copy — the SDK can mutate the param
    let paramCopy = param.copy() as! PCSParam
    // Apply correction — reuse the profile from the same detectSceneForImage call
    var errorCode: Int32 = 0
    engine.applyParam(paramCopy, toImage: pcsImage, profile: result.profile, errorCode: &errorCode)
    if errorCode == 0 {
        // Extract the corrected UIImage on the same thread
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let correctedImage = pcsImage.generateImage(withColorSpace: colorSpace)
        DispatchQueue.main.async {
            imageView.image = correctedImage
        }
    }
}
```

Always pass the PCSProfile received from detectSceneForImage back into applyParam. The profile encodes scene-specific color characteristics calibrated for that exact image.
Complete example
The following snippet shows the full pipeline in a single function:
```swift
import UIKit
import PFC_iOS_Framework

func correctImage(_ source: UIImage, using engine: PCSEngine, completion: @escaping (UIImage?) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        // 1. Prepare: cap size and ensure 8-bit sRGB
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1
        format.preferredRange = .standard
        let size = source.size
        let scale = min(1.0, 2048 / max(size.width, size.height))
        let targetSize = CGSize(width: (size.width * scale).rounded(),
                                height: (size.height * scale).rounded())
        let workImage = UIGraphicsImageRenderer(size: targetSize, format: format).image { _ in
            source.draw(in: CGRect(origin: .zero, size: targetSize))
        }
        // 2. Wrap in PCSImage
        guard let pcsImage = PCSImage(image: workImage) else {
            DispatchQueue.main.async { completion(nil) }
            return
        }
        // 3. Detect scene
        guard let result = engine.detectSceneForImage(pcsImage) else {
            DispatchQueue.main.async { completion(nil) }
            return
        }
        // 4. Build param and apply
        let param = PCSParam(sceneLabel: result.sceneLabel, usingEngine: engine)
        let paramCopy = param.copy() as! PCSParam
        var errorCode: Int32 = 0
        engine.applyParam(paramCopy, toImage: pcsImage, profile: result.profile, errorCode: &errorCode)
        // 5. Extract result
        let corrected = errorCode == 0
            ? pcsImage.generateImage(withColorSpace: CGColorSpaceCreateDeviceRGB())
            : nil
        DispatchQueue.main.async { completion(corrected) }
    }
}
```
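Calling the pipeline from a view controller might look like the following sketch; the engine property and imageView outlet are assumptions about your app, not part of the SDK:

```swift
// Hypothetical call site: `engine` is a long-lived PCSEngine created at startup,
// and `imageView` is an outlet on the hosting view controller.
@IBAction func correctTapped(_ sender: UIButton) {
    guard let original = imageView.image else { return }
    sender.isEnabled = false
    correctImage(original, using: engine) { [weak self] corrected in
        sender.isEnabled = true
        // Fall back to the original image if correction failed.
        self?.imageView.image = corrected ?? original
    }
}
```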