Using the iOS SDK

In-depth guide to using the Perfectly Clear iOS SDK — thread safety, image format, scene detection, and memory management

The iOS SDK wraps the Perfectly Clear correction engine in a set of Objective-C classes (PCSEngine, PCSImage, PCSParam, PCSProfile, PCSSceneDetectionResult) that are fully accessible from Swift. This page covers the critical integration patterns you need to understand before shipping.

Core objects

The SDK revolves around five classes. Every image correction call involves all of them.

Class | Description
PCSEngine | The correction engine. Create once per workflow; not thread-safe
PCSImage | Image input/output. Created from UIImage; thread-affine
PCSParam | All correction parameters — strength, face tools, color, LUTs
PCSProfile | Opaque color profile from detection; tied to its engine
PCSSceneDetectionResult | Result of scene detection — label and profile

Thread safety

PCSEngine is not thread-safe. All calls must be serialized. The recommended approach is a dedicated wrapper that holds an NSLock:

final class ImageEngineWrapper {
    let engine: PCSEngine   // non-private so callers can build PCSParam(sceneLabel:usingEngine:)
    private let lock = NSLock()

    init(engine: PCSEngine) {
        self.engine = engine
    }

    func detectScene(for image: PCSImage) -> PCSSceneDetectionResult? {
        lock.lock()
        defer { lock.unlock() }
        return engine.detectSceneForImage(image)
    }

    func apply(_ param: PCSParam, to image: PCSImage, profile: PCSProfile?) -> Int32 {
        lock.lock()
        defer { lock.unlock() }
        var errorCode: Int32 = 0
        engine.applyParam(param, toImage: image, profile: profile, errorCode: &errorCode)
        return errorCode
    }
}

PCSImage is thread-affine: create it and consume it on the same thread. Do not create a PCSImage on the main thread and then pass it to a background queue for applyParam.

If your app supports workflow switching (e.g., Universal, Pro) you will create a new PCSEngine instance per workflow. In-flight operations that captured a reference to the old engine must be guarded — check that your wrapper reference still equals the current singleton before proceeding inside the lock.
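One way to implement that guard is a small generic holder, sketched below. `EngineGuard` is a hypothetical app-side type, not part of the SDK: it compares the engine a task captured against the current engine by identity, inside the lock, and silently drops stale work.

```swift
import Foundation

// Hypothetical app-side guard (not an SDK type). Work that captured an engine
// reference runs only if that engine is still the current one.
final class EngineGuard<Engine: AnyObject> {
    private let lock = NSLock()
    private var current: Engine

    init(current: Engine) {
        self.current = current
    }

    // Called on workflow switch with the freshly created engine.
    func replace(with engine: Engine) {
        lock.lock(); defer { lock.unlock() }
        current = engine
    }

    // Runs `work` only while `captured` is still current; returns nil for stale work.
    func withEngine<T>(_ captured: Engine, _ work: (Engine) -> T) -> T? {
        lock.lock(); defer { lock.unlock() }
        guard captured === current else { return nil }   // engine was replaced: drop
        return work(captured)
    }
}
```

The identity check (`===`) is deliberate: a replaced engine is a different object, so any closure that captured the old reference simply returns nil instead of calling into a torn-down engine.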

Image format requirements

The SDK reads raw pixel bytes and requires a precise bitmap layout. Passing an incompatible image causes wrong scene detection, PCSImage(image:) returning nil, or a crash inside the C++ engine.

Required format

Property | Required value
bitsPerComponent | 8
bitsPerPixel | 32
Color space | sRGB
bitmapInfo raw value | 16385 (alphaFirst | byteOrder32Big)

Why HDR and wide-gamut images fail

iPhones capture in Display P3 wide-gamut. Images delivered from PHImageManager with deliveryMode: .highQualityFormat may have bitsPerComponent = 16 and a P3 color space. The SDK reads the raw bytes, so mismatched formats produce incorrect pixel values, wrong scene classification, and incorrect corrections.

The fix — always re-render before passing to PCSImage

func makeSRGBImage(_ source: UIImage, maxDimension: CGFloat) -> UIImage {
    let size = source.size
    let scale = min(1.0, maxDimension / max(size.width, size.height))
    let targetSize = CGSize(width: (size.width * scale).rounded(),
                            height: (size.height * scale).rounded())

    let format = UIGraphicsImageRendererFormat()
    format.scale = 1
    format.preferredRange = .standard   // forces 8-bit sRGB

    return UIGraphicsImageRenderer(size: targetSize, format: format).image { _ in
        source.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}

Apply this transform before every PCSImage(image:) call — for images from the photo library, file URLs, the camera, and any other source.

File-backed UIImage instances (loaded from a URL without decoding into memory) also fail. The UIGraphicsImageRenderer pass decodes them into a fresh in-memory bitmap, fixing both problems in one step.
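To catch incompatible inputs early during development, you can compare a CGImage's properties against the required-format table above. The helper below is a hypothetical app-side check, not an SDK API; it takes plain values so the rule stays explicit, and in app code you would feed it `cgImage.bitsPerComponent`, `cgImage.bitsPerPixel`, `cgImage.bitmapInfo.rawValue`, and the color-space name.

```swift
import Foundation

// Hypothetical validation helper (not an SDK API). Returns true only for the
// exact bitmap layout the engine expects, per the table above.
func isEngineCompatible(bitsPerComponent: Int,
                        bitsPerPixel: Int,
                        bitmapInfoRaw: UInt32,
                        colorSpaceName: String) -> Bool {
    bitsPerComponent == 8
        && bitsPerPixel == 32
        && bitmapInfoRaw == 16385          // alphaFirst | byteOrder32Big
        && colorSpaceName.contains("sRGB") // e.g. "kCGColorSpaceSRGB"
}

// In app code (assuming `image` is the UIImage you are about to wrap):
// if let cg = image.cgImage {
//     assert(isEngineCompatible(bitsPerComponent: cg.bitsPerComponent,
//                               bitsPerPixel: cg.bitsPerPixel,
//                               bitmapInfoRaw: cg.bitmapInfo.rawValue,
//                               colorSpaceName: (cg.colorSpace?.name as String?) ?? ""))
// }
```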

Scene detection flow

Scene detection classifies the image content using on-device CoreML and TFLite models and returns an optimized set of correction parameters for that scene. See detectSceneForImage for the full method signature.

// Must run on a background thread — blocks for the full classification pass
DispatchQueue.global(qos: .userInitiated).async {
    // 1. Prepare image (see format requirements above)
    let workImage = makeSRGBImage(uiImage, maxDimension: 2048)

    // 2. Wrap in PCSImage — on this same thread
    guard let pcsImage = PCSImage(image: workImage) else { return }

    // 3. Detect — serialized inside your engine wrapper
    guard let result = engineWrapper.detectScene(for: pcsImage) else { return }

    // result.sceneLabel — integer scene ID (see scene catalogue below)
    // result.profile    — opaque color profile, pair with this image only

    // 4. Build param from detected scene
    let param = PCSParam(sceneLabel: result.sceneLabel, usingEngine: engineWrapper.engine)

    // 5. Apply (see next section)
}

Scene catalogue

Scenes are identified by integer labels. Common scenes across AI Models:

Scene | Label | Workflow
Auto — People | 1 | Universal, Pro
Newborn | 2 | Universal, Pro
Group Portraits | 7 | Universal, Pro
White Backgrounds | 8 | Universal, Pro
People at Night | 9 | Universal, Pro
Auto — Landscape | 60 | Universal, Pro
Animals | 57 | Universal, Pro
Food & Drinks | 58 | Universal, Pro
Flowers | 59 | Universal, Pro
Underwater | 91 | Universal, Pro
Black & White | 100 | Universal, Pro
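If you switch on these labels in app code, a raw-value enum keeps the magic numbers in one place. The enum below is an app-side convenience that only mirrors the catalogue above; it is not an SDK type.

```swift
// App-side convenience mirroring the scene catalogue above (not an SDK type).
enum SceneLabel: Int32 {
    case autoPeople = 1
    case newborn = 2
    case groupPortraits = 7
    case whiteBackgrounds = 8
    case peopleAtNight = 9
    case animals = 57
    case foodAndDrinks = 58
    case flowers = 59
    case autoLandscape = 60
    case underwater = 91
    case blackAndWhite = 100
}

// Failable init degrades gracefully when the SDK returns a label
// that is not in this catalogue:
// let scene = SceneLabel(rawValue: result.sceneLabel)   // nil for unknown labels
```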

Applying corrections

The apply step runs the engine against the prepared PCSImage using the parameters and profile from detection. See applyParam for the full method signature.

// Always copy param before passing to the SDK — the engine can mutate it
let paramCopy = param.copy() as! PCSParam

var errorCode: Int32 = 0
engine.applyParam(paramCopy, toImage: pcsImage, profile: result.profile, errorCode: &errorCode)

if errorCode == 0 {
    // Extract the corrected image on the same thread as pcsImage was created
    let corrected = pcsImage.generateImage(withColorSpace: CGColorSpaceCreateDeviceRGB())
    // corrected is a new UIImage with corrections applied
}

Always pass a copy of PCSParam to applyParam. The SDK writes computed values back into the param (for example, the resolved iStrength). If you pass the original, your stored parameters will drift over successive calls.
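One way to make the copy rule unforgettable is to route every apply through a tiny generic helper. `applyWithCopy` is a hypothetical app-side function, not an SDK API; `makeCopy` abstracts PCSParam's `copy()` so the helper itself stays testable.

```swift
// Hypothetical convenience (not an SDK API): centralizes the copy-before-apply
// rule so no call site can forget it.
func applyWithCopy<Param, Result>(_ param: Param,
                                  makeCopy: (Param) -> Param,
                                  apply: (Param) -> Result) -> Result {
    // Only the throwaway copy is handed to the engine; `param` stays pristine.
    apply(makeCopy(param))
}

// Usage with the SDK (assuming `engine`, `pcsImage`, and `profile` from above):
// let code = applyWithCopy(param,
//                          makeCopy: { $0.copy() as! PCSParam },
//                          apply: { p -> Int32 in
//                              var err: Int32 = 0
//                              engine.applyParam(p, toImage: pcsImage,
//                                                profile: profile, errorCode: &err)
//                              return err
//                          })
```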

Separating calc and apply

If you need to apply the same analysis result multiple times (for example, when a slider changes strength), retain the PCSProfile and call applyParam again with a new param:

// Detection is expensive — do it once (nil means detection failed)
let detectionResult = engine.detectSceneForImage(pcsImage)

// Slider changes strength — reuse the profile
func applyWithStrength(_ strength: Int) {
    guard let detectionResult else { return }
    let param = PCSParam(sceneLabel: detectionResult.sceneLabel, usingEngine: engine)
    let adjusted = param.withStrength(strength)
    let copy = adjusted.copy() as! PCSParam
    var errorCode: Int32 = 0
    engine.applyParam(copy, toImage: pcsImage, profile: detectionResult.profile, errorCode: &errorCode)
}

PCSProfile is tied to the engine that created it. Do not cache a profile across engine replacements (workflow switches).

Parameter management

PCSParam holds all correction parameters. The default initializer PCSParam() represents the identity state — no correction applied.

// No correction (identity)
let identity = PCSParam()

// Scene-optimized param
let sceneParam = PCSParam(sceneLabel: result.sceneLabel, usingEngine: engine)

// Adjust strength
let adjusted = sceneParam.withStrength(80)

// Check if no correction has been selected
if param == PCSParam() {
    print("No correction applied")
}

Key correction parameters

Property | Type | Description
iStrength | Int | Overall correction strength (0–100+)
bUseAutomaticStrengthSelection | Bool | Let the SDK choose strength automatically
iSkintoneType | Int | Skin tone category index (1–10)
iSmoothLevel | Int | Skin smoothing amount
bSmooth | Bool | Enable skin smoothing
iEyeCirc | Int | Eye circle correction amount
bEyeCirc | Bool | Enable eye circle correction
iCatchLight | Int | Catchlight enhancement amount
iTeethLevel | Int | Teeth whitening amount
bTeeth | Bool | Enable teeth whitening
iColorVibrancy | Int | Color vibrancy
iContrast | Int | Contrast adjustment
lutOutputGuid | String? | Creative look GUID (nil = no look)
lutOutputStrength | Int | Creative look strength (0–100)
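As an illustration of the face tools, the fragment below enables skin smoothing, teeth whitening, and eye circle correction on a scene param. It is a sketch only: the specific values are arbitrary, and it assumes the `result` and `engine` variables from the scene detection flow above.

```swift
// Sketch only: configure face tools using the properties listed above.
let param = PCSParam(sceneLabel: result.sceneLabel, usingEngine: engine)

param.bSmooth = true        // enable skin smoothing
param.iSmoothLevel = 40     // moderate smoothing

param.bTeeth = true         // enable teeth whitening
param.iTeethLevel = 30

param.bEyeCirc = true       // enable eye circle correction
param.iEyeCirc = 50

// As always, pass a copy to applyParam (see Applying corrections).
let copy = param.copy() as! PCSParam
```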

Memory management

The SDK's face-aware correction internals can allocate approximately 2.3 GB of temporary memory per apply call on large images. Apply two caps:

Preview cap — 2048 px

Scale the image down to 2048 px on its longest axis before scene detection and preview rendering:

let maxPreviewDimension: CGFloat = 2048
let workImage = makeSRGBImage(uiImage, maxDimension: maxPreviewDimension)

Export cap — 4096 px / 12 megapixels

For full-resolution export, apply a stricter cap and release cached images before calling apply:

let maxExportDimension: CGFloat = 4096
let maxExportPixels: CGFloat = 12_000_000   // 12 megapixels, expressed in pixels

let w = uiImage.size.width, h = uiImage.size.height
let scaleDim = min(1.0, maxExportDimension / max(w, h))
let scaleMP  = min(1.0, sqrt(maxExportPixels / (w * h)))
let scale    = min(scaleDim, scaleMP)

let exportImage = makeSRGBImage(uiImage, maxDimension: max(w, h) * scale)

Autorelease pools

Wrap every PCSImage + applyParam + generateImage sequence in an autoreleasepool to drain SDK temporaries immediately:

let corrected: UIImage? = autoreleasepool {
    guard let pcsImage = PCSImage(image: workImage) else { return nil }
    var errorCode: Int32 = 0
    engine.applyParam(paramCopy, toImage: pcsImage, profile: profile, errorCode: &errorCode)
    return errorCode == 0 ? pcsImage.generateImage(withColorSpace: CGColorSpaceCreateDeviceRGB()) : nil
}

Common patterns

Batch processing — reuse the engine

// Create the engine once
let engine = PCSEngine(apiKey: apiKey, certificate: cert, /* ... */)
let wrapper = ImageEngineWrapper(engine: engine)

// Process multiple images serially (or in parallel with multiple engines)
for imageUrl in imageUrls {
    guard let source = UIImage(contentsOfFile: imageUrl.path) else { continue }
    correctImage(source, using: wrapper) { result in
        // save result
    }
}

Creative looks

Apply a creative look by setting lutOutputGuid on the param before calling applyParam:

let param = PCSParam(sceneLabel: result.sceneLabel, usingEngine: engine)
param.lutOutputGuid = "LOOK_GUID_FROM_SDK"   // GUID from PCSEngine.getSceneDescriptions()
param.lutOutputStrength = 80                 // 0–100
param.iOutLUTcontrast = 0
param.iOutLUTsaturation = 0

let copy = param.copy() as! PCSParam
var errorCode: Int32 = 0
engine.applyParam(copy, toImage: pcsImage, profile: result.profile, errorCode: &errorCode)

Pitfalls and solutions

PCSImage(image:) returns nil

The image is in the wrong format (16-bit, wide-gamut, or file-backed). Re-render through UIGraphicsImageRenderer(preferredRange: .standard) as shown in Image format requirements.

Crash (EXC_BAD_ACCESS) inside the SDK

Same format issue, or PCSImage was created on one thread and used on another (thread-affinity violation). Ensure both creation and consumption happen on the same thread.
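During development you can surface thread-affinity violations deterministically with a small debug wrapper. `ThreadAffine` is a hypothetical app-side type, not part of the SDK: it records the thread that created the wrapped value (for example a PCSImage) and traps on access from any other thread, turning an intermittent EXC_BAD_ACCESS into an immediate, diagnosable failure.

```swift
import Foundation

// Hypothetical debug-only wrapper (not an SDK type). Traps cross-thread access
// to a thread-affine value such as PCSImage.
final class ThreadAffine<Value> {
    private let value: Value
    private let owner: Thread

    init(_ value: Value) {
        self.value = value
        self.owner = Thread.current   // remember the creating thread
    }

    // Returns the value, trapping if called from a different thread.
    func access(file: StaticString = #file, line: UInt = #line) -> Value {
        precondition(Thread.current === owner,
                     "thread-affinity violation", file: file, line: line)
        return value
    }
}

// Usage sketch: wrap at creation, unwrap at every use on the same queue.
// let box = ThreadAffine(PCSImage(image: workImage)!)
// ... later, on the same thread ...
// let pcsImage = box.access()
```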

Correction strength is always 0 after apply

You forgot to copy the param: param.copy() as! PCSParam. The SDK wrote 0 back into the original — pass a copy on every call.

Wrong scene detected, incorrect corrections

Image was captured in Display P3 or delivered as a 16-bit extended range image. Apply the sRGB re-render fix before creating PCSImage.

applyParam has no visible effect after workflow switch

The PCSProfile you cached was tied to the old engine. After replacing the engine, run scene detection again to get a fresh profile from the new engine.
