EyeQ Docs

Android API Reference

Complete API reference for the Android Video SDK

This reference documents all public classes, methods, and types in the Android Video SDK.

DynamicProcessor

photos.eyeq.dynamic

The main entry point for running Dynamic AI inference. Loads a bundled model, prepares buffers, and provides methods for image and video processing.

class DynamicProcessor(context: Context)
Parameters:
  context (Context): Application context, used to load the bundled model file from assets

Licensing

setLicense

Sets the license credentials required to initialize the inference engine. Must be called before init().

fun setLicense(apiKey: String, cert: String, recreate: Boolean = false)
Parameters:
  apiKey (String): API key from EyeQ (digits only)
  cert (String): Certificate string paired with the API key
  recreate (Boolean): If true, re-initializes the engine immediately with the new credentials. Default: false

checkCertificate

Validates an API key and certificate pair against the license server without affecting the currently loaded license.

fun checkCertificate(apiKey: String, cert: String): Int
Parameters:
  apiKey (String): API key to validate
  cert (String): Certificate to validate

Returns: Positive value = days remaining, -1 = expired, -2 = invalid key/certificate combination.

getDaysLeft

Returns remaining days on the current license.

fun getDaysLeft(): Int

Returns: Positive days remaining. -1 if expired or engine not initialized. -2 if key/certificate combination is invalid.
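
The status codes returned by checkCertificate(), getDaysLeft(), and init() follow the same convention, so they can be decoded with one small helper. describeLicenseStatus below is a hypothetical convenience function for illustration, not part of the SDK:

```kotlin
// Hypothetical helper (not part of the SDK) for interpreting the Int
// status code shared by checkCertificate(), getDaysLeft(), and init().
fun describeLicenseStatus(code: Int): String = when {
    code > 0 -> "valid, $code day(s) remaining"
    code == -1 -> "expired (or engine not initialized)"
    code == -2 -> "invalid key/certificate combination"
    else -> "unknown status code: $code"
}
```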

Initialization

init

Initializes the inference engine. Must be called after setLicense() and before any process calls.

fun init(): Int

Returns: Days remaining on the license. -1 if expired, -2 if key/certificate combination is invalid.

Call on a background thread. Initialization may take several seconds.
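
A minimal initialization sketch, assuming kotlinx.coroutines is available; initProcessor and its credential parameters are illustrative names, not SDK API:

```kotlin
import android.content.Context
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext
import photos.eyeq.dynamic.DynamicProcessor

// Sketch: license and initialize off the main thread.
// apiKey/cert are placeholders for real credentials.
fun initProcessor(context: Context, scope: CoroutineScope, apiKey: String, cert: String) {
    scope.launch(Dispatchers.Default) {
        val processor = DynamicProcessor(context.applicationContext)
        processor.setLicense(apiKey, cert)
        val daysLeft = processor.init()  // may take several seconds
        withContext(Dispatchers.Main) {
            when {
                daysLeft > 0 -> { /* ready: processor.isInitialized is now true */ }
                daysLeft == -1 -> { /* license expired */ }
                else -> { /* invalid key/certificate combination */ }
            }
        }
    }
}
```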

isInitialized

Whether the processor has been successfully initialized via init(). Returns false until init() completes, or after a failed initialization.

var isInitialized: Boolean

Image processing

processImage

Processes a bitmap through the full inference pipeline and returns the result as a new Bitmap. Runs inference and immediately applies the Dynamic effect via an offscreen GL render. For preview use cases, prefer processImagePreview() which is significantly faster.

fun processImage(bitmap: Bitmap, strength: Float = 1.0f): Bitmap
Parameters:
  bitmap (Bitmap): Source image. Not recycled by this method
  strength (Float): Correction intensity (0.0–1.0). Default: 1.0

Returns: New Bitmap with the Dynamic effect applied at original resolution.

processImagePreview

Processes a bitmap through inference and returns raw DynamicOutputs. Faster than processImage() because it skips the offscreen GL render step. Use the returned outputs with DynamicView.setDynamicOutputs() to display the effect.

fun processImagePreview(bitmap: Bitmap, strength: Float = 1.0f): DynamicOutputs
Parameters:
  bitmap (Bitmap): Source image. Not recycled by this method
  strength (Float): Correction intensity (0.0–1.0). Default: 1.0

Returns: DynamicOutputs for use with DynamicView.
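
A sketch of the two image paths side by side, assuming an already-initialized processor; the function names and the 0.8 strength are illustrative:

```kotlin
import android.graphics.Bitmap
import photos.eyeq.dynamic.DynamicProcessor

// Sketch, assuming `processor` was initialized via setLicense() + init().

// Full-resolution export path: inference + offscreen GL render, new Bitmap.
fun exportCorrected(processor: DynamicProcessor, source: Bitmap): Bitmap =
    processor.processImage(source, strength = 0.8f)

// Preview path: inference only, no GL render; display the result
// by passing it to DynamicView.setDynamicOutputs().
fun previewOutputs(processor: DynamicProcessor, source: Bitmap) =
    processor.processImagePreview(source, strength = 0.8f)
```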

Video processing

processVideoFrame

Processes a single video frame through the inference pipeline. Frames are skipped according to the interval set in setDeflickerParams(). Returns null when a frame is skipped or when a previous inference is still running.

fun processVideoFrame(
    frameBuffer: ByteBuffer,
    strength: Float,
    rotateOutput: Int = 0,
    flipVertical: Boolean = false
): DynamicOutputs?
Parameters:
  frameBuffer (ByteBuffer): Raw 256×256 RGBA frame data. Must be rewound before passing
  strength (Float): Correction intensity (0.0–1.0)
  rotateOutput (Int): Clockwise rotation in degrees applied after inference. Values: 0, 90, 180, 270. Default: 0
  flipVertical (Boolean): If true, flips output arrays vertically after inference. Default: false

Returns: DynamicOutputs for this frame, or null if skipped.
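
A per-frame sketch, assuming `frame` already holds a 256×256 RGBA snapshot of the current video frame; onFrame is an illustrative name:

```kotlin
import java.nio.ByteBuffer
import photos.eyeq.dynamic.DynamicOutputs
import photos.eyeq.dynamic.DynamicProcessor

// Sketch: process one video frame, assuming an initialized processor.
fun onFrame(processor: DynamicProcessor, frame: ByteBuffer): DynamicOutputs? {
    frame.rewind()  // required before passing to processVideoFrame()
    // null means the frame was skipped or inference is still running;
    // keep rendering with the previously delivered outputs in that case.
    return processor.processVideoFrame(frame, strength = 1.0f)
}
```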

processVideoFrameDebug

Processes a single video frame, bypassing frame skipping. Always runs inference regardless of the skip interval. Intended for debugging and benchmarking.

fun processVideoFrameDebug(frameBuffer: ByteBuffer, strength: Float): DynamicOutputs
Parameters:
  frameBuffer (ByteBuffer): Raw 256×256 RGBA frame data
  strength (Float): Correction intensity (0.0–1.0)

Returns: DynamicOutputs for this frame. Never null.

Deflickering

setDeflickerParams

Configures the temporal deflicker filter applied across frames. Higher averaging values produce smoother but slower-adapting results.

fun setDeflickerParams(skipFrames: Int, curveAvg: Float, imgAvg: Float)
Parameters:
  skipFrames (Int): Frames to skip between inference runs. Higher values reduce load at the cost of responsiveness
  curveAvg (Float): Temporal smoothing weight for the tone curve (0.0–1.0). Recommended: 0.08
  imgAvg (Float): Temporal smoothing weight for the image-level output (0.0–1.0). Recommended: 0.9

startVideoFrames

Resets the deflicker state and the frame skip counter. Call when starting a new video session — before playback, before transcoding, or when a new camera session starts. Ensures inference runs on the very next frame and no blending artifacts from a previous session carry over.

fun startVideoFrames()
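
Putting the two calls together, a sketch of starting a fresh video session with the recommended smoothing weights; skipFrames = 2 is an illustrative choice, not a documented default:

```kotlin
import photos.eyeq.dynamic.DynamicProcessor

// Sketch: configure deflickering once, then reset state at the start
// of each playback, transcode, or camera session.
fun beginVideoSession(processor: DynamicProcessor) {
    processor.setDeflickerParams(skipFrames = 2, curveAvg = 0.08f, imgAvg = 0.9f)
    processor.startVideoFrames()  // inference runs on the very next frame
}
```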

Output orientation

Methods for correcting DynamicOutputs orientation after inference when the render target has a different orientation than the inference input.

rotateOutputs

Rotates the DynamicOutputs arrays in place to match a target orientation. Use when inference was run on a rotated (upright) image but the outputs need to be applied back to the original sensor-rotated frame.

fun rotateOutputs(outputs: DynamicOutputs, rotation: Int)
Parameters:
  outputs (DynamicOutputs): Outputs whose local and fusion arrays will be rotated in place
  rotation (Int): Clockwise rotation in degrees: 90, 180, or 270

flipOutputs

Flips the DynamicOutputs arrays vertically in place. Use when inference was run on an upright image but the render target expects the original flipped sensor orientation.

fun flipOutputs(outputs: DynamicOutputs)
Parameters:
  outputs (DynamicOutputs): Outputs whose local and fusion arrays will be flipped in place

DynamicView

photos.eyeq.dynamic.image

A GLSurfaceView that renders a Bitmap with the Dynamic effect applied in real time. All public methods are thread-safe — they queue their work onto the GL thread internally.

class DynamicView(context: Context, attrs: AttributeSet? = null) : GLSurfaceView

Typical usage:

  1. Add DynamicView to your layout
  2. Call setBitmap() to load the source image
  3. Run inference with DynamicProcessor.processImagePreview() and pass the result to setDynamicOutputs()
  4. Use setStrength() to let the user adjust effect intensity interactively
  5. Call release() when the view is no longer needed
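
Steps 2–4 above can be sketched as follows, assuming `processor` is initialized and the caller dispatches inference to a background thread; showPreview is an illustrative name:

```kotlin
import android.graphics.Bitmap
import photos.eyeq.dynamic.DynamicProcessor
import photos.eyeq.dynamic.image.DynamicView

// Sketch of the typical-usage steps above. Run the inference call
// off the main thread; DynamicView's methods are thread-safe.
fun showPreview(view: DynamicView, processor: DynamicProcessor, source: Bitmap) {
    view.setBitmap(source)                               // step 2: load source image
    val outputs = processor.processImagePreview(source)  // step 3: run inference
    view.setDynamicOutputs(outputs)
    view.setStrength(1.0f)                               // step 4: interactive intensity
}
```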

setBitmap

Loads a Bitmap as the source image to render. Must be called before setDynamicOutputs().

fun setBitmap(bitmap: Bitmap)
Parameters:
  bitmap (Bitmap): Source bitmap to display. Not recycled by this method

setDynamicOutputs

Applies DynamicOutputs from inference to the rendered image. Call after receiving outputs from DynamicProcessor.processImagePreview().

fun setDynamicOutputs(outputs: DynamicOutputs)
Parameters:
  outputs (DynamicOutputs): Inference outputs to apply to the current bitmap

setStrength

Adjusts the intensity of the Dynamic effect without re-running inference. Use for interactive tuning after setDynamicOutputs() has been called.

fun setStrength(strength: Float)
Parameters:
  strength (Float): Effect intensity (0.0–1.0). 0.0 shows the original image, 1.0 applies the full effect

Throws: IllegalArgumentException if strength is outside the 0.0–1.0 range.

release

Releases all GL resources held by this view. Call in onDestroyView() or onDestroy(). The view must not be used after calling this.

fun release()

DynamicEffectMedia

photos.eyeq.dynamic.media

A GlEffect that applies the Dynamic effect to video frames via the Media3 GL pipeline. Compatible with:

  • ExoPlayer via ExoPlayer.setVideoEffects() — real-time playback
  • Transformer via EditedMediaItem.Builder.setEffects() — offline video transcoding
  • LifecycleCameraController via LifecycleCameraController.setEffects() — real-time camera preview

class DynamicEffectMedia(
    context: Context,
    asyncInput: Boolean = true,
    listener: DynamicListener,
    initListener: (() -> Unit)? = null
) : GlEffect
Parameters:
  context (Context): Application context
  asyncInput (Boolean): If true, calls onInputReadyAsync(). If false, calls onInputReady(). Default: true
  listener (DynamicListener): Receives pixel buffers for inference and FPS bench callbacks
  initListener ((() -> Unit)?): Called when the GL context is ready. Can also be set via onInitialized()

Always call release() when the effect is no longer needed to free GL resources.
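
A wiring sketch for the ExoPlayer case, assuming a Media3 version that exposes ExoPlayer.setVideoEffects() and a `listener` that runs inference on a background thread; attachEffect is an illustrative name:

```kotlin
import android.content.Context
import androidx.media3.exoplayer.ExoPlayer
import photos.eyeq.dynamic.DynamicListener
import photos.eyeq.dynamic.media.DynamicEffectMedia

// Sketch: attach the effect to real-time ExoPlayer playback.
fun attachEffect(
    context: Context,
    player: ExoPlayer,
    listener: DynamicListener
): DynamicEffectMedia {
    val effect = DynamicEffectMedia(context, asyncInput = true, listener = listener)
    player.setVideoEffects(listOf(effect))
    return effect  // call effect.release() when playback ends
}
```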

Inference output delivery

setDynamicOutputs

Delivers DynamicOutputs from inference to the GL thread. Safe to call from any thread — queued and applied before the next frame.

fun setDynamicOutputs(outputs: DynamicOutputs?)
Parameters:
  outputs (DynamicOutputs?): Inference outputs to apply, or null to continue rendering with the previously uploaded outputs

Snapshot orientation

Correct the orientation of the inference input snapshot when the video stream is rotated or mirrored (e.g., front camera, landscape sensor). Set these before inference begins.

setSnapshotTransform

Applies a combined rotation and flip transform to the inference input snapshot. Transforms are applied in order: flipVer, then flipHor, then rotation.

fun setSnapshotTransform(rotation: Int = 0, flipVer: Boolean = false, flipHor: Boolean = false)
Parameters:
  rotation (Int): Clockwise rotation in degrees: 0, 90, 180, or 270. Default: 0
  flipVer (Boolean): Flip vertically (GL origin correction). Default: false
  flipHor (Boolean): Flip horizontally (front camera mirror correction). Default: false

setSnapshotRotation

Rotates the inference input snapshot clockwise.

fun setSnapshotRotation(rotation: Int)
Parameters:
  rotation (Int): Clockwise rotation in degrees: 0, 90, 180, or 270

setSnapshotFlip

Flips the inference input snapshot along one or both axes.

fun setSnapshotFlip(flipVer: Boolean = false, flipHor: Boolean = false)
Parameters:
  flipVer (Boolean): Flip vertically. Default: false
  flipHor (Boolean): Flip horizontally. Default: false

setSnapshotFlipVert

Flips the inference input snapshot vertically (GL origin correction).

fun setSnapshotFlipVert(flipVer: Boolean)

setSnapshotFlipHor

Flips the inference input snapshot horizontally (front camera mirror correction).

fun setSnapshotFlipHor(flipHor: Boolean)

LUT

setLut

Sets the 3D LUT data and controls whether it is applied during rendering. Safe to call from any thread — queued and applied before the next frame.

fun setLut(lut: FloatArray?, enabled: Boolean = true)
Parameters:
  lut (FloatArray?): FloatArray of size 16×16×16×3 representing the 3D LUT, or null to clear
  enabled (Boolean): Whether to apply the LUT in the shader. Default: true
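
The expected layout is a 16×16×16 grid with 3 floats per entry (12 288 floats). As an illustration, an identity LUT (output equals input) can be generated as below; the r-fastest channel ordering and [0, 1] normalization are assumptions this reference does not confirm:

```kotlin
// Builds an identity 3D LUT of size 16×16×16×3 = 12 288 floats.
// Assumption: entries are stored r-fastest (r, then g, then b) with
// values normalized to [0, 1]; verify the ordering for your pipeline.
fun identityLut(size: Int = 16): FloatArray {
    val lut = FloatArray(size * size * size * 3)
    var i = 0
    for (b in 0 until size) for (g in 0 until size) for (r in 0 until size) {
        lut[i++] = r / (size - 1f)
        lut[i++] = g / (size - 1f)
        lut[i++] = b / (size - 1f)
    }
    return lut
}
```

Under those assumptions, passing this array to setLut() should leave colors unchanged, which makes it a useful baseline when debugging LUT orientation.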

setLutEnabled

Enables or disables LUT rendering without replacing the current LUT data.

fun setLutEnabled(enabled: Boolean)

getLutEnabled

Returns whether the LUT is currently enabled and has valid data. Must be called on the GL thread.

fun getLutEnabled(): Boolean

Lifecycle

onInitialized

Registers a callback to be invoked when the GL context is ready.

fun onInitialized(listener: () -> Unit)

release

Releases all GL resources held by this effect. Must be called when no longer needed.

fun release()

DynamicListener

photos.eyeq.dynamic

Callback interface for receiving video frames ready for inference. Implement this interface and pass it to the DynamicEffectMedia constructor.

interface DynamicListener

onInputReady

Called when DynamicEffectMedia provides a new 256×256 RGBA frame for synchronous inference. Run inference and return the output immediately. Used when asyncInput = false.

open fun onInputReady(pixels: ByteBuffer): DynamicOutputs?
Parameters:
  pixels (ByteBuffer): Direct RGBA ByteBuffer (256×256×4 bytes)

Returns: DynamicOutputs result, or null to skip.

onInputReadyAsync

Called when a new frame is ready for asynchronous processing. Run inference on a background thread and deliver outputs back via DynamicEffectMedia.setDynamicOutputs(). Used when asyncInput = true (default).

open fun onInputReadyAsync(pixels: ByteBuffer)
Parameters:
  pixels (ByteBuffer): Direct RGBA ByteBuffer (256×256×4 bytes)

onBench

Called periodically with the current FPS reported by DynamicEffectMedia. Useful for benchmarking.

open fun onBench(fps: Float)
Parameters:
  fps (Float): Current frames per second
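
A sketch of an asynchronous listener that runs inference on a single background thread and hands results back to the effect; InferenceListener is an illustrative class, assuming `processor` is initialized and `effect` is assigned after construction:

```kotlin
import java.nio.ByteBuffer
import java.util.concurrent.Executors
import photos.eyeq.dynamic.DynamicListener
import photos.eyeq.dynamic.DynamicProcessor
import photos.eyeq.dynamic.media.DynamicEffectMedia

// Sketch: async-mode listener (asyncInput = true).
class InferenceListener(private val processor: DynamicProcessor) : DynamicListener {
    lateinit var effect: DynamicEffectMedia
    private val executor = Executors.newSingleThreadExecutor()

    override fun onInputReadyAsync(pixels: ByteBuffer) {
        executor.execute {
            val outputs = processor.processVideoFrame(pixels, strength = 1.0f)
            // null (skipped frame) keeps the previously applied outputs.
            effect.setDynamicOutputs(outputs)
        }
    }

    override fun onBench(fps: Float) {
        // e.g. log or display the pipeline FPS for benchmarking
    }
}
```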

DynamicOutputs

photos.eyeq.dynamic

Holds the inference output arrays produced by DynamicProcessor. Pass to DynamicEffectMedia.setDynamicOutputs() or DynamicView.setDynamicOutputs() to apply the Dynamic effect.

data class DynamicOutputs(
    var global: FloatArray,
    var local: FloatArray,
    var fusion: FloatArray,
    var lut: FloatBuffer? = null
)
Properties:
  global (FloatArray): Global tone-mapping curve parameters. Size: 54
  local (FloatArray): Spatially local tone-mapping parameters. Size: 16×16×54
  fusion (FloatArray): Per-region blending weights. Size: 16×16
  lut (FloatBuffer?): Optional 3D LUT data. null if no LUT is applied
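
The documented sizes can be checked with a small sanity helper; hasExpectedSizes is a hypothetical function for illustration, not part of the SDK:

```kotlin
// Hypothetical sanity check (not part of the SDK) for the documented
// DynamicOutputs array sizes: global = 54, local = 16×16×54, fusion = 16×16.
fun hasExpectedSizes(global: FloatArray, local: FloatArray, fusion: FloatArray): Boolean =
    global.size == 54 &&
        local.size == 16 * 16 * 54 &&
        fusion.size == 16 * 16
```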

VIDEO-SDK Version 1.0.0.23 built from aa5eef97017e23db1d3051b079500606825ef474 on 5-6-2023.

Copyright © 2026 EyeQ Imaging Inc. All rights reserved.
