# Using the Android Video SDK

Image processing, real-time video, transcoding, camera preview, and deflickering
The SDK supports four workflows: still image correction with interactive preview, real-time video playback with ExoPlayer, offline transcoding with Media3 Transformer, and live camera preview with CameraX. All workflows share a common pattern — run inference through DynamicProcessor, then deliver the resulting DynamicOutputs to a view or effect for GPU rendering.
## Image workflow
For still images, use DynamicView for interactive preview and processImage for final export.
Add DynamicView to your layout:

```xml
<photos.eyeq.dynamic.image.DynamicView
    android:id="@+id/dynamicView"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
```

Run inference and display the result:
```kotlin
// Load the source bitmap into the view first
dynamicView.setBitmap(bitmap)

// Run inference at full strength — tuning happens on the GPU via setStrength()
val outputs = processor.processImagePreview(bitmap, strength = 1.0f)
dynamicView.setDynamicOutputs(outputs)

// Let the user adjust interactively without re-running inference
dynamicView.setStrength(0.8f)
```

When the user is done adjusting, produce a full-resolution bitmap for saving:

```kotlin
val result = processor.processImage(bitmap, strength = 0.8f)
```

processImagePreview returns raw DynamicOutputs and skips the offscreen GL render, making it significantly faster than processImage. Re-run it only when the source bitmap changes, not when the strength changes.
processImagePreview and processImage must be called from a background thread. setBitmap, setDynamicOutputs, and setStrength are safe to call from any thread.
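Putting the threading rules together: a minimal sketch assuming a Fragment with lifecycleScope, with the processor and dynamicView from above in scope.

```kotlin
// Sketch: run inference off the main thread, then hand the outputs
// to the view (safe from any thread per the rules above).
fun showPreview(bitmap: Bitmap) {
    dynamicView.setBitmap(bitmap)
    lifecycleScope.launch(Dispatchers.Default) {
        // Background thread: inference
        val outputs = processor.processImagePreview(bitmap, strength = 1.0f)
        // Any thread: GPU-side delivery and tuning
        dynamicView.setDynamicOutputs(outputs)
        dynamicView.setStrength(0.8f)
    }
}
```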
## ExoPlayer — real-time playback
DynamicEffectMedia is a GlEffect that plugs into the Media3 GL pipeline. For real-time playback, use async inference so the GL thread is not blocked.
Call startVideoFrames() when loading a new media item or after a seek.
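One way to cover both triggers is a Player.Listener; onMediaItemTransition and onPositionDiscontinuity are standard ExoPlayer API. A sketch, assuming player and processor are in scope:

```kotlin
// Re-arm the frame pipeline on media item changes and seeks.
player.addListener(object : Player.Listener {
    override fun onMediaItemTransition(mediaItem: MediaItem?, reason: Int) {
        processor.startVideoFrames()
    }

    override fun onPositionDiscontinuity(
        oldPosition: Player.PositionInfo,
        newPosition: Player.PositionInfo,
        reason: Int
    ) {
        if (reason == Player.DISCONTINUITY_REASON_SEEK) {
            processor.startVideoFrames()
        }
    }
})
```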
```kotlin
class PlayerFragment : Fragment(), DynamicListener {

    private val dynamicEffect by lazy {
        DynamicEffectMedia(context = requireContext(), listener = this)
    }

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        // Correct GL origin for video frames
        dynamicEffect.setSnapshotFlipVert(flipVer = true)

        player.setVideoEffects(listOf(dynamicEffect))
        processor.startVideoFrames()
    }

    // Async mode — inference runs on a background thread (recommended)
    override fun onInputReadyAsync(pixels: ByteBuffer) {
        lifecycleScope.launch(Dispatchers.IO) {
            val output = processor.processVideoFrame(frameBuffer = pixels, strength = 0.8f)
            processor.flipOutputs(output)
            dynamicEffect.setDynamicOutputs(output)
        }
    }

    override fun onDestroyView() {
        dynamicEffect.release()
        super.onDestroyView()
    }
}
```

For debugging or short clips, you can use synchronous inference instead. Create the effect with asyncInput = false and implement onInputReady:
```kotlin
// Synchronous mode for debugging or short clips
private val dynamicEffect by lazy {
    DynamicEffectMedia(context = requireContext(), asyncInput = false, listener = this)
}

override fun onInputReady(pixels: ByteBuffer): DynamicOutputs? {
    val output = processor.processVideoFrame(frameBuffer = pixels, strength = 0.8f)
    processor.flipOutputs(output)
    return output
}
```

## Media3 Transformer — video transcoding
For offline transcoding, use DynamicEffectMedia with asyncInput = false so inference runs synchronously on the GL thread. Transformer controls the frame rate, so blocking is safe.
Call startVideoFrames() before starting each transcode job.
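The transformer itself is standard Media3 API; a sketch of constructing one with a completion listener, assuming the media3-transformer dependency is present:

```kotlin
// Sketch: build the Transformer with a listener so teardown (or the
// next job) can be triggered when the export finishes or fails.
val transformer = Transformer.Builder(requireContext())
    .addListener(object : Transformer.Listener {
        override fun onCompleted(composition: Composition, exportResult: ExportResult) {
            // Export finished: safe to release resources or start the next job
        }

        override fun onError(
            composition: Composition,
            exportResult: ExportResult,
            exportException: ExportException
        ) {
            // Handle the failure (log, retry, surface to the user)
        }
    })
    .build()
```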
```kotlin
class MovieFragment : Fragment(), DynamicListener {

    // asyncInput = false: inference runs synchronously on the GL thread
    private val dynamicEffect by lazy {
        DynamicEffectMedia(context = requireContext(), asyncInput = false, listener = this)
    }

    private fun createComposition(uri: Uri): Composition {
        val effects = Effects(emptyList(), listOf(dynamicEffect))
        val editedMediaItem = EditedMediaItem.Builder(MediaItem.fromUri(uri))
            .setEffects(effects)
            .build()
        return Composition.Builder(EditedMediaItemSequence.Builder(editedMediaItem).build()).build()
    }

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        dynamicEffect.setSnapshotFlipVert(flipVer = true)
        processor.startVideoFrames()
        transformer.start(createComposition(uri), outputPath)
    }

    override fun onInputReady(pixels: ByteBuffer): DynamicOutputs? {
        val output = processor.processVideoFrame(pixels, strength = 0.8f)
        processor.flipOutputs(output)
        return output
    }

    override fun onDestroyView() {
        dynamicEffect.release()
        super.onDestroyView()
    }
}
```

## CameraX — real-time camera preview
For live camera preview, use DynamicEffectMedia with CameraX's LifecycleCameraController. Camera frames arrive in sensor orientation, so you need to correct the snapshot orientation before inference and rotate outputs back afterward.
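The sensor rotation does not have to be hard-coded: sensorRotationDegrees on CameraX's CameraInfo reports it at runtime. A sketch, assuming a LifecycleCameraController named cameraController that has already been bound:

```kotlin
// Query the real sensor rotation instead of assuming 90°.
// cameraInfo is null until bindToLifecycle() has run.
val sensorRotation = cameraController.cameraInfo?.sensorRotationDegrees ?: 90
val correctionRotate = (360 - sensorRotation) % 360
dynamicEffect.setSnapshotTransform(rotation = correctionRotate, flipVer = true)
```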
Call startVideoFrames() when the camera session starts and when switching lenses.
```kotlin
class CameraFragment : Fragment(), DynamicListener {

    // Back camera: sensor rotation is typically 90°.
    // A class-level property so onInputReadyAsync can use it too.
    private val sensorRotation = 90

    private val dynamicEffect by lazy {
        DynamicEffectMedia(context = requireContext(), listener = this)
    }

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        // Undo the sensor rotation before inference:
        // correctionRotate = (360 - 90) % 360 = 270
        val correctionRotate = (360 - sensorRotation) % 360
        dynamicEffect.setSnapshotTransform(rotation = correctionRotate, flipVer = true)
        processor.startVideoFrames()

        val mediaEffect = Media3Effect(requireContext(), PREVIEW or VIDEO_CAPTURE, mainThreadExecutor()) {}
        mediaEffect.setEffects(listOf(dynamicEffect))
        cameraController.setEffects(setOf(mediaEffect))
        cameraController.bindToLifecycle(viewLifecycleOwner)
    }

    override fun onInputReadyAsync(pixels: ByteBuffer) {
        lifecycleScope.launch(Dispatchers.IO) {
            val outputs = processor.processVideoFrame(
                frameBuffer = pixels,
                strength = 0.8f
            ) ?: return@launch
            // Rotate outputs back to match the original sensor orientation
            processor.rotateOutputs(outputs, sensorRotation)
            // Back camera only — flip outputs vertically
            processor.flipOutputs(outputs)
            dynamicEffect.setDynamicOutputs(outputs)
        }
    }

    override fun onDestroyView() {
        dynamicEffect.release()
        super.onDestroyView()
    }
}
```

For the front camera, sensor rotation is typically 270° and no vertical flip is needed, but output rotation is still required:
```kotlin
// Front camera setup: correctionRotate = (360 - 270) % 360 = 90
dynamicEffect.setSnapshotTransform(rotation = 90, flipVer = false)

// In onInputReadyAsync — rotate outputs back by 270°
processor.rotateOutputs(outputs, 270)
dynamicEffect.setDynamicOutputs(outputs)
```

## Deflickering
The SDK applies temporal smoothing across video frames to prevent flickering. Configure it with setDeflickerParams before processing:
```kotlin
processor.setDeflickerParams(
    skipFrames = 1,   // skip 1 frame between inference runs
    curveAvg = 0.08f, // temporal smoothing for the tone curve
    imgAvg = 0.9f     // temporal smoothing for image-level output
)
```

| Parameter | Effect |
|---|---|
| skipFrames | Frames skipped between inference runs. 0 = every frame, 1 = every other frame, 2 = every third. Higher values reduce load but make corrections less responsive. |
| curveAvg | Temporal smoothing for the tone curve. Lower values respond faster; higher values are smoother. Recommended: 0.08. |
| imgAvg | Temporal smoothing for image-level output. Recommended: 0.9. |
Call startVideoFrames() before each new video session (playback start, transcode start, camera start, or after a seek) to reset the deflicker state and ensure inference runs on the very first frame.
## Cleanup
Always call release() on DynamicEffectMedia or DynamicView when they are no longer needed to free GL resources. Do this in onDestroyView() or onDestroy().
VIDEO-SDK Version 1.0.0.23 built from aa5eef97017e23db1d3051b079500606825ef474 on 5-6-2023.
Copyright © 2026 EyeQ Imaging Inc. All rights reserved.