
I have an application that takes in the camera feed (Camera2 API) and produces video output at the chosen resolution using MediaCodec and MediaMuxer. The app works like a dashcam, producing 10-second video segments. Now I want to produce a lower-quality video at the same time.

I am using https://bigflake.com/mediacodec/#DecodeEditEncodeTest as my point of reference. I've also looked at several other examples, including the others on BigFlake.

But I'm facing several problems at this point (details are further below):

  1. Some devices cannot accept multiple surfaces when creating the CaptureRequest.
  2. I'm hitting some kind of internal memory limit in the codec.
  3. Frozen frames at higher resolutions, and sometimes the decoder produces no output buffers after the first segment.

I know this is a mountain of text, but I would appreciate any help.

Initially, I tried to add another surface to the camera capture session so that the HQ and LQ videos would be produced via two dedicated surfaces feeding their respective MediaCodecs, but this fails on several devices. (Note: these were not the only two surfaces in the session.)
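For context, the multi-surface setup I attempted looks roughly like this (a simplified sketch; `previewSurface`, `hqEncoderSurface`, and `lqEncoderSurface` are placeholders for my real surfaces, and `backgroundHandler` is my camera thread's handler):

```kotlin
// Simplified sketch of the failing multi-surface setup; surface names are placeholders.
val surfaces = listOf(previewSurface, hqEncoderSurface, lqEncoderSurface)

cameraDevice.createCaptureSession(surfaces, object : CameraCaptureSession.StateCallback() {
    override fun onConfigured(session: CameraCaptureSession) {
        val request = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD).apply {
            surfaces.forEach { addTarget(it) }  // every surface receives each frame
        }
        session.setRepeatingRequest(request.build(), null, backgroundHandler)
    }

    override fun onConfigureFailed(session: CameraCaptureSession) {
        // this is where several devices end up once a third surface is attached
        Log.e(TAG, "capture session configuration failed")
    }
}, backgroundHandler)
```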

So my workaround is to have a single surface that feeds the HQ encoder, plus a decoder that decodes the HQ encoder's output buffers and feeds the frames into an LQ encoder. This works fine for the most part at resolutions below 1920x1080. On some devices, like the Lenovo Tab M10 (Gen 3, Android 13) and the Samsung Galaxy Tab A9+ (Android 14), I'm hitting some kind of internal memory limit, as shown by the error below:
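One mitigation I've considered (a sketch only; I haven't verified it avoids the -12 error) is to query the encoder's advertised capabilities before calling `configure()`, so unsupported sizes can fall back gracefully instead of crashing:

```kotlin
// Sketch: ask encoders what they claim to support before configure().
// Note: this reports per-instance limits; it does not guarantee that an HQ encoder,
// a decoder, and an LQ encoder can all run concurrently on a given device.
fun isEncoderSizeSupported(mime: String, width: Int, height: Int): Boolean {
    val codecList = MediaCodecList(MediaCodecList.REGULAR_CODECS)
    return codecList.codecInfos
        .filter { it.isEncoder && it.supportedTypes.any { t -> t.equals(mime, ignoreCase = true) } }
        .any { info ->
            val caps = info.getCapabilitiesForType(mime)
            caps.videoCapabilities.isSizeSupported(width, height)
        }
}
```

`MediaCodecInfo.CodecCapabilities.getMaxSupportedInstances()` might also be worth checking, since the failure only appears when the second and third codec instances come up.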

```
[OMX.qcom.video.encoder.avc] failed to set input port definition parameters.
[OMX.qcom.video.encoder.avc] configureCodec returning error -12
signalError(omxError 0x80001001, internalError -12)
[OMX.qcom.video.encoder.avc] configureCodec returning error -12
signalError(omxError 0x80001001, internalError -12)
Codec reported err 0xfffffff4/NO_MEMORY, actionCode 0, while in state 3/CONFIGURING
flushMediametrics
Codec reported err 0xfffffff4/NO_MEMORY, actionCode 0, while in state 0/UNINITIALIZED
FATAL EXCEPTION: EncodeThread
Process: com.qdev.singlesurfacedualquality, PID: 21251
android.media.MediaCodec$CodecException: Error 0xfffffff4
    at android.media.MediaCodec.native_configure(Native Method)
    at android.media.MediaCodec.configure(MediaCodec.java:2214)
    at android.media.MediaCodec.configure(MediaCodec.java:2130)
    at com.qdev.singlesurfacedualquality.MainActivity.encodeFrames$lambda$7(MainActivity.kt:366)
    ...
```

Another issue I'm facing is that when I bump the resolution up to 4K 30fps, even the high-quality video comes out at a low bit rate, and I get frozen frames when I stop and restart the muxer to produce a new file segment. I have tried changing the key-frame interval from 1 second to 10 seconds; the quality is slightly better with that. By "frozen frame issue" I mean that what should in theory be a 10-second video actually contains 10 seconds of video plus another ~9 seconds of a frozen frame, totalling 19 seconds.
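For reference, the encoder format settings I've been experimenting with look roughly like this (the resolution and bit-rate values are examples, not my exact production numbers; `KEY_I_FRAME_INTERVAL` is in seconds):

```kotlin
// Sketch of the HQ encoder format under experimentation (values are examples).
val format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 3840, 2160).apply {
    setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface)
    setInteger(MediaFormat.KEY_BIT_RATE, 42_000_000)   // example target for 4K30
    setInteger(MediaFormat.KEY_FRAME_RATE, 30)
    setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)    // seconds between key frames;
                                                       // raising this to 10 helped quality slightly
}
```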

To hopefully improve performance, I tried using Surfaces, where the decoder outputs to a surface that acts as the input to the LQ encoder (following the same example mentioned above). However, I was not able to get this to work. I'm not experienced with OpenGL, so I'm not sure whether I've made a fundamental setup mistake:
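What I was hoping to achieve is something like the wiring below (a sketch without the OpenGL intermediate stage; I don't know whether this direct decoder-surface-to-encoder-surface connection behaves correctly on my problem devices):

```kotlin
// Sketch: wire the decoder's output Surface directly to the LQ encoder's
// input Surface, skipping the OpenGL stage entirely. (Unverified.)
lqVideoCodec.configure(lqFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
val lqInputSurface: Surface = lqVideoCodec.createInputSurface()  // must come after configure()
lqVideoCodec.start()

// Configure the decoder to render into that Surface instead of ByteBuffers.
decoder.configure(hqToLqFormat, lqInputSurface, null, 0)
decoder.start()

// Later, when draining the decoder: render = true sends the frame to the Surface.
decoder.releaseOutputBuffer(decoderStatus, /* render = */ true)
```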

```
Could not compile shader 35633:
FATAL EXCEPTION: EncodeThread
Process: com.qdev.singlesurfacedualquality, PID: 32130
java.lang.RuntimeException: failed creating program
    at com.qdev.singlesurfacedualquality.utils.TextureRender.surfaceCreated(TextureRender.java:115)
    at com.qdev.singlesurfacedualquality.utils.OutputSurface.setup(OutputSurface.java:87)
    at com.qdev.singlesurfacedualquality.utils.OutputSurface.<init>(OutputSurface.java:79)
    at com.qdev.singlesurfacedualquality.MainActivity.encodeFrames$lambda$8(MainActivity.kt:305)
    at com.qdev.singlesurfacedualquality.MainActivity.$r8$lambda$vss7pVmD3tZM7r99IGCNlRwOFco(Unknown Source:0)
    at com.qdev.singlesurfacedualquality.MainActivity$$ExternalSyntheticLambda5.run(D8$$SyntheticClass:0)
    at android.os.Handler.handleCallback(Handler.java:942)
    at android.os.Handler.dispatchMessage(Handler.java:99)
    at android.os.Looper.loopOnce(Looper.java:201)
    at android.os.Looper.loop(Looper.java:288)
    at android.os.HandlerThread.run(HandlerThread.java:67)
```
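From what I can tell, 35633 is `GL_VERTEX_SHADER` (0x8B31), and a compile failure on a trivial vertex shader usually means no EGL context is current on the calling thread rather than a bad shader source. This is the kind of diagnostic check I've been thinking of adding (a sketch, not my actual `TextureRender` code):

```kotlin
// Sketch: sanity checks around shader compilation. Failure to compile even a
// trivial vertex shader (35633 == GL_VERTEX_SHADER) often means no EGL context
// is current on this thread.
fun loadShader(type: Int, source: String): Int {
    check(EGL14.eglGetCurrentContext() != EGL14.EGL_NO_CONTEXT) {
        "No EGL context current on thread ${Thread.currentThread().name}"
    }
    val shader = GLES20.glCreateShader(type)
    check(shader != 0) {
        "glCreateShader($type) failed: 0x${Integer.toHexString(GLES20.glGetError())}"
    }
    GLES20.glShaderSource(shader, source)
    GLES20.glCompileShader(shader)
    val compiled = IntArray(1)
    GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, compiled, 0)
    if (compiled[0] == 0) {
        Log.e(TAG, "shader compile log: ${GLES20.glGetShaderInfoLog(shader)}")
        GLES20.glDeleteShader(shader)
        return 0
    }
    return shader
}
```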

Having looked at this and tried to make sense of it for longer than I will admit, I'm not having much luck, and to be honest I'm lost on how to proceed.

Is there a way I can get this working to a point where I can make reliable 4k videos as well?

My code (without the MediaCodec surface experiments) is below:

```kotlin
private fun encodeSyncVideo() {
    encoderThreadHandler?.post {
        if (captureLq) {
            // these must be created before configureLqVideoFormat
            decoder = MediaCodec.createDecoderByType(VIDEO_MIME_TYPE)
            lqVideoCodec = MediaCodec.createEncoderByType(VIDEO_MIME_TYPE)
            try {
                configureLqVideoFormat()
            } catch (e: CodecException) {
                // looks like LQ video is not supported on this device at this resolution
                // ...
                return@post
            }
            lqVideoCodec.start()
            lqVideoCodecStarted = true
        }
        val bufferInfo = MediaCodec.BufferInfo()
        val decoderBufferInfo = MediaCodec.BufferInfo()
        val lqBufferInfo = MediaCodec.BufferInfo()
        var decoderInputBuffers: Array<ByteBuffer>? = null
        var decoderOutputBuffers: Array<ByteBuffer>? = null
        var hqToLqFormat: MediaFormat? = null
        var decoderConfigured: Boolean = false
        var highQualityVideoTrackIndex: Int = -1
        var lowQualityVideoTrackIndex: Int = -1
        var hqFrameCount = 0
        var lqFrameCount = 0
        var firstKeyFrame = false
        var muxLast = false

        while (true) {
            if (videoCodecStarted) { // video codec is started
                val hqOutputBufferIndex = videoCodec.dequeueOutputBuffer(bufferInfo, 0)
                if (hqOutputBufferIndex >= 0) {
                    val hqOutputBuffer = videoCodec.getOutputBuffer(hqOutputBufferIndex)
                    hqFrameCount++
                    if (bufferInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG != 0) {
                        bufferInfo.size = 0
                    }
                    if (bufferInfo.size != 0) {
                        // start the muxer if not started
                        if (!muxerStarted && muxer != null) {
                            reuseVideoFormat = videoCodec.outputFormat
                            highQualityVideoTrackIndex = muxer!!.addTrack(videoCodec.outputFormat)
                            muxer!!.start()
                            muxerStarted = true
                        }
                        // write the data to the muxer
                        hqOutputBuffer?.apply {
                            position(bufferInfo.offset)
                            limit(bufferInfo.offset + bufferInfo.size)
                            if (muxerStarted) {
                                muxer?.writeSampleData(highQualityVideoTrackIndex, this, bufferInfo)
                                if ((hqFrameCount > segmentAtFrameCount || !captureFrames.get()) &&
                                    bufferInfo.flags and MediaCodec.BUFFER_FLAG_KEY_FRAME != 0
                                ) {
                                    Log.d(TAG, "encodeSyncVideo: hq file count incremented to $hqFileCount @ Frame $hqFrameCount, segmentation at $segmentAtFrameCount")
                                    hqFrameCount = 0 // reset the frame count
                                    audioRecorder?.stop()
                                    audioRecorder?.release()
                                    audioRecorder = null
                                    audioRecorderStarted = false
                                    if (captureLq) {
                                        val hqAudioIs = File(paths.INTERNAL_VIDEO_DIR + File.separator + "audio-${hqFileCount}.mp3").inputStream()
                                        val lqAudioOs = File(paths.INTERNAL_VIDEO_DIR + File.separator + "audio-${hqFileCount}-lq.mp3").outputStream()
                                        val copied = FileUtils.copy(hqAudioIs, lqAudioOs)
                                        Log.d(TAG, "encodeSyncVideo: copied $copied bytes from hq to lq audio file")
                                    }
                                    val startTimeForMuxerRestart = System.currentTimeMillis() / 1000
                                    muxer!!.stop()
                                    muxerStarted = false
                                    muxer?.release()
                                    muxer = null
                                    startMuxService(false)
                                    if (captureFrames.get().also { Log.d(TAG, "encodeSyncVideo: capture frames = $it") }) {
                                        hqFileCount++
                                        attemptAudioRecorderStart(isLq = false, isTest = false)
                                        if (!captureLq) {
                                            // create a new snipback output file
                                            outputFilePath = createOutputMediaFile()!!.absolutePath
                                            segmentCreateTime = System.currentTimeMillis()
                                        }
                                        muxer = MediaMuxer(
                                            paths.INTERNAL_VIDEO_DIR + File.separator + "out-${hqFileCount}.mp4",
                                            MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4
                                        )
                                        videoTrackIndex = muxer?.addTrack(reuseVideoFormat)!!
                                        muxer?.start()
                                        muxerStarted = true
                                        Log.d(TAG, "encodeSyncVideo: starting new muxer for hq file $hqFileCount")
                                    }
                                    Log.d(TAG, "encodeSyncVideo: time taken to restart muxer ${System.currentTimeMillis() / 1000 - startTimeForMuxerRestart}")
                                }
                            }
                        }
                    }
                    // this is the first key frame
                    if (bufferInfo.flags and MediaCodec.BUFFER_FLAG_KEY_FRAME != 0 && !firstKeyFrame) {
                        firstKeyFrame = true
                        recordUIListener?.resetRecorderButton()
                        recordUIListener?.startChronometerUI()
                        setupSegmentationTimer()
                    }
                    if (captureLq) {
                        if ((bufferInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
                            // Codec config info. Only expected on first packet. One way to
                            // handle this is to manually stuff the data into the MediaFormat
                            // and pass that to configure(). We do that here to exercise the API.
                            hqToLqFormat = MediaFormat.createVideoFormat("video/avc", mVideoSize!!.width, mVideoSize!!.height)
                            hqToLqFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, colorFormat)
                            hqToLqFormat.setByteBuffer("csd-0", hqOutputBuffer)
                            try {
                                decoder.configure(hqToLqFormat, null, null, 0)
                            } catch (e: MediaCodec.CodecException) {
                                Log.e(TAG, "decodeFrame: codec config failed ${e.diagnosticInfo}")
                                // ...
                                return@post
                            }
                            decoder.start()
                            decoderStarted = true
                            decoderInputBuffers = decoder.inputBuffers
                            decoderOutputBuffers = decoder.outputBuffers
                            decoderConfigured = true
                        } else {
                            // Get a decoder input buffer, blocking until it's available.
                            val inputBufIndex = decoder.dequeueInputBuffer(-1)
                            val inputBuf: ByteBuffer? = decoderInputBuffers?.get(inputBufIndex)
                            inputBuf?.clear()
                            inputBuf?.put(hqOutputBuffer!!)
                            decoder.queueInputBuffer(inputBufIndex, 0, bufferInfo.size, bufferInfo.presentationTimeUs, bufferInfo.flags)
                        }
                    }
                    videoCodec.releaseOutputBuffer(hqOutputBufferIndex, false)
                    if (captureLq && decoderConfigured) {
                        val decoderStatus = decoder.dequeueOutputBuffer(decoderBufferInfo, 0)
                        if (decoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
                            // no output available yet
                            Log.d(TAG, "decoder: no output available yet")
                        } else if (decoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
                            decoderOutputBuffers = decoder.outputBuffers
                        } else if (decoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                            hqToLqFormat = decoder.outputFormat
                            Log.d(TAG, "encodeFrames: new format $hqToLqFormat")
                        } else if (decoderStatus < 0) {
                            Log.e(TAG, "decodeFrame: unexpected result from decoder.dequeueOutputBuffer: $decoderStatus")
                        } else {
                            val outputBuffer = decoderOutputBuffers!![decoderStatus]
                            outputBuffer.position(decoderBufferInfo.offset)
                            outputBuffer.limit(decoderBufferInfo.offset + decoderBufferInfo.size)
                            if (decoderBufferInfo.size != 0) {
                                // send this to the lq codec
                                val lqInputBufferIndex = lqVideoCodec.dequeueInputBuffer(-1)
                                val lqInputBuffer = lqVideoCodec.getInputBuffer(lqInputBufferIndex)
                                lqInputBuffer?.clear()
                                lqInputBuffer?.put(outputBuffer)
                                if (lqFrameCount == segmentAtFrameCount) {
                                    lqVideoCodec.queueInputBuffer(lqInputBufferIndex, 0, decoderBufferInfo.size, decoderBufferInfo.presentationTimeUs, MediaCodec.BUFFER_FLAG_KEY_FRAME)
                                } else {
                                    lqVideoCodec.queueInputBuffer(lqInputBufferIndex, 0, decoderBufferInfo.size, decoderBufferInfo.presentationTimeUs, decoderBufferInfo.flags)
                                }
                            }
                            decoder.releaseOutputBuffer(decoderStatus, false)
                        }
                        val lqOutputBufferIndex = lqVideoCodec.dequeueOutputBuffer(lqBufferInfo, 0)
                        if (lqOutputBufferIndex >= 0) {
                            val lqOutputBuffer = lqVideoCodec.getOutputBuffer(lqOutputBufferIndex)
                            if (lqBufferInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG != 0) {
                                lqBufferInfo.size = 0
                            }
                            if (lqBufferInfo.size != 0) {
                                lqFrameCount++
                                // start the muxer if not started
                                if (!lqMuxerStarted && lqMuxer != null) {
                                    lowQualityVideoTrackIndex = lqMuxer!!.addTrack(lqVideoCodec.outputFormat)
                                    lqMuxer?.start()
                                    lqMuxerStarted = true
                                }
                                // write the data to the muxer
                                lqOutputBuffer?.apply {
                                    position(lqBufferInfo.offset)
                                    limit(lqBufferInfo.offset + lqBufferInfo.size)
                                    if (lqMuxerStarted) {
                                        lqMuxer?.writeSampleData(lowQualityVideoTrackIndex, this, lqBufferInfo)
                                        if ((lqFrameCount > segmentAtFrameCount || !captureFrames.get()) &&
                                            bufferInfo.flags and MediaCodec.BUFFER_FLAG_KEY_FRAME != 0
                                        ) {
                                            Log.d(TAG, "encodeSyncVideo: lq file count incremented to $lqFileCount @ Frame $lqFrameCount, segmentation at $segmentAtFrameCount")
                                            lqFrameCount = 0 // reset the frame count
                                            lqMuxer!!.stop()
                                            lqMuxerStarted = false
                                            lqMuxer?.release()
                                            lqMuxer = null
                                            startMuxService(true)
                                            if (captureFrames.get()) {
                                                lqFileCount++
                                                // create a new snipback output file
                                                outputFilePath = createOutputMediaFile()!!.absolutePath
                                                segmentCreateTime = System.currentTimeMillis()
                                                lqMuxer = MediaMuxer(
                                                    paths.INTERNAL_VIDEO_DIR + File.separator + "out-${lqFileCount}-lq.mp4",
                                                    MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4
                                                )
                                                videoTrackIndex = lqMuxer?.addTrack(reuseVideoFormat)!!
                                                lqMuxer?.start()
                                                lqMuxerStarted = true
                                            }
                                        }
                                    }
                                }
                            }
                            lqVideoCodec.releaseOutputBuffer(lqOutputBufferIndex, false)
                        }
                    }
                }
            } // end of video codec started
            if ((bufferInfo.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                break
            }
            if (!captureFrames.get() && isEnding && bufferInfo.flags and MediaCodec.BUFFER_FLAG_KEY_FRAME != 0) {
                muxLast = true
                break
            }
        }
        try {
            audioRecorder?.stop()
        } catch (e: IllegalStateException) {
            e.printStackTrace()
        }
        audioRecorder?.release()
        try {
            muxer?.stop()
        } catch (e: IllegalStateException) {
            e.printStackTrace()
        }
        muxer?.release()
        muxerStarted = false
        if (captureLq) {
            try {
                lqMuxer?.stop()
            } catch (e: IllegalStateException) {
                e.printStackTrace()
            }
            lqMuxer?.release()
            lqMuxerStarted = false
        }
        if (muxLast) {
            if (captureLq) {
                val hqAudioIs = File(paths.INTERNAL_VIDEO_DIR + File.separator + "audio-${hqFileCount}.mp3").inputStream()
                val lqAudioOs = File(paths.INTERNAL_VIDEO_DIR + File.separator + "audio-${hqFileCount}-lq.mp3").outputStream()
                val copied = FileUtils.copy(hqAudioIs, lqAudioOs)
                Log.d(TAG, "encodeSyncVideo: copied $copied bytes from hq to lq audio file")
                startMuxService(true)
            }
            startMuxService(false)
        }
        videoCodec.stop()
        videoCodecStarted = false
        videoCodec.release()
        if (captureLq) {
            decoder.stop()
            decoderStarted = false
            decoder.release()
            lqVideoCodec.stop()
            lqVideoCodecStarted = false
            lqVideoCodec.release()
        }
        muxer = null
        lqMuxer = null
        audioRecorder = null
        canShutDown = true
    }
}
```

Maybe I'm going about this wrong; I don't know anything about OpenGL, so I wasn't able to make much sense of the examples that use it. Anyway, I would appreciate any help you can provide. Thanks!

  • Just making sure you've seen: developer.android.com/media/camera/camera2/… For the devices which have issues (and the ones that worked), have you checked their Performance Class to see if they meet your needs. Commented Aug 21, 2024 at 16:31
  • Hi @MorrisonChang! I have not checked the performance class before, perhaps this will hint towards what the device can support and at what resolution. Thanks! Do you also have some ideas/advice/examples on how I can fix the freeze frame or improve the performance of capturing and re-encoding 4k videos on devices that do support it? Commented Aug 22, 2024 at 4:32
