
I am trying to use picamera to record and save a high-resolution video while capturing low-resolution images for processing with OpenCV.

    import picamera
    import picamera.array
    import cv2
    import numpy as np

    camera = picamera.PiCamera()
    camera.resolution = (1920, 1088)
    camera.framerate = 30
    camera.start_recording('testRecording.h264')
    cap = picamera.array.PiRGBArray(camera, size=(640, 368))
    for x in range(250):
        print(x)
        cap = picamera.array.PiRGBArray(camera)
        camera.capture(cap, use_video_port=True, resize=(640, 368), format="bgr")
        img = cap.array
        cv2.imshow("resized image", img)
        cv2.waitKey(1)

I have tried using this code, and it throws the following error:

    Traceback (most recent call last):
      File "/home/pi/recordingTest/simultanious capture test.py", line 18, in <module>
        camera.capture(cap, use_video_port=True, resize = (640,360), format="bgr")
      File "/usr/lib/python3/dist-packages/picamera/camera.py", line 1421, in capture
        if not encoder.wait(self.CAPTURE_TIMEOUT):
      File "/usr/lib/python3/dist-packages/picamera/encoders.py", line 395, in wait
        self.stop()
      File "/usr/lib/python3/dist-packages/picamera/encoders.py", line 419, in stop
        self._close_output()
      File "/usr/lib/python3/dist-packages/picamera/encoders.py", line 349, in _close_output
        mo.close_stream(output, opened)
      File "/usr/lib/python3/dist-packages/picamera/mmalobj.py", line 371, in close_stream
        stream.flush()
      File "/usr/lib/python3/dist-packages/picamera/array.py", line 238, in flush
        self.array = bytes_to_rgb(self.getvalue(), self.size or self.camera.resolution)
      File "/usr/lib/python3/dist-packages/picamera/array.py", line 127, in bytes_to_rgb
        'Incorrect buffer length for resolution %dx%d' % (width, height))
    picamera.exc.PiCameraValueError: Incorrect buffer length for resolution 1920x1088

If I remove resize = (640,368) from line 18, the code no longer errors; however, the capture is then taken at the full resolution, and I want a low-resolution image.

I know I could resize the capture afterwards, but I believe this would be more intensive for the Pi, so I would ideally like to capture a low-res image in the first place.

Any help understanding the cause of this, and finding a solution that works would be greatly appreciated :)

1 Answer


Basically, if you would like to store an image in a buffer, you need to:

  1. Initialize the buffer (the cap variable in your code) with the size of the image you want to store ((640, 368) in your case)
  2. Put the image into the buffer (using camera.capture(cap, ...))
  3. Clear the buffer before you use it again (cap.truncate(0)) - this step is easy to forget, and the resulting "incorrect buffer length" error is confusing because it does not point at the missing truncate call

The problem in your code is that cap is assigned twice (once before the loop with the correct size, and again inside the loop without it), and the buffer is never truncated between iterations.

The corrected code below should work fine:

    import picamera
    import picamera.array
    import cv2
    import numpy as np

    camera = picamera.PiCamera()
    camera.resolution = (1920, 1088)
    camera.framerate = 30
    camera.start_recording('testRecording.h264')
    cap = picamera.array.PiRGBArray(camera, size=(640, 368))
    for x in range(250):
        print(x)
        camera.capture(cap, use_video_port=True, resize=(640, 368), format="bgr")
        img = cap.array
        cv2.imshow("resized image", img)
        cv2.waitKey(1)
        cap.truncate(0)

Changes:

  1. The cap = picamera.array.PiRGBArray(camera) line inside the loop was removed, so the buffer created before the loop (with the correct size) is reused
  2. cap.truncate(0) was added at the end of each iteration to clear the buffer before the next capture
