So I got a Raspberry Pi Zero Wireless with the Pi camera and I would like to use the camera with OpenCV in Python. The problem is, I know I can do this easily on the Pi itself, but I highly doubt the Pi will be able to handle the processing I want to do. I might be able to optimize my code sometime in the future, but I don't want to worry about that for now.
So what I want to do instead is send the camera data to my PC and run all the Python/OpenCV code there, ideally over Bluetooth. A USB connection to send the data would be sufficient as well.
I can find a lot about streaming video to a PC using VLC, but not about how to get that data into Python, and latency seems to be a problem with this method as well.
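The closest I've come is the idea of letting OpenCV open a network stream directly; apparently cv2.VideoCapture accepts a URL if OpenCV is built with FFmpeg support. An untested sketch, assuming the Pi pipes raw H.264 from raspivid into netcat on port 8000 (hostname and port here are placeholders):

# On the Pi (shell), e.g.:
#     raspivid -t 0 -w 640 -h 480 -fps 30 -o - | nc -l -p 8000
import cv2

# Open the TCP stream like a camera (needs OpenCV's FFmpeg backend)
cap = cv2.VideoCapture('tcp://raspberrypi.local:8000')

while True:
    ok, frame = cap.read()  # frame arrives as a BGR numpy array
    if not ok:
        break
    cv2.imshow('Pi stream', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

But I don't know whether this would do any better on latency than VLC.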
If there is no easy solution for this I might just buy a tiny USB cam for now.
EDIT:
So I tried Dave Jones' suggestion and went with this: on the Pi I simply use the code provided in rapid-capture-and-streaming, and I can get close to 60 fps at a decent enough resolution. The code looks like this:
import io
import socket
import struct
import time
import picamera

class SplitFrames(object):
    def __init__(self, connection):
        self.connection = connection
        self.stream = io.BytesIO()
        self.count = 0

    def write(self, buf):
        if buf.startswith(b'\xff\xd8'):
            # Start of new frame; send the old one's length
            # then the data
            size = self.stream.tell()
            if size > 0:
                self.connection.write(struct.pack('<L', size))
                self.connection.flush()
                self.stream.seek(0)
                self.connection.write(self.stream.read(size))
                self.count += 1
                self.stream.seek(0)
        self.stream.write(buf)

client_socket = socket.socket()
client_socket.connect(('my_server', 8000))
connection = client_socket.makefile('wb')
try:
    output = SplitFrames(connection)
    with picamera.PiCamera(resolution='853x480', framerate=60) as camera:
        time.sleep(2)
        start = time.time()
        camera.start_recording(output, format='mjpeg')
        camera.wait_recording(30)
        camera.stop_recording()
        # Write the terminating 0-length to the connection to let the
        # server know we're done
        connection.write(struct.pack('<L', 0))
finally:
    connection.close()
    client_socket.close()
    finish = time.time()
print('Sent %d images in %d seconds at %.2ffps' % (
    output.count, finish-start, output.count / (finish-start)))

On the client side I'm basically using the code from capturing-to-a-network-stream with an added cv2.imshow to get a preview. Everything displays fine, but with a little bit of delay, maybe a second or less.
import io
import socket
import struct
from PIL import Image
import cv2
import numpy as np

# Start a socket listening for connections on 0.0.0.0:8000 (0.0.0.0 means
# all interfaces)
server_socket = socket.socket()
server_socket.bind(('0.0.0.0', 8000))
server_socket.listen(0)

# Accept a single connection and make a file-like object out of it
connection = server_socket.accept()[0].makefile('rb')
try:
    while True:
        # Read the length of the image as a 32-bit unsigned int. If the
        # length is zero, quit the loop
        image_len = struct.unpack('<L', connection.read(struct.calcsize('<L')))[0]
        if not image_len:
            break
        # Construct a stream to hold the image data and read the image
        # data from the connection
        image_stream = io.BytesIO()
        image_stream.write(connection.read(image_len))
        # Rewind the stream, open it as an image with PIL and do some
        # processing on it
        image_stream.seek(0)
        image = Image.open(image_stream)
        # PIL decodes to RGB but OpenCV expects BGR, so swap the channels
        cv_image = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2BGR)
        cv2.imshow('Stream', cv_image)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
finally:
    connection.close()
    server_socket.close()

If I can get this working with an even lower delay, I would like to go for an even higher resolution at 60 fps. I only need grayscale images on the client, so if I could send only grayscale images from the server side, that should give me some more headroom as well.
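One idea for the grayscale part (untested sketch): picamera's color_effects setting should make the camera itself produce grayscale JPEGs by fixing the U/V channels at 128, so the server would only need one extra line, and on the client I could skip PIL entirely and decode straight to a single-channel array with cv2.imdecode:

# Server side: one extra line inside the `with picamera.PiCamera(...)` block
#     camera.color_effects = (128, 128)  # fix U/V at 128 -> grayscale frames

# Client side: decode the length-prefixed JPEG bytes directly with OpenCV
import struct

import cv2
import numpy as np

def read_gray_frame(connection):
    # Read the 32-bit little-endian length prefix; 0 means the stream is done
    image_len = struct.unpack('<L', connection.read(struct.calcsize('<L')))[0]
    if not image_len:
        return None
    data = connection.read(image_len)
    # np.frombuffer wraps the bytes without copying; imdecode does the
    # JPEG decompression straight into a single-channel array
    return cv2.imdecode(np.frombuffer(data, dtype=np.uint8),
                        cv2.IMREAD_GRAYSCALE)

Dropping the PIL round-trip from the loop might also shave a bit off the delay.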
With the bcm2835-v4l2 driver enabled, I access the camera like any normal camera attached to the Linux system.
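A minimal sketch of what that looks like (assuming the module is loaded, e.g. with sudo modprobe bcm2835-v4l2, so the camera shows up as /dev/video0):

import cv2

# With the bcm2835-v4l2 module loaded, the Pi camera is just /dev/video0
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

ok, frame = cap.read()
if ok:
    cv2.imwrite('frame.jpg', frame)  # grab a single test frame
cap.release()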