Timeline for Pi camera v2 - fast, full sensor capture mode with downsampling
Current License: CC BY-SA 3.0
11 events
| when | type | what | by | comment | license |
|---|---|---|---|---|---|
| May 17, 2018 at 13:51 | comment | added | Dave Jones | For still images when `use_video_port=False`, not normally. When `sensor_mode` is 0 and that parameter is False, the firmware forces a temporary switch to mode 2 or 3 during the capture. I'm not sure what happens when the sensor mode is forced in the constructor, though (as in the answer above): it might work, might not! | |
| May 16, 2018 at 18:30 | comment | added | James S | Reading the docs, I assumed that I couldn't use mode 4 for still images. The bullet points under the auto-mode heuristic suggest that mode 4 is not "acceptable" for image captures unless `use_video_port=True`. Am I misunderstanding? | |
| Dec 7, 2017 at 10:01 | comment | added | Dave Jones | @mrplants - no, it won't be overwritten; there's a slight white lie in this answer, in that the `buf` that `write()` receives is actually a copy of the buffer picamera gets from the camera. So the code in the answer doesn't do any copying, but picamera already has. That means the buffer is owned by Python, won't be overwritten by the next call, and will be de-allocated via normal garbage collection when you're done with it. This isn't the highest-performance option, but it's by far the simplest to code and work with (and the performance is sufficient most of the time). | |
| Dec 7, 2017 at 3:56 | comment | added | mrplants | If I use `np.frombuffer`, will the buffer be overwritten during the next call to `write()`? Is there a way to prevent the buffer being overwritten until I'm finished processing it? | |
| Dec 18, 2016 at 10:43 | comment | added | Dave Jones | Ah, I think you're using Python 2, which imposes the one-dimensional buffer requirement; that doesn't apply on Python 3, so you can skip the reshape step. | |
| Dec 18, 2016 at 9:44 | comment | added | Florin Andrei | @DaveJones your first example throws some errors. Here's the fixed code: gist.github.com/FlorinAndrei/281662a59dec0d3cbb902cb3be6d79f6 | |
| Dec 14, 2016 at 21:11 | comment | added | Dave Jones | I've been fleshing out the camera hardware chapter for the next release, but I should probably add a bit about it all being implemented in the GPU; I just take that knowledge for granted, but reading the docs it isn't obvious at all. | |
| Dec 14, 2016 at 19:10 | comment | added | Florin Andrei | BTW, the info in this answer is so useful that it should be added to the picamera module documentation somewhere, at least the first half of the answer. I had no idea how all that processing was done on the GPU, etc. Maybe the camera documentation itself should mention this stuff in a tech-details blurb somewhere. The way the camera works and integrates with the RPi platform and the software is a lot better than I thought. | |
| Dec 14, 2016 at 19:07 | comment | added | Florin Andrei | Thank you so much Dave, your answers are always great! This is EXACTLY what I need. The image size is not set in stone; I'll try to find the maximum size that allows the rest of the .py code (TensorFlow) to do image recognition at a good enough pace on the Pi 3; I just need full sensor data for a decent view angle. I'm still debating whether the whole thing will be one loop (capture / image recognition / react), or whether I should have two separate units (capture as in your 2nd example in one process, sharing NumPy arrays asynchronously with the TensorFlow process). #1 is easier, #2 is faster. Need to think about it. | |
| Dec 14, 2016 at 18:50 | vote | accept | Florin Andrei | | |
| Dec 14, 2016 at 10:33 | history | answered | Dave Jones | | CC BY-SA 3.0 |
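To illustrate the sensor-mode discussion above: a minimal sketch of forcing the v2 camera's full-field-of-view binned mode (`sensor_mode=4`) in the constructor and capturing via the video port, so the firmware should not temporarily switch to mode 2 or 3. Whether a still capture with `use_video_port=False` would honour the forced mode is exactly the open question in the comments, so this is illustrative only. The function name, output filename, and the 1640x1232 resolution are assumptions, not from the original answer.

```python
def capture_full_fov(path='full_fov.jpg'):
    """Capture a still using the full-FoV 2x2-binned mode of the v2 camera.

    Hypothetical sketch: forces sensor_mode=4 in the PiCamera constructor
    and captures through the video port, matching the behaviour discussed
    in the comment thread.  Requires a Raspberry Pi with a camera attached.
    """
    import picamera  # imported lazily: only available on the Pi itself

    # sensor_mode=4 selects the full-field-of-view, 2x2-binned mode on
    # the v2 module; use_video_port=True avoids the firmware's temporary
    # switch to mode 2/3 that happens for ordinary still captures.
    with picamera.PiCamera(sensor_mode=4, resolution=(1640, 1232)) as camera:
        camera.capture(path, use_video_port=True)
```

Capturing through the video port trades some image quality (no post-processing pass) for keeping the configured sensor mode, which fits the fast-capture goal in the question title.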
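The `np.frombuffer` exchange above can be sketched as a small custom-output class: picamera calls `write()` once per frame, and, per Dave Jones's comment, the `buf` it receives is already a private copy, so wrapping it with `np.frombuffer` and keeping a reference is safe. The class name and the 640x480 RGB frame size are assumptions for illustration; on Python 3 the reshape can be applied directly (the extra one-dimensional step only mattered on Python 2).

```python
import numpy as np

# Assumed frame geometry for this sketch (RGB, 3 bytes per pixel).
WIDTH, HEIGHT = 640, 480

class FrameGrabber:
    """File-like output: picamera would call write() once per RGB frame."""

    def __init__(self):
        self.frames = []

    def write(self, buf):
        # buf is already a private copy, so frombuffer does no extra
        # copying and the array stays valid after write() returns.
        frame = np.frombuffer(buf, dtype=np.uint8)
        self.frames.append(frame.reshape((HEIGHT, WIDTH, 3)))
        return len(buf)  # file-like convention: bytes consumed

# With real hardware, usage would look like (not run here):
# with picamera.PiCamera(resolution=(WIDTH, HEIGHT)) as camera:
#     camera.start_recording(FrameGrabber(), format='rgb')
```

Since each frame is an independent NumPy array owned by Python, it is garbage-collected normally once dropped from `self.frames`, matching the de-allocation behaviour described in the thread.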