Automatic Exposure Compensation Testing for the Pi Camera
In the four weeks since the Pi camera came out, I’ve spent a fair amount of time testing and fiddling with different things. I’ve also made two RasPiCamcorders – I hope to document the Mk2 version soon. That’s the reason I haven’t blogged much in the last week: I’ve been working on its hardware and software.
One thing I hadn’t done until this week was test out the various settings for exposure compensation and white balance. Previously, I’d only used the automatic settings, which work pretty well. But there are some circumstances where it’s nice to be able to override these manually.
I want a video demo
I wanted to make a video showing each of the settings. But shooting at every EV setting from -10 to +10 and then editing the video seemed like a right royal nuisance. Wouldn’t it be good if we had a computer to do the repetitive, boring bit for us? Hold on, we do. ;)
So I wrote a Python program to shoot a short 10s clip at each of the 21 different settings. That was the easy bit.
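The idea can be sketched in a few lines. This is a minimal sketch only: nothing is executed here, it just builds the command line that raspivid would be called with for each EV setting (the paths and bitrate mirror the full script further down):

```python
# Sketch: one raspivid command per EV setting, from -10 to +10.
length_of_clip = 10000   # raspivid's -t takes milliseconds

def build_command(ev):
    # zfill(5) keeps the filenames a fixed width, e.g. -0010, 00000, 00010
    return ("raspivid -t " + str(length_of_clip) +
            " -o /home/pi/videoEVtest-" + str(ev).zfill(5) +
            ".h264 -fps 25 -b 15000000 -ev " + str(ev))

commands = [build_command(ev) for ev in range(-10, 11)]
print(len(commands))     # 21 settings in total
print(commands[0])
```

On the Pi itself, each of these strings gets handed to `subprocess.call(..., shell=True)`.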
But I still had to convert the clips from raw .h264 into something I could edit. So, using MP4Box, we do that programmatically as well.
Wouldn’t it be nice if we could trim and edit programmatically too?
For each clip, during the first few frames, the camera is setting its exposure levels. So I wanted to be able to trim off the start from each clip and then put them all together in sequence.
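Handily, MP4Box can wrap and trim in one pass: `-add` imports the raw H.264 stream into an MP4 container, and `-splitx start:end` keeps only that time window. A sketch of the command builder (the 3 to 5 second window is the one the script uses; the filename is illustrative):

```python
# Sketch: build an MP4Box command that wraps a raw .h264 clip in an
# MP4 container and keeps only seconds 3-5, discarding the start
# where the camera is still settling its exposure.
def trim_command(stem, start=3, end=5):
    infile = stem + ".h264"
    outfile = stem + ".mp4"
    return "MP4Box -add %s -splitx %d:%d %s" % (infile, start, end, outfile)

print(trim_command("/home/pi/videoEVtest-00003"))
```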
I wanted to do all this, along with conversion and filming in one Python program. (This is how software applications grow – just keep adding features.)
I spent a lot of time unsuccessfully messing about with FFMPEG and AVCONV, and then eventually realised that what I needed to do could all be done with MP4Box. (It helps to read the documentation sometimes.)
I think the reason it wouldn’t work with FFMPEG & AVCONV was that the clips I was shooting were too short. Originally I was shooting for just four seconds and wanted to trim off the first second and the last second. Once I started shooting 10 seconds at each setting it worked fine with MP4Box, and would probably work with the others as well.
Shoot, trim, convert
So, now the program shoots a 10 second clip at each setting. Then, once it’s done all the shooting, it trims and wraps the clips into an MP4 wrapper. That’s great! But wouldn’t it be nice if it could join all the shortened clips together in the right order to make a little movie?
So, I added a final step where it adds all the trimmed clips (concatenates) into one MP4 file in the order they were shot.
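The joining step is just MP4Box again, chaining one `-cat` per clip before the output filename. A sketch of how that command gets built up (filenames illustrative):

```python
# Sketch: chain trimmed clips together with MP4Box's -cat flag,
# in the order they appear in the list.
def concat_command(clips, outfile):
    cmd = "MP4Box"
    for clip in clips:
        cmd += " -cat " + clip
    return cmd + " " + outfile

clips = ["/home/pi/videoEVtest-%05d.mp4" % i for i in range(0, 3)]
print(concat_command(clips, "/home/pi/EVtest.mp4"))
```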
So we’re getting the Pi to do ALL the work. Very rewarding to have worked it all out. :) It took me most of Monday, but it was a very good learning exercise.
Here’s the Video
Here’s the Python program
#!/usr/bin/env python2.7
# picam-evtest script by Alex Eames https://raspi.tv
# https://raspi.tv/2013/automatic-exposure-compensation-testing-for-the-pi-camera
# needs MP4Box (sudo apt-get install gpac)

from subprocess import call
from time import sleep

rec_num = 0
length_of_clip = 10000                 # milliseconds
base_vidfile = "raspivid -t " + str(length_of_clip) + " -o /home/pi/videoEVtest-"
file_list = []

####### Camera LED control ##################################
# Set up GPIO 5 for camera LED control and GPIO 22 for rear LED control.
# To have manual control of the camera LED through GPIO 5, simply add
# the line disable_camera_led=1 to your /boot/config.txt file.
# If LED control is not needed, comment out the GPIO import and setup
# lines below, the four GPIO.output() calls in start_recording() and
# the GPIO.cleanup() at the end.
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
GPIO.setup(5, GPIO.OUT)
GPIO.setup(22, GPIO.OUT)               # I use 22 for an additional rear LED

# function for splitting the input video files and packaging the output as MP4
def h264_mp4(i):
    global file_list
    infile = "/home/pi/videoEVtest-" + str(i).zfill(5) + ".h264"
    outfile = "/home/pi/videoEVtest-" + str(i).zfill(5) + ".mp4"
    mp4b_command = "MP4Box -add " + infile + " -splitx 3:5 " + outfile
    print mp4b_command
    file_list.append(outfile)
    call([mp4b_command], shell=True)
    sleep(0.3)

# function for doing the recordings
def start_recording(rec_num):
    vidfile = base_vidfile
    vidfile += str(rec_num).zfill(5)
    vidfile += ".h264 -fps 25 -b 15000000 -ev " + str(rec_num)    # removed -vs
    print vidfile
    print "starting recording"
    GPIO.output(5, 1)
    GPIO.output(22, 1)
    call([vidfile], shell=True)
    GPIO.output(5, 0)
    GPIO.output(22, 0)
    print "finished recording %s" % str(rec_num).zfill(5)

# loop from 0 EV up to +10
for i in range(0, 11):
    start_recording(i)
    sleep(1)

# then another loop from -1 down to -10
for i in range(-1, -11, -1):
    start_recording(i)
    sleep(1)

#### split and convert files for EV 0 to 10 ####
for i in range(0, 11):
    h264_mp4(i)

#### write the first half (EV 0 to 10) of the final video file ####
mp4box_cmd = "MP4Box"
for file in file_list:
    mp4box_cmd += " -cat " + file
mp4box_cmd += " first-half.mp4"
print mp4box_cmd
call([mp4box_cmd], shell=True)

#### split and convert files for EV -1 to -10 ####
file_list = []
for i in range(-1, -11, -1):
    h264_mp4(i)

#### add the rest of the files to the final video file ####
mp4box_cmd = "MP4Box -cat /home/pi/first-half.mp4 -cat /home/pi/videoEVtest-00000.mp4"
for file in file_list:
    mp4box_cmd += " -cat " + file
mp4box_cmd += " /home/pi/EVtest.mp4"
print mp4box_cmd
call([mp4box_cmd], shell=True)
sleep(1)
GPIO.cleanup()
It sometimes “errors” at the splitting stage if you’ve still got files left over from last time.
If you run the program more than once, you should delete or rename the files it generates before running it again. This could be added to the program, but I’ve got to leave something for you guys to do. I’m not doing it ALL for you.
Don’t forget to delete first-half.mp4 and EVtest.mp4 as well, or the program will keep appending the new stuff to the end of the old files.
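If you fancy automating that housekeeping too, here’s one way it could look. This is a sketch, not part of the published script: it uses only the standard library, and the filename patterns match the files the script writes to /home/pi.

```python
# Sketch: remove leftover output files from a previous run, so MP4Box
# doesn't error at the splitting stage or append to old movie files.
import glob
import os

def clean_previous_run(directory="/home/pi"):
    patterns = ["videoEVtest-*.h264", "videoEVtest-*.mp4",
                "first-half.mp4", "EVtest.mp4"]
    for pattern in patterns:
        for path in glob.glob(os.path.join(directory, pattern)):
            os.remove(path)
            print("deleted " + path)
```

Call `clean_previous_run()` once at the top of the program, before any recording starts.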
Download and run
If you want to try out this program, you can download the file picam-evtest.py
Or you can grab it from your Pi directly with…
Make sure you have MP4Box installed
sudo apt-get update && sudo apt-get install gpac
Then run it with
sudo python picam-evtest.py
You can omit the sudo if you commented out all the GPIO bits.
Now have fun with it
This has been a lot of fun. I’m not really into low level stuff, but this kind of high level programming – standing on the shoulders of giants – is right up my street.
I would encourage you to take this script and modify it to meet your own requirements. I’ve already done a version for white balance. You might want to add a “check for files from a previous run” routine to eliminate the errors I mentioned above.
Whatever you do with it, I ask only four things…
- Enjoy it
- Learn from it
- Add to it
- Come back here and tell us about what you did
I really could have spent a lot of time trying to figure out the syntax for calling the Raspberry Pi camera from a Python program, but with your helpful example I was able to get it running in pretty short order.
My goal was much simpler than yours. I’m just interested in timelapse photography at this point. I know the camera has a -tl option but I’ve lost connectivity a few times and think that by calling “raspistill” from a python program, it might be more stable. We’ll see . . .
Have you seen the timelapse film blog I did a couple of weeks ago? The way I do it, connectivity doesn’t matter. Using screen, you can detach from the process and leave it running.
Manuals with all the commands and options in are great, but to actually do something, I think examples are King. :)
This is actually where Linux in general sucks bigtime, as much documentation contains few, if any, actual examples.
I assume you are referring to the 9 days of weather in 1 minute post? I’ve seen it now ; )
The connectivity issue I thought I might have was between the camera and the pi, not wi-fi connectivity.
I started the camera on a timelapse sequence and it stopped recording after 3 hours and 8 or 9 shots. It was supposed to run for 10 hours. The last shot was 0 bytes and, although the process was still running, shots weren’t being taken. I’ve had a fair number of MMAL-type errors with the camera, so I attributed it to the fact that the camera is continuously running video if you use the -tl option with raspistill, versus calling raspistill each time from Python. By not keeping the camera continuously engaged, maybe there’s less chance of hiccups? What do you think?
I usually would run the process with “nohup raspistill &”. This seems to work fine for me but taking it a step further, installing screen and being able to log back into the process is something to look at for the future.
Yep, 9 days weather.
Oh, I assumed you meant wifi.
Certainly, calling raspistill for each shot would use a lot less power. If your time intervals aren’t critical to within a couple of seconds, there’s no reason not to do it your way.
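For what it’s worth, the per-shot approach is only a few lines. A sketch (the interval, folder and frame count are made up, and the commands are just printed here; on the Pi you’d pass each one to subprocess.call as in the script above):

```python
# Sketch: take one still per interval by invoking raspistill afresh
# each time, instead of leaving it running with -tl.
def still_command(frame, directory="/home/pi/timelapse"):
    # %05d keeps frame numbers a fixed width so they sort correctly
    return "raspistill -o %s/frame-%05d.jpg" % (directory, frame)

# e.g. 5 frames, one every 60 seconds:
for frame in range(5):
    print(still_command(frame))
    # sleep(60) between shots on the real Pi
```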
I was away in France for a week. It was nice to be able to log in from there and check if it was still running.
Nice one Alex!
Yeah, it also has parallels with http://en.wikipedia.org/wiki/Unix_philosophy – building clever high-level tools by combining several simpler low-level tools together :-) (with Python in this example being your http://en.wikipedia.org/wiki/Glue_language)
Today’s Python tip – instead of messing about with building your vidfile command by joining several shorter strings together, you could just do it in a single step using string formatting:
base_vidfile = "raspivid -t %d -o /home/pi/videoEVtest-%05d.h264 -fps 25 -b 15000000 -ev %d"
vidfile = base_vidfile % (length_of_clip, rec_num, rec_num)
And I’d personally rename the variables to base_vidcmd and vidcmd to make it clearer that they’re storing command lines, and not simple filenames ;-)
Thanks for this great script!
As a matter of fact, I’ve updated the script with additions to use a dynamic folder. It replaces “/home/pi/” with the folder the script is run from.
“…….” + pwd + “/