I once saw a Reddit post about timelapse photography, a technique where you merge photos taken at different hours, but from the same spot, into a single image. The OP shared his website, [www.timelapse-photography.de](http://www.timelapse-photography.de/), if you want to take a look. The idea is to set up the camera in a nice location, take pictures at dawn, noon and dusk, and merge them by selecting interesting features from each one. I was pretty excited to try it out myself, yet I feared it would be logistically complex to execute. Sunset is fine, but sunrise is early, especially in summer! So I tried two different things.
# Manual Shooting
First, I simply took pictures manually: in the afternoon, at dusk and at night. That let me adjust camera settings between shots, which is rather important since we are trying to catch subtle light changes. I then used Photoshop's auto-alignment feature to correct for slight camera movements. Here are the photos:
# Automatic Shooting
But I also wanted to try something more audacious: render a 24-hour picture made from many parts, something like one photo every 15 minutes. But again, staying up all night to take four pictures an hour isn't very appealing. So I did some research and found out you can control a camera with a Raspberry Pi and a program called [gphoto2](https://github.com/gphoto/gphoto2) (the command-line interface for [libgphoto2](https://github.com/gphoto/libgphoto2)). It allows taking pictures, downloading them, changing settings, etc. Also, there are dummy batteries for cameras that let you plug the camera into a power outlet, so you won't have to worry about it dying. So you can have the camera automatically take pictures at regular intervals. Here is how the commands look:
```sh
sudo gphoto2 --set-config /main/capturesettings/exposurecompensation=-2
sudo /usr/local/bin/gphoto2 --capture-image-and-download --no-keep --filename '/media/usb/%Y_%m_%d_%H_%M_%S.%C'
```
The second line takes a picture and saves it to external storage. The first one changes a camera setting, `exposurecompensation`, which controls the target of the automatic exposure mode and varies between -5 and +5 on my camera. Using auto mode, I expected pictures at night to be underexposed and those at noon overexposed. To cope with that, my strategy was to take not one picture at a time but a whole batch, with different exposure compensations (-5, -2, 0, 2, 5), and then make a selection afterwards.
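On the Pi, one batch can be scripted by wrapping those two commands in a loop. Here is a minimal sketch of that idea (the function names are mine, not from my actual setup; it assumes gphoto2 is installed and the camera is connected):

```python
import subprocess

# Exposure compensation values for one batch, as described above.
EXPOSURES = [-5, -2, 0, 2, 5]

def build_commands(ev):
    """Build the two gphoto2 command lines for one shot at compensation `ev`."""
    set_cmd = [
        "gphoto2", "--set-config",
        f"/main/capturesettings/exposurecompensation={ev}",
    ]
    capture_cmd = [
        "gphoto2", "--capture-image-and-download", "--no-keep",
        "--filename", "/media/usb/%Y_%m_%d_%H_%M_%S.%C",
    ]
    return set_cmd, capture_cmd

def capture_batch():
    """Take one photo per exposure compensation value."""
    for ev in EXPOSURES:
        set_cmd, capture_cmd = build_commands(ev)
        subprocess.run(set_cmd, check=True)
        subprocess.run(capture_cmd, check=True)
```

You could then have cron call `capture_batch()` every 15 minutes, and the whole night runs unattended.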
## Striped Pattern
So of course, you could merge those photos using Photoshop or any alternative software. But I wanted to try writing a Python script to do it automatically. The idea was to input a striped pattern, say `X` stripes at an angle `α`, and have the script place the images accordingly.
Without much knowledge, I chose the [Pillow](https://github.com/python-pillow/Pillow) library for the image manipulation. To actually merge photo A onto photo B, there is the [`PIL.Image.composite`](https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.composite) method, which pastes image A onto image B through a per-pixel opacity mask (the related [`PIL.Image.blend`](https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.blend) does the same with a single uniform alpha instead of a mask). Basically, it is just like flattening layers in an image editing program.
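The merging step itself fits in a few lines. A minimal sketch with Pillow, using solid-color stand-ins for the photos (the variable names are mine):

```python
from PIL import Image

# Stand-ins for two photos of the same scene (same size is required).
photo_a = Image.new("RGB", (200, 100), (0, 0, 0))        # e.g. the night shot
photo_b = Image.new("RGB", (200, 100), (255, 255, 255))  # e.g. the noon shot

# Per-pixel opacity mask: where it is 255, photo_b shows; where 0, photo_a.
mask = Image.new("L", (200, 100), 0)
mask.paste(255, (100, 0, 200, 100))  # right half takes photo_b

merged = Image.composite(photo_b, photo_a, mask)
merged.save("merged.jpg")
```

With real photos, `mask` would be the generated stripe mask described below, and intermediate gray values in the mask blend the two photos proportionally.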
## Straight Masks
The mask would then be generated according to the desired stripe pattern. Basically, the script uses a support line starting from the top-left corner and going down at the specified angle. At regular intervals along it, it generates gradients on the orthogonal section. Here is a simple example:
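For the straight case (angle 0, so horizontal stripes), the mask boils down to alternating bands. Here is a simplified sketch without the gradients, just hard stripe edges (the function name is mine):

```python
import numpy as np
from PIL import Image

def straight_stripe_mask(width, height, n_stripes):
    """Binary horizontal-stripe mask: odd stripes opaque, even ones transparent."""
    ys = np.arange(height)
    stripe_index = (ys * n_stripes) // height  # which stripe each row belongs to
    rows = np.where(stripe_index % 2 == 1, 255, 0).astype(np.uint8)
    # Every row has a constant value, so tile it across the width.
    return Image.fromarray(np.tile(rows[:, None], (1, width)), mode="L")

mask = straight_stripe_mask(400, 300, 6)
```

Passing this mask to `Image.composite` interleaves the two photos in six bands; replacing the hard 0/255 edges with ramps gives the gradient version.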
## Oblique Masks
Here it works fine, since the stripes are straight. But as soon as you introduce an angle, problems appear: how do you evenly distribute the stripes on the image? The stripes will not all have the same surface, and they might not end nicely at the bottom-right corner, depending on the aspect ratio. It took me some time to figure out (I do not love geometry): you need to distribute the stripes evenly along a longer distance, namely the segment that starts at the top-left corner, runs orthogonal to the stripes, and ends where it meets the stripe line passing through the bottom-right corner. This might not be very clear, so here's a sketch:
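Under my reading of that construction, the segment's length is the projection of the image rectangle onto the stripe-normal direction, which works out to `W·sin α + H·cos α` for stripes at angle `α` from the horizontal. A small sketch (the function names are mine):

```python
import math

def projected_length(width, height, angle_deg):
    """Length of the image's projection onto the stripe-normal direction.

    For horizontal stripes (0 degrees) this is just the height; for
    vertical stripes (90 degrees) the width; in between, a mix of both.
    """
    a = math.radians(angle_deg)
    return width * math.sin(a) + height * math.cos(a)

def stripe_boundaries(width, height, angle_deg, n_stripes):
    """Evenly spaced stripe boundaries along the projected length."""
    total = projected_length(width, height, angle_deg)
    return [i * total / n_stripes for i in range(n_stripes + 1)]
```

Spacing the stripes along this projected length, rather than along the image height, is what makes the last stripe land exactly on the bottom-right corner.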
And here is what the 45° mask looks like:
To make the transitions smoother, gradients can be extended outside of their stripe with decreasing opacity. The final blending then uses the opacity as a weight for averaging the input photos. Here is the same mask as before, but with feathering:
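The weighted-average step can be sketched with NumPy as follows: each photo carries its own opacity mask, and where the feathered masks overlap, pixels are averaged in proportion to their opacities (the function name is mine):

```python
import numpy as np

def weighted_average(photos, weights):
    """Average photos pixel-wise, weighting each by its mask opacity.

    photos:  list of H x W x 3 uint8 arrays.
    weights: list of H x W float arrays (one opacity mask per photo).
    Opacities need not sum to 1: they are normalized per pixel.
    """
    photos = [p.astype(np.float64) for p in photos]
    total = np.sum(weights, axis=0)
    total = np.where(total == 0, 1.0, total)  # avoid division by zero
    out = sum(p * w[..., None] for p, w in zip(photos, weights))
    return (out / total[..., None]).astype(np.uint8)
```

In the feathered region between two stripes, one weight ramps down while the other ramps up, which is exactly what produces the smooth transition.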
The source code for all this is available on GitHub: [ychalier/blend_images.py](https://gist.github.com/ychalier/84571483008535b8d4fef9afec5a973a).