
Timelapse Photography

July 25, 2022

I once saw a Reddit post about timelapse photography, a technique where you merge photos taken at different hours of the day, from the same spot, into one image. The OP shared his website, [www.timelapse-photography.de](http://www.timelapse-photography.de/), if you want to take a look. Basically, the idea is to set up the camera in a nice location, take pictures at dawn, noon and dusk, and merge them by selecting interesting features from each one. I was pretty excited to try it out myself, but I feared it would be logistically complex to execute. Sunset is fine, but sunrise is early. Especially in summer! So I tried two different approaches.

# Manual Shooting

First, I simply took pictures manually, in the afternoon, at dusk and at night. That let me adjust camera settings by hand, which is rather important since we are trying to catch subtle light changes. I then used Photoshop's auto-alignment feature to correct slight camera movements. Here are the photos:
Window view in the afternoon
Afternoon
Window view at dusk
Dusk
Window view at dusk
Dusk
Window view at night
Night
Window view at night
Night
Window view in the afternoon
Afternoon
# Automatic Shooting

But I also wanted to try something more audacious: render a 24-hour picture made from many parts, something like one photo every 15 minutes. But again, staying up all night to take four pictures an hour isn't very appealing. So I did some research and found out that you can control a camera with a Raspberry Pi and a program called [gphoto2](https://github.com/gphoto/gphoto2) (the command-line interface for [libgphoto2](https://github.com/gphoto/libgphoto2)). It allows taking pictures, downloading them, changing settings, etc. Also, there are dummy batteries for cameras that let you plug them into a power outlet, so you won't have to worry about them dying. So you can have the camera automatically take pictures at regular intervals. Here is how the commands look:

```bash
sudo gphoto2 --set-config /main/capturesettings/exposurecompensation=-2
sudo /usr/local/bin/gphoto2 --capture-image-and-download --no-keep --filename '/media/usb/%Y_%m_%d_%H_%M_%S.%C'
```

The second line takes the picture and saves it to external storage. The first one changes a camera setting, the `exposurecompensation`, which controls the target of the automatic exposure mode and varies between -5 and +5 on my camera. Using auto mode, I expected pictures at night to be underexposed and those at noon to be overexposed. To cope with that, my strategy was to take not one picture at a time but a whole batch, with different exposure compensations (-5, -2, 0, 2, 5), and then make a selection.
Example of a photo batch
Batch of photos with exposure compensation at -5, -2, 0, 2 and 5
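A rough sketch of such a batching loop, in Python, could look like this. It is not the exact script I ran: it assumes gphoto2 lives in `/usr/local/bin` as above, that the USB drive is mounted on `/media/usb`, that the script runs with enough privileges to talk to the camera, and that the camera accepts these compensation values. The `_ev` suffix in the filename is just my way of telling batch members apart.

```python
import subprocess
import time

# Exposure compensations to bracket each batch with (values from above)
COMPENSATIONS = [-5, -2, 0, 2, 5]
INTERVAL = 15 * 60   # seconds between two batches
GPHOTO2 = "/usr/local/bin/gphoto2"

def capture_batch():
    for comp in COMPENSATIONS:
        # Change the target of the automatic exposure mode
        subprocess.run(
            [GPHOTO2, "--set-config",
             f"/main/capturesettings/exposurecompensation={comp}"],
            check=True)
        # Capture and download to the USB drive; gphoto2 expands the
        # date placeholders itself, the "_ev" suffix is my own addition
        subprocess.run(
            [GPHOTO2, "--capture-image-and-download", "--no-keep",
             "--filename", f"/media/usb/%Y_%m_%d_%H_%M_%S_ev{comp}.%C"],
            check=True)

if __name__ == "__main__":
    while True:
        capture_batch()
        time.sleep(INTERVAL)
```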
# Merging

## Striped Pattern

Of course, you could merge those photos using Photoshop or any alternative software. But I wanted to try writing a Python script that would do it for me automatically. The idea is to input a striped pattern, say `X` stripes with an angle `α`, and have the script place the images accordingly. Without much prior knowledge, I chose the [Pillow](https://github.com/python-pillow/Pillow) library for the image manipulation. To actually merge photo A onto photo B, there is the [`PIL.Image.blend`](https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.blend) method, which pastes one image on top of the other after applying an opacity mask to it. Basically, it is just like flattening layers in an image editing software.

## Straight Masks

The mask is then generated according to the desired stripe pattern. Basically, the script uses a support line starting from the top left corner and going down at the specified angle. At regular intervals, it generates gradients along the orthogonal direction. Here is a simple example:
Masks for merging 3 photos with horizontal stripes
Masks for merging 3 photos with horizontal stripes
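To give an idea of the mechanics, here is a rough sketch of the hard-edged, horizontal-stripes case (not the actual script, which is linked at the end of the post). It uses `PIL.Image.composite`, the per-pixel-mask counterpart of `blend`, and assumes all photos share the same dimensions; the filenames are placeholders.

```python
from PIL import Image, ImageDraw

def stripe_masks(size, count):
    """Generate `count` grayscale masks, one horizontal stripe each,
    that together cover the whole image."""
    width, height = size
    masks = []
    for i in range(count):
        mask = Image.new("L", size, 0)   # black = keep the layer below
        draw = ImageDraw.Draw(mask)
        top = round(i * height / count)
        bottom = round((i + 1) * height / count)
        draw.rectangle([0, top, width, bottom], fill=255)  # white = show this photo
        masks.append(mask)
    return masks

def merge(paths):
    """Merge photos of the same scene into horizontal stripes."""
    photos = [Image.open(p).convert("RGB") for p in paths]
    masks = stripe_masks(photos[0].size, len(photos))
    result = photos[0].copy()
    for photo, mask in zip(photos[1:], masks[1:]):
        # Keep `photo` where the mask is white, the current result elsewhere
        result = Image.composite(photo, result, mask)
    return result

if __name__ == "__main__":
    # Placeholder filenames
    merge(["afternoon.jpg", "dusk.jpg", "night.jpg"]).save("merged.jpg")
```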
## Oblique Masks

This works fine as long as the stripes are horizontal. But as soon as you introduce an angle, problems appear: how do you evenly distribute the stripes over the image? The stripes will not all have the same surface, and they might not end nicely at the bottom right corner, depending on the aspect ratio. It took me some time to figure out (I do not love geometry): you need to distribute the stripes evenly along a longer distance, namely the length of the segment that starts at the top left corner, runs orthogonally to the stripes, and ends on the stripe line passing through the bottom right corner. This might not be very clear, so here is a sketch:
Hand-drawn sketch for diagonally distributing stripes
The line on which to evenly distribute stripes is the red one
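Written out, with the angle α measured from the horizontal and stripes running from the bottom left towards the top right, that distance is `W·sin(α) + H·cos(α)`; a different sign convention would flip a sign, and the actual script may compute it differently. A small sketch:

```python
import math

def stripe_offsets(width, height, count, angle_degrees):
    """Positions of the stripe boundaries along the support line
    (the red line on the sketch), measured from the top left corner."""
    alpha = math.radians(angle_degrees)
    # Perpendicular distance between the stripe through the top left
    # corner and the parallel stripe through the bottom right corner
    total = width * math.sin(alpha) + height * math.cos(alpha)
    return [i * total / count for i in range(count + 1)]

# At 0° this falls back to splitting the height, at 90° the width
print(stripe_offsets(6000, 4000, 3, 0))   # [0.0, 1333.3..., 2666.6..., 4000.0]
print(stripe_offsets(6000, 4000, 3, 45))  # roughly [0, 2357, 4714, 7071]
```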
And here is what the 45° mask looks like:
Masks for merging 3 photos with 45° stripes
Masks for merging 3 photos with oblique stripes
## Feathering

To make transitions smoother, gradients can be extended beyond their stripe with decreasing opacity. The final blending then uses the opacity as a weight for averaging the input photos. Here is the same mask as before, but with feathering:
Masks for merging 3 photos with 45° stripes and feathering
Masks for merging 3 photos with oblique stripes and feathering
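Concretely, once the masks are grayscale instead of binary, the merge becomes a per-pixel weighted average of the photos. Here is a rough sketch of that step using NumPy on top of Pillow (again not the actual implementation; filenames are placeholders, and the normalisation only matters where the feathered masks overlap):

```python
import numpy as np
from PIL import Image

def weighted_merge(photo_paths, mask_paths):
    """Average the photos pixel-wise, using the feathered grayscale
    masks as weights."""
    photos = [np.asarray(Image.open(p).convert("RGB"), dtype=np.float64)
              for p in photo_paths]
    weights = [np.asarray(Image.open(p).convert("L"), dtype=np.float64) / 255.0
               for p in mask_paths]
    total = sum(w[..., None] * photo for w, photo in zip(weights, photos))
    norm = sum(weights)[..., None]   # re-normalise where the masks overlap
    merged = total / np.maximum(norm, 1e-6)
    return Image.fromarray(np.clip(merged, 0, 255).astype(np.uint8))

if __name__ == "__main__":
    # Placeholder filenames
    result = weighted_merge(
        ["afternoon.jpg", "dusk.jpg", "night.jpg"],
        ["mask_0.png", "mask_1.png", "mask_2.png"])
    result.save("merged_feathered.jpg")
```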
The source code for all this is available on GitHub: [ychalier/blend_images.py](https://gist.github.com/ychalier/84571483008535b8d4fef9afec5a973a).

# Results
Side by side
Side by side
Bottom up
Bottom up
Oblique
Oblique
Repeated oblique
Repeated oblique
Feathered
Feathered
Side by side, from 05:30 to 22:00
Side by side, from 05:30 to 22:00 (night photos were too dark)
Oblique, from 05:30 to 22:00
Oblique, from 05:30 to 22:00
Clear-cut animation
Feathered animation