The aim of this project is the three-dimensional positioning and control of multiple tiny drones using embedded systems such as the Raspberry Pi. We envision a modular station for events at which multiple tiny drones fly in formation. With slight modifications of the station, further applications become possible:
- Camera-mount for control via gestures
- Flight within the restricted space of our workshop’s display window
- More interactions: e.g. games of tic-tac-toe, geometric shapes, etc.
For our mini drones we use the low-cost Eachine E010, to which we attach markers. In our first trials we used ArUco markers, which proved highly problematic in low light, over long distances and during rapid movement. For these reasons we switched to simple LED markers, which we attach to the drones instead.
A Raspberry Pi system handles the recognition of the markers, since both the PiCamera and the platform itself are flexible. For effective recognition it is especially important that the exposure time is set very low, so that the LEDs are not overexposed.
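The exposure setup could look roughly like this with the `picamera` Python library. The concrete values (shutter speed, ISO, white-balance gains) are assumptions that would need tuning for the actual LEDs and ambient light:

```python
# Sketch: locking a very short exposure on the PiCamera so the LED
# markers appear as small bright spots instead of blown-out blobs.
from picamera import PiCamera

camera = PiCamera(resolution=(640, 480), framerate=45)
camera.iso = 100               # low sensor gain (assumed value)
camera.shutter_speed = 500     # exposure time in microseconds (assumed value)
camera.exposure_mode = 'off'   # freeze exposure so it cannot auto-adjust
camera.awb_mode = 'off'        # fixed white balance for a stable LED colour
camera.awb_gains = (1.0, 1.0)
```

Turning off the automatic exposure and white balance matters because otherwise the camera would continuously re-brighten the dark scene and blow out the LEDs again.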
We decided to utilise the hardware of the Raspberry Pi and the integrated blob-detection algorithm of the VideoCore GPU, with OpenGL as our backend, instead of an OpenCV implementation on the CPU. This way we reach higher frame rates and, most importantly, lower latency, since we avoid transferring data from the GPU to the CPU. It enables us to run the small 5€ Raspberry Pi Zero computers with real-time performance.
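For illustration, a CPU-side equivalent of what the GPU blob detection computes can be sketched in a few lines of NumPy: threshold the grayscale frame, flood-fill each connected bright region, and return its centroid. This is only a reference sketch (the threshold value is an assumption), not the GPU implementation we actually use:

```python
import numpy as np

def detect_blobs(gray, threshold=200):
    """Return centroids (x, y) of bright spots in a grayscale image.

    A simple CPU-side stand-in for the GPU blob detection: threshold,
    flood-fill connected bright regions, emit each region's centroid.
    """
    mask = gray > threshold
    visited = np.zeros_like(mask, dtype=bool)
    centroids = []
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not visited[y, x]:
                # Flood-fill one connected bright region (4-neighbourhood).
                stack, pixels = [(y, x)], []
                visited[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*pixels)
                centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centroids
```

With the short exposure described above, the LEDs are practically the only pixels above the threshold, which is what makes such a simple detector viable.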
After the LEDs have been detected, the CPU must recover the 3D position of the marker. This will most likely be implemented with OpenCV's solvePnP. The data is then filtered and the position predicted, for smooth and precise tracking of the drone. Finally, the required movements of the drone are calculated, translated into control commands and sent to the drone via an external RF module.
The following steps are necessary for the completion of the fundamental software:
- Hardware-integrated LED recognition via blob detection
- Marker identification among the recognized spots
- Derivation of the marker's 3D pose from the identified spots
- Filtering of the pose over time and prediction of future positions
- Porting of the driver that steers the drone via the RF module to the Raspberry Pi
- Adjustment of the drones’ position to the desired position
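The filtering and prediction step above could, as a first approximation, be an alpha-beta filter per axis (a simplified Kalman filter with fixed gains). The gain values here are assumptions and would be tuned on real tracking data:

```python
class AlphaBetaFilter:
    """Smooths noisy position samples and predicts ahead on one axis.

    `alpha` corrects the position estimate, `beta` corrects the
    velocity estimate; both gains are assumed values for this sketch.
    """

    def __init__(self, alpha=0.5, beta=0.1, dt=1 / 45):
        self.alpha, self.beta, self.dt = alpha, beta, dt
        self.x = 0.0  # estimated position
        self.v = 0.0  # estimated velocity

    def update(self, measured):
        # Predict one frame ahead, then correct with the measurement.
        predicted = self.x + self.v * self.dt
        residual = measured - predicted
        self.x = predicted + self.alpha * residual
        self.v = self.v + (self.beta / self.dt) * residual
        return self.x

    def predict(self, horizon):
        # Extrapolate the current estimate `horizon` seconds ahead,
        # e.g. to compensate the camera-to-command latency.
        return self.x + self.v * horizon
```

One filter instance per coordinate (x, y, z) already gives usable smoothing; a full Kalman filter would additionally adapt the gains to the measurement noise.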
After that we turn to our project goals. For the first goal, the camera stick, we plan to design a custom model with an ergonomic handle and an elegant look (no new software is required, however). For formations, more sophisticated prediction and more accurate controller tuning (PIDs) are required to maintain steady flight paths.
We will show that a low-cost embedded system can perform computer-vision tasks in real time. With a Raspberry Pi Zero (5€) and a cheap PiCamera (2.50€), this is an extremely competitive and quite powerful low-cost optical tracker. Before any optimisation, the current system reaches 640x480 @ 45 fps / 960x544 @ 30 fps.
Additionally, the setup can quite easily be extended to a full stereo system, since two Zeros can be synchronised with an external microcontroller (3€), which makes a much higher accuracy possible.
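The stereo extension boils down to triangulating each LED from the two synchronised views. A minimal linear (DLT) triangulation sketch, with assumed projection matrices for two identical cameras and an assumed 0.1 m baseline:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 are 3x4 camera projection matrices; uv1/uv2 are the pixel
    coordinates of the same LED in both synchronised images.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution: last right singular vector of A.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Toy setup: two identical pinhole cameras, the second one shifted
# 0.1 m to the right (an assumed stereo baseline).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
```

Compared to single-camera solvePnP, the stereo depth no longer depends on the apparent size of the marker, which is where most of the accuracy gain comes from.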
More complicated CV algorithms, implemented directly on the GPU, are not out of the question either.
Still have questions or want to join us? Simply send us an email.