Analysis of containerization deployment overhead on execution time and energy consumption
Containerization software has become increasingly popular in recent decades, as it provides lightweight operating-system-level virtualization for a wide variety of purposes, including cloud and edge computing. Although it offers increased security, internal networking, and application isolation, it also tends to introduce additional workload and overhead compared with native application deployment. This issue is especially relevant for devices with limited computational power and storage, such as mobile and IoT devices. The aim of this paper is to analyse the overhead generated by deploying applications in containerization solutions and its effect on total execution time and energy consumption. To do so, two widely used container platforms, Docker and Podman, were used. To evaluate the performance of a platform's low-level container runtime, two OCI (Open Container Initiative) compatible container runtimes, namely runC and crun, were compared. This results in a comparison of five configurations: Docker with runC, Docker with crun, Podman with runC, Podman with crun, and native deployment. A test workload application that simulates CPU load was deployed on a Raspberry Pi single-board computer for all configurations. The results show that, in the containerized deployment models, container runtime selection appears to have only a minor effect on overall execution time and energy consumption, while the container platform significantly affects both metrics. Among the container platforms, Podman was found to be both faster and more energy-efficient than Docker. Native application deployment may also significantly decrease energy consumption, at the expense of losing the benefits of containerization.
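As a rough illustration of how such a comparison can be scripted, the sketch below times a CPU-bound workload under the five configurations. It is not the paper's actual measurement harness: the image name cpu-workload:latest, the native binary ./cpu_workload, and the repetition count are placeholders, and it assumes that both runC and crun are installed and registered with the Docker daemon, while Podman selects the runtime through its global --runtime option. Energy consumption would have to be measured with external power-metering hardware and is not captured here.

```python
#!/usr/bin/env python3
"""Minimal timing sketch for the five deployment configurations.

Assumptions (not from the paper): the workload is packaged as a local
image "cpu-workload:latest" with an equivalent native binary at
"./cpu_workload", and both runC and crun are available to Docker
(registered in /etc/docker/daemon.json) and to Podman. Only wall-clock
time is recorded; energy readings require an external power meter.
"""
import statistics
import subprocess
import time

IMAGE = "cpu-workload:latest"    # hypothetical workload image
NATIVE_CMD = ["./cpu_workload"]  # hypothetical native binary
REPETITIONS = 10                 # placeholder sample size

CONFIGURATIONS = {
    "native":      NATIVE_CMD,
    "docker+runc": ["docker", "run", "--rm", "--runtime=runc", IMAGE],
    "docker+crun": ["docker", "run", "--rm", "--runtime=crun", IMAGE],
    "podman+runc": ["podman", "--runtime", "runc", "run", "--rm", IMAGE],
    "podman+crun": ["podman", "--runtime", "crun", "run", "--rm", IMAGE],
}


def time_once(cmd):
    """Run the workload once and return its wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start


def main():
    for name, cmd in CONFIGURATIONS.items():
        samples = [time_once(cmd) for _ in range(REPETITIONS)]
        print(f"{name:12s} mean={statistics.mean(samples):.2f}s "
              f"stdev={statistics.stdev(samples):.2f}s")


if __name__ == "__main__":
    main()
```

Repeating each configuration several times and reporting mean and standard deviation helps separate the container-startup overhead under study from ordinary run-to-run noise on the Raspberry Pi.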