If you are like me, you just want to run your application in SystemD with no Docker or Kubernetes involved. I made a build/deployment application for my own applications (private, closed source) that builds them using docker/podman, extracts the distribution files from the container and deploys by talking to SystemD. I respect SystemD and have grown to appreciate it more 😀.
Goal
The goal of the application was to be able to use commands like `deployer build <project> <build name>` to produce an archive that would be extracted and run on a target machine with `deployer deploy <environment> <project> <deployable>`. These are the core commands for any easy-to-use deployment system. The details of how that would work came after some research into process supervisors and reproducible builds.
Some Background
Before this, I used pm2 and some scripts (bundled with my application) to move code from my machine to the target server, build it on the target machine, move it to the desired execution area and run it with pm2. This wasn't cumbersome, but it ran as my user, which I was not fond of. PM2 is great, but it really prefers running Node applications. I wondered why I wasn't using SystemD to run these applications instead. SystemD just works, or else I wouldn't have a running system.
The Research
Ok, sure, I want to run things using SystemD. This was not straightforward: I looked at other process supervisors and considered bringing in Supervisord. Eventually, I figured I just needed a user that can talk to SystemD and generate a unit file on demand, so that was set.
With deploy settled, how was I going to get the code to the servers, built without using my user (or any privileged user, really) and then archived for deployment? Jenkins? Drone? Some obscure application that no one updates anymore because of the Docker and Kubernetes craze? Yeah, there wasn't much choice, so I knew I had to find a custom solution. The question evolved into how I could spawn a sandboxed environment that has the build tools/runtime the application needs in order to build. Docker was the answer. In hindsight, I could make it work with SystemD's oneshot services and DynamicUser.
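For the record, a rough, untested sketch of that SystemD-only route might look something like this (the unit name, directory and `./build.sh` are placeholders, not anything my tool actually uses):

```bash
# Rough, untested sketch of the SystemD-only alternative: a transient unit
# run via systemd-run, using DynamicUser as a throwaway build sandbox.
# Assumes the source was placed in /var/lib/sandbox-build beforehand.
sudo systemd-run \
  --unit=sandbox-build \
  --wait --collect --pipe \
  -p DynamicUser=yes \
  -p StateDirectory=sandbox-build \
  -p WorkingDirectory=/var/lib/sandbox-build \
  /bin/sh -c './build.sh'
```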
Docker allowed me to easily build images with the tools that were needed, but the one problem was how I would get the files out. Luckily, `docker cp` comes into play. Ok, so I have a way to build an application, but who is going to archive the files?
That's the part that further sealed the need to build an always running application to orchestrate the build/release process.
Stitching Up a Build/Release Process
I won't get too technical here. The idea was to build an application that would run as a user that can talk to SystemD, run docker/podman (podman on Fedora) and access a build/distribution area on the system.
Build. When a user (me, with my laptop as the client) wants to build an application, they just run `deployer build <project> <build name>`. This makes a request to a build server to do some lookup/validation, fetch the source code, run the container command to build and finally archive the output. The archive is named `<archiveId>.tar`.
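To make the container part concrete, here is a minimal sketch of the kind of flow I mean, with placeholder names and paths throughout (the real tool drives this through its own code, not a shell script):

```bash
# Placeholder names/paths throughout; ./build.sh stands in for the project's
# build commands. The toolchain image is prebuilt, no per-project Dockerfile.
docker create --name build-1234 -w /work toolchain-image sh -c './build.sh'
docker cp ./src/. build-1234:/work          # put the fetched source into the container
docker start -a build-1234                  # run the build and wait for it to finish
docker cp build-1234:/work/dist ./staging   # pull the produced files back out
docker rm build-1234
tar -cf 1234.tar -C ./staging .             # archive as <archiveId>.tar
```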
No writing Dockerfiles, just produce execution files. As part of the easy-to-use mandate, there is no writing of Dockerfiles in the application. The application is responsible for producing files into a specific area that will be archived, and for providing a run command whose current working directory is a directory containing everything that was produced. The build commands are specified in a dotfile in the application directory, and that gets run inside the docker container.
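The actual dotfile format is part of the closed-source tool, so this is purely an illustration of the shape, assuming a hypothetical `.deployer-build` file for a Java project:

```bash
# Hypothetical .deployer-build dotfile (not the real format). These commands
# run inside the build container; everything that should end up in the archive
# has to land in the designated output area, here assumed to be ./dist.
./gradlew --no-daemon clean build
mkdir -p dist
cp build/libs/myapp.jar dist/
# The run command is declared separately; its working directory at runtime
# is the extracted archive, so it could be as simple as: java -jar myapp.jar
```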
Release. When a user (me) wants to run the application, they just run `deployer deploy <environment> <project> <deployable>`. Well, the command evolved into `deployer deploy <environment> <project> <deployable> <archiveId>` so I could move backward or forward. How would I get the archive id? That's when I created a `deployer list <environment> <project> <deployable>` command to list the archive ids. Anyway, once the command is run: some validation, extraction of the archive, a run with SystemD, validation of the run, and exit.
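Roughly, the deploy side boils down to steps like these (paths and the unit name are placeholders, and the real agent talks to SystemD over DBus instead of shelling out to systemctl):

```bash
# Placeholder paths/unit name; shell equivalent of what the agent orchestrates.
mkdir -p /srv/deployer/myapp/1234
tar -xf /srv/deployer/archives/1234.tar -C /srv/deployer/myapp/1234  # extract the chosen archive
systemctl restart myapp.service                                      # run via the autogenerated unit
systemctl is-active --quiet myapp.service && echo "deploy ok"        # validate the run
```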
No writing SystemD unit files. User-defined unit files are a security risk, so I autogenerate them. One thing I discovered while creating the SystemD communication part (via DBus) was DynamicUser. This effectively allows me to run a service as if the process were in a container: it gets its own user and directories. I have some other "hardening" parameters to make sure the application cannot do things a network application should not do; otherwise it is terminated with cause. Everything outside the designated directories is read-only, there is no user creation, and so on. DynamicUser will remove the generated user and runtime directories (but not state directories) once the application is dead. Anyway, no writing SystemD unit files.
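I won't paste what my tool generates, but a hand-written sketch of a unit using `DynamicUser` plus a few hardening directives of the kind I mean could look like this (unit name, paths and the exact directive set are illustrative only):

```bash
# Illustrative only; roughly the shape of unit the tool would generate.
cat > /etc/systemd/system/myapp.service <<'EOF'
[Unit]
Description=myapp (deployed by deployer)

[Service]
# Throwaway user/group allocated at start, removed once the service stops.
DynamicUser=yes
# Writable areas: state survives restarts, the runtime directory does not.
StateDirectory=myapp
RuntimeDirectory=myapp
WorkingDirectory=/srv/deployer/myapp/current
ExecStart=/srv/deployer/myapp/current/run.sh
# Hardening: everything else read-only, no privilege escalation, private /tmp.
ProtectSystem=strict
ProtectHome=yes
NoNewPrivileges=yes
PrivateTmp=yes
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now myapp.service
```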
Runtime validation is neither critical nor finished yet. I want a plugin system such that plugins can be disabled system-wide when needed. The idea is to allow healthchecks, checking runtime metrics without a TSDB, checking logs without a log server, priming caches and so on.
Conclusion/Learnings
Maybe I can get rid of Docker/Podman? SystemD's DynamicUser has turned me off from using Docker to run applications at this point. I get processes running with none of the cost you would have with Docker. I use podman on Fedora since daemonless and rootless were very much desired; docker is more for MacOS (my dev laptop). I am actually thinking of running applications like Redis and MySQL with DynamicUser at this point. It also inspires a build process without docker/podman. Docker is useful if I wanted to run containers on multiple OSes, but I am independent and it is the last thing I care about. It is in fact the last thing most people care about, as server machines are dominated by one OS at most small companies.
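For example, something along these lines (untested, paths and flags illustrative) is what I have in mind for the Redis case:

```bash
# Untested sketch: Redis as a transient SystemD service under a throwaway user,
# keeping its data in the StateDirectory (/var/lib/redis-scratch here).
sudo systemd-run --unit=redis-scratch \
  -p DynamicUser=yes \
  -p StateDirectory=redis-scratch \
  /usr/bin/redis-server --dir /var/lib/redis-scratch --port 6379
```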
Multiple archives. The name in the deployment command isn't the build name but the application name. One of my peeves with docker and other build systems is that they don't take into consideration that an application built in Java/Rust can have internal modules and can therefore produce multiple applications. I had to change the build process slightly to accommodate building once and producing multiple applications, to literally save time and energy.
Really is lightweight. The agent only does orchestration, so it doesn't need much CPU or memory. Transfer, extraction and coordination with SystemD don't require much. Archives are streamed via HTTP to a one-time endpoint. Tar is another format that can do stream operations such as extraction (maybe I'll switch in the future if needed; tar is simple). I was deliberate about looking into these parts since an archive could be 1GB or more. I initially learned a lot about data streaming from another one of my projects, and brought in my RPC-via-ZeroMQ library (https://www.normansoven.com/post/been-working-on-a-high-performance-async-low-latency-rpc-library), which is private, closed source. The fewer resources required, the better.
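As a small illustration of the streaming idea (URL and paths made up; the real transfer goes through the one-time endpoint and my own plumbing), an archive can be piped straight from the network into `tar`, so a 1GB+ archive never has to be fully buffered or written out twice:

```bash
# Made-up URL and target directory; curl streams the archive to stdout and
# tar extracts it as it arrives, so the whole file is never held in memory.
curl -fsS "http://build-host:8080/archives/1234.tar" \
  | tar -x -C /srv/deployer/myapp/1234
```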
Why'd I do this? I like Docker and have obviously worked with it, as it is part of the build process, but I don't want to accept its costs as an independent developer. I just wanted to evolve my current system of running scripts that build/release for me.
As usual, I don't condone doing the things I do, especially at work. I do encourage learning more via experimenting and not getting sucked into the monoculture.