A universal truth we’ve experienced as both an IT software vendor and as an application delivery team is that application owners are constantly trying to deliver better software faster.
We all try different things to make that happen and, when appropriate, share our findings with others. As we set out to Dockerize our self-hosted application components, we learned some valuable lessons about dockerizing applications.
Let’s face it – organizations of all sizes are shifting away from classic software delivery models. As they strive to deliver better software faster, packaging binaries and shipping pieces of a distributed application to customers is no longer a sufficient way to build and release software. At Instana, we face many of the same challenges as our customers do in the never-ending effort to innovate, automate, and deliver the best user experience possible. Thus, we completely revamped how our on-prem customers install the Instana Backend. Instana’s self-hosted platform now uses a fully Docker-based installation process.
This article looks at the reasons for dockerizing our application (including why Docker was our technology of choice), why we decided now was the right time, and some key lessons we learned about dockerization along the way.
Why Dockerize Our Application Now?
At Instana, we offer our Application Performance Monitoring (APM) platform as both a SaaS and a self-hosted on-premises offering. Until now, we shipped the self-hosted platform by packaging the binaries built for our SaaS platform into RPM/DEB packages to support different Linux distributions. All of the components were wired together with a Chef cookbook that had grown to cover every edge case we discovered over time.
This approach served us fairly well for the past four years. But as our customer base grew and demands shifted, we faced challenges supporting various Linux distributions, outgrew the simple single-host installation, and needed to run in environments like AWS, GCP, and Azure, as well as private data centers. This had a drastic impact on the predictability of our release cycles. We also wanted to make it easier for our customers to update from one version to the next.
The increasing diversity of supported Linux distributions, the demand for deployment flexibility, and the growing release complexity slowing down our release cycles all pointed to the need for a change. After completing customer conversations and interviews, we had a better understanding of how we could improve their operational experience. Armed with this knowledge, we started looking for a solution that would let us ship an enterprise-ready, self-hosted version of our product that is scalable and continuously upgradable with as little effort as possible.
Why Docker is Our Technology of Choice
Docker was an obvious choice based on its wide adoption in the enterprise. We found that even organizations that do not yet run containerized production workloads have processes in place to let vendors bring Docker in. Additionally, when we looked at the containerization ecosystem, we found that the Open Container Initiative (OCI) standard that Docker helped establish is the most widely adopted and respected. This ensures that any new or upcoming runtimes will be compliant, future-proofing our technology of choice.
Another reason we chose this route is that we already use containerized packages in our SaaS platform, where the containers are scheduled by Nomad as well as Kubernetes (K8s). By bringing the same technology strategy to our self-hosted platform, we are able to use the exact same artifacts we build for our SaaS platform.
The Instana components were already containerized and running in our SaaS platform, but we still needed to create containers for our databases (ClickHouse, Cassandra, etc.) and set up the release pipeline for them. Most of the complexity is not in creating a container that runs the database, but in managing the configuration and passing it down to the corresponding component in a maintainable way. We dealt with this complexity by providing the configuration through a mount point in the container. This has the added benefit of letting us use the same pattern in all of our containers, regardless of whether they run a database or a processing component. It is also easy to maintain, as all changes only touch configuration files that are accessible on the host system. Finally, it simplifies the docker run directive and enables us to collect debug bundles for troubleshooting purposes without needing to hook into the container.
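As a rough sketch of this pattern (the paths, file names, and image name below are illustrative, not Instana's actual layout), the configuration lives on the host and is bind-mounted read-only into the container:

```shell
# Hypothetical example: all configuration lives on the host, so it can be
# edited and collected for debug bundles without entering the container.
mkdir -p /tmp/instana-demo/conf
cat > /tmp/instana-demo/conf/component.yaml <<'EOF'
log_level: info
retention_days: 7
EOF

# The same mount pattern works for databases and processing components alike
# (image name is illustrative):
#   docker run -d \
#     -v /tmp/instana-demo/conf:/opt/instana/conf:ro \
#     example-registry/instana-component:latest
```

Mounting the directory read-only (`:ro`) also guarantees the container cannot drift from the host-side configuration.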
The new Docker-based installer has been rolled out to our customers. We have benefited from the change through increased deployment flexibility, a more scalable platform, and drastically reduced release complexity, which has made our release cycles more predictable. We also have a much faster Operating System (OS) support qualification cycle, helping us and our self-hosted customers shrink the time to production value to match our SaaS users.
Three Lessons About Dockerizing Applications
All in all, the development, testing, and deployment process was as quick to put together and as simple to roll out as expected, but we did come across some interesting issues along the way. Here are some subtle, but important, items to consider when you dockerize your own applications.
ulimits on CentOS
Many of our customers use CentOS. As we rolled out the first version of our new dockerized setup to CentOS, we found that ClickHouse was not able to write to disk properly. We suspected a database configuration issue, but once we had gathered all the log files and compared the system settings to other operating systems that ran smoothly, we traced the problem to a very low default ulimit setting. Knowing this, the subsequent migrations and initial setups went off without a hitch.
First, we evaluated manipulating the ulimit on the host, but quickly realized that where and how that setting is applied can vary widely from one customer setup to the next. We then found that the --ulimit flag on the Docker container overrides the default setting, resolving the issue for our databases. Since it is the last parameter in the override hierarchy, we can now rely on it always being set properly.
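A sketch of the diagnosis and the per-container fix (the limit values and image are illustrative, not Instana's actual settings):

```shell
# Inspect the host's current soft limit for open file descriptors; stock
# CentOS defaults can be as low as 1024, far too low for a write-heavy
# database like ClickHouse.
echo "current soft nofile limit: $(ulimit -Sn)"

# Instead of tuning every host, set the limit per container. --ulimit
# overrides both the host default and the Docker daemon's default-ulimit:
#   docker run -d \
#     --ulimit nofile=262144:262144 \
#     clickhouse/clickhouse-server:latest
```

Setting the limit on the `docker run` line keeps the fix inside the installer, so it works the same way on every customer host regardless of local sysctl or limits.conf tweaks.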
Eliminating Docker Network Overhead
The only remaining concern was how much overhead the added Docker layer would introduce. Since we did not want to move away from a single-host deployment model, we tapped directly into the host network, eliminating the Docker network device as a potential bottleneck. This put our overhead worries to rest.
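For reference, a minimal sketch of host networking (the image name is assumed):

```shell
# --network host makes the container share the host's network namespace:
# no bridge, no NAT, no veth pair, so no per-packet translation overhead.
#   docker run -d --network host example-registry/instana-component:latest

# A host-networked container sees exactly the host's interfaces, which can
# be listed from /proc on any Linux system:
head -3 /proc/net/dev
```

The trade-off is that ports are no longer isolated per container, which is acceptable in a single-host deployment where the port layout is fully controlled by the installer.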
Binding Systemd Services
We soon recognized that customers execute maintenance windows for system restarts or upgrades with the confidence that the Instana components will behave correctly and require no additional effort. Because our components have a mandatory startup order, it was not sufficient to rely on Docker's restart policy alone. We therefore added multiple systemd bindings to docker.service in our systemd unit:
```ini
[Unit]
After=docker.service
BindsTo=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/instana start -s
ExecStop=/usr/bin/instana stop
Restart=no
RemainAfterExit=yes

[Install]
WantedBy=docker.service
```
This helps us ensure our components are always up and healthy.
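To wire the unit in (a sketch; the unit file name and path are assumed), it is installed and enabled like any other systemd service:

```shell
# Paths and unit name are hypothetical.
# sudo cp instana.service /etc/systemd/system/instana.service
# sudo systemctl daemon-reload
# sudo systemctl enable instana.service
```

In the unit above, BindsTo= ties the service's lifecycle to the Docker daemon (stopping docker.service stops Instana too), while After= guarantees the daemon is already running before `instana start` executes.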
Should You Dockerize Your Application?
It’s been four months now, and we couldn’t be happier with the results — nor could our customers. Through four releases, we’ve been able to keep the self-hosted version of our back-end in lockstep with our SaaS version, which has extra benefits:
- Customers, regardless of back-end, all have access to new features immediately.
- Dev and Ops get to work on a single development and deployment cycle.
- Customer Success doesn’t have to concern themselves with unique versions in the field.
So if you were to ask our Ops team, Customer Success, Developers, or our Solution Architects, they would tell you: “Don’t wait! Dockerize your applications now!”
So what’s next on our Dockerization journey? One of the things we want to do is provide an easy path to scaling out your system to process more transactions by leveraging Instana’s microservice architecture.
Spoiler alert: We will extend our self-hosted offering to be deployable on K8s, stay tuned 😀
P.S. Curious about observability?
We've built an interactive sandbox for you to play with!