Migrating to Helm 3 with CI/CD, containers and no developer hassle

Helm 3
On the 13th of November 2019, Helm stable version 3.0.0 was released. We had been expecting this major version for a long time.
Helm 3 has brought us many significant changes, the most appreciated being the removal of Tiller.
We were eager to start the migration, but we decided to wait for community response. In the meantime, we worked on our migration strategy.
On the 17th of December, bugfix version 3.0.2 came out, and we felt confident enough to begin the migration.
The Goal
Achieve the migration of all running releases, in all clusters, across all environments, with as little manual interaction as possible. In other words: do it in the background, automatically, and do not bother Developers unless it is necessary.
A goal defined in numbers:
Name | Count |
---|---|
Environments | 16 |
Clusters | 19 |
Repositories | ~50 |
Helm Releases | 233 |

Environment setup
Luckily, the Develop to Deploy lifecycle path is well automated in our CI/CD (self-hosted Drone):
- The pipeline is configured with .drone.yaml (or some equivalent like .drone1.yaml …) located in the repository.
- The pipeline defines all desired steps to be launched, including a deployment step, which uses the drone-helm:<version-tag> image.
- The pipeline is started with a trigger. The trigger can be manual or automatic (event-based) and can come in various forms.
- Teams use their preferred way to trigger the pipeline: tag, promote, merge, push …
- Our self-hosted Drone server recognizes the trigger and starts the pipeline with all defined steps.
- The deployment step in the pipeline uses the custom-built drone-helm:<version-tag> image to deploy a Helm chart to a particular Kubernetes cluster.
- The drone-helm image does the authentication and installation to a Kubernetes cluster (a rough sketch of this step follows the list).
- Afterwards, the results are available in the Drone web interface.
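For illustration, the deployment step boils down to something like the following shell sketch. It is a minimal reconstruction, not the actual drone-helm code, and the variable names (KUBE_API_SERVER, KUBE_TOKEN, RELEASE, CHART_PATH, NAMESPACE) are assumptions:

```bash
#!/usr/bin/env sh
# Minimal sketch of the deployment step; variable names are illustrative,
# not the real drone-helm plugin settings.
set -eu

# 1. Authenticate against the target Kubernetes cluster.
kubectl config set-cluster target --server="${KUBE_API_SERVER}"
kubectl config set-credentials deployer --token="${KUBE_TOKEN}"
kubectl config set-context target --cluster=target --user=deployer
kubectl config use-context target

# 2. Install or upgrade the chart stored in the repository.
helm upgrade --install "${RELEASE}" "${CHART_PATH}" \
  --namespace "${NAMESPACE}" \
  --values "${CHART_PATH}/values.yaml"
```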
We used two plugins for Helm 2.
The Strategy
Helm developers have provided very nice, easy-to-follow guides and a plugin (helm-2to3) for the migration, and it even supports the Helm tiller plugin! What a time to be alive and be a DevOps guy! … 🤣
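In practice, the migration flow the guides describe boils down to a few plugin commands. The sketch below assumes the Helm 3 binary is available as helm3 next to the Helm 2 helm binary (our image does exactly that, see the install section below); the release name is illustrative:

```bash
# Install the migration plugin into Helm 3.
helm3 plugin install https://github.com/helm/helm-2to3

# Move Helm 2 configuration (repositories, plugins, local data) to Helm 3.
helm3 2to3 move config

# Convert a single Helm 2 release into a Helm 3 release.
helm3 2to3 convert my-release

# Once everything is converted, clean up Helm 2 data and Tiller.
helm3 2to3 cleanup
```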
With our goal in mind, and after considering our configuration, it is evident that, from a technology perspective, all the auto-magic needs to occur in the drone-helm:<version-tag> image. From the project, or migration, perspective it gets a bit “less elegant” and more of a “real life” experience 😄
Here is the list of steps which we had to do in chronological order:
- Create a new drone-helm:<version-tag> image which will do the migration.
- Change the tag of the drone-helm:<version-tag> image in every repository.
- Create a PR for every repository with the change above.
- Monitor the progress of the migration and provide ad-hoc support if needed.
Thankfully, most of the environment follows a company-wide standard, which defines:
- master branch = production environment infrastructure
- /charts/app_name contains the Helm chart for the application
Therefore, we were able to do 90% of this repetitive task automatically.

Execution
Create a new drone-helm image
Installing Helm 2 and 3 and all plugins
The hardest part was installing both Helm versions and their plugins. The reason is the hardcoded path to the helm binary in the Helm tiller plugin, which it uses to check the installed Helm 2 version. The tiller binary used by this plugin has to match the Helm 2 version.
After some time spent fighting this, and seriously considering cloning the Helm tiller plugin repository to do a string replace in its source code, I returned to earth from this mad mind trip and simply installed Helm 3 first and renamed its binary to helm3. This allowed me to install all necessary plugins and both Helm versions without further hassle.
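The resulting install order looks roughly like this (a sketch of the image build steps; the exact versions, download URLs and the tiller plugin repository are examples, not necessarily the ones we pinned):

```bash
# Helm 3 first, renamed to helm3 so it stays out of the tiller plugin's way.
wget -qO- https://get.helm.sh/helm-v3.0.2-linux-amd64.tar.gz | tar xz
mv linux-amd64/helm /usr/local/bin/helm3

# Helm 2 keeps the plain `helm` name, which is the path the tiller plugin expects.
wget -qO- https://get.helm.sh/helm-v2.16.1-linux-amd64.tar.gz | tar xz
mv linux-amd64/helm /usr/local/bin/helm

# Plugins: tiller (tillerless) for Helm 2, 2to3 for Helm 3.
helm init --client-only
helm plugin install https://github.com/rimusz/helm-tiller
helm3 plugin install https://github.com/helm/helm-2to3
```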
Lesson learned: sometimes even the most complex problem can have an elegant and comfortable solution. Take a rest and try to change the perspective.
Migration in image
Big thanks to my colleague, who showed me the “The real hero of programming is the one who writes negative code” mindset.
In the end, the only code we had to add was 3 conditions in 12 lines, and voilà! The image was ready.
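A hedged reconstruction of that logic, not the literal code in our image (the Helm 2 detection via Tiller's default ConfigMap storage and the variable names are assumptions):

```bash
# If the release is not yet managed by Helm 3, but a Helm 2 release with the
# same name exists in Tiller's default ConfigMap storage, convert it first.
if ! helm3 ls --short --namespace "${NAMESPACE}" | grep -qx "${RELEASE}"; then
  if kubectl -n kube-system get configmap -l "OWNER=TILLER,NAME=${RELEASE}" \
       -o name | grep -q .; then
    helm3 2to3 convert "${RELEASE}"
  fi
fi

# From here on, every deployment goes through Helm 3.
helm3 upgrade --install "${RELEASE}" "${CHART_PATH}" --namespace "${NAMESPACE}"
```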
Changing pipeline files, Creating PRs
As I mentioned before, except for a few repositories, the rest of them follow the company-wide standard for repositories and pipeline configuration.
Therefore, we were able to download all repositories, change their pipeline configurations to use the drone-helm image tag :latest (more about that in the conclusion section) and create the PRs with a single loop in a bash script.
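The loop looked roughly like this. The repository list, organization name, sed pattern and PR tooling (gh here) are illustrative; our actual script differed in the details:

```bash
while read -r repo; do (
  git clone "git@github.com:our-org/${repo}.git"
  cd "${repo}"
  git checkout -b helm3-migration

  # Point the deployment step at the migration-capable image tag.
  sed -i 's|drone-helm:[^ ]*|drone-helm:latest|' .drone.yaml

  git commit -am "Use drone-helm:latest for the Helm 3 migration"
  git push -u origin helm3-migration
  gh pr create --fill   # or hub / a plain API call
); done < repositories.txt
```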
Checking the progress
As part of this task, I created a simple script which checks all releases in all clusters, so we can track the progress. The check runs on a monthly basis and the results look fantastic (more about that in the conclusion section).
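A minimal sketch of such a check, assuming all clusters are reachable as kubectl contexts (Helm 2 releases are counted via Tiller's default ConfigMap storage; the exact reporting in our script differs):

```bash
for ctx in $(kubectl config get-contexts -o name); do
  # Helm 3 releases across all namespaces.
  h3=$(helm3 --kube-context "${ctx}" ls --all-namespaces --short | wc -l)

  # Helm 2 releases still live as Tiller ConfigMaps (one per revision),
  # so deduplicate by release name before counting.
  h2=$(kubectl --context "${ctx}" -n kube-system get configmap -l OWNER=TILLER \
        -o jsonpath='{range .items[*]}{.metadata.labels.NAME}{"\n"}{end}' \
        | sort -u | sed '/^$/d' | wc -l)

  echo "${ctx}: helm3=${h3} helm2=${h2}"
done
```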

Conclusion
:latest tag
First of all, I would like to explain our decision to use the drone-helm:latest tag.
We at @DevOps are aware that in the SRE and DevOps community it is not advised to use :latest tags. It is not a good practice and can create inconsistencies in the environment.
Thanks to our environment setup and our ultimate control of the cache in Drone, we decided to go for it anyway. In our situation, it allowed us to control (We, the Important! 🤣) the drone-helm image version used by all teams with a single change, and it took away the hassle of repeatedly changing the configuration of all pipelines.
Exceptions:
- One repository does not use its master branch as primary.
- A few repositories had the directory ./k8s instead of ./charts.
- Some repositories use a different name for the .drone.yaml file.
- One repository has it in the ./drone subdirectory.
- A few services had not been deployed for more than 6 months and therefore were not present in our original list.
Status today
We have 207 running services converted to Helm 3.
Twenty-six services still have their release in Helm 2. Out of those 26 services, 4 are deprecated and on the list to be deleted, and 2 are supporting services not related to the product.
The remaining 20 have their migration tasks assigned to the corresponding teams.
Final words
We encountered 3 issues during the migration, but none of them were caused by the image or the strategy we used. During the migration, the structure of the Go YAML templates changed, and the Helm charts had to be adapted.
Overall, the migration did not cause substantial manual overhead and, in my humble opinion, can be considered a success.