By James Dawson, Principal I
Does your GitHub Repo need 'Code Operations'?

In this post I'm going to talk about the notion of operational tasks aimed at your git repositories and the 'meta' aspects of the code bases they contain. This is something separate from the tasks that look after your actual git infrastructure or that fix code issues. Instead, these are tasks such as:

  • ensure a consistent approach across repositories
  • perform house-keeping on outstanding pull requests or other repo artefacts
  • maintain repo-level automated processes aimed at removing friction from the day-to-day development workflows

This creeping realisation and the development of some nascent tooling forced us to think about a name (why is naming the hardest part?) and thus far I have inflicted the name 'CodeOps' on everyone.

Background

As evidenced by an earlier blog post, we've been thinking a lot about how we reduce the friction and overhead of managing our 30+ open source projects. We've also been looking at how we might fully migrate to GitHub Actions when the time comes. At present it doesn't have feature parity with Azure Pipelines - but the front-and-centre eventing model that it uses has made us think about our pipelines and processes differently.

Does GitHub Actions (GHA) Bring a Micro-Services Twist to CI/CD?

In much the same way that micro-services have moved application development towards having many small components doing a specific task before handing-off to the next (rather than larger, long-running components that implement an entire process); so we can see the same paradigm shift in how automated processes can be orchestrated.

Whilst GitHub Actions is not unique in this explicitly event-based approach (Brigade, for example, takes a similar tack), it does seem likely to become the most mainstream proponent over the next year or so. Also, the eventing feels very natural because it leverages the same events that we are used to seeing in our everyday work (assuming you host your code on GitHub) - this brings an uncanny air of familiarity to what would otherwise be a new DevOps product with a bewildering range of triggers.

How We Are Using It

In part stemming from additional ideas that sprang from some earlier work, and partly out of the necessity to plug some gaps in the current GHA feature set, we have found ourselves developing a number of automated processes that look after different aspects of our GitHub repositories:

  • Migrating solutions to use the meta package described here
  • Synchronising GHA workflow templates across the 6 GitHub organisations we manage
  • Deploying workflows and associated configuration to repos as part of enrolling them in our Dependabot automation
  • Applying consistent repo configuration settings

GitOps For The Win

Behind these discrete pieces of automation, we have adopted a GitOps approach for managing how these processes are applied to each repository (or group of repositories): a git repo stores the configuration, so any Endjineer can change it via a pull request review, and each automated process simply reads the latest version, consuming the settings relevant to that particular process.
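As a sketch of how a process might consume that central configuration - the schema and names here are purely illustrative, not the actual endjin layout - each process filters the config for the repositories enrolled in it:

```python
# Hypothetical shape of the central CodeOps configuration (illustrative only;
# in practice this would live as a file in the GitOps config repo).
CONFIG = {
    "organisations": [
        {
            "name": "example-org",
            "repos": [
                {"name": "repo-a", "processes": ["workflow-sync", "dependabot-automation"]},
                {"name": "repo-b", "processes": ["workflow-sync"]},
            ],
        },
    ],
}


def repos_for_process(config: dict, process: str) -> list[str]:
    """Return the 'org/repo' names enrolled in the given process."""
    return [
        f"{org['name']}/{repo['name']}"
        for org in config["organisations"]
        for repo in org["repos"]
        if process in repo["processes"]
    ]
```

Because the config is just a file in a repo, changing which repos a process applies to is itself a reviewable pull request.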

We've made efforts to ensure that the automated processes are idempotent, so that we can safely have them run whenever a change is merged.
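Idempotency here simply means that re-running a process against an already-processed repo is a no-op. A minimal illustration (a hypothetical helper, not our actual tooling) is an edit that only adds a line when it's missing:

```python
def ensure_line(text: str, line: str) -> str:
    """Idempotently ensure `line` appears in `text`.

    Applying the function to its own output changes nothing, so the
    process that uses it can safely run on every merge.
    """
    if line in text.splitlines():
        return text  # already present - nothing to do
    return (text.rstrip("\n") + "\n" if text else "") + line + "\n"
```

If every change a process makes has this property, then "run it again" is always safe, and step 3 below naturally produces no PR when there's nothing to change.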

Each process follows the same pattern:

  1. Clone the repo being processed
  2. Apply all the changes required by the particular process being run
  3. Commit those changes and create a PR in the repository (unless it's already up-to-date, in which case do nothing)
  4. Move on to the next repository that the process applies to
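The steps above can be sketched as follows - this is an assumed outline, not our shared module's actual API; the repo, branch and commit-message names are illustrative:

```python
import subprocess
from pathlib import Path
from typing import Callable


def is_up_to_date(porcelain_output: str) -> bool:
    """`git status --porcelain` prints nothing when there is nothing to commit."""
    return not porcelain_output.strip()


def process_repo(clone_url: str, work_dir: Path,
                 apply_changes: Callable[[Path], None],
                 branch: str = "codeops/update") -> None:
    """Run one CodeOps process against one repository (steps 1-3)."""
    repo_dir = work_dir / clone_url.rstrip("/").split("/")[-1].removesuffix(".git")

    # 1. Clone the repo being processed
    subprocess.run(["git", "clone", clone_url, str(repo_dir)], check=True)

    # 2. Apply the process-specific changes (this is the part that varies)
    apply_changes(repo_dir)

    # 3. Commit and raise a PR - unless the repo is already up-to-date
    status = subprocess.run(["git", "status", "--porcelain"],
                            cwd=repo_dir, capture_output=True, text=True, check=True)
    if is_up_to_date(status.stdout):
        return
    subprocess.run(["git", "checkout", "-b", branch], cwd=repo_dir, check=True)
    subprocess.run(["git", "add", "-A"], cwd=repo_dir, check=True)
    subprocess.run(["git", "commit", "-m", "CodeOps: automated update"], cwd=repo_dir, check=True)
    subprocess.run(["git", "push", "-u", "origin", branch], cwd=repo_dir, check=True)
    # Raising the PR itself could then use the GitHub CLI, e.g. `gh pr create --fill`
```

Step 4 is then just a loop over the repositories the process applies to, calling `process_repo` for each.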

Step 2 is different for each process, but the rest is able to leverage a shared module that implements much of the boilerplate functionality.

Because these processes integrate with the typical pull request workflow, it's been interesting to see how they have been able to make use of the workflows we'd built to streamline our use of Dependabot (e.g. optional auto-approve, auto-merge & auto-release).

I Thought GitOps Was No Good For Automated Changes?

It has been said that the GitOps approach is not as friendly to automated changes, due to issues with merge conflicts, approvals and the like - which is certainly true. However, our recent Dependabot work has shown us that the key is to have a controllable mechanism for granting a 'fast-track' to PRs from processes that have proven themselves to be reliable.

Ultimately you make a trade-off - does the benefit from having 'straight-through' processing for the 95% outweigh the cost of rolling-back/fixing the 5%? (In the Dependabot case, this trust is directly related to the trust you have in your own automated tests!).

Moreover, that 5% can be iteratively whittled down by bolstering existing processes with additional tests and/or other quality gates as the edge cases are discovered. Hence, the longer you run with (and maintain!) such a process, the more you lower the risk and raise the confidence in its reliability.

As the scenarios we've thought of have grown, it has seemed to me that this is a different category of automation, one not often talked about - at least not explicitly. We're more used to the idea of automating infrastructure provisioning, builds, deployments and releases.

Closing Thoughts

Getting back to the title of this post:

  • Do we need to think about the operations aspects of looking after our code repositories and the code within them? (i.e. as a separate concern to the git infrastructure and the functional side of the code within it)
  • Do you already do this? If so, how do you talk about it within your organisation?
  • Perhaps you think this is a false or contrived line to draw, which only serves to fracture the wider DevOps ethos?
  • Do you have any alternate name suggestions?

I'm interested to hear your thoughts - leave a comment below, or ping me on Twitter (@James_Dawson).

FAQs

What is CodeOps? The operational concerns of automating processes that manage/maintain your source code repositories and 'meta' aspects of the code within them.

James Dawson

Principal I

James is an experienced consultant with a 20+ year history of working across such wide-ranging fields as infrastructure platform design, internet security, application lifecycle management and DevOps consulting - both technical and in a coaching capacity. He enjoys solving problems, particularly those that reduce friction for others or otherwise make them more effective.