OpenGitOps

(opengitops.dev)

41 points | by locknitpicker 18 hours ago

45 comments

  • cedws 13 hours ago

    I’m less positive about GitOps. GitOps is a lie. I’ve never seen software that actually manages to adhere to the ‘repo is the state’ principle. Inevitably you push something, it doesn’t work, now you have to do something out of band or revert to get it working again. Sometimes you revert and it’s still not fixed…

    Looking at you Argo CD.

    • gpi 9 hours ago

      You may have to use Kargo as well, also by the makers of Argo

  • almaight 11 hours ago

    Both FluxCD and ArgoCD, which build GitOps on CRDs, share a serious flaw: will these tools break when your Kubernetes cluster needs an upgrade? I've run into incompatibility issues even with plain Helm (which broke due to changes in the HPA API), let alone tools that hard-depend on CRDs. GitLab CI + Pulumi/Kusion is the most stable solution.

    • jzebedee 2 hours ago

      Interesting. I've used Pulumi but this is the first I've heard of Kusion.

      From a quick look, it still requires all of the resource specification to be present in the AppConfiguration, and it's written in their own DSL called KCL. Is there more to the use case that I'm missing?

      It seems like if I'm already specifying the details of the entire workload, I'd either use Terraform, where I probably already know the DSL, or Pulumi, where I could skip the DSLs entirely.
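
      For comparison, a Pulumi program mixing a cloud resource and a Kubernetes workload is just ordinary Python - roughly like this sketch (the names and image tag are made up, and it assumes the AWS and Kubernetes providers are already configured):

        import pulumi
        import pulumi_aws as aws
        import pulumi_kubernetes as k8s

        # Hypothetical resources; a real program would pull these from config.
        bucket = aws.s3.Bucket("app-assets")

        deployment = k8s.apps.v1.Deployment(
            "web",
            spec={
                "selector": {"matchLabels": {"app": "web"}},
                "replicas": 2,
                "template": {
                    "metadata": {"labels": {"app": "web"}},
                    "spec": {"containers": [
                        {"name": "web", "image": "registry.example.com/web:1.2.3"},
                    ]},
                },
            },
        )

        pulumi.export("bucket_name", bucket.id)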

  • gpi 17 hours ago

    ArgoCD is the de facto GitOps standard now and has the lion's share of GitOps deployments.

    • XorNot 17 hours ago

      Which is funny because ArgoCD is...miserable.

      Like it just doesn't do anything other than clone a branch and run a blind kubectl apply - it'll happily wedge your cluster into a state requiring manual intervention.

      • FridgeSeal 15 hours ago

        Yeah I have to use it at work currently, and it’s not great. Personally I find FluxCD so much better.

        Less setup, faster, and in my experience so far, no “wedging the cluster in a bad state” which I’ve defs observed with Argo.

    • dijit 17 hours ago

      Last I saw, ArgoCD was heavily reliant on Kubernetes.

      Has this changed?

      (also, it seems like the site is heavily supporting ArgoCD)

      • master_crab 17 hours ago

        ArgoCD only works on kubernetes. That hasn’t changed.

      • gpi 17 hours ago

        The whole premise of opengitops is heavily reliant on kubernetes.

        • locknitpicker 16 hours ago

          > The whole premise of opengitops is heavily reliant on kubernetes.

          There's indeed a fair degree of short-sightedness in some GitOps proponents, who conflate their own personal implementation with the one true GitOps.

          Back in the real world, the bulk of cloud infrastructure covers resources that go well beyond applying changes to a pre-baked Kubernetes cluster. Any service running on the likes of AWS/Google Cloud/Azure/etc requires configuring plenty of cloud resources with whatever IaC platform they use, and Kubernetes operators neither cover those nor are a reasonable approach to the problem domain.

          • acedTrex 14 hours ago

            > and Kubernetes operators neither cover those nor are a reasonable approach to the problem domain.

            I mean Crossplane is a pretty popular k8s operator that does exactly that: create cloud infrastructure from K8s objects.

            • locknitpicker 13 hours ago

              > I mean Crossplane is a pretty popular k8s operator that does exactly that: create cloud infrastructure from K8s objects.

              If your only tool is a hammer then every problem looks like a nail. It's absurd that anyone would think it's a good idea to make their IaC setup, the one thing you want and need to be bootstrappable, require a full-blown K8s cluster already up-and-running with custom operators perfectly configured and working flawlessly. Madness.

              • acedTrex 12 hours ago

                It's more of a use case for large platform teams that want to automate and enable hundreds of teams with thousands of disparate cloud resources.

                You can have a small bit of Terraform to bootstrap Crossplane, then Crossplane for the other 99% of resources.

              • amluto 12 hours ago

                I hope someone somewhere has managed to run a K8s cluster on a bunch of EC2 instances that are themselves described as objects in that K8s cluster. Maybe the VPC is also an object in the cluster.

      • formerly_proven 17 hours ago

        The Argo project exclusively targets Kubernetes.

    • locknitpicker 16 hours ago

      > ArgoCD is the de facto GitOps standard now and has the lion's share of GitOps deployments.

      That only covers pull-based GitOps. Push-based GitOps doesn't require Kubernetes, let alone magical Kubernetes operators, only plain old CICD pipelines.

      • neurodyne 16 hours ago

        I often hear folks use terminology like “push GitOps”. But as far as I understand things, it’s only GitOps if you’re following the four principles described in TFA. Otherwise it’s just sparkling pipelines.

        • gitttnwmrlg 14 hours ago

          The problem with automatically applying whatever crap is stored in git is that you cannot reverse anything without a long, heavyweight process: clone the code, make a branch, create a PR, approve the PR, merge to master/main.

          That can take hours, and that’s not acceptable when PROD is burning.

          That's why most places don't dare sync to prod automatically.

          • locknitpicker 13 hours ago

            > The problem with automatically applying whatever crap is stored in git is that you cannot reverse anything without a long, heavyweight process: clone the code, make a branch, create a PR, approve the PR, merge to master/main.

            Nonsense. Your process is only as complex as you want it to be. Virtually all git providers allow you to revert commits from their GUI, and you can configure pipelines to allow specific operations to not require PRs.

            As a reference, in push-based GitOps it's pretty mundane to configure your pipelines to automatically commit to other git repos without requiring PRs. This is the most basic aspect of this whole approach. I mean, think about it: if your goal is to automate a process then why would you go to great lengths to prevent yourself from automating it?
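
            To make it concrete, the job that bumps the deployed version is little more than this (a rough Python sketch; the repo URL, file layout and token handling are placeholders for whatever your CI provides):

              import pathlib
              import subprocess

              # Placeholder values; a real job reads these from CI variables/secrets.
              DEPLOY_REPO = "https://ci-bot:TOKEN@git.example.com/platform/deploy-config.git"
              NEW_IMAGE = "registry.example.com/web:1.2.4"

              def bump_image(workdir: str = "deploy-config") -> None:
                  subprocess.run(["git", "clone", "--depth", "1", DEPLOY_REPO, workdir], check=True)
                  manifest = pathlib.Path(workdir, "prod", "web.yaml")
                  # Naive line replace; a real job would use a YAML-aware edit.
                  lines = [
                      f"    image: {NEW_IMAGE}" if line.strip().startswith("image:") else line
                      for line in manifest.read_text().splitlines()
                  ]
                  manifest.write_text("\n".join(lines) + "\n")
                  subprocess.run(["git", "-C", workdir, "commit", "-am", f"deploy {NEW_IMAGE}"], check=True)
                  subprocess.run(["git", "-C", workdir, "push"], check=True)

              if __name__ == "__main__":
                  bump_image()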

        • locknitpicker 13 hours ago

          > I often hear folks use terminology like “push GitOps”. But as far as I understand things, it’s only GitOps if you’re following the four principles described in TFA.

          Not quite. You hear gatekeeping from some ill-advised people who either are deeply invested in pushing specific tools or a type of approach, or fool themselves into believing they know the one true way.

          Meanwhile, people who do push-based GitOps just get stuff done by configuring the pipelines to do very simple and straightforward stuff: package and deliver deployment units, update references to which deployment units must be deployed, and actually deploy them.

          The ugly truth is that push-based GitOps is terribly simple to pull off, and doesn't require any fancy tools or software or Kubernetes operators. Configuration drift and automatic reconciliation, supposedly the biggest selling point of pull-based GitOps, is a non-issue because deployment pipelines are idempotent. You can even run them as cron jobs, if you'd like to. But maintaining trivial, single-stage pipelines does not make an impressive CV.
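
          Concretely, the "reconciliation" half fits in a script you could run from cron (a sketch only; the repo URL is made up and kubectl access is assumed to already be configured):

            import subprocess
            import tempfile

            # Made-up repo; cluster credentials are assumed to be in place.
            DEPLOY_REPO = "https://git.example.com/platform/deploy-config.git"

            def sync(environment: str = "prod") -> None:
                with tempfile.TemporaryDirectory() as workdir:
                    subprocess.run(["git", "clone", "--depth", "1", DEPLOY_REPO, workdir], check=True)
                    # kubectl apply is idempotent: re-running it converges the cluster on
                    # whatever is committed, so "drift correction" is just running it again.
                    subprocess.run(["kubectl", "apply", "-k", f"{workdir}/{environment}"], check=True)

            if __name__ == "__main__":
                sync()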

          > Otherwise it’s just sparkling pipelines.

          Yeah, simple systems. God forbid something is reliable, easy to maintain, robust, and straightforward. We need full-blown Kubernetes operators running on timers instead, don't we? It's like digging moats around their job stability. It's madness.

  • Carrett 13 hours ago

    Does it really make sense to use Kubernetes in 2026? Especially in the cloud? I think it’s just adding unnecessary layers, increasing operational debt, and complicating the developer experience.

    • nwmcsween 7 hours ago

      The whole business model of cloud providers is to charge a premium for their ecosystem and to create lock-in by making everything interdependent. A Kubernetes deployment could cost 100k/yr; a similar cloud deployment would be ~1m/yr.

    • 0xbadcafebee 13 hours ago

      It's not a black and white question. For you it doesn't make sense, for others it makes tons of sense.

    • locknitpicker 13 hours ago

      > Does it really make sense to use Kubernetes in 2026? Especially in the cloud?

      I can't tell if your comment is a joke or not.

      • pfix 11 hours ago

        That in itself is an answer :D

      • antonvs 11 hours ago

        It’s a joke whether or not it’s intended as one.

  • jpillora 15 hours ago

    The problem with git ops only manifests after it’s become a standard within an org:

    With 1 git ops pipeline, it’s fine, it’s the human merge gate, it’s doing its job protecting downstream

    With multiple git ops pipelines however, they start to get in the way of progress - especially when they need to be joined in series

    The better approach is to build API-first and then, optionally, add an API client into your git pipeline

    • cortesoft 15 hours ago

      For me, the key trait that makes gitops valuable is that it is declarative rather than imperative. The state of the git repo is the desired state of the system.

      The key point there is that we don't have to worry about the existing state of the system. We don't have to worry that the api call made in your git pipeline actually failed, or that something else changed the system before your api call, and your state has drifted.

      You can't replicate that by adding an API client to your git pipeline.

      I am not sure how you end up with multiple git ops pipelines; ideally, you shouldn't be having multiple gitops pipelines. You should have a git repo that is defining the state, and then some reconciliation system that is checking the state of the system and the state of the repo, and taking any corrective actions needed to make the system state match the git state.
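
      In toy Python terms, the reconciler conceptually does something like this (a sketch; the dicts stand in for reading the repo and querying the live system):

        def reconcile(desired: dict, actual: dict) -> dict:
            """Return the corrective actions needed to make `actual` match `desired`."""
            actions = {}
            for name, spec in desired.items():
                if actual.get(name) != spec:
                    actions[name] = ("create_or_update", spec)
            for name in actual:
                if name not in desired:
                    actions[name] = ("delete", None)
            return actions

        # Toy drift: someone scaled `web` by hand and left an orphan `debug` deployment.
        desired = {"web": {"replicas": 3}, "api": {"replicas": 2}}
        actual = {"web": {"replicas": 5}, "debug": {"replicas": 1}}
        print(reconcile(desired, actual))
        # {'web': ('create_or_update', {'replicas': 3}),
        #  'api': ('create_or_update', {'replicas': 2}),
        #  'debug': ('delete', None)}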

    • dijit 15 hours ago

      > With multiple git ops pipelines however, they start to get in the way of progress - especially when they need to be joined in series

      Definitely, that's why systems like Zuul exist.

      They're esoteric and require a lot of engineering discipline and patience - but in my experience most people who reach for GitOps aren't doing it for a sense of "everything as code" (for the auditability and theoretical reproducibility of it); it's because they think it will allow them to go faster, and a tool like Zuul is hard to learn and will intentionally slow you down.

      Because slow is smooth, and smooth is fast.

      • mroche 14 hours ago

        Would you mind elaborating on this more, describing the differences and how tools like Zuul introduce degrees of friction that result in smooth operation and pipelines?

        I know my phrasing may come off wrong; I apologize for that. But I'm asking genuinely; I've only ever seen Zuul in the wild in the Red Hat and OpenStack ecosystems.

        • dijit 13 hours ago

          Right, so Zuul is properly interesting if you're dealing with multi-repo setups and want to test changes across them before they merge; that's the key bit that something like GitLab CI doesn't really do.

          The main thing with Zuul is speculative execution. Say you've got a queue of patches waiting to merge across different repos. Zuul will optimistically test each patch as if all the patches ahead of it in the queue have already merged.

          So if you've got patches A, B, and C queued up, Zuul tests:

          * A on its own

          * B with A already applied

          * C with both A and B applied

          If something fails, Zuul rewinds and retests without the failing patch. This means you're not waiting for A to fully merge before you can even start testing B - massive time saver when you've got lots of changes flowing through. With GitLab CI, you're basically testing each MR in isolation against the current state of the target branch. If you've got interdependent changes across repos, you end up with this annoying pattern:

          * Merge change in repo A

          * Wait for it to land

          * Now test change in repo B that depends on it

          * Merge that

          * Now test change in repo C...

          It's serial and slow, and you find out about problems late. If change C reveals an issue with change A, you've already merged A ages ago.

          Zuul also has this concept of cross-repo dependencies built in. You can explicitly say "this patch in repo A depends on that patch in repo B" and Zuul will test them together. GitLab CI can sort of hack this together with trigger pipelines and artifacts, but it's not the same thing... you're still not getting that speculative testing across the dependency tree.

          The trade-off is that Zuul is significantly more complex to set up and run. It's designed for the OpenStack-style workflow where you've got dozens of repos and hundreds of patches in flight. For a single repo or even a handful of loosely-coupled repos, GitLab CI (and its ilk) is probably fine and much simpler. But once you hit that multi-repo, high-velocity scenario, Zuul starts to make proper sense. Yet nobody's using it except hardcore foundational infrastructure providers.
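
          If it helps, the gate behaviour boils down to something like this toy Python (grossly simplified - the real thing runs the speculative builds in parallel and rewinds - and `passes` stands in for actually running the jobs):

            def gate(queue, passes):
                """Test each queued change on top of everything still ahead of it;
                eject failures so the changes behind them are tested without them."""
                merged = []
                for change in queue:
                    if passes(merged + [change]):
                        merged.append(change)
                    else:
                        print(f"ejected {change}")
                return merged

            # Toy queue: C only works once A is in, BAD always fails its jobs.
            def passes(changes):
                if "BAD" in changes:
                    return False
                return "C" not in changes or "A" in changes

            print(gate(["A", "BAD", "C"], passes))  # ejects BAD, merges ['A', 'C']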

          • locknitpicker 13 hours ago

            > Right, so Zuul is properly interesting if you're dealing with multi-repo setups and want to test changes across them before they merge; that's the key bit that something like GitLab CI doesn't really do.

            I'm not sure about that. Even when we ignore plain old commits pushed by pipeline jobs, GitLab does support multi-project pipelines.

            https://docs.gitlab.com/ci/pipelines/downstream_pipelines/#m...

            • dijit 11 hours ago

              I'm aware of that functionality.

              GitLab's multi-project pipelines trigger downstream jobs, but you're still testing each MR against the current merged state of dependencies.

              Zuul's whole thing is testing unmerged changes together.

              You've got MR A in repo 1, MR B in repo 2 that needs A, and MR C in repo 3 that needs B... all unmerged. Zuul lets you declare these dependencies and tests A+B+C as a unit before anything merges. Plus it speculatively applies queued changes so you're not serialising the whole lot.

              GitLab has the mechanism to connect repos, but not the workflow for testing a DAG of unmerged interdependent changes. You'd need to manually coordinate checking out specific MR branches together, which is exactly the faff Zuul sorts out.

  • zaps 13 hours ago

    brb, registering openjjops.org for when Jujutsu overtakes Git as the default scm

  • gitttnwmrlg 14 hours ago

    What’s the (practical) difference between section 3 and 4? Please explain.

    • bo0tzz 14 hours ago

      The system should undo state drift even if a run hasn't been prompted by changes to the upstream definitions in the repository

  • coredog64 18 hours ago

    Serious question: How do organizations deal with having git on the critical path for deployment? Current employer actually prohibits this due to frequent outages in the git plant.

    • master_crab 16 hours ago

      Git is just the repo store.

      Everything you are referring to is the CI/CD pipeline. GitHub Actions, GitLab Runners, ArgoCD: they can all do some sort of GitOps. Those dependencies existed before GitOps anyway, so nothing new is being added.

    • cortesoft 15 hours ago

      We use flux for our gitops, and it is easy to bypass if we need to in an emergency.

      You can disable reconciliation and then manually apply a change if your git repo is unavailable.

    • calgoo 17 hours ago

      GitLab on an internal network works quite well. You can still use K8s / EKS / AKS for runners etc., just run it all on the internal network. I'm always surprised at how many orgs use public platforms for their code and CI/CD.

    • grayhatter 17 hours ago

      > Current employer actually prohibits this due to frequent outages in the git plant.

      Even without knowing what you mean by "git plant", I can tell you you're holding it wrong.

      I'd like to help, but I can't offer useful suggestions if I have no idea what you're even doing, or prohibiting.

    • numbsafari 17 hours ago

      Ummm… GitHub is not git. If you must, keep your git repos stored locally and simply use webhooks to keep them synced whenever changes are merged via your forge of choice. You can, if necessary, make updates to your locally hosted repo in the event of an outage at the forge, but you'll need a procedure to "sync back" any changes made during the outage.

      Fortunately, the whole thing is git based, so you have all the tools you need to do it.
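
      A minimal version of that webhook-driven sync might look like this (a sketch; the mirror path and forge URL are made up, and it trusts every POST, which you obviously wouldn't in production):

        import subprocess
        from http.server import BaseHTTPRequestHandler, HTTPServer

        # Hypothetical local mirror, created once with:
        #   git clone --mirror https://github.example.com/org/app.git /srv/git/app.git
        MIRROR_PATH = "/srv/git/app.git"

        class SyncHook(BaseHTTPRequestHandler):
            def do_POST(self):
                # On any push event from the forge, refresh the local mirror.
                self.rfile.read(int(self.headers.get("Content-Length", 0)))
                subprocess.run(["git", "-C", MIRROR_PATH, "fetch", "--prune", "origin"], check=False)
                self.send_response(204)
                self.end_headers()

        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 8080), SyncHook).serve_forever()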

    • antonvs 11 hours ago

      > frequent outages in the git plant.

      What are you talking about? And whatever it is, it's wrong.

    • locknitpicker 16 hours ago

      > Serious question: How do organizations deal with having git on the critical path for deployment?

      You mean like storing source code? CICD pipeline definitions? Container image specs? IaC manifests? Service configuration files?

      I don't think that arbitrarily drawing the line at which package version is being deployed is a rational position to take. I mean, look at npm's package.json. Is it OK to specify which dependency versions you bundle with your software, but unheard of to specify which version you're deploying?

      The only drawback I see in GitOps is that CICD pipelines are still lagging way behind on providing adequate visualization strategies to monitor the current state of multi-environment deployments.