# ask-metaflow
Does anybody have a GitOps deployment process for their Metaflow Flows that they like and wouldn't mind sharing notes on? My current org heavily utilizes ArgoCD and Helm for non-ML deployments. There is a monorepo where users check in their Helm values and their service (any source code, Dockerfile, etc.). The traditional CI/CD process here is:

1. Build and publish the Docker image.
2. Inject the new Docker tag into the Helm values.
3. Use Helm to package up a copy of our internal base Helm chart and the user's Helm values.
4. Publish that Helm package to a container registry.
5. ArgoCD (app of apps) then monitors the container registry for changes, hydrates the Helm chart, and installs the changes.
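Step 2 above can be sketched as a small, stdlib-only function, assuming the Helm values have already been parsed into a dict (e.g. with PyYAML, left out here); the function and key names (`image.repository`, `image.tag`) are common Helm conventions, not a fixed API:

```python
def inject_image_tag(values: dict, repository: str, tag: str) -> dict:
    """Hypothetical sketch of CI step 2: return a copy of the parsed Helm
    values with image.repository and image.tag pointing at the new build.
    In a real pipeline the dict would come from loading values.yaml."""
    updated = dict(values)
    image = dict(updated.get("image", {}))
    image["repository"] = repository
    image["tag"] = tag
    updated["image"] = image
    return updated

values = {"replicaCount": 1, "image": {"repository": "registry.local/app", "tag": "old"}}
new_values = inject_image_tag(values, "registry.local/app", "sha-abc123")
```

The original values dict is left untouched so the CI step stays idempotent and easy to test.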
Is anybody doing something similar? Right now my game plan is to build out a new Helm chart specifically for Metaflow Flows. Maybe I'll crawl the user's directory and publish their Python scripts as ConfigMaps? I think for single-file flows this is very doable; my concern is when users have a tree structure that their flow depends on, like:
```
| - src
    | - __init__.py
    | - module1.py
    | - submodule
        | - module2.py
| - flow.py
```
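The "crawl the directory and publish as ConfigMaps" idea could look roughly like this stdlib-only sketch; the function name and the `__` path-flattening convention are my own (ConfigMap keys cannot contain `/`, only alphanumerics, `-`, `_`, and `.`):

```python
import os

def flow_configmap(name: str, root: str) -> dict:
    """Hypothetical sketch: bundle every .py file under `root` (including
    nested trees like src/submodule/) into one Kubernetes ConfigMap
    manifest, flattening relative paths into valid keys with '__'."""
    data = {}
    for dirpath, _dirs, filenames in os.walk(root):
        for fn in sorted(filenames):
            if fn.endswith(".py"):
                path = os.path.join(dirpath, fn)
                key = os.path.relpath(path, root).replace(os.sep, "__")
                with open(path) as f:
                    data[key] = f.read()
    return {
        "apiVersion": "v1",
        "kind": "ConfigMap",
        "metadata": {"name": name},
        "data": data,
    }
```

One design caveat: a ConfigMap is capped at roughly 1 MiB, so a large source tree would need to be split across several ConfigMaps or shipped as an image/package instead.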
maybe tangentially related, we have this `flowproject` template that includes a simple `deploy` script which gets triggered e.g. by GitHub Actions to deploy a corresponding Metaflow branch on Argo Workflows on Outerbounds. Images get baked on the fly by Fast Bakery as a part of CI/CD, but you could bake them separately too
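A `deploy` script along these lines can be a thin wrapper over Metaflow's Argo Workflows integration. The CLI invocation below (`python flow.py --branch <name> argo-workflows create`) follows Metaflow's documented commands; the wrapper function names are hypothetical, not part of the template:

```python
import subprocess
import sys

def deploy_command(flow_file: str, branch: str) -> list:
    """Build the Metaflow CLI invocation that compiles a flow into an
    Argo Workflows template under the given Metaflow branch."""
    return [sys.executable, flow_file, "--branch", branch, "argo-workflows", "create"]

def deploy(flow_file: str, branch: str) -> None:
    # Hypothetical entry point, as a GitHub Actions step might call it;
    # check=True fails the CI job if the deployment errors out.
    subprocess.run(deploy_command(flow_file, branch), check=True)
```

In CI, `branch` would typically be derived from the git branch or PR, so each branch gets its own isolated deployment.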
when it comes to directory hierarchy, the existing guidance applies, but we are working on making it more flexible to include dependencies outside the hierarchy rooted at the flow - in particular custom Python packages
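To make the current rule concrete, here is a rough stdlib-only sketch (my own approximation, not Metaflow's actual implementation) of the default behavior: only suffix-matched files under the directory containing the flow file get packaged, which is why a dependency outside that root, such as a sibling `shared/` package, is not picked up today:

```python
import os

def default_package_files(flow_dir: str, suffixes=(".py",)) -> list:
    """Approximate sketch of the default packaging rule: walk the hierarchy
    rooted at the flow file's directory and collect files matching the
    given suffixes. Anything outside flow_dir is never reached."""
    included = []
    for dirpath, _dirs, files in os.walk(flow_dir):
        for fn in files:
            if fn.endswith(tuple(suffixes)):
                included.append(os.path.relpath(os.path.join(dirpath, fn), flow_dir))
    return sorted(included)
```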
stay tuned for updates on this front, hopefully in a few weeks!