# ask-metaflow
Hello, I am looking for some advice on best practices for continuous deployment workflows. In my repository we have individual data science projects, each in its own directory, all handled by a single CD workflow. Some flows need to be manually triggered after being created, and I want to know if there is a proper way to detect this. Right now I detect the need for a manual trigger through some hacky grep stuff, which is not great. Was wondering if there were any best practices for this. Thanks!
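For the detection part, one sturdier option than grep is to parse each flow file with Python's `ast` module and look for Metaflow's flow-level trigger decorators (`@trigger`, `@trigger_on_finish`, `@schedule`); a flow carrying none of them presumably needs a manual trigger. A minimal sketch, assuming flows are plain Python files and the decorator names are not aliased on import:

```python
import ast

# Flow-level decorators that imply the flow starts without manual action.
# This set is an assumption; extend it to match the decorators your flows use.
TRIGGER_DECORATORS = {"trigger", "trigger_on_finish", "schedule"}

def needs_manual_trigger(source: str) -> bool:
    """Return True when no class in `source` carries a trigger/schedule decorator."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if not isinstance(node, ast.ClassDef):
            continue
        for dec in node.decorator_list:
            # Handles both the bare @trigger and the called @trigger(event=...) forms.
            target = dec.func if isinstance(dec, ast.Call) else dec
            name = (
                target.attr
                if isinstance(target, ast.Attribute)
                else getattr(target, "id", None)
            )
            if name in TRIGGER_DECORATORS:
                return False
    return True
```

For example, `needs_manual_trigger("class TrainFlow:\n    pass\n")` returns `True`, while a source containing `@trigger(event='data_ready')` above the class returns `False`. Unlike grep, this ignores comments, strings, and commented-out decorators.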
hey Zander. Typically folks do something like this:
1. Develop code in a repo - it sounds like you might want to have a single repo with multiple projects in separate directories
2. When you want to deploy something, create a commit and (optionally) a pull request
3. Pull request triggers a CI/CD worker like GitHub Actions, which deploys the workflow to a production scheduler like Argo Workflows (or Step Functions)

Once deployed, you can trigger a workflow either directly programmatically (or on the CLI) or through an event
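The CI/CD step can be sketched as a GitHub Actions workflow that runs Metaflow's `argo-workflows create` command on pull requests. Everything below is illustrative: the file path, project directory, and Python version are assumptions, and a real setup would also need credentials for your Metaflow deployment.

```yaml
# Hypothetical workflow file, e.g. .github/workflows/deploy-flows.yml
name: deploy-flows
on:
  pull_request:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install metaflow
      # Deploy one project's flow to the production scheduler (Argo Workflows here);
      # "projects/churn_model" is a placeholder for one of your project directories.
      - run: python flow.py argo-workflows create
        working-directory: projects/churn_model
```

In practice you would repeat (or matrix over) the deploy step per project directory, which fits the single-repo, multiple-projects layout you described.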
this article/video shows how to do the above on Outerbounds when using Metaflow, but the same principle applies in OSS too. Down the road you can make the setup even more robust by deploying pull requests as separate branched deployments for A/B testing etc., but that's optional. Read more here if you are curious about the big picture