# ask-metaflow
I'm looking for feedback on how others have handled defining scheduled Argo workflows as code. I'm the infrastructure lead for a team of ~20 data scientists, so I'm hoping for a solution that lets the scientists schedule workflows with minimal complexity. Here are our two current trains of thought:

• Create a pipeline that authenticates with the Kubernetes cluster so that we can run things like

```
python example_flow.py --with retry argo-workflows create
```

and have the pipeline deploy that automatically (see the first sketch below).

• Run the Metaflow command locally, and copy/paste the Argo YAML spec into the repo. Use an Argo API key + CI/CD to deploy the spec (see here).

I suspect the second option is the more flexible one, because then we can modify the YAML spec to do things like post alerts when jobs fail (see the second sketch below).
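For the first option, here's a minimal sketch of what the CI job could run. It assumes the runner is already authenticated against the Kubernetes cluster (e.g. via a kubeconfig the pipeline provides) and just shells out to the same command a scientist would run locally; `example_flow.py` is a placeholder:

```python
# Minimal CI deploy step: shell out to the same Metaflow command a
# scientist would run locally. Assumes the runner already has cluster
# credentials (e.g. a mounted kubeconfig).
import subprocess
import sys

def deploy(flow_file: str) -> None:
    # Equivalent to: python example_flow.py --with retry argo-workflows create
    result = subprocess.run(
        [sys.executable, flow_file, "--with", "retry", "argo-workflows", "create"],
        check=False,
    )
    if result.returncode != 0:
        sys.exit(f"Deploy failed for {flow_file}")

if __name__ == "__main__":
    # Flow files could be discovered from the repo; hard-coded here for brevity.
    deploy("example_flow.py")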
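```

For the second option, this is a rough sketch of the kind of YAML post-processing it enables: wiring an Argo exit handler into the exported spec so a failed workflow posts a Slack alert. It assumes the export is a plain Workflow/WorkflowTemplate (a scheduled CronWorkflow would nest the same fields under `spec.workflowSpec`); the file name, template name, and webhook wiring are all placeholders:

```python
# Patch the exported Argo spec with an exit handler that alerts on failure.
# The onExit / exit-handler mechanism is standard Argo Workflows; everything
# named here (file, template name, webhook env var) is a placeholder.
import yaml  # PyYAML

ALERT_TEMPLATE = {
    "name": "notify-on-failure",
    "container": {
        "image": "curlimages/curl:latest",
        "command": ["sh", "-c"],
        "args": [
            # Only post when the workflow did not succeed.
            'test "{{workflow.status}}" = "Succeeded" || '
            'curl -s -X POST -d \'{"text": "Workflow {{workflow.name}} failed"}\' '
            "$SLACK_WEBHOOK_URL"  # in real CI, inject this via a secret/env entry
        ],
    },
}

with open("example_flow.yaml") as f:
    spec = yaml.safe_load(f)

# Register the exit handler and make it run whenever the workflow ends.
spec["spec"]["onExit"] = ALERT_TEMPLATE["name"]
spec["spec"].setdefault("templates", []).append(ALERT_TEMPLATE)

with open("example_flow.yaml", "w") as f:
    yaml.safe_dump(spec, f, sort_keys=False)
```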