most-analyst-45184
07/18/2025, 6:32 AM

hallowed-soccer-94479
07/17/2025, 5:50 PM

hundreds-wire-22547
07/17/2025, 5:43 PM

hundreds-wire-22547
07/17/2025, 5:08 PM

red-accountant-23764
07/17/2025, 12:38 PM
I want to use @trigger_on_finish. The flows that I want to base the trigger on can change, so I would like to specify the flow using a command line argument (or some other way of supplying arguments). Basically I want to do something like this:

    parser = argparse.ArgumentParser()
    parser.add_argument("--flow")
    args = parser.parse_args()

    @trigger_on_finish(flow=args.flow)
    class MyFlow(FlowSpec):
        @step
        def some_step(self):
            ...

I am deploying the pipelines to Airflow, so I don't think I can use the Runner to specify these arguments. When trying to run

    python my_flow.py --flow flow_one airflow create my_dag.py

I get: error: unrecognized arguments: airflow create my_dag.py

Is there a way to specify arguments to flow decorators from outside of the Python script?

astonishing-train-18397
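One hedged workaround (not from the thread itself): decorator arguments are evaluated when the module is imported, so they can be read from an environment variable instead of argparse, which keeps Metaflow's own CLI arguments intact. A minimal sketch; the variable name TRIGGER_FLOW is illustrative, not a Metaflow convention:

```python
import os

# Read the upstream flow name from the environment at import time,
# falling back to a default when the variable is unset.
UPSTREAM_FLOW = os.environ.get("TRIGGER_FLOW", "flow_one")

# In the flow file this value would then feed the decorator, e.g.:
#
#   @trigger_on_finish(flow=UPSTREAM_FLOW)
#   class MyFlow(FlowSpec):
#       ...
#
# and deployment becomes:
#
#   TRIGGER_FLOW=flow_one python my_flow.py airflow create my_dag.py
```

Newer Metaflow releases also ship Config objects (`--config` on the command line) for parameterizing flows and decorators at deploy time, which may be the more idiomatic route.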
07/17/2025, 3:38 AM

narrow-garden-54875
07/16/2025, 3:13 PM
Rather than running the argo-workflows create command, we instead have to run a metaflow sync client in each of our deployment envs, making metaflow a CD-pattern outlier.

great-egg-84692
07/15/2025, 9:27 PM
foreach?

alert-truck-95951
07/14/2025, 9:05 PM
uv feature. How do we use it in conjunction with Google Artifact Registries? We have some private Python artifacts that we would use in our flows.

limited-monitor-27839
07/14/2025, 10:23 AM

full-kilobyte-32033
07/11/2025, 1:09 AM

bulky-portugal-95315
07/10/2025, 3:02 AM
On metaflow==2.15.18 and micromamba v1.5.10. I haven't seen this in a while, but I saw some mentions of micromamba 2 being a problem; however, I'm on 1.5.10. Any other things worth checking?

    critical libmamba Download error (1) Unsupported protocol [s3://metaflow-bucket/metaflow/conda_env/packages/conda/packages/conda/conda.anaconda.org/conda-forge/linux-64/libnsl-2.0.1-hb9d3cd8_1.conda/libnsl-2.0.1-hb9d3cd8_1.tar.bz2/6786aa05708c1e034b22564380b19f75/libnsl-2.0.1-hb9d3cd8_1.tar.bz2]
    Protocol "s3" not supported
hallowed-soccer-94479
07/09/2025, 8:04 PM
Seeing an issue on 2.15.17 where objects in the config are not available in future steps when they are assigned as instance variables.

hundreds-rainbow-67050
07/08/2025, 3:42 PM

few-dress-69520
07/03/2025, 7:56 AM
    metaflow environment resolve --arch osx-arm64 --arch linux-64 --force -r requirements.txt --alias my_test_env

There is the option of using fetch_at_exec to determine the name of the named environment at runtime. Then I could resolve the environments for both architectures, give them different aliases, e.g. my_test_env:<arch>, and decide from runtime information which one is the correct one. I'm just wondering whether this is an appropriate way of doing it. Is there a better way?

dry-umbrella-11948
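The runtime-alias idea can be sketched as follows, assuming the two environments were resolved with the aliases my_test_env:osx-arm64 and my_test_env:linux-64 (the function name env_alias is hypothetical):

```python
import platform
import sys

def env_alias(base: str = "my_test_env") -> str:
    """Pick the per-architecture environment alias for the current interpreter."""
    if sys.platform == "darwin" and platform.machine().lower() in ("arm64", "aarch64"):
        arch = "osx-arm64"
    else:
        arch = "linux-64"
    return f"{base}:{arch}"
```

The returned string would then be the value handed to fetch_at_exec-style resolution at runtime.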
07/02/2025, 3:01 PM

lively-lunch-9285
07/02/2025, 3:04 AM

brief-kite-90012
06/30/2025, 9:29 AM
Trying to run metaflow_ray on Argo Workflows. I've installed JobSet into the same namespace as my Argo Workflows runner namespace and had the permission added to my service account, but I'm hitting an error at the moment about the webhook service path. Is there any way I can modify this value? I can't seem to find it in the Metaflow configs or anywhere else. Thanks in advance!

straight-dog-3982
06/27/2025, 7:21 AM
    metaflow-ui:
      uiBackend:
        metaflowDefaultDatastore: "s3"
        metaflowDatastoreSysRootS3: "s3://metaflow"
        metaflowS3EndpointURL: "http://minio.caic:9000"

plus AWS_SECRET_* values through env. Previously I had DefaultDatastore set to "local". I have restarted both deployments for metaflow-ui. Now I'm running yet another flow, and in the UI it still shows me:

    Since this run uses local datastore instead of cloud services, some information may be incomplete.

And of course there is no data in my minio bucket. Is my s3 configuration correct? Do I need to drop the database, or deploy the backend service too, to see the effect?

ripe-car-38698
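For comparison, a sketch of the client-side Metaflow environment variables that correspond to those helm values (values copied from the question; these are standard Metaflow configuration names). Note that runs already recorded under the local datastore will keep showing the local-datastore banner; only runs started after the switch should pick up s3:

```python
import os

# Client-side counterparts of the uiBackend helm values above.
os.environ["METAFLOW_DEFAULT_DATASTORE"] = "s3"
os.environ["METAFLOW_DATASTORE_SYSROOT_S3"] = "s3://metaflow"
os.environ["METAFLOW_S3_ENDPOINT_URL"] = "http://minio.caic:9000"
```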
06/26/2025, 5:06 PM
    URL/api/flows/ComponentDemoFlow/runs/4543/steps/start/tasks/222732/cards/blank/0490a0de7c024207af081c2187dcea5d?embed=true

returns

    [Errno 2] No such file or directory: '/root/services/ui_backend_service/ui/index.html'

• We are using a custom DNS behind a VPN over http
• Recent versions for all services: metaflow-ui.uiBackend.image.tag=2.4.13-2-g70af4ed, metaflow-service.image.tag=2.4.13-2-g70af4ed, metaflow-ui.uiStatic.image.tag=1.3.5-123-g95238f8-obp

Thank you!

astonishing-train-18397
06/25/2025, 7:32 PM

straight-dog-3982
06/25/2025, 10:51 AM
I set METAFLOW_DEFAULT_DATASTORE=local as I did not find out how to set up credentials for s3/minio. Now, I port-forward the service (port 8080) to localhost, and with METAFLOW_SERVICE_URL=http://localhost:8080/ I'm able to run the example flow; from the logs on the service pod it seems it is connecting to the remote server, which is nice. I can even check the flow status on port 8083, so it seems this is logged correctly.

OK, but now I cannot see anything in the UI. The UI (running through ingress) just shows "Error loading data" on an empty dashboard page, with nothing about my flow and nothing more specific about the error, just "Unknown error" / "generic-error".

I assume some part of my stack is misconfigured, but I can't see any hints anywhere... Is it complaining about the connection to the backend? Or is the backend complaining about actual access to the datastore? I see nothing relevant in the logs...

lively-lunch-9285
06/23/2025, 8:43 PM

bulky-portugal-95315
06/23/2025, 6:57 PM

square-wire-39606
06/23/2025, 5:49 PM

ambitious-evening-58240
06/23/2025, 7:02 AM
@step?

ripe-car-38698
06/20/2025, 7:03 PM
/tmp seems to be well used, as I can see that some data has been written: /tmp/metaflow_client/gs.gs:/argo-workflows-artifacts-pilot.ComponentDemoFlow/98. Here is the error in the thread. Thank you!

chilly-monkey-19638
06/20/2025, 6:23 AM

lively-lunch-9285
06/19/2025, 3:46 AM
• I set an artifact on self., e.g. self.x
• and I want to access it on current., e.g. current.x (accessing it in the same flow run)

But for the life of me, I can't find a way to do that. Could anyone help me out?

acoustic-van-30942
06/17/2025, 4:13 AM
    The node was low on resource: ephemeral-storage.
    Threshold quantity: 32204283974, available: 30875356Ki.
    Container main was using 51664160Ki, request is 10240M, has larger consumption of ephemeral-storage.

To resolve this, would I need to increase disk space or tmpfs storage?
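A quick unit conversion of the numbers in that eviction event makes the situation clearer (assuming Kubernetes quantity notation: Ki = KiB, M = 10^6 bytes):

```python
# Convert the quantities from the eviction event to GiB.
GIB = 1024 ** 3

threshold_gib = 32204283974 / GIB      # eviction threshold, bytes -> ~30.0 GiB
available_gib = 30875356 * 1024 / GIB  # 30875356Ki                -> ~29.4 GiB
usage_gib     = 51664160 * 1024 / GIB  # 51664160Ki                -> ~49.3 GiB
request_gib   = 10240 * 10**6 / GIB    # 10240M (decimal MB)       -> ~9.5 GiB

# The container consumed ~49 GiB against a ~9.5 GiB request, exceeding both
# the request and what the node had left, so it was the eviction candidate.
```

So the fix is to raise the step's ephemeral-storage (node disk) request rather than tmpfs; if the step runs via Metaflow's @kubernetes decorator, its disk parameter (in MB) is the usual knob for this.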