# ask-metaflow
Hi, we are using Argo Workflows to deploy Metaflow on our Amazon EKS cluster. Each node has a 50 GB volume. Our flow has a foreach branch, and in one particular use case the foreach fans out over a list of 140 entries. We also use cards to store some artifact information. What we observed is that the Argo workflow fails with a disk-pressure issue. The error appeared after we added a couple of SQL query strings to the cards, and simply removing them fixed it. Can you please help me understand why this happens, and how cards impact disk usage? How is the volume allocated to each container? Is it possible for all of the containers to share the 50 GB? 50 GB still seems like plenty of space and should be sufficient.