# ask-metaflow
Hi friends. We're experiencing a problem when running many parallel instances of a flow that uploads files to S3. These flows are failing due to S3 `SlowDown` errors while using Metaflow's `s3c.put_many`. I see that the client is already supposed to handle this, but I guess that if there are too many concurrent flows running, the built-in backoff isn't sufficient. Is there a pattern you would recommend to work around this? Would it make sense to create a feature request to expose the number of retries as a kwarg?
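As a stopgap while waiting for an answer, one pattern is to add an outer client-side retry with exponential backoff and full jitter around the `put_many` call, on top of whatever retries the Metaflow client does internally. The random jitter spreads concurrent flows apart so they stop retrying in lockstep. This is just a sketch: the `is_retryable` predicate, the `"SlowDown" in str(e)` check, and the commented usage are assumptions, not a confirmed Metaflow recipe.

```python
import random
import time


def retry_with_backoff(fn, retries=7, base=1.0, cap=60.0,
                       is_retryable=lambda e: True):
    """Call fn(), retrying failures with capped exponential backoff.

    Uses "full jitter": each sleep is a uniform random draw up to the
    capped exponential delay, which de-synchronizes parallel workers
    that all hit a throttling error at the same moment.
    """
    for attempt in range(retries):
        try:
            return fn()
        except Exception as e:
            # Re-raise on the last attempt or for non-retryable errors.
            if attempt == retries - 1 or not is_retryable(e):
                raise
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))


# Hypothetical usage inside a step (assumes `chunks` is the list of
# (key, path) pairs and that SlowDown surfaces in the exception text):
#
# with S3(run=self) as s3c:
#     retry_with_backoff(
#         lambda: s3c.put_many(chunks),
#         is_retryable=lambda e: "SlowDown" in str(e),
#     )
```

Splitting one giant `put_many` into smaller batches, each wrapped this way, also reduces the blast radius of a single throttled request.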