# ask-metaflow
Hi all, what would be the correct way to parallelize downloads of some image files from S3 using Metaflow's S3 client? They need to be saved to a local folder. So far I have something like this:
from metaflow import S3
import os
with S3(s3root='s3://metaflow-s3-ampsdemo/test_data/sample_training/') as s3:
    os.makedirs('sample_training', exist_ok=True)
    # get_all() fetches every object under s3root
    for obj in s3.get_all():
        # images are binary, so write bytes ('wb' + obj.blob), not text
        with open(f'sample_training/{obj.key}', 'wb') as f:
            f.write(obj.blob)
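For context, Metaflow's `get_all()` already downloads objects in parallel under the hood, and each returned object exposes the already-downloaded local temp file via `obj.path`. So one option is to copy that temp file into the target folder rather than re-writing its contents, which is byte-safe for images. Below is a minimal sketch of just that copy step, decoupled from Metaflow so it can run standalone: `save_objects` and the `(key, temp_path)` pair format are hypothetical names, not Metaflow API.

```python
import os
import shutil

def save_objects(results, dest_dir="sample_training"):
    """Copy each (key, temp_path) pair into dest_dir, byte-for-byte."""
    os.makedirs(dest_dir, exist_ok=True)
    saved = []
    for key, tmp_path in results:
        dest = os.path.join(dest_dir, key)
        # keys may contain subdirectories, e.g. "imgs/a.png"
        os.makedirs(os.path.dirname(dest) or ".", exist_ok=True)
        shutil.copy(tmp_path, dest)  # binary-safe, unlike writing obj.text
        saved.append(dest)
    return saved
```

With the S3 client you would call it as `save_objects([(obj.key, obj.path) for obj in s3.get_all()])`, inside the `with S3(...)` block (the temp files are cleaned up when the context exits).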