# ask-metaflow
I am running out of disk space. Requesting more memory does successfully spawn bigger instances on AWS Batch; however, when I use `shutil` to print the available disk space, it looks like this even though I am on a g5.16xlarge:
```python
import shutil

total, used, free = shutil.disk_usage("/")

# Outputs
# Total: 29 GiB
# Used: 24 GiB
# Free: 5 GiB
```
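For reference, this is roughly how I print those numbers from the raw byte counts that `shutil.disk_usage` returns (the `2**30` bytes-per-GiB conversion is just the rounding I am assuming):

```python
import shutil

GIB = 2**30  # bytes per GiB (assumed conversion for the figures above)

total, used, free = shutil.disk_usage("/")
print(f"Total: {total // GIB} GiB")
print(f"Used: {used // GIB} GiB")
print(f"Free: {free // GIB} GiB")
```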
Any guesses what might cause this limit? (I am running using `resume`, but that should not matter, right?)