# ask-metaflow
Hi all, we are running into a somewhat bizarre problem when running flows on AWS Batch. The largest instance in our compute environment is a `c7g.16xlarge` (i.e. 128 GB RAM). When any step lands on this instance, querying total memory with `psutil` returns ~128 GB, but the memory actually available to the task is whatever is specified in the `@batch` decorator. This causes problems whenever a function tries to police its own memory use (e.g. raising an error when available memory is too low, or restricting itself to a fraction of system memory). Is there anything fundamental we are missing about how this could be avoided or fixed? Minimal working example in 🧵, thanks!
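For context on why this happens: `psutil` reads host-wide `/proc/meminfo`, which the container sees unchanged, while the Batch/ECS memory reservation is enforced through cgroups. A minimal sketch of one possible workaround is to read the cgroup memory limit directly and fall back to `psutil` only when no limit is set. The helper name `effective_memory_limit_bytes` is hypothetical, and the paths assume standard cgroup v1/v2 mounts inside the container:

```python
import psutil

def effective_memory_limit_bytes():
    """Return the container's cgroup memory limit if one is set,
    falling back to psutil's host-wide total otherwise (sketch)."""
    # cgroup v2: unified hierarchy; the literal string "max" means unlimited.
    try:
        with open("/sys/fs/cgroup/memory.max") as f:
            raw = f.read().strip()
        if raw != "max":
            return int(raw)
    except FileNotFoundError:
        pass
    # cgroup v1: classic memory controller path.
    try:
        with open("/sys/fs/cgroup/memory/memory.limit_in_bytes") as f:
            limit = int(f.read().strip())
        # With no limit set, the kernel reports a huge sentinel value,
        # so only trust values below the host total.
        if limit < psutil.virtual_memory().total:
            return limit
    except FileNotFoundError:
        pass
    # No cgroup limit found: fall back to host memory.
    return psutil.virtual_memory().total

print(f"Effective limit: {effective_memory_limit_bytes() / 1024**3:.1f} GiB")
```

Any self-policing code would then budget against this value instead of `psutil.virtual_memory().total`.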