Yes. You can set a lifecycle policy on the S3 bucket used by Metaflow.
A gotcha is that artifacts are shared across runs, so an old object may still be referenced by newer runs. If a lifecycle policy suddenly deletes such an object, a task reading it may fail, though @retry should take care of it.
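As a sketch, a lifecycle rule that expires objects after a fixed age looks something like the following. The bucket name, key prefix, and retention period here are placeholders, not values from your deployment; check your Metaflow datastore configuration for the real prefix before applying anything.

```python
import json

# Hypothetical lifecycle rule that deletes objects older than 365 days.
# "metaflow/" is a placeholder prefix -- substitute the datastore root
# your Metaflow deployment actually writes under.
lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-old-metaflow-artifacts",
            "Filter": {"Prefix": "metaflow/"},
            "Status": "Enabled",
            "Expiration": {"Days": 365},
        }
    ]
}

# To apply it with boto3 (needs AWS credentials and bucket permissions):
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-metaflow-bucket",  # placeholder bucket name
#       LifecycleConfiguration=lifecycle_config,
#   )
print(json.dumps(lifecycle_config, indent=2))
```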
A safer option is S3 Intelligent-Tiering, which moves the least frequently accessed objects to a cheaper storage tier. From there you can decide which objects are safe to delete.
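The tiering approach can also be expressed as a lifecycle rule, using a transition to the INTELLIGENT_TIERING storage class instead of an expiration. Again, the prefix and the 30-day threshold are illustrative assumptions:

```python
import json

# Hypothetical rule that transitions artifacts to Intelligent-Tiering
# after 30 days instead of deleting them. Prefix is a placeholder.
tiering_config = {
    "Rules": [
        {
            "ID": "tier-metaflow-artifacts",
            "Filter": {"Prefix": "metaflow/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}
            ],
        }
    ]
}
print(json.dumps(tiering_config, indent=2))
```

This keeps objects retrievable (so an old run referenced by a newer one still works) while cutting storage cost for cold data.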