Thanks for the update. I saw the doc for event-based triggering; it is indeed exciting, but we were looking for a more synchronous approach in which the parent flow waits for (potentially multiple, as in a foreach) child flows to complete and can then access their outputs in downstream steps as if they were outputs produced by its own steps. Roughly the semantics in the sketch below.
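To make that concrete, here is a minimal sketch of the semantics we're after, written as a workaround using the `metaflow.Runner` API (which blocks until the launched run completes) rather than any real composition feature. `ChildFlow`, `child_flow.py`, the `shard` parameter, and the `result` artifact are all placeholders of ours:

```python
from metaflow import FlowSpec, Runner, step

class ParentFlow(FlowSpec):

    @step
    def start(self):
        # Fan out: one child flow run per item.
        self.shards = ["a", "b", "c"]
        self.next(self.run_child, foreach="shards")

    @step
    def run_child(self):
        # Launch the child flow synchronously; run() blocks until it
        # finishes. `shard` is assumed to be a Parameter of ChildFlow.
        with Runner("child_flow.py").run(shard=self.input) as running:
            # Surface a child artifact as if this step produced it.
            self.child_result = running.run.data.result
        self.next(self.join)

    @step
    def join(self, inputs):
        # Collect the child outputs across the foreach branches.
        self.results = [inp.child_result for inp in inputs]
        self.next(self.end)

    @step
    def end(self):
        pass
```

What we'd really like is first-class support for this pattern, so the child runs are tracked as subflows of the parent rather than as unrelated runs.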
Our motivation for such flow composition is two-fold. First, ease of maintenance: we collect various flow-level statistics such as time taken, failure rate, etc., and collecting these at the subflow level would be useful; it would also let us version each subflow separately (helpful when each subflow is maintained by a different team). Second, we want to enable flow-level caching: by deriving a cache key from a subflow's inputs and source code, we can determine whether a matching subflow run already exists and, if so, skip the trigger entirely. We initially considered doing this caching at the step level, but that is more complicated because the Metaflow DAG is static by design.
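A rough sketch of the caching lookup, to show what we have in mind. The helper names (`subflow_cache_key`, `find_cached_run`) and the `cache_key:<key>` tag convention are ours; the assumption is that each child run gets tagged with its cache key after completion (e.g. via the client's `Run.add_tag`):

```python
import hashlib
from metaflow import Flow
from metaflow.exception import MetaflowNotFound

def subflow_cache_key(flow_file, params):
    """Hash the subflow's source code together with its input parameters."""
    with open(flow_file, "rb") as f:
        source = f.read()
    param_bytes = repr(sorted(params.items())).encode()
    return hashlib.sha256(source + param_bytes).hexdigest()[:16]

def find_cached_run(flow_name, key):
    """Return the newest successful run tagged cache_key:<key>, if any."""
    try:
        runs = Flow(flow_name).runs(f"cache_key:{key}")
    except MetaflowNotFound:
        return None  # the flow has never run
    for run in runs:  # iterated newest first
        if run.successful:
            return run
    return None
```

In the parent step, `find_cached_run("ChildFlow", key)` would either hand back a prior run whose `run.data` we can reuse directly, or we trigger the child and tag the fresh run with `cache_key:<key>` so future parents can find it.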