Our current strategy is to use Databricks as a 'data engine': we bring the relevant data into Databricks and then shortcut it to Fabric (Bronze) for subsequent processing and consumption. My question: is there a way to orchestrate the start of a Fabric data pipeline once the Databricks job that moves the raw data to Bronze finishes? It might seem a little unconventional to use Databricks for the initial extraction, but that is the direction we have adopted. Any guidance on how we might trigger the pipeline (in Fabric) after the Databricks extraction completes would be much appreciated.
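
One pattern we have been considering is adding a final step to the Databricks job that calls the Fabric REST API's on-demand job endpoint for the pipeline item. Below is a minimal sketch of that idea in Python; the workspace and pipeline IDs are placeholders, and it assumes we can obtain an Entra ID (Azure AD) bearer token with the appropriate Fabric scope (e.g. via a service principal):

```python
import json
import urllib.request

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def run_pipeline_url(workspace_id: str, pipeline_id: str) -> str:
    # Fabric Job Scheduler "run on demand item job" endpoint,
    # invoked with jobType=Pipeline for a Data pipeline item.
    return (f"{FABRIC_API}/workspaces/{workspace_id}"
            f"/items/{pipeline_id}/jobs/instances?jobType=Pipeline")

def trigger_fabric_pipeline(workspace_id: str, pipeline_id: str,
                            token: str) -> str:
    """POST to the on-demand endpoint; on success (HTTP 202 Accepted)
    the Location header points at the job instance, which can be
    polled to wait for the pipeline to finish."""
    req = urllib.request.Request(
        run_pipeline_url(workspace_id, pipeline_id),
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        data=json.dumps({}).encode(),  # empty execution payload
    )
    with urllib.request.urlopen(req) as resp:
        return resp.headers.get("Location", "")
```

The idea would be to run `trigger_fabric_pipeline(...)` as the last cell (or a dedicated task) of the Databricks job, so the Fabric pipeline only starts after the Bronze load has succeeded; but we would welcome confirmation of whether this, or a more native orchestration option, is the recommended route.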