A pipeline run occurs when a pipeline is started and the activities are executed.
You can start a pipeline run in three ways:
On-demand runs: Select Run in the pipeline editor to trigger an immediate run. You'll need to save any changes before the pipeline starts.
Scheduled runs: Set up automatic runs based on time and frequency. When you create a schedule, you specify start and end dates, frequency, and time zone.
Event-based runs: Use event triggers to start your pipeline when specific events occur, such as new files arriving in a data lake or changes in a database.
On-demand pipeline run
To manually trigger a pipeline run, select Run, found in the top banner of the Home tab.
You can also select Schedule in the top banner of the Home tab and select Run now to trigger an immediate run.
You are prompted to save your changes before triggering the pipeline run. Select Save and run to continue.
After your changes are saved, your pipeline will run. You can view the progress of the run in the Output tab found at the bottom of the canvas.
Once an activity completes in a run, a green check mark appears in the corner of the activity.
Once the entire pipeline executes and the output status updates to Succeeded, you have a successful pipeline run!
Scheduled pipeline runs
When you schedule a pipeline run, you can set multiple, specific schedules for each pipeline, so your data is prepared and available when you need it.
Select Schedule, found in the top banner of the Home tab, and then select Add Schedule to view your options. By default, your pipeline isn't set on a schedule.
On the Schedule configuration page, you can specify a schedule frequency, start and end dates and times, and time zone.
Tip
When scheduling a pipeline, you must set both a start and end date. There's no option for an open-ended schedule. To keep a pipeline running long-term, set the end date far in the future (for example, 01/01/2099 12:00 AM). You can update or stop the schedule at any time.
Once configured, select Save to set your schedule.
You can add up to 20 schedules for a single pipeline by selecting Add Schedule again after saving your first schedule. Each schedule can have different frequencies and start and end times.
Manage scheduled runs
You can manage your scheduled runs by selecting Schedule in the top banner of the Home tab. From there, you can edit existing schedules, or enable or disable schedules using the toggle switch.
To delete a schedule, select the Edit icon (pencil) next to the schedule you want to delete. In the Edit Schedule pane, select Delete schedule at the bottom of the pane.
Event-based pipeline runs
Event triggers let you start pipelines when specific events happen, such as when files arrive in or are deleted from storage. You can trigger pipelines from file events, job events, and workspace events. If you're moving from Azure Data Factory, you'll find storage events familiar.
Triggers use Fabric platform features including eventstreams and Data Activator alerts. You can create triggers from the pipeline canvas or directly in the Data Activator experience.
Set up storage event triggers
Select the Trigger button on the Home ribbon at the top of the pipeline canvas.
The Set alert panel opens. Here you can define source events for your trigger using the Data Activator alert service.
Select the type of events you want to listen for. Choose OneLake events for OneLake file events, Azure Blob Storage events, and so on. Select Source and then Select events to pick the storage events you want to monitor.
Choose your Azure subscription and Blob Storage account.
A new eventstream object is created in your Fabric workspace. Select the correct workspace under Stream details.
Select Next.
Choose event types. You'll see many event options beyond file created and file deleted.
Filter events to match specific files and folders by specifying folder name, file name, file type, and container using the Subject field.
Note
File name and folder name are part of the Subject field.
An event has this top-level data (a sample payload appears after the setup steps below):

| Property | Type | Description | Example |
| --- | --- | --- | --- |
| source | string | Full resource path to the event source. This field isn't writeable. Event Grid provides this value. | /subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account |
| subject | string | Publisher-defined path to the event subject. | /blobServices/default/containers/my-file-system/blobs/new-file.txt |
| type | string | One of the registered event types for this event source. | Microsoft.Storage.BlobCreated |
| time | string | The time the event is generated based on the provider's UTC time. | 2017-06-26T18:41:00.9584103Z |
| id | string | Unique identifier for the event. | 00000000-0000-0000-0000-000000000000 |
| data | object | Blob storage event data. | {{Data object}} |
| specversion | string | CloudEvents schema specification version. | 1.0 |

On the trigger configuration panel, choose the workspace to store the trigger items, the pipeline name, and the pipeline action, and name your trigger as a Reflex item.
Select Create to create the trigger. The trigger becomes active on your pipeline and responds to the storage events you defined.
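To make the event shape concrete, here's a minimal sketch of what a payload with the top-level data described above might look like, assembled from the example values in the table (the blob-specific data object is omitted, and real values depend on your storage account and event type):

```json
{
  "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
  "subject": "/blobServices/default/containers/my-file-system/blobs/new-file.txt",
  "type": "Microsoft.Storage.BlobCreated",
  "time": "2017-06-26T18:41:00.9584103Z",
  "id": "00000000-0000-0000-0000-000000000000",
  "data": {},
  "specversion": "1.0"
}
```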
View and manage triggers
To view the trigger, go to your workspace list and find the Reflex object by name in your Fabric browser.
The object type is Reflex. Select the trigger to open the Reflex object for viewing and editing.
To view triggers that are part of your pipeline, use Triggers > View triggers from the pipeline menu.
Use trigger file and folder names in expressions
You can use the file name and folder path from storage events in your pipeline using built-in trigger parameters. Data Factory sets these parameters when it receives the storage event.
Select the trigger parameters tab in your pipeline's expression builder; Data Factory automatically parses the file name and folder names, letting you add them dynamically to your pipeline expressions.
These built-in parameters come from the Subject and Topic fields of the file event and are created automatically for your pipeline logic.
@pipeline()?.TriggerEvent?.FileName
Notice the ? after the pipeline() object reference. This syntax handles NULL values in the pipeline expression language. You need it when testing a pipeline that uses trigger parameters because, during manual testing, the file and folder name parameters aren't set and return a NULL value. When you trigger the pipeline from a file event, the file name and folder name are filled in for those fields.