Run, schedule, or use events to trigger a pipeline

A pipeline run occurs when a pipeline is started and the activities are executed.

You can start a pipeline run in three ways:

  • On-demand runs: Select Run in the pipeline editor to trigger an immediate run. You'll need to save any changes before the pipeline starts.

    Screenshot showing where to select Run on the Home tab.

  • Scheduled runs: Set up automatic runs based on time and frequency. When you create a schedule, you specify start and end dates, frequency, and time zone.

    Screenshot showing where to select Schedule on the Home tab.

  • Event-based runs: Use event triggers to start your pipeline when specific events occur, such as new files arriving in a data lake or changes in a database.

    Screenshot showing where to select Trigger to add event-based run triggers on the home tab.

On-demand pipeline run

To manually trigger a pipeline run, select Run, found in the top banner of the Home tab.

Screenshot showing where to select Run on the Home tab.

You can also select Schedule in the top banner of the Home tab and select Run now to trigger an immediate run.

Screenshot showing where to select Schedule on the Home tab and then the Run Now button in the scheduler.

You are prompted to save your changes before triggering the pipeline run. Select Save and run to continue.

Screenshot showing the Save and run prompt.

After your changes are saved, your pipeline will run. You can view the progress of the run in the Output tab found at the bottom of the canvas.

Screenshot showing where the run status displays on the Output tab.

Once an activity in a run completes successfully, a green check mark appears in the corner of the activity.

Screenshot showing where the green check mark is displayed.

Once the entire pipeline executes and the output status updates to Succeeded, you have a successful pipeline run!

Screenshot showing where Succeeded status shows in Output tab.

Scheduled pipeline runs

When you schedule a pipeline run, you can set multiple, specific schedules for each pipeline, so your data is prepared and available when you need it.

Select Schedule, found in the top banner of the Home tab, and then select Add Schedule to view your options. By default, your pipeline isn't set on a schedule.

Screenshot showing where to select Schedule on the Home tab.

On the Schedule configuration page, you can specify a schedule frequency, start and end dates and times, and time zone.

Tip

When scheduling a pipeline, you must set both a start and end date. There's no option for an open-ended schedule. To keep a pipeline running long-term, set the end date far in the future (for example, 01/01/2099 12:00 AM). You can update or stop the schedule at any time.

Screenshot of the Schedule configuration screen.

Once configured, select Save to set your schedule.

You can add up to 20 schedules for a single pipeline by selecting Add Schedule again after saving your first schedule. Each schedule can have different frequencies and start and end times.

Manage scheduled runs

You can manage your scheduled runs by selecting Schedule in the top banner of the Home tab. From there, you can edit existing schedules, or enable or disable schedules using the toggle switch.

Screenshot showing where in the schedule menu you can toggle or edit a scheduled run.

To delete a schedule, select the Edit icon (pencil) next to the schedule you want to delete. In the Edit Schedule pane, select Delete schedule at the bottom of the pane.

Screenshot showing the delete button can be found in the edit window.

Event-based pipeline runs

Event triggers let you start pipelines when specific events happen, like when files arrive or get deleted in storage. You can trigger pipelines from file events, job events, and workspace events. If you're moving from Azure Data Factory, you'll find storage events familiar.

Triggers use Fabric platform features including eventstreams and Data Activator alerts. You can create triggers from the pipeline canvas or directly in the Data Activator experience.

Set up storage event triggers

  1. Select the Trigger button on the Home ribbon at the top of the pipeline canvas.

    Screenshot showing the Trigger button on the Home ribbon of the pipeline canvas editor window.

  2. The Set alert panel opens. Here you can define source events for your trigger using the Data Activator alert service.

    Screenshot showing the Set Alert panel.

  3. Select the type of events you want to listen for, such as OneLake events for OneLake file events or Azure Blob Storage events for a Blob Storage account.

    Screenshot showing the Select a data source dialog.

  4. Select Source and then Select events to pick the storage events you want to monitor.

    Screenshot showing where to connect to Azure Blob Storage events for a Blob Storage account.

  5. Choose your Azure subscription and Blob Storage account.

  6. A new eventstream object gets created in your Fabric workspace. Select the correct workspace under Stream details.

  7. Select Next.

  8. Choose event types. You'll see many event options beyond file created and file deleted.

    Screenshot showing the Configure events type and source page of the trigger configuration dialog.

  9. Filter events to match specific files and folders by specifying folder name, file name, file type, and container using the Subject field.

    Screenshot showing the filter configuration page.

    Note

    File name and folder name are part of the Subject field.

    An event has this top-level data (a sample payload assembling these fields follows this procedure):

      • source (string): Full resource path to the event source. This field isn't writeable. Event Grid provides this value. Example: /subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account
      • subject (string): Publisher-defined path to the event subject. Example: /blobServices/default/containers/my-file-system/blobs/new-file.txt
      • type (string): One of the registered event types for this event source. Example: Microsoft.Storage.BlobCreated
      • time (string): The time the event is generated, based on the provider's UTC time. Example: 2017-06-26T18:41:00.9584103Z
      • id (string): Unique identifier for the event. Example: 00000000-0000-0000-0000-000000000000
      • data (object): Blob storage event data. Example: {{Data object}}
      • specversion (string): CloudEvents schema specification version. Example: 1.0
  10. On the trigger configuration panel, choose the workspace to store the trigger items, select the pipeline name and the pipeline action, and name your trigger, which is saved as a Reflex item.

  11. Select Create to create the trigger. The trigger becomes active on your pipeline and responds to the storage events you defined.

    Screenshot showing the Alert created notification.
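
The trigger listens for events shaped like the CloudEvents payload described in step 9. The sample below simply assembles the example values from that table; the contents of the data object vary by event type and storage provider, so treat it as an illustrative sketch rather than the exact payload your trigger receives.

{
  "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
  "subject": "/blobServices/default/containers/my-file-system/blobs/new-file.txt",
  "type": "Microsoft.Storage.BlobCreated",
  "time": "2017-06-26T18:41:00.9584103Z",
  "id": "00000000-0000-0000-0000-000000000000",
  "data": {},
  "specversion": "1.0"
}

In this sample, the container is my-file-system and the file name is new-file.txt, both encoded in the subject value, which is why the folder, file, and container filters in step 9 apply to the Subject field.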

View and manage triggers

  1. To view the trigger, go to your workspace list and find the Reflex object by name in your Fabric browser.

  2. The object type is Reflex. Select the trigger to open the Reflex object for viewing and editing.

    Screenshot showing the details of the Reflex object.

  3. To view triggers that are part of your pipeline, use Triggers > View triggers from the pipeline menu.

    Screenshot showing the manage trigger rules pane.

Use trigger file and folder names in expressions

You can use the file name and folder path from storage events in your pipeline using built-in trigger parameters. Data Factory sets these parameters when it receives the storage event.

Select the trigger parameters tab on the expression builder in your pipeline, and Data Factory automatically parses the file name and folder names, letting you add them dynamically to your pipeline expressions.

Screenshot showing the Add dynamic content dialog.

These built-in parameters come from the Subject and Topic fields of the file event and are created automatically for your pipeline logic.

@pipeline()?.TriggerEvent?.FileName

Notice the ? after the pipeline() object reference. This handles NULL values in the pipeline expression language. You need this syntax when testing a pipeline that uses trigger parameters because during manual testing, file and folder name parameters aren't set, returning a NULL value. When you trigger the pipeline from a file event, you'll see the file name and folder name filled out in those fields.
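
For example, you can combine the trigger file name with standard expression functions such as coalesce and concat to build a dynamic path that still evaluates during a manual test run. The FolderName parameter below is an assumed, illustrative name; only FileName is shown in this article, so confirm the exact parameter names on the trigger parameters tab of the expression builder. A minimal sketch:

@coalesce(pipeline()?.TriggerEvent?.FileName, 'manual-test.txt')

@concat('landing/', coalesce(pipeline()?.TriggerEvent?.FolderName, 'manual-test'), '/', coalesce(pipeline()?.TriggerEvent?.FileName, 'manual-test.txt'))

The first expression falls back to a placeholder file name when the pipeline runs manually and the trigger parameters are NULL; the second builds a full destination path from the (assumed) folder name and the file name, which you could use in a dynamic dataset or activity setting.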