How to trigger an Azure Data Factory pipeline through API using parameters

Datta, Andolina 0 Reputation points
2025-05-15T12:06:06.6+00:00

Hello All,

I have a use case where I want to trigger an Azure Data Factory pipeline through the API. Right now I am calling the API in Databricks and using a service principal (token-based) to connect to ADF from Databricks. The ADF pipeline has some parameters which I want to pass via the API.

The issue is that when I run the notebook, the ADF pipeline gets triggered, but the parameters come through as empty values (''). Below is the code for the POST call that triggers the pipeline (some details have been masked for security reasons):

import json
import requests
from azure.identity import ClientSecretCredential

tenant_id = dbutils.secrets.get(scope="adb-dev-secret-scope",key="spn-databricks-dev-eastus2-001-tenantid")
client_id = dbutils.secrets.get(scope="adb-dev-secret-scope",key="spn-databricks-dev-eastus2-001-clientid")
client_secret = dbutils.secrets.get(scope="adb-dev-secret-scope",key="spn-databricks-dev-eastus2-001-secret")

subscription_id = "****"
resource_group = "****"
factory_name = "****"
pipeline_name = "Test Adf from databricks"
api_version = "2018-06-01"


# Optional: parameters to pass to pipeline
parameters = {
    'curr_working_user': 'xyz'
}

# === Get token using client secret auth ===
credential = ClientSecretCredential(tenant_id, client_id, client_secret)
token = credential.get_token("https://management.azure.com/.default").token


url = f"https://management.azure.com/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.DataFactory/factories/{factory_name}/pipelines/{pipeline_name}/createRun?api-version={api_version}"

headers = {
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json"
}


body = {
    'parameters': f'{parameters}'
}

print(json.dumps(body, indent=1))


response = requests.post(url, headers=headers, json=body)
print(response.status_code)
print(response.json())

I want the parameter "curr_working_user" to be passed as 'xyz' to the ADF pipeline. The parameter name in ADF is the same: curr_working_user.

Your help is greatly appreciated!!

Thanks,

Andolina


Azure Data Factory

2 answers

  1. Mihir Saxena 0 Reputation points Microsoft External Staff Moderator
    2025-05-29T10:37:45.7466667+00:00

    ADF does not throw an error when an expected parameter is missing or mismatched; it simply ignores it or sets it to null, which can lead to silent failures.

    Since the issue persists even when using Postman, it strongly suggests that the ADF pipeline is either not interpreting the request payload correctly or is not receiving it as expected.

    Try the troubleshooting steps below as well:

    1. Parameter Name Validation: Ensure that the parameter name in the ADF pipeline definition is exactly curr_working_user:
    • Parameter names are case-sensitive.
    • Ensure there are no leading or trailing spaces.
    2. Try triggering the pipeline via Azure CLI: If this succeeds and the parameter is received, the issue likely lies in how your Databricks/REST API code is crafting the payload (see the sketch after this list).

       az login

       az datafactory pipeline create-run \
         --resource-group <resource-group> \
         --factory-name <factory-name> \
         --name <pipeline-name> \
         --parameters curr_working_user='xyz'
    
    3. Enable diagnostics logging for ADF and route the logs to a Log Analytics workspace or Storage Account to inspect raw requests. This helps capture the exact request structure being received by ADF. Kindly refer to Diagnostics settings in Azure Monitor.
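
    To illustrate the payload-crafting point from step 2: below is a minimal sketch of the REST call from Databricks, assuming (as the sample request in the Pipelines - Create Run REST reference suggests) that the createRun request body is the parameter dictionary itself serialized as JSON, rather than a Python dict wrapped in an f-string. All IDs and names are placeholders; the pipeline and parameter names are carried over from the question.

       import requests
       from azure.identity import ClientSecretCredential

       # Placeholder values; replace with your own (e.g. via dbutils.secrets).
       tenant_id = "<tenant-id>"
       client_id = "<client-id>"
       client_secret = "<client-secret>"
       subscription_id = "<subscription-id>"
       resource_group = "<resource-group>"
       factory_name = "<factory-name>"
       pipeline_name = "Test Adf from databricks"
       api_version = "2018-06-01"

       # Acquire an ARM token with the service principal.
       credential = ClientSecretCredential(tenant_id, client_id, client_secret)
       token = credential.get_token("https://management.azure.com/.default").token

       url = (
           f"https://management.azure.com/subscriptions/{subscription_id}"
           f"/resourceGroups/{resource_group}/providers/Microsoft.DataFactory"
           f"/factories/{factory_name}/pipelines/{pipeline_name}/createRun"
           f"?api-version={api_version}"
       )
       headers = {
           "Authorization": f"Bearer {token}",
           "Content-Type": "application/json",
       }

       # Pass the parameters as a JSON object, not a stringified dict:
       # f'{parameters}' would send "{'curr_working_user': 'xyz'}", which ADF
       # does not interpret as a parameter object (matching the empty-value
       # symptom described in the question).
       parameters = {"curr_working_user": "xyz"}

       response = requests.post(url, headers=headers, json=parameters)
       print(response.status_code, response.json())

    If the CLI run succeeds but this still does not, comparing the diagnostics logs from step 3 for both requests should show how the payloads differ.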

  2. Datta, Andolina 0 Reputation points
    2025-06-12T15:47:45.4766667+00:00

    Hello All,

    Thank you for all your suggestions on this thread.

    We have raised the issue as a product bug to Microsoft. They did not tell us why parameters do not work with the HTTP request; however, they did demo to us that parameters work with a Python script.

    https://learn.microsoft.com/en-us/rest/api/datafactory/pipelines/create-run?view=rest-datafactory-2018-06-01&tabs=Python

    This resolved our problem, and I have tested passing multiple parameters from Databricks to ADF; the pipeline picked up the parameters correctly.

    The document doesn't show how to pass the parameters; you have to supply them in the create_run call. E.g.:

    ##code******************

    from azure.identity import ClientSecretCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient

    print("Running ADF Pipeline....")

    tenant_id = "***"
    client_id = "***"
    client_secret = "***"

    # Pipeline parameters; the values come from variables defined earlier in the notebook.
    parameters = {
        'curr_working_user': f'{current_user}',
        'ServerName': f'{servername}',
        'SharedFolder': f'{shared_folder}',
        'Folder': f'{folder}'
    }

    def main():
        client = DataFactoryManagementClient(
            credential=ClientSecretCredential(tenant_id, client_id, client_secret),
            subscription_id="***",
        )

        response = client.pipelines.create_run(
            resource_group_name="***",
            factory_name="***",
            pipeline_name="Test Adf from databricks",
            parameters=parameters or {}
        )

        print(response)

    if __name__ == "__main__":
        main()
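
    If you also want to confirm from Databricks that the run succeeded, the same azure-mgmt-datafactory client exposes pipeline_runs.get. A small sketch, intended to run inside main() right after create_run returns (the "***" values are the same masked resource group and factory name as above):

        import time

        run_id = response.run_id  # returned by pipelines.create_run

        # Poll until the run leaves the Queued/InProgress states.
        while True:
            run = client.pipeline_runs.get(
                resource_group_name="***",
                factory_name="***",
                run_id=run_id,
            )
            if run.status not in ("Queued", "InProgress"):
                break
            time.sleep(15)

        print(f"Pipeline run {run_id} finished with status: {run.status}")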
    
