The ADF Databricks Job activity doesn't seem to pass parameters

Debie, Hanneke 35 Reputation points
2025-05-14T12:15:22.5966667+00:00

I have created an ADF pipeline that uses the Databricks Job activity. The linked service works fine. When I run the pipeline, the Databricks job fails: the ADF activity fails, and I see a failed job run in Databricks too.

The problem seems to be the parameters. When I look at the input of the Databricks Job activity in the ADF log, I see that the parameters have been passed correctly.

When I look at the Databricks job run, however, the parameter fields are empty. This is also evident from the error it failed with: where the parameters were supposed to be used, the value was empty.

Below are the ADF task settings and the Databricks run details from my debug run.

[Image: ADF task setup]

[Image: Databricks task run details]


Accepted answer
  1. SSingh-MSFT 16,371 Reputation points Moderator
    2025-06-11T05:11:49.66+00:00

    Hi @Debie, Hanneke

    We are sorry about the issue you are facing with parameter passing.

    When passing parameters from pipelines to Databricks jobs, you can use the jobParameters property in the Databricks Job activity.

    Note:

    Job parameters are only supported in Self-hosted IR version 5.52.0.0 or greater.

    As per the official documentation as of today (https://learn.microsoft.com/en-us/azure/data-factory/transform-data-databricks-job), the Azure Databricks Jobs activity is currently in preview. This information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, expressed or implied, with respect to the information provided here.
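
    On the Databricks side, a job parameter that reaches the job can typically be read inside a notebook task with dbutils.widgets. A minimal sketch, assuming an illustrative parameter name "EntityName" (use whatever name your pipeline actually passes):

    # Notebook task inside the Databricks job (sketch only).
    # Job parameters are surfaced to notebook tasks as widgets, so the same
    # dbutils.widgets API applies; "EntityName" is a hypothetical name here.
    entity = dbutils.widgets.get("EntityName")
    print(f"EntityName = {entity!r}")  # quick check that a value actually arrived from ADF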


    If you are using the Databricks Notebook activity instead, you can pass parameters to notebooks using the baseParameters property of the activity.

    In certain cases, you might need to pass values from the notebook back to the service, which can be used for control flow (conditional checks) in the service or consumed by downstream activities (size limit is 2 MB).

    1. In your notebook, call dbutils.notebook.exit("returnValue") and the corresponding "returnValue" will be returned to the service.
    2. You can consume the output in the service by using an expression such as @{activity('databricks notebook activity name').output.runOutput} (a sketch follows this list).
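
    A minimal notebook sketch tying both points together, assuming a base parameter named "EntityName" (the name is only illustrative):

    # Databricks notebook (sketch). The widget name must match the key used in
    # the activity's baseParameters; "EntityName" is only an assumed example.
    dbutils.widgets.text("EntityName", "")      # declare the widget with an empty default
    entity = dbutils.widgets.get("EntityName")  # value passed from ADF

    # ... do the actual work with `entity` ...

    # Return a value to ADF; it appears in the activity output as runOutput and
    # can be read with @{activity('<notebook activity name>').output.runOutput}.
    dbutils.notebook.exit(entity)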

    https://learn.microsoft.com/en-us/azure/data-factory/transform-data-databricks-notebook

    Please let us know if you have queries.

    Thank you!

    1 person found this answer helpful.

1 additional answer

  1. Amira Bedhiafi 32,756 Reputation points Volunteer Moderator
    2025-05-14T13:12:58.61+00:00

    Hello Debbie!

    Thank you for posting on Microsoft Learn.

    Make sure the parameter names in ADF exactly match the Databricks job parameter names (they are case-sensitive).

    Check that the casing of both the pipeline().parameters.* reference and the left-hand name field in the ADF job parameters matches exactly, including capitalization.

    If you are running in debug mode in ADF and did not set the pipeline parameters in the debug pane, an expression like the one below may resolve to an empty value (the parameter's default), which would explain the empty parameter fields on the Databricks side:

    @pipeline().parameters.EntityName

    If your notebook expects JSON or structured input, and you're using dynamic content from ADF, you must serialize it properly.
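
    For example, if you pass a serialized object from ADF, the notebook has to parse it back. A minimal sketch, assuming a hypothetical string parameter named "Payload" that carries JSON:

    import json

    # Sketch only: assumes ADF passes a JSON string in a parameter named "Payload"
    # (a hypothetical name), e.g. with dynamic content like @{string(pipeline().parameters.Payload)}.
    dbutils.widgets.text("Payload", "{}")
    payload = json.loads(dbutils.widgets.get("Payload"))
    entity_name = payload.get("EntityName")  # hypothetical field inside the JSON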

    Your Databricks notebook should read the parameter like this:

    dbutils.widgets.get("EntityName")
    

    If the widget isn't defined in the notebook, define it at the top (with an empty default) so the get call above doesn't fail when ADF passes nothing:

    dbutils.widgets.text("EntityName", "")
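
    Putting both widget calls together, a small sketch that fails fast when the value did not arrive (the parameter name is only an example):

    dbutils.widgets.text("EntityName", "")
    entity_name = dbutils.widgets.get("EntityName")
    if not entity_name:
        # An empty value usually means the name/casing in ADF does not match,
        # or a debug run did not set the pipeline parameter.
        raise ValueError("EntityName was not passed from ADF")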
    
