Get data from Azure Storage

In this article, you learn how to get data from Azure Storage (ADLS Gen2 container, blob container, or individual blobs). You can ingest data into your table continuously or as a one-time ingestion. Once ingested, the data becomes available for query.

  • Continuous ingestion (Preview): Continuous ingestion sets up an ingestion pipeline that allows an eventhouse to listen to Azure Storage events. When a subscribed event occurs, the pipeline notifies the eventhouse to pull the new data. The supported events are BlobCreated and BlobRenamed.

    Important

    This feature is in preview.

    Note

    A continuous ingestion stream can affect your billing. For more information, see Eventhouse and KQL Database consumption.

  • One-time ingestion: Use this method to retrieve data from Azure Storage as a one-time operation.
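
For context, a one-time operation can also be expressed as a KQL ingestion command. The following is a minimal sketch, assuming a destination table and a blob URL of your own; the table, storage account, container, and file names are placeholders, and the ;impersonate suffix assumes your identity has access to the blob. Note that this runs direct ingestion, which suits exploration rather than production loads.

```kusto
// One-time (direct) ingestion of a single CSV blob into an existing table.
// Table, storage account, container, and file names are placeholders.
.ingest into table MyTable (
    h'https://mystorageaccount.blob.core.windows.net/mycontainer/data.csv;impersonate'
)
with (format='csv', ignoreFirstRecord=true)
```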

Prerequisites

For continuous ingestion, you also need to complete the following setup.

Add the workspace identity role assignment to the storage account

  1. From the Workspace settings in Fabric, copy your workspace identity ID.

    Screenshot of the workspace setting, with the workspace ID highlighted.

  2. In the Azure portal, browse to your Azure Storage account, and select Access Control (IAM) > Add > Add role assignment.

  3. Select Storage Blob Data Reader.

  4. In the Add role assignment dialog, select + Select members.

  5. Paste in the workspace identity ID, select the application, and then Select > Review + assign.

Create a container with a data file

  1. In the storage account, select Containers.

  2. Select + Container, enter a name for the container, and then select Save.

  3. Open the container, select Upload, and upload the data file you prepared earlier.

    For more information, see supported formats and supported compressions.

  4. From the container's context menu, [...], select Container properties, and copy the URL. You need this URL later during configuration.

    Screenshot showing the list of containers with the context menu open with container properties highlighted.

Source

Set the source to get data.

  1. From your workspace, open the eventhouse and select the database.

  2. On the KQL database ribbon, select Get Data.

  3. Select the data source from the available list. In this example, you're ingesting data from Azure Storage.

    Screenshot of get data window with source tab selected.

Configure

  1. Select a destination table. If you want to ingest data into a new table, select + New table and enter a table name.

    Note

    Table names can be up to 1,024 characters and can include letters, numbers, spaces, hyphens, and underscores. Other special characters aren't supported.

  2. In the Configure Azure Blob Storage connection, ensure that Continuous ingestion is turned on. It's turned on by default.

  3. Configure the connection by creating a new connection, or by using an existing connection.

    To create a new connection:

    1. Select Connect to a storage account.

      Screenshot of configure tab with Continuous ingestion and connect to an account selected.

    2. Use the following descriptions to help fill in the fields.

      Setting               Field description
      Subscription          The storage account subscription.
      Blob storage account  The storage account name.
      Container             The storage container that contains the file you want to ingest.
    3. In the Connection field, open the dropdown and select + New connection, then Save > Close. The connection settings are prepopulated.

    Note

    Creating a new connection results in a new Eventstream. The name is defined as <storage_account_name>_eventstream. Make sure you don't remove the continuous ingestion eventstream from the workspace.

    To use an existing connection:

    1. Select Select an existing storage account.

      Screenshot of configure tab with Continuous ingestion and connect to an existing account selected.

    2. Use the following descriptions to help fill in the fields.

      Setting            Field description
      RTAStorageAccount  An eventstream connected to your storage account from Fabric.
      Container          The storage container that contains the file you want to ingest.
      Connection         Prepopulated with the connection string.
    3. In the Connection field, open the dropdown and select the existing connection string from the list. Then select Save > Close.

  4. Optionally, expand File filters and specify the following filters:

    Setting         Field description
    Folder path     Filters data to ingest only files under a specific folder path.
    File extension  Filters data to ingest only files with a specific file extension.
  5. In the Eventstream settings section, you can select the events to monitor in Advanced settings > Event type(s). By default, Blob created is selected. You can also select Blob renamed.

    Screenshot of Advanced settings with the Event type(s) dropdown expanded.

  6. Select Next to preview the data.

Inspect

The Inspect tab opens with a preview of the data.

To complete the ingestion process, select Finish.

Screenshot of the inspect tab.

Note

To trigger continuous ingestion and preview data, make sure that you upload a new blob to the storage container after completing the configuration.

Optionally:

  • Use the schema definition file dropdown to change the file that the schema is inferred from.

  • Use the file type dropdown to explore Advanced options based on data type.

  • Use the Table_mapping dropdown to define a new mapping.

  • Select </> to open the command viewer to view and copy the automatic commands generated from your inputs. You can also open the commands in a Queryset. (A rough sketch of such commands appears after this list.)

  • Select the pencil icon to Edit columns.
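
As a rough illustration, the automatically generated commands typically include a table-creation command and an ingestion mapping along the lines of the following sketch. The table, mapping, and column names are placeholders, not the exact output you'll see; run each command separately.

```kusto
// Create the destination table (placeholder schema).
.create table MyTable (Timestamp: datetime, DeviceId: string, Temperature: real)

// Create a CSV ingestion mapping from source column positions to the table columns.
.create table MyTable ingestion csv mapping "MyTable_mapping"
    '['
    '  {"column": "Timestamp",   "Properties": {"Ordinal": "0"}},'
    '  {"column": "DeviceId",    "Properties": {"Ordinal": "1"}},'
    '  {"column": "Temperature", "Properties": {"Ordinal": "2"}}'
    ']'
```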

Edit columns

Note

  • For tabular formats (CSV, TSV, PSV), you can't map a column twice. To map to an existing column, first delete the new column.
  • You can't change an existing column type. If you try to map to a column having a different format, you may end up with empty columns.

The changes you can make in a table depend on the following parameters:

  • Table type is new or existing
  • Mapping type is new or existing

Table type      Mapping type      Available adjustments
New table       New mapping       Rename column, change data type, change data source, mapping transformation, add column, delete column
Existing table  New mapping       Add column (on which you can then change data type, rename, and update)
Existing table  Existing mapping  None

Screenshot of columns open for editing.

Mapping transformations

Some data format mappings (Parquet, JSON, and Avro) support simple ingest-time transformations. To apply mapping transformations, create or update a column in the Edit columns window.

Mapping transformations can be performed on a column of type string or datetime, with the source having data type int or long. For more information, see the full list of supported mapping transformations.
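
For example, a JSON ingestion mapping that applies the DateTimeFromUnixSeconds transformation might look like the following sketch; the table, mapping, column, and source field names are placeholders.

```kusto
// JSON ingestion mapping with an ingest-time transformation (placeholder names).
// The source field "ts" holds a Unix timestamp in seconds; the transformation
// converts it into the datetime column "Timestamp" during ingestion.
.create table MyTable ingestion json mapping "MyTable_json_mapping"
    '['
    '  {"column": "Timestamp", "path": "$.ts",       "transform": "DateTimeFromUnixSeconds"},'
    '  {"column": "DeviceId",  "path": "$.deviceId", "datatype": "string"}'
    ']'
```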

Advanced options based on data type

Tabular (CSV, TSV, PSV):

  • If you're ingesting tabular formats in an existing table, you can select Advanced > Keep table schema. Tabular data doesn't necessarily include the column names that are used to map source data to the existing columns. When this option is checked, mapping is done by-order, and the table schema remains the same. If this option is unchecked, new columns are created for incoming data, regardless of data structure. (A sketch of the by-order behavior follows this list.)

    Screenshot of advanced options.

  • Tabular data doesn't necessarily include the column names that are used to map source data to the existing columns. To use the first row as column names, select First row is column header.

    Screenshot of the First row is column header switch.
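
As a minimal sketch of the by-order behavior, assume an existing table with the schema shown in the comments below; with Keep table schema selected, a headerless CSV row is mapped to the existing columns purely by position, and no new columns are added. The table name and values are placeholders.

```kusto
// Placeholder existing table: MyTable (Timestamp: datetime, DeviceId: string, Temperature: real)
//
// With "Keep table schema" selected, a headerless CSV row such as
//   2024-05-01T10:00:00Z,device-42,21.5
// maps by position: field 0 -> Timestamp, field 1 -> DeviceId, field 2 -> Temperature.
//
// You can verify the existing schema that by-order mapping targets with:
.show table MyTable cslschema
```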


JSON:

  • To determine how JSON data is divided into columns, select Nested levels, from 1 to 100. (A sketch of the effect follows this list.)

    Screenshot of advanced JSON options.
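
As a rough illustration of how the Nested levels setting affects the inferred schema (the flattened column names shown here are placeholders; the wizard derives the actual names from the JSON paths):

```kusto
// Sample JSON record: {"device": {"id": "d42", "temp": 21.5}, "ts": "2024-05-01T10:00:00Z"}

// Nested levels = 1: nested objects are kept as a single dynamic column.
.create table MyTableLevel1 (device: dynamic, ts: datetime)

// Nested levels = 2: properties one level down become their own columns.
.create table MyTableLevel2 (device_id: string, device_temp: real, ts: datetime)
```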

Summary

In the Summary window, all the steps are marked with green check marks when data ingestion finishes successfully. You can select a card to explore the data, delete the ingested data, or create a dashboard with key metrics.

Screenshot of summary page for continuous ingestion with successful ingestion completed.
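
For example, to start exploring the ingested data, you can run a simple KQL query against the destination table (placeholder name):

```kusto
// Return a small sample of recently ingested rows.
MyTable
| take 10
```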

When you close the window, you can see the connection in the Explorer tab, under Data streams. From here, you can filter the data streams and delete a data stream.

Screenshot of the KQL database explorer with Data streams highlighted.