In this article, you learn how to get data from Azure Storage (ADLS Gen2 container, blob container, or individual blobs). You can ingest data into your table continuously or as a one-time ingestion. Once ingested, the data becomes available for query.
Continuous ingestion (Preview): Continuous ingestion involves setting up an ingestion pipeline that allows an eventhouse to listen to Azure Storage events. The pipeline notifies the eventhouse to pull information when subscribed events occur. The events are BlobCreated and BlobRenamed.
Important
This feature is in preview.
Note
A continuous ingestion stream can affect your billing. For more information, see Eventhouse and KQL Database consumption.
One-time ingestion: Use this method to retrieve data from Azure Storage as a one-time operation.
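For reference, a one-time pull can also be expressed as a Kusto ingestion command instead of using the wizard. The following is a minimal sketch, not the wizard's exact output; the table name RawEvents, the storage URL, and the <SAS-token> placeholder are all hypothetical, and the table is assumed to already exist:

```kusto
// Minimal sketch: one-time ingestion from a single blob (hypothetical names).
// <SAS-token> must grant read access to the blob.
.ingest into table RawEvents (
    h'https://mystorageaccount.blob.core.windows.net/mycontainer/data.csv?<SAS-token>'
)
with (format='csv', ignoreFirstRecord=true)
```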
Prerequisites
- A workspace with a Microsoft Fabric-enabled capacity.
- A KQL database with editing permissions.
- A storage account.
For continuous ingestion, you also need:
- A workspace identity. My Workspace isn't supported. If necessary, create a new workspace.
- Hierarchical namespace enabled on the storage account.
- The Storage Blob Data Reader role assigned to the workspace identity.
- A container to hold the data files.
- A data file uploaded to the container. The data file structure is used to define the table schema. For more information, see Data formats supported by Real-Time Intelligence.
Note
You must upload a data file:
- Before the configuration to define the table schema during set-up.
- After the configuration to trigger the continuous ingestion, to preview data, and to verify the connection.
Add the workspace identity role assignment to the storage account
1. From the Workspace settings in Fabric, copy your workspace identity ID.
2. In the Azure portal, browse to your Azure Storage account, and select Access Control (IAM) > Add > Add role assignment.
3. Select Storage Blob Data Reader.
4. In the Add role assignment dialog, select + Select members.
5. Paste in the workspace identity ID, select the application, and then Select > Review + assign.
Create a container with data file
1. In the storage account, select Containers.
2. Select + Container, enter a name for the container, and select Save.
3. Open the container, select Upload, and upload the data file prepared earlier. For more information, see supported formats and supported compressions.
4. From the context menu [...], select Container properties, and copy the URL to input during the configuration.
Source
Set the source to get data.
1. From your workspace, open the eventhouse and select the database.
2. On the KQL database ribbon, select Get data.
3. Select the data source from the available list. In this example, you're ingesting data from Azure Storage.
Configure
Select a destination table. If you want to ingest data into a new table, select + New table and enter a table name.
Note
Table names can be up to 1,024 characters and can include spaces, alphanumeric characters, hyphens, and underscores. Other special characters aren't supported.
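For example, if the name you enter contains spaces or hyphens, the generated KQL commands bracket-quote it; this sketch uses a hypothetical table and schema:

```kusto
// Table names with spaces or hyphens are bracket-quoted in KQL commands.
.create table ['sensor readings-2024'] (Timestamp: datetime, DeviceId: string)
```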
In the Configure Azure Blob Storage connection, ensure that Continuous ingestion is turned on. It's turned on by default.
Configure the connection by creating a new connection, or by using an existing connection.
To create a new connection:
1. Select Connect to a storage account.
2. Use the following descriptions to help fill in the fields.

   | Setting | Field description |
   |---|---|
   | Subscription | The storage account subscription. |
   | Blob storage account | The storage account name. |
   | Container | The storage container containing the file you want to ingest. |

3. In the Connection field, open the dropdown and select + New connection, then Save > Close. The connection settings are prepopulated.
Note
Creating a new connection results in a new eventstream, named <storage_account_name>_eventstream. Make sure you don't remove the continuous ingestion eventstream from the workspace.
To use an existing connection:
1. Select Select an existing storage account.
2. Use the following descriptions to help fill in the fields.

   | Setting | Field description |
   |---|---|
   | RTAStorageAccount | An eventstream connected to your storage account from Fabric. |
   | Container | The storage container containing the file you want to ingest. |
   | Connection | Prepopulated with the connection string. |

3. In the Connection field, open the dropdown, select the existing connection string from the list, and then select Save > Close.
Optionally, expand File filters and specify the following filters:
| Setting | Field description |
|---|---|
| Folder path | Filters data to ingest only files under a specific folder path. |
| File extension | Filters data to ingest only files with a specific file extension. |

In the Eventstream settings section, you can select the events to monitor in Advanced settings > Event type(s). By default, Blob created is selected. You can also select Blob renamed.
Select Next to preview the data.
Inspect
The Inspect tab opens with a preview of the data.
To complete the ingestion process, select Finish.
Note
To trigger continuous ingestion and preview data, ensure you uploaded a new storage blob after the configuration.
Optionally:
Use the schema definition file dropdown to change the file that the schema is inferred from.
Use the file type dropdown to explore Advanced options based on data type.
Use the Table_mapping dropdown to define a new mapping.
Select </> to open the command viewer to view and copy the automatic commands generated from your inputs. You can also open the commands in a Queryset. (A sketch of typical generated commands follows this list.)
Select the pencil icon to Edit columns.
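For illustration, the generated commands typically include a table creation and an ingestion mapping. This sketch assumes a hypothetical RawEvents table inferred from a three-column CSV file; your own commands will differ:

```kusto
// Create the destination table with the inferred schema (hypothetical names).
.create table RawEvents (Timestamp: datetime, DeviceId: string, Temperature: real)

// Create an ordinal-based CSV ingestion mapping for the table.
.create table RawEvents ingestion csv mapping 'RawEvents_mapping'
    '[{"Column":"Timestamp","Properties":{"Ordinal":"0"}},{"Column":"DeviceId","Properties":{"Ordinal":"1"}},{"Column":"Temperature","Properties":{"Ordinal":"2"}}]'
```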
Edit columns
Note
- For tabular formats (CSV, TSV, PSV), you can't map a column twice. To map to an existing column, first delete the new column.
- You can't change an existing column type. If you try to map to a column having a different format, you may end up with empty columns.
The changes you can make in a table depend on the following parameters:
- Table type is new or existing
- Mapping type is new or existing
| Table type | Mapping type | Available adjustments |
|---|---|---|
| New table | New mapping | Rename column, change data type, change data source, mapping transformation, add column, delete column |
| Existing table | New mapping | Add column (on which you can then change data type, rename, and update) |
| Existing table | Existing mapping | None |
Mapping transformations
Some data format mappings (Parquet, JSON, and Avro) support simple ingest-time transformations. To apply mapping transformations, create or update a column in the Edit columns window.
Mapping transformations can be performed on a column of type string or datetime, with the source having data type int or long. For more information, see the full list of supported mapping transformations.
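For example, a JSON ingestion mapping can convert a source long field that holds Unix seconds into a datetime column at ingest time. A minimal sketch with hypothetical table and field names:

```kusto
// Map $.ts (Unix seconds, long) to a datetime column with the
// DateTimeFromUnixSeconds transformation (hypothetical names).
.create table RawEvents ingestion json mapping 'RawEvents_json_mapping'
    '[{"Column":"Timestamp","Properties":{"Path":"$.ts","Transform":"DateTimeFromUnixSeconds"}},{"Column":"DeviceId","Properties":{"Path":"$.device"}}]'
```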
Advanced options based on data type
Tabular (CSV, TSV, PSV):
If you're ingesting tabular formats in an existing table, you can select Advanced > Keep table schema. Tabular data doesn't necessarily include the column names that are used to map source data to the existing columns. When this option is checked, mapping is done by-order, and the table schema remains the same. If this option is unchecked, new columns are created for incoming data, regardless of data structure.
To use the first row as column names, select First row is column header.
JSON:
To determine how JSON data is divided into columns, select Nested levels, from 1 to 100.
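As an illustration, for a record like {"device": {"id": "d1", "temp": 21.5}}, the nested level controls whether the object stays in one dynamic column or is split into one column per leaf. In mapping terms, with hypothetical names:

```kusto
// Nested levels = 1: keep the object as a single dynamic column.
.create table RawEvents ingestion json mapping 'level1'
    '[{"Column":"device","Properties":{"Path":"$.device"}}]'

// Nested levels = 2: split the object into one column per leaf property.
.create table RawEvents ingestion json mapping 'level2'
    '[{"Column":"device_id","Properties":{"Path":"$.device.id"}},{"Column":"device_temp","Properties":{"Path":"$.device.temp"}}]'
```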
Summary
In the Summary window, all the steps are marked with green check marks when data ingestion finishes successfully. You can select a card to explore the data, delete the ingested data, or create a dashboard with key metrics.
When you close the window, you can see the connection in the Explorer tab, under Data streams. From here, you can filter the data streams and delete a data stream.
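For example, a quick sanity check from a KQL queryset, again with a hypothetical table name:

```kusto
// Verify the ingestion: row count and most recent timestamp.
RawEvents
| summarize IngestedRows = count(), LatestRecord = max(Timestamp)
```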
Related content
- To manage your database, see Manage data
- To create, store, and export queries, see Query data in a KQL queryset