Try one of the following:
Option 1: Use Azure Data Factory (ADF). This gives you fine-grained control, isn't performance-tier dependent, and can move just the data
- Create an empty target database in the Standard (DTU) tier (S2, for example); a CLI sketch follows after this list.
- Use the ADF Copy Data activity to copy:
- From: Source Azure SQL DB (Business Critical, Elastic Pool)
- To: Target Azure SQL DB (Standard DTU-based)
- Select:
- Source/Target linked services using SQL authentication
- Auto-create tables or use pre-created schema
- Use staging if required (for large datasets)
ADF can copy schema and data, but you may want to pre-create indexes, constraints, etc., if you need 100% parity.
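For the first bullet, a minimal sketch of creating the empty S2 target with the Azure CLI; `target-db`, `your-rg`, and `your-server-name` are placeholders for your own names:

```bash
# Create an empty Standard-tier (S2) database to serve as the ADF copy target.
az sql db create \
  --name target-db \
  --resource-group your-rg \
  --server your-server-name \
  --edition Standard \
  --service-objective S2
```

You can then point the ADF sink linked service at this database using SQL authentication and run the Copy Data activity against it.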
Option 2: Export/Import a BACPAC via Azure Storage. This gives you full compatibility across tiers and doesn't involve downtime
- Export the source DB to a BACPAC file in Azure Blob Storage. In PowerShell or another shell, use the Azure CLI (not T-SQL); note that the command also requires the server admin login, shown here as the placeholders `your-admin`/`your-password`:

```bash
az sql db export --name your-db-name --resource-group your-rg \
  --server your-server-name \
  --admin-user your-admin --admin-password "your-password" \
  --storage-key-type StorageAccessKey --storage-key "your-key" \
  --storage-uri "https://yourstorage.blob.core.windows.net/container/your.bacpac"
```
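If you need the value for `--storage-key`, you can read it off the storage account; this sketch assumes the account `yourstorage` from the URI above lives in the same resource group:

```bash
# Print the primary access key for the storage account holding the BACPAC.
az storage account keys list \
  --account-name yourstorage \
  --resource-group your-rg \
  --query "[0].value" -o tsv
```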
- Import that BACPAC into a new Standard-tier database. Note that `az sql db import` loads the BACPAC into an existing (empty) database and has no tier flags, so set Standard/S2 when you create the target (see the sketch below):

```bash
az sql db import --name new-db-name --resource-group your-rg --server your-server-name ...
```
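A minimal end-to-end sketch of that step, assuming SQL authentication and the same placeholder names and storage values as the export command:

```bash
# Create the empty Standard-tier (S2) target, then import the BACPAC into it.
az sql db create --name new-db-name --resource-group your-rg \
  --server your-server-name --edition Standard --service-objective S2

az sql db import --name new-db-name --resource-group your-rg \
  --server your-server-name \
  --admin-user your-admin --admin-password "your-password" \
  --storage-key-type StorageAccessKey --storage-key "your-key" \
  --storage-uri "https://yourstorage.blob.core.windows.net/container/your.bacpac"
```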
You can delete the BACPAC after the import, and you won't incur extra Business Critical costs during the copy; the export/import runs as a service-side operation in Azure.
Option 3: Geo-restore or point-in-time restore (this involves a short-term extra cost)
- Restore the Business Critical DB to a General Purpose (vCore) or Standard (DTU) tier first; a CLI sketch follows after this list
- Then do the copy from the restored database
- Drawback: requires temporarily holding a full copy of the DB in an intermediate tier, which may briefly incur higher costs
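A minimal sketch of the restore step with the Azure CLI, assuming a point-in-time restore onto the same server; `restored-db` and the timestamp are placeholders:

```bash
# Point-in-time restore of the Business Critical source into a new
# Standard-tier (S2) database on the same server.
az sql db restore \
  --name your-db-name \
  --resource-group your-rg \
  --server your-server-name \
  --dest-name restored-db \
  --time "2024-01-01T12:00:00Z" \
  --edition Standard \
  --service-objective S2
```

From `restored-db` you can do the copy into your final target, then drop the intermediate database to stop the extra cost.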
If the above response helps answer your question, remember to "Accept Answer" so that others in the community facing similar issues can easily find the solution. Your contribution is highly appreciated.
hth
Marcin