#azuresqldatabase
Explore tagged Tumblr posts
Text
Building Metadata-Driven Pipelines in Azure Data Factory
1. Introduction to Metadata-Driven Pipelines
Metadata-driven pipelines in Azure Data Factory (ADF) provide a dynamic and scalable approach to orchestrating data workflows. Instead of hardcoding pipeline configurations, metadata (stored in a database or JSON file) defines:
Source & destination locations
File formats & schemas
Transformation logic
Processing rules
This approach enhances reusability, reduces maintenance efforts, and allows for seamless pipeline modifications without redeploying code.
2. Storing and Managing Metadata
Metadata can be stored in:
Azure SQL Database: Structured metadata for multiple pipelines
Azure Blob Storage (JSON/CSV files): Unstructured metadata for flexible processing
Azure Table Storage: NoSQL metadata storage for key-value pairs
For this blog, we’ll cover two practical examples:
Using a JSON file stored in Azure Blob Storage
Using a metadata table in Azure SQL Database
3. Example 1: JSON-Based Metadata in Azure Blob Storage
Step 1: Define Metadata JSON File
Create a JSON file (metadata.json) in Azure Blob Storage to define source and destination details:

```json
{
  "pipelines": [
    {
      "pipeline_name": "CopyDataPipeline",
      "source": {
        "type": "AzureBlobStorage",
        "path": "source-container/raw-data/"
      },
      "destination": {
        "type": "AzureSQLDatabase",
        "table": "ProcessedData"
      },
      "file_format": "csv"
    }
  ]
}
```
Step 2: Create a Lookup Activity in ADF
Add a Lookup Activity in ADF to read the JSON metadata from Azure Blob Storage.
Configure the Dataset to point to the JSON file.
Enable the First row only option if fetching a single record.
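A minimal sketch of what the Lookup activity definition might look like in the pipeline JSON; the dataset name MetadataJsonDataset is an assumption for illustration, and the activity is named Lookup to match the expressions used below:

```json
{
  "name": "Lookup",
  "type": "Lookup",
  "typeProperties": {
    "source": { "type": "JsonSource" },
    "dataset": {
      "referenceName": "MetadataJsonDataset",
      "type": "DatasetReference"
    },
    "firstRowOnly": true
  }
}
```

With firstRowOnly set to true, the contents of metadata.json are returned under @activity('Lookup').output.firstRow.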
Step 3: Use Metadata in a ForEach Activity
Add a ForEach Activity and set its Items property to the metadata array returned by the Lookup, for example @activity('Lookup').output.firstRow.pipelines.
Inside the loop, use a Copy Activity to move data dynamically based on the current metadata item.
Step 4: Configure Dynamic Parameters
In the Copy Activity, pass the current item's values to the parameterized source and sink datasets:
Source path: @item().source.path
Destination table: @item().destination.table
The pipeline now reads its configuration from metadata at runtime and copies data accordingly (see the sketch below).
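For orientation, a rough sketch of how the ForEach and Copy activities could be wired together; the parameterized dataset names SourceBlobDataset and SinkSqlDataset, along with their folderPath and tableName parameters, are assumptions rather than anything defined in this post:

```json
{
  "name": "ForEachPipelineConfig",
  "type": "ForEach",
  "typeProperties": {
    "items": {
      "value": "@activity('Lookup').output.firstRow.pipelines",
      "type": "Expression"
    },
    "activities": [
      {
        "name": "CopyData",
        "type": "Copy",
        "inputs": [
          {
            "referenceName": "SourceBlobDataset",
            "type": "DatasetReference",
            "parameters": { "folderPath": "@item().source.path" }
          }
        ],
        "outputs": [
          {
            "referenceName": "SinkSqlDataset",
            "type": "DatasetReference",
            "parameters": { "tableName": "@item().destination.table" }
          }
        ],
        "typeProperties": {
          "source": { "type": "DelimitedTextSource" },
          "sink": { "type": "AzureSqlSink" }
        }
      }
    ]
  }
}
```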
4. Example 2: SQL-Based Metadata for Pipeline Execution
Step 1: Create Metadata Table in Azure SQL Database
Execute the following SQL script to create and seed a metadata table:

```sql
CREATE TABLE MetadataPipelineConfig (
    ID INT IDENTITY(1,1) PRIMARY KEY,
    PipelineName NVARCHAR(100),
    SourceType NVARCHAR(50),
    SourcePath NVARCHAR(255),
    DestinationType NVARCHAR(50),
    DestinationTable NVARCHAR(100),
    FileFormat NVARCHAR(50)
);

INSERT INTO MetadataPipelineConfig
    (PipelineName, SourceType, SourcePath, DestinationType, DestinationTable, FileFormat)
VALUES
    ('CopyDataPipeline', 'AzureBlobStorage', 'source-container/raw-data/', 'AzureSQLDatabase', 'ProcessedData', 'csv');
```
Step 2: Use a Lookup Activity to Fetch Metadata
Add a Lookup Activity in ADF.
Configure the Source Dataset to point to the MetadataPipelineConfig table.
Fetch all metadata records by disabling the First row only option.
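A minimal sketch of what this Lookup might look like; the dataset name MetadataConfigDataset and the explicit query are assumptions for illustration:

```json
{
  "name": "Lookup",
  "type": "Lookup",
  "typeProperties": {
    "source": {
      "type": "AzureSqlSource",
      "sqlReaderQuery": "SELECT PipelineName, SourceType, SourcePath, DestinationType, DestinationTable, FileFormat FROM MetadataPipelineConfig"
    },
    "dataset": {
      "referenceName": "MetadataConfigDataset",
      "type": "DatasetReference"
    },
    "firstRowOnly": false
  }
}
```

With First row only disabled, the rows are returned as an array under @activity('Lookup').output.value, which the ForEach in the next step iterates.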
Step 3: Use ForEach Activity and Copy Activity
Add a ForEach Activity and set its Items property to the Lookup output array, e.g. @activity('Lookup').output.value, to loop over the metadata rows.
Inside the loop, configure a Copy Activity with dynamic expressions:
Source path: @item().SourcePath
Destination table: @item().DestinationTable
Step 4: Deploy and Run the Pipeline
Once the pipeline is deployed, it dynamically pulls metadata from SQL and executes data movement accordingly.
5. Benefits of Metadata-Driven Pipelines
✅ Flexibility: Modify metadata without changing pipeline logic
✅ Scalability: Handle multiple pipelines with minimal effort
✅ Efficiency: Reduce redundant pipelines and enhance maintainability
6. Conclusion
Metadata-driven pipelines in Azure Data Factory significantly improve the efficiency of data workflows. Whether using JSON files in Azure Blob Storage or structured tables in Azure SQL Database, this approach allows for dynamic and scalable automation.
WEBSITE: https://www.ficusoft.in/azure-data-factory-training-in-chennai/
0 notes
Photo

When users click on the Get Data icon in Power BI, a drop-down menu appears and shows all data sources from which data can be ingested.
Check our Info: www.incegna.com
Reg Link for Programs: http://www.incegna.com/contact-us
Follow us on Facebook: www.facebook.com/INCEGNA/?
Follow us on Instagram: https://www.instagram.com/_incegna/
For Queries: [email protected]
#powerbi,#getdataicon,#powerbidesktop,#Pivot,#query,#sql,#DAX,#crossfiltering,#Azuresqldatabase,#Azure,#businessintelligence,#excelbi,#Visualization,#Oracle,#adhoc https://www.instagram.com/p/B-WbM1-ACA6/?igshid=14q97wiowtlr5
#powerbi#getdataicon#powerbidesktop#pivot#query#sql#dax#crossfiltering#azuresqldatabase#azure#businessintelligence#excelbi#visualization#oracle#adhoc
0 notes
Text
Azure Data Studio: January 2019 Update
https://www.quanrel.com/azure-data-studio-january-2019-update/
The team working on Azure Data Studio continues to release updates, making the product better and better. I demonstrate one of the improvements in this video. #azuredatastudio #microsoft #azuresqldatabase
0 notes
Link
It's not a good thing that Microsoft dropped five minutes worth of data in Azure SQL Database. However, if any of us had been in charge, it might have been a lot worse than five minutes. We need to have a better perspective on how well the cloud operates and on what we should expect from that operation. #azure #azuresqldatabase #sqlserver
0 notes
Text
Assess an Enterprise With Data Migration Assistant–Part 3: Loading an Assessment
In my previous post I took you through how to run a scaled assessment using the dmaDataCollector PowerShell function.
In this post I’ll take you through how to consume the assessment JSON output using the dmaProcessor PowerShell function.
Consuming the assessment JSON file
Once your assessment has finished, you are ready to import the data into SQL Server for analysis.
Open PowerShell and run the dmaProcessor function.
Parameters
processTo – Where the JSON file will be processed to. Possible values are SQLServer and AzureSQLDatabase
serverName – The SQL Server instance where the data is to be processed. If using AzureSQLDatabase, supply the SQL Server name only (do not include .database.windows.net). You will be prompted for two logins when targeting Azure SQL Database: the first is your Azure tenant credentials, the second is your admin login for the Azure SQL Server.
CreateDMAReporting – This will create the staging database where the JSON file will be processed. If the database already exists and this flag is set to one, the objects do not get created. This is useful if a single object has been dropped, as this will create the object again
CreateDataWarehouse – This will create the data warehouse which will be used by the PowerBI report
databaseName – The name of the DMAReporting database
warehouseName – The name of the data warehouse database
jsonDirectory – The directory containing the JSON assessment file. If there are multiple JSON files in the directory, they are processed one by one
The dmaProcessor should take only a few seconds to process a single file.
Loading the data warehouse
Once the dmaProcessor has finished processing the assessment files, the data will be loaded into the ReportData table in the DMAReporting database.
From here we need to load the data warehouse.
Use the attached script to populate any missing values in the dimensions and then load the data warehouse.
The script will take the data from the ReportData table in the DMAReporting database and load it into the warehouse. If there are any errors during this load process, they will most likely be due to missing entries in the dimension tables.
Set your database owners
To get the most value from the reports it’s a great idea to now set the database owners in the dimDBOwner dimension and update DBOwnerKey in the FactAssessment table. This will allow slicing and filtering the PowerBI report based on specific database owners.
The attached script provides the basic TSQL statements for you to do this. However, it is not mandatory.
In the next post we will wrap this process up by adding the reporting on top of this data warehouse.
Get the warehouse loading script here
LoadWarehouse
Script Disclaimer
The sample scripts provided here are not supported under any Microsoft standard support program or service. All scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages. Please seek permission before reposting these scripts on other sites/repositories/blogs.
from Microsoft Data Migration Blog http://ift.tt/2FiDFqP via IFTTT
0 notes
Text
Azure SQL Database Content Synchronisation to Downstream Environments
Data Flow ====> Production –> Staging –> Integration –> Test –> Dev
I have been working with various clients, each with a different kind of requirement. Thinking about it, all of them lead to one common purpose. What is that? We develop applications and write automated tests to validate them on a regular basis, on each release, and even nightly to run a wide range of tests. One of the…
View On WordPress
#Automation#Azure Powershell#azureautomation#AzureSQLDatabase#Database#Infrastructure#Migration#runbooks#SQL Server#Test Data
0 notes
Text
Azure Data Factory for Healthcare Data Workflows
Introduction
Azure Data Factory (ADF) is a cloud-based ETL (Extract, Transform, Load) service that enables healthcare organizations to automate data movement, transformation, and integration across multiple sources. ADF is particularly useful for handling electronic health records (EHRs), HL7/FHIR data, insurance claims, and real-time patient monitoring data while ensuring compliance with HIPAA and other healthcare regulations.
1. Why Use Azure Data Factory in Healthcare?
✅ Secure Data Integration — Connects to EHR systems (e.g., Epic, Cerner), cloud databases, and APIs securely.
✅ Data Transformation — Supports mapping, cleansing, and anonymizing sensitive patient data.
✅ Compliance — Ensures data security standards like HIPAA, HITRUST, and GDPR.
✅ Real-time Processing — Can ingest and process real-time patient data for analytics and AI-driven insights.
✅ Cost Optimization — Pay-as-you-go model, eliminating infrastructure overhead.
2. Healthcare Data Sources Integrated with ADF
3. Healthcare Data Workflow with Azure Data Factory
Step 1: Ingesting Healthcare Data
Batch ingestion (EHR, HL7, FHIR, CSV, JSON)
Streaming ingestion (IoT sensors, real-time patient monitoring)
Example: Ingest HL7/FHIR data from an API

```json
{
  "source": {
    "type": "REST",
    "url": "https://healthcare-api.com/fhir",
    "authentication": {
      "type": "OAuth2",
      "token": "<ACCESS_TOKEN>"
    }
  },
  "sink": {
    "type": "AzureBlobStorage",
    "path": "healthcare-data/raw"
  }
}
```
Step 2: Data Transformation in ADF
Using Mapping Data Flows, you can:
Convert HL7/FHIR JSON to structured tables
Standardize ICD-10 medical codes
Encrypt or de-identify PHI (Protected Health Information)
Example: SQL query for data transformation

```sql
SELECT
    patient_id,
    diagnosis_code,
    UPPER(first_name) AS first_name,
    LEFT(ssn, 3) + 'XXX-XXX' AS masked_ssn
FROM raw_healthcare_data;
```
Step 3: Storing Processed Healthcare Data
Processed data can be stored in:
✅ Azure Data Lake (for large-scale analytics)
✅ Azure SQL Database (for structured storage)
✅ Azure Synapse Analytics (for research & BI insights)
Example: Writing transformed data to a SQL database

```json
{
  "type": "AzureSqlDatabase",
  "connectionString": "Server=tcp:healthserver.database.windows.net;Database=healthDB;",
  "query": "INSERT INTO Patients (patient_id, name, diagnosis_code) VALUES (?, ?, ?)"
}
```
Step 4: Automating & Monitoring Healthcare Pipelines
Trigger ADF Pipelines daily/hourly or based on event-driven logic
Monitor execution logs in Azure Monitor
Set up alerts for failures & anomalies
Example: Create a pipeline trigger to refresh data every 6 hours

```json
{
  "type": "ScheduleTrigger",
  "recurrence": {
    "frequency": "Hour",
    "interval": 6
  },
  "pipeline": "healthcare_data_pipeline"
}
```
4. Best Practices for Healthcare Data in ADF
🔹 Use Azure Key Vault to securely store API keys & database credentials (a sample linked service reference is sketched after this list).
🔹 Implement Data Encryption (using Azure Managed Identity).
🔹 Optimize ETL Performance by using Partitioning & Incremental Loads.
🔹 Enable Data Lineage in Azure Purview for audit trails.
🔹 Use Databricks or Synapse Analytics for AI-driven predictive healthcare analytics.
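As a hedged illustration of the Key Vault recommendation above, an ADF linked service can reference a secret rather than embedding the connection string; the linked service name, Key Vault reference, and secret name below are assumptions for the sketch:

```json
{
  "name": "AzureSqlHealthDb",
  "properties": {
    "type": "AzureSqlDatabase",
    "typeProperties": {
      "connectionString": {
        "type": "AzureKeyVaultSecret",
        "store": {
          "referenceName": "HealthKeyVault",
          "type": "LinkedServiceReference"
        },
        "secretName": "healthdb-connection-string"
      }
    }
  }
}
```

The secret is resolved at runtime, so the credential never appears in the pipeline or linked service definition.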
5. Conclusion
Azure Data Factory is a powerful tool for automating, securing, and optimizing healthcare data workflows. By integrating with EHRs, APIs, IoT devices, and cloud storage, ADF helps healthcare providers improve patient care, optimize operations, and ensure compliance with industry regulations.
WEBSITE: https://www.ficusoft.in/azure-data-factory-training-in-chennai/
0 notes
Link
The team working on Azure Data Studio continues to release updates making the product better and better. I demonstrate one of the improvements in this video. #azuredatastudio #microsoft #azuresqldatabase
0 notes
Link
In order to support realistic testing, you need to have both a realistic environment and some way to generate load in that environment. This video shows you one way of setting up a collection of Azure SQL Databases to simulate a real environment. I then use PowerShell to generate load within that environment. #azure #azuresqldatabase #sqlserver
0 notes
Text
Assess an Enterprise With Data Migration Assistant–Part 2: Running an Assessment
In my previous post I took you through the prerequisites for successfully running a scaled assessment against your SQL Server enterprise.
In this post I’ll take you through how to run a scaled assessment using the dmaDataCollector PowerShell script.
Running a scaled assessment
Ensure that the PowerShell modules have been loaded into the modules directory and that an inventory has been created. If not, see Part 1 of this series before continuing.
Open PowerShell and run the dmaDataCollector function.
Parameters
getServerListFrom – Your inventory. Possible values are SqlServer or CSV
serverName – The SQL Server instance name that hosts the inventory when using SqlServer for the getServerListFrom parameter
databaseName – The database hosting the inventory table
AssessmentName – The name of the DMA Assessment
TargetPlatform – The assessment target type you would like to perform. Possible values are AzureSQLDatabase, SQLServer2012, SQLServer2014, SQLServer2016, SQLServerLinux2017, SQLServerWindows2017
AuthenticationMethod – The authentication method for connecting to the SQL Server targets to assess. Possible values are SQLAuth and WindowsAuth
OutputLocation – The location to store the JSON assessment output file
If there is an unexpected error, the command window initiated by this process will be terminated. Review the error log to see why it failed.
The output file
The output file will be written to the directory specified in the OutputLocation parameter.
Note that depending on the number of databases being assessed and the number of objects within the databases, the assessments can take a very long time. The file will be written once the assessments have completed.
In the next post we will look at how to consume this assessment JSON file in preparation for reporting.
Script Disclaimer
The sample scripts provided here are not supported under any Microsoft standard support program or service. All scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages. Please seek permission before reposting these scripts on other sites/repositories/blogs.
from Microsoft Data Migration Blog http://ift.tt/2G1HiCN via IFTTT
0 notes
Text
How to restore Deleted Azure SQL Databases - Microsoft Azure
This is a fine sunny day, and writing a blog post sitting in my garden is really very relaxing. This is a simple blog post on how to recover Azure SQL databases if they have been accidentally deleted or lost in some other manner. The script below will help you restore the databases. Also, I had a comment on one of my past posts saying not to have passwords in the script, hence this version has been amended. Hope this…
View On WordPress
0 notes
Text
Redgate Tools - Enabling Continuous Delivery for Database Changes with Octopus Deploy and Teamcity
I have recently been working with various clients on enabling and adopting DevOps practices within enterprise organisations. It's interesting to see how many people think it's just the code development team and operations that have to work together. In reality, there is a team who develop and maintain databases, and they need to be highlighted as part of these practices, which in my mind is key…
View On WordPress
#AzureSQLDatabase#Continuous Deployment#Continuous Integration#Git#Octopus Deploy#Red-Gate#TeamCity#VIsual Studio 2015
0 notes