Introduction
In a scenario where SQL audit data requires special processing, review, and analysis, this can be accomplished by designing an in-house custom application to read the data and provide the desired output and visualization. Since the ApexSQL Audit central repository database uses a proprietary design with encryption applied, a custom database has to be used to provide data to such an in-house application.
The ApexSQL Audit API can be used to retrieve the SQL audit data in raw form (formatted as JSON) and populate a custom database with it. The remaining problem is how to ensure that the custom database always has the latest data imported. This is best resolved with an automated data retrieval method that can be scheduled with the desired frequency to retrieve data from the repository and import it into the custom database.
Such a solution can be easily designed with a Windows PowerShell script, and this article will show an example of how such a script can work.
Custom database
The design of the 3rd party database, used for the custom application, depends on the requirements of that application. However, a basic example of a table that can be used to store the data extracted with the PowerShell API call would look like this:
CREATE TABLE [dbo].[report] (
    [ID] [int] IDENTITY(1, 1) NOT NULL,
    [Access list] [varchar](max) NULL,
    [Application] [varchar](40) NULL,
    [Client host] [varchar](20) NULL,
    [Database] [varchar](20) NULL,
    [Date] [datetime] NULL,
    [Event type] [varchar](20) NULL,
    [Internal event type] [varchar](20) NULL,
    [Login] [varchar](20) NULL,
    [Object] [varchar](20) NULL,
    [Operation] [varchar](20) NULL,
    [Schema] [varchar](20) NULL,
    [Server] [varchar](20) NULL,
    [State] [varchar](10) NULL,
    [Text data] [varchar](max) NULL,
    CONSTRAINT [PK__ID__A2CF65630E6B9C22] PRIMARY KEY CLUSTERED ([ID] ASC)
)
The table design is based on the details that are available for each event within the SQL auditing data. When the table is ready, it is time to utilize PowerShell to populate it.
Automation script
The PowerShell script that populates the SQL table can be designed in different ways, based on requirements. The basic principle followed by the example in this article consists of three phases:
Read out the SQL audit data using the API call
Create the SQL query to insert the data
Run the query against the target database
For the part where a query is run, we will create a PowerShell function that will use the target SQL Server name, target database, SQL credentials, and the PowerShell object that contains SQL commands as parameters. This means that with this function, we can just change target parameters and run the query against a different target if necessary.
The function should look like this:
Function Add-APIData ($server, $database, $query, $SQLCredentials) {
    $SQLConnection = New-Object System.Data.SqlClient.SqlConnection
    #SQL credentials are supplied through the Credential property, so the connection string holds only the target
    $SQLConnection.ConnectionString = "SERVER=$server;DATABASE=$database"
    #Convert the PSCredential to a SqlCredential (the password must be made read-only first)
    $SQLCredentials.Password.MakeReadOnly()
    $SQLConnection.Credential = New-Object System.Data.SqlClient.SqlCredential($SQLCredentials.UserName, $SQLCredentials.Password)
    $SQLCommand = New-Object System.Data.SqlClient.SqlCommand
    $SQLCommand.Connection = $SQLConnection
    $SQLCommand.CommandText = $query
    $SQLCommand.CommandTimeout = 0
    $SQLConnection.Open()
    $SQLCommand.ExecuteNonQuery()
    $SQLConnection.Close()
    $SQLCommand.Dispose()
    $SQLConnection.Dispose()
}
This function will establish a connection to the target SQL Server and the custom database, using the given credentials, and execute the query constructed by the main part of the automation script. The credentials part is usually resolved by adding this command to the script body:
$SQLCredentials = Get-Credential
But this will prompt for credentials entry every time the script is executed, which defeats the purpose of automation. Instead, a technique described in the PowerShell encrypt password techniques for SQL Server article can be used to save the credentials in encrypted form and use them with every execution.
With the function defined, it is time to move to the main part of the script. The first thing to do is to define the SQL Server name and database name for the target, i.e., the custom database:
$SQLServer = 'Server Name'
$DatabaseName = 'Database Name'
Next, we can proceed with pulling the data from the SQL audit data repository. The following commands run this job:
$WebCall = New-Object System.Net.WebClient
#Convert the prompted credential to a NetworkCredential, which is what the web client expects
$WebCall.Credentials = (Get-Credential).GetNetworkCredential()
$result = $WebCall.DownloadString("https://Q4V4Y6Y2:49152/report/data?name=Q4V4Y6Y2%5cMilan%2fMonthly+report&last=31:0:00:00&page=1&eventsPerPage=100000")
The argument of the DownloadString method in the last command is the URL needed to run the API call and retrieve the SQL audit data. This string is generated from the ApexSQL Audit application, and instructions on how to do that are available in the article: Using API to generate SQL auditing reports
We can see that the command $WebCall.Credentials = (Get-Credential).GetNetworkCredential() is used, and it will prompt for the ApexSQL Audit access credentials. Those credentials refer to the accounts dedicated to ApexSQL Audit access:
Note that it is implied that the account used for the ApexSQL Audit administration is also a valid access account for the API calls:
Again, prompting for credentials defeats the purpose of automation, so instead of using this command, repeat the same procedure to save the credentials as for the SQL credentials.
Once the SQL audit reporting API call has run, it is time to create the SQL script that will populate the target table with the data. The following commands are required for the SQL script creation:
$result = ConvertFrom-Json $result
#New line separator appended after each generated INSERT statement
$nl = [Environment]::NewLine
$SQLScript = @()
foreach ($r in $result.Events) {
    if ($r.'Text data'.Contains('''')) {
        $r.'Text data' = $r.'Text data'.Replace('''','"')
    }
    $SQLScript += "INSERT INTO dbo.report VALUES ('" + $r.'Access list' + "','" + $r.Application + "','" +
        $r.'Client host' + "','" + $r.Database + "','" + $r.Date + "','" + $r.'Event type' + "','" +
        $r.'Internal event type' + "','" + $r.Login + "','" + $r.Object + "','" + $r.Operation + "','" +
        $r.Schema + "','" + $r.Server + "','" + $r.State + "','" + $r.'Text data' + "')" + $nl
}
Since the data is streamed in JSON format, the command $result = ConvertFrom-Json $result will convert the JSON records into a PowerShell object array. In other words, it will convert data output from this form:
To this form:
And now we can convert those records into standard SQL INSERT statements, which this part of the commands listed above does:
$SQLScript += "INSERT INTO dbo.report VALUES ('" + $r.'Access list' + "','" + $r.Application + "','" +
    $r.'Client host' + "','" + $r.Database + "','" + $r.Date + "','" + $r.'Event type' + "','" +
    $r.'Internal event type' + "','" + $r.Login + "','" + $r.Object + "','" + $r.Operation + "','" +
    $r.Schema + "','" + $r.Server + "','" + $r.State + "','" + $r.'Text data' + "')" + $nl
We can see that the list of values corresponds to the columns in the table example we used at the beginning, so if a different set of columns is used for the table, the matching set of values should be used.
One problem that might occur when using the query text data as an INSERT value is that the text can contain the apostrophe character, which would break the INSERT command. This is where we used the following clause:
if ($r.'Text data'.Contains('''')) { $r.'Text data'=$r.'Text data'.Replace('''','"') }
The clause will examine the query text data strings and replace apostrophe characters with double quotes, which is acceptable for the INSERT command. This clause can be used for similar transformations.
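Note that this substitution alters the stored query text. An alternative that preserves the original text is to double the apostrophe instead, i.e., Replace('''',''''''), since doubling is how T-SQL escapes a literal apostrophe inside a string; a small illustration:
-- the doubled apostrophe is stored as a single one, so the value below is saved as O'Brien
INSERT INTO dbo.report ([Text data]) VALUES ('O''Brien');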
The final command will call the created function, which establishes the SQL Server and database connection and runs the INSERT script:
Add-APIData -server $SQLServer -database $DatabaseName -query $SQLScript -SQLCredentials $SQLCredentials
We can see that, for the arguments in this function, the values that we stored in the script body were used.
Once the script has been run manually, we can open SSMS and verify that the table has been populated with the query SELECT * FROM [dbo].[report]:
The entire PowerShell script would look like this:
The save password part (which should be run only once, in a separate script, under the same Windows account that will run the automation, since the default SecureString encryption is tied to that account):
#Type the SQL account password to save it for automation
Read-Host -AsSecureString | ConvertFrom-SecureString | Out-File C:\Credentials\SQLPassword.txt
#Type the ApexSQL Audit account password to save it for automation
Read-Host -AsSecureString | ConvertFrom-SecureString | Out-File C:\Credentials\ApexSQLAuditPassword.txt
The automation script:
#Define the function that establishes the SQL connection and runs the script
Function Add-APIData ($server, $database, $query, $SQLCredentials) {
    $SQLConnection = New-Object System.Data.SqlClient.SqlConnection
    $SQLConnection.ConnectionString = "SERVER=$server;DATABASE=$database"
    #Convert the PSCredential to a SqlCredential (the password must be made read-only first)
    $SQLCredentials.Password.MakeReadOnly()
    $SQLConnection.Credential = New-Object System.Data.SqlClient.SqlCredential($SQLCredentials.UserName, $SQLCredentials.Password)
    $SQLCommand = New-Object System.Data.SqlClient.SqlCommand
    $SQLCommand.Connection = $SQLConnection
    $SQLCommand.CommandText = $query
    $SQLCommand.CommandTimeout = 0
    $SQLConnection.Open()
    $SQLCommand.ExecuteNonQuery()
    $SQLConnection.Close()
    $SQLCommand.Dispose()
    $SQLConnection.Dispose()
}
#Set the accounts for automated script runs
$SQLAccount = 'Account name'
$ApexSQLAccount = 'Account name'
$EncryptedSQLPass = Get-Content -Path "C:\Credentials\SQLPassword.txt"
$EncryptedApexSQLPass = Get-Content -Path "C:\Credentials\ApexSQLAuditPassword.txt"
$SQLCredentials = New-Object System.Management.Automation.PSCredential -ArgumentList $SQLAccount, (ConvertTo-SecureString $EncryptedSQLPass)
#Set the target where the SQL script will be run
$SQLServer = 'SQL Server name'
$DatabaseName = 'Database name'
#Run the Web API call
$WebCall = New-Object System.Net.WebClient
$ApexSQLCredentials = New-Object System.Management.Automation.PSCredential -ArgumentList $ApexSQLAccount, (ConvertTo-SecureString $EncryptedApexSQLPass)
$WebCall.Credentials = $ApexSQLCredentials.GetNetworkCredential()
$result = $WebCall.DownloadString("https://Q4V4Y6Y2:49152/report/data?name=Q4V4Y6Y2%5cMilan%2fMonthly+report&last=31:0:00:00&page=1&eventsPerPage=100000")
#Convert the JSON data to a PowerShell object array
$result = ConvertFrom-Json $result
#Populate the script with the INSERT values and convert values where necessary
$nl = [Environment]::NewLine
$SQLScript = @()
foreach ($r in $result.Events) {
    if ($r.'Text data'.Contains('''')) {
        $r.'Text data' = $r.'Text data'.Replace('''','"')
    }
    $SQLScript += "INSERT INTO dbo.report VALUES ('" + $r.'Access list' + "','" + $r.Application + "','" +
        $r.'Client host' + "','" + $r.Database + "','" + $r.Date + "','" + $r.'Event type' + "','" +
        $r.'Internal event type' + "','" + $r.Login + "','" + $r.Object + "','" + $r.Operation + "','" +
        $r.Schema + "','" + $r.Server + "','" + $r.State + "','" + $r.'Text data' + "')" + $nl
}
#Run the function
Add-APIData -server $SQLServer -database $DatabaseName -query $SQLScript -SQLCredentials $SQLCredentials
Scheduling
The PowerShell script should be saved as a standard .ps1 file, and any scheduler can be used to run it at the desired frequency; native methods include the Windows Task Scheduler and SQL Agent jobs (a sketch of such a job follows below). The frequency should be set according to individual needs, i.e., to keep the data set as fresh as needed. Note that the SQL audit report definition created with ApexSQL Audit, which the API call refers to, will have to be configured with a time range that matches that exact frequency to avoid duplicate records in the custom database. For example, if we need a daily retrieval frequency, the report definition should be configured to take the last 1 day of records:
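As an illustration, the script run could be scheduled with a SQL Agent job created through the msdb procedures; a minimal sketch, where the job name, the schedule, and the script path C:\Scripts\AuditImport.ps1 are placeholder assumptions:
USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'Audit data retrieval';
EXEC dbo.sp_add_jobstep @job_name = N'Audit data retrieval',
    @step_name = N'Run the import script',
    @subsystem = N'PowerShell',
    @command = N'C:\Scripts\AuditImport.ps1';
EXEC dbo.sp_add_jobschedule @job_name = N'Audit data retrieval',
    @name = N'Daily at midnight',
    @freq_type = 4,          -- daily
    @freq_interval = 1,      -- every day
    @active_start_time = 0;  -- 00:00:00
EXEC dbo.sp_add_jobserver @job_name = N'Audit data retrieval';
GO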
Conclusion
Popular tools for data analysis, like Excel or Power BI, are already available and can be used with the ApexSQL Audit API, but if a custom solution is used, including a custom database to work with that solution, the best way to keep the analysis accurate is to automate the SQL audit data retrieval.
Growing datasets and information in the modern digital age have become considerable business assets, as data drives many important decisions for organizations. Therefore, many companies are challenged with keeping a close eye on data governance processes that expand together with the amount of data being accumulated. SQL auditing data is no exception, and it might play an important role when screening for potential data breaches and learning about activity trends and data flows.
In this article, we will explore how to collect audit data and put it to use in the context of data analytics using MS Power BI, to create valuable information and learn about database activity and data access.
Extracting data using native auditing techniques
There are few options for collecting, storing, and organizing auditing data using the native SQL auditing techniques. In this article, we'll be using Extended Events sessions for demonstration purposes. Let us quickly jump into creating a session file to track access, more particularly SELECT statements against a database, using the SQL Server Management Studio (SSMS) user interface.
Create new Extended Events session
Connect to a SQL Server using SSMS
In the Object Explorer, expand the Management node > Extended Events > Right click on the Sessions node and choose New Session:
In the New Session wizard, inside the General tab, give your session a recognizable name:
In the Events tab, we will configure the policy that tells SQL Server what type of auditing data we are collecting, in this case tracking SELECT statements:
Navigate to the Events tab and, in the Event library, search for sql_statement_completed
Highlight it in the grid and choose right arrow button “>” to move the event to the Selected events box:
Staying on the Events tab, highlight the sql_statement_completed from the Selected events grid and choose Configure:
The event configuration options allow us to be more specific about what information will be captured during the SQL audit process. For this demo, inside the Global Fields we've chosen:
Client application name
Client hostname
System time
Database name
SQL text, and
Username
In the Filter (Predicate) tab, add a new clause, select the sqlserver.sql_text field, choose like_i_sql_unicode_string as the operator, and put "%SELECT%" as the value:
Now that we’ve set up the fields and filters, continue by navigating to the Data storage tab
Select event file for the storage type
Configure the file properties to instantiate the files on a file system location of your choice, and set the maximum file size and rollover policy:
Choose OK to finish. The event session should now be listed under the Extended Events sessions node:
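For reference, an equivalent session can also be created with a T-SQL script instead of the UI; a minimal sketch, where the session name, file path, and size limits are placeholder assumptions:
CREATE EVENT SESSION [TrackSelectStatements] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (
        sqlserver.client_app_name,   -- Client application name
        sqlserver.client_hostname,   -- Client hostname
        sqlserver.database_name,     -- Database name
        sqlserver.sql_text,          -- SQL text
        sqlserver.username           -- Username
    )
    WHERE sqlserver.like_i_sql_unicode_string(sqlserver.sql_text, N'%SELECT%')
)
ADD TARGET package0.event_file (
    SET filename = N'C:\XESessions\TrackSelectStatements.xel',
        max_file_size = 5,       -- megabytes per file
        max_rollover_files = 4
);
GO
ALTER EVENT SESSION [TrackSelectStatements] ON SERVER STATE = START;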
Prepare auditing data output for Power BI use
SQL audit data collection commences the moment a session is started. The data is accumulated in the form of flat files on the file system, per the configuration used in this demo. Moving forward, to get some sample data, it is required to run some SELECT queries against the databases on the server. Considering that we now have data inside our session files, let's put it to use. Auditing by nature can grow datasets really fast for highly transactional databases and systems, and we might end up with a bunch of files storing our data on the file system. To consolidate and prepare the data for output, SSMS provides the ability to merge the data files as follows:
Go to File -> Open -> Merge Extended Events files…:
In the Merge Extended Events Files dialog choose Add and select files from the location chosen to store session files on the file system:
Upon file load, the SSMS menu bar is expanded with an additional Extended Events drop-down menu that provides the Export option
Power BI can load any of the export options; for this demo, we are using the export to a CSV file:
Import Extended events data into Power BI
Power BI, even as freeware, is powerful enough at data processing to create understandable and maintainable reports for modern business needs. As a comprehensive tool, it provides the ability to import data from a wide range of data source types, including the CSV file which we've created from the SQL auditing source.
Import audit data using the Get data option from the main application ribbon in Power BI desktop application as follows:
In the Get Data dialog select File tab and choose Text/CSV:
The data can be loaded as-is, or transformed prior to the load from the data preview dialog
Even though we've chosen a subset of global fields in the Extended Events session configuration, there are still several system-related columns in our data set; to remove the irrelevant data, continue by choosing the Transform data button
In the transform data working pane, select all the irrelevant columns and from the context menu choose Remove columns:
After the data set is transformed to match the expectations and configuration setup continue with Close & Apply option from the ribbon:
Upon application, the data is ready for visualization, with numerous options to filter and present data in Power BI. In the below screenshot, we've created a graph that shows the number of database accesses from the various applications:
To learn more about Power BI capabilities and reporting visualization options, you can explore the Get started creating in the Power BI service article
Extracting audit data using third-party software
ApexSQL Audit is a comprehensive third-party software for SQL auditing that is focused on ease of configuration, use, and making compliance with many auditing regulations easy. As a solution, it provides functionalities for a wide range of scenarios during the auditing process, a tamper-evident central repository database for data keeping, automated reporting, and alerting features.
Configuring auditing policy
ApexSQL Audit is developed for easy setup and configuration of auditing policies through an interactive user interface, for databases and SQL Server instances alike, regardless of the edition or version. Let us dive deeper into how to configure and put an auditing policy into motion using the ApexSQL Audit interface:
Run ApexSQL Audit and go to the Configure tab
If not already done, add SQL Server to the list
Add a database for auditing via the Add database button in the server pane. Select the database(s) from the databases list and choose Add
Select the database and choose desired SQL operations for auditing
Upon Apply, the auditing policy will be configured, and the data collected per defined configuration which can be examined in the summary configuration pane:
Creating report template
Now that the configuration is applied and data collection is active, the audit reports can be easily generated into a preview grid, exported to a file, or retrieved via API calls. To create a meaningful SQL audit report, the Reports feature provides the ability to filter the data output using a subset of options to filter by event data properties, including the database name, logins, applications, time filters, operations, and more:
Reporting in the ApexSQL Audit provides a wide range of options and the ability to create, manage and automate data output from the SQL audit trail, as it is detailed in the Create report definitions, output and schedule article. Report data output is based on the filter conditions used; besides the ability to create reports from a pre-defined report template list, data output can be filtered per high-granularity custom filters that are available in the tool.
Importing audit data into Power BI using API call
Once the reporting template and filter conditions are configured, the data output can be consumed by Power BI using an API call. Before we get into the detailed workflow, let's examine the specifics and prerequisites of setting up and using the API web server in ApexSQL Audit, as described in Using API to generate SQL auditing reports.
As the web API server is up and running, we can utilize the Reports option to get raw SQL audit data via the API call, as follows:
Go to Reports tab, highlight the report template and choose API call from the Generate drop-down list:
Copy the API call string in the Report API call pop-up:
Open the Power BI console, choose Get data and in the opened dialog continue with the Web option from the Other tab:
In the From Web dialog, paste the API call string in the URL field and choose OK:
To access the SQL audit report, it is required to authenticate using Windows credentials. Due to the application-level security features in ApexSQL Audit, choose an account that has access to the auditing data trail:
The Connect button will run the API call and load the data into the Power Query Editor. From here, open the Events list to convert it to a table in the next step:
Continue with To Table option from the Convert tab of the Power Query Editor’s main ribbon menu:
Inside the To Table dialog, use the default values for delimiter and extra columns:
Use the header columns button to expand the columns from the table controls:
Optionally, change the data type for the date column so you can create time-lined visualizations:
Complete data loading by choosing the Close and Apply button in the main ribbon menu:
The data is loaded and ready to be consumed for data analytics, visualization of data access trends, and a wide range of information that can be reconstructed from the audit data. Below is an example of the number of operations per operation type and per login from the SQL audit data report:
Conclusion
Extra data visibility and consumption is the number one priority for data-driven businesses; however, the road to consumable, easily understood, and traceable data is challenging without an effective approach to collecting and preparing data samples. With that in mind, native SQL auditing techniques are hard to implement efficiently in large-scale environments from the configuration, use, security, and maintainability perspectives. While native auditing is suitable, to a degree, for small and low-transactional systems and databases, ApexSQL Audit is a considerable solution for both big and small, low and high transactional systems, as one can easily configure auditing policies and use pre-defined, regulation-oriented templates to collect and report data quickly and easily.
A text file is a document that contains information in plain text. Any text editor and most popular web browsers are suitable for viewing or editing text file formats. Also, all smartphones are equipped with suitable applications that can open and edit text files. As such, text file formats are among the most convenient file formats for sharing information between users. SQL Server users, among other things, can export data in text file format, which will be explained further in this article.
Exporting SQL Server data to a text file format will be explained using the following two applications:
SQL Server Import and Export Wizard – the Wizard that allows users to copy data from a source to a destination
ApexSQL Pump – the standalone application that allows the users to import and export SQL Server data in various file formats
Export SQL Server data by using the SQL Server Import and Export Wizard
The SQL Server Import and Export Wizard allows users to copy data from and to data sources. This Wizard can be opened in two ways: through SQL Server Management Studio, or as a stand-alone application through the Start menu. This article will explain how to export data with the SQL Server Import and Export Wizard opened through SQL Server Management Studio.
When SQL Server Management Studio is opened, establish a connection to the desired SQL Server. In the Object Explorer panel, right-click on the database from which the data should be exported. Select the Tasks option to open the submenu, in which the Export data action should be chosen:
The SQL Server Import and Export Wizard will appear, with the Welcome to SQL Server Import and Export Wizard step as the first step of exporting SQL Server data. Information about what this wizard can do is contained in this step. By clicking on the Next button, the wizard will move to the next step:
The Choose a Data Source step is next; as the name suggests, the source from which the data will be copied is selected here. To export data from SQL Server, SQL Server Native Client needs to be selected from the Data source drop-down list. Based on the selected data source, the section below that drop-down list will change automatically. For the SQL Server Native Client data source type, it is necessary to set the SQL Server, the type of Authentication, and the Database from which the data will be exported. For demonstration purposes, the Phonebook database located on a SQL Server 2019 instance is chosen in this article. When all is selected, click on the Next button to proceed further:
The Choose a Destination step is presented next in the SQL Server Import and Export Wizard. From the Destination drop-down list, the Flat File Destination option should be selected. The Flat File Destination writes data to a text file:
In the section below the Destination drop-down list, the options related to selecting and specifying the file properties, as well as the file format, need to be configured. In the File name text box, the path and file name of the flat file should be entered, or they can be browsed for with a click on the Browse button:
When the file location is defined in the File name text box, the locale that provides language-specific information for sorting and for date and time formats needs to be specified from the Locale drop-down list. Checking or unchecking the Unicode checkbox specifies whether the file uses the Unicode character set. If the file uses Unicode, the code page cannot be specified; otherwise, the code page for non-Unicode text can be chosen from the Code page drop-down list:
In the Format section of the Choose a Destination step, whether the file uses delimited, fixed-width, or ragged-right formatting can be set:
When all of the previously explained options are set, click on the Next button to proceed to the next step:
In the Specify Table Copy or Query step, either the Copy data from one or more tables or views or the Write a query to specify the data to transfer option can be chosen. The Copy data from one or more tables or views option will execute the process within the wizard, so it is selected for this article. By clicking on the Next button, the wizard is taken to the next step of the export SQL Server data process:
The Configure Flat File Destination is the step in which the source table or view to export SQL Server data from is chosen. In this example, the dbo.Phone table is selected for exporting SQL Server data. When a table or view is selected, the characters to use as row and column delimiters are set in the Specify the characters that delimit the destination file section. After configuring this step, click on the Next button to proceed:
The Save and Run Package is the next step in the export SQL Server data process. Two options are available in this step:
Run immediately – if this option is selected, the import or export of SQL Server data will run immediately upon finishing the wizard; this option is checked by default
Save SSIS Package – this option offers to save the setting as an SSIS package, which optionally can be customized and run again later
The goal of the example in this article is to export SQL Server data, so the Run immediately option should stay checked in this step. Clicking the Next button will take the process further:
The last step in the export SQL Server data process through this wizard is the Complete the Wizard step, in which the choices that were made in the wizard can be verified. If all choices are as intended, clicking on the Finish button will finalize the configuration to export SQL Server data:
When the configuration part is finished, the last step in the process of exporting data is the Summary step, with information on whether the process finished successfully or not. With a click on the Close button, the SQL Server Import and Export Wizard will be closed:
After the process is finished, the exported SQL Server data will look like this:
Export SQL Server data in text file format by using ApexSQL Pump
ApexSQL Pump is a standalone application with the possibility to import and export SQL Server data in different file formats, and the text file format is one of them.
On application startup, the New project window will be shown, where in the Data Source tab the SQL Server instance, the type of Authentication, and the Database need to be set. As in the previous example, the SQL Server 2019 instance and the Phonebook database will be used. When this is set in the Data Source tab, click on the Next button:
The view will go to the Action tab, where the Export action should be selected. Click on the OK button to switch the view to the main window of ApexSQL Pump:
The main window will show the list of all tables and views from the selected database. The application offers the capability to select multiple tables or views to export SQL Server data from, and for this article, the dbo.Phone table will be picked. When the desired table is checked in the main grid, the next step is to choose the file format in which that table will be exported. That is done by clicking on the Manage button in the Home tab of the main window:
The Manage export window will be opened. Click on the Text button to select the text file format for export:
The Settings button will appear to the right of the selected file format. Click on it to open the Text options window:
In the Text options window, the options related to the exported text file can be set, and they are divided into three sections:
Tab – this section contains options related to the header of the exported table. In this section, it is possible to set whether the exported table will have a column header or not, and how the columns in the header will be separated (with tabs or with spaces)
Grid – in this section the options to show data as grid can be set, and if grid is enabled a column and row separator can be customized
Horizontal text alignment – this section is related to text alignment and maximum column length, if some data value exceeds this length it will be truncated
When the formatting options are set, click the OK button to bring the view back to the Manage export window. The Rows tab can be configured next. This is where the number and scope of rows to export can be specified:
For additional operations and advanced usage, the Additional scripts tab, where SQL scripts for manipulating the SQL Server data can be defined, can be used. The script that runs before the export SQL Server data process is the Pre-processing script, and the script that runs after the export process is the Post-processing script. When the export configuration is all set, click the OK button to confirm it and return to the main window:
To proceed with the export SQL Server data process, click on the Process button in the Home tab:
The Job summary window will be shown, containing information in the Summary section about how many tables or views and columns are ready to be exported, and information in the Warnings section about mistakes or deviations which need to be fixed before the export SQL Server data process is started. If there are no warnings to fix, and after reviewing the summary information, the process can be started by clicking on the Export button:
The Browse For Folder window will appear, in which the location to save the exported file to is chosen. A click on the OK button will close that window, and the SQL Server data will be exported:
The exported file will have the same name as the table, in this case dbo.Phone, and it will look like this:
ApexSQL Pump allows easy export of data from multiple tables and views to text file formats at the same time. Besides text file formats, ApexSQL Pump can export SQL Server data to 12 more file formats and import data to SQL Server from 11 file formats.
Although ApexSQL Audit provides high versatility for SQL auditing, it lacks internal functionalities for data repository redundancy or disaster recovery. There is the capability to archive the central repository, but that would require further manual manipulation to back up those archives and create a disaster recovery plan, and, either way, the active repository remains unattended. For disaster recovery, the application has to rely on the native SQL Server backup method.
The problem starts with the idea that scheduled backups should focus on transaction logs, which has the advantage of enabling restoration to a point in time and, naturally, prevents multiplying database backups. This requires the repository database to be set to the Full recovery model, yet it is always created with the Simple model by the SQL auditing application. The Simple model has the advantage of minimizing the transaction log and preserving storage space (all operations against the repository database are already logged inside it) but, as a consequence, transaction logs cannot be used for backups.
In this article, we will explain how to resolve the SQL audit data repository database recovery model, set it to Full automatically, and set up scheduled backups.
Requirements:
SQL Server Agent active on the SQL Server where the central repository is hosted
SQL audit data central repository creation
The central repository database (named "ApexSQLCrd") is initially created automatically during the ApexSQL Audit main application installation. As mentioned in the introduction, it is created with the Simple recovery model. One might assume that we can simply switch the database to the Full recovery model afterward and build a database disaster recovery solution based on incremental transaction log backups, but this will not work. The application has a built-in archiving scheduler which triggers archiving of the repository data based on an elapsed period or a database size limit:
More information about the central repository database archiving can be found in the Archiving the Central repository database – ApexSQL Audit article.
This process will rename the database, keep it as a data source, and, in the end, initiate a new, blank database with the original name ("ApexSQLCrd"). After this process, the created disaster recovery solution would stop working, as the new database is again created with the Simple recovery model.
Setting the Full recovery model
Since the central repository database is always created with the Simple recovery model, we would need to automate the switching upon database creation. We can easily achieve that using the SQL Agent with a SQL Job and the alert mechanism that comes with the agent.
Open SSMS and from the Object Explorer find the Jobs node under the SQL Server Agent node. Right-click on the Jobs node and from the context menu choose the New Job… option to invoke the New Job window. In the General tab, enter the recognizable job name, like “Convert DB” seen in the screenshot, and choose the account as the job owner:
The account will need to have enough privileges to perform database properties changes, so it is safe to use the built-in “sa” account.
Switch to the Steps tab and start with the new step creation:
In the General tab of the New Job Step window, set the following parameters:
Enter the step name
Choose the Transact SQL Script (T-SQL) action type (it is selected by default)
In the command field, insert the following command: exec sp_msforeachdb 'alter database ApexSQLCrd set recovery full' (the command will convert the "ApexSQLCrd" database to the Full model)
Confirm with OK
The rest of the parameters can be left as they are, which includes the parameters in the Advanced tab also.
To enable transaction log backups, it is necessary to create an initial, full database backup. This can also be accomplished within the same job. So, with the previous step configured, the view will return to the New Job window, and from there we need to add another step. The step parameters are:
Enter the step name
Choose the Transact SQL Script (T-SQL) action type again
In the Command field enter the following command:
BACKUP DATABASE [ApexSQLCrd]
TO DISK = N'C:\ApexSQLCrd.bak'
WITH NOFORMAT, NOINIT, NAME = N'ApexSQLCrd-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
This is a native SQL Server backup command which only requires additional customization based on the backup location
Confirm with OK
Created steps will be listed in the Job step list for review and the job configuration can be confirmed with OK:
Creating the SQL Server Agent alert
We created the job and defined what should be executed; now it is necessary to automate that execution on database creation, and this is done through SQL Server Agent alerts.
From the Object Explorer find the Alerts node, right-click on it and choose the New Alert… option to invoke the New Alert configuration window:
In the General tab, set the following parameters:
Alert name (e.g. Database created)
For the Database name, choose the “master” database (this targets the monitoring activities on the “master” database which is used during database creation)
The Severity field should be set to 010 – Information
The option Raise alert when message contains should be enabled and
The Message text field should be populated with the text: Starting up database 'ApexSQLCrd (Note that the message does not end with an apostrophe as the rest of the name is dynamically created, with a timestamp)
The alert will be triggered when an information-type message is written to the SQL Server log with text that matches the Message text criteria, and when a database is created the message will contain the string that was entered:
What is left for the alert configuration is to switch to the Response tab, tick the Execute job option and from the dropdown list, pick the name of the job that was created in the previous step:
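For reference, the same alert with its job response could also be created with a short script; a sketch using the msdb procedures and the names from this example (note the doubled apostrophe that escapes the one inside the message text):
EXEC msdb.dbo.sp_add_alert
    @name = N'Database created',
    @severity = 10,             -- matches the 010 - Information severity
    @database_name = N'master',
    @event_description_keyword = N'Starting up database ''ApexSQLCrd',
    @job_name = N'Convert DB';  -- the job created in the previous step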
Optionally, notification on alert can be enabled to send an email containing the message that a new SQL audit data repository database is created, but that option is also available from within the ApexSQL Audit:
More information about notification email configuration can be found in the How to create a custom alert with an email notification article.
Automating transaction log backups
To finish with this SQL audit database disaster recovery concept, we just need to set the scheduled backup job. This is also configured through the SQL Server Agent.
Let’s create another job and name it, for example, Transaction log backup:
We should create one step with the Transact SQL Script (T-SQL) action type for this job to execute the following command:
BACKUP LOG [ApexSQLCrd]
TO DISK = N'C:\ApexSQLCrd.bak'
WITH NOFORMAT, NOINIT, NAME = N'ApexSQLCrd-Transaction Log Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
Make sure that the target folder is the same as the initial, the full backup folder:
When the job step settings are confirmed and the view returns to the New Job window, switch to the Schedules tab and add New schedule:
Within the New Job Schedule configuration window, we just need to name the schedule item and customize the frequency for the backup execution, for example, to execute the backup every day at midnight:
The confirmed configuration will return to the New Job configuration window and the schedule item will be listed there:
Conclusion
The presented concept can be used as a solution to ensure SQL audit data recovery in case of an unexpected SQL Server failure or data corruption. The solution provides an optimal method for SQL data recovery, with the possibility to restore data to a specific point in time.
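For illustration, a point-in-time restore from the backup sets created above could look like the following sketch, where the file numbers and the timestamp are placeholder assumptions (since the full and log backups are appended to the same file, FILE selects the backup set):
RESTORE DATABASE [ApexSQLCrd]
FROM DISK = N'C:\ApexSQLCrd.bak'
WITH FILE = 1, NORECOVERY, REPLACE;
RESTORE LOG [ApexSQLCrd]
FROM DISK = N'C:\ApexSQLCrd.bak'
WITH FILE = 2, STOPAT = N'2021-03-01 12:00:00', RECOVERY;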
Problem
Before we show how to decrypt stored procedures, we need to give some background about this concept. When stored procedures are built in SQL Server, their body text is reachable by everyone who has the required permissions to access the data. Because of that, it's simple to expose the underlying content of created stored procedures and analyze that content through SQL Server Management Studio, Windows PowerShell, or any commercial tool. As a result of this data transparency, a SQL database can become vulnerable and be compromised by any type of cyber-criminal activity. This is where encryption comes into place.
Encryption
One way to protect the data or intellectual property in a company or personal SQL database from possible attacks is to encrypt the stored procedures, views, functions, or triggers. To apply encryption to the mentioned SQL objects, use the WITH ENCRYPTION option; it will hide the content of the desired procedures, views, functions, or triggers from discovery. This is an example of a query that can be run in SSMS to encrypt a stored procedure:
CREATE PROCEDURE dbo.TestEncrypt
WITH ENCRYPTION
AS
SELECT name FROM dbo.test
GO
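Once a module is created this way, its definition is hidden from the standard metadata, which can be quickly verified; both of the following queries return NULL for an encrypted module:
SELECT OBJECT_DEFINITION(OBJECT_ID(N'dbo.TestEncrypt'));
SELECT definition FROM sys.sql_modules WHERE object_id = OBJECT_ID(N'dbo.TestEncrypt');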
Decryption
The procedure to decrypt a stored procedure is not that difficult. A privileged user who can access system tables over the dedicated administrator connection (DAC) to the SQL Server can see database objects and decrypt them if needed, using SQL Server Management Studio or any type of application that can connect to the server and is capable of retrieving the procedure.
The first method to decrypt a stored procedure, view, function, or trigger is to use a dedicated administrator connection with a sysadmin account. The connection can be established through SSMS by using the ADMIN: prefix with the desired server, for example ADMIN:SQLInstServer2019:
By default, the connection is only allowed from the SQL Server host, and if the remote admin connection is not enabled with sp_configure, an error similar to "A network-related or instance-specific error occurred while establishing a connection to the SQL Server" will occur.
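Remote DAC connections can be enabled with a short server-level configuration change (sysadmin rights are required):
EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;
GO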
If there is a difficulty to connect and use the dedicated administration connection please visit this article and follow the steps in it: SQL Server Dedicated Admin Connection (DAC) – how to enable, connect and use.
Note: The error will be shown and the SQL Server will be unreachable if the user is not using TCP/IP for the dedicated admin connection (DAC) (which was explained in the article)
A simple decryption method can be described in a few steps: while connected via DAC, obtain the encrypted value describing the stored procedure from sys.sysobjvalues; then, obtain the encrypted value of a blank procedure of the same length; finally, take the plaintext statement of that blank procedure and apply an XOR operation between all three values.
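The following is a minimal, not production-ready sketch of that XOR step. It assumes the three values were already collected into the variables declared below: the two encrypted image values over a DAC connection from sys.sysobjvalues, and the known plaintext of the blank procedure:
-- populate these first; over a DAC connection, e.g.:
-- SELECT imageval FROM sys.sysobjvalues WHERE objid = OBJECT_ID(N'dbo.TestEncrypt') AND valclass = 1
DECLARE @encTarget varbinary(max), @encBlank varbinary(max), @plainBlank nvarchar(max);
-- cast the encrypted image values to nvarchar so they can be walked character by character
DECLARE @encT nvarchar(max) = CAST(@encTarget AS nvarchar(max));
DECLARE @encB nvarchar(max) = CAST(@encBlank AS nvarchar(max));
DECLARE @decrypted nvarchar(max) = N'', @i int = 1;
WHILE @i <= DATALENGTH(@encT) / 2
BEGIN
    -- XOR the target ciphertext, the blank plaintext, and the blank ciphertext per character
    SET @decrypted += NCHAR(
        UNICODE(SUBSTRING(@encT, @i, 1)) ^
        UNICODE(SUBSTRING(@plainBlank, @i, 1)) ^
        UNICODE(SUBSTRING(@encB, @i, 1)));
    SET @i += 1;
END;
SELECT @decrypted AS [Decrypted definition];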
The second and easier way to decrypt a stored procedure, view, function, or trigger in a desired SQL database is with ApexSQL Decrypt, a SQL decryption tool that can integrate into SSMS or work as a standalone application. The tool can be downloaded from this link, and the instructions on how to install and integrate the ApexSQL application into SSMS or Visual Studio can be found here.
When the tool is installed and integrated into SQL Server Management Studio, open SSMS, navigate to the SQL database and the procedure that needs to be decrypted, right-click on it, and click the Decrypt object option:
When the process is done, a window will pop up in SSMS with a message like the one in the picture below:
Now, any user can read this object’s definition in plain text.
Both of the explained methods, executed via SSMS, are fine if only one object is decrypted at a time.
However, ApexSQL Decrypt has the advantage of being capable of decrypting stored procedures, views, functions, and triggers on multiple SQL databases at the same time. To do that, start the ApexSQL Decrypt application standalone and establish a connection to the desired SQL Server. In this case, we are connecting to SQL Server 2019:
Click the Connect button, and all loaded SQL databases from that server will be shown in the main grid:
To avoid showing already decrypted objects, on the right side in the main grid under the Object filter panel uncheck the Unencrypted option and expand the desired SQL databases. Select the encrypted objects with the checkbox next to their name:
Note: If there are no encrypted objects in a SQL database, the message "No objects found" will be shown under that SQL database:
After the selection of SQL objects for decryption is made, click the Decrypt button in the Home tab to start the Decryption wizard:
The next step in the Decryption wizard window is to choose the action, which will either create the decryption script or directly decrypt the objects in the selected SQL databases. For this example, we will apply decryption directly to the SQL databases and therefore choose the Decrypt now action. There are additional options to set the saving location for the decryption script and to save a backup of those SQL databases before the decryption is applied:
The last step is the Action plan, where the list of SQL objects for decryption is shown, along with the order in which decryption will be run against them. The list can also be sorted by object type:
When everything is set, click the Decrypt button to start the process. When the process is completed the Results window will be shown. In this case, decryption is done successfully:
Conclusion
SQL Server database encryption is important for companies and users who want to protect their data from unwanted access. However, if there is a need to decrypt stored procedures, views, functions, or triggers on multiple SQL databases in a few steps, ApexSQL Decrypt can be used, and it will finish the job in a couple of seconds or minutes, depending on how large the SQL databases and encrypted objects are.
This article will explain how to export SQL Server data to the CSV file type. CSV represents a file format where plain text values are separated by a delimiter, usually the comma symbol (Comma-Separated Values). It is one of the most common format types for any spreadsheet program. CSV files are easy to organize and edit, and can be used by many different business software programs for data analysis; large amounts of data can be better organized with them.
There are several ways to export data from SQL Server to the CSV file type, and this article will explain two methods for how this can be done. The export of SQL Server data to the CSV file type can be done using the following tools:
SQL Server Import and Export Wizard
ApexSQL Pump
Export SQL Server data to CSV by using the SQL Server Import and Export Wizard
The SQL Server Import and Export Wizard is a tool that makes it easier to export and import data to and from SQL Server. There are two ways in which this tool can be opened:
Through SQL Server Management Studio
Through the Start menu
The export SQL Server data example in this article will be processed by starting the SQL Server Import and Export Wizard from the SQL Server Management Studio. When SQL Server Management Studio is opened and connected to the desired SQL Server instance, in the Object Explorer select a database and right-click on it to show the context menu. From the menu choose the Tasks option to expand it and from its submenu select the Export data option:
The SQL Server Import and Export Wizard window will appear with the Welcome to SQL Server Import and Export Wizard message presented when this tool is started. This is the first step to export SQL Server data through this wizard, and it contains information about what this wizard can do. Click the Next button to start the process of exporting data:
The Choose a Data Source step will appear next. From the Data source drop-down box, the data source type should be selected, and the rest of the control layout will change accordingly. If data needs to be exported from SQL Server, the SQL Server Native Client item should be selected. The SQL Server instance and database that were selected in the Object Explorer will be displayed automatically in the appropriate fields, and by clicking the Next button, the SQL Server Import and Export Wizard will go to the next step of exporting SQL Server data:
The next step of exporting data is the Choose a Destination step, in which the location to which the data from SQL Server will be copied can be selected. If data from SQL Server is exported to text or CSV file types, the Flat File Destination should be chosen from the Destination drop-down box. The Flat File Destination option writes data to a text or CSV file. After the Flat File Destination is picked, click on the Browse button in the section below, and select or create the file in which the SQL Server data will be exported. For this article, the Export.csv file will be created. In the same section, region-based formatting, like date and time, can be customized using the Locale drop-down list. From the Code page drop-down list, the character coding table can be set for non-Unicode text. The Format drop-down box selects the formatting for the columns that will be exported to the destination file. When this is set, click the Next button:
The Specify Table Copy or Query is the next step of the export SQL Server data wizard, where either the Copy data from one or more tables or views or the Write a query to specify the data to transfer option can be chosen. The Copy data from one or more tables or views option will be picked for this purpose. Click the Next button to proceed to the next step of this wizard:
The Configure Flat File Destination is the next step that appears in this wizard. In this step, the table or view for exporting data can be selected from the Source table or view drop-down list. For this article, the dbo.Employee table will be used. To see the data that will be exported to the CSV file, click on the Preview button to open the Preview Data window:
In this step, specifying the characters that separate the data (delimiters) in the destination file can be done by using:
Row delimiter dropbox – where it is possible to select what each row in a destination file will end with
Column delimiter dropbox – where the delimiter which separates columns of data can be chosen for each row
When all is checked and set, click on the Next button, and it will change the view to the Save and Run Package step. In this step, it is possible to choose whether the data from a SQL Server database will be exported to a flat file immediately, by checking the Run immediately option, or saved as an SSIS package, by checking the Save SSIS Package check box. The Run immediately option should be chosen to continue with the export. The SQL Server Import and Export Wizard offers the possibility to finish the whole process by clicking the Finish button or to proceed to the review step by clicking the Next button:
The Complete the Wizard is the last step, in which the decisions that were made in the wizard can be verified. When all is verified, click the Finish button to complete the process:
After that, the last step will be presented which will show progress and status information about the export SQL Server data process. The screenshot below demonstrates that the process was finished successfully:
And the exported SQL Server data in the CSV file type in Excel and Notepad will look like this:
Export SQL Server data to CSV by using the ApexSQL Pump
ApexSQL Pump is a standalone tool that provides easy import and export of SQL Server data in various file types. It is a very easy, wizard-based application, and the CSV file type is one of the 13 file types to which SQL Server data can be exported using ApexSQL Pump. ApexSQL Pump makes data exporting easier and more customizable, with fewer steps. In addition, ApexSQL Pump can export the data from more than one table or view at once.
At the opening of ApexSQL Pump, the New project window will be presented, with the Data source tab shown by default. This is where the connection to the desired SQL Server and database should be defined. Select the instance of SQL Server, the type of Authentication, and the Database to insert data into or extract data from. When this is set, click the Next button:
The view will switch to the Action tab, where we can choose between the Export and Import actions. When the right action is selected, in this case the Export option, click on the OK button to proceed further:
The main window of ApexSQL Pump will be shown, where all tables and views from the desired database will be listed in the main grid. The process to export SQL Server data starts by selecting the table from which we want to export data to the CSV file type. By ticking the check box in front of the name of a column in the main grid, a custom set of columns can be chosen for data extraction. For this article, the dbo.Employee table will be chosen. When the table is selected, the Preview pane on the left can show what data will be exported and how:
When the columns for exporting SQL Server data are selected, the next step is to click on the Manage button in the Home tab to customize the export parameters:
The Manage export window will appear. In the Formats tab, all file types in which the data from SQL Server can be exported are presented. Select the file type to which data will be exported by clicking on its name. The CSV file type will be chosen in this case:
Next to the selected file type the Settings button will appear:
By clicking on that button, the Options window related to the selected file type will open, in which additional options can be set. In the CSV Options dialog, the following options can be set:
Quote strings – the character that will be used to quote data values in the export file
Force quote – when this option is checked, every data value will be quoted with the quote string
Show table header – column names will be shown in the exported CSV file as a header if this option is checked
Separator – in this section, the Tab, Space, Comma, or Custom separator used to separate fields can be set
Data Formats – this section contains options to format the data in a specific way in the exported file, like date and time formatting, Boolean type values, etc.
When the customization is finished, click on the OK button to close the dialog:
In the Rows tab of the Manage export window the number and scope of rows to export can be defined:
The Additional scripts tab is the last tab in the Manage export window. Here, SQL scripts for manipulating the data in the SQL database can be used. For example, if the data in one of the selected tables has dynamic data masking applied, with this method we could disable the masking before exporting data and enable it again after the export; the Pre-processing script runs before the export of SQL Server data, and the Post-processing script runs after the export. Click the OK button to proceed further:
When the data export definition is completely set, click on the Process button in the Home tab:
The Job summary window will appear, with the Summary information about how many tables or views and columns will be processed, and the Warnings information which may address some export configuration inconsistencies or mistakes. After reviewing this info, click on the Export button to start the export SQL Server data process:
The Browse for folder dialog will pop up, where the destination path for saving the exported file should be set. When the destination path is selected, a click on the OK button will start the process of exporting data:
After the export SQL Server data process is finished, the View results window will be shown. That window will contain information about the previously finished exporting process. A click on the Close button will send the view back to the main grid of ApexSQL Pump:
After finishing the exporting process, the CSV file with exported data will look like this:
ApexSQL Pump is an easy, wizard-type tool that will save time when organizing and exporting SQL Server data from multiple tables to the CSV file type. In addition to CSV export, with ApexSQL Pump, SQL Server data can be exported to 12 more file types.
Challenge
Database development in a multiuser environment has its challenges on a daily level even without including the database source control. But when the database source control is included, based on the chosen source control system, those challenges can multiply by the number of database developers working on the same database.
Every database developer has its style of writing objects scripts, adding comments for part of the created object scripts, even object script formatting.
One of the many concerns that can be raised in the source controlled environment is how to define database changes properly. Many of the database changes can be recognized as a false positive change, e.g. case changes in the database object names. This type of change, chosen to be committed to source control, will cause the changeset to be larger than it is and will require more time for searching the right version of the object through the history of the committed changesets.
Native source control solutions often provide some freedom in defining database changes, but in the end, all changes made against objects, no matter how small and insignificant (like additional spaces, case differences in names, etc.), will be recognized as a difference when the database and source control versions of an object's script are compared.
More advanced source control users know how to exclude specific objects or even a part of an object's script from the comparison process, but relatively new source control users will find this hard and time-consuming. Even when users learn how to do that, the whole process needs to be repeated for every newly created object.
Using any of the third-party tools for defining database changes will speed up the process, but what is defined as a database change is still set per object, not for the whole database. One more problem is how to standardize what is defined as a database change across a multiuser environment.
The main problem that needs to be solved is how to make defining database changes usable for all users, new and advanced, so that everyone does the same quality of work, database development is eased, and only ''real'' database changes are committed to source control.
Using ApexSQL Source Control, database developers, regardless of their source control knowledge, can specify what will be defined as a database change and what needs to be shared with other database developers by committing it to source control. This article explains the solution for defining database changes using the dedicated development model and a Git repository hosted on Azure DevOps Services.
Solution
ApexSQL Source Control provides a solution to define database changes using the Script options feature for the following natively supported source control systems:
Git
Azure DevOps Server/Services
Subversion
Mercurial
Perforce
Note: No matter which source control system is chosen, the process of defining database changes is the same
Environment settings/Multiuser environment
To start working with the Script options feature in ApexSQL Source Control, a database must first be linked to source control. Right-click the database in Object Explorer and click the Link a database to source control context menu command to open the Source control setup window:
When the Source control setup window is opened, choose the Native option and, from the Source control system drop-down list, specify the desired source control system:
Note: Be sure to choose the correct source control system, no matter where the repository is hosted. To use a Git repository hosted on Azure DevOps, Git must be chosen from the Source control system drop-down list
The Development model tab presents two options, Dedicated and Shared:
Note: There are no differences in the workflow of the Script options, regardless of the chosen development model
In this solution, the Git source control system hosted on Azure DevOps Services will be chosen, along with the dedicated development model.
More about the Object filtering tab can be found in the Object filtering article.
The next tab in the linking process is the Script options tab:
Under the Script options tab, all available options for defining database changes are divided into two parts: the Ignore comparison options and the Synchronization options. This solution will include only the Ignore comparison script options.
More about each listed option can be found in the Script comparison and synchronization options in ApexSQL Source Control article.
Checking/unchecking any of the listed script options will help to define database changes. By default, some of the listed script options are already checked (Case, Column order, etc.). The checked Ignore comparison options will be applied to the comparison process between the linked database and source control after the initial commit is done.
Note: The Synchronization options are applied only when applying changes from source control against a linked database
When linking a database to an empty source control repository, the best practice is to leave the default options set. But if a database is being linked to a source control repository that has previously committed changesets, the best practice is to set the Ignore script options right away. This way, when the Action center tab is opened, only the ''real'' differences between the linked database and source control repository will be shown.
Since in this solution a database will be linked to an empty source control repository, everything in the Script options tab will be left at its defaults.
More information about the linking process is available in the How to link a SQL Server database to a source control system article.
After a database is linked to the source control repository in the dedicated development model and the initial commit is performed, every prerequisite is completed and the focus can shift to how to define database changes.
Define database changes
A simple change in one of the database stored procedures, like adding an inline comment, will be recognized as a difference between the linked database and source control repository, and the message in the Action center tab will change, notifying the user who made the change about the changed status:
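For illustration, the change can be as small as this (the procedure name and body here are hypothetical):

ALTER PROCEDURE dbo.uspGetCustomerList
AS
BEGIN
    -- inline comment added during development
    SELECT CustomerID, CompanyName
    FROM dbo.Customer;
END;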
Clicking the message will refresh the Action center tab and show the changed procedure with its script, where the change will be highlighted:
Every change, no matter how insignificant, made against a database will be recognized in the Action center tab as a difference between the linked database and source control repository. For this solution, the changes made against the objects (either in the database or in the source control repository) are:
Adding new empty lines after every written SQL line
Changing column order in a table
Adding a new inline comment
Changing cases in the name of the database object
Each of the above-mentioned changes will be followed by an object status change in the Object Explorer panel, from the ''link'' to the ''edited'' status:
Depending on a database developer's formatting style, each of the above-listed types of change will be recognized as a schema change and shown in the Action center tab as a difference between the linked database and source control repository. However, none of them can affect the execution of the object's script, and committing them to source control will only create one more changeset, nothing more.
To avoid that, go to ApexSQL main menu in SQL Server Management Studio – ApexSQL Source Control menu and choose the Options command:
In the Script options tab, choose the SQL Server connected in the Object Explorer panel where the linked database is stored, and choose that database:
The list of available Ignore comparison script options includes options for the above-mentioned changes, which allow those types of changes to be ignored during the comparison process between the linked database and source control repository. To define database changes and set those types of changes not to be recognized as differences, check the following options:
From the Attributes for script-based objects section – the Comments and White space options:
From the General attributes section – the Case in names option:
From the Table attributes section – the Column order option:
After saving this options set by clicking the OK button in the Options window and refreshing the Action center tab, there will be no differences between the linked database and source control repository at all, since all the made changes are now ignored. The Action center tab will show the following message:
But all changed database objects will still have the edited status in the Object Explorer panel. This way, every database developer can define database changes, choose when they will be recognized as a difference between the linked database and source control repository, and choose when those changes should be committed to the source control repository to be used by the other database developers.
Note: If any other change that is not set to be ignored is made against already changed objects, those changes will be listed in the Action center tab
The process is the same for each of the Ignore comparison options and can be applied against any linked database, regardless of the chosen development model, source control system, or the number of database developers.
Sharing Ignore comparison script options set
After the script options are set and the Save as my defaults button is clicked, the Ignore comparison options set will be saved in the MyDefaults file, by default in the following location: C:\Users\<user_name>\AppData\Local\ApexSQL\ApexSQL Source Control
This options set can be applied to any linked database by clicking the My defaults button in the same tab, after the SQL Server and the desired linked database are chosen. To share this saved options set among all users of the same linked database, copy and send the MyDefaults file to every user to paste it (overwriting the existing one) in the same location on their machines. Every user should then open the Options window, choose the desired linked database under the Script options tab, click the My defaults button, and click the OK button to apply the saved script options set.
More about the saving options set in ApexSQL Source Control can be found in the Default option settings in ApexSQL products article.
With database changes defined like this, all database developers will have the same script options ignored, and no ''false'' changes or changesets will be committed to the source control repository.
Importing data into a SQL database plays an important role when working with SQL Server. There are various techniques and tools that facilitate data entry into a SQL database. This article explains how to import data into a SQL database from an Excel file by using two methods:
SQL Server Management Studio (SQL Server Import and Export data wizard)
ApexSQL Pump
Import data in SQL database via SQL Server Import and Export data wizard
SQL Server Management Studio allows users to import data from different data sources, which will be explained in this chapter.
On SQL Server Management Studio launch, the Connect to Server window will be opened. Choose the Server name and the type of Authentication, provide credentials, and click the Connect button:
When SSMS is connected to the chosen instance of SQL Server, right-click the desired database and choose the Import data option from the Tasks submenu:
That action will open the SQL Server Import and Export Wizard window. The first step of this process gives us a brief overview of what the wizard does. It’s designed to help users import and export data between many popular data formats including databases, spreadsheets, and text files. By clicking on the Next button, the wizard will go to the next step:
The Choose a Data Source step is next. The source from which data will be copied is selected in this step.
From the Data source drop-down list, choose Microsoft Excel as the source. The section below will change to show options for the selected data source. In the Excel connection settings section, choose the path to an Excel file by clicking the Browse button:
The data from the dbo.Export_data Excel file will be used for importing into the desired database:
When the data source is chosen, click on the Next button to continue. The following warning message might be shown:
The operation could not be completed. Additional information: The ‘Microsoft.ACE.OLEDB.12.0’ provider is not registered on the local machine. (System.Data)
This warning message is usually encountered on a 64-bit operating system in a combination with the 32-bit version of SQL Server Management Studio. To bypass this issue, close SSMS, go to the Start menu, find and open the 64-bit version of SQL Server Import and Export Wizard:
As in the 32-bit version, the same welcome step and the exact same steps will be presented. When everything previously mentioned in the article has been set, click the Next button in the Choose a Data Source window.
The Choose a Destination step is next, in which the destination to which the data will be copied is set. From the Destination drop-down list, choose SQL Server Native Client 11.0:
With the destination selected, the section below the Destination drop-down list will change automatically. Here, the Server name, type of Authentication, and the Database need to be set. Click the Next button when all is specified:
Moving on, in the Specify Table Copy or Query step, there are two options available:
Copy data from one or more tables or views
Write a query to specify the data to transfer
The Copy data from one or more tables or views option will be selected in this case. Click the Next button to continue the process of importing data into the SQL database:
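Had the Write a query to specify the data to transfer option been chosen instead, a simple query against the Excel source might look like this (the worksheet name is assumed):

-- Assumed worksheet name; Excel sheets are referenced as [SheetName$]
SELECT * FROM [Export_data$]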
The Select Source Tables and Views step is next in this wizard. It allows users to choose one or more tables and views to copy data from. For this article, the dbo.Export_data table is selected. Furthermore, it allows users to edit mappings by clicking the Edit mappings button, and to see how the imported data will look in the database by clicking the Preview button. Click Next to continue the process of importing data into the SQL database:
In the Save and Run Package step, the user can choose whether to save the SSIS package. The default option is Run immediately. Click on the Next button to continue the importing process:
Last but not least, Complete the Wizard is the final step of the import process. Here, a summary of the choices made throughout the process is listed. Verify that everything is okay and click Finish to end the process:
The execution was successful message is shown with a brief status of the completed operation. Click Close to exit the wizard:
The targeted database will be populated with the newly created table and data as can be seen below:
Import data in SQL database via ApexSQL Pump
ApexSQL Pump is a data pump tool that allows users to easily export data from and import data into a SQL database.
On ApexSQL Pump launch, the New project window will be shown. In the Data source tab, choose the Server, the type of Authentication, and the Database. For this article, the AW2019 database will be used. Click Next to continue:
Under the Action tab, the Import action will be selected. Click OK to connect to the targeted database:
The tables and views from the targeted database will appear in the main window:
To import data in SQL database, click on the Manage button in the Home tab:
The Manage import window will be opened. Click on the Add button in the Format tab to add the external file for import:
The Add import source window will appear, where users can choose between the Database and the File data source for importing. The File option will be checked for the purposes of this article. When it is checked, the File field with the Browse button will appear:
Click the Browse button and select the file whose data will be imported into the SQL database. In this case, the dbo.Excel_data Excel file will be selected. Click Open to continue:
Click OK to close the Add Import source window:
When the import source is added, on the right side of the Manage import window, additional options for an imported data source will appear. In our case, options for the Excel file.
In the Import by section, the option for Rows will be ticked by default, and in the Header section, the First row in range option will be checked. The Preview window shows how the imported data will look in real time. When all is set, click OK to continue:
The view will go back to the main grid, where columns from the chosen table will be mapped to columns from the imported file; in this case, the dbo.Customer table will be used:
The next step is to go into the Settings window on the right side of the main window.
From the Mapping drop-down list, choose the previously selected dbo.Export_data Excel file. Then under the Table section, click on the Create new option, and in the field below type the name of the new table. Last, from the Import mode section, click on the Insert new option:
Moving on, in the main grid, go to the selected table, and from the Column mapping drop-down list choose which columns will be paired, for example, Customer_ID column with ID column in imported file source:
When all columns from the imported source are mapped to the appropriate columns from the selected table, click the Process button in the Home tab:
The job summary window will be opened; clicking the Import button will start the import process:
The View results window with all results will be presented. This window also allows the users to Export results or Create report as shown below:
Back in SSMS, run the Select Top 1000 Rows command from the right-click menu in Object Explorer to verify that the data was imported successfully:
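Alternatively, a quick ad-hoc query confirms the import (the table name is hypothetical, matching whatever was typed in the Create new field earlier):

-- Hypothetical table name from the Create new step
SELECT TOP (1000) * FROM dbo.Imported_data;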
Part of a DBA's and developer's role is to migrate SQL Server databases between hosts or from a local development environment to a live hosted environment. This requires copying all database objects such as the database schema, tables, stored procedures, functions, indexes, etc. Different methods can be used for this, depending on the requirements, such as a database backup restore, object SQL scripts, BACPAC files, etc.
Sometimes a .bak file cannot be used due to differing SQL Server versions. In a situation like this, the database schema and data need to be scripted, and that script then run against the destination database.
Any DBA or developer spends much of the working day staring at SQL code, and very often you are asked to review someone else's code, or perhaps do some performance tuning on a set of stored procedures. To make things worse, reviewing someone else's code that does not follow your or your company's formatting standard can slow your progress, even more so if you need to reformat each SQL script manually.
In this article, we will show how to script SQL Server databases, format the SQL script code to comply with your or your company's formatting standard, and deploy those formatted SQL scripts on multiple SQL Servers by using tools from the ApexSQL Fundamentals Toolkit for SQL Server pack.
ApexSQL Fundamentals Toolkit for SQL Server pack consists of the following tools:
ApexSQL Analyze
ApexSQL Complete
ApexSQL Decrypt
ApexSQL Propagate
ApexSQL Refactor
ApexSQL Script
ApexSQL Search
SQL scripting
ApexSQL Script is a migration tool for SQL Server databases. This tool can create SQL scripts by converting SQL database objects and data into single or multiple deployment scripts. Besides this, ApexSQL Script can create .NET solutions, or executable installers, too. With ApexSQL Script you can script only particular objects from a single or multiple SQL databases at once.
Start ApexSQL Script and open a new project window. Choose a SQL Server and the type of authentication with user credentials, then click the Connect button. From the list, select one or more databases and click the Load button:
In the main grid, select the SQL objects to be scripted. To script the structure of the objects without data, use the Structure view:
Furthermore, to include data in the scripts, use the Data view:
In our example, only the structure of all objects in a chosen database will be scripted.
Note: ApexSQL Script offers a possibility to script specific SQL objects only. See the How to create DDL and DML SQL scripts for specific SQL database objects only and the How to narrow the data sub-set for object scripting articles
Once the SQL objects under the Structure view are selected, click the Script button:
The Script wizard will be initiated, in which the scripting mode and output type can be chosen.
In the first window of the Script wizard, the scripting mode needs to be chosen:
Structure – script structure of the selected objects (DDL scripting)
Data – script data of the selected objects (DML scripting)
Structure and data – a combination of the Structure (DDL) and Data (DML) scripting modes
As mentioned earlier, we are interested in DDL scripting only, so in this step, the Structure scripting mode will be selected. Next, on the Output type window, select the Script folder option:
SQL script – will put all SQL objects and data into a single script file per chosen database
Script Folder – will create a folder with every object scripted in a separate SQL file. All objects will be arranged through subfolders by object type
Source Control – will create a database folder in a source control repository, all objects will be scripted as a separate SQL file and arranged through subfolders by object type
C# solution – the C# solution packed will be created with all scripted SQL objects
Executable installer – all scripted objects will be packaged as an executable installer
In the Dependencies window, dependent objects will be included by default, if there are any. Since we are scripting the whole database, the dependencies grid will probably be empty:
On the Script folder options window, choose the location for the scripts and click the Create button:
Once the scripting process finishes, the results window will appear with information about which objects were scripted:
As can be seen, under the chosen location, all objects are scripted and placed under the appropriate sub-folder:
SQL formatting
As mentioned earlier, imagine that you get a task to review all the above-created scripts in order to do some performance tuning before the scripts are executed on multiple SQL Servers.
Now, before starting to review and optimize the scripts, all of them need to be formatted to apply your or your company's formatting standard.
Well-formatted SQL code is easily readable, and reviewing such code can be much easier and faster than reviewing non-formatted SQL code.
In a situation like this, ApexSQL Refactor can be very handy. ApexSQL Refactor is an add-in that integrates into SSMS and Visual Studio, with over 200 SQL query formatting options for creating SQL formatting styles, and features for applying a style to one or many T-SQL scripts at once.
Start SSMS and from ApexSQL main menu, choose the Format SQL scripts command:
The Format SQL scripts window will appear. Here, navigate to the folder where all the scripted objects are located and select a sub-folder whose scripts are to be formatted. From the Profile drop-down box, choose your formatting profile:
If the Overwrite box is not checked, when the OK button is clicked, a new, formatted script will be created for each script under the selected sub-folder, while the original scripts will stay intact.
A new name for every script will be created by using a template that is set in the File name template box of the Format SQL scripts window:
To format scripts without creating new ones, check the Overwrite box:
Once all scripts are formatted you are ready to review them.
Run multiple SQL scripts on multiple SQL Server databases
Now, when the SQL scripts are formatted and reviewed, the next step is to execute some of the scripts on multiple SQL Servers. For this purpose, ApexSQL Propagate can be used.
Run ApexSQL Propagate and from the Home tab, click the New button:
From the New project window, under the Scripts tab, click the Create new script list button:
The Script list window will appear, here click the Add button:
In the Select script file window, navigate to the scripts to be executed and click the Open button:
All scripts will be listed in the Script list window:
Additionally, the order of scripts for execution can be set by using the Move up/down buttons:
Once the order of the scripts is set, click the Save button. In the Save scripts list as window, set the location and the name for the newly created execution script list and click the Save button:
The newly created list will be loaded in the Scripts tab:
Now, to set on which databases these scripts will be run, go to the Databases tab and click on the New database list button. The Database list window will be shown, and from here click the Add database(s) button:
Under the Add database window, click the Add server(s) button:
The Connection to SQL Server window will appear:
Add as many SQL Servers as needed and check the databases on which the scripts need to be executed:
When the databases are chosen, click the OK button. The list of databases will be loaded to the Databases tab:
Once the scripts and databases are set, click the Load button. The main window will be shown, and on its left side, the Script and Database panels with created lists will be displayed:
If some scripts need to be modified before executing, select a script in the Scripts panel and its code will appear in the Script preview panel, where it can be edited:
Now, click the Execute button from the Home tab to execute the scripts against selected SQL databases:
The results will be displayed in the Results window:
Now, check the database to see if the objects from the scripts are created:
Conclusion
With the ApexSQL Fundamentals Toolkit for SQL Server pack, you can migrate SQL Server databases to multiple SQL Servers simply and easily. This will save you a lot of time and spare you the trouble that can occur during manual migration.
This article will explain how to export data from SQL databases to Excel file format by using two methods:
SQL Server Import and Export Wizard – Microsoft’s native way to copy data from a source to a destination
ApexSQL Pump – a third-party SQL data pump tool for importing and exporting SQL Server data to different file types
In the database world, data plays an important role. Sharing data resources with multiple users is a frequent task that DBAs and developers face in their daily routines. Excel has proven to be one of the simplest data-sharing solutions for most users.
Exporting data from SQL database using SQL Server Import and Export Wizard
SQL Server Management Studio has an option that allows users to export data from SQL databases to different data sources. In this article, exporting data from an SQL database to an Excel file will be explained.
Launch SQL Server Management Studio and connect to the desired SQL Server instance. From Object Explorer, right-click the targeted database, in this case, the sample WideWorldImporters database, and click the Tasks option:
The Tasks submenu will appear, in which we need to click on the Export data command:
The SQL Server Import and Export Wizard window will be opened; clicking the Next button will continue the export process:
The Choose a Data Source step is first. In this step, the data source from which data will be copied should be selected. Click the Data source drop-down list and select the SQL Server Native Client 11.0 item.
The Server name drop-down box will appear, along with the Authentication section and the Database options. Fill out the required fields, choose the SQL database from which data will be copied, and click the Next button:
The Choose a Destination step is next, in which the destination to which data will be copied is selected. From the Destination drop-down list, choose Microsoft Excel. The section under the Destination drop-down list will change according to the selected destination.
In this particular case, the Excel connection settings section is shown. In this section, click the Browse button to choose a path for the Excel file, and from the Excel version drop-down list, choose the Excel version. When everything is selected, click the Next button to proceed further:
“The operation could not be completed” error might be encountered. This issue is usually encountered on a 64-bit operating system in a combination with the 32-bit version of SQL Server Management Studio:
To overcome this error, close SQL Server Management Studio, find the 64-bit version of SQL Server Import and Export Wizard from the Start menu, and run it:
This time, the 64-bit version of the SQL Server Import and Export is opened. Go through the configuration one more time up until the step when it previously failed. Only this time, when we click the Next option, it will go to the next step, Specify Table Copy or Query.
Select Copy data from one or more tables or views option and click the Next button to continue:
The Select Source Tables and Views step is next, in which one or more tables or views are chosen to export data from the SQL database to an Excel file. When the tables and/or views are selected, click the Preview button to see the data that will be exported:
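The same preview can also be reproduced with a quick ad-hoc query against the table used in this example:

SELECT TOP (10) * FROM dbo.Export_data;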
If satisfied with the results, click the Next button to open the Save and Run Package window, where the Run immediately option is selected by default. If there is no need to save the SSIS package, click the Next button:
The Complete the Wizard step is last in the chain. It is a summary of the choices that have been made. Click the Finish button to complete exporting data from the SQL database to an Excel file:
The Excel file with the exported data will look like this:
Exporting data from SQL database using ApexSQL Pump
This chapter will explain how to export data from the SQL database to an Excel file by using ApexSQL Pump.
ApexSQL Pump is a SQL database tool that allows users to:
Export data from SQL database to 13 different file types
Import data from 11 different file types to the SQL database
On the launch of ApexSQL Pump, the New project window will appear. In the Data source tab of the New project window, configure the SQL Server instance, type of Authentication, etc., and choose the Database whose data will be exported to an Excel file. When all is set up, click the Next button:
From the Data source tab, the view will go to the Action tab. Here, users can choose one of the following options:
Export – export data from SQL database to the different file formats
Import – import data from files into the database
For this article, the Export option will be chosen. Click the OK button to close the New project window and continue:
The main window of the ApexSQL Pump with the selected database will be presented, where the tables and the views will be checked by default. Let’s uncheck all, and only select the Export_data table like in the previous example:
When the objects whose data will be exported are selected, click the Manage button in the Home tab:
The Manage export window will be opened. From the Formats tab, users can choose the file type to export data from the SQL database. By clicking on the MS Excel radio button, the Settings button will appear:
Clicking on the Settings button will open the MS Excel options window. In the MS Excel options window, ApexSQL Pump allows users to go through the Basic and Additional tabs to adjust the Excel file format.
In the Basic tab, this SQL data pump tool offers two versions of Excel files for exporting data, depending on the user's needs. The appearance of the exported data can also be set in the Header and Borders sections, as well as whether the data will be exported as Separate files or One file:
In the Additional tab, the font style of rows can be configured under the Data rows section, and the page orientation, margins, and page size can be tuned in the Page settings section. When all is configured, click the OK button to close the MS Excel options window and continue further:
The focus will go back to the Manage export window. Click the OK button to close it as well. When all is set, click the Process button under the Home tab:
The Process window will be opened, with job summary information to inform the user before the exporting process begins. Click the Export button to export data from the SQL database to an Excel file:
The Browse For Folder window will be shown where the path of the exported file can be configured. When all is set, click the OK button to continue:
The View results window will appear when the exporting process is finished. This window contains information about the duration of the export and how many tables and columns were exported. If the results need to be saved, the user can do so with the Export results button, or create a report by clicking the Create report button. Click the Close button to finalize the process:
The exported Excel file can be found at the previously selected path and, when opened, looks like this:
The SELECT statement in SQL is the most used of the DML statements and it’s used for fetching data from tables in SQL Server. This article is intended for DBAs and developers who want to get familiar with all the major components of the SELECT statement and what this statement is all about.
Introduction to SELECT
The syntax of the SELECT statement can be rather complex, but let’s take a brief overview of some of the major keywords used in a SELECT statement, and then we will head over to SQL Server Management Studio to see how the main clauses can be summarized in code.
SELECT statement in SQL
So, SELECT just starts the statement and it’s probably followed by a star (*) AKA “splat”. It basically means retrieve all the columns from a table. If there are multiple tables that we are selecting from, the star will select all columns from all tables e.g. when joining two or more tables.
However, it’s advisable NOT to select everything from tables. “Why?” is a subject on its own. Let’s just say there are significantly more cons than pros that can be found online using simple research. Therefore, try not to select everything, and use specific column names followed by commas that will return only columns based on what’s in the list.
Furthermore, a lot of times when there is no column list, we have a keyword TOP followed by a number (n) after the SELECT statement in SQL that returns top (n) records from a table. This is usually used with the ORDER BY clause because, for example, if we want to grab top ten sales by quantity, those can be some big numbers. So, if we leave it at the default sort order, we’ll get little numbers first. However, if we say order it descending by the quantity that will give us the top ten records by the quantity. We also use DISTINCT in some cases after SELECT that gives us unique values across the select list.
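For instance, grabbing the top ten order rows by quantity could look like this (a minimal sketch against the AdventureWorks2012 sample database used later in the article):

SELECT TOP (10) SalesOrderID
      ,OrderQty
FROM Sales.SalesOrderDetail
ORDER BY OrderQty DESC;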
FROM
The FROM part of the SELECT statement in SQL is simply used to tell SQL Server from which table should data be fetched. A JOIN is used when we want to pull data from multiple tables. There are three different types of joins:
Inner join – this is the default, used if no type is specified. When joining two tables on a common column, this type of join simply retrieves data that matches in both tables
Left join – pulls all data from the left-hand table and only the matching data from the right-hand table
Right join – you guessed it. It is the opposite of the left join
WHERE
The WHERE clause acts like a filter on a list of data that is coming back from tables. We can filter single or multiple columns out which will affect the data in the results set.
GROUP BY
The GROUP BY clause is associated with aggregates. If we want to do something like SUM, AVERAGE, MIN, MAX, etc. (those are all aggregate functions), GROUP BY allows us to arrange identical data into groups. In addition to this, we also have the HAVING clause, which is pretty much a WHERE clause for groups. It allows us to apply filters to groups.
Examples
The above is a short overview of the SELECT statement in SQL. Let’s fire up SQL Server Management Studio and see how we can use some of that stuff. All of the following examples use the sample AdventureWorks2012 database.
This first example is as simple as it gets and returns all rows and columns using the (*) from the Product table:
SELECT * FROM Production.Product;
The results set is populated with all the columns from the Product table. The list is rather long, there are 25 columns in total that can be seen if scrolled to the right:
To select specific columns from the table, simply list column names followed by commas:
SELECT Product.ProductID
      ,Product.Name
      ,Product.ProductNumber
      ,Product.MakeFlag
      ,Product.FinishedGoodsFlag
FROM Production.Product;
This time, only a subset of the columns is returned:
We could add a WHERE clause as shown below to select specific columns from a table with a filter:
SELECT Product.ProductID
      ,Product.Name
      ,Product.ProductNumber
      ,Product.MakeFlag
      ,Product.FinishedGoodsFlag
FROM Production.Product
WHERE Product.Name LIKE 'Mountain%';
Notice that we use the LIKE operator in the WHERE clause and therefore have to specify a wildcard character. In this example, the LIKE keyword says find everything that starts with "Mountain" and after that, it can be anything:
Using wildcard patterns can be more useful than testing for string equality, and allows advanced search and text manipulation.
For detailed information about how to use the LIKE operator and wildcard characters, see Overview of the SQL LIKE Operator
Let’s move on to querying data using joins. This allows us to combine data from two or more tables on common columns. Remember, SQL Server performs an INNER JOIN if only the JOIN keyword is specified.
The following query returns all product names and sales order IDs from two tables and joins them on the common ProductID column:
SELECT p.Name
      ,sod.SalesOrderID
FROM Production.Product p
JOIN Sales.SalesOrderDetail sod
    ON p.ProductID = sod.ProductID;
We could also return all the data from both tables by simply using a star. It cannot be seen in the shot below, but the result set is populated with the data from the Product table first followed by the SalesOrderDetail table’s data:
SELECT *
FROM Production.Product p
JOIN Sales.SalesOrderDetail sod
    ON p.ProductID = sod.ProductID;
If we only want to return the data from the Product table, add a “p.” in front of the “splat” as shown below:
SELECT p.*
FROM Production.Product p
JOIN Sales.SalesOrderDetail sod
    ON p.ProductID = sod.ProductID;
These little highlighted things in the query are called aliases. They are useful when we have long table names, making the code more readable and easier to understand. Aliases are also required for databases with schema names such as this sample AdventureWorks2012 database:
For example, consider parsing the following query and see what happens:
SELECT ProductID
      ,Name
FROM Production.Product p
JOIN Sales.SalesOrderDetail sod
    ON p.ProductID = sod.ProductID;
The result set says “Commands completed successfully.”, so it should run just fine, right?
Well, not exactly. If we run the query, we’ll get an error saying “Ambiguous column name ‘ProductID’.” as shown below even though the syntax is correct:
This basically means both tables have a column named ProductID, and SQL Server is confused as to which of the two we are referring to. If we add an alias to specify from which table the column should be fetched, the query will execute just fine:
SELECT p.ProductID -- or sod.ProductID
      ,Name
FROM Production.Product p
JOIN Sales.SalesOrderDetail sod
    ON p.ProductID = sod.ProductID;
Let’s move on and take a look at the SELECT statement in SQL using a left outer join. This type of join retrieves everything from the left-hand table and only the records that match in the right-hand table:
SELECT p.Name
      ,sod.SalesOrderID
FROM Production.Product p
LEFT JOIN Sales.SalesOrderDetail sod
    ON p.ProductID = sod.ProductID
ORDER BY p.Name;
In this example, it joins two tables on the ProductID column and retrieves every product, no matter if it has a sale or not, and preserves the unmatched rows from the left-hand table:
Do we really need an example for the SELECT statement in SQL using a right outer join? We mentioned earlier that it’s the exact opposite. But let’s look at an example anyway:
SELECT st.Name AS Territory
      ,sp.BusinessEntityID
FROM Sales.SalesTerritory st
RIGHT JOIN Sales.SalesPerson sp
    ON st.TerritoryID = sp.TerritoryID;
This time, it joins two tables on the TerritoryID column and retrieves everything from the right-hand table and only the records that match in the left-hand table. All salespersons appear in the result set, no matter if they are assigned a territory or not:
For detailed information about the different types of joins, see SQL Join clause introduction and overview
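Finally, the GROUP BY and HAVING clauses from the overview deserve a quick example too. A minimal sketch against the same sample database that totals order quantities per product and keeps only products with more than 1000 units ordered:

SELECT p.Name
      ,SUM(sod.OrderQty) AS TotalQty
FROM Production.Product p
JOIN Sales.SalesOrderDetail sod
    ON p.ProductID = sod.ProductID
GROUP BY p.Name
HAVING SUM(sod.OrderQty) > 1000
ORDER BY TotalQty DESC;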
Querying data using a third-party extension
It's a known fact that nobody likes to write and read a ton of messy, unformatted text. When it comes to writing complex SQL code, that problem becomes even more pronounced. Querying data and writing SQL code smoothly requires years of practice and experience. Fortunately, there's an easier way to query data using the SELECT statement in SQL Server.
In this section, we’re going to take a look at a third-party tool that can help us with writing SQL code effortlessly. As a part of ApexSQL Fundamentals Toolkit for SQL Server, ApexSQL Complete extension for SSMS and VS is a well-known productivity tool that speeds up writing SQL code by automatically completing SQL statements, filling in SQL code fragments, and more.
Once the SQL code complete extension is installed into one or more hosts, it will suppress Microsoft's native IntelliSense and take over the code completion. Enough talk, let's open up a new query and see it in action. Start a simple SELECT statement against the SQL Server you're connected to. As soon as typing starts, the intelligent SQL code complete shows a hint-list with context-sensitive hints:
The hint list provides objects (name, type, schema) based on the current content of the query. Double-click or press Enter on the highlighted object and it will be inserted as shown below:
USE AdventureWorks2012
GO
SELECT * FROM Production.Product p
Remember how aliases are important in some cases? Auto-complete generates them automatically, by default. If a long table name is selected from the hint list, it will generate an alias using the capital letters from the object’s name:
SELECT * FROM Production.ProductModelProductDescriptionCulture pmpdc
If you don’t need aliases, you can disable them by going to Options > Inserts and unchecking the “Auto-generate aliases” option:
If you need aliases though, here you can also configure the behavior when generating them as follows:
SELECT * FROM Production.Product pro -- Use first three letters for alias
SELECT * FROM Person.Person P        -- Use upper case for alias
SELECT * FROM Person.Person AS p     -- Use AS keyword with alias
Furthermore, you can manually configure global aliases under the Aliases tab of the Options window. Here, you’ll need to specify server, database, object, and alias as shown below:
Global aliases have precedence over automatically generated ones, so if we do another SELECT statement in SQL Server, we’ll get this:
SELECT * FROM Production.Product p
Moving on, to select specific columns from the table, remove the star and the hint list will show all the available columns. From here, simply check only the needed columns and they will be inserted:
Remember how we said earlier that the hint list is context-sensitive? This means that as far as the syntax goes, SQL code complete will list valid suggestions in its hint list. If we continue to type a WHERE clause in the example above, it will list appropriate column names, keywords, etc.:
Auto-complete will not only fill in SQL keywords and names, but it will also help you write complex queries such as JOIN statements or cross-database queries:
For detailed information about how to use the auto-complete feature, see Code completion
Conclusion
In this article, we’ve refreshed our memory on how to query data using SELECT statement in SQL Server. We started with a brief introduction and overview of all major components and then we jumped into SSMS and took a look at how to write everything from a basic SELECT statement in SQL to all the different types of joins.
Then we looked at an extension from the ApexSQL Fundamentals Toolkit for SQL Server, ApexSQL Complete, which reduces typing by automatically completing SQL statements for us. Furthermore, it automatically inserts fully qualified object names and aliases at the server or database level. This tool provides an array of options that make writing SELECT statements in SQL a piece of cake.
SQL databases can contain a lot of data, including a lot of sensitive data that can easily be misused. Masking SQL Server data is therefore of great importance. Microsoft offers the dynamic data masking feature, implemented in SQL Server 2016 and later versions, as one of the solutions for protecting sensitive data in SQL databases.
The dynamic data masking feature allows certain SQL database users to hide the original data while deciding which users will be privileged to see the original data and which will not. It is very important to note that using the Dynamic data masking feature does not permanently change the data in the database, but only hides it.
In the article below, the usage of the dynamic data masking feature will be explained through:
Azure portal
ApexSQL Mask
Mask SQL Server data on the Azure SQL database
How to mask SQL Server data with the dynamic data masking feature on Azure SQL Database will be explained on the AdventureWorksLT sample database. To start using the Dynamic data masking feature on the Azure portal, click on the desired database, and then, in the Security tab, click the Dynamic Data Masking option:
That action will open a new page, where the recommended fields to mask will be presented. Besides those recommended fields, the Azure portal allows adding a mask to any other column in the database by clicking the Add mask button:
The Add masking rule page will be shown, where a user can choose the Schema, Table, and Column whose data will be hidden. When all of that is set, the Azure portal will compose the mask name from the names of the selected Schema, Table, and Column:
The next step is to choose the type of mask from the Masking field format drop-down list. Depending on the data type of the selected column, the Azure portal will offer the following mask types:
Default value (0, xxxx, 01-01-1900)
Credit card value (xxxx-xxxx-xxxx-1234)
Email (aXX@XXXX.com)
Number (random number range)
Custom string (prefix [padding] suffix)
When the masking method is selected, clicking the Add button will create the dynamic data mask:
The newly added dynamic data mask will appear on the Dynamic data masking page; clicking the Save button will apply the mask to the selected column:
To see how the dynamic data masking feature works for Azure SQL databases, a login and a user with data reader privileges will be created in SSMS.
By executing the following script on the master database, the new login and user Test will be created:
CREATE LOGIN Test WITH PASSWORD = 'Password.'
GO
CREATE USER Test FOR LOGIN Test
Then, open a new query against the desired database in which SQL Server data needs to be masked, and execute the following script, which will create the Test user on that database with data reading permissions:
CREATE USER Test FOR LOGIN Test;
ALTER ROLE db_datareader ADD MEMBER [Test];
GO
This action will create a user with limited permissions, who will see only the masked sensitive data.
Further in the article, using the dynamic data masking feature through different masking methods will be explained by comparing original and masked sensitive data.
Masking field formats used to mask SQL Server data
Default value
The Default value type of mask is used for masking columns with different data types. Different characters are used to hide sensitive data in columns with different data types:
xxxx – is used for the string data type (nchar, ntext, nvarchar)
0 – zero value is used for numeric data types (bigint, bit, decimal, int, money, numeric, smallint, smallmoney, tinyint, float, real)
01-01-1900 – is used for date/time data types (time, smalldatetime, datetimeoffset, date)
This dynamic data masking method will be explained on the SalesLT.Product table, where the Color (nvarchar), StandardCost (money), and SellStartDate (datetime) columns will be chosen for hiding sensitive data:
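For reference, the same default mask can also be created with T-SQL instead of the portal; a sketch for one of the three columns:

ALTER TABLE [SalesLT].[Product]
ALTER COLUMN [Color] ADD MASKED WITH (FUNCTION = 'default()');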
When connecting to the Azure SQL database as Admin and executing the next script:
SELECT TOP (5) [ProductID]
      ,[Color]
      ,[StandardCost]
      ,[SellStartDate]
FROM [SalesLT].[Product]
The original data will be presented:
But when connecting to the Azure SQL database as the Test user with data reading permission, the sensitive data will be hidden:
Email
When the user chooses the Email type of dynamic data masking, the first character of the email will be shown, while the rest of the email will be masked, taking the form aXX@XXXX.com.
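In T-SQL terms, this corresponds to the email() masking function; a sketch for the column used in the next example:

ALTER TABLE [SalesLT].[Customer]
ALTER COLUMN [EmailAddress] ADD MASKED WITH (FUNCTION = 'email()');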
For presentation, the Email dynamic data masking type will be used for the SalesLT.Customer table and the EmailAddress column. Connect to the Azure SQL database as Admin and execute the script below. The original sensitive data will be shown:
SELECT TOP (5) [EmailAddress] FROM [SalesLT].[Customer]
With the Email type of dynamic data masking applied, when logged in as the Test user, the masked data will look like this:
Random number
Random number is a masking method where a random number is generated within a certain range that can be selected when creating the mask. The range is chosen in the From and To text boxes. If the same number is entered in both text boxes, the sensitive data will be masked with only that number. For example, this masking method will be used to mask SQL Server data in the ListPrice column from the SalesLT.Product table, with the range from 3 to 99:
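The equivalent T-SQL definition for this rule would be a short sketch like:

ALTER TABLE [SalesLT].[Product]
ALTER COLUMN [ListPrice] ADD MASKED WITH (FUNCTION = 'random(3, 99)');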
By executing the following script in SSMS, the Admin can see the original data:
SELECT TOP (5) [ListPrice] FROM [SalesLT].[Product]
But when logged into SSMS as the Test user, executing the previous script will return random numbers in the range from 3 to 99 instead of the original values:
Custom text
This type of dynamic data masking exposes the first and last characters and puts a custom string in the middle. How many of the first and last characters will be exposed is set in the Exposed prefix and Exposed suffix text boxes, while the custom string is entered in the Padding string box. Data from the CompanyName column of the SalesLT.Customer table will be used for presenting the results of masking and, for the purposes of this example, the first two and last two characters will be exposed with the X-X-X custom string in the middle:
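In T-SQL, this maps to the partial() masking function; a sketch of the rule configured above:

ALTER TABLE [SalesLT].[Customer]
ALTER COLUMN [CompanyName] ADD MASKED WITH (FUNCTION = 'partial(2, "X-X-X", 2)');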
Executing the following script in SSMS will show how the sensitive data will look in the original state:
SELECT TOP (5) [CompanyName]
FROM [SalesLT].[Customer]
While masked, the data will look like this:
Credit card
The credit card type of mask exposes the last four digits of the designated field when masking SQL Server data, and the constant string xxxx-xxxx-xxxx- is added as a prefix in the form of a credit card. The masked credit card number will look like xxxx-xxxx-xxxx-1234. On the Credit_Card_Number column of the SalesLT.Persons table, the process of changing data from the original to the masked form will be followed. The original data will appear when connecting to the Azure SQL database as admin and executing the following script:
SELECT TOP (5) [Credit_Card_Number] FROM [SalesLT].[Persons]
The masked sensitive data from the same column will be seen when connecting to the Azure SQL database as the Test user with data reading permission:
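Under the hood, the credit card format behaves like a special case of the partial() function; a hedged T-SQL sketch, assuming Credit_Card_Number is a character-type column:

ALTER TABLE [SalesLT].[Persons]
ALTER COLUMN [Credit_Card_Number] ADD MASKED WITH (FUNCTION = 'partial(0, "xxxx-xxxx-xxxx-", 4)');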
In this way, the Azure portal allows users to mask SQL Server data by applying these dynamic data masks to the columns that contain sensitive data. If non-privileged users need to see the hidden data, deleting the mask from the Dynamic data masking page will make the original data visible to them again.
Mask SQL Server data with dynamic data masks using ApexSQL Mask
ApexSQL Mask is a data masking tool that uses static and dynamic data masks to mask SQL Server data. This article will explain how easily the dynamic data masking feature can be used through this tool to hide sensitive data.
Connect to SQL Server
On starting the application, the Connect to SQL Server window will be opened, where the SQL Server instance, type of Authentication, and the Database need to be set. When all is set, click the Connect button to connect to the desired database:
Creating dynamic data masks
The main grid of ApexSQL Mask will appear, with all columns presented. The first step to mask SQL Server data is to tick the columns whose data will be hidden. For example, the ListPrice column from the SalesLT.Product table will be used:
To add masks to the selected columns, click on the Manage button in the Home tab:
The Manage masks window will be opened. To create the dynamic data mask, go to the Hide original data tab:
To select which users will have permission to see the original data, click the User permission button in the Hide original data tab:
The User permission window will be opened, where ticking the check box in front of a user's name determines which users can see the original data and which cannot:
When it is defined which user can see the original data, the next step is to click on the New button to start creating the dynamic data mask:
The Create new mask window will be opened, where a name for the mask is required, while the description is optional. Clicking the Type drop-down list, ApexSQL Mask will offer the previously mentioned types of dynamic data masks that can be used to mask SQL Server data:
When the right masking method has been chosen (for example, the Random numeric method, with which the sensitive data will be hidden with values in the range from 3 to 99), the next step is to click the Apply button:
The Apply to columns window will be shown, where the column whose data needs to be masked, in this case the ListPrice column, is moved from the Available grid to the Applied grid. When the column is selected, click the OK button:
The Create new mask window will appear again; clicking the OK button will close it, and the new dynamic mask will be created and will appear in the Manage masks window. Clicking the OK button there returns the view to the main grid:
In the main grid, the newly masked column will appear. Click on that column and, in the Preview window, it can be seen how the data will look after masking:
When all columns are selected and masked, click on the Process button in the Home tab:
The Process mask window will be opened; clicking the drop-down list, the user can choose between the Update database and Create script actions:
Conclusion
To protect sensitive data and prevent security breaches, the Azure portal and ApexSQL Mask have similar functionality, and both tools allow users to mask SQL Server data without changing the data in the database. The Azure portal, using the dynamic data masking feature, protects data only in Azure SQL databases, while ApexSQL Mask can be used on both Azure SQL databases and SQL Server databases. Besides updating the database directly, ApexSQL Mask also allows users to create masking scripts.
Frequently, developers and DBAs need to find SQL objects in databases whose names they only partially remember. Searching for these objects manually can be a time-consuming job, especially when a database has a large number of objects.
In a situation when a search for specific data is needed in all database tables, views, etc., you would have to run a SELECT statement several times, depending on the number of tables or views you want to search. For some searches, you need to write a complex SELECT query, which requires advanced SQL Server development skills.
Furthermore, as a part of some tasks, you get a database that is not documented properly or completely, and you need to look for specific stored procedures that reference a specific table or process. Once you find the SQL objects, you may need to rename them, and in doing so, pay attention not to break referential integrity.
This article will show some examples that can help in the above-mentioned situations. In the first part of the article, a couple of system views that can be used for finding SQL objects and data will be introduced, and in the second part of the article, the third-party software solution called ApexSQL Fundamentals Toolkit for SQL Server will be shown.
ApexSQL Fundamentals Toolkit for SQL Server pack consists of the following tools:
ApexSQL Analyze
ApexSQL Complete
ApexSQL Decrypt
ApexSQL Propagate
ApexSQL Refactor
ApexSQL Script
ApexSQL Search
Find SQL objects
The first system view that will be introduced for finding SQL objects is sys.objects:
SELECT * FROM sys.objects
When running the above code, all user-defined, schema-scoped objects created within the database, including natively compiled scalar user-defined functions, will be retrieved:
This is somewhat helpful. Let’s narrow down the search scope by defining a condition in a Where clause:
SELECT NAME AS ObjectName
    ,SCHEMA_NAME(schema_id) AS SchemaName
    ,type
    ,type_desc
FROM sys.objects
WHERE is_ms_shipped = 0
    AND NAME LIKE '%Emp%'
ORDER BY NAME;
This looks much better, but let’s list only the stored procedures by adding an additional condition to the Where clause: AND type_desc LIKE '%PROCEDURE%':
SELECT NAME AS ObjectName
    ,SCHEMA_NAME(schema_id) AS SchemaName
    ,type
    ,type_desc
FROM sys.objects
WHERE is_ms_shipped = 0
    AND NAME LIKE '%Emp%'
    AND type_desc LIKE '%PROCEDURE%'
ORDER BY NAME;
Subsets of the objects can be viewed by using system views for a specific type of object, such as:
sys.tables
sys.views
sys.procedures
SELECT SCHEMA_NAME(schema_id) AS SchemaName
    ,name
    ,type
    ,type_desc
FROM sys.procedures
WHERE name LIKE '%Emp%';
Another way to find SQL objects in a database is by using the SQL Server Management Studio (SSMS) Object Explorer Details feature.
In Object Explorer, select a database and, from the View menu, choose the Object Explorer Details command:
In the Search box, type the search criteria and press Enter. This will return all user-defined and system objects that match the search criteria:
To find SQL objects, the sys.syscomments and sys.sql_modules views can also be used. Bear in mind, though, that the sys.syscomments view will be removed in a future version of Microsoft SQL Server.
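For completeness, here is a minimal sketch of searching module definitions through sys.sql_modules, the documented successor to sys.syscomments; the search pattern is illustrative:

SELECT SCHEMA_NAME(o.schema_id) AS SchemaName
    ,o.name AS ObjectName
    ,o.type_desc
FROM sys.sql_modules m
INNER JOIN sys.objects o ON m.object_id = o.object_id
WHERE m.definition LIKE '%Emp%';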
Find a string in the definition of stored procedure or function
All the above-mentioned system views are for finding SQL objects, but what about a situation when a string located in the body of a stored procedure or function needs to be found?
For that, the INFORMATION_SCHEMA.ROUTINES view can be used. This system view returns one row for each stored procedure and function that can be accessed by the current user in the current database.
To search for a specific string (keyword) or block of text in stored procedures or functions, for example, to find all references to some table (e.g. the BillOfMaterials table) in the stored procedures or functions of a specific database, run the code below:
SELECT ROUTINE_NAME
    ,ROUTINE_DEFINITION
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_DEFINITION LIKE '%BillOfMaterials%'
    AND ROUTINE_TYPE = 'PROCEDURE'
As can be seen, all stored procedures with a definition in which a specified string is present will be listed.
This system view can be used to find a specific parameter in stored procedures or functions:
SELECT ROUTINE_NAME
    ,ROUTINE_DEFINITION
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_DEFINITION LIKE '%@language%'
    AND ROUTINE_TYPE = 'PROCEDURE';
The ROUTINE_DEFINITION column returns only the first 4000 characters of the definition text of a stored procedure or function, if it is not encrypted. Otherwise, it returns NULL.
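When the full text is needed, the OBJECT_DEFINITION function (or sys.sql_modules.definition) returns the complete, untruncated module text; a minimal sketch:

SELECT ROUTINE_SCHEMA
    ,ROUTINE_NAME
    -- OBJECT_DEFINITION returns nvarchar(max), so nothing is cut off at 4000 characters
    ,OBJECT_DEFINITION(OBJECT_ID(QUOTENAME(ROUTINE_SCHEMA) + '.' + QUOTENAME(ROUTINE_NAME))) AS FullDefinition
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_TYPE = 'PROCEDURE'
    AND ROUTINE_DEFINITION LIKE '%BillOfMaterials%';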
Find data
A common way to find data in a database is using a Where clause with the utilization of the Comparison or Logical operators in the Select statements.
In the example below, all employees whose job title starts with the word ‘Marketing’ are listed:
SELECT e.JobTitle
    ,p.FirstName
    ,p.MiddleName
    ,p.LastName
FROM HumanResources.Employee e
INNER JOIN Person.Person p ON e.BusinessEntityID = p.BusinessEntityID
WHERE e.JobTitle LIKE 'Marketing%';
More about this topic can be found on the Different methods to search SQL Server data page.
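For the situation described at the beginning of the article, where a value must be located across all tables, a dynamic SQL sketch like the one below can generate and run one aggregate query per character column. This is an illustrative, unoptimized approach; it assumes at least one (n)varchar column exists and that the @search value contains no quote characters:

DECLARE @search nvarchar(100) = N'Marketing';
DECLARE @sql nvarchar(max) = N'';

-- Build one COUNT(*) query per (n)varchar column and chain them with UNION ALL
SELECT @sql += N' UNION ALL SELECT N''' + s.name + N'.' + t.name + N'.' + c.name
    + N''' AS [Column], COUNT(*) AS [Matches] FROM ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
    + N' WHERE ' + QUOTENAME(c.name) + N' LIKE N''%' + @search + N'%'' HAVING COUNT(*) > 0'
FROM sys.columns c
INNER JOIN sys.tables t ON c.object_id = t.object_id
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
WHERE TYPE_NAME(c.user_type_id) IN ('varchar', 'nvarchar');

SET @sql = STUFF(@sql, 1, 11, N''); -- strip the leading ' UNION ALL '
EXEC sp_executesql @sql;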
Find SQL objects using a third-party tool
ApexSQL Search, an SSMS/VS add-in, is a part of the ApexSQL Fundamentals Toolkit for SQL Server pack that provides solutions for the situations mentioned at the beginning of the article.
To find SQL objects in one or multiple databases, the Object search feature can be used. In Object Explorer, select a SQL Server or the desired database and, from the ApexSQL Search menu, choose the Object search command:
On the Object search pane, in the Search text box, enter the search string:
From the Server drop-down list, choose the SQL Server on which the databases to be searched are located, and from the Database drop-down list, choose one or more databases:
Additionally, the search can be adjusted by specifying what should be matched (e.g. Object name, Object body, etc.) against the searched text:
Under the Object types drop-down list, select the types of objects to be searched:
When all is set, click the Find button. The objects that match the searched string will be displayed in the result grid:
To find a specific string in the body (definition) of a stored procedure or function, for example, the places where the name of the BillOfMaterials table appears, do the following.
In the Search text box, enter the name of the table, in our case BillOfMaterials. Specify the server/databases where the search will be performed, under the Object type drop-down list choose the objects to be searched, in our example stored procedures and user-defined functions, make sure that the Object bodies check box is checked, and click the Find button:
In the results grid, all occurrences of the BillOfMaterials table will be shown. Click on one of them and the full stored procedure definition will appear. Right-click and, from the context menu, choose the Find command. In the text box, type the name of the table and press Enter. All occurrences of the word BillOfMaterials in the stored procedure definition will be found:
Rename SQL object
Once we find the SQL object of interest (e.g. the BillOfMaterials table), renaming it and reflecting that change to the objects that depend on the renamed object becomes a trivial job using the Safe rename feature.
In Object Explorer, under the database tree, find the object to be renamed, right-click and, from the context menu, choose the Safe rename command:
Another, more intuitive, way to initiate this feature is through the Object search results grid itself. In the Object search results grid, right-click on a result and, from the context menu, choose the Safe rename command:
The Safe rename dialog will appear. In the New name dialog box, enter a new table name and click the Preview script button. In the Generated script tab, a complete SQL script of the renaming process will be shown:
If you want to modify the script before execution, click the Create script button. The generated script will be opened in a new query editor. Now, all that is needed is to click the Execute button and the renaming process will be done.
Conclusion
Finding SQL objects and renaming them can be an easy job with the right tool. There is no need to analyze a whole database and search for dependencies yourself. With the ApexSQL Fundamentals Toolkit for SQL Server pack and the ApexSQL Search tool, you can find all dependent objects, rename them, drop and recreate them when necessary, and avoid breaking database integrity.
Link
This article will show a couple of ways to determine dependencies between SQL Server objects and how to safely rename a SQL Server object without compromising referential integrity or breaking dependencies between objects, using the ApexSQL Fundamentals Toolkit for SQL Server pack.
ApexSQL Fundamentals Toolkit for SQL Server pack consists of the following tools:
ApexSQL Analyze
ApexSQL Complete
ApexSQL Decrypt
ApexSQL Propagate
ApexSQL Refactor
ApexSQL Script
ApexSQL Search
View SQL object dependencies
Understanding dependencies between objects can be very useful in situations like renaming, updating, or deleting objects upon which other objects depend.
Consider a situation when, for some reason (e.g. a typographical error), a table or table column name in a database needs to be changed. Before renaming, you need to know if that table or column is used elsewhere (e.g. in views, stored procedures, etc.) so that renaming will not damage referential integrity.
Or a situation when you are asked to identify cross-database dependencies and find the objects referenced by other databases.
There are several methods to get information about SQL object dependencies.
The first method is by using the SQL Server Management Studio (SSMS) feature called View Dependencies. In Object Explorer, right-click on the object of interest, and, from the context menu, choose the View Dependencies command:
By default, on the Object Dependencies window, the objects that are dependent on the chosen table will be shown:
When the second radio button is selected, in the grid, the objects that the chosen table depends upon will be shown:
The second approach is to use the system stored procedure sp_depends, which shows the views and stored procedures that depend on a table or view, and the tables and views on which a view or stored procedure depends:
EXEC sp_depends @objname = N'Person.Address';
When run, the following information will appear:
Also, the syscomments view can be used to list all views, stored procedures, etc. that depend on a chosen table:
SELECT DISTINCT syso.name
FROM syscomments sysc
INNER JOIN sysobjects syso ON sysc.id = syso.id
WHERE CHARINDEX('Address', TEXT) > 0
Similar to the syscomments view, the INFORMATION_SCHEMA.ROUTINES view can be used to search through the definition of the routine such as functions, stored procedures, etc.:
SELECT routine_name
    ,routine_type
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_DEFINITION LIKE '%Address%';
One more method to determine SQL object dependencies is to use the undocumented SQL Server stored procedure sp_msdependencies.
To view the first level of objects that are dependent on the specified object, run the following query:
EXEC sp_MSdependencies N'Person.[Address]' ,NULL ,1315327;
The result will be:
To see all levels of dependencies for the objects that are dependent on the specified object, run the following code:
EXEC sp_MSdependencies N'Person.[Address]' ,NULL ,266751;
To be able to see the objects on which the specified object is dependent, run the following command:
EXEC sp_MSdependencies N'Person.[Address]' ,NULL ,1053183;
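Since sp_depends and syscomments are deprecated, the documented dynamic management functions sys.dm_sql_referencing_entities and sys.dm_sql_referenced_entities are also worth noting; a minimal sketch:

-- Objects that reference Person.Address
SELECT referencing_schema_name
    ,referencing_entity_name
FROM sys.dm_sql_referencing_entities(N'Person.Address', N'OBJECT');

-- Objects that Sales.vSalesPerson references
SELECT referenced_schema_name
    ,referenced_entity_name
FROM sys.dm_sql_referenced_entities(N'Sales.vSalesPerson', N'OBJECT');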
More about object dependencies can be found on the How to create a SQL dependency diagram in SQL Server page.
For the above-mentioned situations, to get a clear picture of which objects depend on a chosen object and on which objects the chosen object depends, you need to combine several different methods.
In a situation like this, a graphical SQL object dependencies viewer like ApexSQL Analyze can be of great help in finding cross-database dependencies or specific table or column dependencies. All objects in ApexSQL Analyze are presented with different shapes and colors to improve their visual look and readability.
To see database object dependencies, under the Home tab, click the New button and on the Connection to SQL Server window, set credentials to connect to SQL Server, and from the Database drop-down list, choose a database:
When all is set, click the OK button. In the Dependency viewer window, all objects and the dependencies between them will be shown:
In the Object browser pane, find the object of interest and double-click on it; in the view grid, that object will be selected along with all its dependencies, while all other objects will be grayed out:
In the Dependencies pane, all objects on which the chosen object depends, and all objects which depend on the chosen object, will be listed:
Additionally, the level of dependencies depth can be set in the Dependencies depth field under the General tab:
SQL Server safe object renaming
Even when all the SQL object dependencies have been determined using ApexSQL Analyze, creating a script that renames an object and updates all referencing objects without breaking any dependencies between them can be a very challenging job.
Using the Rename command from SSMS is something that won’t help much, and can even do more harm than good; if you rename some object (e.g. the Address table), you can end up with an error like the one below when executing views, stored procedures, etc. that depended on the Address table:
Msg 208, Level 16, State 1, Procedure vSalesPerson, Line 34 [Batch Start Line 0]
Invalid object name ‘Person.Address’.
Msg 4413, Level 16, State 1, Line 24
Could not use view or function ‘AdventureWorks2017.Sales.vSalesPerson’ because of binding errors.
Completion time: 2020-11-23T15:24:41.3589340+01:00
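For illustration, the SSMS Rename command relies on sp_rename, which changes only the object’s metadata and leaves every referencing module’s definition untouched; a sketch that reproduces the error above (run it only against a disposable copy of the database):

-- Renaming updates the catalog only; vSalesPerson's definition still references Person.Address
EXEC sp_rename 'Person.Address', 'Address_Old';

-- Fails with Msg 208: Invalid object name 'Person.Address'
SELECT TOP (1) * FROM Sales.vSalesPerson;

-- Revert the rename
EXEC sp_rename 'Person.Address_Old', 'Address';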
Creating a renaming script from scratch that includes code to modify all dependent objects can be a time-consuming job, with little chance of working on the first run.
The ApexSQL Fundamentals Toolkit for SQL Server pack comes with a feature that solves this problem. The Safe rename feature comes with the installation of the ApexSQL Refactor or ApexSQL Search add-ins. It creates a renaming script for the chosen object which, when run, modifies all dependent objects to reflect the change.
To initiate this feature, in Object Explorer, select the object to be renamed, right-click and, from the context menu, choose the Safe rename command:
The Safe rename window will appear. Here, in the New name field, enter a new name for the object and click the Preview script button:
In the Generated script tab, a generated T-SQL script will be shown that renames the chosen object and updates all dependent objects.
To review or modify the generated T-SQL script, click the Create script button. The new query editor will be opened with all renaming code in it.
Now, all that needs to be done is to click the Run button in SSMS or Visual Studio and the object will be renamed successfully without breaking any SQL object dependencies:
Conclusion
With the ApexSQL Fundamentals Toolkit for SQL Server pack and the tools it provides, all SQL object dependencies related to an object can be determined and the object safely renamed without typing a single line of code.
This example showed how to view dependencies for a table and rename it without breaking any dependencies, but the same steps can be applied to any SQL Server database objects (table/view columns, functions/procedures parameters, views, functions, procedures).
Link
Tracking data changes helps us understand the moving parts of a database, learn about changing trends, easily recover from unwanted data changes or data loss, and add more context to the auditing documentation. In this article, we will introduce ways of tracking database changes by exploring some of the many database auditing mechanisms in the SQL Server ecosystem.
Obtaining such value-added information from live production databases without substantial data archiving and without impacting database performance is a challenging aspiration, and later in this article we will touch on some of the viable options.
Temporal tables
Temporal tables were introduced as a database feature in MS SQL Server 2016 and are generally useful in scenarios when a historical database change log is required, while keeping the performance cost of creating that log low.
Temporal tables are system-versioned tables, in which SQL Server controls the version history and stores historical database changes in a dedicated table. In other words, whenever a DML transaction affects the table, the data is saved with time-stamped versioning. For instance, if UPDATE or DELETE statements affect a table, the data as it was before the change is stored.
Enable system-versioning on a table
Before we start building a database auditing example using temporal tables, let us understand some of the implications of creating them:
A new table dedicated to storing historical data changes is created
The history table has to be schema-aligned with the candidate table in terms of column names, number of columns, ordering, and types
The candidate table must have a primary key defined, and have only one PERIOD FOR SYSTEM_TIME structured via two datetime2 columns
The candidate table gets extra decoration in the SSMS GUI (it is labeled as a system-versioned table in Object Explorer)
When enabling system-versioned tables, the candidate table is required to support version time stamping via “period” columns that indicate the data’s effective times. In other words, we will employ row start and row end columns to tell SQL Server when the row became, and when it stopped being, active. Let us start building the database auditing implementation from scratch. To start, create a new candidate table or choose any existing one; the example below creates a new table that will then be altered to enable system-versioning:
CREATE TABLE Employee (
    [ID] INT NOT NULL PRIMARY KEY CLUSTERED
    ,[FirstName] NVARCHAR(100) NOT NULL
    ,[LastName] NVARCHAR(100) NOT NULL
    ,[Department] VARCHAR(100) NOT NULL
    ,[Role] VARCHAR(100) NOT NULL
    ,[Team] NVARCHAR(100) NOT NULL
    ,[Address] NVARCHAR(1024) NOT NULL
    ,[Salary] DECIMAL(10, 2) NOT NULL
    ,[Bonustarget] DECIMAL(10, 2) NOT NULL
    );
Now that we have a candidate table, let us follow the requirements and adjust it to support this feature by adding two columns that will indicate the period of data validity:
ALTER TABLE Employee ADD
    ValidFrom datetime2(2) GENERATED ALWAYS AS ROW START HIDDEN
        CONSTRAINT DF_ValidFrom DEFAULT GETUTCDATE()
    ,ValidTo datetime2(2) GENERATED ALWAYS AS ROW END HIDDEN
        CONSTRAINT DF_ValidTo DEFAULT '9999-12-31 23:59:59.99'
    ,PERIOD FOR SYSTEM_TIME(ValidFrom, ValidTo);
With these two columns added, we can move forward and enable system-versioning. The T-SQL code below enables the feature and instantiates a new history table named “EmployeeHistory”:
ALTER TABLE Employee SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));
A historical data trail will be stored inside the history table whenever a DML operation drives changes. In the next chapter, we will explain how to query the database auditing trail from this read-only table.
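For instance, a hypothetical row inserted and then modified would leave its prior versions in the history table (the values below are illustrative):

-- The INSERT creates the current row version; no history row yet
INSERT INTO Employee ([ID], [FirstName], [LastName], [Department], [Role], [Team], [Address], [Salary], [Bonustarget])
VALUES (7, N'Alex', N'Brown', 'IT', 'DBA', N'Core', N'1 Main Street', 5000.00, 500.00);

-- The UPDATE moves the pre-change row version into dbo.EmployeeHistory
UPDATE Employee SET [Role] = 'Senior DBA', [Team] = N'Platform' WHERE [ID] = 7;

-- The DELETE preserves the last row version in dbo.EmployeeHistory
DELETE FROM Employee WHERE [ID] = 7;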
Querying data from system-versioned tables
Temporal tables allow time-based data analysis via specialized sub-clauses to retrieve data from the candidate and history tables. This additional query power does not affect conventional querying for the current data; for that reason, in the example above we used the HIDDEN attribute on the period columns to avoid displaying them in a result set unless they are specifically called out in a SELECT statement.
Retrieving the database auditing data from the system-versioned table is driven via sub-clauses defining the desired time frame; here are the five options that can be used:
ALL – Shows the complete history of changes and the current data
AS OF <date_time> – Displays what data looked like at a specific time
FROM <start_date_time> TO <end_date_time> – Gives an overview of the data in a specific period, including the start and end dates
BETWEEN <start_date_time> AND <end_date_time> – Provides the data overview in a specific time range between the start and end dates
CONTAINED IN (<start_date_time> , <end_date_time>) – Returns rows that existed in a given time range
The example below provides a query that retrieves historical data changes made in the Employee table against specific rows with IDs 7 and 8, for the period from the 17th of November 2020 to the 21st of November 2020:
SELECT *
FROM dbo.Employee
FOR SYSTEM_TIME FROM '2020-11-17' TO '2020-11-21'
WHERE ID IN (7, 8)
ORDER BY ValidFrom
The result set shows the data changes for each row, including the ValidFrom and ValidTo columns, and it can be used as a tool to track down which changes affected the row; in this example, we updated the Role, Team, and Bonus Target values:
Temporal tables play a significant role in the SQL DBA toolset, as they provide in-time data analytics, reconstruct the data state and the database auditing trail for data changes, help calculate trends and recognize anomalous behavior, and support recovery from unwanted data changes.
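For example, reconstructing what a row looked like at a particular moment takes a single AS OF query against the sample table above (the timestamp is illustrative):

-- Returns the row version that was current at the given point in time
SELECT [ID], [FirstName], [LastName], [Role], [Team]
FROM dbo.Employee
FOR SYSTEM_TIME AS OF '2020-11-18 10:00:00'
WHERE [ID] = 7;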
At first glance, it seems this technology is an all-in-one solution for DBAs; however, system-versioned tables can’t provide much of the metadata in the auditing trail. Row versions store and provide the actual data state and the time it was effective, leaving no trail of who made the change, what application was used, from which host, and what exact SQL operation made the changes. To overcome this challenge, it is required to employ additional auditing power to streamline this information, and in this article, we will introduce how to get data changes using a 3rd party database auditing tool.
Track data changes using a third-party database auditing tool
ApexSQL Audit is a compliance and auditing tool for SQL Server that tracks database activities and collects auditing information for almost 200 SQL operations, including DML activities. Each audited event includes information on time, login, application, SQL query text, client host, and schema to describe the activity in generated reports.
Database changes per DML transaction can be reviewed by leveraging a specific feature referred to as Before-After. It is a SQL trigger-based mechanism that collects before and after values of data changes, with the ability to distinctly tell which row in a table was affected by the change.
Before-After is a comprehensive database auditing mechanism that is easily configurable via the application GUI and significantly decreases the effort needed to configure it against multiple tables in a database.
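Conceptually, a before-after capture of a single column could be built with a plain AFTER UPDATE trigger like the sketch below; this is a simplified illustration of the principle, not ApexSQL Audit’s actual implementation, and the audit table is hypothetical:

-- Hypothetical audit table for captured before/after values
CREATE TABLE dbo.PersonAudit (
    [EventTime] datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
    ,[LoginName] sysname NOT NULL DEFAULT SUSER_SNAME()
    ,[BusinessEntityID] int NOT NULL
    ,[FirstNameBefore] nvarchar(50) NULL
    ,[FirstNameAfter] nvarchar(50) NULL
    );
GO
CREATE TRIGGER trg_Person_BeforeAfter ON Person.Person
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- deleted holds the before values, inserted holds the after values
    INSERT INTO dbo.PersonAudit ([BusinessEntityID], [FirstNameBefore], [FirstNameAfter])
    SELECT d.BusinessEntityID, d.FirstName, i.FirstName
    FROM deleted d
    INNER JOIN inserted i ON i.BusinessEntityID = d.BusinessEntityID;
END;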
Let us quickly walk through the configuration process to track database changes on multiple tables in one editing session; here are the steps:
In the ApexSQL Audit GUI head to the Before-After tab
Opt to add the database and continue by specifying what tables you want to audit; we used AdventureWorks2017 for the example below
The configuration can be granularly defined and combine operations with the table columns, allowing the user to achieve specific auditing needs
Via this 3-step configuration using the application interface, we have completed configuring database auditing to track changes for each table and column per specific needs. After the auditing commences, the history of changes will be reconstructed and easily overviewed in auditing reports with rich metadata information:
The reporting feature provides the ability to generate and automate reporting tasks, including the ability to specify filters while obtaining this data from the auditing event logs. As an extra feature, ApexSQL Audit also provides a comprehensive data alerting mechanism to raise an alert on any specific data change criteria and raise awareness of the activity in no time:
Configuring the auditing specification, reporting tasks, and alerting are the three core pillars to utilize with ApexSQL Audit. In this article, we’ve covered quick and easy configuration and the output details; to grasp more about how to specify reporting and alerting to reach specific goals, feel free to visit the How to configure and use before-after auditing in ApexSQL Audit article.
Summary
In this article, we have covered two of the many SQL auditing solutions to track database changes. The native temporal tables feature is a true powerhouse that helps in multiple scenarios when creating a database auditing trail, especially with analytics and recovery. But we’ve also acknowledged that system-versioned history changelogs do not provide the rich metadata information that is a valuable piece of the audit review process. As a potential solution to this challenge, ApexSQL Audit demonstrates a strong capability to track and document database changes based on specific auditing policies, run and create the reporting documentation, and provide real-time alerting, which is all the more significant due to the regulatory requirements that apply nowadays.
Link
In the previous article, we described How to collect information about SQL audits and database mails using SQL documentation and the problem that the topic involves.
As a part of that problem, we should also consider linked server objects, so, in addition to the before-mentioned SQL Server objects, we will add these objects to the SQL documentation. How to collect information related to linked servers will be described in this article.
Linked server information in the SQL Server Management Studio
Linked servers allow database clients to work directly with other SQL Servers. This brings the following benefits (see the query example after the list):
The ability to connect to a remote server
The ability to execute remote queries, updates, commands, and transactions on a variety of data sources
The ability to use different data sources in a similar way
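For example, once a linked server is defined, remote data can be queried with a four-part name or pushed to the remote side with OPENQUERY; the server name MYLINKEDSRV below is hypothetical:

-- Four-part name: the local server parses the query and fetches the remote data
SELECT TOP (10) FirstName, LastName
FROM [MYLINKEDSRV].AdventureWorks2017.Person.Person;

-- OPENQUERY: the whole inner statement executes on the remote server
SELECT *
FROM OPENQUERY([MYLINKEDSRV], 'SELECT TOP (10) FirstName, LastName FROM AdventureWorks2017.Person.Person');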
Using SQL Server Management Studio, all needed information to recreate a linked server on another environment can be collected, so let’s demonstrate how to collect the related information.
Collecting Linked Servers information from Object Explorer
Once the connection to the desired SQL Server that has linked servers is established, in the Object Explorer panel, expand the Linked Servers item, which can be found under the Server Objects folder:
A list of all available linked servers will be shown. From this list, right-click over the desired linked server name and from the context menu select the Properties option:
The new Linked Server Properties window will be opened. From the General tab, the information related to the Linked server name and the Server type can be collected:
From the Security tab, the information related to logins toward the linked server will be shown. In this example, no explicit login mapping is set, so the connections will be made using the custom security context. This means that a %user% will be able to connect to the desired server using a remote login name and password:
From the Server Options tab, general server parameters, that define relationship and dependencies with the parent server, can be collected:
All collected information from the previous tabs can be used to create the SQL documentation, then recreate the desired Linked Server on some test server or even on the same server.
Collecting Linked Servers information using T-SQL
Collecting the linked server information can also be done using T-SQL. Right-click over the desired linked server name, then, from the context menu, go to Script Linked Server as – CREATE To – New Query Window:
The following T-SQL Script will be created and opened in the new query window:
USE [master]
GO
/****** Object: LinkedServer [%server name%] Script Date: 11/14/2020 12:12:45 PM ******/
EXEC master.dbo.sp_addlinkedserver @server = N'%Server name%'
    ,@srvproduct = N'SQL Server'
/* For security reasons the linked server remote logins password is changed with ######## */
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = N'%Server name%'
    ,@useself = N'False'
    ,@locallogin = NULL
    ,@rmtuser = N'%User name%'
    ,@rmtpassword = '########'
GO
EXEC master.dbo.sp_serveroption @server = N'%Server name%'
    ,@optname = N'collation compatible'
    ,@optvalue = N'false'
GO
EXEC master.dbo.sp_serveroption @server = N'%Server name%'
    ,@optname = N'data access'
    ,@optvalue = N'true'
GO
EXEC master.dbo.sp_serveroption @server = N'%Server name%'
    ,@optname = N'dist'
    ,@optvalue = N'false'
GO
EXEC master.dbo.sp_serveroption @server = N'%Server name%'
    ,@optname = N'pub'
    ,@optvalue = N'false'
GO
EXEC master.dbo.sp_serveroption @server = N'%Server name%'
    ,@optname = N'rpc'
    ,@optvalue = N'false'
GO
EXEC master.dbo.sp_serveroption @server = N'%Server name%'
    ,@optname = N'rpc out'
    ,@optvalue = N'false'
GO
EXEC master.dbo.sp_serveroption @server = N'%Server name%'
    ,@optname = N'sub'
    ,@optvalue = N'false'
GO
EXEC master.dbo.sp_serveroption @server = N'%Server name%'
    ,@optname = N'connect timeout'
    ,@optvalue = N'0'
GO
EXEC master.dbo.sp_serveroption @server = N'%Server name%'
    ,@optname = N'collation name'
    ,@optvalue = NULL
GO
EXEC master.dbo.sp_serveroption @server = N'%Server name%'
    ,@optname = N'lazy schema validation'
    ,@optvalue = N'false'
GO
EXEC master.dbo.sp_serveroption @server = N'%Server name%'
    ,@optname = N'query timeout'
    ,@optvalue = N'0'
GO
EXEC master.dbo.sp_serveroption @server = N'%Server name%'
    ,@optname = N'use remote collation'
    ,@optvalue = N'true'
GO
EXEC master.dbo.sp_serveroption @server = N'%Server name%'
    ,@optname = N'remote proc transaction promotion'
    ,@optvalue = N'true'
GO
This T-SQL script can be used later in SQL Server Management Studio to create the linked server on a test server or can be saved as a backup in SQL documentation.
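Alternatively, the basic linked server metadata (without the login secrets) can be listed directly from the sys.servers catalog view; a minimal sketch:

-- Lists all linked servers defined on the instance
SELECT name
    ,product
    ,provider
    ,data_source
    ,modify_date
FROM sys.servers
WHERE is_linked = 1;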
SQL documentation software
ApexSQL Doc allows database administrators to create various SQL documentation related to SQL databases, SQL Server objects, SQL Server Integration Services (SSIS), etc. More information can be found on the SQL Database documentation tool page.
As an alternative to the previous examples of how to collect information related to linked servers using SQL Server Management Studio, now it is time to show how this can be done using ApexSQL Doc.
When ApexSQL Doc is started, from the Home tab, click the New button to initiate a new project:
Select the Server engine from the Data sources and objects panel:
Click on the Add button and the Add SQL Server window will open. In this window, select the targeted SQL Server and the Authentication type from the drop-down list, then click the OK button to confirm and establish the connection:
In the server panel the connected SQL Server will be shown:
Quick tip:
ApexSQL Doc allows adding more than one SQL Server at a time.
Now it is time to select the desired SQL Server objects which will be included in the SQL documentation. This can be done by clicking on the Server objects item under the Server engine tab.
As we can see, all available SQL Server objects will be listed in the Object type grid. To complement the information about database mails and SQL audits included in the previous article, find the Linked servers item and tick the check box next to it. On the right side, all available linked servers will be listed. To select the desired linked server, tick the check box next to its name:
One more step is left before creating the SQL documentation, and that is to select an output file format for the documentation.
By default, ApexSQL Doc will generate documentation as a .chm file, but this can be changed by clicking the File format item under the Output options tab and selecting the desired output file format.
For this demonstration, the Linked HTML (.html) output file format will be used:
When the desired output file format is selected, the last step is to create the documentation. This can be done using the Generate button from the Home tab:
Quick tip:
When the SQL documentation is generated in HTML output file format, it will be opened in the default browser.
In the documentation, we can see that we have all items in one place, Audits, Database mails and Linked servers under the Server object content tree:
Collect SQL Linked Servers information using ApexSQL Doc
Click on the Linked server name in the generated SQL documentation and on the right side of the documentation the information about the Linked server properties will be shown:
In the same page of the SQL documentation scroll down to the Linked server script section, where the T-SQL script is shown.
This SQL script can be executed in SQL Server Management Studio to recreate the linked server on the test server:
As we can see from the result of executing the T-SQL script created by ApexSQL Doc, the linked server is successfully recreated:
Conclusion
As we can see, the process of collecting the information is easy no matter which information collection method the database administrator uses.
Depending on the environment, any of the above-explained solutions can be easy enough, but using a SQL documentation tool like ApexSQL Doc provides:
Preservation of collected information – When the linked server is created, the database administrator can keep a backup of the T-SQL script, which can be used again if, for example, someone deletes the created linked server
Saving collected information in various output formats:
Compiled HTML (.chm)
Linked HTML (.html)
Markdown (.md)
Word document 97-2003 (.doc)
Word document 2007 (.docx)
Portable Document Format (.pdf)
Sharing the collected information between database developers – The created documentation can be saved on the cloud services or sent to another database developer
Link
One of the problems that can be solved with SQL auditing technologies in conjunction with SQL Server transaction logs is finding specific events, i.e. a specific database or data change, capturing the event, and resolving it as wanted or unwanted. Depending on the methodology and tools, this can be an easy or complicated task. The ApexSQL Audit tool provides a solution where, in a few easy steps, a specific event can be audited with high granularity and precision. An additional problem surfaces in case an unwanted change occurs: how to easily revert those changes? The usual methodology consists of creating SQL Server transaction log backups and restoring them. But restoring implies reverting the database to a specific moment in time, thus reverting even the wanted changes.
This article will explain and provide an example of how to utilize the auditing options that ApexSQL Audit offers and the recovery options that ApexSQL Log has to offer, and set up an automated process via the Windows Task Scheduler to create undo scripts for every potentially unwanted change.
Setting the SQL audit configuration
The first problem that needs to be resolved is to define and capture critical database events through auditing. Let’s take an example where a specific database needs to be monitored for all table alterations and data access and changes:
Including INSERTs, UPDATEs, and DELETEs, on the specific table, in this example the Person table:
And, for this instance, we will consider all non-system logins that have access to the database’s server:
Now that the SQL auditing is set, ApexSQL Audit will be able to collect the information about occurring events that match the criteria. Based on that, the building of a fail-safe mechanism can proceed.
Setting the audit alert
When general SQL auditing is enabled, the next step is to introduce alerting. This function can be set to target more specific events, i.e. narrow down the scope of events that we consider critical and that should be covered by a fail-safe process. For this example, we will set a Before-after type alert to be triggered when a specific login (e.g. the ‘Administrator’ account) performs a DML operation against the Person table:
When the event occurs, this alert can be set to provide notification about it via email, but what is more important for setting the recovery automation is that it will log the event in the Windows log, which is enabled by default:
For more details about configuring custom alerts, see the article: How to create a custom alert with an email notification
Here is how the alert will work now: if the ‘Administrator’ account executes a statement like this one:
UPDATE [Person].[Person]
SET [Title] = N'Mrs'
    ,[FirstName] = N'Alex'
    ,[LastName] = N'Brown'
WHERE ([BusinessEntityID] = 1)
Against the audited database, besides receiving a notification about it, the additional result for this event (which will be used later) will look like the screenshot below:
The before-after auditing report will show this event also and will reflect changes described with the UPDATE statement above:
Since we created an alert that is triggered on exactly this type of event, the event is usually unwanted. But the damage is already done, and a response is required.
Reverting database changes
A basic practice in database maintenance is creating backups. Along with the option to recover a database after a SQL Server failure or some other critical environmental event, database backups, together with the SQL Server transaction logs, can be used to revert unwanted changes, as explained in this example.
The procedure to revert changes is done in a few simple steps, providing that database backup exists. Using the SSMS’ Object Explorer, right-click on the database where SQL audit is activated. From the context menu, expand the Tasks option, expand the Restore option, and click Database:
This will open the database restore dialogue. In the dialogue, click on the Timeline button to invoke the SQL Server transaction log timeline access:
In the shown dialogue, set the database restore time to a moment that precedes the time of the event and confirm with OK to get back to the previous dialogue:
Make sure that all connections to this database are dropped and it is in the single-user mode or the restore process will fail. Confirm with OK and the restore process will be executed:
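The same point-in-time restore can also be scripted in T-SQL with the STOPAT option; a sketch with illustrative backup paths and stop time:

-- Take exclusive access, restore the full backup, then roll the log forward to just before the event
ALTER DATABASE AdventureWorks2014 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

RESTORE DATABASE AdventureWorks2014
FROM DISK = N'C:\Backup\AdventureWorks2014_full.bak'
WITH NORECOVERY, REPLACE;

RESTORE LOG AdventureWorks2014
FROM DISK = N'C:\Backup\AdventureWorks2014_log.trn'
WITH STOPAT = N'2020-11-23 15:20:00', RECOVERY;

ALTER DATABASE AdventureWorks2014 SET MULTI_USER;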
The reversion result can be easily checked using a simple SELECT statement to check at least one record with parameters used in the example UPDATE statement:
SELECT *
FROM [AdventureWorks2014].[Person].[Person]
WHERE BusinessEntityID = 1
And the resulting values can be compared with values in the before-after report, in other words, the before value should be found in the table again.
After the restore, another problem comes to the surface. What if some other changes were introduced to the database before we were able to react? For example, someone altered a table, and in the audit report that event can be seen with a timestamp later than the before-after event:
This means that we reverted those changes even though there was an intention to keep them. Situations like this can be resolved by pinpointing the exact set of records in the SQL Server transaction log and creating an undo script, which can be achieved and automated using ApexSQL Log.
Creating the Undo script
ApexSQL Log is a tool that can access SQL Server transaction logs and specific records in them, and create undo scripts based on those records. Even more, the tool can be triggered through the CLI, so its process can be automated in conjunction with SQL audit technologies.
Based on the example used to update the record in the Person table, the following walkthrough example will show how to pinpoint and read the event from the audited SQL Server transaction log with ApexSQL Log.
When started, the ApexSQL Log will immediately start the configuration wizard. From the initial step, connect to the server and database that has SQL audits applied:
In the following step, select the scope of data sources to search the event from. In most cases, the current, online transaction log will be sufficient. The tool will scan for audited events based on the defined criteria, which are certainly logged in the current SQL Server transaction log as they occurred a few moments ago:
Further on, select the Undo/Redo type of operations to process:
The following step is where details about the events should be set. First, define the time frame of the events to be considered for processing. Continuous auditing is recommended as a more optimized method to search for events through logs: it performs a differential scan based on the previous log state, per event scope, stored internally in a designated .axtr file. For every scope for which an undo script should be created, a separate name should be designated for the .axtr file:
Further, the scope of events should be defined which means that the filtering setup should match the criteria used for SQL audit alert. So, in the Tables tab select the Person table:
And in the Users tab pick the ’Administrator’ account:
The rest of the options can remain unchanged with their defaults.
At the final step, we will skip the finalization of the process and, instead, use the configured parameters to create the automation script, which can be in Batch or PowerShell form:
Setting the undo automation
The generated CLI script will have the parameters that include the scope of events that the undo script should envelop, the connection parameters for the audited instances, the data sources to use, the reference to the .axtr file, and the name of the exported undo script. For the undo script file name, in order to preserve the undo history, it is recommended to manually adjust the file name to include a variable, for example a timestamp; otherwise, the file will be overwritten with every CLI execution.
So the parameter that would look like this:
/undo:"C:\Users\Milan\Documents\ApexSQL\ApexSQL Log\Undo.sql"
should be changed to look like this:
/undo:"C:\Users\Milan\Documents\ApexSQL\ApexSQL Log\Undo_%SAVESTAMP%.sql"
Here, %SAVESTAMP% has to be manually defined to prevent the M/D/Y and H:M:S formats in case the system date-time settings use that specific format, since the file name cannot contain the '/' and ':' characters. The definition should be inserted at the top of the script:
set SAVESTAMP=%DATE:/=-%@%TIME::=-%
With the CLI script modified as explained, we can proceed with creating the scheduled task. In summary, the scheduled task will be configured to be triggered on Windows application log event and execute the CLI script.
To create the scheduled task, on the machine where ApexSQL Audit and ApexSQL Log are installed, open the Windows Task Scheduler and click on the Create task item:
In the General tab of the shown dialogue, fill in the information about the task name, and make sure that the task is executed with the highest, administrator-level privileges. This is the requirement for the ApexSQL Log CLI execution:
In the Triggers tab, click the New button, and in the opened dialogue set the task to be run On an event, choose the Application log type and in the Source field, set the ApexSQL Audit application. Optionally, a small delay can be introduced between the SQL audited event and SQL Server transaction log-based undo script creation:
In the Actions tab, click the New button and in the opened dialogue set the Start a program action and provide the path to the CLI script created with the ApexSQL Log:
Confirm the action creation with the OK button, and then confirm the task creation the same way.
As soon as the SQL audit alerts and the undo script scheduled task are created, the fail-safe mechanism is set and will enable the reversion of every defined event in case it is unwanted. The generated undo script will look like this:
The script will target the one specific event that triggered its creation. The example provided will only create the undo script; it will have to be run manually, which is the recommended method in order to maintain full control and leave room for reviewing the situation.
In case it is required to execute the undo script immediately and automatically, that can be easily set by adding the following line at the end of the example batch script:
sqlcmd -S ServerName\InstanceName -i PathToUndoScript:\Undo_%SAVESTAMP%.sql -o PathToExecutionOutputFile:\UndoLog.txt
Conclusion
The procedure of setting SQL audit alerts and scanning the SQL Server transaction log should be repeated for every critical situation that might emerge as a real problem. This way, high granularity and surgical precision can be achieved in case reverting changes is required.