Post has been published on http://www.varindersandhu.in/2015/08/31/compress-sql-server-backup-in-right-way/
Compress SQL Server Backup in Right Way
Databases have the nasty habit of growing to mammoth sizes with continued usage over the years. And it is common knowledge that databases that grow too large become prone to problems such as corruption and mismanagement. It thus becomes important to keep tabs on their growing sizes before they reach the point of no return. That’s why organizations engage in frequent backing up and shrinking of entire databases (which is not a very safe option). However, just as databases can grow large, database backups can also expand over the years and need to be confined to certain limits. It is at this juncture that database backup compression finds its need.
Database backup compression traditionally comes at a hefty price, often billed per byte transferred in and out of servers. For an organization whose database backups run to many GBs in size, that could mean an expense of thousands of dollars just to compress the backups into manageable and safer forms.
This is the primary factor that prompts organizations to look for simpler, more budget-friendly ways to compress their SQL database backups. In this article, we’ll bring to light some neat ways to perform SQL Server backup compression with minimum risk, expense and hassle.
First things first – why is shrinking the database or its backup not a good idea?
We often come across questions on forums where people are looking for ways to shrink their databases. While it is quite a popular technique used to reduce database size, it is not a very safe way to solve the problem.
When you initiate the shrink operation, you basically instruct SQL Server to remove any unused space from the database files. What “shrinking” the database actually does is cause index fragmentation and risk data loss (which would require you to rope in SQL backup recovery software, incurring additional costs), while adversely affecting performance. So while it may seem like a good immediate solution, in the long run it doesn’t prove fruitful.
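For reference, here is what the shrink operation looks like in T-SQL (a sketch only – the database and file names are placeholders, and as discussed above, making this a habit is not recommended):

-- The shrink commands discussed above, shown only for reference.
DBCC SHRINKDATABASE (SalesDB);           -- shrink the whole database
DBCC SHRINKFILE (SalesDB_Data, 1024);    -- or shrink a single file to roughly 1 GB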
So when you observe that a database backup is reaching its size limit, opt for the safer compression approach explained in the next section rather than shrinking.
Things to remember before compressing SQL server backups
Before you jump to the compression techniques kindly take note of these words of caution:
Consider compressing your SQL server backup only if you have a second separate server that can act as a standby and that you can use to:
Perform the restore of production backups
Backup other smaller databases
Decrease the size of the restored production backups
Having a second server is emphasized since if you use the techniques mentioned here to compress your backups without delegating the regular tasks to a standby server, you could end up with super-sluggish performance.
You should perform thorough testing of your compressed backups regularly. This is to ensure that after putting so much time and effort into compressing them, your backups don’t end up becoming unusable for restoring data. After all, being able to restore data is why backups are taken in the first place, right?
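As a hedged illustration (the paths and names below are placeholders), a backup can be sanity-checked with RESTORE VERIFYONLY and, more convincingly, with a full test restore on the standby server:

-- Validate the backup media without restoring it.
RESTORE VERIFYONLY FROM DISK = N'\\BackupShare\SalesDB_Full.bak';

-- The real proof: restore the backup under a test name on the standby server.
RESTORE DATABASE SalesDB_Test
FROM DISK = N'\\BackupShare\SalesDB_Full.bak'
WITH MOVE N'SalesDB_Data' TO N'D:\TestRestore\SalesDB_Test.mdf',
     MOVE N'SalesDB_Log' TO N'D:\TestRestore\SalesDB_Test.ldf',
     REPLACE;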
Compressing SQL Server Backups – The steps!
Before we detail the steps, just to make things clear we’d like to briefly point out the distinction between clustered and non-clustered indexes. A clustered index is just the definition of how a table is laid out on disk, so it doesn’t occupy much extra disk space. Non-clustered indexes, however, are additional copies of parts of the table and hence take up extra disk space.
The reason we mentioned that would be clear to you in the first step:
Drop all Non-Clustered indexes
You read it right; you need to script out all the non-clustered indexes in the database, save those definitions to a table or stored procedure, create a stored procedure that loops through the definitions to recreate the indexes later, and then execute the ‘DROP’ command for each one.
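Here is a minimal sketch of the idea, not a production script – it assumes simple key-only indexes (no included columns, filters or partitioning) and uses a hypothetical table name for the saved definitions:

-- Capture CREATE statements for all non-clustered indexes into a table.
SELECT s.name AS SchemaName, t.name AS TableName, i.name AS IndexName,
       'CREATE NONCLUSTERED INDEX ' + QUOTENAME(i.name)
       + ' ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ' ('
       + STUFF((SELECT ', ' + QUOTENAME(c.name)
                FROM sys.index_columns ic
                JOIN sys.columns c ON c.object_id = ic.object_id
                                  AND c.column_id = ic.column_id
                WHERE ic.object_id = i.object_id
                  AND ic.index_id = i.index_id
                  AND ic.is_included_column = 0
                ORDER BY ic.key_ordinal
                FOR XML PATH('')), 1, 2, '') + ')' AS RecreateCommand
INTO dbo.SavedIndexDefinitions
FROM sys.indexes i
JOIN sys.tables t ON t.object_id = i.object_id
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE i.type = 2;   -- 2 = non-clustered

-- After verifying dbo.SavedIndexDefinitions, drop the captured indexes.
DECLARE @sql nvarchar(max) = N'';
SELECT @sql += N'DROP INDEX ' + QUOTENAME(IndexName) + N' ON '
             + QUOTENAME(SchemaName) + N'.' + QUOTENAME(TableName) + N'; '
FROM dbo.SavedIndexDefinitions;
EXEC sys.sp_executesql @sql;

A stored procedure that loops through the saved RecreateCommand values with EXEC can then bring the indexes back once the compressed backup has been taken.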
The benefit of this step is that if the non-clustered indexes were occupying, say, 45% of the database space, the compressed backup will be 45% smaller. The drawback is that the database will take a little longer to become fully available after a restore, since the stored procedure to recreate the indexes must run first.
Rebuild tables with 100% Fill Factor
Fill factor determines how much of each page SQL Server fills when it stores data. Usually, when data is written to SQL Server pages, some space is left free to accommodate records that need to be added later, which otherwise would require a lot of shuffling. To compress the backup, as in this scenario, we need to cram as much data onto each page as possible to save space. For this, you’ll need to rebuild all clustered indexes with a 100% fill factor, as sketched below.
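A hedged one-liner of the rebuild (the table name is a placeholder; in practice you would loop over every table in the database):

-- Rebuild all indexes on a table, packing each page completely full.
ALTER INDEX ALL ON dbo.Orders
REBUILD WITH (FILLFACTOR = 100, SORT_IN_TEMPDB = ON);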
This will take quite a lot of IO operations and thus again, it should be undertaken only if you have a standby server available.
Final Words
So it all boils down to the two steps mentioned above. At the end of these steps, what you’ll have are backups that are almost half their original size. And you know what that means – half the bandwidth expense, half the storage requirements in the long run and half the time needed for SQL Server backup recovery. That established, we’d also like to point out that while these techniques are effective, your individual results may vary depending upon the number of indexes you have and on your fill factor.
Author Bio: Priyanka Chouhan is a technical writer at Stellar Data Recovery with 5 years of experience and has written several articles on SQL Server and SharePoint. In her spare time she loves reading and gardening.
Post has been published on http://www.varindersandhu.in/2015/07/24/sql-server-single-instance-vs-multiple-instances/
SQL Server - Single Instance vs. Multiple Instances
Single Instance
Pros
Only one instance needs to be administered on the single machine.
There is no duplication of components or processing overhead, such as having to run multiple database engines on the same computer. This means that the overall performance of a server with a single instance may be higher than a server running multiple instances.
A single instance of SQL Server is capable of handling the processing growth requirements of the largest Web sites and enterprise data-processing systems, especially when it is part of a federation of database servers.
Performance – one instance per server, always. The reasons have to do with the SQL Server memory manager and CPU scheduler architecture: SQL Server really works best when it has the whole box to itself and nothing else runs on it. ‘Partitioning’ resources (max server memory, affinity mask) solves some problems and introduces others.
Security – if one SQL Server runs into problems and a third party needs to get involved with troubleshooting, you can give them OS-level permissions without worrying about what they’ll do to other SQL Servers installed on the box.
When patching SQL Server, be sure to patch all of the instances, not just the default one. Many patches need to be applied per instance, not per server.
Cons
Open
Multiple Instances
Pros
Server consolidation is one of the trending reasons to have multiple instances of SQL Server on a single server.
When you must support different systems that have to be securely isolated from each other, such as when a service owner has a large server and must create a separate instance of SQL Server for each customer.
When you need to support multiple test and development databases, and the most economical configuration is to run these as separate instances of SQL Server on a single large server.
Saves the cost of additional server operating system licenses.
Cons
Each instance will fight over resources (CPU, memory, etc.), impacting the performance of all instances.
Confusion among users about which instance to use.
Problems with application compatibility: some apps just aren’t compatible with named instances of SQL Server (see the connection example after this list).
If one SQL Server runs into problems and a third party needs to get involved with troubleshooting, you cannot give them OS-level permissions without worrying about what they’ll do to the other SQL Server instances installed on the box.
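To illustrate the named-instance wrinkle (server and instance names below are placeholders), a default instance is addressed by server name alone, while a named instance needs the SERVER\INSTANCE form:

-- From a command prompt:
--   sqlcmd -S DBSERVER01              (default instance)
--   sqlcmd -S DBSERVER01\REPORTING    (named instance)
-- Inside a session, confirm which instance you reached:
SELECT @@SERVERNAME AS FullInstanceName,
       SERVERPROPERTY('InstanceName') AS InstancePart;  -- NULL on a default instance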
Bottom Line:
Running multiple instances of SQL Server on the same machine is an easy way to lose performance and scalability. Often when addressing tuning issues, we focus on what is happening inside one SQL Server and miss the forest of instances sitting on the same box. The recommended configuration for most production database servers is a single instance of SQL Server with multiple databases.
Post has been published on http://www.varindersandhu.in/2015/03/24/packtpub-postgresql-cookbook-review/
PacktPub - PostgreSQL Cookbook Review
PostgreSQL Cookbook, published with over 90 hands-on recipes to effectively manage, administer, and design solutions using PostgreSQL, is written by Chitij Chauhan, a leading expert in the area of database security with expertise in database security products such as IBM InfoSphere Guardium, Oracle Database Vault and Imperva. You can learn more about the book here: bit.ly/1CkxzNe
With the goal of teaching you the skills to master PostgreSQL, the book begins by giving you a glimpse of the unique features of PostgreSQL and how to utilize them to solve real-world problems. With the aid of practical examples, the book will then show you how to create and manage databases. You will learn how to secure PostgreSQL, perform administration and maintenance tasks, implement high availability features, and provide replication. The book will conclude by teaching you how to migrate information from other databases to PostgreSQL.
Get into the Details Below:
Managing databases and the PostgreSQL server
Controlling security
Backup and recovery
Routine maintenance tasks
Monitoring the system using Unix utilities
Monitoring database activity and investigating performance issues
High availability and replication
Connection pooling
Table partitioning
Accessing PostgreSQL from Perl
Accessing PostgreSQL from Python
Data migration from other databases and upgrading the PostgreSQL cluster
If you are a system administrator, database administrator, architect, developer, or anyone with an interest in planning, managing, and designing database solutions using PostgreSQL, this is the book for you. This book is suited for you if you have some prior experience with any relational database or with the SQL language.
Download the sample chapter: bit.ly/1CkxzNe
Post has been published on http://www.varindersandhu.in/2015/01/23/effective-database-maintenance-management-index-fragmentation/
Effective Database Maintenance and Management – Index Fragmentation
We have said before that indexes are easily the most important database structures insofar as database performance tuning is concerned. Therefore, you must pay keen attention to their configuration, structure and arrangement in order to get the best level of performance out of them.
Theory of index fragmentation
Fragmentation does not only occur at the file-system level and inside log files; data files can also be affected by fragmentation – specifically within the sections that store index and table data. This can happen in two ways:
Internal fragmentation – occurs within individual index and data pages
Scan fragmentation – occurs within index (logical scan fragmentation) or table (extent scan fragmentation) structures that are made up of pages.
Internal fragmentation
This occurs where a page contains a lot of empty space, which can come about where a single table or index record occupies more than half of a page, meaning only one record can be stored per page. Where this is the problem, only a schema change would correct it, but more often than not even that is ineffective.
The more common reason for internal fragmentation is data modifications that leave empty spaces on pages, e.g. updates, deletions and inserts. A misconfigured fill factor can also cause fragmentation. Depending on the schema of the table or index, the free space created may be irrecoverable, meaning that over time the amount of empty, unusable space will keep growing in the database.
Wasted space implies that more data or index pages are needed to store the same amount of data. This takes up more disk space and forces queries to issue more IOs to read the same volume of data.
Logical and extent scan fragmentation
This occurs due to an operation referred to as a page split – when a record must be inserted into a particular index page that does not have enough space to fit it. As a result, the page is split into two, and the latter portion is moved into a new page that is usually detached from the old page. The data therefore becomes fragmented.
The concept similarly applies to extent scan fragmentation. The fragmentation interferes with the SQL Server’s ability to carry out scans efficiently, whether it is throughout the index or table, or restricted by a WHERE clause in a query.
Solutions to fragmentation
Changing the schema of a table or index is very hard, even impossible, but it is the best way to prevent fragmentation. Where prevention is not available, fragmentation can be removed by rebuilding or reorganizing indexes. Rebuilding means creating a new copy of the index, one that is contiguous and compact, and then discarding the fragmented one. Rebuilding is normally an offline operation, though recent versions of SQL Server allow online rebuilds, with restrictions.
Reorganizing is a lighter operation, one that can take place online, and a space-efficient alternative to rebuilding. Reorganization uses an in-place algorithm to defragment and compact an index. Details of the trade-offs between the two can be found in online resources to determine the best method for your own database. Further assistance is available from the remote DBA experts at remotedba.com.
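As a sketch of how this decision is commonly automated (the 5%/30% thresholds are widely used guidance rather than a rule from this article, and the index names in the ALTER statements are placeholders):

-- Measure average fragmentation for every index in the current database.
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name AS IndexName,
       ips.avg_fragmentation_in_percent,
       CASE WHEN ips.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
            WHEN ips.avg_fragmentation_in_percent >= 5 THEN 'REORGANIZE'
            ELSE 'LEAVE AS-IS' END AS SuggestedAction
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0;   -- skip heaps

-- Then act on a chosen index:
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;                  -- online, in-place
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD WITH (ONLINE = ON);  -- online rebuild, with edition restrictions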
Some DBAs choose to rebuild or reorganize all indexes at the end of the day or week as opposed to finding out which indexes have fragments. Developers usually implement this as part of a maintenance plan option, but it is a very poor choice where databases are larger and where resources come at a premium.
Whichever method is applied, index fragmentation should be investigated regularly and corrected for better database performance.
Bio: Charlie Brown is a freelance content writer. He has written many informative articles on technology, software, the internet, etc. In this article, he has given information on effective database maintenance. In his free time, he loves to collect more and more information on different topics of technology.
Post has been published on http://www.varindersandhu.in/2015/01/10/top-7-things-website-owners-know-backing-website/
Top 7 Things Website Owners Should Know When Backing Up Their Website
In today’s world of internet blogging/websites, taking a few moments to make an easy, quick and convenient backup of your database will save you untold hours, days or even weeks of hard work. Listed below are seven important things you’ll need to know about backing up your website, before disaster strikes unexpectedly.
1- How Many Times Should You Backup?
It’s your call. How often you should back up depends on how important the data is and how often you make updates.
2- How Many Backups Should You Keep?
Keep at least three copies in different places. You can use CDs/DVDs, an external hard drive, a thumb or flash drive, or one of several online backup services. The important thing is to keep everything backed up regularly.
3- Can Backups Be Automatically Set?
Yes. However, you will do best if you do a manual backup in addition to an automatically scheduled backup. Online backup services today offer many options, but it’s always a good idea to do a free trial run of the service to see how efficiently they function. Some of the most reputable services you may try are Carbonite, Mozy, and iCloud just to mention a few. Some offer free trial or promotional offers usually for a one month basis, so choose wisely.
4- Failure to Back-up Your Site Can Be Costly
The latest statistics from one online survey indicate that a full 90 percent of business blogs suffering from data loss shut down within two years of the loss. A startling 43 percent of blogs suffering any kind of disaster, such as fire, flood, or failure to have an implemented plan, never fully recover. About 70% of blog owners have experienced data loss due to accidental deletion, disk failure, malware, or hacking.
5- Database Backup or Full Backups?
We highly recommend running database backups for quick, frequent backups and running full backups to safeguard your images, themes, and plugins.
6- Backing Up Off-Site
In addition to keeping your own backed-up copies of the site, we suggest sending a copy of your website to an online, off-site facility on a regular schedule. It is generally not a good idea to store everything you have on the same server as your web hosting server.
7- Backup Software
With so many good, and not so good, backup software packages available, your best bet would be to carefully compare different types of data backup software.
Backing up your website is perhaps one of the most vitally important tasks, and the wisest, you’ll ever do regardless of what blogging platform you are on.
Author Bio – Peter is a technical writer and data specialist, specializing in data backup and recovery.
Post has been published on http://www.varindersandhu.in/2014/11/21/unable-connect-sql-server-instance-26-error-locating-serverinstance-specified/
Unable to connect the SQL Server Instance - 26 - Error Locating Server/Instance Specified
Problem\Issue:
User is not able to connect to the SQL Server named instance ServerName\InstanceName.
Additional Error Messages:
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (Provider: SQL Network Interfaces, error: 26 – Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)
Environment: A default SQL Server instance is running on the VM, and another named instance is installed on the same machine. The user is able to connect to the default instance but not to the named instance.
Cause: The SQL Server Browser service is not running.
Reason: The basic purpose of the SQL Server Browser service is to provide instance and port information to incoming connection requests.
If you have just one instance installed on the machine and it is running on the default port 1433, then the status of the SQL Server Browser service makes no difference to your connection parameters.
If more than one instance is running on the same machine, you either have to start the SQL Server Browser service or provide the port number along with the IP (or server name) and instance name, to access any instance other than the default.
If the SQL Server instance is configured to use dynamic ports, the Browser service is required to resolve the correct port number, as illustrated below.
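As a hedged sketch (the server, instance and port below are placeholders), the two usual remedies are starting the Browser service or connecting with an explicit port:

-- From an elevated command prompt, start the Browser service:
--   net start SQLBrowser
-- Or bypass the Browser by specifying the TCP port directly:
--   sqlcmd -S ServerName,1435           (explicit port, no Browser needed)
--   sqlcmd -S ServerName\InstanceName   (needs the Browser when the port is dynamic)
-- Once connected, you can see which port the instance is actually using:
SELECT local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;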
Make sure SQL Server is enabled for remote connections, as described here >> Enable Remote Connection for SQL Server
Read About SQL Browser Services >> SQL Browser Service
Hope it helps.
Post has been published on http://www.varindersandhu.in/2014/11/11/sql-server-how-to-read-the-sql-server-error-log-files-using-tsql/
SQL Server – How to read the SQL Server Error Log files using T-SQL
There is an undocumented system stored procedure, sp_readerrorlog, which allows us to read the SQL Server error log files directly using T-SQL. It is a wrapper around the extended stored procedure xp_ReadErrorLog, which the examples below call directly.
This procedure takes a total of seven parameters, as given below:
First Parameter – Error log file to read: 0 = current, 1 = Archive #1, 2 = Archive #2, etc.
Second Parameter – Log file type: 1 = SQL Server error log, 2 = SQL Server Agent error log
Third Parameter – Search string 1 (string value)
Fourth Parameter – Search string 2 (string value)
Fifth Parameter – Start date: read log entries from this date
Sixth Parameter – End date: read log entries up to this date
Seventh Parameter – Sort order: ASC (ascending) or DESC (descending)
Important Note: Without passing any parameters this SP will return the contents of the current error log.
Example:
--Return the current SQL Server error log
EXEC xp_ReadErrorLog 0, 1
--Return the current SQL Server Agent error log
EXEC xp_ReadErrorLog 0, 2
-- Read the current SQL Server error log for entries containing 'sql'
EXEC xp_ReadErrorLog 0, 1, N'sql'
-- Read the current SQL Server error log for entries containing 'sql' and 'error'
EXEC xp_ReadErrorLog 0, 1, N'sql', N'error'
-- Same search, limited to entries from the specified date
EXEC xp_ReadErrorLog 0, 1, N'sql', N'error', N'20141027'
-- Read the current SQL Server error log for the specified date
EXEC xp_ReadErrorLog 0, 1, NULL, NULL, N'20141027'
-- Read the current SQL Server error log for the specified date in descending order
EXEC xp_ReadErrorLog 0, 1, NULL, NULL, N'20141027', NULL, N'Desc'
Hope it helps !
Post has been published on http://www.varindersandhu.in/2014/10/16/leveraging-validator-toolkit-asp-net-mvc-build-client-server-side-form-validation/
Leveraging Validator Toolkit for ASP.NET MVC to Build Client and Server-Side Form Validation
Validator Toolkit for ASP.NET MVC
After gaining an incredible amount of popularity as one of the finest web development frameworks, ASP.NET is now available with a validator toolkit that enables developers to build both client-side and server-side form validation. The MVC programming model available for ASP.NET has made its mark in the world of ASP.NET development. In today’s blog, I’ll be covering details about the validator toolkit that’s available to ASP.NET developers.
What’s the Validator Toolkit for ASP.NET MVC used for?
While using ASP.NET, developers need a solution for validating an HTML form on both the client and server side. Thanks to the Validator Toolkit for ASP.NET MVC, it has become possible for developers to perform form validation on the client and server side using validation sets. The Validator Toolkit for ASP.NET MVC has been uploaded as a new project on Microsoft’s open source community site at http://mvcvalidatortoolkit.codeplex.com/.
Validator Toolkit for ASP.NET MVC – how does it work?
The Validator Toolkit uses validation sets as its central element. Validation sets are special classes that derive from the ValidationSet base class and define all the validation rules for a particular HTML form. The toolkit generates client-side JavaScript code during the view rendering process, based on the validators that have been defined. For client-side validation, the toolkit uses the very powerful jQuery JavaScript library together with the jQuery validation plug-in, customized to support the behavior the developer expects. Apart from client-side HTML form validation, the jQuery library can also be used for creating a lot of other exciting stuff, including animations, special effects, etc.
Validator Toolkit for ASP.NET MVC – understanding it better
Here’s a look at a sample validation set:
public class LoginValidationSet : ValidationSet
{
    protected override ValidatorCollection GetValidators()
    {
        return new ValidatorCollection
        (
            new ValidateElement("username") { Required = true, MinLength = 5, MaxLength = 30 },
            new ValidateElement("password") { Required = true, MinLength = 3, MaxLength = 50 }
        );
    }
}
In the above code, the LoginValidationSet class defines the rules for validating a simple login form. It does this by overriding the GetValidators method of the base class, returning a ValidatorCollection instance with all the validators required to validate the HTML form at a later point in time. Here, the username field is required and its input must contain a minimum of 5 and a maximum of 30 characters. A value for the password field is also required, with the character count restricted to between 3 and 50.
An uncertainty associated with the use of the Validator Toolkit for ASP.NET
If the Validator Toolkit uses custom attributes for setting the validation rules, there is actually no guarantee that the HTML form validation process will validate in exactly the same order in which the attributes were defined. Since the Type.GetCustomAttributes method returns the attribute list in alphabetical order, this uncertainty becomes even more pronounced. If you’re looking to validate HTML forms on the client and server side, you may also choose to write your own custom validators, or use the ValidateScriptMethod validator to call a specific JavaScript function on the client side while defining a method within the validation set class to validate the HTML form on the server side.
Validator Toolkit brings in the convenience of creating custom validators
With a handy validator toolkit, you can easily create custom validators with just some basic knowledge of the jQuery JavaScript library and the validation plug-in. The sample site shipped with the toolkit includes a custom validator named ValidateBuga, which checks the input value against a constant string called ‘buga’. With each validator deriving from a specific Validator class, a custom validator provides a couple of virtual methods to override.
Wrapping Up
That was a roundup of the basic features and workings of the Validator Toolkit for ASP.NET MVC, which is used for performing HTML form validation on the client and server side. Hopefully the details covered above will serve as a handy guide for your forthcoming form validation work.
About the Author :
Celin Smith is an ASP.Net web developer and blogger at Xicom Technologies Ltd. who loves to write about web & mobile apps. If you are looking to Hire .Net MVC Developers, you can get in touch with her or directly contact Xicom.
Post has been published on http://www.varindersandhu.in/2014/08/11/sql-server-enable-sql-server-2012-alwayson-availability-groups/
SQL Server - Enable SQL Server 2012 AlwaysOn Availability Groups
In the previous post, SQL Server – AlwaysOn Availability Groups, we described the enhancements to AlwaysOn Availability Groups in SQL Server 2014. In this post, we will see how to enable SQL Server 2012 AlwaysOn Availability Groups using the UI and PowerShell.
Using UI
Go to SQL Server Configuration Manager >> navigate to the SQL Server service >> right-click the SQL Server service and click Properties.
Navigate to the AlwaysOn High Availability tab >> check the “Enable AlwaysOn Availability Groups” check box.
Using PowerShell
By using the Enable-SqlAlwaysOn and Disable-SqlAlwaysOn PowerShell cmdlets, we can enable or disable the AlwaysOn Availability Groups feature.
PS C:\> Enable-SqlAlwaysOn -ServerInstance INSTANCENAME -Force
PS C:\> Disable-SqlAlwaysOn -ServerInstance INSTANCENAME -Force
The benefit of using PowerShell shows when we have two or more Windows failover cluster nodes: it minimizes the effort of logging in to each cluster node and opening SQL Server Configuration Manager to enable AlwaysOn Availability Groups.
Get-Cluster >> Gets information about one or more failover clusters in a given domain.
Get-ClusterNode >> Gets information about one or more nodes (servers) in a failover cluster.
With the help of the foreach command, we can enable AlwaysOn Availability Groups on all the available nodes.
PS C:\> foreach ($node in Get-ClusterNode) { Enable-SqlAlwaysOn -ServerInstance $node -Force }
You are done !!!
Post has been published on http://www.varindersandhu.in/2014/07/29/sql-server-database-mirroring-vs-log-shipping/
SQL Server – Database mirroring vs Log Shipping
I would like to start this topic with a very common question for database administrators.
Question: What is the difference between Database Mirroring and Log Shipping? Which is the preferable solution?
Database Mirroring:
Database mirroring is functionality in the SQL Server engine that reads from the transaction log and copies transactions from the principal server instance to the mirror server instance. Database mirroring can operate synchronously or asynchronously. Database mirroring supports only one mirror for each principal database. Database mirroring also supports automatic failover if the principal database becomes unavailable. The mirror database is always offline in a recovering state, but you can create snapshots of the mirror database to provide read access for reporting purposes etc.
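For example, a snapshot of the mirror can be created like this (a sketch only – the database, logical file and path names are placeholders, and database snapshots are an Enterprise edition feature in the SQL Server versions of this era):

-- Create a point-in-time, read-only snapshot of the mirrored database.
CREATE DATABASE Sales_Snapshot_20140729
ON ( NAME = Sales_Data,   -- logical name of the mirror's data file
     FILENAME = 'D:\Snapshots\Sales_Snapshot_20140729.ss' )
AS SNAPSHOT OF Sales;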
Log Shipping
Log shipping is based on SQL Server Agent jobs that periodically take log backups of the primary database, copy the backup files to one or more secondary server instances, and restore the backups into the secondary database(s). Log shipping supports an unlimited number of secondaries for each primary database.
I personally don’t think that either is preferable over the other, because both have their pros and cons. All I can say is that it all depends upon your requirements. More information about both technologies is available in SQL Server Books Online.
Post has been published on http://www.varindersandhu.in/2014/07/17/sql-server-log-shipping-disaster-recovery-solution/
SQL Server - Log shipping for Disaster Recovery Solution
What is Log Shipping?
Log Shipping is a basic-level SQL Server high-availability technology. It is an automated process that sends transaction log backups from a primary database on a primary server instance to one or more secondary databases on separate secondary server instances. The transaction log backups are applied to each of the secondary databases individually.
How does Log Shipping work?
Log shipping involves copying a database backup and subsequent transaction log backups from the primary server and restoring the database and transaction log backups on one or more secondary servers.
Prerequisite
The primary database must use the full or bulk-logged recovery model.
Shared folder should be created to hold the transaction log backups.
SQL Server Agent Service must be configured properly.
Restriction:
Log shipping cannot be used as an automatic failover plan.
Here we will give you a demo – how to set up Log Shipping.
The very first step is to make sure your database is using the full or bulk-logged recovery model.
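A quick way to check this, and to switch the model if required (a sketch; the database name is a placeholder):

-- Confirm the recovery model of the database to be log shipped.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'SalesDB';

-- Switch to the full recovery model if it is currently SIMPLE.
ALTER DATABASE SalesDB SET RECOVERY FULL;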
Log in to the primary server and navigate to the database Properties, then select the Transaction Log Shipping page. Check the “Enable this as a primary database in a log shipping configuration” check box.
In the next step, we configure the backup settings. Here we mention the network path to the backup folder; we can also use a local folder path if the backup folder is located on the primary server. Once this step is complete, the backup job is created on the primary server.
In the next step we add the secondary server instances/databases; we can add more than one if we want to log ship to multiple servers.
When we click the Add button, a new screen opens where we configure the secondary server details. On the “Initialize Secondary Database” tab there are three options. We selected the first option, which takes a fresh backup of the primary database and restores it on the secondary.
On the “Copy Files” tab, we mention the destination shared folder where the log shipping copy job will copy the transaction log backup files. Once this step is complete, the copy job is created on the secondary server.
On the “Restore Transaction Log” tab, we mention the database restore state and the restore schedule. Once this step is complete, the restore job is created on the secondary server.
No Recovery Mode: the database stays in a restoring state and cannot be read until it is brought online.
Standby Mode: the database is available in read-only mode.
In the next step, we configure log shipping monitoring. Monitoring can be done from the primary server, the secondary server or any other SQL Server instance, and we can configure alerts on the primary/secondary server if the respective jobs fail.
When we click the “Settings…” button, a new screen opens where we mention the monitoring server details. Once this step is complete, the alert job is created on the primary server.
Now click the “OK” button to finish the log shipping configuration.
Click “Close”
Now check your secondary server; the secondary database will appear in a restoring or standby state.
Log shipping has been configured successfully !