Thursday, December 31, 2015

SQL Recovery Software: An Ultimate Tool For SQL Database Recovery

SQL Server is a database management system that enables users to store and retrieve data from a database. For reasons such as virus infection or improper shutdown of the server, the data in a database may become damaged or corrupted. Recovery of this data is possible with SQL Recovery Software. The tool provides easy recovery of database files from corruption or damage, no matter how deep the damage is. Both MDF and NDF files can be repaired with the tool, and all the triggers, stored procedures, tables, functions, etc. in the database are recovered within seconds.
More On Software
The SQL repair tool restores all the data from a damaged database file of any server version, such as SQL Server 2014, 2012, 2008, etc. With the software, even hard-deleted data items can be restored without any change to the data. Being a Windows-based application, the tool supports all versions of Windows 8.1 and below. The Advance scan option recovers the entire data when the corruption is severe, while the Quick scan mode handles minor damage. The tool also provides an auto-detection option for selecting NDF files as well as the SQL Server version. Once the database files are restored, the software allows the user to export them back to the server.
Overview Of The Tool
Function: Recovers SQL database from corruption
Version: 6.2
Size: 6.10 MB
Server Support: SQL Server 2014, 2012, 2008 / 2008 R2, 2005, etc.
Windows Version: Supports Windows 8.1 and below

Outstanding Features of SQL Recovery Software

Scan Modes
The tool implements advanced algorithms for recovering files. The “Scan Mode” option helps recover corrupted or damaged server database files. Two scan modes are supported: Quick and Advance. If the database file is only slightly corrupted, users can choose the “Quick” scan mode; if the damage is severe, the “Advance” scan restores all the data without loss.

Detects Server Version
If you don’t know the version of the SQL Server where the database file was created, the “Auto detect SQL server file (.mdf) version” option will automatically detect the server file version once the checkbox is ticked. Alternatively, the user can specify the server version directly from the radio buttons provided, but should be careful while doing so, since selecting the wrong server version may lead to inconsistency.
NDF File Scan Option
Users can add multiple files or folders for scanning and recovery, and added files can be removed from the list as well. The notable feature of the NDF option is that it automatically detects the NDF files when the “Autodetect the SQL server secondary database (.ndf) files” option is selected. If the user does not know the location of the files, the auto-detect feature is useful both for finding the files and for saving time. Otherwise, the user can select the files manually using the “Let me choose the SQL Server secondary database (.ndf) files” option.

Hierarchical Structure
After the database files are restored, the tool shows their contents in a hierarchical manner. The software displays all the contents, such as stored procedures, tables, triggers, functions, rules, etc., and each item is expandable. This structure makes browsing the recovered data comfortable and easy.

Export Modes
Once the files or folders are recovered, the tool allows exporting them back to the database. The user can export either to “SQL Server Database” or to “SQL Server Compatible SQL Scripts”. With the first option, the user has to provide the server name, database name and the server credentials. The latter option is usually chosen when the server is not available or the files are not to be exported to the server for the time being; it saves the files as SQL scripts that the user can run later.

Dual Export Option
While exporting the database files to the server or saving them as scripts, the tool offers two options: export “With only Schema” or “With Schema & Data”. Both options apply to both export modes, i.e. SQL Server Database and SQL Server Compatible SQL Scripts.

Versions & Its Extents
Demo Version
The capability of the application can be checked using the available demo version. If the user is satisfied with how it works, they can go for the purchase option. However, the demo version is limited to recovery of the files; exporting and saving cannot be done with it.
License Version
Along with the demo version, the full version is available on the official website. The licensed version recovers all the added files or folders and allows exporting and saving the restored files back to the server, if needed.
Pros & Cons
Pros:
  • Supports the recovery of both .ndf and .mdf files from damage or corruption.
  • Enables scanning of multiple .ndf files at a time.
  • Automatically selects the SQL Server version for the MDF files, if unknown.
  • Two scanning modes for the recovery of the files.
  • Selects the NDF files automatically, if the user doesn’t know the location.
  • Exports retrieved files back to Server database or saves in Compatible SQL Scripts.
  • Provides facility for exporting either schema only or both schema and data together.
Cons:
  • Doesn’t scan and recover LDF files of the database.
The Verdict
From a technical point of view, the tool is equipped with all the features needed for recovering database files. On the basis of its working, the software can be rated 9.6/10. Though the tool doesn’t recover LDF files, it efficiently restores both MDF and NDF files, which are more important than the LDF files. For repairing damaged database files, users can depend on the SQL Recovery Software without a second thought.

Saturday, August 15, 2015

Happy Independence Day - Jai Hind

Wishing you a very

Happy Independence Day


" We owe a lot to the Indians, who taught us how to count, without which no worthwhile scientific discovery could have been made. "Albert Einstein.



     Some facts about India 
  1. Indus Valley Civilization is the world’s oldest civilization. So, India is the world’s oldest, most advanced and continuous civilization.
  2. India has the world’s third largest active army, after China and USA but India has never invaded or attacked a country.
  3. Today, India is the world’s third largest economy.
  4. The Tirupati Balaji temple and the Kashi Vishwanath Temple both, receive more visitors than the Vatican City and Mecca combined.
  5. Every 12 years, a religious gathering called the Kumbh Mela occurs in India. It is the world’s largest gathering of people.
  6. Varanasi, also known as Benaras, was called "the Ancient City" when Lord Buddha visited it in 500 B.C., and is the oldest, continuously inhabited city in the world today.
  7. India has the third largest Muslim population in the world and has more mosques (300,000 mosques) than any other nation in the world.
  8. Jews and Christians have lived continuously in India since 200 B.C. and 52 A.D. respectively
  9. Two major religions, Buddhism and Jainism, were established in India.
  10. Takshila is said to be the first ever university in the world; it started around 700 BC.
  11. Indian Railways employs more than 1.3 million people. That’s more than the population of many nations.
  12. India is the largest democracy in the world. More than 54 crore people voted in the 2014 General Election – more people than the population of USA, UK, Australia and Japan combined.
  13. India has the largest English speaking population in the world.
  14. India’s space program is one of the top 5 space programs in the world and, of course, we discovered water on the moon.
  15. Chess was invented in India.
  16. And discovered the number zero.
  17. And the value of pi. 
  18. And trigonometry, algebra, calculus.
  19. Ayurveda is the earliest school of medicine known to mankind. The Father of Medicine, Charaka, consolidated Ayurveda 2500 years ago.
  20. Cataract surgery and plastic surgery were invented in India.
  21. Around the 17th century, India was one of the richest countries in the world.
  22. India is one of the only three countries that makes supercomputers (the US and Japan are the other two).
  23. India has the largest number of vegetarians in the world. So much so that Pizza Hut had to open its first pure vegetarian restaurant in the country and KFC had to introduce a “vegetarian” menu for India.
  24. India gave the world Yoga, that has existed for more than 5,000 years.
  25. Martial Arts was first created in India.

Proud To Be An Indian


Friday, August 14, 2015

SQL Script to find all the triggers defined on a database or on a single table

Sometimes we need to find all the triggers defined on a database. In this case, we can use the below SQL query:-

SELECT
       tbl.name as [Table Name]
     , trig.name as [Trigger Name]
     , trig.is_disabled 
FROM [sys].[triggers] as trig
INNER JOIN sys.tables as tbl
ON trig.parent_id = tbl.object_id 

If we need to find all the triggers defined on a particular table, we can use the below SQL script:-

SELECT
       tbl.name as [Table Name]
     , trig.name as [Trigger Name]
     , trig.is_disabled 
FROM [sys].[triggers] as trig
INNER JOIN sys.tables as tbl
ON trig.parent_id = tbl.object_id 
WHERE tbl.name = 'Tblname'

/* where Tblname is the name of the table */


Related article

SQL Triggers - An Introduction

Keep learning and don't forget to give feedback on the article. You can also send feedback to me at my mail id askvivekjohari@gmail.com


Thursday, August 13, 2015

Interview question - Is Clustered index on column with duplicate values possible?

Through this article, we are going to discuss three important SQL interview questions, which are given below:-


 1) Can we create clustered index on a column containing duplicate values?

 2) Can we create a Primary Key on a table on which a clustered index is already defined?

 3) If a clustered index is already defined on a table and we create a primary key on that table, is any index automatically created on the column on which the primary key is defined? 

Many people say no to the first question because they always consider the clustered index and the primary key together, and a primary key can only be created on a column with unique, non-null values. Since the primary key and clustered index combination is usually considered the best combination, a match made in heaven, many people think a clustered index can only be created on the column on which the primary key is created.

But this is not correct. We can create a clustered index on columns on which no primary key is defined; in other words, a clustered index can be created on a non-primary-key column of a table. We know that when we define a primary key, a clustered index is created on that column (or columns) automatically by default. But if a clustered index is already defined for the table and we create the primary key later, then a clustered index is not created on the primary key column, because only one clustered index is possible per table and, in this case, it is already defined. So the primary key and the clustered index are different objects in SQL. Uniqueness is a must for a primary key, but it is not a required condition for a clustered index.

For example,
Suppose we have a table called tbl_clustered whose structure is given below:-

create table tbl_clustered (Id int identity(1,1), Empname nvarchar(200))

Below is the query to insert data into the table tbl_clustered:

Insert into tbl_clustered (Empname) Values ('Kumar'), ('Gupta'), ('Arora'), ('Gupta'), ('Gupta')

Now let's see the data inserted into the table using the query below:

select * from tbl_clustered 

Result:-

Id Empname
1 Kumar
2 Gupta
3 Arora
4 Gupta
5 Gupta

So far, we have created a table and inserted some data into it. The Id column is the identity column, Empname is an nvarchar column, and we have inserted some duplicate values in the Empname column.

Some people think that when we create an identity column, a primary key is automatically created on it. This is wrong. We usually create a primary key on an identity column because it contains unique, non-null values, but a primary key is not created on it by default; we have to create it ourselves.

Let's look at the table structure to make sure that no primary key or clustered index has been created on the table yet.

To see the table structure, we use the following command:-

sp_help tbl_clustered


Now let's create a clustered index on the table tbl_clustered.

create clustered index inx_clux_test on tbl_clustered( Empname )

The above command creates the clustered index successfully, irrespective of the fact that the Empname column contains duplicate values. Let's check the table structure again:

sp_help tbl_clustered


If you look at the sp_help output above, you can find the details of the clustered index in the last row.

Now let's create a primary key on the table to show that if a clustered index already exists on a table and we then create a primary key, the primary key is created, but without a clustered index.

Let's create the primary key using the script below:

Alter table tbl_clustered add primary key ( id)

Let's look at the table details again to check whether any clustered index was created on the primary key column when we defined the primary key on the table.



From the output, we can see that if we create a primary key on a table on which a clustered index is already defined, a non-clustered index is created automatically on the primary key column (the query sketched below lists both). So the table has two indexes:
  1. The clustered index we defined manually on the table initially.
  2. The non-clustered index created automatically along with the primary key, on the column on which the primary key is defined.
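
The same information can also be verified with a small query against the sys.indexes catalog view (a minimal sketch using the example table above):

-- List the indexes created on tbl_clustered
SELECT name, type_desc, is_primary_key
FROM sys.indexes
WHERE object_id = OBJECT_ID('tbl_clustered')
  AND type_desc <> 'HEAP';

On the example table this should return two rows: inx_clux_test as CLUSTERED with is_primary_key = 0, and the automatically created primary key index as NONCLUSTERED with is_primary_key = 1.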

Summary:- Through this article, I tried to show that the primary key and the clustered index are different objects. We can create a clustered index on a column that contains duplicate values, but a primary key can only be created on a column that contains unique, non-null values. Also, if a clustered index is already defined on a table and we then create a primary key on it, a non-clustered index (instead of a clustered index) is created on the column on which the primary key is defined. Still, we should try to design the database so that the clustered index is created on the primary key, as we mostly use the primary key in join conditions and it is referenced by foreign keys. A clustered index on the primary key therefore helps in increasing database performance.

You can refer to the following articles for a better understanding of the concepts used in this article:


For Primary Keys

For Indexes:-

Keep learning and don't forget to give feedback on the article. You can also send feedback to me at my mail id askvivekjohari@gmail.com

Monday, August 3, 2015

Question of the Month (July)

Suppose we have a table emp_plan which contains 4 columns: Empid (employee id), Planid (project id), Startdate (allocation start date) and Enddate (allocation end date). Its structure is given below:-


Create table emp_plan (empid nvarchar(200),planid nvarchar(20),startdate datetime, enddate datetime)

Also below is the script to enter sample data into the table:-

insert into emp_plan(empid,planid,startdate,enddate)
select '001','planA','2015-05-15','2015-05-30'
union all
select '001','planA','2015-05-31','2015-06-14'
union all
select '001','planA','2015-06-15','2015-06-30'
union all
select '001','planA','2015-07-10','2015-07-20'
union all
select '001','planA','2015-07-21','2015-07-30'
union all
select '001','planB','2015-06-14','2015-06-30'
union all
select '001','planB','2015-07-10','2015-07-20'
union all
select '001','planB','2015-07-21','2015-07-30'
union all
select '002','planA','2015-06-14','2015-06-30'
union all
select '002','planB','2015-07-10','2015-07-20'
union all
select '002','planB','2015-07-21','2015-07-30'

So the data contained in the table is as given below:-



Now, if an employee has the same project (planid) with no gap before the next allocation segment (an allocation segment means the period from allocation startdate to allocation enddate), then we need to merge the continuous allocation segments. For example, in the first 3 rows of the table, the employee with empid 001 is allocated to the same project (planA) and all the allocation segments are continuous (from 2015-05-15 to 2015-06-30), as shown below:-

2015-05-15 to 2015-05-30  (1st row)
2015-05-31 to 2015-06-14  (2nd row)
2015-06-15 to 2015-06-30  (3rd row)

But the 4th row contains an allocation segment which is not continuous with the above 3 segments, as there is a gap between the 3rd row's enddate and the 4th row's startdate. The 4th row is, in turn, continuous with the 5th row for the same employee id 001 and project id (planA), as shown below:-

2015-07-10  to 2015-07-20 (4th Row)

2015-07-21 to 2015-07-30 (5th Row)

So the expected result for these rows should be:


Empid     Planid               Startdate                           Enddate
001          planA     2015-05-15 00:00:00.000     2015-06-30 00:00:00.000     (1st to 3rd row)
001          planA     2015-07-10 00:00:00.000     2015-07-30 00:00:00.000     (4th to 5th row)

So, according to the above rule, the expected result from the sample data inserted into the table should be as given below:-

Expected Result

Empid     Planid               Startdate                           Enddate
001          planA     2015-05-15 00:00:00.000     2015-06-30 00:00:00.000
001          planA     2015-07-10 00:00:00.000     2015-07-30 00:00:00.000
001          planB     2015-06-14 00:00:00.000     2015-06-30 00:00:00.000
001          planB     2015-07-10 00:00:00.000     2015-07-30 00:00:00.000
002          planA     2015-06-14 00:00:00.000     2015-06-30 00:00:00.000
002          planB     2015-07-10 00:00:00.000     2015-07-30 00:00:00.000

So what should be the query/stored procedure to get the above result?
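
One possible way to get this result (a sketch, not necessarily the intended answer) is the classic gaps-and-islands technique: flag each row that does not continue the previous segment, turn the flags into segment numbers with a running sum, and then take MIN(startdate)/MAX(enddate) per segment. The sketch below assumes SQL Server 2012 or later, since it uses LAG and a windowed SUM:

;WITH ordered AS (
    SELECT empid, planid, startdate, enddate,
           -- 1 marks the first row of a new segment, 0 marks a continuation
           CASE WHEN DATEDIFF(DAY,
                              LAG(enddate) OVER (PARTITION BY empid, planid ORDER BY startdate),
                              startdate) = 1
                THEN 0 ELSE 1 END AS new_segment
    FROM emp_plan
),
segments AS (
    SELECT *,
           -- running sum of the flags gives each continuous segment its own number
           SUM(new_segment) OVER (PARTITION BY empid, planid
                                  ORDER BY startdate
                                  ROWS UNBOUNDED PRECEDING) AS segment_no
    FROM ordered
)
SELECT empid, planid,
       MIN(startdate) AS startdate,
       MAX(enddate)   AS enddate
FROM segments
GROUP BY empid, planid, segment_no
ORDER BY empid, planid, MIN(startdate);

Run against the sample data above, this should return the six rows shown in the expected result.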

Saturday, July 11, 2015

Do you know about Whatsapp web?


WhatsApp claims that it currently has 500 million users worldwide. That is pretty impressive considering the number of distinctly capable competitors it has within the space of social networking communication: Viber, Line, Google Hangouts, Facebook Messenger and Hike (popular in India). While all these products are growing inch by inch and adding new sets of features, WhatsApp had not really seen any major makeover or big release since Facebook took it over. Changing the paradigm, WhatsApp's web version has been introduced recently. Let us review and assess whether we have got a cherry on an already delicious pineapple cake, or a bitter cup.
To begin using WhatsApp Web, you have to follow a few steps:
1. Update your whatsapp application to the latest version:  Right now, the web version is only compatible with Google Chrome when paired with the Android, Windows Phone, or Blackberry version of the mobile app. iOS users sadly have to wait for some more time till they see the sunrise.
2. Pairing: The process of bringing WhatsApp to the desktop begins by pairing the web version with the WhatsApp mobile application. To pair your mobile application, navigate to the web version of WhatsApp by typing http://web.whatsapp.com in your browser's address bar. You will see an interface similar to the one below.
Whatsapp web screenshot


Now, open the Whatsapp application on your Android mobile and open Menu. You shall be prompted to scan the QR code present on the Chrome window (As depicted in figure above).
Note: A QR code (abbreviated from Quick Response Code) is a complex matrix of coded information represented in a square shape. If you want to know more about QR codes, head here.
Point your mobile so that the QR code is visible on your screen and fits within the scan area. The mobile will automatically pair with the web version of WhatsApp and load a similar view in the browser.
Once you are in, you can search for your friends in the same fashion as you would on your mobile. You can send messages and share images. You also have the option to record your voice using the microphone button, which is placed exactly where you would find it in the mobile version (adjacent to the message typing area).
The only thing I really miss is the ability of the web version to forward received messages. Hopefully, WhatsApp will bring this feature soon. Till then, you can still enjoy the luxury of staying plugged in on your desktop, without your boss knowing that you are still a “WhatsApp Phantom“.

Friday, July 10, 2015

Script to find complete months between two given dates

The script below helps in finding the complete months between two given dates. In this script, we take the following dates as an example:

Startdate =2015-07-05
Enddate =2016-01-06

/**********************Script Start**************************/

IF (object_id('tempdb..#month') IS NOT NULL)
    DROP TABLE #month

CREATE TABLE #month (id INT IDENTITY(1,1), MonthStartDate DATETIME, MonthEndDate DATETIME)

DECLARE @startdate AS DATETIME
DECLARE @enddate AS DATETIME

DECLARE @monthstartdate AS DATETIME
DECLARE @monthenddate AS DATETIME

SET @startdate = '2015-07-05'
SET @enddate   = '2016-01-06'

-- First complete month start: the month of @startdate if it begins on the 1st,
-- otherwise the 1st of the following month
SELECT @monthstartdate = CASE WHEN DAY(@startdate) = 1
                              THEN @startdate
                              ELSE DATEADD(mm, 1, DATEADD(dd, -DAY(@startdate) + 1, @startdate))
                         END

-- Last day of that month
SELECT @monthenddate = DATEADD(dd, -1, DATEADD(mm, 1, @monthstartdate))

IF @enddate < @monthenddate
BEGIN
    -- No complete month lies between the two dates
    INSERT INTO #month
    SELECT NULL, NULL
END
ELSE
BEGIN
    -- Add one row per month for as long as the whole month ends on or before @enddate
    WHILE (@enddate >= @monthenddate)
    BEGIN
        INSERT INTO #month
        SELECT @monthstartdate, @monthenddate

        SELECT @monthstartdate = DATEADD(mm, 1, @monthstartdate)
        SELECT @monthenddate   = DATEADD(dd, -1, DATEADD(mm, 1, @monthstartdate))
    END
END

SELECT cast(MonthStartDate as DATE) as MonthStart, cast(MonthEndDate as DATE) as MonthEnd FROM #month

/**********************Script End**************************/

This script gives the following result:-

MonthStart MonthEnd
2015-08-01        2015-08-31
2015-09-01        2015-09-30
2015-10-01        2015-10-31
2015-11-01        2015-11-30
2015-12-01        2015-12-31

Script to find the complete weeks within two given dates

Sometimes we need to find the complete weeks between two given dates. The script below helps with this. In this script, I have used the following dates as an example:

Startdate=2015-04-12
Enddate=2015-06-10 

SET DATEFORMAT YMD

IF (object_id('tempdb..#weekdays') IS NOT NULL)
    DROP TABLE #weekdays

DECLARE @StartDate AS DATETIME
DECLARE @EndDate AS DATETIME
DECLARE @Initialweekstart AS DATETIME
DECLARE @weekstart AS DATETIME
DECLARE @weekend AS DATETIME
DECLARE @day AS INT

SET @StartDate = '2015-04-12'
SET @EndDate   = '2015-06-10'

CREATE TABLE #weekdays (weekstart DATETIME, weekend DATETIME)

-- Monday of @StartDate's week (for a Sunday start date this formula returns the following Monday)
SELECT @Initialweekstart = DATEADD(wk, DATEDIFF(wk, 0, @StartDate), 0)

IF @Initialweekstart < @StartDate
BEGIN
    -- The week containing @StartDate is incomplete, so begin from the next Monday
    SET @weekstart = DATEADD(wk, 1, DATEADD(wk, DATEDIFF(wk, 0, @Initialweekstart), 0))
    PRINT @weekstart
    SET @weekend = DATEADD(dd, 6, @weekstart)
END
ELSE
BEGIN
    SET @weekstart = @Initialweekstart
    SET @weekend = DATEADD(dd, 6, @weekstart)
END

-- Add one row per week for as long as the whole week ends on or before @EndDate
WHILE @weekend <= @EndDate
BEGIN
    INSERT INTO #weekdays (weekstart, weekend)
    VALUES (@weekstart, @weekend)

    SET @weekstart = DATEADD(wk, 1, DATEADD(wk, DATEDIFF(wk, 0, @weekstart), 0))
    PRINT @weekstart
    SET @weekend = DATEADD(dd, 6, @weekstart)
END

-- With the default DATEFIRST setting, 6 and 7 correspond to Friday and Saturday:
-- if @EndDate falls on one of those days, the final (partial) week is also included
SELECT @day = DATEPART(dw, @EndDate)
IF @day = 6 OR @day = 7
BEGIN
    INSERT INTO #weekdays (weekstart, weekend)
    VALUES (@weekstart, @weekend)
END

SELECT CAST(weekstart AS DATE) as WeekStart, CAST(weekend AS DATE) as WeekEnd FROM #weekdays

This script gives the following result:-

WeekStart        WeekEnd
2015-04-13      2015-04-19
2015-04-20      2015-04-26
2015-04-27      2015-05-03
2015-05-04      2015-05-10
2015-05-11      2015-05-17
2015-05-18      2015-05-24
2015-05-25      2015-05-31
2015-06-01      2015-06-07


Sunday, February 1, 2015

Importance of Best Practices in database programming

For any programming language, just writing code is not enough; it should be written using best practices. This article explains the disadvantages of writing code without best practices, and later on how best practices can be applied to our database code.

Code written without using best practices has the following disadvantages:-

1) Difficult to maintain:- If the code is not written properly, it will be very difficult to modify in the future, and sometimes it becomes so messy that we have to rewrite the code even for a small change.

2) Lots of unused code left behind, which makes the code unnecessarily lengthy:- If we do not do the design work correctly, we keep changing the code again and again, which results in a lot of rework. This often leads to a situation where a procedure is left with a function or block of queries that is no longer required but keeps executing.

3) Difficult to understand the code:- One should write code which can be easily understood by other team members. If we do not write proper comments for each block of code, it becomes difficult to understand the purpose behind that block.

4) Poor performance due to improperly written complex queries:- In a database, there can be multiple ways to write a query which fetches/updates/deletes records from the tables, but performance depends on how we write those queries. If we do not write optimized queries, it badly affects the performance of the application.

5) Unexpected errors or behaviour due to poor exception handling:- If exception handling is not done properly, query execution may break midway and wrong data may be inserted into the tables, which will corrupt the database.

6) Locking of tables for long durations due to poor transaction handling:- Transactions should be used so that, in case of any exception or error, uncommitted data is rolled back. But sometimes transactions are used so poorly that they lock the main tables for a long time, which can lead to deadlocks, and long locking periods result in poor application performance.

7) Difficult to debug:- If the code is written in a very messy way with lots of unused code, it becomes difficult to debug and find errors.

Best practices can be divided into 3 sections

1) How much time you spend in the design phase of the database

2) How you write the block of code

3) How you write the SQL queries 

Time spent on the design of the database:-

Best practices say that the more time you spend in the design phase, the less time you spend on coding, fixing bugs and rework. This does not mean that one should spend all the time in the design phase itself, but one should spend about 40%-50% of the time on design, depending on the project's complexity. This is true not only for database programming but for any other language too. In any kind of programming, it is said that we should spend more time on design than on coding. For example,

40% of the time was spent on design;

30% of the time was spent on coding;

20% of the time was spent on testing;

10% of the time was spent on documenting. 

But most developers do exactly the opposite: they spend less time on design and most of the time on coding, reworking the code and fixing bugs. Before starting to write any code, one should have done proper homework on the design, and a detailed technical document should be created which describes how the requirement can be divided into small sections such as stored procedures, functions, etc. One should also match that technical document against the requirement document to make sure it meets most of the requirements.

Best practices for writing code blocks

According to best practices, we should write code keeping the following parameters in mind:-

1) Maintainability:- The code should be written in such a way that if we need to add further queries or modify existing ones, we can do so easily, without having to rewrite the code again and again for future changes.

2) Reusable components:- Instead of writing lengthy stored procedures, we should try to divide them into multiple small stored procedures so that, if needed, we can use these small stored procedures as reusable code components. This saves a lot of time by reusing the same stored procedure again and again.

3) Easy to understand:- Every block of code should have proper comments which clearly define its purpose, so that any other team member, or any new person on your team, can easily understand the code.

4) Proper error handling:- Code should be written in such a way that no error goes unchecked. In case of an error, the CATCH block should give a proper error message. SQL Server provides the TRY...CATCH block for error handling. Unchecked errors sometimes break the code midway, leaving corrupt data in the tables; TRY...CATCH helps in maintaining the integrity of the database.

5) Proper transaction handling:- Transactions should be handled properly in stored procedures so that, in case of an error, uncommitted data can easily be rolled back. Also, transactions should not be applied in a way that keeps tables locked for long durations, as that affects the performance of the application. Instead of wrapping the whole stored procedure in one transaction, multiple transactions can be used, keeping each related block of code in its own transaction. A minimal sketch combining error handling and transaction handling is given after this list.
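
The sketch below combines points 4 and 5: an explicit transaction wrapped in TRY...CATCH and rolled back on error. The table and column names (dbo.Accounts, Balance, AccountId) are purely illustrative.

BEGIN TRY
    BEGIN TRANSACTION;

    -- Two related changes that must succeed or fail together (illustrative table)
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Undo the uncommitted work so the tables are not left half-updated
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    -- Report a meaningful error to the caller
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH;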

Best practices for writing SQL Queries:-

1. All tables in the database should have properly defined relationships using primary keys and foreign keys. This helps in maintaining database integrity and also helps performance, since with each primary key SQL Server automatically creates a clustered index by default.

2. Indexes should be created on the tables, as they increase the performance of SQL queries. With recent versions of SQL Server, we can use a covering index to include all the columns of the select query in the index. But too many indexes can also decrease performance, as there is a cost in maintaining them.

3. Find the indexes which need to be reorganized or rebuilt. Indexes are created to increase the performance of the database, but over time, frequent insertion, deletion and updating of data in the tables causes fragmentation, which reduces the effectiveness of the indexes. We should check fragmentation regularly and reorganize or rebuild the indexes accordingly.

4. Use SET NOCOUNT ON/OFF, as it improves the performance of queries. Unless there is a need to know the number of rows affected by the execution of a query or stored procedure, we should use SET NOCOUNT ON at the beginning of the code block and SET NOCOUNT OFF at the end. Calculating and returning the number of rows affected adds extra cost (see the sketch after this list).

5. Try to avoid looping by using a set-based approach, but if looping is necessary, try to avoid cursors and replace them with a WHILE loop, CASE expressions or sub-queries. Until a cursor is deallocated, memory is occupied by the cursor data, and with a large amount of data this can decrease the performance of the SQL queries.

6. Make proper use of temporary tables and table variables. Both have their own advantages and disadvantages. If the number of rows to be stored is small, a table variable is a good option; if the number of records is large, we should use a temporary table, since we can create indexes on it, which helps performance.

7. Use joins instead of sub-queries. We should prefer joins over sub-queries wherever possible, because SQL Server often ends up joining the intermediate result sets produced by the sub-queries anyway, and an explicit join is usually easier to optimize and to read.

8. Use the DISTINCT keyword intelligently. DISTINCT adds additional cost to the query, so we should use it only when we genuinely need unique results.

9. Proper error handling using TRY...CATCH should be done so that execution of the script does not stop midway due to errors and table data does not get corrupted.

10. Do not use "Select * from tablename" when you only need a few columns from the table. In the select list we should mention only those columns whose data we require, since "select *" returns data from all the columns of the table and decreases the performance of the query.

11. Never use the "sp_" prefix in your stored procedure names. If we use the "sp_" prefix, SQL Server first searches for the procedure among the system procedures and only then among the user-created stored procedures, which wastes time on every call.

12. Use SQL Profiler to monitor SQL performance. SQL Profiler is a tool provided with SQL Server; users can use it to find missing indexes or statistics.

13. Use stored procedures instead of inline queries in the application code. Stored procedures bring many benefits, including reusability, better security of database records through access rights on the procedures, good performance due to reuse of the execution plan maintained by SQL Server, easier maintenance and more.

14. If we do not need unique results, use UNION ALL instead of UNION.
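
A minimal sketch of a stored procedure that applies a few of the points above: SET NOCOUNT ON, an explicit column list instead of SELECT *, and no "sp_" prefix. The table and column names (dbo.Employee, EmpId, EmpName, DeptId) are illustrative only.

CREATE PROCEDURE dbo.usp_GetEmployeesByDept
    @DeptId INT
AS
BEGIN
    -- Suppress the "rows affected" messages for every statement in the procedure
    SET NOCOUNT ON;

    -- Select only the columns the caller actually needs
    SELECT EmpId, EmpName, DeptId
    FROM dbo.Employee
    WHERE DeptId = @DeptId;
END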

Summary:-

This article tried to show the disadvantages of not following best practices and how we can reduce rework by spending proper time on the design of the database. It also threw light on some best practices which we can use during coding to make our database code more flexible, reusable, easier to understand and better optimised.