Query Tuning Approach – For SQL Server and SQL Azure DB – Part 2

It’s really important for a DBA to identify and tune slow queries so that applications can perform at their best. If you are interested in learning query tuning, you should at least have an understanding of the following:

  1. Understand the phases of query processor/optimizer
  2. Purpose of query optimizer
  3. Indexing
  4. Statistics
  5. Types of execution plans (Estimated Vs Actual)
  6. Various execution plan operators like Index Seek/Scan, Joins, conversions, Compute Scalar, etc.
  7. Cardinality estimation
  8. Understand the purpose of data distribution in the table
  9. Various DMVs to understand the above information
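As a minimal sketch of point 9, much of this information can be pulled from the plan-cache DMVs; for example, the top resource-consuming statements can be listed like this (the ordering column and `TOP` value are illustrative choices):

```sql
-- Top 10 statements by total CPU time, with their cached text.
-- Order by total_logical_reads instead to focus on IO pressure.
SELECT TOP (10)
       qs.query_hash,
       qs.execution_count,
       qs.total_worker_time,
       qs.total_logical_reads,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```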

Based on my experience, the query tuning world revolves around the above topics. If you manage to get a good understanding of them, you have already learnt why a query performs slowly. The rest is an art which one learns with experience.

We tried to cover these topics briefly during our SQL Server Delhi NCR community event as well. Though these topics are really big, we tried to share our field experience and the must-knows on each of them.

There are two approaches which we discussed in the event:
1. Proactive
2. Reactive

1. Proactive approach: This approach really helps to reduce the chances of major performance bottlenecks, and to understand the root cause of the ones that do occur. MDW (Management Data Warehouse) is a great tool which is really undervalued. I have personally implemented it in many customers’ environments, and it has been really helpful to baseline the workload and identify the top bottlenecks and their causes. For more info, read this post.

If we understand the system really well, we can identify the cause of issues quickly. Generally, when questions like ‘The application was working fine last month and suddenly performance started degrading’ arise, we have no answer on what could have gone wrong. But if we have a performance baseline of the good scenario, we can simply compare it with the current state and identify the difference.

Moreover, optimal indexing plays a great role in optimal performance. Many DBAs tend to ignore this piece and work on indexing only when missing-index warnings are seen in execution plans. But as a proactive measure, if we chuck out unused/duplicate indexes from the database, it really helps both DDL and DML operations. This post is really helpful to find out such information from your server.
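As a sketch of that proactive check, unused nonclustered indexes can be surfaced from sys.dm_db_index_usage_stats. Note that these counters reset on instance restart, so judge them only over a long enough uptime, and review candidates before dropping anything:

```sql
-- Nonclustered indexes in the current database with no recorded
-- seeks/scans/lookups since the last restart: candidates for review.
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups,
       us.user_updates          -- maintenance cost paid for no reads
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
       ON us.object_id   = i.object_id
      AND us.index_id    = i.index_id
      AND us.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND (us.index_id IS NULL
       OR us.user_seeks + us.user_scans + us.user_lookups = 0)
ORDER BY us.user_updates DESC;
```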

In the first post of this series, I discussed how to get the top resource-intensive queries using the query/plan hash. Once we have this information, we can simply start tuning a query after looking at its IO/time statistics and execution plan.
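Capturing those statistics is a one-liner before running the suspect query (the query shown below is a hypothetical stand-in):

```sql
-- Emit per-table logical/physical reads and parse/compile/execution
-- times to the Messages tab, alongside the actual plan
-- ("Include Actual Execution Plan", Ctrl+M in SSMS).
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT OrderID, Qty                 -- hypothetical query under investigation
FROM dbo.SalesOrders
WHERE CustomerID = 42;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```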

2. Reactive approach: This approach is generally followed when the issue is happening right now and we need to resolve it as quickly as possible. In this approach, we first try to find the cause of contention, which may be IO, CPU or memory. Once we identify the cause of contention, we pick up the queries accordingly.


If the cause of contention is CPU, the steps mentioned in this post can be followed. If it’s memory/IO, we need to focus on high-logical-read queries and try to tune them. In the first post of this series, I shared the queries used to identify the top logical-read queries. Once we have those queries, we need to capture their IO/time statistics along with their execution plans. As we know, the DMVs share only the compile-time plan; for the run-time (actual) execution plan, we need to run the query.
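For the memory/IO case, a hedged sketch of such a top-logical-reads query (the `TOP` value is an arbitrary choice):

```sql
-- Top 10 cached statements by cumulative logical reads,
-- with an average per execution to separate hot from merely frequent.
SELECT TOP (10)
       qs.total_logical_reads,
       qs.execution_count,
       qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
       st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
```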

Once we have these statistics, my approach is as follows:

1. Look at the IO statistics and identify which table has the highest logical/physical reads.
2. Look at the time statistics and see how long the query took to compile. Compile time may be high for really big reporting queries.
3. Once the table with the highest logical reads is identified, look at the operator with the highest cost percentage in the execution plan; it will mostly be an operation on that same table.
4. Look for and fix index/table scans or key lookups in the execution plan.
5. Compare the estimated and actual number of rows for each operator. If the actual rows are much higher than the estimated rows, check the statistics on the table/columns.
6. Look for warnings in the plan about hash/sort spills, missing statistics or missing indexes.
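For step 5, statistics freshness can be checked like this (table and index names are hypothetical; sys.dm_db_stats_properties requires SQL Server 2008 R2 SP2 / 2012 SP1 or later):

```sql
-- When were the statistics last updated, how many rows were sampled,
-- and how many modifications have happened since? Stale or under-sampled
-- statistics are a common cause of bad row estimates.
SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.rows_sampled,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.SalesOrders');   -- hypothetical table

-- Full histogram and density detail for one statistic:
DBCC SHOW_STATISTICS ('dbo.SalesOrders', 'IX_SalesOrders_CustomerID');
```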

The above steps will resolve 40-50% of cases. Generally, indexing plays a vital role, and fixing that alone will often make the query run faster. When none of the above helps, we need to go a bit deeper: cardinality estimation, which really depends on the data distribution of the table.


The majority of query issues are caused by wrong cardinality estimates. Once we identify an issue with cardinality estimation, we need to dig in and find out what the cause could be. Always try to calculate how the estimated number of rows was arrived at by the query optimizer, as there is a formula/reason behind every such calculation.
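A simple example of such a calculation, assuming a hypothetical table: when the predicate value is unknown at compile time (a local variable), the optimizer cannot use the histogram and typically falls back to the density vector.

```sql
-- With a local variable the optimizer cannot sniff @cust, so the
-- estimate is roughly:  "All density" (from the density vector shown
-- by DBCC SHOW_STATISTICS) * total rows in the table.
DECLARE @cust int = 42;

SELECT OrderID, Qty
FROM dbo.SalesOrders            -- hypothetical table
WHERE CustomerID = @cust;

-- e.g. if All density = 0.001 and the table has 1,000,000 rows,
-- the estimated number of rows will be about 1,000, regardless of
-- how many rows customer 42 actually has.
```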

There are two really great videos from PASS 2014 which talk about major execution plan issues and cardinality estimation. If you go through them, you will understand the logic behind this discussion.

Cardinality Estimations:

Various Query Execution plan patterns:

This post covers the summary of the Delhi NCR community December event, session 2 – Learn SQL Query Tuning.

PPT can be found here. Demo Scripts can be found here.

SQL UG meet Delhi NCR – Theme – Performance and Tuning of SQL Server

This event was held on 19th September, and we had DB professionals coming from all over the Delhi NCR region. The agenda of this event was:
1. SQL Performance Troubleshooting using PSSDIAG and SQLNexus
2. Reduce downtime on Production Server – How to Manage SQL Server Proactively
3. Increase IOPS to handle SQL workloads using Windows 2012 storage spaces

SQL Performance Troubleshooting using PSSDIAG and SQLNexus: This session was delivered by Amit Khandelwal, Support Escalation Engineer, India GTSC. He:
1. Explained PSSDIAG configuration and SQL Nexus in detail.
2. Helped the audience understand the use of these tools while troubleshooting performance issues.
3. Explained IO issues and tips to troubleshoot them.
4. Shared some trace flags which can be enabled on all SQL environments as best practices.
5. Shared some tricks to try while troubleshooting SQL performance before delving deeper.

Session PPT can be found here

Reduce downtime on Production Server – How to Manage SQL Server Proactively: This session was delivered by Harsh Chawla, Premier Field Engineer, Microsoft Services (myself), on how to be more proactive while working on a production server. This was a philosophical session on how a DBA can change his mindset while working on the database and be more efficient on the job. The complete discussion was built around a production-down scenario and how it could have been avoided, with a major focus on these three rules:

1. Understand your “Present” – Just be aware of the application, business purpose/impact, SLA/RPO/RTO and hardware configuration of the SQL Server you are working on.
2. Invest for a better “Future” – Follow the right practices and have the correct configuration in place to avoid issues in the future.
3. Don’t ignore the “Past” – Have the right monitoring and auditing solutions in place to track the cause of past issues.

Session PPT can be found here

Increase IOPS to handle SQL workloads using Windows 2012 storage spaces: This session was delivered by Gaurav Srivastava, Service Engineer, Microsoft IT. He talked about how Storage Spaces can give you high disk performance even without using a SAN. Storage Spaces is a feature of Windows Server 2012 and is very useful when you deploy SQL Server on Azure VMs. Nevertheless, the same feature can be used for on-premises SQL Server deployed on Windows Server 2012. He talked about:
1. Storage Pools
2. Disk caching
3. Storage spaces types – Simple, Mirror and Parity
4. Tools like Diskspd and SQLIO to see disk IOPS and latency
5. Shared the demo of creating Storage pools and spaces using Powershell and GUI

Session PPT and Scripts can be found here

Disclaimer – The views expressed on this website/blog are mine alone and do not reflect the views of my company. All postings on this blog are provided “AS IS” with no warranties, and confer no rights.


Configure SQL Agent alerts to notify you of performance issues

In every production environment, we have to monitor various aspects of the SQL instance proactively to avoid outages. Many times there are requirements like:

Whenever the size of a database increases beyond a specific size – alert me (email/text me)
Whenever the procedure cache size increases beyond a specific size – alert me
Whenever Page Splits/sec increases beyond a specific number – alert me
and so on.

Today, we are going to talk about how we can achieve this using SQL Agent alerts. It comes in very handy when we don’t want to write T-SQL code or use any third-party tool.

In SQL Server Agent we can create 3 types of alerts:
1. SQL Server event alert
2. SQL Server performance condition alert
3. WMI event alert

In this blog I will talk only about the second one, i.e. the SQL Server performance condition alert.

To start with the SQL Server performance condition alert: I am sure everyone has heard of the DMV sys.dm_os_performance_counters. In this DMV’s output we get SQL Server’s performance monitor data. Yes, that’s what SQL Agent queries in the background.
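You can query the DMV directly to see the same raw data the alert polls; for example:

```sql
-- Raw performance-counter data as SQL Server exposes it.
-- Note: many counters (including Page Splits/sec) are cumulative here;
-- per-second rates are derived by sampling twice and taking the delta.
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page Splits/sec';
```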

1) Here is how you create this alert: right-click on Alerts (under SQL Server Agent) and select New Alert.

Then select “SQL Server performance condition alert” as the alert type.

2) Click on the Object combo box and select the object containing the counter you want to monitor.

3) Click on the Counter combo box and select the counter.


4) Select the instance you want to monitor; by default it will be the one from which you are creating the alert.

5) Select the condition from the “Alert if counter” combo box and then set the value.

Once this is configured, the remaining pages (1. Response, 2. Options) can be configured as described in this article: http://msdn.microsoft.com/en-us/library/ms191508.aspx

One more important thing you may be wondering is how this type of alert works. As per my testing, when we configure the alert, SQL Agent builds a query over the sys.dm_os_performance_counters DMV and runs it every 20 seconds. Once the condition you set becomes true, the notifications are sent.
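The same alert can also be scripted instead of clicking through the GUI; here is a sketch using msdb.dbo.sp_add_alert (the alert name, threshold and operator name are assumptions, and on a named instance the object prefix becomes MSSQL$InstanceName instead of SQLServer):

```sql
-- Performance-condition format: 'Object|Counter|Instance|Comparator|Value'
-- (the instance slot is empty for counters without instances).
EXEC msdb.dbo.sp_add_alert
     @name = N'Page splits high',                                  -- assumed name
     @performance_condition = N'SQLServer:Access Methods|Page Splits/sec||>|100';

EXEC msdb.dbo.sp_add_notification
     @alert_name = N'Page splits high',
     @operator_name = N'DBA_Team',      -- assumed existing operator
     @notification_method = 1;          -- 1 = e-mail
```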

You can even execute a job in response to the alert along with the notification. E.g., if free space in the database goes below the threshold, I may want to add another .ndf file to the database and also send a notification.
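A job step for that response could be as simple as the following (database name, file path and sizes are assumptions for illustration):

```sql
-- Add a secondary data file when free space runs low.
ALTER DATABASE SalesDB                      -- hypothetical database
ADD FILE (
    NAME = SalesDB_data2,
    FILENAME = 'D:\Data\SalesDB_data2.ndf', -- assumed path
    SIZE = 1024MB,
    FILEGROWTH = 256MB
);
```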


Rebuild SQL performance counters

Issue description
On SQL Server 2005, there was a failure while installing SQL updates. On checking the logs at C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\LOG, the following errors were found:

12/12/2011 09:28:31.734 Registry: Opened registry key “Software\Policies\Microsoft\Windows\Installer”

12/12/2011 09:28:31.734 Registry: Cannot read registry key value “Debug”, error 0

12/12/2011 09:28:43.777 MSP Error: 29528  The setup has encountered an unexpected error while Installing performance counters. The error is: The specified driver is invalid.

Action Plan

1. Change to the BINN directory of the SQL Server instance you want to correct. (Ex: C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\Binn)

2. Unload the counters:

For a default instance of SQL Server, run this command:

unlodctr MSSQLSERVER

For a named instance of SQL Server (e.g. called namedInstance), run this command:

unlodctr MSSQL$namedInstance

3. Check the .ini files by running this command at the command prompt: dir *.ini

Execute lodctr <<perf-sql* matching the counters you desire to load>>

e.g. perf-MSSQL$SQL2008sqlctr.ini for a SQL instance.

The command will be: lodctr /T perf-MSSQL$SQL2008sqlctr.ini. Once it’s executed successfully, please follow the next step.

4. Restart the machine.

5. For SQL Server 2008 on Windows 2008, follow the same steps, except there is no need to restart the machine. In this scenario, restart the Remote Registry service and the issue will be resolved.