Query Tuning Approach – For SQL Server and SQL Azure DB – Part 1

It's been a long time since I last wrote a post. I have been doing a lot of query tuning recently and thought I would share my approach. I have been using it for quite some time now and it has given me a really good success rate.

Just understand one rule: about 80% of the time you will encounter queries that can be tuned with intermediate skills. Only 20% of the time, or even less, will you need extreme query tuning skills. Honestly speaking, I rarely get into situations where a query is too complex to be tuned – really rare. I have been using a really basic approach to combat bad queries, and it has served my customers well.

I assume you already know how to identify which queries should be picked up for tuning, e.g. if you are facing high CPU issues you pick the CPU-intensive queries, and if you are facing memory pressure or slow I/O you pick the queries with high logical/physical reads. Now that you know the resource contention on your system, let's see what to do next:

1. Query to get the top resource-intensive queries: I have picked these queries from the PSSDIAG tool mentioned here

print '-- top 10 CPU by query_hash --'

select getdate() as runtime, *
from
(
SELECT TOP 10 query_hash, COUNT(distinct query_plan_hash) as 'distinct query_plan_hash count',
sum(execution_count) as 'execution_count',
sum(total_worker_time) as 'total_worker_time',
SUM(total_elapsed_time) as 'total_elapsed_time',
SUM(total_logical_reads) as 'total_logical_reads',
max(REPLACE(REPLACE(SUBSTRING(st.[text], qs.statement_start_offset/2 + 1,
CASE WHEN qs.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max), st.[text]))
ELSE qs.statement_end_offset/2 - qs.statement_start_offset/2 + 1
END), CHAR(13), ' '), CHAR(10), ' ')) AS sample_statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
group by query_hash
ORDER BY sum(total_worker_time) DESC
) t

print '-- top 10 logical reads by query_hash --'

select getdate() as runtime, *
from
(
SELECT TOP 10 query_hash,
COUNT(distinct query_plan_hash) as 'distinct query_plan_hash count',
sum(execution_count) as 'execution_count',
sum(total_worker_time) as 'total_worker_time',
SUM(total_elapsed_time) as 'total_elapsed_time',
SUM(total_logical_reads) as 'total_logical_reads',
max(REPLACE(REPLACE(SUBSTRING(st.[text], qs.statement_start_offset/2 + 1,
CASE WHEN qs.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max), st.[text]))
ELSE qs.statement_end_offset/2 - qs.statement_start_offset/2 + 1
END), CHAR(13), ' '), CHAR(10), ' ')) AS sample_statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
group by query_hash
ORDER BY sum(total_logical_reads) DESC
) t

print '-- top 10 elapsed time by query_hash --'

select getdate() as runtime, *
from
(
SELECT TOP 10 query_hash,
sum(execution_count) as 'execution_count',
COUNT(distinct query_plan_hash) as 'distinct query_plan_hash count',
sum(total_worker_time) as 'total_worker_time',
SUM(total_elapsed_time) as 'total_elapsed_time',
SUM(total_logical_reads) as 'total_logical_reads',
max(REPLACE(REPLACE(SUBSTRING(st.[text], qs.statement_start_offset/2 + 1,
CASE WHEN qs.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max), st.[text]))
ELSE qs.statement_end_offset/2 - qs.statement_start_offset/2 + 1
END), CHAR(13), ' '), CHAR(10), ' ')) AS sample_statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
group by query_hash
ORDER BY sum(total_elapsed_time) DESC
) t

print '-- top 10 CPU by query_plan_hash and query_hash --'

SELECT TOP 10 query_plan_hash, query_hash,
COUNT(distinct query_plan_hash) as 'distinct query_plan_hash count',
sum(execution_count) as 'execution_count',
sum(total_worker_time) as 'total_worker_time',
SUM(total_elapsed_time) as 'total_elapsed_time',
SUM(total_logical_reads) as 'total_logical_reads',
max(REPLACE(REPLACE(SUBSTRING(st.[text], qs.statement_start_offset/2 + 1,
CASE WHEN qs.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max), st.[text]))
ELSE qs.statement_end_offset/2 - qs.statement_start_offset/2 + 1
END), CHAR(13), ' '), CHAR(10), ' ')) AS sample_statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
group by query_plan_hash, query_hash
ORDER BY sum(total_worker_time) DESC;

print '-- top 10 logical reads by query_plan_hash and query_hash --'

SELECT TOP 10 query_plan_hash, query_hash, sum(execution_count) as 'execution_count',
sum(total_worker_time) as 'total_worker_time',
SUM(total_elapsed_time) as 'total_elapsed_time',
SUM(total_logical_reads) as 'total_logical_reads',
max(REPLACE(REPLACE(SUBSTRING(st.[text], qs.statement_start_offset/2 + 1,
CASE WHEN qs.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max), st.[text]))
ELSE qs.statement_end_offset/2 - qs.statement_start_offset/2 + 1
END), CHAR(13), ' '), CHAR(10), ' ')) AS sample_statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
group by query_plan_hash, query_hash
ORDER BY sum(total_logical_reads) DESC;

print '-- top 10 elapsed time by query_plan_hash and query_hash --'

SELECT TOP 10 query_plan_hash, query_hash, sum(execution_count) as 'execution_count',
sum(total_worker_time) as 'total_worker_time',
SUM(total_elapsed_time) as 'total_elapsed_time',
SUM(total_logical_reads) as 'total_logical_reads',
max(REPLACE(REPLACE(SUBSTRING(st.[text], qs.statement_start_offset/2 + 1,
CASE WHEN qs.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max), st.[text]))
ELSE qs.statement_end_offset/2 - qs.statement_start_offset/2 + 1
END), CHAR(13), ' '), CHAR(10), ' ')) AS sample_statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
group by query_plan_hash, query_hash
ORDER BY sum(total_elapsed_time) DESC;

2. Check the execution plan for the query: Just remember, wherever you get the top resource-intensive queries from (MDW, Profiler, or the DMVs), you will get an option to retrieve a SQL handle, plan handle, query hash, or plan hash. If you have any of these, you can get the execution plan from the plan cache.

Query to get the execution plan:

This will give you the query statistics and the plan:

select b.*, a.*
from sys.dm_exec_query_stats a
cross apply sys.dm_exec_query_plan(a.plan_handle) b
where a.query_hash = <Query_hash>

If you want the execution statistics specific to a plan, e.g. how many times the plan was reused, the above output gives you:

[Screenshot: execution statistics columns in the query output]

There can be multiple plans for the same query hash; pick the plans with the most reads/CPU/elapsed time relative to their execution count.
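As a rough sketch of that comparison (the per-execution averages are my own addition, not part of the PSSDIAG scripts; substitute the hash you picked above for <Query_hash>):

```sql
-- Rank the cached plans for one query hash by average cost per execution
SELECT query_plan_hash,
       SUM(execution_count) AS executions,
       SUM(total_worker_time)   / NULLIF(SUM(execution_count), 0) AS avg_cpu_us,
       SUM(total_logical_reads) / NULLIF(SUM(execution_count), 0) AS avg_logical_reads,
       SUM(total_elapsed_time)  / NULLIF(SUM(execution_count), 0) AS avg_elapsed_us
FROM sys.dm_exec_query_stats
WHERE query_hash = <Query_hash>
GROUP BY query_plan_hash
ORDER BY avg_logical_reads DESC;
```

The NULLIF guards against a divide-by-zero if a plan has been cached but not yet executed.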

3. Get the query parameters for the runtime plan: Just remember, the plans you get from these DMVs are compile-time plans; to get the actual plan you will need to run the query on your SQL instance. I have observed that DBAs ask the developer team to share the query parameters so that they can run the query on a test environment. In fact, if you have the query plan, you can get the parameters from the plan itself. Let's see how to get that:

Right-click on the graphical execution plan and click "Show Execution Plan XML"; once the XML plan is visible, search for ParameterList as shown below:

[Screenshot: ParameterList element in the XML plan]
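If you prefer to stay in T-SQL, the same ParameterList can be pulled out of the cached plan with XQuery. This is a sketch under the assumption that the plan is still in cache; the showplan namespace declaration is required for the .nodes() path to match:

```sql
-- Extract compile-time parameter values from a cached plan
;WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT pr.n.value('@Column', 'nvarchar(128)')                 AS parameter_name,
       pr.n.value('@ParameterCompiledValue', 'nvarchar(max)') AS compiled_value
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
CROSS APPLY qp.query_plan.nodes('//ParameterList/ColumnReference') AS pr(n)
WHERE qs.query_hash = <Query_hash>;
```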

4. Execute the query: Now it's time to execute the query and analyze the actual statistics. Leverage the commands below to delve deeper into the query execution:

Set statistics IO ON --> Get the I/O statistics of the query:

Table 'residual_demo'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Set statistics Time ON --> Get the time statistics of the query, i.e. how long the query took to compile and run:

SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.

SQL Server Execution Times:
CPU time = 0 ms,  elapsed time = 0 ms.

Set Statistics Profile ON --> Get the execution plan of the query in text format:

[Screenshot: SET STATISTICS PROFILE text-format plan output]

or get the graphical plan by clicking "Include Actual Execution Plan" in Management Studio.
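Putting the session options together, a typical run looks like the template below (dbo.residual_demo is just the demo table from the STATISTICS IO output above; substitute the query you are tuning, with the parameters captured from its plan):

```sql
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- the statement under investigation goes here
SELECT COUNT(*) FROM dbo.residual_demo;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

The I/O and time statistics appear on the Messages tab of the session, not in the result set.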

Depending on the issue you are troubleshooting, the operator to analyze will vary, e.g. if you are tuning a query for a CPU-intensive workload, the first thing to look at will probably be sorts; for high logical reads it will be index/table scans.

In this post we got hold of the execution plan, and now we have to start the tuning process. In the next post, I will talk about the operators you can pick for tuning.

HTH!


A day with SQL Date functions

Recently, I was working on a project where I had to write a lot of date-based reporting queries. I used plenty of date functions in them and thought of writing a blog post about it.

1) I had a table with a column of data type date, and when I selected that column the output was like:

2011-10-11
2011-10-11
2011-10-11
2011-10-11
2011-10-12

But in reports this was not user friendly, so I tweaked the query a little:

select convert(varchar(11), date_col, 100) from dbo.T_activites_cost_execution

and the output was as required:

Oct 11 2011
Oct 11 2011
Oct 11 2011
Oct 11 2011
Oct 12 2011
Oct  3 2011

2) Similarly, there was a requirement to group by year. I tried to use the substring function but got an error on execution:

Msg 8116, Level 16, State 1, Line 1
Argument data type date is invalid for argument 1 of substring function.

Which makes sense, so I had to find another way. Here is what I did:

select YEAR(Date_col) as [Year], SUM(Actual_Cost) from dbo.T_activites_cost_execution
group by YEAR(Date_col)

Please note: Date_col is the column with the datatype date, and Actual_Cost is the cost column being summed.

Output was like:

Year
----------- ---------------------------------------
2010        16398036
2011        22015269

3) Now, after the yearly report I had to create a quarterly one. The query I created was:

select 'Q' + DATENAME(qq, Date_col) AS Qtr, Actual_Cost
from dbo.T_activites_cost_execution

Output was like :

Qtr                             Actual_Cost
------------------------------- ---------------------------------------

Q2                              40698
Q2                              4500
Q3                              4334

There are basically two functions to get a specific part out of a date column:

1. DATEPART --> returns an int
2. DATENAME --> returns a varchar (so string functions such as substring can be applied to the result)

We can even get the second or millisecond part of a date column using either of these functions.
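As a quick illustration of the difference between the two (the literal date here is just an example):

```sql
DECLARE @d datetime = '2011-10-11 10:30:45.123';

SELECT DATEPART(quarter, @d)       AS qtr_number,   -- 4, as an int
       DATENAME(quarter, @d)       AS qtr_name,     -- '4', as a varchar
       DATENAME(month, @d)         AS month_name,   -- 'October'
       DATEPART(millisecond, @d)   AS ms;           -- 123
```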

HTH!

Query to find the columns on which an index exists

select c.name as [table_name], b.name as [column_name], d.name as [index_name]
from sys.index_columns a
join sys.columns b on a.object_id = b.object_id and a.column_id = b.column_id
join sys.objects c on c.object_id = b.object_id
-- the join on index_id is required; without it every indexed column
-- gets paired with every index on the same table
join sys.indexes d on d.object_id = c.object_id and d.index_id = a.index_id
order by c.name

Query to find the LOB and varchar columns in a SQL Server database

For SQL Server 2000 instances:

SELECT a.name as [ColumnName], b.name as [Table Name], c.name
FROM syscolumns a, sysobjects b, systypes c
where a.id = b.id
and a.xtype in (35, 34, 241, 99)
and a.xtype = c.xtype
AND b.xtype = 'U'
AND b.name <> 'dtproperties'
UNION
SELECT a.name as [ColumnName], b.name as [Table Name], c.name
FROM syscolumns a, sysobjects b, systypes c
where a.id = b.id
and a.xtype in (167, 231, 165)
and a.xtype = c.xtype
AND b.xtype = 'U'
AND b.name <> 'dtproperties'

For SQL Server 2005 and later instances:

SELECT a.name as [ColumnName], b.name as [Table Name], c.name
FROM sys.columns a, sys.objects b, sys.types c
where a.object_id = b.object_id
and a.system_type_id in (35, 34, 241, 99)  -- text, image, xml, ntext
and a.system_type_id = c.system_type_id
AND b.type = 'U'
AND b.name <> 'dtproperties'
UNION
SELECT a.name as [ColumnName], b.name as [Table Name], c.name
FROM sys.columns a, sys.objects b, sys.types c
where a.object_id = b.object_id
and a.system_type_id in (167, 231, 165)  -- varchar, nvarchar, varbinary
and a.system_type_id = c.system_type_id
AND b.type = 'U'
AND b.name <> 'dtproperties'