Query Writing Guidelines for SQL Server DBAs and Developers

Working on SQL Server performance and tuning for the last 8 years has given me great experience in understanding the common practices at customers' end. Recently, I counted the companies in India I have worked with during my tenure at Microsoft; the number is around 48. I wanted to share the common trends and my observations with you through this post. It may be a great help in sorting out some really common but overlooked issues on the job as SQL DBAs and developers.

I have observed a disconnect between the DBA and developer teams most of the time. The common dialogues which I have heard from both the DBAs and the developers are:

DBA:
1. Developers don't know how to write a query
2. Developers create indexes without knowing their impact
3. Developers never look at the execution plans of their queries before releasing them to the production server

Developers:
1. DBAs don't do anything 🙂
2. We have no visibility into the queries that are performing badly
3. Even if we have the list of queries, we don't know how to tune them
4. We don't know what to look for in the execution plans
5. There is no time for query tuning because of strict deadlines

If we see all this from the top, there's no one at fault. The DBAs are agitated because the entire blame is put on them, and the developers have to listen to the music from the DBA team. It's all about educating the developers on how to write optimal queries, the repercussions of bad code, and how it can bring the entire system down. They need to try their best to avoid writing bad code, and there has to be an efficient review process in place.

The DBAs, in turn, need to help the developers with education on SQL query writing and tuning, and explain the challenges they face when the queries are bad. While all that is being done, the DBAs also need to tune the bad code so that the application can perform optimally. Moreover, if possible, have a monthly session on which queries were tuned and the benefit it yielded for the application. The bad queries should be shared with the developer team at regular intervals so that they can be tuned or rewritten by the developers. The whole process has to work in tandem, and wherever I have observed a disconnect between the teams, the duration of SQL Server downtime is higher.

One big improvement area which I have observed for both teams is deep conceptual knowledge of how indexing works. That's the reason for having multiple sessions and blog posts on indexing itself:

https://dbcouncil.net/2015/06/03/which-one-is-better-composite-index-or-index-with-included-columns/
https://dbcouncil.net/2015/06/04/which-index-is-better-composite-index-or-index-with-included-columns-part-2/
https://dbcouncil.net/2015/06/09/index-session-scripts-for-sql-community-delhi-ncr/
https://dbcouncil.net/2014/06/27/sql-server-is-my-table-indexing-optimal-2/

All in all, if developers start writing optimal queries, there are lower chances of performance degradation due to application code, and there is less query tuning work for the DBAs. There should be monthly sessions between DBAs and developers on tuning the application queries and on query writing education. It will not only help the teams work in tandem but will also help them resolve the common challenges they face during the query writing/tuning process.

We had organized a session on Optimal Query Writing, which was delivered by my colleague Sourabh Agarwal. Please go through this post and let either of us know if you have any questions. It covers some best practices for query writing which can be really helpful for developers.

Check the summary of all the sessions on query writing and tuning here:
https://dbcouncil.net/2015/12/24/delhi-ncr-community-december-event-session-1-leverage-query-store-for-query-troubleshooting/
https://dbcouncil.net/2015/12/28/query-tuning-approach-for-sql-server-and-sql-azure-db-part2/

HTH!

Disclaimer:
The views expressed on this website/blog are mine alone and do not reflect the views of my company. All postings on this blog are provided “AS IS” with no warranties, and confer no rights.

Upgrade Web/Business edition databases to Basic/Standard/Premium – Using PowerShell and GUI

As we all know, Web and Business edition databases were retired on 12th September, 2015. This means you won't be able to create Web or Business edition databases after 12th September, 2015. I was checking this post and found that the databases will be migrated internally by Microsoft on your behalf based on the following criteria:

[Screenshot: migration criteria for Web/Business edition databases]

Before the upgrade, you will receive an email 14 days prior to the scheduled upgrade. In most cases the performance level and billing will remain the same, but in a few cases you may not get the same performance. The intent of this post is to guide you through the various options you can leverage to upgrade your databases.

As you know, Azure captures a lot of telemetry data for predictive analysis. Based on that data, we have tools like the Index Advisor, discussed in my previous post – https://dbcouncil.net/2015/07/07/sql-database-index-advisor-v12-sql-azure-databases/. For the upgrade from the Web and Business tiers, we have a tool called the pricing tier recommendation, which we can leverage to decide which pricing tier to go for:

1. Open Portal – https://ms.portal.azure.com
2. Click Browse, select your database, and click on the pricing tier:
[Screenshot: pricing tier tile on the database blade]

3. Here you will see the recommended tier:
[Screenshot: recommended pricing tier]

4. If you click on “Click here to view usage details”, you will see the data captured from the telemetry collected by the Azure services.

Now, let's say you have multiple databases which you want to upgrade; it's not easy to do this manually for every database through the portal. For that, you have the option to use a PowerShell script. Here is how you do that:

PowerShell script – to get the database-level upgrade advisor recommendations:

Step 1 – Sign in and switch to the Resource Manager mode:

Add-AzureAccount
Switch-AzureMode -Name AzureResourceManager

Step 2 – Point the script at your subscription and server, then request the upgrade hints:

$SubscriptionName = 'Test'
$ServerName = 'Test'
$ResourceGroupName = 'Default-SQL-SoutheastAsia'

Select-AzureSubscription $SubscriptionName

# Each entry lists the recommended target edition/service level for a database on the server
$hint = Get-AzureSqlServerUpgradeHint -ResourceGroupName $ResourceGroupName -ServerName $ServerName
$hint.Databases

The above script can be really helpful when you have a large number of databases. Later on, you can process that information and upgrade your databases accordingly. Moreover, you can write further PowerShell scripts to upgrade the databases. Check this article for help – https://azure.microsoft.com/en-us/documentation/articles/sql-database-upgrade-server/
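
If you want to go one step further and actually apply a recommendation, here is a minimal sketch of upgrading a single database. It assumes the newer AzureRM module (cmdlet names differ from the Switch-AzureMode style used above), and the server, database and target tier names are placeholders:

# Minimal sketch, assuming the AzureRM module; sign in first with Login-AzureRmAccount.
# All names and the target tier below are placeholders - pick the tier suggested by the hint.
Set-AzureRmSqlDatabase -ResourceGroupName 'Default-SQL-SoutheastAsia' `
    -ServerName 'test' `
    -DatabaseName 'MyWebEditionDb' `
    -Edition 'Standard' `
    -RequestedServiceObjectiveName 'S2'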

If you are a SaaS company, you may want to get the elastic pool recommendations using a PowerShell script:

Step 1 – Sign in and switch to the Resource Manager mode (skip this if you have already done it above):

Add-AzureAccount
Switch-AzureMode -Name AzureResourceManager

Step 2 – Request the hints again, but this time look at the elastic pool recommendations:

$SubscriptionName = 'Test'
$ServerName = 'BCDDS'
$ResourceGroupName = 'Default-SQL-SoutheastAsia'

Select-AzureSubscription $SubscriptionName
$hint = Get-AzureSqlServerUpgradeHint -ResourceGroupName $ResourceGroupName -ServerName $ServerName

# Databases that Azure suggests grouping together into elastic pools
$hint.ElasticPools

The above script will help you identify the databases which can be put into different pools.
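
As a rough sketch of what you could do with that output (again assuming the AzureRM module, with hypothetical pool, server and database names), you can create a pool and move the suggested databases into it:

# Rough sketch, assuming the AzureRM module; pool, server and database names are placeholders.
New-AzureRmSqlElasticPool -ResourceGroupName 'Default-SQL-SoutheastAsia' `
    -ServerName 'bcdds' -ElasticPoolName 'SaaSPool1' `
    -Edition 'Standard' -Dtu 100 -DatabaseDtuMin 0 -DatabaseDtuMax 20

# Move one of the recommended databases into the new pool
Set-AzureRmSqlDatabase -ResourceGroupName 'Default-SQL-SoutheastAsia' `
    -ServerName 'bcdds' -DatabaseName 'TenantDb001' `
    -ElasticPoolName 'SaaSPool1'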

References – https://azure.microsoft.com/en-us/documentation/articles/powershell-azure-resource-manager/
https://azure.microsoft.com/en-us/documentation/articles/sql-database-upgrade-server/

HTH!

For a SQL Server DBA – How to start Azure Learning – Part 5 (SQL Azure Database Monitoring)

In continuation of this series, I will write about how we can monitor SQL Azure DB. Before I get into the monitoring of SQL Azure DB, I just want to discuss the major work which a DBA does on a day-to-day basis:

1. Database Management e.g. creating/adding/removing files
2. Backup/Restore
3. Database Security
4. HADR
5. Performance Tuning and Optimization
6. Performance Monitoring
7. Database maintenance e.g. Rebuild Index/Update statistics etc.

Let's see how many of these activities are reduced if the database is moved to SQL Azure DB.

1. Database Management – we don't need to add/remove any files. Once the database is moved to Azure, we don't have this option; the database size depends on the performance tier selected.
2. Backup/Restore – it is managed by Microsoft itself, so this task is reduced. You just need to use the GUI for the restore and get your work done.
3. HADR – you just need to configure the HADR options as discussed in my post, and the rest is managed by Microsoft.

For the rest of the activities, the method changes slightly, though logically things remain the same. If you think about it, it's still SQL Server underneath.

4. Database maintenance – activities like rebuilding indexes and updating statistics will still need to be managed and scheduled regularly. Just remember, there are some restrictions around T-log consumption. I will come up with a separate post on the limitations we hit in Azure DB due to resource constraints.

5. Database Security – it will still need to be managed, though there is a change in the way it used to be managed in an on-premise environment.

6. Performance Tuning and Optimization – we will no longer manage the SQL configuration using sp_configure, or the hardware, or the database files, etc. Generally these areas used to be looked at before getting into query tuning. DTUs become the prime focus when it comes to IO/CPU throughput. Query tuning remains the same; we still need to look for index/table scans, implicit conversions, joins etc. in the execution plans.

7. Performance Monitoring – finally, we are at today's topic of discussion. If we talk about what we monitor today for on-premise SQL Server:

1. Memory
2. CPU
3. IO
4. Network
5. SQL Access Methods
6. SQL Memory Clerks
7. SQL Blocking/Deadlocks
8. SQL Transactions
9. DB Size/Growth
10. SQL Configuration changes
11. SQL Connections
12. SQL Wait Statistics
13. Connection failures/Attentions

We generally use various tools like Perfmon, Management Data Warehouse, DMVs and Activity Monitor to look at these resources. For Azure DBs, we have a monitoring dashboard and DMVs to serve the purpose. The monitoring dashboard is detailed enough to cover most of the resources, and for delving deeper we still have the luxury of the DMVs as required. The DMVs related to features we don't have, or which are not required, won't be there.
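
For the DMV side, here is a minimal sketch of pulling the recent resource consumption of a database from sys.dm_db_resource_stats with Invoke-Sqlcmd; the server, database and credentials below are placeholders:

# Minimal sketch: the last few 15-second samples of resource usage for the current database.
# Requires the SQL Server PowerShell tools (Invoke-Sqlcmd); all connection details are placeholders.
$query = @"
SELECT TOP 10 end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
"@

Invoke-Sqlcmd -ServerInstance 'yourserver.database.windows.net' `
    -Database 'TestDB' -Username 'youruser' -Password 'yourpassword' `
    -Query $query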

Just log in to https://manage.windowsazure.com and you will see your database as follows:

[Screenshot: database list in the management portal]

Just click on the database name and you will get an option to select the dashboard, where you can see the database size:

[Screenshot: database dashboard showing the database size]

If you want to use PowerShell to see the DB size, check this post.

Just above the database size there is a monitoring dashboard, but for the detailed view, click on the Monitor option:

[Screenshot: Monitor tab]

It's a test database, so I don't have much data there. If you want to add more counters, simply select “Add Metrics” and you will see:

[Screenshot: Add Metrics dialog]

Most of the counters are self-explanatory, so I will come straight to the point. The counters which are a change to look at, compared to on-premise, are:

DTU percentage
Log IO Percentage
Storage
Data IO percentage

These four counters are really important to understand. I personally think DTU is a little tricky to articulate for a DB professional coming from on-premise SQL Server. Instead of thinking about CPU/memory/storage, we just need to focus on the transactions, concurrent connections, concurrent logins etc. As discussed in the previous posts, Microsoft guarantees the below mentioned metrics for each performance tier, so you just need to shift your thinking away from how much CPU/disk you need and instead focus on how many transactions and connections the application will have.

[Screenshot: performance tier metrics table]

Once the application is deployed on a specific tier, we need to monitor how much of that tier's capacity is being used. If it's 80-90% utilized, we will either need to tune the queries using the same methods we used for on-premise SQL Server, or we will need to scale up to the next level. This can be scheduled for automatic scale up and scale down as well. For more information on resource limits check – https://azure.microsoft.com/en-us/documentation/articles/sql-database-resource-limits/
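
To make that decision concrete, here is a rough sketch (placeholder names again, and assuming the AzureRM module plus Invoke-Sqlcmd) which averages the DTU-style percentages over the last hour and bumps the service objective when they stay above 80%:

# Rough sketch: average the max of CPU/data IO/log IO percentages (a common DTU% approximation)
# over the last hour and scale up if it exceeds 80%. All names and the target tier are placeholders.
$query = @"
SELECT AVG(dtu_pct) AS avg_dtu_pct
FROM (
    SELECT (SELECT MAX(v)
            FROM (VALUES (avg_cpu_percent),
                         (avg_data_io_percent),
                         (avg_log_write_percent)) AS metrics(v)) AS dtu_pct
    FROM sys.dm_db_resource_stats
    WHERE end_time > DATEADD(HOUR, -1, GETUTCDATE())
) AS recent;
"@

$result = Invoke-Sqlcmd -ServerInstance 'yourserver.database.windows.net' `
    -Database 'TestDB' -Username 'youruser' -Password 'yourpassword' -Query $query

if ($result.avg_dtu_pct -gt 80) {
    # Move to the next higher service objective (S3 here is just an example)
    Set-AzureRmSqlDatabase -ResourceGroupName 'Default-SQL-SoutheastAsia' `
        -ServerName 'test' -DatabaseName 'TestDB' `
        -RequestedServiceObjectiveName 'S3'
}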

 

For information on DTU – please check this video:

https://channel9.msdn.com/Series/Windows-Azure-Storage-SQL-Database-Tutorials/Scott-Klein-Video-02/player

 

References – https://msdn.microsoft.com/en-us/library/azure/dn369873.aspx

Benchmark Overview – https://msdn.microsoft.com/en-us/library/azure/dn741327.aspx

Resource Limits  – https://azure.microsoft.com/en-us/documentation/articles/sql-database-resource-limits/

I will put the PaaS series on hold for now and move a little bit into IaaS as well. I will write my next post on storage for IaaS.

HTH!

For a SQL Server DBA – How to start Azure Learning – Part 4 (Database Sharding)

In continuation of my previous post, I will write about SQL database sharding (for SQL Azure databases, PaaS) in this post. As we know, we have various tiers of SQL Azure databases:

Service Tier/Performance Level | DTU  | Max DB Size | Max Concurrent Requests | Max Concurrent Logins | Max Sessions | Benchmark Transaction Rate    | Predictability
Basic                          | 5    | 2 GB        | 30                      | 30                    | 300          | 16,600 transactions per hour  | Good
Standard/S0                    | 10   | 250 GB      | 60                      | 60                    | 600          | 521 transactions per minute   | Better
Standard/S1                    | 20   | 250 GB      | 90                      | 90                    | 900          | 934 transactions per minute   | Better
Standard/S2                    | 50   | 250 GB      | 120                     | 120                   | 1,200        | 2,570 transactions per minute | Better
Standard/S3                    | 100  | 250 GB      | 200                     | 200                   | 2,400        | 5,100 transactions per minute | Better
Premium/P1                     | 125  | 500 GB      | 200                     | 200                   | 2,400        | 105 transactions per second   | Best
Premium/P2                     | 250  | 500 GB      | 400                     | 400                   | 4,800        | 228 transactions per second   | Best
Premium/P3                     | 1000 | 500 GB      | 1,600                   | 1,600                 | 19,200       | 735 transactions per second   | Best

If we look at the above table, there is a limit on the size of the database (the maximum size is 500 GB). What if the database which has to be hosted in SQL Azure is bigger than 500 GB, or you need more throughput to handle more transactions?

Generally, for on-premise servers, when we need more throughput we talk about two things:

1. Scale up – add more hardware for better throughput, i.e. add more CPUs/RAM/disk space as per the requirement.
2. Scale out – move some of the workload to another server, e.g. moving read-only traffic to an AlwaysOn secondary server.

For Azure servers, the definition is:

1. Scale up – move to a higher tier for better throughput, e.g. if Standard/S3 is being used, upgrade the tier to P3. But sometimes moving to the highest tier is not enough, and we don't have any option to add hardware.
2. Scale out – move some of the workload to other servers/databases, i.e. split the databases and route the traffic appropriately for better performance.

Sharding comes into the picture when we want to scale out, or when there is no further option for scaling up. If you read my first post, I talked about declaring data types wisely, and that's precisely to keep the space utilization optimal; otherwise, if you hit the maximum size limit of your tier, you may need to move to a higher performance tier (if below 500 GB), or else scale out and go for sharding.

With sharding, we create multiple databases on one or more Azure servers and distribute the data based on a shard key. It's logically the same as table partitioning; the only difference is that instead of keeping the data in the same table, the data is actually moved to different databases. There is a component known as the shard map manager which helps route the database queries to the appropriate shards.
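
Just to illustrate the routing idea (this is a purely conceptual sketch with hypothetical names, not the actual Elastic Scale shard map manager API, which is a .NET client library), a shard map is essentially a lookup from a shard key range to a database:

# Conceptual sketch only: a hypothetical range-based shard map routing a customer ID to a database.
$shardMap = @(
    @{ Database = 'Orders_Shard1'; MinCustomerId = 1;     MaxCustomerId = 49999 },
    @{ Database = 'Orders_Shard2'; MinCustomerId = 50000; MaxCustomerId = 99999 }
)

function Get-ShardDatabase {
    param([int]$CustomerId)
    foreach ($shard in $shardMap) {
        if ($CustomerId -ge $shard.MinCustomerId -and $CustomerId -le $shard.MaxCustomerId) {
            return $shard.Database   # connect to this database for the given customer
        }
    }
    throw "No shard found for customer id $CustomerId"
}

Get-ShardDatabase -CustomerId 73205   # returns 'Orders_Shard2'

The real shard map manager keeps this mapping in a dedicated database and keeps it up to date as shards are split or merged, which is what the tools below handle for you.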

Note – earlier, federations were the option for scaling out, but along with the Web and Business editions, federations are also being retired.

Let’s see how Sharding can be enabled. There are two major areas:

1. Development for Shards
2. Administration/Management of Shards

Under Development for Shards, the options are:
1. Shard Map Management –> Define Groups of Shards for your application.
2. Data Dependent Routing –> Route incoming requests to the correct shard
3. Multi-Shard querying –> Running a query across multiple shards

Under Administration/Management of Shards, the options are:
1. Shard Split/Merge –> Grow/Shrink capacity by adding or removing the databases
2. Shard Elasticity –> Dynamically adjust scale factor of the databases e.g. change the performance tier from standard to Premium or vice-versa

Actually, everything is very well documented already, so I won't re-invent the wheel and will instead share good reference material on this topic.

For the development options (shard map management, data-dependent routing and multi-shard querying), please refer to this link for the step-by-step configuration of sharding. Once the recommended package is downloaded, everything is right there. Please ensure you have Visual Studio 2012 or higher with C# installed:

[Screenshot: Elastic Scale sample in Visual Studio]

If you want to do all this using PowerShell, you can download the package from https://gallery.technet.microsoft.com/scriptcenter/Elastic-Scale-Shard-c9530cbe

For the administration/management options:
1. Shard Split/Merge –> Grow/shrink capacity by adding or removing databases – check https://azure.microsoft.com/en-us/documentation/articles/sql-database-elastic-scale-overview-split-and-merge/
2. Shard Elasticity –> Dynamically adjust the scale factor of the databases, e.g. change the performance tier from Standard to Premium or vice versa – https://azure.microsoft.com/en-us/documentation/articles/sql-database-elastic-scale-elasticity/

Everything related to Sharding is mentioned on this link: https://azure.microsoft.com/en-us/documentation/articles/sql-database-elastic-scale-documentation-map/.

Reference Links –
Video – https://channel9.msdn.com/events/Ignite/2015/BRK3568
Text – https://azure.microsoft.com/en-us/documentation/articles/sql-database-elastic-scale-documentation-map/

In the next post, I will discuss how Azure DB monitoring is done and how it differs from on-premise monitoring.

HTH!

SQL Database Index Advisor – V12 SQL Azure databases

There is a new preview release of the Index Advisor for SQL Azure V12 databases. It can analyze historical performance data and recommend the best-suited indexes (non-clustered indexes only) to improve query performance.

Currently, it’s doable only from the new portal –  https://portal.azure.com/.

Here is how it looks:

1. Sign in to the Azure Portal.
2. Click BROWSE in the left menu.
3. Click SQL databases in the Browse blade.
4. On the SQL databases blade, click the database that you want to review recommended indexes for.
5. Click Index Advisor to open and view the available index recommendations for the selected database.

[Screenshot: Index Advisor blade]
Once you sign up, the portal looks like this:

[Screenshot: Index Advisor recommendations in the portal]

For more information , please check – https://azure.microsoft.com/en-us/documentation/articles/sql-database-index-advisor/

HTH!