Strategy for tuning a SQL Server VLDB

I have been meaning to write this post for quite some time. I thought it would be good to share my success story of making a VLDB (very large database) run super fast. When I started working on it, the scenario was:

1. The database size – 1 TB
2. Data insertion rate per day – approx. 25 GB
3. Long-running transactions and heavy blocking throughout the day
4. Log shipping failures because of huge log backups, sometimes 100 GB
5. Replication latency due to long-running transactions
6. The biggest table had 10,000,000,000 records (the main cause of the contention)

It was total chaos. At that point I was wondering where to start. As usual, I began by understanding the environment and trying to identify the issues and bottlenecks. MDW (Management Data Warehouse) was a real life saver; together with Perfmon, it helped me assess the health and risks of the overall system.
To read more about MDW, please check this post: https://dbcouncil.net/2013/06/25/amazing-sql-server-tools-every-dba-must-have-part-2/

To start with, I checked the overall SQL Server status: memory, CPU, IO, top bottlenecks and resource-intensive queries. My initial action was just to apply a band-aid solution to restore the system to a normal state. I tuned the top queries, made configuration changes and configured the hardware (e.g. disks and memory) optimally. All said and done, the system started performing better.
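
To give an idea of what that first check looked like, here is a minimal sketch of one such DMV query: it lists the top statements by cumulative logical reads from the plan cache using sys.dm_exec_query_stats (short-lived or recompiled queries will be missing, so MDW and Perfmon still give the fuller picture).

-- Top 10 statements by cumulative logical reads since the last restart (plan cache only).
SELECT TOP (10)
       qs.total_logical_reads,
       qs.execution_count,
       qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;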

Then I asked myself, was that sufficient? The answer was a big no. I had to dig into the system to provide a permanent, or at least a long-term, resolution. So I looked back at my initial observations of the system, and the issue that struck me the most was the heavy scans in the queries (more than 1 billion reads). The problem was not so much that there were 1 billion reads, but how often that many reads were being performed. The question then was: can they be avoided or reduced?

Finally my mission started: I began delving deeper into the system. I found there were many tables with more than 1 billion records and lots of index scans. Having more than 1 billion records in a table seems really common, but sometimes we need to check with management, project leads and DBAs (a query for sizing up candidate tables follows this list):

1. Do we really need this many records in the table (data purging)?
2. Do we actively access all of these records (data archival)?
3. Can the records be archived or at least partitioned?
4. Are the data types and sizes being used optimally?
5. Is the indexing strategy optimal?
6. Can data compression be applied?
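
To size up which tables actually warrant these questions, a quick check like the sketch below (using sys.dm_db_partition_stats) lists the largest tables by row count and reserved space; what counts as "too big" is of course environment-specific.

-- Largest user tables by row count and reserved space in the current database.
SELECT t.name                                  AS table_name,
       SUM(ps.row_count)                       AS total_rows,
       SUM(ps.reserved_page_count) * 8 / 1024  AS reserved_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t
  ON t.object_id = ps.object_id
WHERE ps.index_id IN (0, 1)   -- heap or clustered index only, to avoid double counting
GROUP BY t.name
ORDER BY total_rows DESC;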

Believe me, many times these questions can bring you lots of work 🙂 and eventually relief from the issues mentioned at the beginning of the post. Project leads and management are so busy meeting project deadlines that these activities around huge tables are completely ignored unless there is a major issue.

Any table with more than 1 billion records needs either a partitioning or an archival strategy, in conjunction with a data type assessment, data compression and an indexing strategy. Sometimes the problem is not just badly written queries but also huge, unmanaged data in the tables. In my scenario, I found the tipping point to be exactly that: huge, unmanaged data in the tables.

For a better understanding, let's take a real-life example. Say a food market has 2 million bottles of ketchup, of which 1.8 million have passed their expiry date. If I now have to find the remaining 0.2 million bottles, how much extra time will I spend and how many extra fetches will I do? Of course it is going to be huge. It is always better to keep only what is required, be it bottles of ketchup in a food market, clothes in our wardrobe or, similarly, data in the database.

Again, the "band-aid" solution first: to showcase the value of the plan, it's always good to start with the band-aid solution, i.e. the one that can show results with minimal tweaks and effort. In my scenario, the band-aid solution was:

1. Better Indexing strategy
2. Data Partitioning
3. Data compression

These three points seemed achievable as band-aid solutions. Planning the archival strategy and the data type assessment were big activities and needed a lot of involvement from busy people (management and project leads). On top of that, those two activities might need coding and design changes, which can easily get stuck.

On the better indexing strategy, I will write another blog post soon, but for now I will just brief you on it. It's mostly about removing duplicate/unused indexes and creating better indexes that get more seeks. There is a whole lot of dynamics around it which I will discuss in detail.
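
As a small teaser, a check along these lines (using sys.dm_db_index_usage_stats) is a common starting point for spotting rarely used indexes; treat the output with care, since the usage stats reset on every restart, and this sketch is not the full strategy.

-- Nonclustered indexes with few reads but ongoing write cost: candidates for review.
SELECT OBJECT_NAME(i.object_id)   AS table_name,
       i.name                     AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
       ON us.object_id   = i.object_id
      AND us.index_id    = i.index_id
      AND us.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
ORDER BY ISNULL(us.user_seeks, 0) + ISNULL(us.user_scans, 0) + ISNULL(us.user_lookups, 0);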

For the data partitioning strategy, please refer to my blog post: https://dbcouncil.net/2014/04/08/table-partitioning-have-you-chosen-right-partition-column/
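
Just to illustrate the idea, here is a minimal sketch of monthly partitioning on a date column; the table and column names (dbo.BigTable, CreatedDate) are hypothetical and the boundary values are only placeholders.

-- Monthly partition function and scheme (RANGE RIGHT: each boundary starts a new month).
CREATE PARTITION FUNCTION pf_MonthlyByDate (datetime)
AS RANGE RIGHT FOR VALUES ('2014-01-01', '2014-02-01', '2014-03-01');

CREATE PARTITION SCHEME ps_MonthlyByDate
AS PARTITION pf_MonthlyByDate ALL TO ([PRIMARY]);

-- Placing the clustered index on the scheme spreads the rows across partitions
-- (assumes the table is currently a heap; otherwise rebuild the existing clustered index onto the scheme).
CREATE CLUSTERED INDEX cix_BigTable_CreatedDate
    ON dbo.BigTable (CreatedDate)
    ON ps_MonthlyByDate (CreatedDate);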

For data compression, please refer to this link: http://technet.microsoft.com/en-us/library/dd894051(v=SQL.100).aspx
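
As a small illustration (again with the hypothetical dbo.BigTable), you can estimate the savings first and then rebuild with PAGE compression; keep in mind that data compression is an Enterprise edition feature on the SQL Server versions of that era.

-- Estimate how much space PAGE compression would save for the whole table.
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'BigTable',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- If the estimate looks worthwhile, rebuild all indexes with PAGE compression.
ALTER INDEX ALL ON dbo.BigTable
REBUILD WITH (DATA_COMPRESSION = PAGE);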

You must be wondering how all this relates back to the 1 billion reads in a query. If we have partitioned the big tables, scans and seeks can be limited to the partition level: instead of searching 1 billion records, we may now be searching 25 million, which is far better. On top of that, with a good indexing strategy and data compression, the queries perform more seeks and the number of reads becomes very small. When the reads per execution drop, the queries run faster, which in turn means less load on the disks, fewer long-running transactions, smaller T-log backups and lower data latency in replication. Such a big impact!
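
One simple way to see the effect is to compare logical reads before and after the changes, for example with SET STATISTICS IO ON and a predicate on the (hypothetical) partitioning column, so that only the relevant partition is touched:

SET STATISTICS IO ON;

-- With CreatedDate as the partitioning column, this predicate lines up with a single
-- monthly partition, so the "logical reads" figure in the Messages tab should drop sharply.
SELECT COUNT(*)
FROM dbo.BigTable
WHERE CreatedDate >= '2014-02-01'
  AND CreatedDate <  '2014-03-01';

SET STATISTICS IO OFF;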

Now the impact will be visible and, of course, management will be happier. But the permanent/long-term resolution is still pending. I will discuss

1. Data archival / purging
2. Data type/size assessment

in my next blog post.

HTH!
