Unable to shrink the SQL Server database. Is it the same BLOB thing?

This is a very common scenario that I have seen in many customer environments: even shrinking the database doesn't reduce its size. Products that use BLOBs heavily face this issue most often, e.g. Microsoft SharePoint, which deals with BLOBs extensively.

Many times, I have seen customers complain that the database size nearly doubles after an upgrade. Most of the time, the cause turns out to be BLOB storage. In this blog we will talk about resolving only those issues that are related to BLOB storage.

Now it's time to build some background to understand the cause of the issue. Let's start with some Q & A.

Q: When we issue a DELETE command on a table, what happens in the background? Are the records deleted immediately?

A: Many people have doubts about this. The answer is no, they are not deleted immediately. They are marked as ghost records in the PFS page, and a background process known as the ghost record cleanup task runs periodically to delete those records physically.
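You can see this in action with a quick sketch (the table name `dbo.GhostDemo` is a made-up example; run this only in a test database):

```sql
-- Hypothetical demo table; run only in a throwaway test database.
CREATE TABLE dbo.GhostDemo (id INT IDENTITY PRIMARY KEY, payload CHAR(2000) NOT NULL DEFAULT 'x');

INSERT INTO dbo.GhostDemo DEFAULT VALUES;
GO 1000   -- SSMS/sqlcmd batch repetition: insert 1000 rows

DELETE FROM dbo.GhostDemo;   -- rows are only *marked* as ghosts at this point

-- Immediately after the delete, ghost_record_count is typically non-zero;
-- run this again after several seconds and the cleanup task will have lowered it.
SELECT index_level, ghost_record_count, version_ghost_record_count
FROM sys.dm_db_index_physical_stats
     (DB_ID(), OBJECT_ID('dbo.GhostDemo'), NULL, NULL, 'DETAILED');
```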

Q: How is this related to the database size issue?

A: When we delete a record, it is the responsibility of the ghost record cleanup task to delete those records physically and reclaim the space. This keeps the database size in check. The issue above can happen for the following reasons:

1. The ghost record cleanup task is not running.
2. The load on the system is very high, with a huge number of deletions and updates, and the ghost cleanup task is not able to catch up.
3. In certain cases, allocated but empty LOB pages accumulate, and this is by design. When the LOB value to be inserted is larger than one page, SQL Server breaks it up into page-size fragments. When it inserts a page-size fragment, it does not search the existing LOB pages for free space; it simply allocates a new page.

This is an optimization: in the typical case, the free-space fragments on the existing pages are all smaller than a page. Since a whole page of free space is needed anyway, the search can be skipped and a new page allocated. Thus, if insertions of LOB data larger than one page are repeatedly rolled back, the database will accumulate empty LOB pages.
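The rollback scenario above can be reproduced with a sketch like this (the table name `dbo.LobDemo` is a made-up example; run it only in a test database):

```sql
-- Hypothetical repro of empty LOB page accumulation; test database only.
CREATE TABLE dbo.LobDemo (id INT IDENTITY PRIMARY KEY, blob VARCHAR(MAX));

-- Repeatedly insert a multi-page LOB value (~100 KB) and roll it back.
DECLARE @i INT = 0;
WHILE @i < 100
BEGIN
    BEGIN TRANSACTION;
    INSERT INTO dbo.LobDemo (blob)
    VALUES (REPLICATE(CAST('a' AS VARCHAR(MAX)), 100000));
    ROLLBACK TRANSACTION;   -- the freshly allocated LOB pages stay allocated but empty
    SET @i += 1;
END;

-- lob_reserved_page_count can stay elevated even though the table holds no rows.
SELECT row_count, lob_reserved_page_count, lob_used_page_count
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('dbo.LobDemo');
```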

Now let's talk about the action steps to resolve this issue:

1. Run DBCC UPDATEUSAGE and see if this sorts out your issue.

It reports and corrects page and row count inaccuracies in the catalog views. These inaccuracies may cause incorrect space usage reports from the sp_spaceused system stored procedure.
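For example (replace the database name with your own):

```sql
-- Correct page and row count inaccuracies for the whole database.
DBCC UPDATEUSAGE (N'YourDatabase');

-- Then re-check the reported space usage; the @updateusage parameter
-- forces sp_spaceused to report against the corrected counts.
EXEC sp_spaceused @updateusage = N'TRUE';
```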

2. If step 1 doesn't help, we need to know whether we have ghost records or not. To find out, run this command:

SELECT object_name(object_id) AS Name, record_count, ghost_record_count, version_ghost_record_count, index_type_desc, alloc_unit_type_desc
FROM sys.dm_db_index_physical_stats (DB_ID(N'<dbname>'), NULL, NULL, NULL, 'DETAILED')

This may take some time to produce output for all the objects. If the database is very large, we can first find the largest objects and then run the query for a particular table. To get the objects with the largest allocation, you could run this query:

select name = object_schema_name(object_id) + '.' + object_name(object_id)
, rows = sum(case when index_id < 2 then row_count else 0 end)
, reserved_kb = 8 * sum(reserved_page_count)
, data_kb = 8 * sum(case
      when index_id < 2 then in_row_data_page_count + lob_used_page_count + row_overflow_used_page_count
      else lob_used_page_count + row_overflow_used_page_count
  end)
, index_kb = 8 * (sum(used_page_count)
  - sum(case
      when index_id < 2 then in_row_data_page_count + lob_used_page_count + row_overflow_used_page_count
      else lob_used_page_count + row_overflow_used_page_count
    end))
, unused_kb = 8 * sum(reserved_page_count - used_page_count)
from sys.dm_db_partition_stats
where object_id > 1024
group by object_id
--order by name
order by reserved_kb desc

Once we get the top space consumers, we can pick the table names and run the query against a single table:

SELECT object_name(object_id) AS Name, record_count, ghost_record_count, version_ghost_record_count, index_type_desc, alloc_unit_type_desc
FROM sys.dm_db_index_physical_stats (DB_ID(N'<dbname>'), OBJECT_ID('<TableName>'), NULL, NULL, 'DETAILED')

3. If we see ghost records, then we also need to find out whether the ghost cleanup task is running. To check that, enable trace flags with DBCC TRACEON (662, 3605, -1); the task will then write messages to the SQL Server error log. Alternatively, you could run this query:

if exists (select 1 from sys.dm_tran_active_transactions where name = 'GhostCleanupTask')
select * from sys.dm_tran_active_transactions where name = 'GhostCleanupTask'

Keep running this query repeatedly for 10-15 seconds and you should see output, since the ghost cleanup task runs every 5-10 seconds. If the ghost cleanup task is not running at all, check whether trace flag 661 is enabled; that flag disables the ghost cleanup task. If yes, disable it.

4. There is a Cumulative Update for SQL Server that contains a fix for this issue: http://support.microsoft.com/kb/2622823

5. If you can't upgrade the SQL instance, you could try DBCC CLEANTABLE.
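DBCC CLEANTABLE reclaims space left behind by dropped variable-length and LOB columns. A sketch (database and table names are placeholders):

```sql
-- Reclaim space from dropped variable-length / text columns.
-- The third argument is the batch size: 0 processes the whole table in
-- one transaction; a value such as 5000 works in smaller batches, which
-- is gentler on a busy system.
DBCC CLEANTABLE (N'YourDatabase', N'dbo.YourTable', 5000);
```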

6. If that still doesn't help, you could try ALTER INDEX ... REORGANIZE WITH (LOB_COMPACTION = ON).
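For example, to compact the LOB pages of a table's indexes (index and table names are placeholders):

```sql
-- Reorganize a specific index and compact its LOB pages;
-- REORGANIZE is an online operation.
ALTER INDEX PK_YourTable ON dbo.YourTable
REORGANIZE WITH (LOB_COMPACTION = ON);

-- Or reorganize all indexes on the table at once:
ALTER INDEX ALL ON dbo.YourTable
REORGANIZE WITH (LOB_COMPACTION = ON);
```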

I hope this helps you fix the database space usage issue. Please feel free to post questions if any.
