
DYNAMICALLY RECLAIM SPACE FROM THE LARGEST NDFs IN A DATABASE

A quick Google search will turn up several articles recommending against shrinking your database files. At a high level, this is because the space you free up is likely to be needed again in the future, the shrink causes heavy index fragmentation, and the cost of running the operation is usually not worth the gain. You can read more about shrinking databases here and here. Adding complexity to this, shrinking database files which contain LOB data can take a very long time.
There are, however, fringe cases where it makes sense to shrink, as described below. To address these, I have developed a process that shrinks database files automatically and dynamically, targeting the largest files in a database with the most free space.

The approach is suitable where the following conditions exist:

* Large amount of space available for reclamation
* Predicted future DB size is significantly less than current size
* Large number of data files exist and the amount of free space in each changes over time
* Data files being shrunk do not contain LOB data
* A data archiving routine is in place
* A regular maintenance window is available

Note that before developing the method below, I looked at other options for reclaiming database space, such as rebuilding the largest tables, via two methods:

1) Rebuilding the clustered index on a new filegroup using the CREATE INDEX… WITH DROP_EXISTING clause (see the first sketch below). This works well but does not move LOB data off the old filegroup; the LOB data filegroup can only be specified at the initial table creation stage, using the TEXTIMAGE_ON syntax.

2) Making a copy of the table which we can populate with data behind the scenes, then performing a sync of the data, an object rename, and a drop of the old table, all in a serialized transaction (see the second sketch below). This is my preferred method, although it introduces a small amount of risk coupled with administrative overhead. The benefit is that we can switch over to the new, smaller table with minimal impact to the application, because everything is performed side-by-side. This is likely to be your only option if you are reclaiming large amounts of LOB data and do not have the luxury of an outage window. It is also useful for making table-level DDL changes on a 24/7 system.
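To make the first method concrete, here is a minimal sketch, assuming a hypothetical dbo.SalesOrder table clustered on SalesOrderID and a pre-created filegroup FG_NEW; adjust the names to your schema:

    -- Method 1 sketch: rebuild the clustered index onto a new filegroup.
    -- Table, key column, and filegroup names are hypothetical examples.
    CREATE UNIQUE CLUSTERED INDEX PK_SalesOrder
    ON dbo.SalesOrder (SalesOrderID)
    WITH (DROP_EXISTING = ON)
    ON [FG_NEW];
    -- Caveat: any LOB columns remain on the filegroup chosen via
    -- TEXTIMAGE_ON when the table was created; this rebuild won't move them.

If the clustered index enforces a primary key or unique constraint, DROP_EXISTING can still rebuild it, and on Enterprise Edition you can add ONLINE = ON to keep the table available during the move.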
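And here is a highly simplified sketch of the second method's cut-over step, assuming a hypothetical table dbo.MyTable whose copy dbo.MyTable_New has already been created on the target filegroup and back-filled behind the scenes (the delta-sync logic will depend on your keys/timestamps):

    -- Method 2 sketch: swap the old and new tables inside one transaction.
    BEGIN TRANSACTION;

    -- Block writers by taking an exclusive table lock, then copy across
    -- any rows that arrived since the last back-fill.
    DECLARE @x int;
    SELECT TOP (1) @x = 1 FROM dbo.MyTable WITH (TABLOCKX, HOLDLOCK);
    -- <delta sync of recent rows goes here>

    EXEC sp_rename 'dbo.MyTable',     'MyTable_Old';
    EXEC sp_rename 'dbo.MyTable_New', 'MyTable';

    COMMIT TRANSACTION;

    -- After verifying the application, drop the old table to free its space:
    -- DROP TABLE dbo.MyTable_Old;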

Below is the automation script. It uses two SQL Agent jobs to achieve asynchronous execution, and an allowable run day/time can be configured to prevent unplanned performance impacts on your database. The first SQL Agent job runs the shrink operation. The second SQL Agent job is kicked off automatically and will monitor and kill the shrink process when your configured window ends. Take some time to familiarize yourself with the script, as it needs to be configured for your environment, and don't forget to rebuild your indexes after you shrink.

ShrinkData
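The attached script carries all the configuration and safety checks, but the core idea, boiled down to a minimal sketch, looks something like this (run in the context of the target database; the zero target mirrors the "release everything" behaviour described above):

    -- Core idea: pick the data file with the most reclaimable space
    -- and shrink it.
    DECLARE @FileName sysname,
            @Cmd      nvarchar(400);

    SELECT TOP (1) @FileName = name
    FROM sys.database_files
    WHERE type_desc = 'ROWS'                               -- data files only
    ORDER BY size - FILEPROPERTY(name, 'SpaceUsed') DESC;  -- most free pages

    SET @Cmd = N'DBCC SHRINKFILE (' + QUOTENAME(@FileName, '''') + N', 0);';
    EXEC sys.sp_executesql @Cmd;    -- target 0 = release all free space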

Here is a script which covers the creation of both SQL Agent jobs and the shrink process all in one, illustrating the end-to-end process in a little more detail. For example, the Agent job steps’ ‘on failure’ workflows required a bit of care to handle the killed process.

ShrinkData(wSQLJobs)
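For reference, the watchdog job's core check might look roughly like the sketch below; the window end time is a placeholder, and shrink operations surface in sys.dm_exec_requests under the command name DbccFilesCompact:

    -- Watchdog sketch: once the maintenance window has closed, find and
    -- kill any running shrink. The 06:00 cut-off is a hypothetical example.
    DECLARE @spid int, @Kill nvarchar(20);

    IF CAST(SYSDATETIME() AS time) > '06:00'        -- window has ended
    BEGIN
        SELECT @spid = session_id
        FROM sys.dm_exec_requests
        WHERE command = N'DbccFilesCompact';        -- a running shrink

        IF @spid IS NOT NULL
        BEGIN
            SET @Kill = N'KILL ' + CAST(@spid AS nvarchar(10));
            EXEC (@Kill);   -- the shrink job's ‘on failure’ step takes over
        END
    END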

If you want to take things further, you could modify the shrink command so that instead of targeting zero (which means release all available free space), you target the current used size plus a few GB, or a percentage, and shrink in stages. This lets you see results faster, because if you target zero, no space is released until the entire operation has completed.
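As a sketch of that idea (the file name and the 2 GB buffer are hypothetical), the target can be computed from the file's used space:

    -- Staged shrink sketch: target used space plus a buffer instead of
    -- zero, so each pass completes and releases space sooner.
    DECLARE @UsedMB int, @TargetMB int;

    SELECT @UsedMB = FILEPROPERTY(name, 'SpaceUsed') / 128   -- pages -> MB
    FROM sys.database_files
    WHERE name = N'MyData_07';

    SET @TargetMB = @UsedMB + 2048;                          -- leave ~2 GB free

    DBCC SHRINKFILE (N'MyData_07', @TargetMB);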

Finally, I have provided some example scripts which show the other two methods that were mentioned.

RecreateTableMethods

Enjoy, and let me know if you have any questions or feedback.