TSM Storage tiers for DocAve
This is the final part of the series of articles on the integration between the AvePoint DocAve Platform and IBM Tivoli Storage Manager. The first post covered the installation of a TSM Server on a Windows machine and the initial configuration. The second one was about creating a DocAve Storage Policy pointing to a TSM node and then running a Granular Backup as a test. This third part discusses considerations for a Backup Strategy when using:
- TSM as a single Tier of Storage
- TSM as multiple Tiers of Storage
Together, the three articles cover the main steps for integrating DocAve with Tivoli storage.

TSM as a single Tier of Storage
In a single Tier of Storage the DocAve integration is configured to talk to one TSM Node.
When using TSM as a “single Tier of Storage”, the backup data cycles as well as the archives created by the DocAve integration go to one central location on TSM (as per the node configuration). The purpose of this post is not to discuss best practices for how backup data should be stored in TSM, but simply to look at this from an application perspective, in our case the DocAve Platform.
To be clear on this point, DocAve points at a single TSM node. The node configuration and settings belong to the TSM Server and cannot be operated directly from DocAve. Each DocAve Storage Policy can work with one TSM Node at a time.
We learned from the previous articles that DocAve (as the Application Requestor) can initiate a SharePoint Backup and save the data onto TSM.
From a DocAve perspective the Backup data now exists in this storage type. In the event of a restore, the DocAve integration will reach back to the same location, load the data from there, and begin the Restore process in SharePoint. It looks like an easy and straightforward process, but in reality I have seen the following cases happen:
Case 1: The free space where the Backups are saved is insufficient, and an “external” process is needed to clean up some data.
Case 2: Both DocAve and TSM offer Retention Policies to manage their content. How should these be configured for the best results? Can they coexist and, if so, how should they be configured?
What should I do in these cases?
For Case 1, even assuming some capacity planning was done before any backup plan was put in place, unforeseen data growth or simple “exceptions” to normal behaviour can increase the space needed to store the SharePoint Backup data. When this happens, in extreme cases the tendency is to manually delete or move the “old” Backup cycles. This operation prevents DocAve from automatically restoring data from those Backup cycles, because the pertinent data can no longer be found in the “expected location”. A similar failure occurs when TSM marks a Backup data set for deletion without notifying DocAve: the Restore routine in DocAve simply fails with “data not found”. In a scenario like this we should consider the following:
- For every SharePoint Backup it executes, the DocAve integration creates a job and keeps all the necessary details, including the data location, in its own records (the DocAve index.db files).
- When restoring data, the DocAve integration refers to these details in order to find the Backup data and start the restore activities: in particular it looks at the index.db files first and then at the *.dat files generated at the time of the Backup (see the sketch after this list).
- If we are running out of space on the TSM device, we should enable the Retention Policies on the DocAve Storage Policy so that selected backup cycles are moved to a different storage volume, or deleted if they are not needed in the short term. Should they not be needed in the long term either, it may be appropriate to delete them permanently. With both the move and the delete actions we make room for new Backups on the same storage, and this is essential for DocAve because it dictates whether a specific Backup cycle is still available even before the Restore job is attempted.
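The point about the “expected location” can be illustrated with a minimal, hypothetical sketch. It assumes a backup cycle stored as a folder containing an index.db file plus a set of *.dat files; the folder layout and names are illustrative only and do not reflect DocAve internals. The check simply verifies that the files are still where a restore would look for them:

```python
import os

# Hypothetical layout: each backup cycle lives in its own folder containing
# one index file (index.db) plus the *.dat files written at backup time.
# Names and paths are illustrative only, not DocAve internals.

def backup_cycle_is_restorable(cycle_folder: str) -> bool:
    """Return True if the index and at least one data file are still where
    the restore expects them, i.e. the cycle has not been moved or deleted."""
    if not os.path.isdir(cycle_folder):
        return False
    files = os.listdir(cycle_folder)
    has_index = any(f.lower() == "index.db" for f in files)
    has_data = any(f.lower().endswith(".dat") for f in files)
    return has_index and has_data

if __name__ == "__main__":
    cycle = r"\\backupshare\SharePoint\FullBackup_2017-01-15"   # example path
    if backup_cycle_is_restorable(cycle):
        print("Backup cycle found - a restore can be attempted.")
    else:
        print("Backup cycle missing or incomplete - the restore would fail with 'data not found'.")
```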
The Retention Policy in DocAve can be enabled in the Storage Policy settings.
By enabling the Retention Policy in DocAve we can control several aspects, as listed below (a short retention sketch follows the list).
- Number of Full Backups and/or Full Backup Cycles to keep:
  - A Full Backup is a copy of the data at the time the job runs.
  - A Full Backup Cycle is the Full Backup plus all the pertinent Differentials and Incrementals taken before the next Full Backup.
- Number of Full Backups and/or Full Backup Cycles to keep based on time:
  - Same as above, but controlled by age rather than by the number of occurrences.
- Exclude Retention rules on any backup job completed with a status other than Successful:
  - Jobs that finish with exceptions are still valid, as only some components may have failed or been skipped while the others completed successfully. For example, a newly added database may fail to back up due to missing permissions while all the existing databases and the remaining components succeed. We may still want to keep that backup in order to restore the successful objects.
- Choose whether the Retention rules are triggered before or after the new Backup:
  - It is good practice to trigger the retention rules after the new Backup has completed, for the simple reason that if we delete the existing Backup to free up space and the new job then fails, we are left with no valid backup data to restore from. This option should be used with care.
- Which Backup type triggers the Retention rules (Full, Differential, Incremental):
  - For large environments or particularly aggressive backup schedules it is possible to choose which Backup type triggers the Retention rules. It is important to note that Differentials and Incrementals depend on their Full Backup; keeping this in mind also helps when deciding when to move or delete Full Backup cycles.
- Enable Retention based on Backup job status (Successful, Finished with Exceptions):
  - A very helpful option that prevents the Retention rule from kicking in when the new backup job is not successful.
- Delete or Move data to a different DocAve Logical Device:
  - When running out of space we can move the data to a different DocAve Logical Device, which de facto enables multiple tiers of storage before the data is permanently deleted once it is no longer needed. This also helps reduce the cost of maintaining the data, since the cost of the storage decreases over time. These steps can be repeated for multiple tiers, each with the same options.
- Run a custom action from a script file:
  - Very useful if the data needs to be handed over in some way to third-party applications for other uses.
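To picture how a few of these options interact, here is a minimal, hypothetical sketch of count-based retention over Full Backup Cycles. It is not DocAve code: the job records, the statuses and the keep/remove decision are simple stand-ins for what the platform does internally.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class BackupJob:
    kind: str          # "Full", "Differential" or "Incremental"
    finished: datetime
    status: str        # "Successful", "Finished with Exceptions", "Failed"

@dataclass
class FullBackupCycle:
    """A Full Backup plus the Differentials/Incrementals taken before the next Full."""
    jobs: List[BackupJob] = field(default_factory=list)

def group_into_cycles(jobs: List[BackupJob]) -> List[FullBackupCycle]:
    """Group a chronologically ordered job list into Full Backup Cycles."""
    cycles: List[FullBackupCycle] = []
    for job in jobs:
        if job.kind == "Full" or not cycles:
            cycles.append(FullBackupCycle())
        cycles[-1].jobs.append(job)
    return cycles

def cycles_to_remove(jobs: List[BackupJob], cycles_to_keep: int,
                     new_backup_ok: bool) -> List[FullBackupCycle]:
    """Return the oldest cycles that would be moved or deleted.

    Mirrors two of the options above: retention only runs after the new
    backup completed successfully, and only whole cycles beyond the
    configured count are affected."""
    if not new_backup_ok:
        return []                      # keep everything if the latest job failed
    cycles = group_into_cycles(jobs)
    if len(cycles) <= cycles_to_keep:
        return []
    return cycles[:len(cycles) - cycles_to_keep]
```

A time-based rule would work in the same way, comparing the date of each cycle's Full Backup against the configured age instead of counting cycles.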
So essentially, irrespective of the TSM Node configuration, the DocAve Platform already provides the Retention rules to manage the SharePoint Backup data and the associated storage volumes. No TSM Retention rules are needed at this point.
For Case 2 we are looking at a different scenario. Let’s assume that the TSM Policy Domains extend to “all” TSM storage and take priority over any Retention rules operated by other applications, so there is no “active handling” from the DocAve Platform in this instance. This means that if, by means of these policies, SharePoint Backup data is moved or deleted by TSM, it becomes impossible for DocAve to access that data again and restore it back to SharePoint.
An example of this scenario is when no Retention rule is set up in DocAve while TSM retains the data for 30 days only. From the 31st day onwards it is no longer possible for DocAve to restore the content back to SharePoint, because the content has been moved or deleted by TSM according to the Policy Domain configured for that Node.
If DocAve also has a 30-day retention, TSM will mark this data for deletion from the 31st day. In this case both DocAve and TSM know that the data is no longer available from that day.
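The interaction can be reduced to a simple rule of thumb: the TSM retention window must be at least as long as the DocAve one, otherwise TSM expires data that DocAve still believes is restorable. A minimal sketch of that check, using the example figures above (the day counts and the function itself are illustrative, not a product setting):

```python
def restore_window_ok(docave_retention_days, tsm_retention_days):
    """DocAve can only restore what TSM still holds, so the TSM window must be
    at least as long as the DocAve one. `None` means no retention configured
    (keep forever)."""
    if tsm_retention_days is None:
        return True                      # TSM never expires the data
    if docave_retention_days is None:
        return False                     # DocAve keeps records forever, TSM does not
    return tsm_retention_days >= docave_retention_days

# Example figures from above:
print(restore_window_ok(30, 30))    # True  - both sides agree the data is gone on day 31
print(restore_window_ok(None, 30))  # False - DocAve still lists backups TSM has expired
```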
It is clear that the TSM Policy Domain settings should be more “relaxed” than the DocAve ones. Of course both DocAve and TSM Retention rules can coexist, but since TSM is the end-point storage it has the final word. In the vast majority of cases the DocAve Retention rules already satisfy the most stringent requirements, so a good Backup Strategy comes with good planning of how the storage is managed as well. Let’s not forget, though, that the DocAve rules work on the entire Backup data set, which consists of several files (*.db, *.dat, ...).
TSM, from its perspective, sees these as individual files rather than as parts of one data set (our SharePoint backup!).
Now let’s say TSM moves or deletes data before the DocAve retention period has elapsed. In this case, for example, TSM should be configured to take a backup of the SharePoint Backup data and store it on a different volume (a separate node used for archiving purposes). After that operation it is possible to delete the SharePoint Backup data from the original location. Of course this operation is unknown to DocAve, and for this reason a SharePoint Restore attempt will fail. In this scenario TSM should first make the SharePoint Backup data available again, ideally in the original location, or otherwise in a new location from which DocAve can read the content afterwards. In the latter case DocAve has to re-import the data using the DocAve Data Manager wizard from the Control Panel, as per the screenshot below.

Essentially, by running this utility we declare to DocAve the new location from which the SharePoint Backup data can be restored. Once the data has been copied back through TSM to the new location and imported through DocAve, it is possible to proceed with the normal operations and restore the content back into SharePoint.
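The out-of-band part of that workflow, bringing the data set back to a location DocAve can read, amounts to a plain file copy. A sketch under assumed paths follows; the paths are hypothetical and the final import step is still performed through the DocAve Data Manager wizard, not from code:

```python
import shutil
from pathlib import Path

# Hypothetical locations: where TSM has restored the archived SharePoint
# backup data, and the new folder that will then be registered in DocAve
# through the Data Manager wizard.
ARCHIVE_RESTORE_PATH = Path(r"\\tsmrestore\sharepoint_backups\cycle_2017_01")
NEW_DOCAVE_LOCATION = Path(r"\\backupshare\reimported\cycle_2017_01")

def stage_backup_cycle(src: Path, dst: Path) -> None:
    """Copy the complete backup data set (index plus *.dat files) so that
    DocAve can import it from the new location."""
    dst.mkdir(parents=True, exist_ok=True)
    for item in src.iterdir():
        if item.suffix.lower() in (".db", ".dat"):
            shutil.copy2(item, dst / item.name)

if __name__ == "__main__":
    stage_backup_cycle(ARCHIVE_RESTORE_PATH, NEW_DOCAVE_LOCATION)
    print("Data staged - run the DocAve Data Manager import against the new location.")
```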
TSM as multiple Tiers of Storage
Another scenario I have been dealing with is when the DocAve integration connects to multiple Nodes available through TSM. Of course the rule of one TSM Node per DocAve Storage Policy at a time is still valid. We can consider this scenario as multiple Tiers of Storage, where each Tier is represented by a specific TSM Node.
It is a scenario I have rarely encountered, for the simple reason that the Node configured to store the SharePoint Backup data as the first Tier is usually big enough to hold the intended Backup cycles, so there is no need to create a separate Tier by means of a dedicated Node serving the same purpose. It can also be more convenient to add more volumes to the same Node rather than creating a new one. Of course requirements change and environment configurations follow; when this happens, the same considerations made for Case 2 above still apply.
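Purely as an illustration of the multi-tier idea (one TSM Node per DocAve Storage Policy, with older cycles pushed down a tier), here is a small age-based tiering rule. The tier names and thresholds are assumptions for the sketch, not product settings:

```python
from datetime import datetime, timedelta

# Assumed tiers, each backed by its own TSM Node (and therefore its own
# DocAve Storage Policy / Logical Device). Thresholds are illustrative.
TIERS = [
    ("TIER1_FAST_DISK", timedelta(days=30)),      # recent cycles, fast restores
    ("TIER2_ARCHIVE_NODE", timedelta(days=365)),  # older cycles, cheaper storage
]

def tier_for_cycle(cycle_date: datetime, now: datetime = None) -> str:
    """Pick the tier a backup cycle should live on, based on its age."""
    now = now or datetime.now()
    age = now - cycle_date
    for name, max_age in TIERS:
        if age <= max_age:
            return name
    return "DELETE"   # beyond the last tier the cycle is no longer retained

print(tier_for_cycle(datetime.now() - timedelta(days=10)))   # TIER1_FAST_DISK
print(tier_for_cycle(datetime.now() - timedelta(days=200)))  # TIER2_ARCHIVE_NODE
```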
One last consideration before concluding this article concerns the volume types used when configuring the Nodes. Going for the fastest storage available is an obvious choice, especially when running restores where a temporary SQL database has to be mounted in order to extract the content. From this point of view “Tape Devices” are not the best choice. Generally speaking, DISK-type volumes offer the best performance; at the very least they are not prone to the “shoe-shine” effect that can occur with tapes. Tier 1 Nodes should therefore use this volume type. Along with this, I would also recommend the following guidelines where possible:
- Use a dedicated backup network when connecting the DocAve Server to the TSM Server.
- Avoid Tape Devices for the intense read/write activity generated by Backup/Restore operations.
- Configure DocAve SQL Staging to point to a different SQL Server and a network share to store temporary data before a restore job.
- Always use the DocAve InstaMount technology to restore data UNLESS the amount of data you are restoring is equal to or bigger than 20% of the entire database size (e.g. when restoring a Site Collection, compare its size to that of its Content Database); in that case use the SQL classic staging method available in DocAve. A decision sketch follows this list.
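That last guideline translates into a simple decision rule. Here is a hedged sketch; the 20% figure is the rule of thumb from this article, not a hard product limit:

```python
def choose_restore_method(restore_size_gb: float, content_db_size_gb: float) -> str:
    """Pick the DocAve restore approach based on the share of the Content
    Database being restored (rule of thumb from this article)."""
    share = restore_size_gb / content_db_size_gb
    return "SQL classic staging" if share >= 0.20 else "InstaMount"

# Example: restoring a 60 GB Site Collection out of a 200 GB Content Database.
print(choose_restore_method(60, 200))   # SQL classic staging (30% of the DB)
print(choose_restore_method(10, 200))   # InstaMount (5% of the DB)
```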