How to minimise backup durations.

There are some quick wins to be had with a few simple housekeeping and maintenance activities and minor changes to your backup methods that can improve your traditional backups and their durations. There are many variables we could focus on, but as a simple starting point we will concentrate on the items below:
Highly fragmented drives
Identifying slow clients
Dense file system backup

Fragmentation of mount points

In our experience there have been many occasions where highly fragmented mount points have an impact not only on the performance of the server, but also on backup durations. Files whose blocks are dispersed across a disk take longer to back up: instead of reading block after block of data, the disk heads must seek to many different locations to access each segment. A traditional backup has to traverse all folders and files, and the longer this traversal takes, the longer the backup duration will in turn be.

Look into executing a regularly scheduled maintenance activity, such as a simple disk defragmentation, to keep the operating system performing consistently when servicing the demands placed on it, including backups, and to minimise their durations.
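Before committing to a defragmentation window, it can help to check how fragmented each volume actually is. The sketch below is a minimal illustration only, assuming a Windows host where the built-in defrag utility is available and the script is run with administrative privileges; the volume list is an example.

```python
import subprocess

# Example volumes to check; substitute your own mount points.
VOLUMES = ["C:", "D:"]

def analyse_fragmentation(volume: str) -> None:
    """Run the built-in Windows defrag tool in analysis-only mode (/A)
    and print its report, so fragmented volumes can be flagged for a
    scheduled defragmentation window."""
    result = subprocess.run(
        ["defrag", volume, "/A"],   # /A = analyse only, does not defragment
        capture_output=True,
        text=True,
    )
    print(f"--- {volume} ---")
    print(result.stdout)

if __name__ == "__main__":
    for vol in VOLUMES:
        analyse_fragmentation(vol)
```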

Slow Clients

If you notice unusually slow backups in terms of throughput, you should investigate further. This can become an issue when backups rely on a resource (the target device) releasing its reservation before further connections (backups) can be made. A couple of areas to look at for a quick win: ensure these “slow” servers run their backups against a shared device pool, so they piggyback off other servers sending data and the drives do not experience “shoe shining”; alternatively, group all the clients that transfer data slowly into a dedicated “slow lane”, such as a dedicated drive, so they do not impede the majority of servers performing backups in the environment.
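One way to identify candidates for such a “slow lane” is to compare the average throughput of recent jobs per client. The Python sketch below is illustrative only: the BackupJob record, the sample figures, and the 20 MB/s threshold are assumptions, and in practice these fields would come from your backup software’s job reports.

```python
from dataclasses import dataclass

@dataclass
class BackupJob:
    client: str
    bytes_written: int      # total data written by the job
    duration_seconds: int   # wall-clock duration of the job

# Assumed threshold below which a client is treated as "slow".
SLOW_THRESHOLD_MBPS = 20.0

def throughput_mbps(job: BackupJob) -> float:
    """Average throughput of a job in megabytes per second."""
    return (job.bytes_written / job.duration_seconds) / (1024 * 1024)

def find_slow_clients(jobs: list[BackupJob]) -> list[tuple[str, float]]:
    """Return (client, throughput) pairs under the threshold, slowest first,
    as candidates for a dedicated 'slow lane' drive or device pool."""
    slow = [(j.client, throughput_mbps(j)) for j in jobs
            if throughput_mbps(j) < SLOW_THRESHOLD_MBPS]
    return sorted(slow, key=lambda pair: pair[1])

if __name__ == "__main__":
    # Hypothetical job records for illustration.
    jobs = [
        BackupJob("prod-db-01", 800 * 1024**3, 4 * 3600),   # ~57 MB/s
        BackupJob("test-web-07", 50 * 1024**3, 5 * 3600),   # ~2.8 MB/s
    ]
    for client, mbps in find_slow_clients(jobs):
        print(f"{client}: {mbps:.1f} MB/s")
```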

The last thing you need is critical servers and their backups waiting to commence because non-critical or test/development servers have reserved resources for long periods of time.

Dense mount points

The approach taken to back up dense file systems with flat-file backups should be revisited. When millions of relatively small files are located on a mount point, the time taken to capture and send them to tape causes slow backups and “shoe shining” on your tape device, which prevents the tape drive from reaching its optimum throughput.

Look into backing up the raw disk partition as a block-level backup, or performing snapshots of the dense mount point. Other options are available; however, the pros and cons of such changes should be weighed before implementation. For example, running a block-level backup could mean that file-level restores are not available, depending on the technology used.
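To judge whether a mount point is dense enough to justify a block-level or snapshot approach, a simple profile of file count and average file size is often enough. The sketch below is a minimal illustration: the path and the “millions of small files” thresholds are assumptions to be adjusted for your environment.

```python
import os

def profile_mount_point(path: str) -> tuple[int, int]:
    """Walk a mount point and return (file_count, total_bytes), so that
    file systems with millions of small files can be flagged as
    candidates for block-level or snapshot-based backups."""
    file_count = 0
    total_bytes = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total_bytes += os.path.getsize(os.path.join(root, name))
                file_count += 1
            except OSError:
                continue  # skip files that vanish or cannot be read
    return file_count, total_bytes

if __name__ == "__main__":
    mount = "/data"  # example mount point
    count, size = profile_mount_point(mount)
    avg_kb = (size / count) / 1024 if count else 0
    print(f"{mount}: {count} files, average size {avg_kb:.1f} KB")
    # Assumed rule of thumb: over a million files averaging under 64 KB.
    if count > 1_000_000 and avg_kb < 64:
        print("Dense file system: consider block-level or snapshot backups.")
```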
