Tivoli Storage Manager for Sun Solaris Administrator's Guide


Overview: The Storage Pool Hierarchy

You can set up your devices so that the server automatically moves data from one device to another, or one media type to another. The selection can be based on characteristics such as file size or storage capacity. To do this, you set up different primary storage pools to form a storage pool hierarchy. A typical implementation may have a disk storage pool with a subordinate tape storage pool. When a client backs up a file, the server may initially store the file on disk according to the policy for that file. Later, the server may move the file to tape when the disk becomes full. This action by the server is called migration. You can also place a size limit on files that are stored on disk, so that large files are stored initially on tape instead of on disk.

For example, your fastest devices are disks, but you do not have enough space on these devices to store all data that needs to be backed up over the long term. You have tape drives, which are slower to access, but have much greater capacity. You can define a hierarchy so that files are initially stored on the fast disk volumes in one storage pool. This provides clients with quick response to backup requests and some recall requests. As the disk storage pool becomes full, the server migrates, or moves, data to volumes in the tape storage pool.

Migration of files from disk to sequential storage pool volumes is particularly useful because the server migrates all the files for a single node together. This gives you partial collocation for clients. Migration of files is especially helpful if you decide not to enable collocation for sequential storage pools. See Keeping a Client's Files Together: Collocation for details.

Setting Up a Storage Pool Hierarchy

You can set up a storage pool hierarchy when you first define storage pools. You can also change the storage pool hierarchy later.

You establish a hierarchy by identifying the next storage pool, sometimes called the subordinate storage pool. The server migrates data to the next storage pool if the original storage pool is full or unavailable. See Migration of Files in a Storage Pool Hierarchy for detailed information on how migration between storage pools works.

Restrictions:

  1. You cannot establish a chain of storage pools that leads to an endless loop. For example, you cannot define StorageB as the next storage pool for StorageA, and then define StorageA as the next storage pool for StorageB.

  2. The storage pool hierarchy includes only primary storage pools, not copy storage pools. See Using Copy Storage Pools to Back Up a Storage Hierarchy.

Example: Defining a Storage Pool Hierarchy

For this example, suppose that you have determined that an engineering department requires a separate storage hierarchy. You set up policy so that the server initially stores backed-up files for this department in a disk storage pool. When that pool fills, you want the server to migrate files to a tape storage pool. You want the disk storage pool, ENGBACK1, to store only files of 5MB or smaller and to migrate files to the tape pool as it fills. You want the tape storage pool, BACKTAPE, to have no file-size limit, to keep each client node's files together on tape (collocation), and to be able to use up to 100 scratch volumes.

You can define the storage pools in a storage pool hierarchy from the top down or from the bottom up. Defining the hierarchy from the bottom up requires fewer steps. To define the hierarchy from the bottom up, perform the following steps (a query to verify the resulting hierarchy is shown after the steps):

  1. Define the storage pool named BACKTAPE with the following command:
    define stgpool backtape tape
    description='tape storage pool for engineering backups'
    maxsize=nolimit collocate=yes maxscratch=100
    
  2. Define the storage pool named ENGBACK1 with the following command:
    define stgpool engback1 disk
    description='disk storage pool for engineering backups'
    maxsize=5M nextstgpool=backtape highmig=85 lowmig=40
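
After you define both pools, you can verify the hierarchy by displaying the detailed attributes of ENGBACK1; for example:

  query stgpool engback1 format=detailed

The detailed output includes the next storage pool (BACKTAPE) and the migration thresholds.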
    

Example: Updating a Storage Pool Hierarchy

If you have already defined the storage pool at the top of the hierarchy, you can update the storage hierarchy to include a new storage pool.

For example, suppose that you had already defined the ENGBACK1 disk storage pool. Now you have decided to set up a tape storage pool to which files from ENGBACK1 can migrate. Perform the following steps to define the new tape storage pool and update the hierarchy:

  1. Define the storage pool named BACKTAPE with the following command:
    define stgpool backtape tape
    description='tape storage pool for engineering backups'
    maxsize=nolimit collocate=yes maxscratch=100
    
  2. Specify that BACKTAPE is the next storage pool defined in the storage hierarchy for ENGBACK1. To update ENGBACK1, enter:
    update stgpool engback1 nextstgpool=backtape
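
If ENGBACK1 was originally defined without a limit on file size, you may also want to limit the size of files that are stored on disk so that large files go directly to tape. The 5MB value shown here is only an example:

  update stgpool engback1 maxsize=5M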
    

How the Server Groups Files before Storing

When a user backs up or archives files from a client node, the server may group multiple client files into an aggregate (a single physical file). The size of the aggregate depends on the sizes of the client files being stored and on the number of bytes and files allowed for a single transaction. Two options, one in the server options file and one in the client options file, affect the number of bytes and files allowed for a single transaction:

TXNGROUPMAX
A server option that specifies the maximum number of files that can be transferred as a group within a single transaction.

TXNBYTELIMIT
A client option that specifies the maximum number of kilobytes that the client can send to the server in a single transaction.

Together these options allow you to control the size of aggregate files stored by the server. For more information on using options to tune performance, look for the performance tuning guide on the product Web site ( http://www.tivoli.com/support/storage_mgr/tivolimain.html ).
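
For example, the following entries sketch how each option might be set; the values shown are illustrative only:

  Server options file (dsmserv.opt):  TXNGROUPMAX 256
  Client options file:                TXNBYTELIMIT 25600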

When a Tivoli Space Manager client (HSM client) migrates files to the server, the files are not grouped into an aggregate.

Where the Files Are Stored

When a user backs up, archives, or migrates a file from a client node, the server looks at the management class that is bound to the file. The management class specifies the destination, that is, the storage pool in which to store the file. The server then checks that storage pool to determine whether the access mode of the pool allows new data to be written, whether the file exceeds the maximum file size allowed for the pool, and whether the pool has enough space for the file.

Using these factors, the server determines if the file can be written to that storage pool or the next storage pool in the hierarchy.
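
For example, you can temporarily prevent new data from being written to a pool, so that files are stored in the next pool in the hierarchy instead. This sketch uses the ENGBACK1 pool from the earlier example; the second command restores normal operation:

  update stgpool engback1 access=readonly
  update stgpool engback1 access=readwrite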

Subfile backups: When the client backs up a subfile, it still reports the size of the entire file. Therefore, allocation requests against server storage and placement in the storage hierarchy are based on the full size of the file. The server does not aggregate a subfile with other files if the size of the entire file is too large to aggregate. For example, suppose the entire file is 8MB but the subfile is only 10KB. Because the server does not typically aggregate a file of that reported size, it begins to store the subfile as a standalone file. The client, however, sends only 10KB, and by then it is too late for the server to aggregate other files with this 10KB file. As a result, the benefits of aggregation are not always realized when clients back up subfiles.

ADSM Version 2 Clients: When an ADSM Version 2 client backs up or archives files, the server must estimate the size of the aggregate file that the client will send. The server bases the estimate on earlier transactions with the client. The server uses the estimated size to check whether the storage pool has enough space to store the file. Because the server uses the estimated size rather than the actual size for ADSM Version 2 clients, the server may not always store files in the storage pool that you expect.

How the Server Stores Files in a Storage Hierarchy

As an example of how the server stores files in a storage hierarchy, assume a company has a storage pool hierarchy as shown in Figure 19.

Figure 19. Storage Hierarchy Example

Simple storage pool hierarchy with a disk storage pool and a tape storage pool

The storage pool hierarchy consists of two storage pools:

DISKPOOL
The top of the storage hierarchy. It contains fast disk volumes for storing data.

TAPEPOOL
The next storage pool in the hierarchy. It contains tape volumes accessed by high-performance tape drives.

Assume a user wants to archive a 5MB file that is named FileX. FileX is bound to a management class that contains an archive copy group whose storage destination is DISKPOOL (see Figure 19).
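
The storage destination is a parameter of the archive copy group. As a sketch, assuming a policy domain named ENGPOLDOM with a policy set named STANDARD and a management class named MCENG (all hypothetical names), the destination could be set as follows and then put into effect by activating the policy set:

  define copygroup engpoldom standard mceng type=archive destination=diskpool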

When the user archives the file, the server determines where to store the file based on the following process:

  1. The server selects DISKPOOL because it is the storage destination specified in the archive copy group.
  2. Because the access mode for DISKPOOL is read/write, the server checks the maximum file size allowed in the storage pool.

    The maximum file size applies to the physical file being stored, which may be a single client file or an aggregate file. The maximum file size allowed in DISKPOOL is 3MB. FileX is a 5MB file and therefore cannot be stored in DISKPOOL.

  3. The server searches for the next storage pool in the storage hierarchy.

    If the DISKPOOL storage pool has no maximum file size specified, the server checks for enough space in the pool to store the physical file. If there is not enough space for the physical file, the server uses the next storage pool in the storage hierarchy to store the file.

  4. The server checks the access mode of TAPEPOOL, which is the next storage pool in the storage hierarchy. The access mode for TAPEPOOL is read/write.
  5. The server then checks the maximum file size allowed in the TAPEPOOL storage pool. Because TAPEPOOL is the last storage pool in the storage hierarchy, no maximum file size is specified. Therefore, if there is available space in TAPEPOOL, FileX can be stored in it.
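
A hierarchy like the one in this example could be defined with commands similar to the following sketch. The device class name TAPECLASS is a hypothetical name, and the parameter values are illustrative:

  define stgpool tapepool tapeclass maxscratch=100
  define stgpool diskpool disk maxsize=3M nextstgpool=tapepool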

Using Copy Storage Pools to Back Up a Storage Hierarchy

Copy storage pools enable you to back up your primary storage pools for an additional level of data protection for clients. See Backing Up Storage Pools for details. Copy storage pools are not part of a storage hierarchy.

For efficiency, it is strongly recommended that you use one copy storage pool to back up all primary storage pools that are linked to form a storage hierarchy. By backing up all primary storage pools to one copy storage pool, you do not need to recopy a file when the file migrates from its original primary storage pool to another primary storage pool in the storage hierarchy.

In most cases, a single copy storage pool can be used for backup of all primary storage pools. The number of copy storage pools you need depends on whether you have more than one primary storage pool hierarchy and on what type of disaster recovery protection you want to implement.
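
For example, a single copy storage pool could back up both of the primary pools from the engineering example. The device class name TAPECLASS is a hypothetical name:

  define stgpool copypool tapeclass pooltype=copy maxscratch=100
  backup stgpool engback1 copypool
  backup stgpool backtape copypool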

Multiple copy storage pools may be needed to handle particular situations, for example, when you maintain more than one primary storage pool hierarchy or when you need different levels of disaster recovery protection, such as an offsite copy in addition to an onsite copy.

Using the Hierarchy to Stage Client Data from Disk to Tape

A common way to use the storage hierarchy is to initially store client data on disk, then let the server migrate the data to tape. Typically, you need enough disk storage to handle one night's worth of the clients' incremental backups. Although this is not always possible, following this guideline is valuable when you back up your storage pools.

For example, if you have enough disk space for nightly incremental backups for clients and have tape devices, you can set up a disk primary storage pool for the incremental backups, a tape primary storage pool as the next pool in the hierarchy, and a copy storage pool for backing up both primary pools.

You can then schedule the following steps every night (an example of automating some of these steps with administrative schedules follows the list):

  1. Perform an incremental backup of the clients to the disk storage pool.
  2. After clients complete their backups, back up the disk primary storage pool (now containing the incremental backups) to the copy storage pool.

    Backing up disk storage pools before migration processing allows you to copy as many files as possible while they are still on disk. This reduces the number of tape mounts required during the storage pool backup.

  3. Start the migration of the files in the disk primary storage pool to the tape primary storage pool (the next pool in the hierarchy) by lowering the high migration threshold. For example, lower the threshold to 40%.

    When this migration completes, raise the high migration threshold back to 100%.

  4. Back up the tape primary storage pool to the copy storage pool to ensure that all files have been backed up.

    The tape primary storage pool must still be backed up to catch any files that might have been missed in the backup of the disk storage pools (for example, large files that went directly to sequential media).
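
The storage pool backup and migration steps can be automated with administrative command schedules. The following sketch assumes pools named DISKPOOL, TAPEPOOL, and COPYPOOL, and the start times are illustrative; similar schedules can be defined for the remaining steps:

  define schedule backstgdisk type=administrative
  cmd="backup stgpool diskpool copypool"
  active=yes starttime=21:00 period=1 perunits=days

  define schedule lowermig type=administrative
  cmd="update stgpool diskpool highmig=40"
  active=yes starttime=23:00 period=1 perunits=days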

See Estimating Space Needs for Storage Pools for more information about storage pool space.

