Fast-Checkpointing a Perforce Database Using a Network Appliance Filer


Richard Geiger
Network Appliance, Inc.
December, 1999

Introduction

This note describes how to use the "Snapshot" feature of Network Appliance filers to implement a "Perforce fast checkpoint" capability. This can serve to dramatically reduce the amount of time that the Perforce server is unavailable during a database checkpoint operation.

The size of the reduction will vary, depending on the size of the Perforce depot, as well as the overall size of the filer volume on which it is stored. For example: for a depot with a 1.8Gb database (i.e., the db.* files in $P4ROOT), stored on a 17Gb filer volume, the window of time during which the Perforce server was unavailable for a "normal" checkpoint was typically 40-45 minutes. Using the technique illustrated here, the window of unavailability was reduced to under 10 seconds.

How it Works

The "Snapshot" feature on Network Appliance filers allows the state of an entire filesystem (volume) to be rapidly saved. It's fast because it only involves copying pointers. Initially, a moment after the creation of the snapshot, the snapshot and the live filesystem have identical contents, and share the same set of data blocks. Subsequently, new writes to files and directories in the "live" version of the filesystem are done by writing to free disk blocks, and updating the pointers in the live filesystem. No disk block that was in use at the time the snapshot was taken is re-allocated until after the snapshot is deleted. Thus, the snapshot remains as a read-only version of the filesystem at the time the snapshot was taken.

The technique illustrated in the snap_checkpoint script takes advantage of this, by locking the Perforce database, snapshotting the filesystem containing the db.* files, and then unlocking the database. These steps happen fairly quickly - just a few seconds on a 17Gb filer volume, for example. As soon as the snapshot is complete, and the database has been unlocked, the Perforce server is once again available to users. At this point, a

p4d -r snapshot-copy-of-$P4ROOT -jd
command is run to create the Perforce checkpoint. This operation can take as long as it needs to, without locking the live database. (There are actually a couple of additional steps involved, to correctly handle the saving and truncation of the journal file.)
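
Putting these steps together, the heart of the script looks roughly like the sketch below. The paths, filer name, and volume name are placeholders, and the locking mechanism, journal rotation, and error handling of the real snap_checkpoint script are only hinted at in comments:

#!/bin/sh
# Condensed sketch of what snap_checkpoint does; this is not the actual
# script.  The paths, filer name, and volume name are placeholders.

P4ROOT=/u/p4/root             # live Perforce database (the db.* files)
JOURNAL=/u/p4/journal         # live journal file (location varies by site)
CKPDIR=/u/p4/checkpoint       # where checkpoints and saved journals go
FILER=maglite                 # filer hosting the $P4ROOT volume
VOL=perforce                  # filer volume containing $P4ROOT

# 1. Lock the Perforce database.  (The locking mechanism is site-specific
#    and is handled by the real script; it is omitted from this sketch.)

# 2. While the database is locked (only these few, fast steps), save the
#    current journal and snapshot the volume.
cp -p "$JOURNAL" "$CKPDIR/journal.saved"
rsh "$FILER" snap delete "$VOL" checkpoint 2>&1   # discard any old snapshot
rsh "$FILER" snap create "$VOL" checkpoint 2>&1
# (The real script also arranges for the journal to be truncated here.)

# 3. Unlock the database; the Perforce server is available again.

# 4. Create the checkpoint from the read-only snapshot copy of the
#    database.  This can take as long as it needs to.
p4d -r "$P4ROOT/.snapshot/checkpoint" -z -jd "$CKPDIR/checkpoint.ckp.gz"

# 5. Discard the snapshot once the checkpoint has been written.
rsh "$FILER" snap delete "$VOL" checkpoint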

The snap_checkpoint script is intended as an example to adapt to your own environment. It has a handful of configuration parameters (see the comments under "Configuration Settings" at the top of the script), but it is not completely flexible in every way imaginable; you may need to alter it to fit your own Perforce server backup practices.
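
The settings involved are along these lines; the variable names shown here are illustrative (the exact names used in the script may differ), and the values are taken from the example run below:

# Illustrative configuration values for snap_checkpoint:
P4PORT=p4netapp:1678                      # the Perforce server to checkpoint
P4ROOT=/u/p4/root.p4netapp:1678           # its database directory, on the filer
CKPDIR=/u/p4/checkpoint.p4netapp:1678     # where checkpoints and journals are written
FILER=maglite                             # the filer hosting the $P4ROOT volume
VOLUME=perforce                           # the filer volume containing $P4ROOT
SNAPNAME=checkpoint                       # the snapshot name used for checkpoints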

Example Output

Here's what the output looks like:
$ snap_checkpoint
> /u/p4/dist/r99.2/bin.osf/p4 -p p4netapp:1678 counters
: burt = 976
: change = 1211
: journal = 789
: notify = 442
> /u/p4/dist/r99.2/bin.osf/p4 -p p4netapp:1678 counter journal 790
: Counter journal set.
/u/p4/root.p4netapp:1678 locked.
> /bin/cp -p /u/p4/checkpoint.p4netapp:1678/journal /u/p4/checkpoint.p4netapp:1678/20000110112316.jnl.789
> /bin/rsh maglite snap delete perforce checkpoint 2>&1
: No such snapshot.
> /bin/rsh maglite snap create perforce checkpoint 2>&1
: creating snapshot......
/u/p4/root.p4netapp:1678 unlocked.
The steps up to this point execute quickly. Beyond this point, the Perforce server is available to users, while the checkpoint operation actually takes place from the snapshot.
> /usr/local/bin/gzip /u/p4/checkpoint.p4netapp:1678/20000110112316.jnl.789
> /u/p4/dist/r99.2/bin.osf/p4d -r /u/p4/root.p4netapp:1678/.snapshot/checkpoint -p p4netapp:1678 -z -jd /u/p4/checkpoint.p4netapp:1678/20000110112316.ckp.790.gz
: Dumping to /u/p4/checkpoint.p4netapp:1678/20000110112316.ckp.790.gz...
> /bin/rsh maglite snap delete perforce checkpoint 2>&1
$


NEITHER THE AUTHOR, NETWORK APPLIANCE, INC., NOR PERFORCE SOFTWARE MAKES ANY WARRANTY, EXPLICIT OR IMPLIED, AS TO THE CORRECTNESS, FITNESS FOR ANY APPLICATION, OR SAFETY OF THE snap_checkpoint SOFTWARE.