Fast-Checkpointing a Perforce Database Using a Network Appliance Filer


Richard Geiger
Network Appliance, Inc.
January, 2000; revised September 2000

Note for use with Perforce release 2000.1 servers:

In release 2000.1 of p4d, the server no longer allows certain system-defined counters (including "journal") to be altered by the p4 counter command. The snap_checkpoint script (described below) must be able to manipulate the value of the journal counter; therefore, snap_checkpoint#5 now relies on a user-defined counter - "snap_journal" - to track the checkpoint number. To use this version of snap_checkpoint, you must therefore first establish an initial value for this counter, using

p4 counter snap_journal value
where value should be the same as the current value of the journal counter. This initialization might look, for example, like:
% p4 counters
change = 80106
job = 3
journal = 913
notify = 80106
% p4 counter snap_journal 913
Counter snap_journal set.
From this point on, it is important to use the snap_journal counter to track checkpoint numbers, rather than the default journal counter. This essentially means that you will need to consistently use only snap_checkpoint to make checkpoints. If you must run a "normal" (non-snapshot-based) checkpoint, you can carefully use p4d -jd (with -J as needed) to explicitly specify the checkpoint file name, manually save and truncate the journal, and manually increment the snap_journal counter.
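As a rough illustration, a manual "normal" checkpoint that keeps snap_journal in step might look like the following. This is only a sketch: the port, paths, and counter value are hypothetical examples, and RUN=echo makes it a dry run that prints each command instead of executing it.

```shell
#!/bin/sh
# Dry-run sketch of a manual, non-snapshot checkpoint that keeps the
# snap_journal counter consistent.  All names here are example values.
RUN=echo                            # set to "" to actually execute
P4="p4 -p p4netapp:1672"
P4ROOT=/u/p4/root.p4netapp:1672

N=913                               # current snap_journal counter value
NEXT=`expr $N + 1`

# (with the server quiesced or locked:)
$RUN p4d -r $P4ROOT -jd checkpoint.$NEXT   # explicit checkpoint file name
$RUN mv $P4ROOT/journal journal.$N         # save the old journal...
$RUN cp /dev/null $P4ROOT/journal          # ...and truncate the live one
$RUN $P4 counter snap_journal $NEXT        # keep snap_journal in step
```

The ordering matters: the checkpoint, the journal rotation, and the counter bump must all happen while the server is quiet, or the journal and counter can get out of sync.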

Discussions are underway with Perforce Software about defining a mechanism that could once again allow the journal counter to be changed by p4 counter, removing the need for this snap_journal hack.

Introduction

This note describes how to use the "Snapshot" feature of Network Appliance filers to implement a "Perforce fast checkpoint" capability. This can serve to dramatically reduce the amount of time that the Perforce server is unavailable during a database checkpoint operation.

The size of the reduction will vary, depending on the size of the Perforce depot, as well as the overall size of the filer volume on which it is stored. But, for example: on a depot with a 1.8Gb database (i.e., of db.* files in $P4ROOT), stored on a 17Gb filer volume, the window of time when the Perforce server was unavailable during a "normal" checkpoint was typically 40-45 minutes. By using the technique illustrated here, the window of unavailability was reduced to under 20 seconds.

How it Works

The "Snapshot" feature on Network Appliance filers allows the state of an entire filesystem (volume) to be rapidly saved. It's fast because it only involves copying pointers. Initially, a moment after the creation of the snapshot, the snapshot and the live filesystem have identical contents, and share the same set of data blocks. Subsequently, new writes to files and directories in the "live" version of the filesystem are done by writing to free disk blocks, and updating the pointers in the live filesystem. No disk block that was in use at the time the snapshot was taken is re-allocated until after the snapshot is deleted. Thus, the snapshot remains as a read-only version of the filesystem at the time the snapshot was taken.
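The filer-side operations involved are just the ONTAP snap subcommands, driven over rsh. A dry-run sketch (the filer name "powermatic" and volume "perforce" are taken from the example output below, and are hypothetical for any other site; RUN=echo keeps this from contacting a real filer):

```shell
#!/bin/sh
RUN=echo          # set to "" to actually run the commands
FILER=powermatic  # filer hostname (example value)
VOL=perforce      # volume holding $P4ROOT
SNAP=checkpoint   # snapshot name used by snap_checkpoint

$RUN rsh $FILER snap delete $VOL $SNAP  # drop any stale snapshot first
$RUN rsh $FILER snap create $VOL $SNAP  # near-instant: copies pointers only

# The frozen, read-only image then appears under the .snapshot directory:
SNAPROOT=/u/p4/root.p4netapp:1672/.snapshot/$SNAP
```

Because snap create only copies pointers, it completes in seconds regardless of volume size, which is what makes the short lock window possible.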

The technique illustrated in the snap_checkpoint script takes advantage of this, by locking the Perforce database, snapshotting the filesystem containing the db.* files, and then unlocking the database. These steps happen fairly quickly - just a few seconds on a 17Gb filer volume, for example. As soon as the snapshot is complete, and the database has been unlocked, the Perforce server is once again available to users. At this point, a

p4d -r snapshot-copy-of-$P4ROOT -jd
command is run, to create the Perforce checkpoint. This operation can take as long as it must, without locking down the live database. (There are actually a couple of other steps involved, to correctly handle the saving and truncation of the journal file.)
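Put together, the whole sequence might be sketched as follows. This is a dry-run outline under host and path names assumed from the example output, not the actual snap_checkpoint script, which also handles locking, error checking, and counter updates.

```shell
#!/bin/sh
RUN=echo                                # dry run; set to "" to execute
FILER=powermatic
VOL=perforce
SNAP=checkpoint
P4ROOT=/u/p4/root.p4netapp:1672
CKPDIR=/u/p4/checkpoint.p4netapp:1672
STAMP=`date +%Y%m%d%H%M%S`

# -- fast part: the database is locked only for these few steps --
$RUN rsh $FILER snap create $VOL $SNAP          # freeze the volume
$RUN cp -p $P4ROOT/journal $CKPDIR/$STAMP.jnl   # save the journal
$RUN cp /dev/null $P4ROOT/journal               # truncate it, then unlock

# -- slow part: server is live again; dump from the snapshot copy --
$RUN p4d -r $P4ROOT/.snapshot/$SNAP -z -jd $CKPDIR/$STAMP.ckp.gz
```

Only the "fast part" runs with the database locked; the p4d -jd dump reads the read-only snapshot and can run for as long as it needs.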

The snap_checkpoint script is intended both as a usable tool and as an illustration of the technique.

snap_checkpoint has a handful of configuration parameters (see the comments under "Configuration Settings" at the top of the script), but is not completely flexible in every way imaginable; you may need to alter it in order to make it fit well with your own Perforce server backup practices.

Example Output

Here's what the output looks like:
$ snap_checkpoint
> /bin/rsh powermatic snap delete perforce checkpoint 2>&1
: No such snapshot.
: deleting snapshot...
> /u/p4/VERS/bin.osf/p4 -p p4netapp:1672 counters
: change = 26
: journal = 1
: snap_journal = 1
> /u/p4/VERS/bin.osf/p4 -p p4netapp:1672 counter snap_journal 2
: Counter snap_journal set.
snap_checkpoint: /u/p4/root.p4netapp:1672 locked.
> /bin/rsh powermatic snap create perforce checkpoint 2>&1
: creating snapshot...
snap_checkpoint: "/u/p4/checkpoint.p4netapp:1672/journal" truncated.
snap_checkpoint: /u/p4/root.p4netapp:1672 unlocked.
The steps up to this point execute quickly. Beyond this point, the Perforce server is available to users, while the checkpoint operation actually takes place from the snapshot.
> /bin/cp -p /u/p4/checkpoint.p4netapp:1672/.snapshot/checkpoint/journal /u/p4/checkpoint.p4netapp:1672/20000925142937.jnl.1
> /usr/local/bin/gzip /u/p4/checkpoint.p4netapp:1672/20000925142937.jnl.1
> /u/p4/VERS/bin.osf/p4d -r /u/p4/root.p4netapp:1672/.snapshot/checkpoint -p p4netapp:1672 -z -jd /u/p4/checkpoint.p4netapp:1672/20000925142937.ckp.2.gz
: Dumping to /u/p4/checkpoint.p4netapp:1672/20000925142937.ckp.2.gz...
$ 


For more information on Perforce - The Fast Software Configuration Management System - please visit the Perforce web site.

For more information on Network Appliance filers - Fast, Simple, Reliable Network Attached Storage - please visit the Network Appliance web site.


NEITHER THE AUTHOR, NETWORK APPLIANCE, INC., NOR PERFORCE SOFTWARE MAKES ANY WARRANTY, EXPLICIT OR IMPLIED, AS TO THE CORRECTNESS, FITNESS FOR ANY APPLICATION, OR SAFETY OF THE snap_checkpoint SOFTWARE.