The Server Deployment Package (SDP) is the implementation of Perforce's recommendations for operating and managing a production Perforce Helix Core Version Control System. It is intended to provide the Helix Core administration team with tools to help with:
High Availability (HA)
Disaster Recovery (DR)
This guide is intended to provide instructions for failover in an SDP environment using built-in Helix Core features.
For more details see:
Please Give Us Feedback
Perforce welcomes feedback from our users. Please send any suggestions for improving this document or the SDP to firstname.lastname@example.org.
We need to consider planned vs unplanned failover. A planned failover may be due to upgrading the core Operating System or some other dependency in your infrastructure, or a similar maintenance activity.
An unplanned failover covers risks you are seeking to mitigate:
loss of a machine, or some machine related hardware failure
loss of a VM cluster
failure of storage
loss of a data center or machine room
Please refer to the following sections:
HA failover should not require a P4PORT change for end users. Depending on your topology, you can avoid changing P4PORT by having users set P4PORT to an alias which can be easily changed centrally.
p4d-bos-01, a master/commit-server in Boston, pointed to by a DNS name like
p4d-bos-02, a standby replica in Boston, not pointed to by a DNS until failover, at which time it gets pointed to by
Changing the Perforce broker configuration to target the backup machine.
Other advanced networking options might be possible if you talk to your local networking gurus (virtual IPs etc).
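Assuming the DNS-alias approach above, a quick way to confirm where the alias currently points is to resolve it from a client or replica host. The following is a minimal sketch (the alias name perforce.example.com is hypothetical, and the helper relies on getent, which is standard on Linux):

```shell
# Resolve a DNS name to its first address using getent (Linux/glibc).
resolve_alias() {
  getent hosts "$1" | awk '{print $1; exit}'
}

# Hypothetical usage after failover, to confirm the alias has moved:
# resolve_alias perforce.example.com   # should now resolve to p4d-bos-02
```

Running this before and after the DNS change on each replica host is a cheap way to confirm the redirection has propagated.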
2. Planned Failover
In this instance you can run p4 failover with the active participation of its upstream server.
We are going to provide examples with the following assumptions:
master_1 is the current commit, running on machine
p4d_ha_bos is the HA server
perforce is set to
You need to ensure:
you are running p4d 2018.2 or later for your commit and all replica instances, preferably 2020.1+
source /p4/common/bin/p4_vars 1
p4 info | grep version
your failover target server instance is of type standby
On HA machine:
p4 info
:
ServerID: p4d_ha_bos
Server services: standby
Replica of: perforce:1999
:
it has Options: mandatory set in its server spec
p4 server -o p4d_ha_bos | grep Options
Options: mandatory
you have a valid
On HA machine:
Monitoring is enabled - so the following works:
p4 monitor show -al
DNS changes are possible so that downstream replicas can seamlessly connect to HA server
pull status is valid:
p4 pull -lj
You have a valid offline_db for the HA instance
Check that the sizes of the db.* files are similar - compare output:
ls -lhSr /p4/1/offline_db/db.* | tail
ls -lhSr /p4/1/root/db.* | tail
Check the current journal counter and compare against live journal counter:
/p4/1/bin/p4d_1 -r /p4/1/offline_db -jd - db.counters | grep journal
p4 counters | grep journal
Check all defined triggers will work (see next section) - including Swarm triggers
Check authentication will work (e.g. LDAP configuration)
Check firewall for HA host - ensure that other replicas will be able to connect on the appropriate port to the instance (using the DNS alias)
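The journal-counter comparison above can be wrapped in a small helper. This is a sketch, not part of the SDP: it only parses "name = value" lines in the style of p4 counters output; producing that output (and the offline_db equivalent) is left to the commands shown above.

```shell
# Extract the value of the "journal" counter from "p4 counters"-style
# output ("journal = 123") read on stdin.
journal_counter() {
  awk '$1 == "journal" && $2 == "=" {print $3; exit}'
}

# Hypothetical usage, comparing the live counter against a saved value:
# live=$(p4 counters | journal_counter)
# [ "$live" = "$offline" ] || echo "WARNING: journal counters differ"
```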
2.1.1. Pre Failover Checklist
It is important to perform these checks before any planned failover, and also to make sure they are considered prior to any unplanned failover.
2.1.2. Swarm Triggers
If Swarm is installed, ensure:
Swarm trigger is installed on HA machine (could be executed from a checked in depot file)
Typically installed (via package) to
But can be installed anywhere on the filesystem
Execute the trigger to ensure that any required Perl modules are installed:
Note that things like JSON.pm can often be installed with:
sudo apt-get install libjson-perl
sudo yum install perl-JSON
Swarm trigger configuration file has been copied over from the commit server to the appropriate place.
This defaults to one of:
/etc/perforce/swarm-trigger.conf
/opt/perforce/etc/swarm-trigger.conf
swarm-trigger.conf (in the same directory as the trigger script)
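To confirm which of these locations is actually in use on the HA machine, a small search helper can be used. This is a hedged sketch, not an SDP script; the candidate paths are passed explicitly so it can be reused for other config files:

```shell
# Print the first existing file among the given candidates; fail otherwise.
find_first_conf() {
  for f in "$@"; do
    if [ -f "$f" ]; then
      printf '%s\n' "$f"
      return 0
    fi
  done
  return 1
}

# Hypothetical usage with the default Swarm trigger config locations:
# find_first_conf /etc/perforce/swarm-trigger.conf \
#                 /opt/perforce/etc/swarm-trigger.conf \
#                 ./swarm-trigger.conf
```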
2.1.3. Other Triggers
Make sure that the appropriate version of ruby etc. is installed in the locations referenced by
Make sure that any relevant modules have also been installed (e.g.
2.1.4. Other Replicas' P4TARGET
Review the settings for other replicas, and also check the live replicas on the source server of the failover (p4 servers -J):
p4 configure show allservers | grep P4TARGET
Make sure the above settings are using the correct DNS alias (which will be redirected).
These are typically configured via
Ensure the target is the correct DNS alias (which will be redirected).
These are typically configured via
Ensure the config file correctly identifies the appropriate target server using the correct DNS alias (which will be redirected).
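The P4TARGET review above can be partially automated by flagging any configured target that does not reference the expected alias. A sketch, assuming the output contains lines with "P4TARGET=" as shown by p4 configure show allservers (the exact line format may vary by p4d version):

```shell
# Read configure output on stdin and print any P4TARGET line that does
# not reference the expected DNS alias (first argument).
# Note: exits nonzero when nothing is flagged, since grep -v finds no lines.
flag_bad_targets() {
  alias_name="$1"
  grep 'P4TARGET=' | grep -v "$alias_name"
}

# Hypothetical usage:
# p4 configure show allservers | flag_bad_targets perforce.example.com
```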
2.1.5. HA Server OS Configuration
Check to make sure any performance configuration, such as turning off THP (transparent huge pages) and putting serverlocks.dir into a RAM filesystem, has also been applied to your HA failover system. See SDP Guide: Maximizing Server Performance.
2.2. Failing over
The basic actions are the same for Unix and Windows, but there are extra steps required, as noted in Section 2.4, "Failing over on Windows".
Run p4 failover in reporting mode on the HA machine:
Successful output looks like:
Checking if failover might be possible ...
Checking for archive file content not transferred ...
Verifying content of recently updated archive files ...
After addressing any reported issues that might prevent failover, use --yes or -y to execute the failover.
p4 failover --yes
Output should be something like:
Starting failover process ...
Refusing new commands on server from which failover is occurring ...
Giving commands already running time to complete ...
Stalling commands on server from which failover is occurring ...
Waiting for 'journalcopy' to complete its work ...
Waiting for 'pull -L' to complete its work ...
Waiting for 'pull -u' to complete its work ...
Checking for archive file content not transferred ...
Verifying content of recently updated archive files ...
Stopping server from which failover is occurring ...
Moving latest journalcopy'd journal into place as the active journal ...
Updating configuration of the failed-over server ...
Restarting this server ...
During this time, if you run commands against the master, you may see:
Server currently in failover mode, try again after failover has completed
Change the DNS entries so downstream replicas (and users) will connect to the new master (that was previously HA)
Validate that your downstream replicas are communicating with your new master
On each replica machine:
p4 pull -lj
Or against the new master:
p4 servers -J
Check the output of p4 info:
:
Server address: p4d-bos-02
:
ServerID: master_1
Server services: commit-server
:
Make sure the old server spec (p4d_ha_bos) has correctly had its Options: field set to nomandatory (otherwise all replication would stop!)
2.3. Post Failover
2.3.1. Moving of Checkpoints
After failing over, on Unix there will be journals which may need to be copied/moved and renamed due to the SDP structure.
For example, an HA server might have stored its journals in /p4/1/checkpoints.ha_bos (assuming it was created by mkrep.sh with serverid p4d_ha_bos).
As a result of failover, these files need to be copied/moved to /p4/1/checkpoints.
The reason different directories are used is that in some installations the /hxdepots filesystem is shared over NFS between the commit server and the HA server, and we don't want to risk overwriting these files.
If these files are not present, then normal SDP crontab tasks such as
The following command will rotate the journal and replay any missing ones to offline_db (it is both fairly quick and safe to run without placing much load on the server host, as it doesn't do any checkpointing):
If it fails, then check /p4/1/logs/checkpoint.log for details - it may have an error such as:
Replay journal /p4/1/checkpoints/p4_1.jnl.123 to offline db.
Perforce server error: open for read: /p4/1/checkpoints/p4_1.jnl.123: No such file or directory
This indicates missing journals which will need to be moved/copied as above.
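The move/copy of journals described above can be done with a no-clobber copy, so that any file already present in the target directory is never overwritten (important when /hxdepots is shared over NFS). A sketch, using the directory names from the example above; cp -n is a GNU/BSD extension:

```shell
# Copy rotated journals from the HA checkpoints directory to the standard
# SDP location, skipping any file that already exists at the destination.
copy_journals() {
  src="$1"; dst="$2"
  for j in "$src"/p4_1.jnl.*; do
    [ -e "$j" ] || continue
    cp -n "$j" "$dst"/ 2>/dev/null || true   # -n: never overwrite
  done
}

# Hypothetical usage:
# copy_journals /p4/1/checkpoints.ha_bos /p4/1/checkpoints
```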
2.3.2. Check on Replication
We recommend that you connect to all your replicas/proxies/brokers and make sure that they are successfully working after failover.
It is surprisingly common to find forgotten configuration details, meaning that, for example, they are attempting to connect to the old server!
For proxies and brokers - you probably just need to run:
For downstream replicas of any type, we recommend logging on to the host and running:
p4 pull -lj
and checking for any errors.
We also recommend the following is executed on both HA server and all replicas and the output examined for any unexpected errors:
grep -A4 error: /p4/1/logs/log
Or you can review the contents of /p4/1/logs/errors.csv if you have enabled structured logging.
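The log check above can be wrapped in a small helper to run on each host. A sketch; the log location is the standard SDP one used in this guide:

```shell
# Show error lines (with four lines of context) from a p4d log file, as
# recommended above; prints a note when the file has no matches.
scan_log() {
  logfile="$1"
  if grep -A4 'error:' "$logfile"; then
    :   # matches were printed by grep
  else
    echo "no 'error:' lines found in $logfile"
  fi
}

# Hypothetical usage on a replica:
# scan_log /p4/1/logs/log
```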
2.4. Failing over on Windows
The basic steps are the same as for on Unix, but with some extra steps at the end.
After the p4 failover --yes command has completed its work (on the HA server machine):
Review the settings for the Windows service (examples are for instance 1) - note below that -S is uppercase:
p4 set -S p4_1
C:\p4\1>p4 set -S p4_1
:
P4JOURNAL=c:\p4\1\logs\journal (set -S)
P4LOG=c:\p4\1\logs\p4d_ha_aws.log (set -S)
P4NAME=p4d_ha_aws (set -S)
P4PORT=1666 (set -S)
P4ROOT=c:\p4\1\root (set -S)
:
Change the values of P4NAME and P4LOG to the correct values for the master:
p4 set -S p4_1 P4NAME=master
p4 set -S p4_1 P4LOG=c:\p4\1\logs\master.log
And re-check the output of
p4 set -S p4_1
Restart the service:
c:\p4\common\bin\svcinst stop -n p4_1
c:\p4\common\bin\svcinst start -n p4_1
Run p4 configure show to check that the output is as expected for the above values.
2.4.1. Post Failover on Windows
This is slightly different for the Windows and Linux SDP, since there is currently no equivalent of mkrep.sh for Windows, and replication topologies on Windows are typically smaller and simpler.
Thus Windows HA instances are likely to have journals rotated into
c:\p4\1\checkpoints instead of something like
However, it is still worth ensuring things like:
offline_db on HA is up-to-date
all triggers (including any Swarm triggers) are appropriately configured
daily_backup.bat will work appropriately after failover
3. Unplanned Failover
In this case there is no active participation of upstream server, so there is an increased risk of lost data.
We assume we are still failing over to the HA machine, so:
Failover target is p4d_ha_bos
Server spec still has Options: mandatory set
Original master is not running
The output of p4 failover on the DR machine might be:
Checking if failover might be possible ...
Server ID must be specified in the '-s' or '--serverid' argument for a failover without the participation of the server from which failover is occurring.
Checking for archive file content not transferred ...
Verifying content of recently updated archive files ...
After addressing any reported issues that might prevent failover, use --yes or -y to execute the failover.
Run p4 failover with the extra parameter to specify the server we are failing over from:
p4 failover --serverid master_1 --yes
Expected output is somewhat shorter than for planned failover:
Starting failover process ...
Waiting for 'pull -L' to complete its work ...
Checking for archive file content not transferred ...
Verifying content of recently updated archive files ...
Moving latest journalcopy'd journal into place as the active journal ...
Updating configuration of the failed-over server ...
Restarting this server ...
3.1. Post Unplanned Failover
This is similar to Section 2.3, “Post Failover” with the exception of the next section below.
3.1.1. Resetting Downstream Replicas
In an unplanned failover scenario it is possible that there is a journal synchronisation problem with downstream replicas.
The output of p4 pull -lj may indicate an error, and/or there may be errors in the log:
grep -A4 error: /p4/1/logs/log | less
If you need to reset the replica to restart from the beginning of the current journal it is attempting to pull, then the process is:
Stop the replica:
cd /p4/1/root
mv state save/
Restart the replica
Recheck the log for errors as above.
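The reset steps above can be expressed as a small function. This is a sketch, not an SDP script; stopping and restarting the replica is left to your usual init mechanism:

```shell
# Move the replica's state file aside (into a "save" subdirectory) so the
# replica restarts its pull from the beginning of the current journal.
# Run this only while the replica is stopped.
reset_replica_state() {
  root="$1"
  mkdir -p "$root/save"
  if [ -f "$root/state" ]; then
    mv "$root/state" "$root/save/"
  fi
}

# Hypothetical usage (with the replica stopped):
# reset_replica_state /p4/1/root
```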
4. Old style failover
This does not use the p4 failover command (so is valid for pre-2018.2 p4d versions).