= Server Deployment Package (SDP) for Perforce Helix: SDP Failover Guide (for Unix and Windows)
Perforce Professional Services <consulting@perforce.com>
:revnumber: v2020.1
:revdate: 2020-09-12
:doctype: book
:icons: font
:toc:
:toclevels: 5
:sectnumlevels: 4
:xrefstyle: full

== Preface

The Server Deployment Package (SDP) is the implementation of Perforce's recommendations for operating and managing a production Perforce Helix Core Version Control System. It is intended to provide the Helix Core administration team with tools to help achieve:

* High Availability (HA)
* Disaster Recovery (DR)

This guide provides instructions for failover in an SDP environment using built-in Helix Core features.

For more details see:

* http://www.perforce.com/manuals/p4sag/Content/P4SAG/failover.html#Failover[Sysadmin Guide - Failover]

*Please Give Us Feedback*

Perforce welcomes feedback from our users. Please send any suggestions for improving this document or the SDP to consulting@perforce.com.

:sectnums:

== Overview

We need to consider planned vs unplanned failover. A planned failover may be required to upgrade the core Operating System or some other dependency in your infrastructure, or for similar maintenance activity. Unplanned failover covers the risks you are seeking to mitigate:

* loss of a machine, or some machine-related hardware failure
* loss of a VM cluster
* failure of storage
* loss of a data center or machine room
* etc...

See also https://www.perforce.com/manuals/cmdref/Content/CmdRef/p4_failover.html[p4 failover in Command Reference Guide]

Please refer to the following sections:

* <<SDP_Guide.Unix.adoc#_planning_for_ha_and_dr,SDP Guide: Planning for HA and DR>>
* <<SDP_Guide.Unix.adoc#_pre_requisites_for_failover,SDP Guide: Pre-requisites for Failover>>

=== Planning

HA failover should not require a P4PORT change for end users. Depending on your topology, you can avoid changing P4PORT by having users set P4PORT to an alias which can be easily changed centrally. For example:

* `p4d-bos-01`, a master/commit-server in Boston, pointed to by a DNS name like `perforce` or `perforce.p4demo.com`.
* `p4d-bos-02`, a standby replica in Boston, not pointed to by DNS until failover, at which time it gets pointed to by `perforce`/`perforce.p4demo.com`.
* Alternatively, changing the Perforce Broker configuration to target the backup machine.

See <<SDP_Guide.Unix.adoc#_server_host_naming_conventions,SDP Guide: Server Host Naming Conventions>>

Other advanced networking options might be possible if you talk to your local networking gurus (virtual IPs etc).

== Planned Failover

In this instance you can run `p4 failover` with the active participation of its upstream server.

We are going to provide examples with the following assumptions:

* ServerID `master_1` is the current commit server, running on machine `p4d-bos-01`
* ServerID `p4d_ha_bos` is the HA server
* DNS alias `perforce` is set to `p4d-bos-01`

=== Prerequisites

You need to ensure:

. You are running p4d 2018.2 or later for your commit and all replica instances, preferably 2020.1+:
+
 source /p4/common/bin/p4_vars 1
 p4 info | grep version

. Your failover target server instance is of type `standby` or `forwarding-standby`.
+
On HA machine:
+
 p4 info
 :
 ServerID: p4d_ha_bos
 Server services: standby
 Replica of: perforce:1999
 :

. It has `Options: mandatory` set in its server spec:
+
 p4 server -o p4d_ha_bos | grep Options
 Options: mandatory

. You have a valid `license` installed in `/p4/1/root` (<instance> root).
+
On HA machine:
+
 cat /p4/1/license

. Monitoring is enabled - so the following works:
+
 p4 monitor show -al
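+
Monitoring is a prerequisite for `p4 failover`. If it is not already enabled, the following illustrates one way to turn it on (a sketch only - depending on your p4d release, a server restart may be needed for the change to fully take effect):
+
 p4 configure set monitor=1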

. DNS changes are possible so that downstream replicas can seamlessly connect to the HA server.

. Current `pull` status is valid:
+
 p4 pull -lj

. You have a valid `offline_db` for the HA instance.
+
Check that the sizes of the `db.*` are similar - compare output:
+
 ls -lhSr /p4/1/offline_db/db.* | tail
 ls -lhSr /p4/1/root/db.* | tail
+
Check the current journal counter and compare against the live journal counter:
+
 /p4/1/bin/p4d_1 -r /p4/1/offline_db -jd - db.counters | grep journal
 p4 counters | grep journal

. Check all defined triggers will work (see next section) - including Swarm triggers.

. Check authentication will work (e.g. LDAP configuration).

. Check firewall for HA host - ensure that other replicas will be able to connect on the appropriate port to the instance (using the DNS alias).

==== Pre Failover Checklist

It is important to perform these checks before any planned failover, and also to make sure they are considered prior to any unplanned failover.

===== Swarm Triggers

If Swarm is installed, ensure:

. Swarm trigger is installed on the HA machine (could be executed from a checked-in depot file).
+
Typically installed (via package) to `/opt/perforce/swarm-triggers/bin/swarm-trigger.pl`, but it can be installed anywhere on the filesystem.
+
Execute the trigger to ensure that any required Perl modules are installed:
+
 perl swarm-trigger.pl
+
Note that things like `JSON.pm` can often be installed with:
+
 sudo apt-get install libjson-perl
 sudo yum install perl-JSON

. Swarm trigger configuration file has been copied over from the commit server to the appropriate place. This defaults to one of:
+
 /etc/perforce/swarm-trigger.conf
 /opt/perforce/etc/swarm-trigger.conf
 swarm-trigger.conf (in the same directory as trigger script)

===== Other Triggers

Checklist:

* Make sure that the appropriate version of `perl`, `python`, `ruby` etc are installed in locations as referenced by `p4 triggers` entries.
* Make sure that any relevant modules have also been installed (e.g. `P4Python` or `P4Perl`).

===== Other Replicas' P4TARGET

Review the settings for other replicas, and also check the live replicas on the source server of the failover (`p4 servers -J`):

 p4 configure show allservers | grep P4TARGET

Make sure the above settings are using the correct DNS alias (which will be redirected).

===== Proxies

These are typically configured via `/p4/common/bin/p4_1.vars` settings:

 export PROXY_TARGET=

Ensure the target is the correct DNS alias (which will be redirected).

===== Brokers

These are typically configured via `/p4/common/config/p4_1.broker.cfg`

Ensure the config file correctly identifies the appropriate `target` server using the correct DNS alias (which will be redirected).

===== HA Server OS Configuration

Check to make sure any performance configuration, such as turning off THP (transparent huge pages) and putting `serverlocks.dir` into a RAM filesystem, has also been made to your HA failover system.

See <<SDP_Guide.Unix.adoc#_maximizing_server_performance,SDP Guide: Maximizing Server Performance>>

=== Failing over

The basic actions are the same for Unix and Windows, but there are extra steps required as noted in <<_failing_over_on_windows>>.

. Run `p4 failover` in reporting mode on the HA machine:
+
 p4 failover
+
Successful output looks like:
+
 Checking if failover might be possible ...
 Checking for archive file content not transferred ...
 Verifying content of recently updated archive files ...
 After addressing any reported issues that might prevent failover, use --yes or -y to execute the failover.
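+
If you are not logged on to the HA machine itself, the same report-only check can be pointed at the standby explicitly. This is purely illustrative - the host, port and user shown are examples, not values from your environment:
+
 p4 -p p4d-bos-02:1999 -u perforce failover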

. Perform failover:
+
 p4 failover --yes
+
Output should be something like:
+
 Starting failover process ...
 Refusing new commands on server from which failover is occurring ...
 Giving commands already running time to complete ...
 Stalling commands on server from which failover is occurring ...
 Waiting for 'journalcopy' to complete its work ...
 Waiting for 'pull -L' to complete its work ...
 Waiting for 'pull -u' to complete its work ...
 Checking for archive file content not transferred ...
 Verifying content of recently updated archive files ...
 Stopping server from which failover is occurring ...
 Moving latest journalcopy'd journal into place as the active journal ...
 Updating configuration of the failed-over server ...
 Restarting this server ...
+
During this time, if you run commands against the master, you may see:
+
 Server currently in failover mode, try again after failover has completed

. Change the DNS entries so downstream replicas (and users) will connect to the new master (that was previously the HA server).

. Validate that your downstream replicas are communicating with your new master.
+
On each replica machine:
+
 p4 pull -lj
+
Or against the new master:
+
 p4 servers -J
+
Check output of `p4 info`:
+
 :
 Server address: p4d-bos-02
 :
 ServerID: master_1
 Server services: commit-server
 :

. Make sure the old server spec (`p4d_ha_bos`) has correctly had its `Options:` field set to `nomandatory` (otherwise all replication would stop!).

=== Post Failover

==== Moving of Checkpoints

After failing over, on Unix there will be journals which need to be copied/moved and renamed due to the SDP structure.

For example, an HA server might have stored its journals in `/p4/1/checkpoints.ha_bos` (assuming it was created by `mkrep.sh` with serverid `p4d_ha_bos`):

 /p4/1/checkpoints.ha_bos/p4_1.ha_bos.jnl.123
 /p4/1/checkpoints.ha_bos/p4_1.ha_bos.jnl.124

As a result of failover, these files need to be copied/moved to:

 /p4/1/checkpoints/p4_1.jnl.123
 /p4/1/checkpoints/p4_1.jnl.124

The reason different directories are used is that in some installations the `/hxdepots` filesystem is shared on NFS between the commit server and the HA server, and we don't want to risk overwriting these files.

IMPORTANT: If these files are not present, then normal SDP crontab tasks such as `daily_checkpoint.sh` will fail as they won't be able to find the required journals to be applied to the `offline_db`.

The following command will rotate the journal and replay any missing ones to `offline_db` (it is both fairly quick and safe to run without placing much load on the server host as it doesn't do any checkpointing):

 /p4/common/bin/rotate_journal.sh 1

If it fails, then check `/p4/1/logs/checkpoint.log` for details - it may have an error such as:

 Replay journal /p4/1/checkpoints/p4_1.jnl.123 to offline db.
 Perforce server error:
 open for read: /p4/1/checkpoints/p4_1.jnl.123: No such file or directory

This indicates missing journals which will need to be moved/copied as above.

==== Check on Replication

We recommend that you connect to all your replicas/proxies/brokers and make sure that they are successfully working after failover. It is surprisingly common to find forgotten configuration details which mean, for example, that they are still attempting to connect to the old server!

For proxies and brokers - you probably just need to run:

 p4 info

For downstream replicas of any type, we recommend logging on to the host and running:

 p4 pull -lj

and checking for any errors.
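As a rough sketch of how this can be done from a single administration host instead of logging on to each machine (the replica addresses and user below are illustrative only, and a valid super user login is required on each target):

 # Hypothetical list of replica P4PORT values - substitute your own.
 for replica in p4d-replica-01:1666 p4d-edge-01:1666; do
     echo "=== $replica ==="
     p4 -p "$replica" -u perforce pull -lj
 done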
We also recommend that the following is executed on both the HA server and all replicas, and the output examined for any unexpected errors:

 grep -A4 error: /p4/1/logs/log

Or you can review the contents of `/p4/1/logs/errors.csv` if you have enabled structured logging.

=== Failing over on Windows

The basic steps are the same as for Unix, but with some extra steps at the end.

After the `p4 failover --yes` command has completed its work (on the HA server machine):

. Review the settings for the Windows service (examples are for instance `1`) - note below *-S* is uppercase:
+
 p4 set -S p4_1
+
Example results:
+
 C:\p4\1>p4 set -S p4_1
 :
 P4JOURNAL=c:\p4\1\logs\journal (set -S)
 P4LOG=c:\p4\1\logs\p4d_ha_aws.log (set -S)
 P4NAME=p4d_ha_aws (set -S)
 P4PORT=1666 (set -S)
 P4ROOT=c:\p4\1\root (set -S)
 :

. Change the value of `P4NAME` and `P4LOG` to the correct value for `master`:
+
 p4 set -S p4_1 P4NAME=master
 p4 set -S p4_1 P4LOG=c:\p4\1\logs\master.log
+
And re-check the output of `p4 set -S p4_1`

. Restart the service:
+
 c:\p4\common\bin\svcinst stop -n p4_1
 c:\p4\common\bin\svcinst start -n p4_1

. Run `p4 configure show` to check that the output is as expected for the above values.

==== Post Failover on Windows

This is slightly different for the Windows and Linux SDP, since there is currently no equivalent of `mkrep.sh` for Windows, and replication topologies on Windows are typically smaller and simpler.

Thus Windows HA instances are likely to have journals rotated into `c:\p4\1\checkpoints` instead of something like `c:\p4\1\checkpoints.ha_bos`.

However, it is still worth ensuring things like:

* `offline_db` on HA is up-to-date
* all triggers (including any Swarm triggers) are appropriately configured
* `daily_backup.bat` will work appropriately after failover

== Unplanned Failover

In this case there is no active participation of the upstream server, so there is an increased risk of lost data.

We assume we are still failing over to the HA machine, so:

* Failover target is `standby` or `forwarding-standby`
* Server spec still has `Options:` set to `mandatory`
* Original master is not running

The output of `p4 failover` on the DR machine might be:

 Checking if failover might be possible ...
 Server ID must be specified in the '-s' or '--serverid' argument for a failover without the participation of the server from which failover is occurring.
 Checking for archive file content not transferred ...
 Verifying content of recently updated archive files ...
 After addressing any reported issues that might prevent failover, use --yes or -y to execute the failover.

. Execute `p4 failover` with the extra parameter to specify the server we are failing over from:
+
 p4 failover --serverid master_1 --yes
+
Expected output is somewhat shorter than for planned failover:
+
 Starting failover process ...
 Waiting for 'pull -L' to complete its work ...
 Checking for archive file content not transferred ...
 Verifying content of recently updated archive files ...
 Moving latest journalcopy'd journal into place as the active journal ...
 Updating configuration of the failed-over server ...
 Restarting this server ...

=== Post Unplanned Failover

This is similar to <<_post_failover>> with the exception of the next section below.

==== Resetting Downstream Replicas

In an unplanned failover scenario it is possible that there is a journal synchronisation problem with downstream replicas.
The output of `p4 pull -lj` may indicate an error, and/or there may be errors in the log:

 grep -A4 error: /p4/1/logs/log | less

If you need to reset the replica to restart from the beginning of the current journal it is attempting to pull, then the process is:

. Stop the replica:
+
 /p4/1/bin/p4d_1_init stop

. Remove the `state` file:
+
 cd /p4/1/root
 mv state save/

. Restart the replica:
+
 /p4/1/bin/p4d_1_init start

. Recheck the log for errors as above.

=== Unplanned Failover on Windows

The extra steps required are basically the same as in <<_failing_over_on_windows>>, as well as the steps in <<_post_unplanned_failover>>.

== Old style failover

This does not use the `p4 failover` command (so is valid for pre-2018.2 p4d versions).

See: https://community.perforce.com/s/article/2495[KB - Failing over to a Replica]