= SDP Legacy Upgrade Guide (for Unix)
Perforce Professional Services
:revnumber: v2020.1
:revdate: 2021-02-26
:doctype: book
:icons: font
:toc:
:toclevels: 5
:sectnumlevels: 5
:xrefstyle: full
// Attribute for ifdef usage
:unix_doc: true

== Preface

This document provides an overview of the process to upgrade the Perforce Helix Server Deployment Package (SDP) from any older version (dating back to 2007) to the SDP 2020.1 release, also referred to as "r20.1".

If your SDP version is 2020.1 or newer, refer to the link:SDP_Guide.Unix.html[SDP Guide (Unix)] for instructions on how to upgrade from SDP 2020.1 to any later version. Starting from SDP 2020.1, the SDP upgrade procedure is aided by an automated and incremental upgrade mechanism similar to that of p4d itself, capable of upgrading the SDP from the current release to any future version, so long as the current release is SDP 2020.1 or newer. This document describes the process of upgrading to SDP 2020.1.

*Please Give Us Feedback*

Perforce welcomes feedback from our users. Please send any suggestions for improving this document or the SDP to consulting@perforce.com.

:sectnums:
== Overview

The Perforce Server Deployment Package (SDP), just like the Helix Core software it manages, evolves over time and requires occasional upgrades to remain supported. Further, patches may be released over time. This document discusses how to upgrade the SDP, and when to do so in relation to upgrading Helix Core itself.

=== Upgrade Order: SDP first, then Helix P4D

The SDP should be upgraded prior to the upgrade of Helix Core (P4D). If you are planning to upgrade P4D to or beyond P4D 2019.1 from a prior version of P4D, you __must__ upgrade the SDP first. If you run multiple instances of P4D on a given machine (potentially each running different versions of P4D), upgrade the SDP first before upgrading any of the instances.
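
Knowing which SDP version is currently installed on each machine determines the required upgrade path. The following is a minimal sketch for checking it; the file locations are assumptions that vary by SDP vintage (newer releases record the version in `/p4/sdp/Version`, while older installations may only embed a version string in `/p4/common/bin/p4_vars`):

```shell
# Sketch: report the currently installed SDP version (assumed file locations).
# /p4/sdp/Version exists in newer SDP releases; older installations may only
# embed a version string in /p4/common/bin/p4_vars.
sdp_version="unknown"
if [ -r /p4/sdp/Version ]; then
    sdp_version=$(cat /p4/sdp/Version)
elif [ -r /p4/common/bin/p4_vars ]; then
    sdp_version=$(grep -i 'SDP_VERSION' /p4/common/bin/p4_vars | head -1)
    [ -n "$sdp_version" ] || sdp_version="unknown"
fi
echo "Installed SDP version: $sdp_version"
```

If the result is "unknown", inspect `/p4/common/bin/p4_vars` and the SDP scripts manually to determine the vintage.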

The SDP should also be upgraded before upgrading other Helix software on machines using the SDP, including P4D, P4P, P4Broker, and the 'p4' command line client on the server machine. Even if not strictly required, upgrading the SDP first is **strongly** recommended, as the SDP keeps pace with changes in the p4d upgrade process, and can ensure a smooth upgrade.

=== Upgrading Helix P4D and Other Software

See the link:SDP_Guide.Unix.html[SDP Guide (Unix)] for instructions on how to upgrade Helix binaries in the SDP structure after the SDP has been upgraded to 2020.1 or later.

=== SDP and P4D Version Compatibility

The SDP is often forward- and backward-compatible with P4D versions. However, for best results they should be kept in sync by upgrading SDP before P4D. This is partly because the SDP contains logic to upgrade P4D, which can change as P4D evolves. The SDP is aware of the P4D version(s) it manages, and has backward-compatibility logic to support older versions of P4D. This is guaranteed for supported versions of P4D. Backward compatibility of the SDP with older versions of P4D may extend farther back than officially supported versions, though without the "officially supported" guarantee.

=== SDP Upgrade Methods

There are several methods for upgrading to a new version of the SDP:

* *In-Place, Manual Upgrades*: Manual upgrades of the SDP must be performed to upgrade from versions older than r20.1 in-place on existing machines. This document provides details on how to do this.
* *In-Place, Automated Upgrades*: Automation-assisted in-place upgrades can be done if your current SDP version is 2020.1 or later, and you are upgrading to 2021.1 or later. Refer to documentation in the link:SDP_Guide.Unix.html[SDP Guide (Unix)] for upgrading _from_ SDP 2020.1 onward.
* *Migration-Style Upgrades*: A migration-style upgrade is one in which the existing server machines (virtual or physical) are left in place, and brand new "green field" machines are installed fresh using the https://swarm.workshop.perforce.com/projects/perforce_software-helix-installer[Helix Installer] (which installs the latest SDP on a "green field" baseline machine with only the operating system installed). Then the Helix Core data is migrated from the existing hardware to the new hardware. This approach is especially appealing when upgrading other aspects of the infrastructure at the same time as the SDP, such as the hardware and/or operating system. Migration-style upgrades require new hardware, and provide a straightforward rollback option because the original hardware is left in place.
* *Custom with HMS*: The https://swarm.workshop.perforce.com/projects/perforce_software-hms[Helix Management System] is used by some customers. See <>

== Upgrade Planning - In-Place Upgrades

Legacy SDP upgrades require some familiarization that should be done prior to scheduling an upgrade. The following information is useful for planning SDP in-place upgrades.

=== Plan What Is Being Upgraded

Presumably if you are reading this, you are intending to upgrade the SDP. During planning, you'll want to decide whether to upgrade only the SDP, or to also upgrade Helix Core software in the same maintenance window. Tactically, the SDP upgrade is done first. However, upgrading SDP and Helix Core can be done in the same maintenance window, so both upgrade tasks can be done in one upgrade session. Alternately, Helix Core can be upgraded at a later date after the SDP upgrade.

TIP: Decide whether Helix Core will be upgraded during the same work session that the SDP is upgraded.

=== Upgrade Duration

Key questions to answer during upgrade planning are:

* Will p4d need to be taken offline?
* If so, how long?

Most SDP upgrades to r20.1 will require downtime for the Helix Core (p4d) server, even if p4d is not being upgraded. The only exception to requiring downtime is if the current SDP is r2019._x_ *and* your SDP is _not_ configured to use `systemd`, i.e. does not have `/etc/systemd/system/p4_*.service` files. (Most 2019-2021 era SDP deployments use systemd, so this is a narrow exception.)

If the current SDP *and* P4D versions are both r2019.1 or later, the downtime can be brief. The scripts can be upgraded while the server is live, stopping p4d only long enough to replace the systemd `*.service` files with new ones.

If your SDP is older than 2019.1, other changes will be needed depending on your SDP version, which will extend the downtime for p4d needed to upgrade the SDP. Read on to get a sense for what steps are required, which vary based on your SDP version.

If you are upgrading P4D along with the SDP, and your P4D version is older than 2019.1, live checkpoints are recommended after the upgrade is complete. This can significantly extend the downtime required. The "to-or-thru P4D 2019.1" upgrades involve significant Helix Core database structural changes. If your Helix topology includes edge servers, you'll want to account for taking checkpoints of the edge servers in "to-or-thru 2019.1" upgrade planning; edge checkpoints can occur in parallel with the checkpoint on the master server to reduce overall upgrade process duration.

.Drivers of Downtime Duration
****
The big drivers of required downtime duration are:

* Whether the SDP and P4D are both already at 2019.1 or later.
* Is P4D being upgraded to-or-thru 2019.1? If yes, significantly longer downtime is needed, both for the p4d upgrade process and for taking a live checkpoint after.
* Time required to execute the SDP upgrade steps that you'll define in detail with information from later in this document. The older the SDP version, the more steps required.
* Sophisticated global topologies with many machines take longer for these one-time legacy upgrades, due to needing to run commands on multiple machines.

Note that, once on SDP 2020.1+ and using P4D 2019.1+, we do not expect any future upgrades to require extended downtime. Upgrade simplification and downtime reduction are priorities for both Helix Core and the SDP.
****

=== SDP Machines and Instances

Early in your planning, you'll want to take stock of all server machines on which the SDP is to be upgraded. For each machine, you'll need to be aware of what SDP instances need to be upgraded on that machine. For each instance on any given machine, determine what servers or services are in place. For example, a given machine might be a master p4d server for one instance, a replica for another, and a simple proxy for a third instance.

=== Mount Point Names

You will need to be aware of the three standard SDP mount points. While referred to as "mount points" in this document, in any given installation, any or all of the three SDP mount points may be simple directories on the root storage volume, or symlinks to some other storage volume. In some installations, fewer than three volumes were used, and in some cases four were used (two for metadata). In some cases the operating system root volume was used as one of the volumes. Investigate and be aware of how your installation was configured. Comparing the output of `pwd` and `pwd -P` in the same directory can be informative.

The mount points do not necessarily need to be changed during the SDP upgrade process, as the SDP structural design has always been, and remains, flexible with respect to mount point names. However, understanding whether the "mount points" are actual mount points, regular directories, or symlinks is something to be aware of for detailed planning.

In the examples below, the modern SDP mount point names are used:

* `/hxdepots` - Volume for versioned files and rotated/numbered metadata journals.
* `/hxmetadata` - Volume for active and offline metadata. In some cases the single `/hxmetadata` is replaced with the pair `/hxmetadata1` and `/hxmetadata2`.
* `/hxlogs` - Volume for active journal and various logs.

Depending on the version of the SDP, the above values may be used, or earlier defaults such as:

* `/depotdata` for `/hxdepots`
* `/metadata` for `/hxmetadata`
* `/logs` for `/hxlogs`

In some cases, custom values were used, like `/p4depots`, `/p4db`, `/p4jnl`, etc. In these cases, it is important to know which standard names are referred to by the local names. In the sample steps in this document, adapt the steps to substitute your local mount point names for the standard names used here. If your site uses two volumes for metadata, `/hxmetadata1` and `/hxmetadata2`, continue using those same names.

=== SDP Structure

If your SDP is 2019.1 or newer, skip this section.

Become familiar with the pre-2019.1 and SDP 2019.1+ structures. (Note: This is not related to the P4D version.)

TIP: Determine if SDP structural changes are needed.

==== SDP Pre-2019.1 Structure

Before SDP 2019.1:

* `/p4` is a directory on the operating system root volume, `/`.
* `/p4/__N__` is a symlink to a directory that is typically the mount point for a storage volume (`/hxdepots` by default).
* `/p4/__N__` contains symlinks for `/hxdepots`, `/hxmetadata`, and `/hxlogs`, as well as tickets and trust files.

==== SDP 2019.1+ Structure

In SDP 2019.1+:

* `/p4` is a directory on the operating system root volume, `/` (same as the pre-2019.1 structure).
* `/p4/__N__` is a local directory on the operating system root volume.
* `/p4/__N__` contains symlinks for `/hxdepots`, `/hxmetadata`, and `/hxlogs`, as well as tickets and trust files (same as the pre-2019.1 structure).
* `/p4/__N__/bin` is a local directory on the operating system root volume. The `bin` directory is the only actual directory in `/p4/__N__`; other items are files or symlinks to directories.
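
The difference between the two structures can be checked quickly per instance: in the pre-2019.1 structure the instance path is a symlink, while in 2019.1+ it is a real directory. The sketch below demonstrates the test against a throwaway mock of the 2019.1+ layout; on a real server, set `p4home` to your actual instance path (e.g. `/p4/1`) instead:

```shell
# Sketch: detect pre-2019.1 vs 2019.1+ SDP structure for an instance.
# Demonstrated against a throwaway mock directory; on a real server,
# set p4home=/p4/1 (or your instance name) instead.
mock=$(mktemp -d)
mkdir -p "$mock/p4/1/bin"   # 2019.1+ style: /p4/N is a real local directory

p4home="$mock/p4/1"
if [ -L "$p4home" ]; then
    structure="pre-2019.1 (instance symlink)"
else
    structure="2019.1+ (instance directory)"
fi
echo "Structure for $p4home: $structure"
rm -rf "$mock"
```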

TIP: The `verify_sdp.sh` script (included in the SDP starting with SDP 2019.1) gives errors if the 2019.1+ SDP structure is not in place.

Converting the SDP structure in-place to the new style requires downtime on the edge/replica of interest.

=== Topology Option: Shared Archives

A topology option with SDP deployments is to share the `/hxdepots` mount point across machines, e.g. with NFS. SDP upgrade procedures involve updating the `/p4/common` directory that is physically on the `/hxdepots` volume. When updating `/p4/common` on one machine, be aware that any changes will be immediately visible on all other machines that share from the same NFS network location.

TIP: Be aware of any shared `/hxdepots` volumes when planning SDP upgrades.

=== P4HOME: Dir or Symlink

In current and all legacy SDP installations, including all topology variations, the top-level `/p4` directory is always a regular directory on the local machine. However, the instance directory, e.g. the `1` in `/p4/1/root`, might be a directory or a symlink depending on your SDP version.

In the modern SDP, the `/p4/_N_` directory (where `_N_` is the SDP instance, e.g. `1`) is also a regular directory on the local machine. This `/p4/_N_` directory is referred to as the `P4HOME` directory for the instance. In older versions of the SDP, the `_N_` was a symlink rather than a local directory.

Check with a command like `ls -l /p4/_N_` to determine if the P4HOME dir is a symlink or a directory. If it is a symlink, the upgrade procedure will need to change it to a regular directory.

TIP: If the `_N_` in `/p4/_N_` is a symlink rather than a directory, you'll need to change that to a directory in the upgrade procedure.

=== Instance Bin: Dir or Symlink

In the modern SDP, the "instance bin" directory, `/p4/_N_/bin`, is a local directory. As with P4HOME, on older versions of the SDP, `/p4/_N_/bin` may be a directory or symlink.

Check with a command like `ls -l /p4/_N_/bin` to determine whether that is a directory or a symlink.

TIP: If the `bin` in `/p4/_N_/bin` is a symlink rather than a directory, you'll need to change that to a directory in the upgrade procedure.

=== Operating System User (OSUSER)

You will need to be aware of the operating system user (OSUSER) that `p4d` runs as in your environment. The sample upgrade steps below assume that Perforce runs as the `perforce` operating system user, which is typical. You do not need to change it, but you will need to adapt the samples below if your OSUSER is something other than `perforce`. The OSUSER should be dedicated to operating the Perforce Helix Core and related Helix services.

=== Home Directory for OSUSER

In modern installations, the default home directory is `/home/perforce`, though in some older installations the home directory is `/p4`. In either case, this does not need to be changed during the upgrade process.

NOTE: As a general guideline, we recommend using a local home directory for the account under which p4d runs, as opposed to an auto-mounted home directory. Using an auto-mounted home directory can be a source of operational instability. While changing to a local directory is not a hard requirement, it is recommended.

=== Metadata Symlink Type

Depending on how old the SDP in place is, the structure will have either _Fixed_ or _Variable_ Metadata Symlinks. Determine which you have. To determine this, log in as the OSUSER (e.g. `perforce`), and run a command like this sample (for instance `1`):

 ls -l /p4/1/root /p4/1/offline_db

The `root` and `offline_db` will always be symlinks in all versions of the SDP. However, they might be fixed or variable.

*Variable Metadata Symlink References*

If one of the symlinks points to a directory ending in `db1`, and the other in `db2` (it doesn't matter which is pointing to which), you have *variable metadata symlinks*.

*Fixed Metadata Symlink References*

If the targets of the `root` and `offline_db` symlinks point to directories ending in those same names, i.e. `root` and `offline_db`, then you have *fixed metadata symlinks*.

TIP: If you have *fixed metadata symlinks*, your upgrade procedure will need to convert them to *variable metadata symlinks*, per examples below.

=== Init Mechanism: Systemd or SysV

You will need to determine whether you are running with the systemd or SysV init mechanism. Generally, newer operating systems like RHEL/CentOS 7 & 8, Ubuntu 18.04 and 20.04, and SuSE 12 & 15 will run with the `systemd` init mechanism. Older ones likely use the SysV init scripts (with some exceptions). If the command `systemctl` exists in your path, then you are running a system that supports systemd.

== Upgrade Procedure - In-Place Upgrades

With the familiarization and planning from the last section complete, move on to defining an upgrade procedure for your topology. The procedure is broken into three phases:

* *Preparation*: Preparation steps can be done in a non-disruptive manner on a production server ahead of the Execution phase, possibly days or more ahead of the target date for the actual upgrade.
* *Execution*: Execution steps are generally performed in a scheduled maintenance window.
* *PostOp*: PostOp steps are done some time after the upgrade is complete, perhaps days or weeks later. For example, some cleanup is done in PostOp. Often the upgrade procedure leaves copies of various files and directories around to support a fast abort of the upgrade. Such files are comforting to have around during the upgrade procedure, but after a time become clutter and should be removed.

=== Preparation

Preparation steps are detailed below.

WARNING: This procedure involves deploying new files on the production server ahead of the actual upgrade. As prescribed, the steps avoid interaction with a live running p4d server and with the SDP script operations of the current SDP.

The preparation steps can be safely done without affecting behavior until you are ready to execute the upgrade. However, human error is always a possibility when working on a production server. Respect the machine, and type carefully!

==== Plan Communications

Crafting emails can be time consuming! Communications to end users should be prepared so that they are ready to send quickly during maintenance. For the most part, this type of "back end system" upgrade is transparent to end users other than the associated downtime. However, if you have power users granted direct SSH access to the server machines, such users should also be made aware of SDP changes.

==== Plan System Backups

If your environment has special backup capabilities, such as snapshots of key storage volumes and/or the entire machine, decide ahead of time whether they are to be utilized during this upgrade (generally before anything starts).

==== Run verify_sdp.sh

If your SDP is 2019.1 or newer, it will have the `verify_sdp.sh` script. Run it during preparation to ensure you have a good start state. Resolve issues detected by the script.

==== Plan User Lockout

There are a variety of strategies for locking out users during maintenance. Choose what combination to apply in your environment.

* **Protections table**: Create a near-empty "maintenance mode" Protections table that references only the P4USER used by the SDP (typically `perforce`) and any replication service users (`svc_*`). As maintenance starts, save the standard Protections table, and put the maintenance one in place. At the end of maintenance, bring the original one back in place. (Be aware of whether your site has any policies or custom automation that might interfere with this method.)
* **Temp Firewall Changes**: Network and/or host firewall rules can be used to block end users during maintenance. Be wary not to block replicas.
(Warning: This can be complex and hard to get right, and may involve coordination with other teams if the Perforce admin does not have direct control over such things. Choose this option with care.)
* **Temp P4PORT Change**: In some environments, the P4PORT and P4BROKERPORT configured in the Instance Vars files are changed to non-production values not known to users during maintenance, and switched back when done. If there are replicas, there is a need to run `p4d -cset` commands to change each replica's P4TARGET back and forth between production and maintenance mode values.
* **DFM Brokers**: Down For Maintenance (DFM) brokers can be used for some types of upgrade. DFM brokers cannot be used if SDP structure changes are needed, as all processes must be down in that case.

==== Acquire Downloads

Download the latest SDP tarball release from this link: https://swarm.workshop.perforce.com/projects/perforce-software-sdp/download/downloads/sdp.Unix.tgz. Copy the downloaded tarball to the machine and put it in `/hxdepots/sdp.Unix.tgz`. (If a file with the same name exists from a previous upgrade, move it aside first.)

==== Extract new `sdp` Directory

 mkdir /hxdepots/new
 cd /hxdepots/new
 tar -xzf /hxdepots/sdp.Unix.tgz
 cat /hxdepots/new/sdp/Version

Verify that the contents of the `Version` file are as expected.

==== Generate new SDP Shell Environment Files

The following SDP shell environment files are generated and should be reviewed, comparing new files generated with a `*.new` extension with the corresponding files in the existing installation (without the `.new` suffix). Be careful not to modify the production files; only update the `*.new` files.

The Shell Environment Files to generate for review are:

===== The p4_vars File

The single `p4_vars` file is the main SDP shell environment file.

Generate the new form of the file like this example, run as the `perforce` OSUSER:

 cd /p4/common/bin
 sed -e "s:REPL_OSUSER:$USER:g" -e "s:REPL_SDPVERSION:$(cat /hxdepots/new/sdp/Version):g" /hxdepots/new/sdp/Server/Unix/p4/common/config/p4_vars.template > p4_vars.new
 grep -E 'export KEEP.*=' p4_vars >> p4_vars.new

The old file may have custom `KEEP*` settings that need to be preserved; the `grep` command above handles preservation of the `KEEP*` settings.

===== Instance Vars Files

The `/p4/common/config` directory in the SDP contains `p4_N.vars` shell environment files, one per SDP instance (e.g. `p4_1.vars`, `p4_abc.vars`, etc.).

.What if there is no `/p4/common/config` ?
****
If the existing SDP does not have a `/p4/common/config` directory at all, as will be the case for very old versions of the SDP, you can safely create the `config` directory during preparation. In that case, leave off the `*.new` extension for files to be generated in `/p4/common/config`, i.e. the `p4_N.vars` files (one per instance) and optional `p4_N.p4review.cfg` files created in that directory. Create the `/p4/common/config` directory by copying from the new SDP area, like so:

 su - perforce
 cp -pr /hxdepots/new/sdp/Server/Unix/p4/common/config /p4/common/.
****

In the following example, replace `1 abc` with your actual list of SDP instance names, delimited by spaces. Note that if you have many machines in your topology, it is possible that each machine may have a different set of instances. You'll need to be aware of which instances are active on which machines. On any given machine with a given set of instances, do something like:

 su - perforce
 cd /p4/common/config
 for i in 1 abc; do cp /hxdepots/new/sdp/Server/Unix/p4/common/config/instance_vars.template p4_${i}.vars.new; done

The format of `p4_N.vars` files has evolved over time, so it is important to generate new files from the new template.

For each `p4_N.vars.new` file, search for the string `REPL_` in the file to find strings that need to be replaced (everywhere except in comment blocks). Settings to be replaced are:

* MAILTO
* MAILFROM
* P4USER
* P4MASTER_ID
* SSL_PREFIX
* P4PORTNUM
* P4BROKERPORTNUM
* P4MASTERHOST (appears in some older versions as P4MASTER)

As needed, refer to the original `p4_N.vars` files to retrieve values. Following is the complete list of settings to check to see if you have values defined in your original Instance Vars file that may differ from the new template:

* MAILTO
* MAILFROM
* P4USER
* P4MASTER_ID
* SSL_PREFIX
* P4PORTNUM
* P4BROKERPORTNUM
* P4MASTERHOST
* PROXY_TARGET
* PROXY_PORT
* P4DTG_CFG
* SNAPSHOT_SCRIPT
* SDP_ALWAYS_LOGIN
* SDP_AUTOMATION_USERS
* The 'umask' setting, which is set with a command like `umask 0026`.

==== Account for Typical Customization

If the SDP has been customized in your environment, custom upgrade procedures may be required. An understanding of what was customized and why will be useful in determining if custom upgrade procedures are required. In typical deployments, the SDP is not customized, or is only customized in some way that is no longer needed due to improvements in the "stock" SDP. Starting with the SDP 2020.1 release, there is a new mechanism to cleanly separate typical configuration changes from customizations.

The SDP `p4_vars` file and Instance Vars files may contain local custom modifications made by administrators or Perforce Consultants. When generating new Instance Vars files, be sure to review old Instance Vars files for any custom code or logic. The `p4_vars` files and generated Instance Vars files (e.g.
`p4_1.vars`) each have a line at the bottom of the generated portion of the file that looks like this:

 ### MAKE LOCAL CHANGES HERE:

If any custom functionality is to be preserved, be sure to add it *_only_* after that line containing `### MAKE LOCAL CHANGES HERE:`

Future automated SDP upgrades will preserve any lines below the `### MAKE LOCAL CHANGES HERE:` line. In addition to preserving content below that line, the right side of all variable assignments anywhere in the file will be preserved, as extracted from existing Instance Vars files as well as `p4_vars`. For example, if you have:

 export MAILTO=MyAdminList@MyCompany.com

That variable assignment will be preserved in future SDP upgrades. Use this information when reviewing your `p4_vars.new` and `p4_1.vars.new` (or other Instance Vars files) in preparation for the upgrade.

==== Account for SDP Additions

In many cases, customers have added custom trigger scripts into the SDP structure. Such additions should not need to be changed during the SDP upgrade.

==== Deeper Customizations

If you need help determining if and how the SDP was customized in your environment, link:mailto:consulting@perforce.com[Perforce Consulting] may be of assistance. Note that customizations are not supported by Perforce Support.

===== Instance P4Review Scripts

The legacy p4review "review daemon" scripts are still supported in the SDP, but have become obsolete with Helix Swarm offering the needed functionality. Determine if you still need them. If not, skip to the next section.

TIP: If Helix Swarm is used, use Swarm's `honor_p4_reviews` feature to displace the legacy `p4review` scripts and config files. Swarm has its own "project-based" email notification scheme, which can be augmented with `honor_p4_reviews` to also provide notifications based on the `Reviews:` field of the user spec.

If you need them, generate `*.new` config files as per this example.

In the following example, replace `1 abc` with your actual list of SDP instance names for which review daemons are active. Review daemons are only enabled on the master server machine for an instance.

 su - perforce
 cd /p4/common/config
 for i in 1 abc; do cp /hxdepots/new/sdp/Server/Unix/p4/common/config/p4review.cfg.template p4review_${i}.cfg.new; done

It is possible the new files may not have changed from the older ones if your SDP version is recent enough.

===== Broker Config Files - Nothing To Do

Broker config files, `p4_N.broker.cfg`, may also exist in `/p4/common/config`. These are not affected by the SDP upgrade procedure, and can be ignored.

==== Generate New SDP Instance Bin Files

If the current SDP is 2018.1 or newer, skip this section.

Examine the `p4d_N_init` script in the 'instance bin' folder, `/p4/_N_/bin`. Does the actual code look like this sample (with comments and the "shebang" `#!/bin/bash` line removed)?

 export SDP_INSTANCE=N
 /p4/common/bin/p4d_base $SDP_INSTANCE $@

If the `p4d_N_init` script already looks like this, then the 'instance bin' folder does not need to be touched during the upgrade process. Skip the rest of this section.

If, however, the `*_init` script has more code, then all the `p4*_init` scripts will need to be replaced during the upgrade execution. Templates are available in `/p4/common/etc/init.d`. The templates contain a few values that will need to be replaced. Identify existing `p4*_init` scripts to be replaced, and create new files with a `*.new` suffix.

For example, for instance `1`, generate new p4d and p4broker init scripts like this:

 cd /p4/1/bin
 sed -e s:REPL_SDP_INSTANCE:1:g /hxdepots/new/sdp/Server/Unix/p4/common/etc/init.d/p4d_instance_init.template > p4d_1_init.new
 chmod +x p4d_1_init.new
 sed -e s:REPL_SDP_INSTANCE:1:g /hxdepots/new/sdp/Server/Unix/p4/common/etc/init.d/p4broker_instance_init.template > p4broker_1_init.new
 chmod +x p4broker_1_init.new

==== Check for systemd service files

The format of systemd service files (sometimes referred to as 'unit' files) changed with the SDP 2020.1 release. As part of planning, it is helpful to identify whether systemd is already in use, and which Perforce Helix services are managed with systemd. You can get a list of such services with:

 ls -ld /etc/systemd/system
 ls -lrt /etc/systemd/system/p4*.service

If the `/etc/systemd/system` directory exists, then the systemd init mechanism is available. On systems that use the systemd init mechanism, we recommend using it. Once systemd is configured for any given service, the SDP requires using the systemd mechanism (i.e. the `systemctl` command) to start/stop Perforce Helix services (for safety and consistency of management).

Depending on your SDP version and how it was installed, there may or may not already be `p4*.service` files. In any case, in the Execution phase below, new systemd `p4*.service` files will be put in place, which may be new, or may replace existing `p4*.service` files.

=== Execution

This section outlines sample steps for executing an actual upgrade after planning and preparations identified in <> have been completed. The following steps are typically performed in a scheduled maintenance window. Execution steps are detailed below.

==== Lockout Users

Execute whatever steps were planned in <>.

==== Disable Crontabs

Capture original crontabs on all servers.

On each machine as `perforce`:

 [[ -d /p4/common/etc/cron.d ]] || mkdir -p /p4/common/etc/cron.d
 crontab -l > /p4/common/etc/cron.d/crontab.$USER.$(hostname -s)

==== Stop Services

Stop the `p4d` service for all instances on this machine. Also stop all p4broker services running on this machine (if any). For this SDP maintenance, the broker cannot be left running (e.g. to broadcast a "Down For Maintenance" (DFM) message) because the structure change cannot be started until _all_ processes launched from the SDP directory structure have stopped.

===== Stop Services with Systemd

Sample systemd commands to stop the services, executed as `perforce`:

 sudo systemctl status p4d_1 p4broker_1
 sudo systemctl stop p4d_1 p4broker_1
 sudo systemctl status p4d_1 p4broker_1

The extra `status` commands before and after the stop commands are for situational awareness. These are not strictly necessary.

===== Stop Services with SysV

Sample SysV commands to stop the services, executed as `perforce`:

 p4d_1_init status
 p4d_1_init stop
 p4d_1_init status
 p4broker_1_init status
 p4broker_1_init stop
 p4broker_1_init status

The extra `status` commands before and after the stop commands are for situational awareness. These are not strictly necessary.

==== Backup the current SDP Common Dir

On each machine, copy the old SDP directory:

 cd /hxdepots/p4
 cp -pr common/ OLD.common.$(date +'%Y-%m-%d')

==== Upgrade Physical Structure

In this step, the physical structure upgrade is performed for pre-2019.1 SDP installations. Skip this step if you are already on the 2019.1+ structure.

The structure of the SDP changed in the 2019.1 release, to increase performance and reduce complexity in post-failover operations. The following notes describe the procedure to do an in-place conversion to the new structure on a machine.

WARNING: In the following sample procedure, the default SDP instance name of `1` is used, and default mount point names are used.
Adapt this to your environment by applying this procedure to each instance on any given machine. If you have multiple instances, apply this procedure for each instance, one at a time.

===== Replace Instance Symlink with Directory

Skip this step if you are already on the 2019.1+ structure.

Move the instance symlink aside, and replace it with a regular directory. Then copy the `.p4*` files (e.g. `.p4tickets` and `.p4trust`) into the new directory. Sample commands:

 cd /p4
 mv 1 1.old_symlink.OLD_DELETE_ME_LATER
 mkdir 1
 cd 1
 cp -p /p4/1.old_symlink.OLD_DELETE_ME_LATER/.p4t* .

===== Convert Fixed to Variable Metadata Symlinks

Skip this step if you are already using Variable Metadata Symlinks.

If you have Fixed Metadata Symlinks, first convert them to Variable Metadata Symlinks. If you already have Variable Metadata Symlinks, proceed to <>

In this step, rename the underlying directories that are pointed to by the `root` and `offline_db` symlink names, moving them to their `db1` and `db2` names:

 mv /hxmetadata/p4/1/root /hxmetadata/p4/1/db1
 mv /hxmetadata/p4/1/offline_db /hxmetadata/p4/1/db2

===== Recreate Metadata Symlinks

Recreate the same symlinks you see reported by the `ls` command:

 ls -l /p4/1.old_symlink.OLD_DELETE_ME_LATER/*
 cd /p4/1
 ln -s /hxmetadata/p4/1/db1 root
 ln -s /hxmetadata/p4/1/db2 offline_db

WARNING: Do not just copy the sample commands above. Pay close attention to the `ls` output, and make sure the `root` points to whatever it was pointing to before, either a directory ending in `db1` or `db2` (unless you just converted from Fixed Metadata Symlinks in the previous step). Also confirm that `offline_db` and `root` aren't both pointing to the same directory; one should be pointing to `db1` and the other to `db2`.
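
That check can also be done mechanically. The sketch below builds a throwaway mock structure to demonstrate the test; on a real server, set `inst_dir` to your actual instance path (`/p4/1` is assumed here):

```shell
# Sketch: verify that root and offline_db are a distinct db1/db2 pair.
# Demonstrated against a throwaway mock; on a real server, set inst_dir=/p4/1.
mock=$(mktemp -d)
mkdir -p "$mock/hxmetadata/p4/1/db1" "$mock/hxmetadata/p4/1/db2" "$mock/p4/1"
inst_dir="$mock/p4/1"
ln -s "$mock/hxmetadata/p4/1/db1" "$inst_dir/root"
ln -s "$mock/hxmetadata/p4/1/db2" "$inst_dir/offline_db"

root_target=$(readlink "$inst_dir/root")
offline_target=$(readlink "$inst_dir/offline_db")
symlinks_ok=0
case "$root_target:$offline_target" in
    *db1:*db2|*db2:*db1) symlinks_ok=1 ;;   # distinct pair, either order
esac
if [ "$symlinks_ok" -eq 1 ]; then
    echo "OK: root -> $root_target, offline_db -> $offline_target"
else
    echo "ERROR: root and offline_db are not a distinct db1/db2 pair"
fi
rm -rf "$mock"
```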
Then, create additional symlinks akin to whatever else is in `/p4/1.old_symlink.OLD_DELETE_ME_LATER`. That should look something like this:

----
cd /p4/1
ln -s /hxdepots/p4/1/depots
ln -s /hxdepots/p4/1/checkpoints
ln -s /hxdepots/p4/1/checkpoints.YourEdgeServerID
ln -s /hxlogs/p4/1/logs
ln -s /hxlogs/p4/1/tmp
ls -l
----

Next, create the `bin` directory as a local directory, and copy files to it:

----
mkdir bin
cd bin
cp /p4/1.old_symlink.OLD_DELETE_ME_LATER/bin/p4d_1_init .
cp /p4/1.old_symlink.OLD_DELETE_ME_LATER/bin/p4broker_1_init .
ln -s /p4/common/bin/p4broker_1_bin p4broker_1
ln -s /p4/common/bin/p4_bin p4_1
----

Last, take a look at `/p4/1.old_symlink.OLD_DELETE_ME_LATER/bin/p4d_1`. That `p4d_1` will be either a symlink or a tiny script, depending on whether your p4d server is case-sensitive. If your server is case-sensitive, it will be a symlink; if your server is case-insensitive, it will be a tiny script.

If your server is case-sensitive, create the symlink like this:

----
ln -s /p4/common/bin/p4d_1_bin p4d_1
----

If your server is case-insensitive, that `p4d_1` will be a tiny script, so just copy it:

----
cp /p4/1.old_symlink.OLD_DELETE_ME_LATER/bin/p4d_1 .
----

Then start your server again, run the `verify_sdp.sh` script, and confirm that it reports no problems.
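A minimal sketch of such a verification run follows. The wrapper function and its messages are illustrative, not part of the SDP; `verify_sdp.sh` itself is installed with the SDP under `/p4/common/bin`, and the instance name `1` is an assumption:

```shell
#!/bin/bash
# Sketch: run the SDP verification script for an instance, if present.
# The path and instance name are assumptions; adjust for your site.
run_verify() {
    local verify=/p4/common/bin/verify_sdp.sh
    if [ -x "$verify" ]; then
        "$verify" "$1"
    else
        echo "verify_sdp.sh not found for instance $1"
        return 2
    fi
}

run_verify 1 || echo "Investigate reported issues before proceeding."
```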
==== Deploy New SDP Common Files

----
rsync -a /p4/sdp/Server/Unix/p4/common/ /p4/common
----

==== Put New SDP Init Scripts In Place

If you generated new `p4*_init` scripts in preparation, put them in place now, doing something like this as `perforce`:

----
cd /p4/1/bin
mv p4d_1_init p4d_1_init.OLD_DELETE_ME_LATER
mv p4d_1_init.new p4d_1_init
mv p4broker_1_init p4broker_1_init.OLD_DELETE_ME_LATER
mv p4broker_1_init.new p4broker_1_init
----

==== Put New p4_vars In Place

Put the new `p4_vars.new` file in place, doing something like this as `perforce`:

----
cd /p4/common/bin
mv p4_vars p4_vars.OLD_DELETE_ME_LATER
mv p4_vars.new p4_vars
----

==== Put New Instance Vars In Place

Put all the new Instance Vars files in place, doing something like this as `perforce`, replacing `1 abc` with your list of instances:

----
cd /p4/common/config
for i in 1 abc; do
   mv p4_${i}.vars p4_${i}.vars.OLD_DELETE_ME_LATER
   mv p4_${i}.vars.new p4_${i}.vars
done
----

==== Put New P4Review Files In Place

If you generated new P4Review files, put them in place, doing something like this as `perforce`, replacing `1 abc` with your list of instances:

----
cd /p4/common/config
for i in 1 abc; do
   mv p4review_${i}.cfg p4review_${i}.cfg.OLD_DELETE_ME_LATER
   mv p4review_${i}.cfg.new p4review_${i}.cfg
done
----

==== Upgrade Systemd Service Files

The format of systemd unit files changed with the SDP 2020.1 release. If systemd is not used, skip this section.

The SDP 2020.1 release includes templates for systemd unit files in `/p4/common/etc/systemd/system`. These should be deployed on each machine that uses the SDP, for each Helix service (e.g. `p4d`, `p4broker`, `p4p`) within each SDP instance. For example, the following installs or replaces systemd unit files for `p4d` and `p4broker` for SDP instance 1.
These commands must be executed as `root`. Add or replace the `*.service` files with commands like these samples:

----
cd /etc/systemd/system
sed -e s:__INSTANCE__:1:g -e s:__OSUSER__:perforce:g /p4/common/etc/systemd/system/p4d_N.service.t > p4d_1.service
sed -e s:__INSTANCE__:1:g -e s:__OSUSER__:perforce:g /p4/common/etc/systemd/system/p4broker_N.service.t > p4broker_1.service
systemctl daemon-reload
----

==== Start Services

Start the `p4d` service for all instances on this machine. Also start any p4broker services on this machine (if any).

===== Start Services with Systemd

Sample systemd commands to start the services, executed as `perforce`:

----
source /p4/common/bin/p4_vars 1
sudo systemctl status p4d_1 p4broker_1
sudo systemctl start p4d_1 p4broker_1
sleep 3
sudo systemctl status p4d_1 p4broker_1
p4 info
----

The extra `status` commands before and after the `start` command are for situational awareness. They are not strictly necessary.

TIP: The `systemctl start` command returns immediately after a request has been made to systemd to start the service. However, the return of the command should NOT be taken as an indication that the service is actually up; that must be verified, e.g. with `p4 info`.

TIP: If you execute the second `systemctl status` command or the `p4 info` command too quickly after the `start` command, the service may not yet have started. Give it several seconds and try again.

===== Start Services with SysV

Sample SysV commands to start the services, executed as `perforce`:

----
p4d_1_init status
p4d_1_init start
p4d_1_init status
p4broker_1_init status
p4broker_1_init start
p4broker_1_init status
----

The extra `status` commands before and after the `start` commands are for situational awareness. They are not strictly necessary.
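The wait-and-retry verification suggested in the TIPs above can be scripted. The following is a sketch, not part of the SDP; the retry count and the example `P4PORT` of `1666` are assumptions to adapt to your instance:

```shell
#!/bin/bash
# Sketch: retry a command until it succeeds or the try limit is reached.
wait_for() {
    local tries=$1 n=0
    shift
    until "$@" >/dev/null 2>&1; do
        n=$((n + 1))
        if [ "$n" -ge "$tries" ]; then
            return 1
        fi
        sleep 1
    done
}

# Example: wait up to ~5 seconds for the instance to answer "p4 info".
# The port 1666 is an assumption; use your instance's P4PORT.
wait_for 5 p4 -p 1666 info || echo "p4d not responding yet; check the logs"
```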
=== Re-Enable Crontabs

Re-enable crontabs on each machine with commands like these:

----
crontab -l
crontab /p4/common/etc/cron.d/crontab.$USER.$(hostname -s)
crontab -l
----

The `crontab -l` before and after is done for situational awareness, and to confirm the crontab was loaded correctly.

=== Open The Flood Gates

At this point, do any sanity testing you desire to gain confidence that everything is OK. Then allow users back in by undoing whatever you did to lock users out at the start of the maintenance window.

=== Post Operation Steps

Cleanup steps can occur after the upgrade. In some cases cleanup is done immediately following the upgrade; in other cases it may be deferred by days or weeks.

==== Cleanup

Temporary directories with `DELETE_ME_LATER` in the name created during the upgrade procedure can now be deleted.

[appendix]
=== Custom HMS Managed Installations

If the Helix Management System (HMS) is used to manage this installation, you should have custom site-specific documentation for upgrading the SDP that supersedes this documentation. If the file `/p4/common/bin/hms` exists at your site, you have an HMS-managed site. Contact mailto:consulting@perforce.com[Perforce Consulting] for more information. Note that HMS solutions are inherently custom and not officially supported, but can be fully automated for global Helix Core topologies.