Preface

This document provides an overview of the process to upgrade the Perforce Helix Server Deployment Package (SDP) from any older version (dating back to 2007) to the SDP 2020.1 release, also referred to as "r20.1".

If your SDP version is 2020.1 or newer, refer to the SDP Guide (Unix) for instructions on how to upgrade from SDP 2020.1 to any later version. Starting from SDP 2020.1, the upgrade procedure for the SDP is aided by an automated and incremental upgrade mechanism similar to that of p4d itself, capable of upgrading the SDP from the current release to any future version, so long as the current release is SDP 2020.1 or newer.

This document describes the process of upgrading to SDP 2020.1.

1. Overview

The Perforce Server Deployment Package (SDP) software package, just like the Helix Core software it manages, evolves over time and requires occasional upgrades to remain supported. Further, patches may be released over time.

This document discusses how to upgrade the SDP, and when to do so in relation to upgrading Helix Core itself.

1.1. Upgrade Order: SDP first, then Helix P4D

The SDP should be upgraded prior to the upgrade of Helix Core (P4D). If you are planning to upgrade P4D to or beyond P4D 2019.1 from a prior version of P4D, you must upgrade the SDP first. If you run multiple instances of P4D on a given machine (potentially each running different versions of P4D), upgrade the SDP first before upgrading any of the instances.

The SDP should also be upgraded before upgrading other Helix software on machines using the SDP, including P4D, P4P, P4Broker, and the 'p4' command line client on the server machine. Even if not strictly required, upgrading the SDP first is strongly recommended, as SDP keeps pace with changes in the p4d upgrade process, and can ensure a smooth upgrade.

1.2. Upgrading Helix P4D and Other Software

See the SDP Guide (Unix) for instructions on how to upgrade Helix binaries in the SDP structure after the SDP has been upgraded to 2020.1 or later.

1.3. SDP and P4D Version Compatibility

The SDP is often forward- and backward-compatible with P4D versions. However, for best results they should be kept in sync by upgrading SDP before P4D. This is partly because the SDP contains logic to upgrade P4D, which can change as P4D evolves.

The SDP is aware of the P4D version(s) it manages, and has backward-compatibility logic to support older versions of P4D. This is guaranteed for supported versions of P4D. Backward compatibility of SDP with older versions of P4D may extend farther back than officially supported versions, though without the "officially supported" guarantee.

1.4. SDP Upgrade Methods

There are several methods for upgrading to a new version of the SDP:

  • In-Place, Manual Upgrades: Manual upgrades of the SDP must be performed to upgrade from versions older than r20.1 in-place on existing machines. This document provides details on how to do this.

  • In-Place, Automated Upgrades: Automation-assisted in-place upgrades can be done if your current SDP version is 2020.1 or later and you are upgrading to 2021.1 or later. Refer to documentation in the SDP Guide (Unix) for upgrading from SDP 2020.1 onward.

  • Migration-Style Upgrades: A migration-style upgrade is one in which the existing server machines (virtual or physical) are left in place, and brand new "green field" machines are installed fresh using the Helix Installer (which installs the latest SDP on a "green field" baseline machine with only the operating system installed). Then the Helix Core data is migrated from the existing hardware to the new hardware. This approach is especially appealing when upgrading other aspects of the infrastructure at the same time as the SDP, such as the hardware and/or operating system. Migration style upgrades require new hardware, and provide a straightforward rollback option because the original hardware is left in place.

  • Custom with HMS: The Helix Management System is used by some customers. See Appendix A, Custom HMS Managed Installations.

2. Upgrade Planning - In-Place Upgrades

Legacy SDP upgrades require some familiarization that should be done prior to scheduling an upgrade.

The following information is useful for planning SDP in-place upgrades.

2.1. Upgrade Duration

Key questions to answer during upgrade planning are:

  • Will p4d need to be taken offline?

  • If so, how long?

Most upgrades of the SDP to r20.1 will require downtime for the Helix Core server, even if p4d is not being upgraded. The only exception to requiring downtime is if the current SDP is r2019.x and your SDP is not configured to use systemd, i.e. it does not have /etc/systemd/system/p4_*.service files.

If the current SDP and P4D versions are both r2019.1 or later, the downtime can be brief. The scripts can be upgraded while the server is live, stopping p4d only to replace the systemd *.service files with new ones.

If your SDP is older than 2019.1, other changes may be needed depending on your SDP version, which will extend the downtime for p4d needed to upgrade the SDP. Read on to get a sense for what steps are required, which vary based on your SDP version.

If your P4D version is older than 2019.1, full checkpoints are recommended after the upgrade is complete, which can significantly extend downtime. The "to-or-thru 2019.1" upgrades involve significant Helix Core database structural changes. If your Helix topology includes edge servers, you’ll want to account for taking checkpoints of the edge servers in "to-or-thru 2019.1" upgrade planning; edge checkpoints can occur in parallel with the checkpoint on the master server to reduce overall upgrade process duration.
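To help determine which of the above cases applies, it can help to confirm the current p4d version on each machine early in planning. The following is a minimal sketch, assuming SDP instance 1 and typical SDP paths; adjust names to your environment:

/p4/1/bin/p4d_1 -V | grep -i '^Rev'
# Newer legacy SDP versions also record the SDP version in p4_vars;
# this setting may not exist in very old installations:
grep SDP_VERSION /p4/common/bin/p4_vars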

2.2. Plan What Is Being Upgraded

Presumably if you are reading this, you are intending to upgrade the SDP. During planning, you’ll want to decide whether to upgrade only the SDP, or to also upgrade Helix Core software in the same maintenance window.

Tactically, the SDP upgrade is done first. However, upgrading SDP and Helix Core can be done in the same maintenance window, so both upgrade tasks can be done in one upgrade session. Alternatively, Helix Core can be upgraded at a later date after the SDP upgrade.

Decide whether Helix Core will be upgraded during the same work session that SDP is upgraded.

2.3. SDP Machines and Instances

Early in your planning, you’ll want to take stock of all server machines on which the SDP is to be upgraded.

For each machine, you’ll need to be aware of what SDP instances need to be upgraded.

For each instance on any given machine, determine what servers or services are in place. For example, a given machine might be a master server for one instance, a replica for another, and a simple proxy for a third instance.

2.4. Mount Point Names

You will need to be aware of the three standard SDP mount points. While referred to as "mount points" in this document, in any given installation, any or all of the three SDP mount points may be simple directories on the root storage volume, or symlinks to some other storage volume. In some installations, fewer than three volumes were used, and in some cases 4 were used (2 for metadata). In some cases the operating system root volume was used as one of the volumes. Investigate and be aware of how your installation was configured. Comparing the output of pwd and pwd -P in the same directory can be informative.
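For example, a quick check like the following (a sketch; mount point names may differ at your site) shows whether each standard location is a real directory, a symlink, or a separately mounted volume:

ls -ld /hxdepots /hxmetadata /hxlogs
cd /hxdepots
pwd       # logical path
pwd -P    # physical path; differs from the above if /hxdepots is a symlink
df -h /hxdepots /hxmetadata /hxlogs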

The mount points do not necessarily need to be changed during the SDP upgrade process, as the SDP structural design has always been, and remains, flexible with respect to mount point names. However, understanding whether the "mount points" are actual mount points, regular directories, or symlinks is something to be aware of for detailed planning.

In the examples below, the modern SDP mount point names are used:

  • /hxdepots - Volume for versioned files and rotated/numbered metadata journals.

  • /hxmetadata - Volume for active and offline metadata. In some cases the single /hxmetadata is replaced with the pair /hxmetadata1 and /hxmetadata2.

  • /hxlogs - Volume for active journal and various logs.

Depending on the version of the SDP, the above values may be used, or earlier defaults such as:

  • /depotdata for /hxdepots

  • /metadata for /hxmetadata

  • /logs for /hxlogs.

In some cases, custom values were used like /p4depots, /p4db, /p4jnl, etc. In these cases, it is important to know what standard names are referred to by the local names.

In the sample steps in this document, adapt the mount point names to your local values.

If your site uses two volumes for metadata, /hxmetadata1 and /hxmetadata2, continue using those same names.

2.5. Topology Option: Shared Archives

A topology option with SDP deployments is the choice of whether the /hxdepots mount point is shared across machines, e.g. with NFS. SDP upgrade procedures involve updating the /p4/common directory that is physically on the /hxdepots volume. So when updating /p4/common on one machine, be aware that any changes will be immediately visible on all machines that share the same NFS-mounted volume.

Be aware of any shared /hxdepots volumes when planning SDP upgrades.
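One way to check whether /hxdepots is an NFS-mounted (and thus potentially shared) volume is shown below; this is a sketch, and the mount point name may differ locally:

df -hT /hxdepots
# A filesystem type of 'nfs' or 'nfs4' indicates a network-mounted volume
# that may be shared with other server machines.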

2.6. Instance Directory (P4HOME)

Skip ahead to Section 2.8 if your SDP is 2018.2 or newer.

In current and all legacy SDP installations, including all topology variations, the top-level /p4 directory is always a regular directory on the local machine. However, the instance directory, e.g. the 1 in /p4/1/root, might be a directory or a symlink depending on your SDP version.

In the modern SDP, the /p4/N directory (where N is the SDP instance, e.g. 1), is also a regular directory on the local machine. This /p4/N directory is referred to as the P4HOME directory for the instance.

In older versions of the SDP, the instance N was a symlink rather than a local directory. Check with a command like ls -l /p4/N to determine whether the P4HOME dir is a symlink or a directory.

If the N in /p4/N is a symlink rather than a directory, you’ll need to change that to a directory in the upgrade procedure.

2.7. Instance Bin Directory

In the modern SDP, the /p4/N/bin directory (the "instance bin" directory) is a regular directory. As with P4HOME, on older versions of the SDP, /p4/N/bin may be a directory or a symlink. Check with a command like ls -l /p4/N/bin to determine which it is.

If the bin in /p4/N/bin is a symlink rather than a directory, you’ll need to change that to a directory in the upgrade procedure.
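The checks described above boil down to something like the following for instance 1 (the output comments are illustrative only; your paths will differ):

ls -ld /p4/1       # a symlink in older SDP; a regular directory in the modern SDP
ls -ld /p4/1/bin   # may be a directory or a symlink, depending on SDP version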

2.8. Operating System User (OSUSER)

You will need to be aware of the operating system user (OSUSER) that p4d runs as in your environment.

The sample upgrade steps below assume that Perforce runs as the perforce operating system user, which is typical. Adapt if your OSUSER is something other than perforce.

The OSUSER account should be dedicated to operating the Perforce Helix Core server and related Helix services.
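If you are not sure which operating system user the p4d processes run as, a quick check like this can confirm it while the service is running (a sketch; the bracketed pattern simply keeps grep from matching itself):

ps -ef | grep '[p]4d_' | awk '{print $1}' | sort -u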

2.9. Home Directory for OSUSER

In modern installations, the default home directory is /home/perforce, though in some older installations the home directory is /p4. In either case, this does not need to be changed during the upgrade process.

As a general guideline, we recommend using a local home directory for the account under which p4d runs, as opposed to an auto-mounted home directory. Using an auto-mounted home directory can be a source of operational instability. While changing to a local directory is not a hard requirement, it is recommended.
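To confirm the home directory for the OSUSER and whether it is local or auto-mounted, something like the following can be used (assuming the OSUSER is perforce):

getent passwd perforce | cut -d: -f6
df -hT "$(getent passwd perforce | cut -d: -f6)"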

2.10. Metadata Symlink Type: Fixed or Variable

Depending on how old your SDP installation is, the structure will have either fixed or variable metadata symlinks. Determine which you have.

To determine this, login as the OSUSER (e.g. perforce), and run a command like this sample (for instance 1):

ls -l /p4/1/root /p4/1/offline_db

The root and offline_db will always be symlinks in all versions of the SDP. However, they might be fixed or variable.

Variable Metadata Symlink References

If one of the symlinks points to a directory ending in db1, and the other in db2 (it doesn’t matter which is pointing to which), you have variable metadata symlinks.

Fixed Metadata Symlink References

If the targets of the root and offline_db symlinks point to directories ending in the same names, i.e. root and offline_db, then you have fixed metadata symlinks.

If you have fixed metadata symlinks, your upgrade procedure will need to convert them to variable metadata symlinks, per examples below.
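Illustrative ls -l output for the two cases looks like this (paths are examples only):

# Variable metadata symlinks:
#   /p4/1/root       -> /hxmetadata/p4/1/db1
#   /p4/1/offline_db -> /hxmetadata/p4/1/db2
#
# Fixed metadata symlinks:
#   /p4/1/root       -> /hxmetadata/p4/1/root
#   /p4/1/offline_db -> /hxmetadata/p4/1/offline_db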

3. Upgrade Procedure

With the familiarization and planning from the last section complete, move on to defining an upgrade procedure for your topology.

The procedure is broken into 3 phases:

  • Preparation: Preparation steps can be done in a non-disruptive manner on a production server ahead of the Execution, possibly days or more ahead of the target date for the actual upgrade.

  • Execution: Execution steps are generally performed in a scheduled maintenance window.

  • PostOp: PostOp steps are done some time after the upgrade is complete, perhaps days or weeks later. For example, some cleanup is done in PostOp. Often the upgrade procedure leaves copies of various files and directories around to support a fast abort of the upgrade. Such files are comforting to have around during the upgrade procedure, but after a time become clutter and should be removed.

3.1. Preparation

Preparation steps are:

  1. Acquire Downloads.

  2. Deploy new SDP common files.

  3. Generate new SDP config files.

  4. Configure new SDP instance bin files and symlinks.

  5. Determine Metadata Symlink Type (Fixed or Variable)

  6. Account for customization (if any)

3.1.1. Acquire Downloads

Copy the downloaded tarball to the machine and put it in place as /hxdepots/sdp.Unix.tgz. (If a file with the same name exists from a previous upgrade, move it aside first.)
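As an example, the tarball might be fetched and put in place like this. This is a sketch; the download URL shown is illustrative, so use whatever source you normally obtain the SDP distribution from:

cd /hxdepots
[[ -f sdp.Unix.tgz ]] && mv sdp.Unix.tgz sdp.Unix.tgz.$(date +'%Y-%m-%d')
curl -L -O https://swarm.workshop.perforce.com/projects/perforce-software-sdp/download/downloads/sdp.Unix.tgz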

3.1.2. Deploy new SDP common files

mkdir /hxdepots/new
cd /hxdepots/new
tar -xzf /hxdepots/sdp.Unix.tgz
cat /hxdepots/new/sdp/Version

Verify that the contents of the Version file are as expected.

3.1.3. Generate new SDP config files

The following SDP config files are generated and should be reviewed, comparing new files generated with a *.new extension with the files in the existing installation (without the .new suffix). Be careful not to modify the production files; only update the *.new files.

The files to generate for review are described in the following subsections.

3.1.3.1. p4_vars

The p4_vars file is the main SDP shell environment file. Generate the new form of the file like this example (assuming your OSUSER is perforce):

cd /p4/common/bin
sed -e "s:REPL_OSUSER:perforce:g" -e "s:REPL_SDPVERSION:$(cat /hxdepots/new/sdp/Version):g" /hxdepots/new/sdp/Server/Unix/p4/common/config/p4_vars.template > p4_vars.new
grep -E 'export KEEP.*=' p4_vars >> p4_vars.new

The old file may have custom KEEP* settings that need to be preserved; the grep command above preserves the KEEP* settings.
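To review the result before the maintenance window, compare the generated file against the production file without modifying the latter, e.g.:

cd /p4/common/bin
diff p4_vars p4_vars.new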

3.1.3.2. Instance Config Files

The /p4/common/config directory in the SDP contains p4_N.vars files, one per SDP instance (e.g. p4_1.vars, p4_abc.vars, etc.).

In the following example, replace 1 abc with your actual list of SDP instance names. Note that if you have many machines in your topology, it is possible that each machine may have a different set of instances.

su - perforce
cd /p4/common/config
for i in 1 abc; do cp /hxdepots/new/sdp/Server/Unix/p4/common/config/instance_vars.template p4_${i}.vars.new; done

The format of p4_N.vars files has evolved over time, so it is important to generate new files from the new template. For each p4_N.vars.new file, search for the string REPL_ in the file to find strings that need to be replaced (everywhere except in comment blocks).

  • MAILTO

  • MAILFROM

  • P4USER

  • P4MASTER_ID

  • SSL_PREFIX

  • P4PORTNUM

  • P4BROKERPORTNUM

  • P4MASTERHOST (appears in some older versions as P4MASTER)

As needed, refer to the original p4_N.vars files to retrieve values for these settings.
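A sketch for locating the placeholders and pulling values from the existing config for instance 1 follows (setting names in very old p4_N.vars files may differ slightly):

cd /p4/common/config
grep -n 'REPL_' p4_1.vars.new
grep -E 'MAILTO|MAILFROM|P4USER|P4MASTER|SSL_PREFIX|PORTNUM' p4_1.vars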

The SDP instance config files and in some cases the p4_vars files can contain local custom modifications made by administrators. Often the need for customization goes away with new SDP versions. However, when generating new config files, be sure to review old files for any custom code or logic. If any custom functionality is to be preserved, be sure to add it only after the comment indicator that appears at the bottom of the p4_vars and instance-specific p4_N.vars files that looks like this:
### MAKE LOCAL CHANGES HERE:

Changes made anywhere else in the file will be lost on a future upgrade.

What if there is no /p4/common/config?

If the existing SDP does not have a /p4/common/config directory at all, as will be the case for very old versions of SDP, you can safely create the config directory during preparation.

In that case, leave off the *.new extension for files to be generated in /p4/common/config, i.e. the p4_N.vars files (one per instance) and optional p4_N.p4review.cfg files created in that directory. Create the /p4/common/config directory by copying from the new SDP area, like so:

su - perforce
cp -pr /hxdepots/new/sdp/Server/Unix/p4/common/config /p4/common/.

3.1.3.3. Instance P4Review Script Files

The legacy p4review "review daemon" scripts are still supported in the SDP, but have become obsolete with Helix Swarm offering the needed functionality. Determine if you still need them. If not, skip to the next section.

If Helix Swarm is used, use Swarm’s honor_p4_reviews feature to displace the legacy p4review scripts and config files. Swarm has its own "project-based" email notification scheme, which can be augmented with honor_p4_reviews to also provide notifications based on the Reviews: field of the user spec.

If you need them, generate *.new config files as per this example. In the following example, replace 1 abc with your actual list of SDP instance names for which review daemons are active. Review daemons are only enabled on the master server machine for an instance.

su - perforce
cd /p4/common/config
for i in 1 abc; do cp /hxdepots/new/sdp/Server/Unix/p4/common/config/p4review.cfg.template p4_${i}.p4review.cfg.new; done

It is possible the new files may not have changed from the older ones if your SDP version is recent enough.

3.1.3.4. Broker Config Files - Nothing To Do

Broker config files, p4_N.broker.cfg, may also exist in /p4/common/config. These are not affected by the SDP upgrade procedure, and can be ignored.

3.1.4. Configure New SDP Instance Bin Files and Symlinks

If the current SDP is 2018.1 or newer, skip this section.

Examine the p4d_N_init script in the 'instance bin' folder, /p4/N/bin.

Does the actual code look like this sample (with comments and the "shebang" #!/bin/bash line removed)?

export SDP_INSTANCE=N
/p4/common/bin/p4d_base $SDP_INSTANCE $@

If the p4d_N_init script already looks like this, then the 'instance bin' folder does not need to be touched during the upgrade process.

If, however, the *_init script has more code, then all the p4*_init scripts will need to be replaced during the upgrade execution. Templates are available in /p4/common/etc/init.d. The templates contain a few values that will need to be replaced.

Identify init scripts to be replaced, and create new files with a *.new suffix. For example, for instance 1, generate new p4d and p4broker init scripts like this:

cd /p4/1/bin
sed -e s:REPL_SDP_INSTANCE:1:g /hxdepots/new/sdp/Server/Unix/p4/common/etc/init.d/p4d_instance_init.template > p4d_1_init.new
chmod +x p4d_1_init.new
sed -e s:REPL_SDP_INSTANCE:1:g /hxdepots/new/sdp/Server/Unix/p4/common/etc/init.d/p4broker_instance_init.template > p4broker_1_init.new
chmod +x p4broker_1_init.new

3.1.5. Upgrade *_init scripts

The format of SDP init scripts may have changed since your legacy version. Check them to see if they need to be modified.

For each instance, look in the /p4/N/bin folder, and review the scripts. Compare them to templates in /p4/common/etc/init.d. For example, compare /p4/1/bin/p4d_1_init with /p4/common/etc/init.d/p4d_instance_init.template.

If your current init scripts look exactly like the templates, except for substitutions of any REPL_* strings from the template, then they do not need to be updated. Older SDP versions had more complex *_init scripts.
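One way to check is to compare the live init script against one freshly generated from the new template. This is a sketch for instance 1 using the new SDP area extracted during preparation (the comparison can equally be done against /p4/common/etc/init.d once the new common files are in place):

diff /p4/1/bin/p4d_1_init \
  <(sed s:REPL_SDP_INSTANCE:1:g /hxdepots/new/sdp/Server/Unix/p4/common/etc/init.d/p4d_instance_init.template)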

If they need to be replaced, plan to do so during your upgrade with steps like these samples:

cd /p4/N/bin
mkdir OLD_DELETE_ME_LATER
mv p4d_N_init OLD_DELETE_ME_LATER/.
sed s:REPL_SDP_INSTANCE:N:g /p4/common/etc/init.d/p4d_instance_init.template > p4d_N_init
chmod +x p4d_N_init

If there are p4broker_N_init, p4p_N_init, and/or p4dtg_N_init scripts, follow the same procedure for those, generating new init scripts from the corresponding templates.

These steps can only be executed after the /p4/common folder has been updated.

3.1.6. Review systemd service files

The format of systemd service files (sometimes referred to as 'unit' files) changed with the SDP 2020.1 release. As part of planning, it is helpful to identify if systemd is already in use, and which Perforce Helix services are managed with systemd.

You can get a list of such services with:

ls -ld /etc/systemd/system
ls -lrt /etc/systemd/system/p4*.service

If the /etc/systemd/system directory exists, then the systemd init mechanism is available. On systems that use the systemd init mechanism, we recommend using it. Once systemd is configured for any given service, the SDP requires using the systemd mechanism (i.e. the systemctl command) to start/stop Perforce Helix services (for safety and consistency of management). Depending on your SDP version and how it was installed, there may or may not already be p4*.service files.

In any case, in the Execution phase below, new systemd p4*.service files will be put in place, which may be new or replace existing files.

3.1.7. Account for customization and additions (if any)

If the SDP has been customized in your environment, custom upgrade procedures may be required. An understanding of what was customized and why will be useful in determining if custom upgrade procedures are required.

In typical deployments, the SDP is not customized, or only customized in some way that is no longer needed due to improvements in the "stock" SDP.

In many cases, customers have added custom trigger scripts into the SDP structure. In this case, the script files may be moved around in the SDP structure during the upgrade process, but should not need to be changed.

If you need help determining if and how the SDP was customized in your environment, Perforce Consulting may be of assistance. Note that customizations are not supported.

3.2. Execution

This section outlines sample steps for executing an actual upgrade after Section 2, “Upgrade Planning - In-Place Upgrades” and Section 3.1, “Preparation” have been completed. The following is typically performed in a scheduled maintenance window.

Execution steps are:

  1. Stop Services

  2. Move Old SDP aside.

  3. Upgrade Physical Structure

  4. Put new SDP common files in place.

  5. Put new SDP config files in place.

  6. Put new SDP instance bin files in place.

3.2.1. Stop Services

Stop the p4d service for all instances on this machine. Also stop all p4broker services running on this machine (if any).

For this short maintenance, the broker cannot be left running (e.g. to broadcast a "Down For Maintenance (DFM)" message) because the structure change cannot be started until all processes launched from the SDP directory structure have stopped.

Sample commands:

p4d_1_init status
p4d_1_init stop
p4d_1_init status

p4broker_1_init status
p4broker_1_init stop
p4broker_1_init status

The extra status commands before and after the stop commands are for situational awareness. These are not strictly necessary.
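If systemd unit files are already in place for the instance (see Section 3.1.6), stop the services with systemctl instead, run as root; a sketch for instance 1:

systemctl stop p4broker_1
systemctl stop p4d_1
systemctl status p4d_1 p4broker_1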

3.2.2. Move old SDP Aside

First, move the old SDP common files aside, like so:

cd /hxdepots/p4
mv common OLD.common.$(date +'%Y-%m-%d')

Next, move the old SDP instance-specific files aside.

cd /p4
mv 1 OLD.1.$(date +'%Y-%m-%d')

3.2.3. Upgrade Physical Structure

In this step, the physical structure upgrade is done for pre-2019.1 SDP installations.

The structure of the SDP changed in the 2019.1 release, to increase performance and reduce complexity in post-failover operations. The following notes describe how to do an in-place conversion to the new structure.

First, become familiar with the Pre-2019.1 and 2019.1+ structures.

SDP Pre-2019.1 Structure:

  • /p4 is a directory on the operating system root volume, /.

  • /p4/N is a symlink to a directory that is typically on the storage volume mounted as /hxdepots (by default, /hxdepots/p4/N).

  • /p4/N contains symlinks to directories on the /hxdepots, /hxmetadata, and /hxlogs volumes, as well as tickets and trust files.

SDP 2019.1+ Structure:

  • /p4 is a directory on the operating system root volume, /, (same as Pre-2019.1 Structure).

  • /p4/N is a local directory on the operating system root volume.

  • /p4/N contains symlinks to directories on the /hxdepots, /hxmetadata, and /hxlogs volumes, as well as tickets and trust files (same as the Pre-2019.1 structure).

  • /p4/N/bin is a local directory on the operating system root volume. The bin directory is the only actual directory in /p4/N; other items are files or symlinks to directories.

The verify_sdp.sh script (included in the SDP starting with SDP 2019.1) gives errors if the 2019.1+ SDP structure is not in place.

Converting the SDP structure in-place to the new style requires downtime on the edge/replica of interest. While the downtime can be brief if only the SDP structure is changed, commonly the P4D is upgraded in the same maintenance window. If the P4D is pre-2019.1, a longer maintenance window will be required, depending on duration of checkpoints.

Following is the procedure to upgrade the structure in-place on a machine.

In the following sample procedure, the default SDP instance name of 1 is used, and default mount point names are used. Adapt this to your environment by applying this procedure to each instance on any given machine. If you have multiple instances, apply this procedure for each instance, one at a time.

Move the instance symlink aside, and replace it with a regular directory. Then copy the .p4* files (e.g. .p4tickets and .p4trust) into the new directory. Sample commands:

cd /p4
mv 1 1.old_symlink
mkdir 1
cd 1
cp -p /p4/1.old_symlink/.p4t* .

If you have Fixed Metadata Symlinks, first convert them to Variable Metadata Symlinks. If you already have Variable Metadata Symlinks, proceed to Section 3.2.4, “Replace Instance Symlink with Directory”

In this step, rename the underlying directories pointed to by the root and offline_db symlink names to their db1 and db2 names.

mv /hxmetadata/p4/1/root /hxmetadata/p4/1/db1
mv /hxmetadata/p4/1/offline_db /hxmetadata/p4/1/db2

Next, recreate the same symlinks you see reported by the ls command:

ls -l /p4/1.old_symlink/*
cd /p4/1

ln -s /hxmetadata/p4/1/db1 root
ln -s /hxmetadata/p4/1/db2 offline_db

Do not just copy the sample commands above. Pay close attention to the ls output, and make sure the root points to whatever it was pointing to before, either a directory ending in db1 or db2 (unless you just converted from Fixed Metadata Symlinks in STEP 4). Also confirm that offline_db and root aren’t both pointing to the same directory; one should be pointing to db1 and the other to db2.

Then, create additional symlinks akin to whatever else is in /p4/1.old_symlink

That should look something like this:

cd /p4/1
ln -s /hxdepots/p4/1/depots
ln -s /hxdepots/p4/1/checkpoints
ln -s /hxdepots/p4/1/checkpoints.YourEdgeServerID
ln -s /hxlogs/p4/1/logs
ln -s /hxlogs/p4/1/tmp

ls -l

Next, create the bin directory as a local directory, and copy files to it:

mkdir bin
cd bin
cp /p4/1.old_symlink/bin/p4d_1_init .
cp /p4/1.old_symlink/bin/p4broker_1_init .
ln -s /p4/common/bin/p4broker_1_bin p4broker_1
ln -s /p4/common/bin/p4_bin p4_1

Last, take a look at /p4/1.old_symlink/bin/p4d_1 - that p4d_1 will be either a tiny script or a symlink (depending on whether your p4d is case sensitive or not). If your server is case sensitive, it will be a symlink. If your server is case-insensitive, it will be a tiny script.

If your server is case sensitive, create the symlink like this:

ln -s /p4/common/bin/p4d_1_bin p4d_1

OR, if your server is case-insensitive, that p4d_1 will be a tiny script, so just copy it:

cp /p4/1.old_symlink/bin/p4d_1 .

Then, start your server again, run the verify_sdp.sh script, and confirm that it reports no errors.
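For example, for instance 1 (verify_sdp.sh lives in /p4/common/bin once the new SDP common files are in place):

/p4/1/bin/p4d_1_init start
/p4/common/bin/verify_sdp.sh 1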

3.2.7. Put new SDP common files in place.

rsync -a /hxdepots/new/sdp/Server/Unix/p4/common/ /hxdepots/p4/common

3.2.8. Put new SDP instance bin files in place.

cd /p4/1/bin

sed s:REPL_SDP_INSTANCE:1:g /p4/common/etc/init.d/p4d_instance_init.template > p4d_1_init
chmod +x p4d_1_init
sed s:REPL_SDP_INSTANCE:1:g /p4/common/etc/init.d/p4broker_instance_init.template > p4broker_1_init
chmod +x p4broker_1_init

3.2.9. Upgrade systemd service files

The format of systemd unit files changed with the SDP 2020.1 release.

The services must be down when these unit files are added or existing ones are replaced.

The SDP r20.1 release includes templates for systemd unit files in /p4/common/etc/systemd/system. These should be deployed on each machine that uses the SDP, and for each Helix service (e.g. p4d, p4broker, p4p) within each SDP instance.

For example, the following installs or replaces systemd unit files for p4d and p4broker for SDP instance 1. These must be executed as root.

First, stop the services if they are running.

systemctl stop p4d_1 p4broker_1
/p4/1/bin/p4d_1_init stop
/p4/1/bin/p4broker_1_init stop

Next, add/replace the *.service files:

cd /etc/systemd/system
sed -e s:__INSTANCE__:1:g -e s:__OSUSER__:perforce:g /p4/common/etc/systemd/system/p4d_N.service.t > p4d_1.service
sed -e s:__INSTANCE__:1:g -e s:__OSUSER__:perforce:g /p4/common/etc/systemd/system/p4broker_N.service.t > p4broker_1.service
systemctl daemon-reload

Enable and start the services.

systemctl enable p4d_1 p4broker_1
systemctl start p4d_1 p4broker_1

Confirm that they are happy:

systemctl status p4d_1 p4broker_1

3.3. Post Operation Steps

Cleanup steps can occur after the upgrade. In some cases cleanup is done immediately following the upgrade; in other cases it may be deferred by days or weeks.

3.3.1. Cleanup

Temporary directories and files with DELETE_ME in their names created during the upgrade procedure can now be deleted.
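A sketch for locating such leftovers before removing them; the name patterns follow the conventions used in the examples above, so adjust them if you used different names, and review the list carefully before deleting anything:

find /p4 /hxdepots -maxdepth 4 \( -name '*DELETE_ME*' -o -name 'OLD.*' -o -name '*.old_symlink' \) 2>/dev/null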

Appendix A: Custom HMS Managed Installations

If the Helix Management System (HMS) is used to manage this installation, you should have custom site-specific documentation for upgrading the SDP that supersedes this documentation. If the file /p4/common/bin/hms exists at your site, you have an HMS-managed site. Contact Perforce Consulting for more information.

Note that HMS solutions are inherently custom and not officially supported, but can be fully automated for global Helix Core topologies.