
Server Deployment Package for Perforce Helix

User Guide (for Unix)

Perforce Software, Inc.


Preface

This guide tells you how to set up a new Perforce Helix Server installation using the Server Deployment Package (SDP). Recommendations for optimal system maintenance and performance are included as well. The SDP follows best practices for Perforce server configuration and administration. The SDP consists of standard configuration settings, scripts, and tools, which provide several key features:

  • A volume layout designed for maximum data integrity and server performance.
  • Automated offline checkpointing and backup procedures for server metadata.
  • Replication to another server.
  • Easy maintenance of user accounts, labels, workspaces, and other data.
  • User authentication using LDAP or Active Directory.

This guide assumes some familiarity with Perforce, and does not duplicate the basic information in the Perforce user documentation. For basic information on Perforce, consult Introducing Perforce. For system administrators, the Perforce System Administrator's Guide is essential reading. All documentation is available from the Perforce web site at http://www.perforce.com.

Please Give Us Feedback

Perforce welcomes feedback from our users. Please send any suggestions for improving this document or the SDP to rusty@rcjacksonconsulting.com.


Table of Contents

  1. Overview
  2. Configuring the Perforce Server
  3. Installing the Perforce Server and the SDP
  4. Backup, Replication, and Recovery
  5. Server Maintenance
  6. Tools and Scripts
  7. Appendix A – Directory Structure Configuration Script
  8. Appendix B – Frequently Asked Questions/Troubleshooting

Overview

The SDP has four main components:

  • Hardware and storage layout recommendations for Perforce.
  • Scripts to automate offline checkpoints and other critical maintenance activities.
  • Scripts to replicate the Perforce journal to another volume or server.
  • Scripts to assist with user account maintenance and other routine administration tasks.

Each of these components is covered in detail in this guide.

The SDP should be versioned in a depot (e.g. //perforce) as part of the installation process.

The directory structure of the SDP is shown below in Figure 1: SDP Package Directory Structure. This includes all SDP files, including documentation and maintenance scripts. A subset of these files is deployed to server machines during the installation process.

```
sdp
├── doc
├── Maintenance (Admin scripts)
└── Server (Core SDP Files)
    ├── setup (typemap, configure, etc)
    └── Unix
        ├── setup
        └── p4
            ├── 1
            │   └── bin
            └── common
                ├── bin (Backup scripts, etc)
                ├── triggers (Example triggers)
                ├── config
                ├── etc
                │   ├── cron.d
                │   └── init.d
                ├── lib
                └── test
```
Figure 1: SDP Package Directory Structure


Configuring the Perforce Server

This chapter tells you how to configure a Perforce server machine and an instance of the Perforce Server. These topics are covered more fully in the System Administrator's Guide and in the Knowledge Base; this chapter covers the details most relevant to the SDP.

The SDP can be installed on multiple server machines, and each server machine can host one or more Perforce server instances. (In this guide, the term server refers to a Perforce server instance unless otherwise specified.) Each server instance is assigned a number. This guide uses instance number 1 in the example commands and procedures. Other instance numbers can be substituted as required.

Optionally, instances can be given a short tag name, such as 'abc', rather than a number. Manual configuration is required to use tag names rather than the default numeric values.

This chapter also describes the general usage of SDP scripts and tools.

Volume Layout and Hardware

To ensure maximum data integrity and performance, use three or four different physical volumes for each server instance. Three volumes can be used for all instances hosted on one server machine, but using three or four volumes per instance reduces the chance of hardware failure affecting more than one instance. The hx prefix is used to indicate Helix volumes in the documentation, but your own naming conventions/standards can be used instead.

  • Perforce metadata (database files), volumes 1 & 2: Use the fastest volume possible, ideally RAID 1+0 on a dedicated controller with the maximum cache available on it. These volumes default to /metadata1 and /metadata2. Having two volumes means the online and offline databases can be swapped simply by switching a couple of symlinks. It is fine to have both pointing to the same physical volume.

  • Journals and logs: Use a fast volume, ideally RAID 1+0 on its own controller with the standard amount of cache on it. This volume is normally called /logs. If a separate logs volume is not available, put the logs on the metadata1 volume.

  • Depot data, archive files, scripts, and checkpoints: Use a large volume, with RAID 5 on its own controller with a standard amount of cache or a SAN or NAS volume. This volume is the only volume that must be backed up. The backup scripts place the metadata snapshots on this volume. This volume can be backed up to tape or another long term backup device. This volume is normally called /depots.

If three controllers are not available, put the logs and depots volumes on the same controller.

Warning: Do not run anti-virus tools or back up tools against the metadata volume(s) or logs volume(s), because they can interfere with the operation of the Perforce server.

Important: Back up everything on the depots volume(s). Avoid backing up the metadata[1,2] volumes directly, because doing so can interfere with the operation of a live Perforce server, potentially corrupting data. The checkpoint and journal process archives the metadata on the depots volume. Backing up the logs volume is optional.

The SDP assumes (but does not require) the three volumes described above. On Unix/Linux platforms, the SDP creates a convenience directory containing links to the three volumes for each instance. This convenience directory is called /p4. The convenience directory enables easy access to the different parts of the file system for each instance. For example:

  • /p4/1/root has the database for instance 1
  • /p4/1/logs has the logs for instance 1
  • /p4/1/bin has the binaries for instance 1
  • /p4/common/bin contains the scripts common to all instances

Figure 2: SDP Runtime Structure and Volume Layout

Memory and CPU

Make sure the server has enough memory to cache the db.rev database file and to prevent the server from paging during user queries. Maximum performance is obtained if the server has enough memory to keep all of the database files in memory.

Below are some approximate guidelines for allocating memory:

  • 1.5 kilobytes of RAM per file stored in the server.
  • 32 MB of RAM per user.
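
For example, under these guidelines a server holding 5 million files for 200 users would want roughly (5,000,000 × 1.5 KB) + (200 × 32 MB) ≈ 7.5 GB + 6.4 GB ≈ 14 GB of RAM available to Perforce, before counting the OS and other processes.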

Use the fastest processors available with the fastest available bus speed. Faster processors with fewer cores provide better performance for Perforce. Quick bursts of computational speed are more important to Perforce's performance than the number of processors, but have a minimum of two processors so that the offline checkpoint and backup processes do not interfere with your Perforce server.

General SDP Usage

This section presents an overview of the SDP scripts and tools. Details about the specific scripts are provided in later sections.

Unix/Linux

Most scripts and tools reside in /p4/common/bin. The /p4/instance/bin directory contains scripts that are specific to that instance such as wrappers for the p4d executable.

Older versions of the SDP required you to run important administrative commands using the p4master_run script, specifying fully qualified paths. This script loads environment information from /p4/common/bin/p4_vars, the central environment file of the SDP, ensuring a controlled environment. The p4_vars file includes instance-specific environment data from /p4/common/config/instance.vars. The p4master_run script is still used when running p4 commands against the server, unless you first set up your environment by sourcing p4_vars with the instance as a parameter. Administrative scripts, such as daily_backup.sh, no longer need to be called with p4master_run; they only require the instance number as an argument.
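
For example, a minimal session for instance 1 might look like this (sourcing p4_vars as described above; the daily_backup.sh invocation is shown only as an illustration):

```bash
# Load the SDP environment for instance 1
source /p4/common/bin/p4_vars 1

# p4 commands now run against instance 1 without p4master_run
p4 info

# Administrative scripts take the instance number directly
/p4/common/bin/daily_backup.sh 1
```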

When invoking a Perforce command directly on the server machine, use the p4_instance wrapper that is located in /p4/instance/bin. This wrapper invokes the correct version of the p4 client for the instance. The use of these wrappers enables easy upgrades, because the wrapper is a link to the correct version of the p4 client. There is a similar wrapper for the p4d executable, called p4d_instance.

Below are some usage examples for instance 1:

| Example | Remarks |
|---------|---------|
| `/p4/common/bin/p4master_run 1 /p4/1/bin/p4_1 admin stop` | Run p4 admin stop on instance 1 |
| `/p4/common/bin/live_checkpoint.sh 1` | Take a checkpoint of the live database on instance 1 |
| `/p4/common/bin/p4login 1` | Log in as the p4admin user on instance 1 |

Some maintenance scripts can be run from any client workspace, if the user has administrative access to Perforce. For example, to run the script that archives old workspaces and branches, run:

```bash
/ws_root/Perforce/sdp/Maintenance/accessdates.py
```

If an error occurs due to the default Python interpreter used by the script, invoke Python first:

```bash
/bin/python /ws_root/Perforce/sdp/Maintenance/accessdates.py
```

In the preceding example /ws_root is the root of the client workspace, and the Python interpreter is located in /bin.

Monitoring SDP activities

The important SDP maintenance and backup scripts generate email notifications when they complete.

For further monitoring, you can consider options such as:

  • Making the SDP log files available via a password protected HTTP server.
  • Directing the SDP notification emails to an automated system that interprets the logs.

Installing the Perforce Server and the SDP

This chapter tells you how to install a Perforce server instance in the SDP framework. For more details about server installation, refer to the Perforce System Administrator's Guide.

Many companies use a single Perforce Server to manage their files, while others use multiple servers. The choice depends on network topology, the geographic distribution of work, and the relationships among the files being managed. If multiple servers are run, assign each instance a number and use that number as part of the name assigned to depots, to make the relationship of depots and servers obvious.

The default P4PORT setting used by the SDP is the instance number followed by 666. For example, instance 1 runs on port 1666. Each Perforce instance uses its hostname as an identifying name; this identification is used for replicated servers. This can easily be changed in /p4/common/bin/p4_vars.

For any instances that are named rather than numbered, then the /p4/common/bin/p4_vars file must be customized to assign a numeric P4PORT value to each named instance.
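
As a purely hypothetical sketch (the variable names below are illustrative, not the actual contents of p4_vars), such a mapping might take the form of a shell case statement keyed on the instance name:

```bash
# Illustrative only: map named instances to numeric ports
case "$INSTANCE" in
    Master) export P4PORT=1666 ;;
    abc)    export P4PORT=2666 ;;
    *)      export P4PORT=${INSTANCE}666 ;;
esac
```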

Note: To install the SDP, you must have root (super-user or administrator) access to the server machine.

Installing on Unix/Linux Machines

To install Perforce Server and the SDP, perform the following basic steps that are discussed below:

  1. Set up a user account, file system, and configuration scripts.
  2. Run the configuration script.
  3. Start the server and configure the required file structure for the SDP.

Initial setup

Prior to installing the Perforce server, perform the following steps:

  1. Create a user called p4admin (it can be a different name if you prefer, in which case modify the OSUSER entry in the mkdirs.sh script). Set the user's home directory to /p4 on a local disk.

  2. Create a group called perforce (again, this can be a different name – see OSGROUP in mkdirs.sh) and make it the new user's primary group.

  3. Create or mount the server file system volumes (/depots, /metadata1, /metadata2, /logs).

  4. Copy the SDP to the directory /depots/sdp. We will refer to this directory as $SDP. Make the entire $SDP directory writable.

  5. Download the appropriate p4 and p4d binaries for your release and platform from ftp.perforce.com (log in as anonymous) and place them in $SDP/Server/Unix/p4/common/bin. Do not rename them to include the version number; this step is done automatically for you by the SDP.

  6. cd to $SDP/Server/Unix/setup and edit mkdirs.sh - set all of the variables in the configuration variables section for your company.

  7. As the root user, cd to $SDP/Server/Unix/setup, and run:

    mkdirs.sh instance

    Examples:

    mkdirs.sh 1
    mkdirs.sh Master

    This script configures the first Perforce Server instance. To configure additional instances, run mkdirs.sh again, specifying the instance number each time:

    mkdirs.sh 2
    mkdirs.sh 3
  8. Put the Perforce license file for the server into /p4/1/root. Note: if you have multiple instances and have been provided with port-specific licenses by Perforce, the appropriate license file must be stored in the appropriate /p4/instance/root folder.

  9. Make the Perforce server a system service that starts and stops automatically when the machine reboots. Running mkdirs.sh creates a set of init scripts for various Perforce server products in the instance-specific bin folder:

    • /p4/1/bin/p4d_1_init
    • /p4/1/bin/p4broker_1_init
    • /p4/1/bin/p4p_1_init
    • /p4/1/bin/p4ftpd_1_init
    • /p4/1/bin/p4dtg_1_init
    • /p4/1/bin/p4web_1_init

The steps required to complete the configuration will vary depending on the Unix distribution being used.

The following sample commands enable init scripts as system services on RedHat / CentOS (up to version 6) and SuSE (up to version 11). Run these commands as the root user:

cd /etc/init.d
ln -s /p4/1/bin/p4d_1_init
chkconfig --add p4d_1_init
chkconfig p4d_1_init on

Run the ln -s and two chkconfig commands for any other init scripts, besides p4d_1_init, that you wish to operate on for that instance and on the current machine, such as p4broker_1_init or p4web_1_init. Remove init scripts for any services not needed on that machine.

Note: RHEL 7, CentOS 7, SuSE 12, Ubuntu (v15.04) and other distributions utilize systemd/systemctl as the mechanism for controlling services. A sample systemd configuration is included in $SDP/Server/Unix/setup/systemd, along with a README.md file that describes the configuration process.
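
Assuming the systemd template has been installed as a unit for instance 1 (the unit name below is an assumption; see the README.md for the actual naming), enabling and starting the service would look something like:

```bash
# Enable the p4d unit for instance 1 at boot, then start and check it
sudo systemctl enable p4d_1
sudo systemctl start p4d_1
sudo systemctl status p4d_1
```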

Ubuntu (pre 15.04), MacOS and other Unix derivatives use different mechanisms to enable services. If your Linux distribution does not have the chkconfig or systemctl utilities, consult your distribution's documentation for information on enabling services.

Upgrading an existing SDP installation

If you have an earlier version of the Server Deployment Package (SDP) installed, you'll want to be aware of the new -test flag to the SDP setup script, mkdirs.sh. The following update instructions assume a simple, single-server topology.

See the instructions in the file README.md / README.html in the root of the SDP directory.

Configuration script

The mkdirs.sh script executed above resides in $SDP/Server/Unix/setup. It sets up the basic directory structure used by the SDP. Carefully review the header of this script before running it, and adjust the values of the variables near the top of the script as required. The important parameters are:

| Parameter | Description |
|-----------|-------------|
| DB1 | Name of the metadata1 volume (can be same as DB2) |
| DB2 | Name of the metadata2 volume (can be same as DB1) |
| DD | Name of the depots volume |
| LG | Name of the logs volume |
| ADMINUSER | P4USER value of a Perforce super user that operates SDP scripts, typically perforce or p4admin |
| OSUSER | Operating system user that will run the Perforce instance, typically perforce |
| OSGROUP | Operating system group that OSUSER belongs to, typically perforce |
| SDP | Path to SDP distribution file tree |
| CASESENSITIVE | Indicates if server has special case sensitivity settings |
| CLUSTER | Indicates if server is running in a cluster |
| P4ADMINPASS | Password to use for the Perforce superuser account |
| P4SERVICEPASS | Service user's password for replication |
| P4DNSNAME | Fully qualified DNS name of the Perforce master server machine |

For a detailed description of this script, see Appendix A.
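
For illustration only, these variables might be assigned near the top of mkdirs.sh as follows (the parameter names come from the table above; the values are examples for the standard volume layout, not shipped defaults, and the remaining parameters are omitted):

```bash
# Example settings for the standard volume layout
DB1=metadata1
DB2=metadata2
DD=depots
LG=logs
ADMINUSER=p4admin
OSUSER=p4admin
OSGROUP=perforce
P4DNSNAME=p4master.example.com
```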

Starting/Stopping Perforce Server Products

The SDP includes templates for initialization (start/stop) scripts, "init scripts," for a variety of Perforce server products, including:

  • p4d
  • p4broker
  • p4p
  • p4dtg
  • p4ftpd
  • p4web

The init scripts are named /p4/instance/bin/service_instance_init, where service is the server product (p4d, p4broker, and so on).

For example, the init script for starting p4d for Instance 1 is /p4/1/bin/p4d_1_init. All init scripts accept at least start, stop, and status arguments. The perforce user can start p4d by calling:

p4d_1_init start

And stop it by calling:

p4d_1_init stop

Once logged into Perforce as a super user, the p4 admin stop command can also be used to stop p4d.

All init scripts can be started as the perforce user or the root user (except p4web, which must start initially as root). The application runs as the perforce user in any case. If the init scripts are configured as system services, they can also be called by the root user using the service command:

service p4d_1_init start

Templates for the init scripts used by mkdirs.sh are stored in:

/p4/common/etc/init.d

There are also basic crontab templates for a Perforce master and replica server in:

/p4/common/etc/cron.d

These define schedules for routine checkpoint operations, replica status checks, and email reviews.
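
As an illustration of what these templates contain, a crontab entry scheduling the daily checkpoint for instance 1 on the master might look like this (the time shown is an arbitrary example; actual entries come from the shipped templates):

```bash
# Run the daily checkpoint at 01:01, Monday through Saturday, on the master only
1 1 * * 1-6 /p4/common/bin/run_if_master.sh 1 /p4/common/bin/daily_checkpoint.sh 1
```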

To configure and start instance 1, follow these steps:

  1. Start the Perforce server by calling p4d_1_init start.

  2. Ensure that the admin user configured above has the correct password defined in /p4/common/config/.p4passwd.p4_1.admin, and then run the p4login script (which calls the p4 login command using the .p4passwd.p4_1.admin file).

  3. For new servers, run this script, which sets several recommended configurables:

    $SDP/Server/setup/configure_new_server.sh

    For existing servers, examine this file, and manually apply the p4 configure command to set configurables on your Perforce server.

  4. Initialize the perforce user's crontab with one of these commands:

    crontab /p4/p4.crontab

    or

    crontab /p4/p4.crontab.rep

    and customize execution times for the commands within the crontab files to suit the specific installation.

To verify that your server installation is working properly:

  1. Issue the p4 info command, after setting appropriate environment variables. If the server is running, it will display details about its settings.

  2. Now that the server is running properly, copy the following configuration files to the depots volume for backup purposes:

    • Any init scripts used in /etc/init.d.
    • A copy of the crontab file, obtained using crontab -l.
    • Any other relevant configuration scripts, such as cluster configuration scripts, failover scripts, or disk failover configuration files.

Archiving configuration files

Now that the server is running properly, copy the following configuration files to the depots volume for backup:

  • The scheduler configuration.
  • Cluster configuration scripts, failover scripts, and disk failover configuration files.

Configuring protections, file types, monitoring and security

After the server is installed and configured, most sites will want to modify server permissions (protections) and security settings. Other common configuration steps include modifying the file type map and enabling process monitoring.

To configure permissions, perform the following steps:

  1. To set up protections, issue the p4 protect command. The protections table is displayed.

  2. Delete the following line:

    write user * * //depot/...
  3. Define protections for your server using groups. Perforce uses an inclusionary model: no access is given by default, so you must specifically grant access to users/groups in the protections table (see the protections sketch after this list).

  4. To set the server's default file types, run the p4 typemap command and define typemap entries to override Perforce's default behavior.

  5. Add any file type entries that are specific to your site. Suggestions:

    • For already-compressed file types (such as .zip, .gz, .avi, .gif), assign a file type of binary+Fl to prevent the server from attempting to compress them again before storing them.
    • For regular binary files, add binary+l so that only one person at a time can check them out.

    A sample file is provided in $SDP/Server/config/typemap.

  6. To make your changelists default to restricted (for high security environments):

    p4 configure set defaultChangeType=restricted
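
As referenced in step 3, here is a hedged sketch of group-based protections entries (group and depot names are illustrative; adapt them to your site):

```
super user p4admin * //...
write group Developers * //depot/projectA/...
read group Contractors * //depot/projectA/docs/...
```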

Other server configurables

There are various configurables that you should consider setting for your server. Some suggestions are in the file: $SDP/Server/setup/configure_new_server.sh

Review the contents and either apply individual settings manually, or edit the file and apply the newly edited version. If you have any questions, see the configurables section in the appendix of the Command Reference Guide. You can also contact Perforce Support with questions.


Backup, Replication, and Recovery

Perforce servers maintain metadata and versioned files. The metadata contains all the information about the files in the depots. Metadata resides in database (db.*) files in the server's root directory (P4ROOT). The versioned files contain the file changes that have been submitted to the server. Versioned files reside on the depots volume.

This section assumes that you understand the basics of Perforce backup and recovery. For more information, consult the Perforce System Administrator's Guide and the Knowledge Base articles about replication.

Typical Backup Procedure

The SDP's maintenance scripts, run as cron tasks, periodically back up the metadata. The weekly sequence is described below.

Seven nights a week, perform the following tasks:

  1. Truncate the active journal.
  2. Replay the journal to the offline database.
  3. Create a checkpoint from the offline database.
  4. Recreate the offline database from the last checkpoint.

Once every six months, perform the following tasks:

  1. Stop the live server.
  2. Truncate the active journal.
  3. Replay the journal to the offline database.
  4. Archive the live database.
  5. Move the offline database to the live database directory.
  6. Start the live server.
  7. Create a new checkpoint from the archive of the live database.
  8. Recreate the offline database from the last checkpoint.
  9. Verify all depots.

This normal maintenance procedure puts the checkpoints (metadata snapshots) on the depots volume, which contains the versioned files. Backing up the depots volume with a normal backup utility like robocopy or rsync provides you with all the data necessary to recreate the server.
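
A minimal sketch of such a backup with rsync (the destination host and path are illustrative):

```bash
# Mirror the entire depots volume, including checkpoints, to a backup host
rsync -avz /depots/ backuphost:/backups/depots/
```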

Important: Be sure to back up the entire depots volume using a normal backup utility.

With no additional configuration, the normal maintenance prevents loss of more than one day's metadata changes. To provide an optimal Recovery Point Objective (RPO), the SDP provides additional tools for replication.

Full One-Way Replication

Perforce supports a full one-way replication of data from a master server to a replica, including versioned files. The p4 pull command is the replication mechanism, and a replica server can be configured to know it is a replica and use the replication command. The p4 pull mechanism requires very little configuration and no additional scripting. As this replication mechanism is simple and effective, we recommend it as the preferred replication technique.
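
For orientation, these are the kinds of configurables involved in a p4 pull replica; this is a sketch only (the mkstandby.sh script described below sets these up for you, and the server names here are illustrative):

```bash
# Run on the master: point the replica at its target and schedule pull threads
p4 configure set replica1#P4TARGET=svrmaster:1666
p4 configure set replica1#startup.1="pull -i 1"
p4 configure set replica1#startup.2="pull -u -i 1"
p4 configure set replica1#lbr.replication=readonly
p4 configure set replica1#db.replication=readonly
```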

Replica servers can also be configured to only contain metadata, which can be useful for reporting or offline checkpointing purposes. See the Distributing Perforce Guide for details on setting up replica servers.

If you wish to use the replica as a read-only server, you can use the P4Broker to direct read-only commands to the replica or you can use a forwarding replica. The broker can do load balancing to a pool of replicas if you need more than one replica to handle your load.

Note: Replication handles all server metadata and versioned file content, but not the SDP installation itself or other external scripts such as triggers. Use tools such as robocopy or rsync to replicate the rest of the depots volume.

Replication Setup

To configure a replica server, first configure a machine identically to the master server (at least as regards the link structure such as /p4, /p4/common/bin and /p4/instance/*), then install the SDP on it to match the master server installation.

Perforce supports many types of replicas suited to a variety of purposes, such as:

  • Real-time backup
  • Providing a disaster recovery solution
  • Load distribution to enhance performance
  • Distributed development
  • Dedicated resources for automated systems, such as build servers

The easiest way to set up a replica is to run this command on the master:

/p4/common/bin/mkstandby.sh 1 <rep name> <replica user password> <master server address:port>

(There is also an mkedge.sh script if you are creating an edge server rather than a standby.)

This creates the service user and server spec, sets the required configurables, and sets up the service users group and the protections for replication.

Now that the settings are in the master server, you need to create a checkpoint to seed the replica:

/p4/common/bin/daily_checkpoint.sh 1

When the checkpoint finishes, rsync the checkpoint plus the versioned files over to the replica:

rsync -avz /p4/1/checkpoints/p4_1.ckp.###.gz perforce@replica:/p4/1/checkpoints/.
rsync -avz /p4/1/depots/ perforce@replica:/p4/1/depots/

(Assuming perforce is the OS user name and replica is the name of the replica server, and that ### is the checkpoint number created by the daily backup.)

Once the rsync finishes, go to the replica machine and run the following:

/p4/1/bin/p4d_1 -r /p4/1/root -jr -z /p4/1/checkpoints/p4_1.ckp.###.gz

Log in as the service user (specifying the appropriate password when prompted):

P4TICKETS=/p4/1/.p4tickets /p4/1/bin/p4_1 -p svrmaster:1667 -u svc_replica1 login

Start the replica instance:

/p4/1/bin/p4d_1_init start

Now, you can log into the replica server itself and run p4 pull -lj to check to see if replication is working. If you see any numbers with a negative sign in front of them, replication is not working.

The final steps for setting up the replica server are to set up the crontab for the replica server, and set up the rsync trust certificates so that the replica scripts can run rsync without passwords.
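
In practice, allowing the replica scripts to run rsync without passwords typically means passwordless SSH between the perforce OS accounts on the two machines; a minimal sketch (host name illustrative):

```bash
# As the perforce OS user: create a key pair and trust it on the other machine
ssh-keygen -t ed25519
ssh-copy-id perforce@replica
```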

Recovery Procedures

There are three scenarios that require you to recover server data:

| Metadata | Depot data | Action required |
|----------|------------|-----------------|
| Lost or corrupt | Intact | Recover metadata as described below |
| Intact | Lost or corrupt | Call Perforce Support |
| Lost or corrupt | Lost or corrupt | Recover metadata as described below. Recover the depots volume using your normal backup utilities. |

Restoring the metadata from a backup also optimizes the database files.

Recovering a master server from a checkpoint and journal(s)

The checkpoint files are stored in the /p4/instance/checkpoints directory, and the most recent checkpoint is named p4_instance.ckp.number.gz. Recreating up-to-date database files requires the most recent checkpoint from /p4/instance/checkpoints and the journal file from /p4/instance/logs.

To recover the server database manually, perform the following steps from the root directory of the server (/p4/instance/root):

  1. Stop the Perforce Server:

    /p4/common/bin/p4master_run instance /p4/instance/bin/p4_instance admin stop
  2. Delete the old database files in the /p4/instance/root/save directory.

  3. Move the live database files (db.*) to the save directory.

  4. Restore from the most recent checkpoint:

    /p4/instance/bin/p4d_instance -r /p4/instance/root -jr -z /p4/instance/checkpoints/p4_instance.ckp.most_recent_#.gz
  5. Replay the transactions that occurred after the checkpoint was created:

    /p4/instance/bin/p4d_instance -r /p4/instance/root -jr /p4/instance/logs/journal
  6. Restart your Perforce server.

  7. If the Perforce service starts without errors, delete the old database files from /p4/instance/root/save.

Recovering a replica from a checkpoint

This is very similar to creating a replica in the first place as described above. If you have been running the replica crontab commands as suggested, then you will have the latest checkpoints from the master already copied across to the replica.

See the steps in the script weekly_sync_replica.sh for details.

Remember to ensure you have logged the service user in to the master server.

Recovering from a tape backup

This section describes how to recover from a tape or other offline backup to a new server machine if the server machine fails. The tape backup for the server is made from the depots volume. The new server machine must have the same volume layout and user/group settings as the original server.

To recover from a tape backup, perform the following steps:

  1. Recover the depots volume from your backup tape.
  2. Create the /p4 convenience directory on the OS volume.
  3. Create the directories /metadata1/p4/instance/root/save and /metadata2/p4/instance/offline_db.
  4. Change ownership of these directories to the OS account that runs the Perforce processes.
  5. Switch to the Perforce OS account, and create a link in the /p4 directory to /depots/p4/instance.
  6. Create a link in the /p4 directory to /depots/p4/common.
  7. As a super-user, reinstall and enable the init.d scripts.
  8. Find the last available checkpoint, under /p4/instance/checkpoints.
  9. Recover the latest checkpoint:
    /p4/instance/bin/p4d_instance -r /p4/instance/root -jr -z last_ckp_file
  10. Recover the checkpoint to the offline_db directory:
    /p4/instance/bin/p4d_instance -r /p4/instance/offline_db -jr -z last_ckp_file
  11. Reinstall the Perforce server license to the server root directory.
  12. Start the perforce service: /p4/1/bin/p4d_1_init start
  13. Verify that the server instance is running.
  14. Reinstall the server crontab or scheduled tasks.
  15. Verify the database and versioned files by running the p4verify.sh script.

Failover to a replicated standby machine

See DR-Failover-Steps-Linux.docx


Server Maintenance

This section describes typical maintenance tasks and best practices for administering server machines. The directory $SDP/Maintenance contains scripts for several common maintenance tasks.

The user running the maintenance scripts must have administrative access to Perforce for most activities. Most of these scripts can be run from a client machine, but it is easiest to run them on the server via crontab.

Server upgrades

Upgrading a server instance in the SDP framework is a simple process involving a few steps:

  1. Download the new p4 and p4d executables for your OS from ftp.perforce.com and place them in /p4/common/bin

  2. Run:

    /p4/common/bin/upgrade.sh instance
  3. If you are running replicas, upgrade the replicas first, and then the master (outside -> in)

Database Modifications

Occasionally modifications are made to the Perforce database from one release to another. For example, server upgrades and some recovery procedures modify the database.

When upgrading the server, replaying a journal patch, or performing any activity that modifies the db.* files, you must restart the offline checkpoint process so that the files in the offline_db directory match the ones in the live server directory. The easiest way to restart the offline checkpoint process is to run the live_checkpoint script after modifying the db.* files:

/p4/common/bin/live_checkpoint.sh instance

This script makes a new checkpoint of the modified database files in the live root directory, then recovers that checkpoint to the offline_db directory so that both directories are in sync.

Listing inactive specifications

To list branch specifications, clients, labels and users that have been inactive for a specified number of weeks, run accessdates.py. This script generates four text files listing inactive specifications:

  • branches.txt
  • clients.txt
  • labels.txt
  • users.txt

Unloading and Reloading labels

To use the unload and reload commands for archiving clients and labels, you must first create an unload depot using the p4 depot command:

p4master_run instance /p4/instance/bin/p4_instance depot unload

Set the type of the depot to unload and save the form.

After the depot is created, you can use the following command to archive all the clients and labels that have not been accessed since the given date:

p4master_run instance /p4/instance/bin/p4_instance unload -f -L -z -a -d <date>

Users can reload their own clients/labels using the reload command:

p4 reload -c <clientname>
p4 reload -l <labelname>

Deleting users

To delete users, run python p4deleteuser.py, specifying the users to be deleted. The script deletes the users, any workspaces they own, and removes them from any groups they belong to.

To delete all users that have not accessed the server in the past 12 weeks, run python delusers.py.

Listing users

To display a list of users that are in a group but do not have an account on the server, run python checkusers.py.

Group management

To duplicate a specified user's group entries on behalf of another user, run python mirroraccess.py:

python mirroraccess.py sourceuser targetuser

To add users to a group:

python addusertogroup.py user group

Adding users

To add users to a server:

  1. Create a text file, such as users.csv, containing the users to add:

    user,email,full name
  2. If you are using LDAP/AD authentication, edit createusers.py and comment out:

    setpass.setpassword(user[0])
  3. Run python createusers.py users.csv.

Email functions

To send email to all of your Perforce users:

  1. Create a file called message.txt that contains the body of your message.
  2. Run email.sh, specifying the email subject in quotes.

To list the email addresses of your Perforce users, run python make_email_list.py.

Workspace management

The form-out trigger $SDP/Server/Unix/p4/common/bin/triggers/SetWsOptions.py contains default workspace options, such as leaveunchanged instead of submitunchanged.

To use the trigger, first copy it to /p4/common/bin/triggers. Then modify the OPTIONS variable in the script, providing the set of desired options. Insert an entry in the trigger table:

setwsopts form-out client "python /p4/common/bin/triggers/SetWsOptions.py %formfile%"

Removing empty changelists

To delete empty pending changelists, run python remove_empty_pending_changes.py.

Maximizing Server Performance

The following sections provide some guidelines for maximizing the performance of the Perforce Server.

Optimizing the database files

The Perforce Server's database is composed of b-tree files. The server does not fully rebalance and compress them during normal operation. To optimize the files, you must checkpoint and restore the server. The weekly checkpoint script used as part of the normal server maintenance automates this task.

Proactive Performance Maintenance

This section describes things that can be done proactively to enhance scalability and maintain performance.

Limiting large requests

To prevent large requests from overwhelming the server, you can limit the amount of data and time allowed per query by setting the maxresults, maxscanrows and maxlocktime parameters. As a good starting point:

  • Set maxresults to slightly larger than the maximum number of files the users need to sync
  • Set maxscanrows to maxresults * 3
  • Set maxlocktime to 30000 milliseconds

These values must be adjusted up as the size of your server and the number of revisions of the files grow.
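
These limits are applied per group, via fields in the p4 group form. A sketch using the starting values above (group name illustrative; other form fields omitted):

```
Group:        Developers
MaxResults:   50000
MaxScanRows:  150000
MaxLockTime:  30000
```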

Offloading remote syncs

For remote users who need to sync large numbers of files, Perforce offers a proxy server. P4P, the Perforce Proxy, is run on a machine that is on the remote users' local network. The Perforce Proxy caches file revisions, serving them to the remote users and diverting that load from the main server.


Tools and Scripts

This section describes the various scripts and files provided as part of the SDP package. To run the main scripts, the machine must have Python 3, and a few scripts require Perl 5. The Maintenance scripts can be run from the server machine or from client machines.

Shell Script Standards

All SDP shell scripts follow these coding standards for reliability and maintainability:

Error Handling

Scripts use set -uo pipefail at the top for robust error handling:

  • -u: Treat unset variables as errors, preventing silent failures from typos
  • -o pipefail: Return the exit status of the first failed command in a pipeline

Instance Initialization

Scripts use the standard init_sdp_instance() function from backup_functions.sh for consistent behavior:

```bash
#!/bin/bash
set -uo pipefail

source /p4/common/bin/backup_functions.sh
init_sdp_instance "${1:-}"

source /p4/common/bin/p4_vars "$SDP_INSTANCE"
```

Modern Bash Syntax

  • Double brackets [[ ]] for conditionals (safer word splitting and globbing)
  • $() for command substitution instead of backticks
  • Proper variable quoting to handle spaces and special characters
  • source instead of . for sourcing files

Safe File Operations

  • Avoid rm $(ls ...) patterns which can fail with spaces in filenames
  • Use proper globbing with null checks: for f in pattern*; do [[ -e "$f" ]] || continue; ...
  • Quote all variable expansions in file operations
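
A short sketch of the safe globbing pattern described above (the path and pattern are illustrative):

```bash
# Remove old checkpoint archives without word-splitting pitfalls
for f in /p4/1/checkpoints/p4_1.ckp.*.gz; do
    [[ -e "$f" ]] || continue   # skip if the glob matched nothing
    rm -- "$f"
done
```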

Core Scripts

The core SDP scripts are those related to checkpoints and other scheduled operations, and all run from /p4/common/bin.

backup_functions.sh

Core function library sourced by most SDP scripts. This file provides:

  • init_sdp_instance(): Standard initialization for SDP instance scripts. Validates the instance parameter and sets up SDP_INSTANCE.
  • check_vars() / set_vars(): Environment validation and variable setup
  • log() / die(): Logging and error handling functions
  • ckp_running() / ckp_complete(): Checkpoint status management
  • stop_p4d() / start_p4d(): Service control functions
  • rsync_with_retry(): Reliable file synchronization with retry logic
  • create_service_user(): Service user creation for replicas
  • remove_old_checkpoints_and_journals(): Cleanup functions
  • rotate_log_file(): Log rotation utilities

Location: /p4/common/bin

p4_vars

Defines the environment variables required by the Perforce server. This script uses a specified instance number as a basis for setting environment variables. It will look for and open the respective p4_<instance>.vars file.

This script also sets server logging options and configurables.

Location: /p4/common/bin

p4_<instance>.vars

Defines the environment variables for a specific instance, including P4PORT etc.

Location: /p4/common/config

p4master_run

This is the wrapper script to other SDP scripts. This ensures that the shell environment is loaded from p4_vars. It provides a -c flag for silent operation, used in many crontab entries so that email is sent from the scripts themselves.

Location: /p4/common/bin

recreate_offline_db

Recovers the offline_db database from the latest checkpoint and replays any journals since then. If you have a problem with the offline database, it is worth running this script first before running live_checkpoint, as the latter stops the server while it runs, which can take hours.

Run this script if an error occurs while replaying a journal during weekly or daily checkpoint process.

Location: /p4/common/bin

live_checkpoint

Stops the server, creates a checkpoint from the live database files, then recovers that checkpoint into the offline_db directory, rebalancing and compressing the database files so that they are optimized.

Run this script when creating the server and if an error occurs while replaying a journal during the off-line checkpoint process.

Location: /p4/common/bin

daily_checkpoint

This script is configured to run six days a week using crontab or the Windows scheduler. The script truncates the journal, replays it into the offline_db directory, creates a new checkpoint from the resulting database files, then recreates the offline_db directory from the new checkpoint.

Location: /p4/common/bin

recreate_db_checkpoint

Performs the weekly checkpoint process. This script stops your server for a few minutes to rotate your database files with those in the offline_db directory.

Location: /p4/common/bin

p4verify.sh / p4verify.py

Verifies the integrity of the depot files. These scripts are run by crontab.

  • p4verify.sh - Shell script version for basic verification
  • p4verify.py - Python version with advanced features including parallel verification, pull-queue gating, and journal-lag gating for replicas

Location: /p4/common/bin

p4review.py

Sends out email containing the change descriptions to users who are configured as reviewers for affected files.

Location: /p4/common/bin

p4login

Executes a p4 login command, using the password configured in mkdirs.sh and stored in a text file.

Location: /p4/common/bin

p4d_instance_init

Starts the Perforce server. This script sources /p4/common/bin/p4_vars, then /p4/common/bin/p4d_base.

Location: /p4/instance/bin

More Server Scripts

These scripts are helpful components of the SDP that run on the server.

upgrade.sh

Runs a typical upgrade process, once new p4 and p4d binaries are available in /p4/common/bin. Handles upgrading p4d, p4broker, and p4p binaries with proper version linking.

Location: /p4/common/bin

run_if_master.sh / run_if_replica.sh / run_if_edge.sh

Conditional execution scripts that run a command only if the current server matches the specified type:

  • run_if_master.sh - Runs command only on commit/master servers
  • run_if_replica.sh - Runs command only on standby or edge replica servers
  • run_if_edge.sh - Runs command only on edge servers

Usage:

/p4/common/bin/run_if_master.sh 1 /p4/common/bin/some_script.sh

Location: /p4/common/bin

mkstandby.sh / mkedge.sh

Scripts to create standby and edge server configurations on the master server. These set up the service user, server spec, configurables, and protections for replication.

Location: /p4/common/bin

p4.crontab

Contains crontab entries to run the server maintenance scripts.

Location: /p4/sdp/Server/Unix/p4/common/etc/cron.d

Python Utilities

sdputils.py

Python utility module for SDP maintenance scripts. Located in $SDP/Maintenance/sdputils.py.

Provides:

  • SDPUtils class: Wrapper for p4 commands and SDP configuration

    • run_p4(): Safe command execution without shell injection
    • login(): Perforce login using SDP password files
    • User, group, label, and change operations
    • Email sending via SMTP
  • EmailSender class: SMTP email functionality with TLS support

  • P4ConnectionManager: P4Python connection management

  • Helper functions: Argument parsing, file operations, protected user lists

Example usage:

```python
from sdputils import SDPUtils

utils = SDPUtils("1")  # For instance 1
utils.login()
result = utils.run_p4(["info"], capture_output=True)
print(result.stdout)

# Get all users
users = utils.get_all_users()

# Send email
utils.send_email("user@example.com", "Subject", "Body text")
```

Maintenance Scripts

There are many useful scripts in /p4/sdp/Maintenance that are not set up to run automatically as part of the SDP installation. The scripts provide maintenance tools and various reports. Each script has comments at the top indicating what it does and how to run it.

Other Files

| File | Location | Remarks |
|------|----------|---------|
| dummy_ip.txt | $SDP/Server/config | Instructions for using a license on more than one machine |
| backup_functions.sh | /p4/common/bin | Unix/Linux only. Core utilities for maintenance scripts |
| p4d_base | /p4/common/bin | Unix/Linux only. Template for Unix/Linux init.d scripts |
| change.txt | $SDP/Maintenance | Template for new pending changelist |

Appendix A – Directory Structure Configuration Script for Linux/Unix

This appendix describes the steps performed by the mkdirs.sh script on Linux/Unix platforms. Please review this appendix carefully before running these steps manually.

Assuming the three-volume configuration described in the Volume Layout and Hardware section is used, the following directories are created (examples use "1" as the server instance number):

| Directory | Remarks |
|-----------|---------|
| /p4 | Must be under / on the OS volume |
| /logs/p4/1/bin | Files in here are generated by the mkdirs.sh script |
| /depots/p4/1/depots | |
| /depots/p4/1/tmp | |
| /depots/p4/common/config | Contains p4_<instance>.vars file |
| /depots/p4/common/bin | Files from $SDP/Server/Unix/p4/common/bin |
| /depots/p4/common/etc | Contains init.d and cron.d |
| /logs/p4/1/logs | |
| /metadata2/p4/1/db2 | Contains offline copy of main server databases |
| /metadata1/p4/1/db1/save | Used only during recreate_db_checkpoint.sh |

Next, mkdirs.sh creates the following symlinks in the /depots/p4/1 directory:

| Link source | Link target |
|-------------|-------------|
| /metadata1/p4/1/db1 | /p4/1/root |
| /metadata2/p4/1/db2 | /p4/1/offline_db |
| /logs/p4/1/logs | /p4/1/logs |
| /logs/p4/1/bin | /p4/1/bin |
| /depots/p4/common | /p4/common |
| /depots/p4/1/depots | /p4/1/depots |
| /depots/p4/1/checkpoints | /p4/1/checkpoints |
| /depots/p4/1/tmp | /p4/1/tmp |

Next, mkdirs.sh renames the Perforce binaries to include version and build number, and then creates appropriate symlinks.

Example structure for two instances (instance #1 using Perforce 2026.1 and instance #2 using 2026.2):

In /p4/common/bin:

```
p4_2026.1_bin   -> p4_2026.1.685046
p4d_2026.1_bin  -> p4d_2026.1.685046
p4_2026.2_bin   -> p4_2026.2.700949
p4d_2026.2_bin  -> p4d_2026.2.700949
p4_1_bin        -> p4_2026.1_bin
p4d_1_bin       -> p4d_2026.1_bin
p4_2_bin        -> p4_2026.2_bin
p4d_2_bin       -> p4d_2026.2_bin
```

In /p4/1/bin:

```
p4_1  -> /p4/common/bin/p4_1_bin
p4d_1 -> /p4/common/bin/p4d_1_bin
```

Appendix B – Frequently Asked Questions/Troubleshooting

This appendix lists common questions and problems encountered by SDP users. Do not hesitate to contact consulting@perforce.com if additional assistance is required.

Journal out of sequence

This error is encountered when the offline and live databases are no longer in sync, and will cause the offline checkpoint process to fail. Because the scripts will replay all outstanding journals, this error is much less likely to occur.

This error can be fixed by running the live_checkpoint.sh script. Alternatively, if you know that the checkpoints created from previous runs of daily_checkpoint.sh are correct, then restore the offline_db from the last known good checkpoint.

Unexpected end of file in replica daily sync

Check the start time and duration of the daily_checkpoint.sh cron job on the master. If this overlaps with the start time of the sync_replica.sh cron job on a replica, a truncated checkpoint may be rsync'd to the replica and replaying this will result in an error.

Adjust the replica's cron job to start later to resolve this. Default cron job times, as installed by the SDP, are initial estimates and should be adjusted to suit your production environment.

# Server Deployment Package for Perforce Helix

## User Guide (for Unix)

### Perforce Software, Inc.

---

# Preface

This guide tells you how to set up a new Perforce Helix Server installation using the Server Deployment Package (SDP). Recommendations for optimal system maintenance and performance are included as well. The SDP follows best practices for Perforce server configuration and administration. The SDP consists of standard configuration settings, scripts, and tools, which provide several key features:

- A volume layout designed for maximum data integrity and server performance.
- Automated offline checkpointing and backup procedures for server metadata.
- Replication to another server.
- Easy maintenance of user accounts, labels, workspaces, and other data.
- User authentication using LDAP or Active Directory.

This guide assumes some familiarity with Perforce, and does not duplicate the basic information in the Perforce user documentation. For basic information on Perforce, consult *Introducing Perforce*. For system administrators, the *Perforce System Administrator's Guide* is essential reading. All documentation is available from the Perforce web site at http://www.perforce.com.

**Please Give Us Feedback**

Perforce welcomes feedback from our users. Please send any suggestions for improving this document or the SDP to rusty@rcjacksonconsulting.com.

---

# Table of Contents

1. [Overview](#overview)
2. [Configuring the Perforce Server](#configuring-the-perforce-server)
3. [Installing the Perforce Server and the SDP](#installing-the-perforce-server-and-the-sdp)
4. [Backup, Replication, and Recovery](#backup-replication-and-recovery)
5. [Server Maintenance](#server-maintenance)
6. [Tools and Scripts](#tools-and-scripts)
7. [Appendix A – Directory Structure Configuration Script](#appendix-a--directory-structure-configuration-script-for-linuxunix)
8. [Appendix B – Frequently Asked Questions/Troubleshooting](#appendix-b--frequently-asked-questionstroubleshooting)

---

# Overview

The SDP has four main components:

- Hardware and storage layout recommendations for Perforce.
- Scripts to automate offline checkpoints and other critical maintenance activities.
- Scripts to replicate the Perforce journal to another volume or server.
- Scripts to assist with user account maintenance and other routine administration tasks.

Each of these components is covered in detail in this guide.

The SDP should be versioned in a depot (e.g. `//perforce`) as part of the installation process.

The directory structure of the SDP is shown below in Figure 1: SDP Package Directory Structure. This includes all SDP files, including documentation and maintenance scripts. A subset of these files are deployed to server machines during the installation process.

```
sdp
├── doc
├── Maintenance (Admin scripts)
└── Server (Core SDP Files)
    ├── setup (typemap, configure, etc)
    └── Unix
        ├── setup
        └── p4
            ├── 1
            │   └── bin
            └── common
                ├── bin (Backup scripts, etc)
                ├── triggers (Example triggers)
                ├── config
                ├── etc
                │   ├── cron.d
                │   └── init.d
                ├── lib
                └── test
```

*Figure 1: SDP Package Directory Structure*

---

# Configuring the Perforce Server

This chapter tells you how to configure a Perforce server machine and an instance of the Perforce Server. These topics are covered more fully in the *System Administrator's Guide* and in the Knowledge Base; this chapter covers the details most relevant to the SDP.

The SDP can be installed on multiple server machines, and each server machine can host one or more Perforce server instances. (In this guide, the term *server* refers to a Perforce server instance unless otherwise specified.) Each server instance is assigned a number. This guide uses instance number 1 in the example commands and procedures. Other instance numbers can be substituted as required.

Optionally, instances can be given a short tag name, such as 'abc', rather than a number. Manual configuration is required to use tag names rather than the default numeric values.

This chapter also describes the general usage of SDP scripts and tools.

## Volume Layout and Hardware

To ensure maximum data integrity and performance, use three or four different physical volumes for each server instance. Three volumes can be used for all instances hosted on one server machine, but using three or four volumes per instance reduces the chance of hardware failure affecting more than one instance. The `hx` prefix is used to indicate Helix volumes in the documentation, but your own naming conventions/standards can be used instead.

- **Perforce metadata (database files), volumes 1 & 2:** Use the fastest volume possible, ideally RAID 1+0 on a dedicated controller with the maximum cache available on it. These volumes default to `/metadata1` and `/metadata2`. Having two means we can swap online and offline database just by switching a couple of links. It is fine to have these both pointing to the same physical volume.

- **Journals and logs:** Use a fast volume, ideally RAID 1+0 on its own controller with the standard amount of cache on it. This volume is normally called `/logs`. If a separate logs volume is not available, put the logs on the metadata1 volume.

- **Depot data, archive files, scripts, and checkpoints:** Use a large volume, with RAID 5 on its own controller with a standard amount of cache or a SAN or NAS volume. This volume is the only volume that must be backed up. The backup scripts place the metadata snapshots on this volume. This volume can be backed up to tape or another long term backup device. This volume is normally called `/depots`.

If three controllers are not available, put the logs and depots volumes on the same controller.

> **Warning:** Do not run anti-virus tools or back up tools against the metadata volume(s) or logs volume(s), because they can interfere with the operation of the Perforce server.

> **Important:** Back up everything on the depots volume(s). Avoid backing up the metadata[1,2] volumes directly, because doing so can interfere with the operation of a live Perforce server, potentially corrupting data. The checkpoint and journal process archive the metadata on the depots volume. Backing up the logs volume is optional.

The SDP assumes (but does not require) the three volumes described above. On Unix/Linux platforms, the SDP creates a convenience directory containing links to the three volumes for each instance. This convenience directory is called `/p4`. The convenience directory enables easy access to the different parts of the file system for each instance. For instance:

- `/p4/1/root` has the database for instance 1
- `/p4/1/logs` has the logs for instance 1
- `/p4/1/bin` has the binaries for instance 1
- `/p4/common/bin` contains the scripts common to all instances

*Figure 2: SDP Runtime Structure and Volume Layout*

## Memory and CPU

Make sure the server has enough memory to cache the db.rev database file and to prevent the server from paging during user queries. Maximum performance is obtained if the server has enough memory to keep all of the database files in memory.

Below are some approximate guidelines for allocating memory:

- 1.5 kilobyte of RAM per file stored in the server.
- 32 MB of RAM per user.

Use the fastest processors available with the fastest available bus speed. Faster processors with a lower number of cores provide better performance for Perforce. Quick bursts of computational speed are more important to Perforce's performance than the number of processors, but have a minimum of two processors so that the offline checkpoint and back up processes do not interfere with your Perforce server.

## General SDP Usage

This section presents an overview of the SDP scripts and tools. Details about the specific scripts are provided in later sections.

### Unix/Linux

Most scripts and tools reside in `/p4/common/bin`. The `/p4/instance/bin` directory contains scripts that are specific to that instance such as wrappers for the p4d executable.

Older versions of the SDP required you to always run important administrative commands using the `p4master_run` script, and specify fully qualified paths. This script loads environment information from `/p4/common/bin/p4_vars`, the central environment file of the SDP, ensuring a controlled environment. The `p4_vars` file includes instance specific environment data from `/p4/common/config/instance.vars`. The `p4master_run` script is still used when running p4 commands against the server unless you set up your environment first by sourcing `p4_vars` with the instance as a parameter. Administrative scripts, such as `daily_backup.sh`, no longer need to be called with `p4master_run` however, they just need you to pass the instance number to them.

When invoking a Perforce command directly on the server machine, use the `p4_instance` wrapper that is located in `/p4/instance/bin`. This wrapper invokes the correct version of the p4 client for the instance. The use of these wrappers enables easy upgrades, because the wrapper is a link to the correct version of the p4 client. There is a similar wrapper for the p4d executable, called `p4d_instance`.

Below are some usage examples for instance 1:

| Example | Remarks |
|---------|---------|
| `/p4/common/bin/p4master_run 1 /p4/1/bin/p4_1 admin stop` | Run p4 admin stop on instance 1 |
| `/p4/common/bin/live_checkpoint.sh 1` | Take a checkpoint of the live database on instance 1 |
| `/p4/common/bin/p4login 1` | Log in as the p4admin user on instance 1 |

Some maintenance scripts can be run from any client workspace, if the user has administrative access to Perforce. For example, to run the script that archives old workspaces and branches, run:

```bash
/ws_root/Perforce/sdp/Maintenance/accessdates.py
```

If an error occurs due to the default Python interpreter used by the script, invoke Python first:

```bash
/bin/python /ws_root/Perforce/sdp/Maintenance/accessdates.py
```

In the preceding example, `/ws_root` is the root of the client workspace, and the Python interpreter is located in `/bin`.

### Monitoring SDP activities

The important SDP maintenance and backup scripts generate email notifications when they complete.

For further monitoring, you can consider options such as:

- Making the SDP log files available via a password-protected HTTP server.
- Directing the SDP notification emails to an automated system that interprets the logs.

---

# Installing the Perforce Server and the SDP

This chapter tells you how to install a Perforce server instance in the SDP framework. For more details about server installation, refer to the *Perforce System Administrator's Guide*.

Many companies use a single Perforce Server to manage their files, while others use multiple servers. The choice depends on network topology, the geographic distribution of work, and the relationships among the files being managed. If multiple servers are run, assign each instance a number and use that number as part of the name assigned to depots, to make the relationship of depots and servers obvious.

The default P4PORT setting used by the SDP is the instance number followed by 666. For example, instance 1 runs on port 1666, and instance 2 on port 2666. Each Perforce instance uses its hostname as an identifying name; this identification is used for replicated servers. This can easily be changed in `/p4/common/bin/p4_vars`.

For any instances that are named rather than numbered, the `/p4/common/bin/p4_vars` file must be customized to assign a numeric P4PORT value to each named instance.

> **Note:** To install the SDP, you must have root (super-user or administrator) access to the server machine.

## Installing on Unix/Linux Machines

To install Perforce Server and the SDP, perform the following basic steps that are discussed below:

1. Set up a user account, file system, and configuration scripts.
2. Run the configuration script.
3. Start the server and configure the required file structure for the SDP.

### Initial setup

Prior to installing the Perforce server, perform the following steps:

1. Create a user called `p4admin` (it can be a different name if you prefer; in that case, modify the OSUSER entry in the `mkdirs.sh` script). Set the user's home directory to `/p4` on a local disk.

2. Create a group called `perforce` (again, this can be a different name; see OSGROUP in `mkdirs.sh`) and make it the `p4admin` user's primary group.

3. Create or mount the server file system volumes (`/depots`, `/metadata1`, `/metadata2`, `/logs`).

4. Copy the SDP to the directory `/depots/sdp`. We will refer to this directory as `$SDP`. Make the entire `$SDP` directory writable.

5. Download the appropriate p4 and p4d binaries for your release and platform from ftp.perforce.com (log in as anonymous) and place them in `$SDP/Server/Unix/p4/common/bin`. Do not rename them to include the version number; this step is done automatically for you by the SDP.

6. `cd` to `$SDP/Server/Unix/setup` and edit `mkdirs.sh` - set all of the variables in the configuration variables section for your company.

7. As the root user, `cd` to `$SDP/Server/Unix/setup`, and run:
   ```bash
   mkdirs.sh instance
   ```
   Examples:
   ```bash
   mkdirs.sh 1
   mkdirs.sh Master
   ```

   This script configures the first Perforce Server instance. To configure additional instances, run `mkdirs.sh` again, specifying the instance number each time:
   ```bash
   mkdirs.sh 2
   mkdirs.sh 3
   ```

8. Put the Perforce license file for the server into `/p4/1/root`. Note that if you have multiple instances and have been provided with port-specific licenses by Perforce, the appropriate license file must be stored in the appropriate `/p4/instance/root` folder.

9. Make the Perforce server a system service that starts and stops automatically when the machine reboots. Running `mkdirs.sh` creates a set of init scripts for various Perforce server products in the instance-specific bin folder:
   - `/p4/1/bin/p4d_1_init`
   - `/p4/1/bin/p4broker_1_init`
   - `/p4/1/bin/p4p_1_init`
   - `/p4/1/bin/p4ftpd_1_init`
   - `/p4/1/bin/p4dtg_1_init`
   - `/p4/1/bin/p4web_1_init`

The steps required to complete the configuration will vary depending on the Unix distribution being used.

The following sample commands enable init scripts as system services on RedHat / CentOS (up to version 6) and SuSE (up to version 11). Run these commands as the root user:

```bash
cd /etc/init.d
ln -s /p4/1/bin/p4d_1_init
chkconfig --add p4d_1_init
chkconfig p4d_1_init on
```

Run the `ln -s` and two `chkconfig` commands for any other init scripts, besides `p4d_1_init`, that you wish to operate on for that instance and on the current machine, such as `p4broker_1_init` or `p4web_1_init`. Remove init scripts for any services not needed on that machine.
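
For example, to also enable the broker init script for instance 1 (the same pattern as above):

```bash
cd /etc/init.d
ln -s /p4/1/bin/p4broker_1_init
chkconfig --add p4broker_1_init
chkconfig p4broker_1_init on
```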

> **Note:** RHEL 7, CentOS 7, SuSE 12, Ubuntu (v15.04) and other distributions utilize systemd/systemctl as the mechanism for controlling services. A sample systemd configuration is included in `$SDP/Server/Unix/setup/systemd`, along with a README.md file that describes the configuration process.
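
On systemd-based distributions, enabling the service looks roughly like the following sketch, assuming a unit named `p4d_1.service` has been installed per that README (the unit name is an assumption):

```bash
# Run as root:
systemctl enable p4d_1
systemctl start p4d_1
systemctl status p4d_1
```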

Ubuntu (pre 15.04), MacOS and other Unix derivatives use different mechanisms to enable services. If your Linux distribution does not have the `chkconfig` or `systemctl` utilities, consult your distribution's documentation for information on enabling services.

### Upgrading an existing SDP installation

If you have an earlier version of the Server Deployment Package (SDP) installed, you'll want to be aware of the new `-test` flag to the SDP setup script, `mkdirs.sh`. The following update instructions assume a simple, single-server topology.

See the instructions in the file `README.md` / `README.html` in the root of the SDP directory.

### Configuration script

The `mkdirs.sh` script executed above resides in `$SDP/Server/Unix/setup`. It sets up the basic directory structure used by the SDP. Carefully review the header of this script before running it, and adjust the values of the variables near the top of the script as required. The important parameters are:

| Parameter | Description |
|-----------|-------------|
| DB1 | Name of the metadata1 volume (can be same as DB2) |
| DB2 | Name of the metadata2 volume (can be same as DB1) |
| DD | Name of the depots volume |
| LG | Name of the logs volume |
| ADMINUSER | P4USER value of a Perforce super user that operates SDP scripts, typically `perforce` or `p4admin` |
| OSUSER | Operating system user that will run the Perforce instance, typically `perforce` |
| OSGROUP | Operating system group that OSUSER belongs to, typically `perforce` |
| SDP | Path to SDP distribution file tree |
| CASESENSITIVE | Indicates if server has special case sensitivity settings |
| CLUSTER | Indicates if server is running in cluster |
| P4ADMINPASS | Password to use for Perforce superuser account |
| P4SERVICEPASS | Service User's password for replication |
| P4DNSNAME | Fully qualified DNS name of the Perforce master server machine |

For a detailed description of this script, see Appendix A.

### Starting/Stopping Perforce Server Products

The SDP includes templates for initialization (start/stop) scripts, "init scripts," for a variety of Perforce server products, including:

- p4d
- p4broker
- p4p
- p4dtg
- p4ftpd
- p4web

The init scripts are named `/p4/instance/bin/service_instance_init`, where *service* is the server product (p4d, p4broker, and so on).

For example, the init script for starting p4d for Instance 1 is `/p4/1/bin/p4d_1_init`. All init scripts accept at least `start`, `stop`, and `status` arguments. The perforce user can start p4d by calling:

```bash
p4d_1_init start
```

And stop it by calling:

```bash
p4d_1_init stop
```

Once logged into Perforce as a super user, the `p4 admin stop` command can also be used to stop p4d.

All init scripts can be started as the perforce user or the root user (except p4web, which must start initially as root). The application runs as the perforce user in any case. If the init scripts are configured as system services, they can also be called by the root user using the service command:

```bash
service p4d_1_init start
```

Templates for the init scripts used by `mkdirs.sh` are stored in:
```
/p4/common/etc/init.d
```

There are also basic crontab templates for a Perforce master and replica server in:
```
/p4/common/etc/cron.d
```

These define schedules for routine checkpoint operations, replica status checks, and email reviews.

To configure and start instance 1, follow these steps:

1. Start the Perforce server by calling `p4d_1_init start`.

2. Ensure that the admin user configured above has the correct password defined in `/p4/common/config/.p4passwd.p4_1.admin`, and then run the `p4login` script (which calls the `p4 login` command using the `.p4passwd.p4_1.admin` file).

3. For new servers, run this script, which sets several recommended configurables:
   ```bash
   $SDP/Server/setup/configure_new_server.sh
   ```
   For existing servers, examine this file, and manually apply the `p4 configure` command to set configurables on your Perforce server.

4. Initialize the perforce user's crontab with one of these commands:
   ```bash
   crontab /p4/p4.crontab
   ```
   or
   ```bash
   crontab /p4/p4.crontab.rep
   ```
   and customize execution times for the commands within the crontab files to suit the specific installation.

To verify that your server installation is working properly, issue the `p4 info` command after setting the appropriate environment variables. If the server is running, it displays details about its settings.

### Archiving configuration files

Now that the server is running properly, copy the following configuration files to the depots volume for backup purposes:

- Any init scripts used in `/etc/init.d`.
- A copy of the crontab (scheduler) configuration, obtained using `crontab -l`.
- Any other relevant configuration scripts, such as cluster configuration scripts, failover scripts, or disk failover configuration files.

## Configuring protections, file types, monitoring and security

After the server is installed and configured, most sites will want to modify server permissions (protections) and security settings. Other common configuration steps include modifying the file type map and enabling process monitoring.

To configure permissions, perform the following steps:

1. To set up protections, issue the `p4 protect` command. The protections table is displayed.

2. Delete the following line:
   ```
   write user * * //depot/...
   ```

3. Define protections for your server using groups. Perforce uses an inclusionary model: no access is given by default, and you must specifically grant access to users and groups in the protections table.

4. To set the server's default file types, run the `p4 typemap` command and define typemap entries to override Perforce's default behavior.

5. Add any file type entries that are specific to your site. Suggestions:
   - For already-compressed file types (such as .zip, .gz, .avi, .gif), assign a file type of `binary+Fl` to prevent the server from attempting to compress them again before storing them.
   - For regular binary files, add `binary+l` so that only one person at a time can check them out.

   A sample file is provided in `$SDP/Server/config/typemap`. An illustrative sketch of such typemap entries appears after this list.

6. To make your changelists default to restricted (for high security environments):
   ```bash
   p4 configure set defaultChangeType=restricted
   ```
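
As referenced in step 5 above, here is an illustrative sketch of typemap entries implementing those suggestions (a minimal example; the sample file in `$SDP/Server/config/typemap` may differ):

```
TypeMap:
        binary+Fl //....zip
        binary+Fl //....gz
        binary+Fl //....avi
        binary+Fl //....gif
        binary+l //....dll
```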

## Other server configurables

There are various configurables that you should consider setting for your server. Some suggestions are in the file: `$SDP/Server/setup/configure_new_server.sh`

Review the contents and either apply individual settings manually, or edit the file and apply the newly edited version. If you have any questions, see the configurables section in the appendix of the *Command Reference Guide*. You can also contact support with questions.

---

# Backup, Replication, and Recovery

Perforce servers maintain metadata and versioned files. The metadata contains all the information about the files in the depots. Metadata resides in database (db.*) files in the server's root directory (P4ROOT). The versioned files contain the file changes that have been submitted to the server. Versioned files reside on the depots volume.

This section assumes that you understand the basics of Perforce backup and recovery. For more information, consult the *Perforce System Administrator's Guide* and the Knowledge Base articles about replication.

## Typical Backup Procedure

The SDP's maintenance scripts, run as cron tasks, periodically back up the metadata. The routine sequence is described below.

**Seven nights a week, perform the following tasks:**

1. Truncate the active journal.
2. Replay the journal to the offline database.
3. Create a checkpoint from the offline database.
4. Recreate the offline database from the last checkpoint.

**Once every six months, perform the following tasks:**

1. Stop the live server.
2. Truncate the active journal.
3. Replay the journal to the offline database.
4. Archive the live database.
5. Move the offline database to the live database directory.
6. Start the live server.
7. Create a new checkpoint from the archive of the live database.
8. Recreate the offline database from the last checkpoint.
9. Verify all depots.

This normal maintenance procedure puts the checkpoints (metadata snapshots) on the depots volume, which contains the versioned files. Backing up the depots volume with a normal backup utility like robocopy or rsync provides you with all the data necessary to recreate the server.

> **Important:** Be sure to back up the entire depots volume using a normal backup utility.

With no additional configuration, the normal maintenance prevents loss of more than one day's metadata changes. To provide an optimal Recovery Point Objective (RPO), the SDP provides additional tools for replication.

## Full One-Way Replication

Perforce supports a full one-way replication of data from a master server to a replica, including versioned files. The `p4 pull` command is the replication mechanism, and a replica server can be configured to know it is a replica and use the replication command. The `p4 pull` mechanism requires very little configuration and no additional scripting. As this replication mechanism is simple and effective, we recommend it as the preferred replication technique.

Replica servers can also be configured to only contain metadata, which can be useful for reporting or offline checkpointing purposes. See the *Distributing Perforce Guide* for details on setting up replica servers.

If you wish to use the replica as a read-only server, you can use the P4Broker to direct read-only commands to the replica or you can use a forwarding replica. The broker can do load balancing to a pool of replicas if you need more than one replica to handle your load.

> **Note:** Replication handles all server metadata and versioned file content, but not the SDP installation itself or other external scripts such as triggers. Use tools such as robocopy or rsync to replicate the rest of the depots volume.

### Replication Setup

To configure a replica server, first configure a machine identically to the master server (at least with regard to the link structure, such as `/p4`, `/p4/common/bin`, and `/p4/instance/*`), then install the SDP on it to match the master server installation.

Perforce supports many types of replicas suited to a variety of purposes, such as:

- Real-time backup
- Providing a disaster recovery solution
- Load distribution to enhance performance
- Distributed development
- Dedicated resources for automated systems, such as build servers

The easiest way to set up a replica is to run this command on the master:

```bash
/p4/common/bin/mkstandby.sh 1 <rep name> <replica user password> <master server address:port>
```

(There is also a `mkedge.sh` script if you are creating an edge server rather than a standby.)

This creates the service user and the server spec, sets the configurables, and sets up the service.g group and the protections for replication.
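
For example (the replica name, service user password, and master address below are illustrative):

```bash
/p4/common/bin/mkstandby.sh 1 replica1 'SvcPassw0rd' master.example.com:1666
```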

Now that the settings are in the master server, you need to create a checkpoint to seed the replica:

```bash
/p4/common/bin/daily_checkpoint.sh 1
```

When the checkpoint finishes, rsync the checkpoint plus the versioned files over to the replica:

```bash
rsync -avz /p4/1/checkpoints/p4_1.ckp.###.gz perforce@replica:/p4/1/checkpoints/.
rsync -avz /p4/1/depots/ perforce@replica:/p4/1/depots/
```

(Assuming `perforce` is the OS user name and `replica` is the name of the replica server, and that `###` is the checkpoint number created by the daily backup.)

Once the rsync finishes, go to the replica machine and run the following:

```bash
/p4/1/bin/p4d_1 -r /p4/1/root -jr -z /p4/1/checkpoints/p4_1.ckp.###.gz
```

Log in as the service user (specifying the appropriate password when prompted):

```bash
P4TICKETS=/p4/1/.p4tickets /p4/1/bin/p4_1 -p svrmaster:1667 -u svc_replica1 login
```

Start the replica instance:

```bash
/p4/1/bin/p4d_1_init start
```

Now, you can log into the replica server itself and run `p4 pull -lj` to check to see if replication is working. If you see any numbers with a negative sign in front of them, replication is not working.

The final steps for setting up the replica server are to set up the crontab for the replica server, and set up the rsync trust certificates so that the replica scripts can run rsync without passwords.
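
Since the replica scripts run rsync over SSH, password-less operation is typically achieved with an SSH key pair for the perforce OS user. A minimal sketch (user and host names are illustrative):

```bash
# On the master, as the perforce OS user:
ssh-keygen -t ed25519          # accept defaults; leave the passphrase empty
ssh-copy-id perforce@replica   # authorize the key on the replica machine
ssh perforce@replica true      # verify that no password prompt appears
```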

## Recovery Procedures

There are three scenarios that require you to recover server data:

| Metadata | Depot data | Action required |
|----------|------------|-----------------|
| Lost or corrupt | Intact | Recover metadata as described below |
| Intact | Lost or corrupt | Call Perforce Support |
| Lost or corrupt | Lost or corrupt | Recover metadata as described below. Recover the depots volume using your normal backup utilities. |

Restoring the metadata from a backup also optimizes the database files.

### Recovering a master server from a checkpoint and journal(s)

The checkpoint files are stored in the `/p4/instance/checkpoints` directory, and the most recent checkpoint is named `p4_instance.ckp.number.gz`. Recreating up-to-date database files requires the most recent checkpoint from `/p4/instance/checkpoints` and the journal file from `/p4/instance/logs`.

To recover the server database manually, perform the following steps from the root directory of the server (`/p4/instance/root`):

1. Stop the Perforce Server:
   ```bash
   /p4/common/bin/p4master_run instance /p4/instance/bin/p4_instance admin stop
   ```

2. Delete the old database files in the `/p4/instance/root/save` directory.

3. Move the live database files (db.*) to the save directory.

4. Restore from the most recent checkpoint:
   ```bash
   /p4/instance/bin/p4d_instance -r /p4/instance/root -jr -z /p4/instance/checkpoints/p4_instance.ckp.most_recent_#.gz
   ```

5. Replay the transactions that occurred after the checkpoint was created:
   ```bash
   /p4/instance/bin/p4d_instance -r /p4/instance/root -jr /p4/instance/logs/journal
   ```

6. Restart your Perforce server.

7. If the Perforce service starts without errors, delete the old database files from `/p4/instance/root/save`.
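
For example, a concrete version of these steps for instance 1 (checkpoint number 123 is illustrative):

```bash
/p4/common/bin/p4master_run 1 /p4/1/bin/p4_1 admin stop                  # step 1
rm -f /p4/1/root/save/db.*                                               # step 2
mv /p4/1/root/db.* /p4/1/root/save/                                      # step 3
/p4/1/bin/p4d_1 -r /p4/1/root -jr -z /p4/1/checkpoints/p4_1.ckp.123.gz   # step 4
/p4/1/bin/p4d_1 -r /p4/1/root -jr /p4/1/logs/journal                     # step 5
/p4/1/bin/p4d_1_init start                                               # step 6
```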

### Recovering a replica from a checkpoint

This is very similar to creating a replica in the first place as described above. If you have been running the replica crontab commands as suggested, then you will have the latest checkpoints from the master already copied across to the replica.

See the steps in the script `weekly_sync_replica.sh` for details.

Remember to ensure you have logged the service user in to the master server.

### Recovering from a tape backup

This section describes how to recover from a tape or other offline backup to a new server machine if the server machine fails. The tape backup for the server is made from the depots volume. The new server machine must have the same volume layout and user/group settings as the original server.

To recover from a tape backup, perform the following steps:

1. Recover the depots volume from your backup tape.
2. Create the `/p4` convenience directory on the OS volume.
3. Create the directories `/metadata1/p4/instance/root/save` and `/metadata2/p4/instance/offline_db`.
4. Change ownership of these directories to the OS account that runs the Perforce processes.
5. Switch to the Perforce OS account, and create a link in the `/p4` directory to `/depots/p4/instance`.
6. Create a link in the `/p4` directory to `/depots/p4/common`.
7. As a super-user, reinstall and enable the init.d scripts.
8. Find the last available checkpoint under `/p4/instance/checkpoints`.
9. Recover the latest checkpoint:
   ```bash
   /p4/instance/bin/p4d_instance -r /p4/instance/root -jr -z last_ckp_file
   ```
10. Recover the checkpoint to the offline_db directory:
    ```bash
    /p4/instance/bin/p4d_instance -r /p4/instance/offline_db -jr -z last_ckp_file
    ```
11. Reinstall the Perforce server license to the server root directory.
12. Start the perforce service: `/p4/1/bin/p4d_1_init start`
13. Verify that the server instance is running.
14. Reinstall the server crontab or scheduled tasks.
15. Verify the database and versioned files by running the `p4verify.sh` script.

### Failover to a replicated standby machine

See `DR-Failover-Steps-Linux.docx`

---

# Server Maintenance

This section describes typical maintenance tasks and best practices for administering server machines. The directory `$SDP/Maintenance` contains scripts for several common maintenance tasks.

The user running the maintenance scripts must have administrative access to Perforce for most activities. Most of these scripts can be run from a client machine, but it is easiest to run them on the server via crontab.

## Server upgrades

Upgrading a server instance in the SDP framework is a simple process involving a few steps:

1. Download the new p4 and p4d executables for your OS from ftp.perforce.com and place them in `/p4/common/bin`

2. Run:
   ```bash
   /p4/common/bin/upgrade.sh instance
   ```

3. If you are running replicas, upgrade the replicas first, and then the master (outside -> in)

## Database Modifications

Occasionally modifications are made to the Perforce database from one release to another. For example, server upgrades and some recovery procedures modify the database.

When upgrading the server, replaying a journal patch, or performing any activity that modifies the db.* files, you must restart the offline checkpoint process so that the files in the offline_db directory match the ones in the live server directory. The easiest way to restart the offline checkpoint process is to run the `live_checkpoint` script after modifying the db.* files:

```bash
/p4/common/bin/live_checkpoint.sh instance
```

This script makes a new checkpoint of the modified database files in the live root directory, then recovers that checkpoint to the offline_db directory so that both directories are in sync.

## Listing inactive specifications

To list branch specifications, clients, labels and users that have been inactive for a specified number of weeks, run `accessdates.py`. This script generates four text files listing inactive specifications:

- branches.txt
- clients.txt
- labels.txt
- users.txt

## Unloading and Reloading labels

To use the unload and reload commands for archiving clients and labels, you must first create an unload depot using the `p4 depot` command:

```bash
p4master_run instance /p4/instance/bin/p4_instance depot unload
```

Set the depot's type to `unload` and save the form.

After the depot is created, you can use the following command to archive all the clients and labels that have not been accessed since the given date:

```bash
p4master_run instance /p4/instance/bin/p4_instance unload -f -L -z -a -d <date>
```

Users can reload their own clients/labels using the reload command:

```bash
p4 reload -c <clientname>
p4 reload -l <labelname>
```

## Deleting users

To delete users, run `python p4deleteuser.py`, specifying the users to be deleted. The script deletes the users and any workspaces they own, and removes them from any groups they belong to.

To delete all users that have not accessed the server in the past 12 weeks, run `python delusers.py`.

## Listing users

To display a list of users that are in a group but do not have an account on the server, run `python checkusers.py`.

## Group management

To duplicate a specified user's group entries on behalf of another user, run `python mirroraccess.py`:

```bash
python mirroraccess.py sourceuser targetuser
```

To add users to a group:

```bash
python addusertogroup.py user group
```

## Adding users

To add users to a server:

1. Create a text file, such as `users.csv`, containing the users to add:
   ```
   user,email,full name
   ```

2. If you are using LDAP/AD authentication, edit `createusers.py` and comment out:
   ```python
   setpass.setpassword(user[0])
   ```

3. Run `python createusers.py users.csv`.

## Email functions

To send email to all of your Perforce users:

1. Create a file called `message.txt` that contains the body of your message.
2. Run `email.sh`, specifying the email subject in quotes.

To list the email addresses of your Perforce users, run `python make_email_list.py`.

## Workspace management

The form-out trigger `$SDP/Server/Unix/p4/common/bin/triggers/SetWsOptions.py` contains default workspace options, such as `leaveunchanged` instead of `submitunchanged`.

To use the trigger, first copy it to `/p4/common/bin/triggers`. Then modify the OPTIONS variable in the script, providing the set of desired options. Insert an entry in the trigger table:

```
setwsopts form-out client "python /p4/common/bin/triggers/SetWsOptions.py %formfile%"
```

## Removing empty changelists

To delete empty pending changelists, run `python remove_empty_pending_changes.py`.

## Maximizing Server Performance

The following sections provide some guidelines for maximizing the performance of the Perforce Server.

### Optimizing the database files

The Perforce Server's database is composed of b-tree files. The server does not fully rebalance and compress them during normal operation. To optimize the files, you must checkpoint and restore the server. The weekly checkpoint script used as part of the normal server maintenance automates this task.

### Proactive Performance Maintenance

This section describes things that can be done proactively to enhance scalability and maintain performance.

#### Limiting large requests

To prevent large requests from overwhelming the server, you can limit the amount of data and time allowed per query by setting the `maxresults`, `maxscanrows` and `maxlocktime` parameters. As a good starting point:

- Set `maxresults` to slightly larger than the maximum number of files the users need to sync
- Set `maxscanrows` to `maxresults * 3`
- Set `maxlocktime` to 30000 milliseconds

These values must be adjusted up as the size of your server and the number of revisions of the files grow.
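
In Perforce, these limits are applied per group via the `p4 group` command. A hedged sketch of the relevant group-spec fields, using the starting values above (the group and user names are illustrative):

```
Group:        developers
MaxResults:   60000
MaxScanRows:  180000
MaxLockTime:  30000
Users:
        jsmith
```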

#### Offloading remote syncs

For remote users who need to sync large numbers of files, Perforce offers a proxy server. P4P, the Perforce Proxy, is run on a machine that is on the remote users' local network. The Perforce Proxy caches file revisions, serving them to the remote users and diverting that load from the main server.

---

# Tools and Scripts

This section describes the various scripts and files provided as part of the SDP package. To run the main scripts, the machine must have Python 3, and a few scripts require Perl 5. The Maintenance scripts can be run from the server machine or from client machines.

## Shell Script Standards

All SDP shell scripts follow these coding standards for reliability and maintainability:

### Error Handling

Scripts use `set -uo pipefail` at the top for robust error handling:

- `-u`: Treat unset variables as errors, preventing silent failures from typos
- `-o pipefail`: Return the exit status of the first failed command in a pipeline

### Instance Initialization

Scripts use the standard `init_sdp_instance()` function from `backup_functions.sh` for consistent behavior:

```bash
#!/bin/bash
set -uo pipefail

source /p4/common/bin/backup_functions.sh
init_sdp_instance "${1:-}"

source /p4/common/bin/p4_vars "$SDP_INSTANCE"
```

### Modern Bash Syntax

- Double brackets `[[ ]]` for conditionals (safer word splitting and globbing)
- `$()` for command substitution instead of backticks
- Proper variable quoting to handle spaces and special characters
- `source` instead of `.` for sourcing files

### Safe File Operations

- Avoid `rm $(ls ...)` patterns which can fail with spaces in filenames
- Use proper globbing with null checks: `for f in pattern*; do [[ -e "$f" ]] || continue; ...`
- Quote all variable expansions in file operations
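
A minimal sketch combining these patterns (the log path and retention period are illustrative):

```bash
#!/bin/bash
set -uo pipefail

# Delete rotated logs older than 7 days without parsing `ls` output.
for f in /p4/1/logs/log.*; do
    [[ -e "$f" ]] || continue                        # glob matched nothing; skip
    if [[ -n "$(find "$f" -mtime +7 -print)" ]]; then
        rm -f -- "$f"
    fi
done
```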

## Core Scripts

The core SDP scripts are those related to checkpoints and other scheduled operations, and all run from `/p4/common/bin`.

### backup_functions.sh

Core function library sourced by most SDP scripts. This file provides:

- **`init_sdp_instance()`**: Standard initialization for SDP instance scripts. Validates the instance parameter and sets up `SDP_INSTANCE`.
- **`check_vars()` / `set_vars()`**: Environment validation and variable setup
- **`log()` / `die()`**: Logging and error handling functions
- **`ckp_running()` / `ckp_complete()`**: Checkpoint status management
- **`stop_p4d()` / `start_p4d()`**: Service control functions
- **`rsync_with_retry()`**: Reliable file synchronization with retry logic
- **`create_service_user()`**: Service user creation for replicas
- **`remove_old_checkpoints_and_journals()`**: Cleanup functions
- **`rotate_log_file()`**: Log rotation utilities

Location: `/p4/common/bin`
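
A hedged sketch of a small custom script built on these functions (the maintenance action is illustrative; function behavior is as summarized above):

```bash
#!/bin/bash
set -uo pipefail

source /p4/common/bin/backup_functions.sh
init_sdp_instance "${1:-}"                      # validate the instance argument
source /p4/common/bin/p4_vars "$SDP_INSTANCE"

log "Running custom maintenance for instance $SDP_INSTANCE"
"/p4/$SDP_INSTANCE/bin/p4_$SDP_INSTANCE" info > /dev/null || die "p4 info failed"
log "Custom maintenance complete"
```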

### p4_vars

Defines the environment variables required by the Perforce server. This script uses a specified instance number as a basis for setting environment variables. It will look for and open the respective `p4_<instance>.vars` file.

This script also sets server logging options and configurables.

Location: `/p4/common/bin`

### p4_<instance>.vars

Defines the environment variables for a specific instance, including P4PORT etc.

Location: `/p4/common/config`

### p4master_run

This is the wrapper script to other SDP scripts. This ensures that the shell environment is loaded from `p4_vars`. It provides a `-c` flag for silent operation, used in many crontab entries so that email is sent from the scripts themselves.

Location: `/p4/common/bin`

### recreate_offline_db

Recovers the offline_db database from the latest checkpoint and replays any journals since then. If you have a problem with the offline database, run this script first before resorting to `live_checkpoint`, because the latter stops the server while it runs, which can take hours.

Run this script if an error occurs while replaying a journal during weekly or daily checkpoint process.

Location: `/p4/common/bin`

### live_checkpoint

Stops the server, creates a checkpoint from the live database files, and then recovers the offline_db database from that checkpoint, rebalancing and compressing the database files so that they are optimized.

Run this script when creating the server and if an error occurs while replaying a journal during the off-line checkpoint process.

Location: `/p4/common/bin`

### daily_checkpoint

This script is configured to run nightly using crontab or the Windows scheduler. The script truncates the journal, replays it into the offline_db directory, creates a new checkpoint from the resulting database files, then recreates the offline_db directory from the new checkpoint.

Location: `/p4/common/bin`

### recreate_db_checkpoint

Performs the periodic database rotation described in the Typical Backup Procedure. This script stops your server for a few minutes to rotate your database files with those in the offline_db directory.

Location: `/p4/common/bin`

### p4verify.sh / p4verify.py

Verifies the integrity of the depot files. These scripts are run by crontab.

- `p4verify.sh` - Shell script version for basic verification
- `p4verify.py` - Python version with advanced features including parallel verification, pull-queue gating, and journal-lag gating for replicas

Location: `/p4/common/bin`

### p4review.py

Sends out email containing the change descriptions to users who are configured as reviewers for affected files.

Location: `/p4/common/bin`

### p4login

Executes a `p4 login` command, using the password configured in mkdirs.sh and stored in a text file.

Location: `/p4/common/bin`

### p4d_instance_init

Starts the Perforce server. This script sources `/p4/common/bin/p4_vars`, then `/p4/common/bin/p4d_base`.

Location: `/p4/instance/bin`

## More Server Scripts

These scripts are helpful components of the SDP that run on the server.

### upgrade.sh

Runs a typical upgrade process, once new p4 and p4d binaries are available in `/p4/common/bin`. Handles upgrading p4d, p4broker, and p4p binaries with proper version linking.

Location: `/p4/common/bin`

### run_if_master.sh / run_if_replica.sh / run_if_edge.sh

Conditional execution scripts that run a command only if the current server matches the specified type:

- `run_if_master.sh` - Runs command only on commit/master servers
- `run_if_replica.sh` - Runs command only on standby or edge replica servers
- `run_if_edge.sh` - Runs command only on edge servers

Usage:
```bash
/p4/common/bin/run_if_master.sh 1 /p4/common/bin/some_script.sh
```

Location: `/p4/common/bin`

### mkstandby.sh / mkedge.sh

Scripts to create standby and edge server configurations on the master server. These set up the service user, server spec, configurables, and protections for replication.

Location: `/p4/common/bin`

### p4.crontab

Contains crontab entries to run the server maintenance scripts.

Location: `/p4/sdp/Server/Unix/p4/common/etc/cron.d`

## Python Utilities

### sdputils.py

Python utility module for SDP maintenance scripts. Located in `$SDP/Maintenance/sdputils.py`.

Provides:

- **`SDPUtils` class**: Wrapper for p4 commands and SDP configuration
  - `run_p4()`: Safe command execution without shell injection
  - `login()`: Perforce login using SDP password files
  - User, group, label, and change operations
  - Email sending via SMTP

- **`EmailSender` class**: SMTP email functionality with TLS support

- **`P4ConnectionManager`**: P4Python connection management

- **Helper functions**: Argument parsing, file operations, protected user lists

Example usage:

```python
from sdputils import SDPUtils

utils = SDPUtils("1")  # For instance 1
utils.login()
result = utils.run_p4(["info"], capture_output=True)
print(result.stdout)

# Get all users
users = utils.get_all_users()

# Send email
utils.send_email("user@example.com", "Subject", "Body text")
```

## Maintenance Scripts

There are many useful scripts in `/p4/sdp/Maintenance` that are not set up to run automatically as part of the SDP installation. The scripts provide maintenance tools and various reports. Each script has comments at the top indicating what it does and how to run it.

## Other Files

| File | Location | Remarks |
|------|----------|---------|
| dummy_ip.txt | `$SDP/Server/config` | Instructions for using a license on more than one machine |
| backup_functions.sh | `/p4/common/bin` | Unix/Linux only. Core utilities for maintenance scripts |
| p4d_base | `/p4/common/bin` | Unix/Linux only. Template for Unix/Linux init.d scripts |
| change.txt | `$SDP/Maintenance` | Template for new pending changelist |

---

# Appendix A – Directory Structure Configuration Script for Linux/Unix

This appendix describes the steps performed by the `mkdirs.sh` script on Linux/Unix platforms. Please review this appendix carefully before running these steps manually.

Assuming the three-volume configuration described in the Volume Layout and Hardware section is used, the following directories are created (examples use "1" as the server instance number):

| Directory | Remarks |
|-----------|---------|
| `/p4` | Must be under / on the OS volume |
| `/logs/p4/1/bin` | Files in here are generated by the mkdirs.sh script |
| `/depots/p4/1/depots` | |
| `/depots/p4/1/tmp` | |
| `/depots/p4/common/config` | Contains p4_<instance>.vars file |
| `/depots/p4/common/bin` | Files from $SDP/Server/Unix/p4/common/bin |
| `/depots/p4/common/etc` | Contains init.d and cron.d |
| `/logs/p4/1/logs` | |
| `/metadata2/p4/1/db2` | Contains offline copy of main server databases |
| `/metadata1/p4/1/db1/save` | Used only during recreate_db_checkpoint.sh |

Next, `mkdirs.sh` creates the following symlinks in the `/depots/p4/1` directory:

| Physical location | Symlink created |
|-------------------|-----------------|
| `/metadata1/p4/1/db1` | `/p4/1/root` |
| `/metadata2/p4/1/db2` | `/p4/1/offline_db` |
| `/logs/p4/1/logs` | `/p4/1/logs` |
| `/logs/p4/1/bin` | `/p4/1/bin` |
| `/depots/p4/common` | `/p4/common` |
| `/depots/p4/1/depots` | `/p4/1/depots` |
| `/depots/p4/1/checkpoints` | `/p4/1/checkpoints` |
| `/depots/p4/1/tmp` | `/p4/1/tmp` |

Next, `mkdirs.sh` renames the Perforce binaries to include version and build number, and then creates appropriate symlinks.

Example structure for two instances (instance #1 using Perforce 2026.1 and instance #2 using 2026.2):

In `/p4/common/bin`:
```
p4_2026.1_bin   -> p4_2026.1.685046
p4d_2026.1_bin  -> p4d_2026.1.685046
p4_2026.2_bin   -> p4_2026.2.700949
p4d_2026.2_bin  -> p4d_2026.2.700949
p4_1_bin        -> p4_2026.1_bin
p4d_1_bin       -> p4d_2026.1_bin
p4_2_bin        -> p4_2026.2_bin
p4d_2_bin       -> p4d_2026.2_bin
```

In `/p4/1/bin`:
```
p4_1  -> /p4/common/bin/p4_1_bin
p4d_1 -> /p4/common/bin/p4d_1_bin
```

---

# Appendix B – Frequently Asked Questions/Troubleshooting

This appendix lists common questions and problems encountered by SDP users. Do not hesitate to contact consulting@perforce.com if additional assistance is required.

## Journal out of sequence

This error is encountered when the offline and live databases are no longer in sync, and it causes the offline checkpoint process to fail. Because the current scripts replay all outstanding journals, this error is much less likely to occur than in earlier SDP versions.

This error can be fixed by running the `live_checkpoint.sh` script. Alternatively, if you know that the checkpoints created from previous runs of `daily_checkpoint.sh` are correct, then restore the offline_db from the last known good checkpoint.

## Unexpected end of file in replica daily sync

Check the start time and duration of the `daily_checkpoint.sh` cron job on the master. If this overlaps with the start time of the `sync_replica.sh` cron job on a replica, a truncated checkpoint may be rsync'd to the replica and replaying this will result in an error.

Adjust the replica's cron job to start later to resolve this. Default cron job times, as installed by the SDP, are initial estimates and should be adjusted to suit your production environment.