= Server Deployment Package (SDP) for Perforce Helix: SDP User Guide (for Unix)
Perforce Professional Services <consulting@perforce.com>
v2019.3, 2020-08-05
:doctype: book
:toc:

== Preface

The Server Deployment Package (SDP) is the implementation of Perforce's recommendations for operating and managing a production Perforce Helix Core Version Control System. It is intended to provide the Helix Core administration team with tools that help deliver:

* Simplified Management
* High Availability (HA)
* Disaster Recovery (DR)
* Fast and Safe Upgrades
* Production Focus
* Best Practice Configurables
* Optimal Performance, Data Safety, and Simplified Backup

This guide provides instructions for setting up the SDP so that users of Helix Core gain the above benefits.

This guide assumes some familiarity with Perforce and does not duplicate the basic information in the Perforce user documentation.
This document only relates to the Server Deployment Package (SDP); all other Helix Core documentation can be found at: https://www.perforce.com/support/self-service-resources/documentation[Perforce Support Documentation]

*Please Give Us Feedback*

Perforce welcomes feedback from our users. Please send any suggestions for improving this document or the SDP to consulting@perforce.com.

:sectnums:
== Overview

The SDP has four main components:

* Hardware and storage layout recommendations for Perforce.
* Scripts to automate critical maintenance activities.
* Scripts to aid the setup and management of replication (including failover for DR/HA).
* Scripts to assist with routine administration tasks.

Each of these components is covered, in detail, in this guide.

=== Using this Guide

Section 2 covers what you need to know to set up a Helix Core server on a Unix platform.

Section 3 covers backup, restoration, and replication of Helix Core.

Section 4 covers server maintenance, upgrades, and some performance tips.

Section 5 covers the scripts used within the SDP in some more detail.

=== Getting the SDP 

The SDP is downloaded as a single zipped tar file. The latest version can be found at: https://swarm.workshop.perforce.com/projects/perforce-software-sdp/files/downloads

== Setting up the SDP

This section tells you how to configure the SDP to set up a new Helix Core server. While the standard installation of Helix Core is fully covered in the System Administrator's Guide, this section covers the details most relevant to the SDP.

The SDP can be installed on multiple server machines, and each server machine can host one or more Helix Core server instances.

=== Terminology and pre-requisites

[arabic]
. The term _server_ refers to a Helix Core server _instance_, unless otherwise specified.
. The term _metadata_ refers to the Helix Core database files.
. _Instance_: a separate Helix Core instantiation using its own p4d daemon/process.

*Pre-Requisites:*

[arabic]
. The Helix Core binaries (p4d, p4, p4broker, p4p) have been downloaded (see section XXXX).
. _sudo_ access is required.
. A system administrator should be available for configuration of drives / volumes (especially if on a network, SAN, or similar).
. A supported Unix version. Currently these versions are fully supported; for other versions, please speak with Perforce Support:

* Ubuntu 16.04 LTS (xenial)
* Ubuntu 18.04 LTS (bionic)
* CentOS or Red Hat 6.x
* CentOS or Red Hat 7.x
* SUSE Linux Enterprise Server 12

=== Volume Layout and Hardware

As can be expected from a version control system, good disk (storage) management is key to maximising data integrity and performance. Perforce recommends using multiple physical volumes for *each* server instance. Using three or four volumes per instance reduces the chance of hardware failure affecting more than one instance. When naming volumes and directories, the SDP assumes the "hx" prefix is used to indicate Helix volumes (your own naming conventions/standards can be used instead). For optimal performance on UNIX machines, the XFS file system is recommended but not mandated.

* *Perforce metadata (database files), 1 or 2 volumes:*
Use the fastest volume possible, ideally SSD or RAID 1+0 on a dedicated controller with the maximum cache available on it. These volumes default to `/hxmetadata1` and `/hxmetadata2`. It is fine to have both pointing to the same physical volume, e.g. `/hxmetadata`.

* *Journals and logs:*
A fast volume, ideally SSD or RAID 1+0 on its own controller with the standard amount of cache on it.
This volume is normally called `/hxlogs` and should usually be backed up.
If a separate logs volume is not available, put the logs on the `/hxmetadata1` or `/hxmetadata` volume.

* *Depot data, archive files, scripts, and checkpoints:*
Use a large volume, with RAID 5 on its own controller with a standard amount of cache, or a SAN or NAS volume (NFS access is fine).
This is the only volume that *must* be backed up.
The SDP backup scripts place the metadata snapshots on this volume.
This volume is normally called `/hxdepots`.

NOTE: If multiple controllers are not available, put the `hxlogs` and `hxdepots` volumes on the same controller.

NOTE: Do not run anti-virus tools or backup tools against the hxmetadata volume(s) or hxlogs volume(s), because they can interfere with the operation of the Perforce server.

On Unix/Linux platforms, the SDP will create a "convenience" directory containing links to the volumes for each instance, by default named `/p4`. The volume layout is shown in Appendix SDP Package Contents. This convenience directory enables easy access to the different parts of the file system for each instance.

For example:

* `/p4/1/root` contains the database files for instance `1`.

* `/p4/1/logs` contains the log files for instance `1`.

* `/p4/1/bin` contains the binaries and scripts for instance `1`.

* `/p4/common/bin` contains the binaries and scripts common to all instances.
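
The link layout can be illustrated with plain symlinks. Below is a minimal sketch that recreates the pattern in a scratch directory; `mkdirs.sh` builds the real layout, and all paths here are stand-ins:

```shell
#!/bin/sh
# Sketch only: mimic the /p4 convenience-link layout in a scratch area.
# The $BASE subdirectories stand in for the real /hx* volumes.
BASE=$(mktemp -d)
INSTANCE=1

# Stand-ins for the physical volumes
mkdir -p "$BASE/hxmetadata1/p4/$INSTANCE/root" \
         "$BASE/hxlogs/p4/$INSTANCE/logs" \
         "$BASE/hxdepots/p4/$INSTANCE/bin" \
         "$BASE/hxdepots/p4/common/bin"

# The convenience directory: one symlink per instance area
mkdir -p "$BASE/p4/$INSTANCE"
ln -s "$BASE/hxmetadata1/p4/$INSTANCE/root" "$BASE/p4/$INSTANCE/root"
ln -s "$BASE/hxlogs/p4/$INSTANCE/logs"      "$BASE/p4/$INSTANCE/logs"
ln -s "$BASE/hxdepots/p4/$INSTANCE/bin"     "$BASE/p4/$INSTANCE/bin"
ln -s "$BASE/hxdepots/p4/common"            "$BASE/p4/common"

ls -l "$BASE/p4/$INSTANCE"
```

Because everything under `/p4` is a symlink, a volume can be remounted or resized and only the links need updating.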

== Installing the SDP on Unix / Linux

To install Perforce Server and the SDP, perform the steps laid out below:

* Set up a user account, file system, and configuration scripts.
* Run the configuration script.
* Start the server and configure the required file structure for the SDP.

[arabic]
. If it doesn't already exist, create a group called perforce:

    sudo groupadd perforce

. Create a user called perforce and set the user's home directory to `/p4` on a local disk.

    sudo useradd -d /p4 -s /bin/bash -m perforce -g perforce

. Create or mount the server file system volumes (per layout in previous section)

* /hxdepots
* /hxmetadata1
* /hxmetadata2
* /hxlogs

. These directories should be owned by: `perforce:perforce`

    sudo chown -R perforce:perforce /hx*

. Either download the SDP directly or move the previously downloaded version to /hxdepots:

    cd /hxdepots
    wget https://swarm.workshop.perforce.com/downloads/guest/perforce_software/sdp/downloads/sdp.Unix.2019.3.26571.tgz

Or:

    mv sdp.Unix.2019.3.26571.tgz /hxdepots

. Untar and uncompress the downloaded SDP tarball:

    tar -zxvf sdp.Unix.2019.3.26571.tgz

. Set the environment variable SDP; this makes certain later steps easier:

    export SDP=/hxdepots/sdp

. Make the entire $SDP (/hxdepots/sdp) directory writable:

    chmod -R +w $SDP

. Download the appropriate p4, p4d and p4broker binaries for your release and platform (substituting the desired release for `r20.1` below):

    cd $SDP/Server/Unix/p4/common/bin
    wget http://ftp.perforce.com/perforce/r20.1/bin.linux26x86_64/p4
    wget http://ftp.perforce.com/perforce/r20.1/bin.linux26x86_64/p4d
    wget http://ftp.perforce.com/perforce/r20.1/bin.linux26x86_64/p4broker

. Make them executable:

    chmod +x p4*

=== Initial setup

The next steps highlight the setup and configuration of a new Helix Core instance using the SDP.

. cd to $SDP/Server/Unix/setup and copy mkdirs.cfg to an instance-specific version such as mkdirs.1.cfg and edit it; information on the variables can be found in section 2.3.4 of this document.

    Example:

    cd $SDP/Server/Unix/setup
    cp mkdirs.cfg mkdirs.1.cfg
    vi mkdirs.1.cfg

    P4ADMINPASS=********
    MAILFROM=perforceadmin@myDomain.com
    MAILHOST=myMailServer.myDomain.com
    P4DNSNAME=thisMachine.myDomain.com
    P4SERVICEPASS=********
    MASTER_ID=myName.${SDP_INSTANCE}

. As the root user (or using sudo), run:

    mkdirs.sh <instance number/name>

    e.g.

    mkdirs.sh 1

IMPORTANT: If you use a name for the instance you MUST modify the P4PORT variable in `mkdirs.cfg`

NOTE: The instance name must map to the name of the cfg file, or the default file will be used, with potentially unexpected results.

e.g. `mkdirs.sh 1` requires `mkdirs.1.cfg`, and `mkdirs.sh lon` requires `mkdirs.lon.cfg`.
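
The cfg-file lookup rule can be sketched as follows (illustrative only; the real logic lives inside mkdirs.sh):

```shell
#!/bin/sh
# Sketch of the lookup rule: an instance-specific cfg wins, otherwise
# the default mkdirs.cfg is used (possibly not what you intended).
pick_cfg() {
    instance=$1
    if [ -f "mkdirs.$instance.cfg" ]; then
        echo "mkdirs.$instance.cfg"
    else
        echo "mkdirs.cfg"
    fi
}
```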


. Put the Perforce license file for the server into /p4/1/root

NOTE: If you have multiple instances and have been provided with port-specific licenses by Perforce, the appropriate license file must be stored in the appropriate /p4/_instance_/root folder.

IMPORTANT: The license file must be renamed to `license`.

Your Helix Core instance is now set up, but not running. The next steps detail how to make the Helix Core server a system service.

=== For Systems using systemd

RHEL 7, CentOS 7, SuSE 12, Ubuntu (>= v16.04) (and other) distributions utilize *systemd / systemctl* as the mechanism for controlling services, replacing the earlier init process. At present mkdirs.sh does *not* generate the systemd configuration file(s) automatically, but a sample is included in the SDP distribution in ($SDP/Server/Unix/setup/systemd), along with a README.md file that describes the configuration process, including for multiple instances.

We recommend that you give the OS user (perforce) sudo access, so that it can run the commands below prefixing them with sudo.

For a simple installation, run these commands as the root user (or using sudo):

    cp $SDP/Server/Unix/setup/systemd/p4d_1.service /etc/systemd/system/
    systemctl enable p4d_1

The above enables the service for auto-start on boot. The following are example management commands:

    systemctl status p4d_1
    systemctl start p4d_1
    systemctl status p4d_1
    systemctl stop p4d_1
    systemctl status p4d_1

=== For (older) systems still using init.d

The mkdirs.sh script creates a set of startup scripts in the instance-specific bin folder:

    /p4/1/bin/p4d_1_init
    /p4/1/bin/p4broker_1_init [only created if a p4broker executable found]
    /p4/1/bin/p4p_1_init [only created if a p4p executable found]

Run these commands as the root user (or using sudo), repeating for each init script you wish to add:

    cd /etc/init.d
    ln -s /p4/1/bin/p4d_1_init
    chkconfig --add p4d_1_init
    chkconfig p4d_1_init on

== Upgrading an existing SDP installation

If you have an earlier version of the Server Deployment Package (SDP) installed, you'll want to be aware of the new -test flag to the SDP setup script, mkdirs.sh, e.g.

    sudo mkdirs.sh 1 -test

This will install into /tmp and allow you to recursively diff the installed files with your existing installation and manually update as necessary.

See the instructions in the file README.md / README.html in the root of the SDP directory.

== Configuration script

The mkdirs.sh script executed above resides in `$SDP/Server/Unix/setup`. It sets up the basic directory structure used by the SDP. Carefully review the config file mkdirs._instance_.cfg for this script before running it, and adjust the values of the variables as required. The important parameters are:

[cols=",",options="header",]
|===
|Parameter |Description
|DB1 |Name of the hxmetadata1 volume (can be same as DB2)
|DB2 |Name of the hxmetadata2 volume (can be same as DB1)
|DD |Name of the hxdepots volume
|LG |Name of the hxlogs volume
|CN |Volume for /p4/common
|SDP |Path to SDP distribution file tree
|SHAREDDATA |TRUE or FALSE - whether sharing the /hxdepots volume with a replica - normally this is FALSE
|ADMINUSER |P4USER value of a Perforce super user that operates SDP scripts, typically perforce or p4admin.
|OSUSER |Operating system user that will run the Perforce instance, typically perforce.
|OSGROUP |Operating system group that OSUSER belongs to, typically perforce.
|CASE_SENSITIVE |Indicates if server has special case sensitivity settings
|SSL_PREFIX |Set if SSL is required so either "ssl:" or blank for no SSL
|P4ADMINPASS |Password to use for the Perforce superuser account - can be edited later in /p4/common/config/.p4passwd.p4_1.admin
|P4SERVICEPASS |Service user's password for replication - can be edited later, in the same directory as above.
|P4DNSNAME |Fully qualified DNS name of the Perforce master server machine
|===

This config file is fully documented via in-file comments.

=== Use of SSL

As documented in the comments in mkdirs.cfg, if you are planning to use SSL you need to set the value of:

    SSL_PREFIX=ssl:

Then you need to put certificates in `/p4/ssl` after the SDP install, or you can generate a self-signed certificate as follows:

Edit /p4/ssl/config.txt to put in the info for your company. Then run:

    /p4/common/bin/p4master_run <instance> /p4/<instance>/bin/p4d_<instance> -Gc

For example using instance 1:

    /p4/common/bin/p4master_run 1 /p4/1/bin/p4d_1 -Gc

In order to validate that SSL is working correctly:

    source /p4/common/bin/p4_vars 1

Check that P4TRUST is appropriately set in the output of:

    p4 set

Update the P4TRUST values (answer yes when prompted):

    p4 -p ssl:1666 trust

    p4 -p ssl:localhost:1666 trust

Check the stored P4TRUST values:

    p4 trust -l

Check that you are not prompted for trust:

    p4 login
    p4 info

=== Starting/Stopping Perforce Server Products

The SDP includes templates for initialization (start/stop) scripts, "init scripts," for a variety of Perforce server products, including:

* p4d
* p4broker
* p4p
* p4dtg
* p4ftpd
* p4web

The init scripts are named /p4/<instance>/bin/<service>_<instance>_init, e.g. `/p4/1/bin/p4d_1_init` or `/p4/1/bin/p4broker_1_init`.

For example, the init script for starting p4d for Instance 1 is /p4/1/bin/p4d_1_init. All init scripts accept at least start, stop, and status arguments. The perforce user can start p4d by calling:

    p4d_1_init start

And stop it by calling:

    p4d_1_init stop

Once logged into Perforce as a super user, the p4 admin stop command can also be used to stop p4d.

All init scripts can be started as the perforce user or the root user (except p4web, which must start initially as root). The application runs as the perforce user in any case. If the init scripts are configured as system services (non-systemd distributions), they can also be called by the root user using the service command, as in this example to start p4d:

    service p4d_1_init start

Templates for the init scripts used by mkdirs.sh are stored in:

    /p4/common/etc/init.d

There are also basic crontab templates for a Perforce master and replica server in:

    /p4/common/etc/cron.d

These define schedules for routine checkpoint operations, replica status checks, and email reviews.
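
A master-server crontab built from these templates might look like the following. This is a hypothetical excerpt with example times and an example MAILTO address; the shipped templates in /p4/common/etc/cron.d are the authoritative source:

```
MAILTO=perforceadmin@example.com
#
# Nightly checkpoint via the offline database (example schedule)
01 01 * * * /p4/common/bin/run_if_master.sh 1 /p4/common/bin/daily_checkpoint.sh 1
#
# Weekly verification of archive files (example schedule)
01 03 * * 6 /p4/common/bin/run_if_master.sh 1 /p4/common/bin/p4verify.sh 1
```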

The Perforce server should have a super user defined, as named by the ADMINUSER setting in the mkdirs config file.

To configure and start instance 1, follow these steps:

[arabic]
. Start the Perforce server by calling

    p4d_1_init start

. Ensure that the admin user configured above has the correct password defined in `/p4/common/config/.p4passwd.p4_1.admin`, and then run the p4login script (which calls the p4 login command using the `.p4passwd.p4_1.admin` file)

. For new servers, run this script, which sets several recommended configurables:

    $SDP/Server/setup/configure_new_server.sh 1

For existing servers, examine this file, and manually apply the p4 configure command to set configurables on your Perforce server.

Initialize the perforce user's crontab with this command:

    crontab /p4/p4.crontab

Then customise the execution times of the commands within the crontab file to suit your specific installation.

The SDP uses wrapper scripts in the crontab: `run_if_master.sh`, `run_if_edge.sh`, `run_if_replica.sh`. We suggest you ensure these are working as desired, e.g.

    /p4/common/bin/run_if_master.sh 1 echo yes

Any issues with the above indicate incorrect values for $MASTER_ID.
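
The guard these wrappers implement can be sketched like this: compare the machine's server id with the configured MASTER_ID, and only run the wrapped command on a match. This is a simplified illustration, not the actual script (which derives both values from the SDP environment); the file path and id are parameters here purely for demonstration:

```shell
#!/bin/sh
# Simplified sketch of run_if_master.sh-style guarding: run the wrapped
# command only if the server.id contents match the expected master id.
run_if_master() {
    serverid_file=$1
    master_id=$2
    shift 2
    if [ "$(cat "$serverid_file")" = "$master_id" ]; then
        "$@"
    fi
}
```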

To verify that your server installation is working properly:

[arabic]
. Issue the http://www.perforce.com/perforce/doc.current/manuals/cmdref/info.html#1040665[p4 info] command, after setting appropriate environment variables by running:

    source /p4/common/bin/p4_vars 1

. If the server is running, it will display details about its settings.


=== Archiving configuration files

Now that the server is running properly, copy the following configuration files to the hxdepots volume for backup:

* Any init scripts used in /etc/init.d.
* A copy of the crontab file, obtained using crontab -l.
* The scheduler configuration.
* Any other relevant configuration scripts, such as cluster configuration scripts, failover scripts, or disk failover configuration files.

=== Configuring protections, file types, monitoring and security

After the server is installed and configured, most sites will want to modify server permissions (protections) and security settings. Other common configuration steps include modifying the file type map and enabling process monitoring. To configure permissions, perform the following steps:

[arabic]
. To set up protections, issue the `p4 protect` command. The protections table is displayed.

. Delete the following line:

    write user * * //depot/...

. Define protections for your server using groups. Perforce uses an inclusionary model: no access is given by default, and you must specifically grant access to users/groups in the protections table. For performance, it is best to grant users specific access to the areas of the depot that they need, rather than granting everyone open access and then trying to remove access via exclusionary mappings in the protect table, even if that means you end up generating a larger protect table.

. To set the server's default file types, run the p4 typemap command and define typemap entries to override Perforce's default behavior.

. Add any file type entries that are specific to your site. Suggestions:

* For already-compressed file types (such as `.zip`, `.gz`, `.avi`, `.gif`), assign a file type of `binary+Fl` to prevent the server from attempting to compress them again before storing them.
* For regular binary files, add `binary+l` to ensure that only one person at a time can check them out.

A sample file is provided in $SDP/Server/config/typemap

. To make your changelists default to restricted (for high security environments):

    p4 configure set defaultChangeType=restricted

=== Other server configurables

There are various configurables that you should consider setting for your server.

Some suggestions are in the file: `$SDP/Server/setup/configure_new_server.sh`

Review the contents and either apply individual settings manually, or edit the file and apply the newly edited version. If you have any questions, please see the http://www.perforce.com/perforce/r14.2/manuals/cmdref/appendix.configurables.html[configurables section in Appendix of the Command Reference Guide] (get the right version for your server!). You can also contact support regarding questions.

=== General SDP Usage

This section presents an overview of the SDP scripts and tools. Details about the specific scripts are provided in later sections.

==== Linux 

Most scripts and tools reside in `/p4/common/bin`. The `/p4/_instance_/bin` directory (e.g. `/p4/1/bin`) contains scripts or links that are specific to that instance such as wrappers for the p4d executable.

Older versions of the SDP required you to run important administrative commands using the `p4master_run` script, specifying fully qualified paths. This script loads environment information from `/p4/common/bin/p4_vars`, the central environment file of the SDP, ensuring a controlled environment. The `p4_vars` file includes instance-specific environment data from `/p4/common/config/p4_<instance>.vars`, e.g. `/p4/common/config/p4_1.vars`. The `p4master_run` script is still used when running p4 commands against the server, unless you first set up your environment by sourcing p4_vars with the instance as a parameter (for bash shell: `source /p4/common/bin/p4_vars 1`). Administrative scripts, such as `daily_backup.sh`, no longer need to be called with `p4master_run`; they just need the instance number passed to them as a parameter.

When invoking a Perforce command directly on the server machine, use the `p4_<instance>` wrapper that is located in `/p4/<instance>/bin`. This wrapper invokes the correct version of the p4 client for the instance. The use of these wrappers enables easy upgrades, because the wrapper is a link to the correct version of the p4 client. There is a similar wrapper for the p4d executable, called `p4d_<instance>`. This wrapper is important for handling case sensitivity in a consistent manner, e.g. when running a Unix server in case-insensitive mode.

Below are some usage examples for instance 1.

[cols=",",options="header",]
|===
|_Example_ |_Remarks_
|`/p4/common/bin/p4master_run 1 /p4/1/bin/p4_1 admin stop` |Run `p4 admin stop` on instance 1
|`/p4/common/bin/live_checkpoint.sh 1` |Take a checkpoint of the live database on instance 1
|`/p4/common/bin/p4login 1` |Log in as the perforce user (superuser) on instance 1.
|===

Some maintenance scripts can be run from any client workspace, if the user has administrative access to Perforce.

==== Monitoring SDP activities

The important SDP maintenance and backup scripts generate email notifications when they complete.

For further monitoring, you can consider options such as:

* Making the SDP log files available via a password protected HTTP server.
* Directing the SDP notification emails to an automated system that interprets the logs.

=== P4V Performance Settings

These are covered in: https://community.perforce.com/s/article/2878

== Backup, Replication, and Recovery

Perforce servers maintain _metadata_ and _versioned files_. The metadata contains all the information about the files in the depots. Metadata resides in database (db.*) files in the server's root directory (P4ROOT). The versioned files contain the file changes that have been submitted to the server. Versioned files reside on the hxdepots volume.

This section assumes that you understand the basics of Perforce backup and recovery. For more information, consult the Perforce http://www.perforce.com/perforce/doc.current/manuals/p4sag/02_backup.html#1043336[System Administrator's Guide] and the Knowledge Base articles about http://kb.perforce.com/article/1371/perforce-replication[replication].

=== Typical Backup Procedure

The SDP's maintenance scripts, run as `cron` tasks, periodically back up the metadata. The weekly sequence is described below.

*Seven nights a week, perform the following tasks:*

[arabic]
. Truncate the active journal.
. Replay the journal to the offline database. (Refer to Figure 2: SDP Runtime Structure and Volume Layout for more information on the location of the live and offline databases.)
. Create a checkpoint from the offline database.
. Recreate the offline database from the last checkpoint.
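
The nightly sequence above maps onto p4d's journal and checkpoint flags roughly as follows. This is an illustrative sketch, not the SDP implementation (daily_checkpoint.sh adds locking, logging, and error handling); paths follow SDP defaults for instance 1, and the p4d binary is passed in as a parameter so the sequence can be read in isolation:

```shell
#!/bin/sh
# Illustrative sketch of the nightly sequence, parameterised on the p4d
# binary so it can be exercised without a live server.
nightly_checkpoint() {
    p4dbin=$1      # p4d wrapper, e.g. /p4/1/bin/p4d_1
    ckpdir=$2      # checkpoint/journal directory, e.g. /p4/1/checkpoints
    offline=$3     # offline database root, e.g. /p4/1/offline_db
    num=$4         # journal number being rotated
    "$p4dbin" -jj "$ckpdir/p4_1"                                     # 1. truncate (rotate) the live journal
    "$p4dbin" -r "$offline" -jr "$ckpdir/p4_1.jnl.$num"              # 2. replay it into the offline db
    "$p4dbin" -r "$offline" -jd -z "$ckpdir/p4_1.ckp.$((num+1)).gz"  # 3. checkpoint the offline db
    rm -f "$offline"/db.*                                            # 4. recreate the offline db
    "$p4dbin" -r "$offline" -jr -z "$ckpdir/p4_1.ckp.$((num+1)).gz"  #    from the new checkpoint
}
```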

*Once a week, perform the following tasks:*

[arabic]
. Verify all depot files.

*Once every few months, perform the following tasks:*

[arabic]
. Stop the live server.
. Truncate the active journal.
. Replay the journal to the offline database. (Refer to Figure 2: SDP Runtime Structure and Volume Layout for more information on the location of the live and offline databases.)
. Archive the live database.
. Move the offline database to the live database directory.
. Start the live server.
. Create a new checkpoint from the archive of the live database.
. Recreate the offline database from the last checkpoint.
. Verify all depots.

This normal maintenance procedure puts the checkpoints (metadata snapshots) on the hxdepots volume, which contains the versioned files. Backing up the hxdepots volume with a normal backup utility like _robocopy_ or _rsync_ provides you with all the data necessary to recreate the server.

To ensure that the backup does not interfere with the metadata backups (checkpoints), coordinate backup of the hxdepots volume using the SDP maintenance scripts.

The preceding maintenance procedure minimizes server downtime, because checkpoints are created from offline or saved databases while the server is running.

NOTE: With no additional configuration, the normal maintenance prevents loss of more than one day's metadata changes. To provide an optimal http://en.wikipedia.org/wiki/Recovery_point_objective[Recovery Point Objective] (RPO), the SDP provides additional tools for replication.

=== Full One-Way Replication

Perforce supports a full one-way http://www.perforce.com/perforce/doc.current/manuals/p4sag/10_replication.html#1056059[replication] of data from a master server to a replica, including versioned files. The http://www.perforce.com/perforce/doc.current/manuals/cmdref/pull.html#1048868[p4 pull] command is the replication mechanism, and a replica server can be configured to know it is a replica and use the replication command. The p4 pull mechanism requires very little configuration and no additional scripting. As this replication mechanism is simple and effective, we recommend it as the preferred replication technique. Replica servers can also be configured to only contain metadata, which can be useful for reporting or offline checkpointing purposes. See the Distributing Perforce Guide for details on setting up replica servers.

If you wish to use the replica as a read-only server, you can use the http://www.perforce.com/perforce/doc.current/manuals/p4sag/11_broker.html#1056059[P4Broker] to direct read-only commands to the replica or you can use a forwarding replica. The broker can do load balancing to a pool of replicas if you need more than one replica to handle your load. Use of the broker may require use of a http://www.perforce.com/perforce/doc.current/manuals/p4sag/03_superuser.html#1093066[P4AUTH] server for authentication.

==== Replication Setup

To configure a replica server, first configure a machine identically to the master server (at least as regards the link structure such as /p4, /p4/common/bin and /p4/_instance_/*), then install the SDP on it to match the master server installation. Once the machine and SDP install are in place, you need to configure the master server for replication.

Perforce supports many types of replicas suited to a variety of purposes, such as:

* Real-time backup,
* Providing a disaster recovery solution,
* Load distribution to enhance performance,
* Distributed development,
* Dedicated resources for automated systems, such as build servers, and more.

We always recommend first setting up the replica as a read-only replica and ensuring that everything is working. Once that is the case you can easily modify server specs and configurables to change it to a forwarding replica, or an edge server etc.

==== Using mkrep.sh

This script automates the following:

* creation of all the configurables for a replica appropriate to its type (e.g. forwarding-replica, forwarding-standby, edge-server, etc.)
* use of standard naming conventions for server ids, service user names, etc. This simplifies managing multiple server/replica topologies and understanding the intended use of a replica (e.g. that it is intended for HA - high availability)
* creation of the service user account and password, with appropriate permissions
* creation of the server spec
* detailed instructions to follow in order to create a checkpoint and restore it on the replica server

Prerequisites:

* You must have a server spec for your master server, typically defined with Services: "commit-server" ("standard" is fine if no edge servers are to be created, but it is not a problem to use commit-server even without any edge servers) - use the serverid (output of "p4 serverid") as the name.
* You should be running p4d 2018.2 or later
* You should have a configuration file which defines site tags - this is part of naming.

===== Server Types

These are:

* ha - High Availability
* ham - High Availability (Metadata only)
* ro - Read only replica
* rom - Read only replica (Metadata only)
* fr - Forwarding replica
* fs - Forwarding standby
* frm - Forwarding replica (metadata only)
* fsm - Forwarding standby (metadata only)
* ffr - Filtered forwarding replica
* edge - Edge server

Replicas with 'standby' are always unfiltered, and use the 'journalcopy' method of replication, which copies a byte-for-byte verbatim journal file rather than one that is merely logically equivalent.

===== Example

An example run is:

    /p4/common/bin/mkrep.sh -i 1 -t fs -s bos -r replica1 -skip_ssh

The above will:

* Create a replica for instance 1
* Of type `fs` (forwarding standby) - with appropriate configurables
* For site `bos` (e.g. Boston)
* On host name `replica1`
* Without checking that passwordless ssh is possible to the host replica1

The tag has several purposes:

* Short Hand. Each tag represents a combination of 'Type:' and fully qualified 'Services:' values used in server specs.
* Distillation. Only the most useful Type/Services combinations have a shorthand form.
* For forwarding replicas, the name includes the critical distinction of whether any replication filtering is used; as filtering of any kind disqualifies a replica from being a potential failover target. (No such distinction is needed for edge servers, which are filtered by definition).

===== Mkrep.sh output

The output (which is also written to a log file in `/p4/<instance>/logs/mkrep.*`) describes a number of steps required to continue setting up the replica, e.g.

* Rotate the current live journal (to save the configuration parameters required)
* Copy across latest checkpoint and the subsequent rotated journals to the replica host machine
* Restore the copied checkpoints/journals into `/p4/<instance>/root` (and `offline_db`)
* Create a password file for service user
* Create appropriate server.id files
* Login the service user to the upstream server (usually commit server)
* Start the replica process
* Monitor that all is well with `p4 pull -lj`

More details on these steps can be found in the manual process below, as well as in the actual mkrep.sh output.

==== Manual Steps

In the sample below, the replica name will be `replica1`, it is instance 1 on a particular host, the service user name is `svc_replica1`, and the master server's hostname is `svrmaster`.

The following sample commands illustrate how to setup a simple read-only replica.

First we ensure that journalPrefix is set appropriately for the master server (in this case we assume instance 1 rather than a named instance):

    p4 configure set master#journalPrefix=/p4/1/checkpoints/p4_1

Then we set values for the replica itself:

    p4 configure set replica1#P4TARGET=svrmaster:1667
    p4 configure set "replica1#startup.1=pull -i 1"
    p4 configure set "replica1#startup.2=pull -u -i 1"
    p4 configure set "replica1#startup.3=pull -u -i 1"
    p4 configure set "replica1#startup.4=pull -u -i 1"
    p4 configure set "replica1#startup.5=pull -u -i 1"
    p4 configure set "replica1#db.replication=readonly"
    p4 configure set "replica1#lbr.replication=readonly"
    p4 configure set replica1#serviceUser=svc_replica1

Then the following also need to be setup:

* Create a service user for the replica (Add the Type: service field to the user form before saving):

    p4 user -f svc_replica1

* Set the service user's password:

    p4 passwd svc_replica1

* Add the service user svc_replica1 to a specific group ServiceUsers which has a timeout value of unlimited:

    p4 group ServiceUsers

* Make sure the ServiceUsers group has super access in protections table:

    p4 protect

Now that the settings are in the master server, you need to create a checkpoint to seed the replica. Run:

    /p4/common/bin/daily_checkpoint.sh 1

When the checkpoint finishes, rsync the checkpoint plus the versioned files over to the replica:

    rsync -avz /p4/1/checkpoints/p4_1.ckp.###.gz perforce@replica:/p4/1/checkpoints/.

    rsync -avz /p4/1/depots/ perforce@replica:/p4/1/depots/

(Assuming perforce is the OS user name and replica is the name of the replica server in the commands above, and that ### is the checkpoint number created by the daily backup.)
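The `###` checkpoint number can also be discovered programmatically. A minimal sketch, demonstrated in a scratch directory so it makes no assumptions about your layout (point `ckp_dir` at `/p4/1/checkpoints` in real use):

```shell
# Pick the highest-numbered checkpoint matching p4_1.ckp.<N>.gz.
ckp_dir=$(mktemp -d)
touch "$ckp_dir/p4_1.ckp.98.gz" "$ckp_dir/p4_1.ckp.99.gz" "$ckp_dir/p4_1.ckp.100.gz"
# Sort numerically on the third dot-separated field (the checkpoint number).
latest=$(ls "$ckp_dir" | sort -t. -k3,3n | tail -1)
echo "$latest"   # p4_1.ckp.100.gz
```

A plain lexical sort would put `100` before `98`, hence the numeric sort on the checkpoint-number field.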

Once the rsync finishes, go to the replica machine and run the following:

    /p4/1/bin/p4d_1 -r /p4/1/root -jr -z /p4/1/checkpoints/p4_1.ckp.###.gz

Log in as the service user (specifying the appropriate password when prompted), making sure that the login ticket generated is stored in the same place as specified in the P4TICKETS configurable set above for the replica (the following uses bash syntax):

    source /p4/common/bin/p4_vars 1
    /p4/1/bin/p4_1 -p svrmaster:1667 -u svc_replica1 login

Start the replica instance (using either the _init script, or systemctl on systemd systems):

    /p4/1/bin/p4d_1_init start

Now you can log into the replica server itself and run `p4 pull -lj` to check whether replication is working. If you see any numbers with a negative sign in front of them, replication is not working. The most likely cause is that the service user is not logged in. Rerun the login steps above and check again. If replication still is not working, check `/p4/1/logs/log` on the replica, and also look for authentication failures in the log for the master instance on svrmaster.
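The "negative number" check can be automated. The sketch below greps a captured sample line rather than live `p4 pull -lj` output; the sample text is illustrative only, not the exact p4d output format:

```shell
# Flag replication trouble if any negative number appears in the output.
sample='Current replica journal state is: Journal 124, Sequence -1.'
if printf '%s\n' "$sample" | grep -qE '[-][0-9]'; then
  status=broken
else
  status=ok
fi
echo "$status"   # broken
```

In production you would pipe the real command output in instead of `$sample`, and alert (e.g. email) when the status is broken.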

The final steps for setting up the replica server are to configure ssh trust and set up the crontab.

To configure the ssh trust:

On both the master and replica servers, go to the perforce user's home directory and run:

    ssh-keygen -t rsa

Just use the defaults for the questions it asks.

Now from the master, run:

    rsync -avz ~/.ssh/id_rsa.pub perforce@replica:~/.ssh/authorized_keys

and from the replica, run:

    rsync -avz ~/.ssh/id_rsa.pub perforce@master:~/.ssh/authorized_keys

The crontab (`/p4/p4.crontab`) contains several lines which are prefixed by `/p4/common/bin/run_if_replica.sh`, `run_if_edge.sh`, or `run_if_master.sh`.

These can be tested to make sure all is valid with:

    /p4/common/bin/run_if_replica.sh 1 echo yes

If "yes" is output, then the SDP considers the current hostname with instance 1 to be a replica server. The same checks apply for edge and master servers.

The log files will be in `/p4/1/logs`, so you can check for any errors from each script.
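The idea behind these guard scripts can be illustrated as follows. This is a simplified sketch, not the actual SDP implementation: it runs a command only if the local server.id names a replica, using a temp file as a stand-in for `/p4/1/root/server.id`:

```shell
# Stand-in for /p4/1/root/server.id (the real file holds this host's ServerID).
server_id_file=$(mktemp)
echo "replica1" > "$server_id_file"
# Treat any ServerID starting with "replica" as a replica role.
case "$(cat "$server_id_file")" in
  replica*) is_replica=yes ;;
  *)        is_replica=no ;;
esac
echo "$is_replica"   # yes
rm -f "$server_id_file"
```

The real scripts use the SDP's server spec naming conventions to decide the role, so the same crontab file can be deployed unchanged to master, replica, and edge machines.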

=== Recovery Procedures

There are three scenarios that require you to recover server data:

[cols=",,",options="header",]
|===
|Metadata |Depotdata |Action required
|lost or corrupt |Intact |Recover metadata as described below
|Intact |lost or corrupt |Call Perforce Support
|lost or corrupt |lost or corrupt a|
Recover metadata as described below.

Recover the hxdepots volume using your normal backup utilities.

|===

Restoring the metadata from a backup also optimizes the database files.

==== Recovering a master server from a checkpoint and journal(s)

The checkpoint files are stored in the /p4/_instance_/checkpoints directory, and the most recent checkpoint is named p4__instance_.ckp._number_.gz. Recreating up-to-date database files requires the most recent checkpoint, from /p4/_instance_/checkpoints, and the journal file from /p4/_instance_/logs.

To recover the server database manually, perform the following steps from the root directory of the server (/p4/_instance_/root).

Assuming instance 1:

[arabic]
. Stop the Perforce Server by issuing the following command:

    /p4/1/bin/p4_1 admin stop

. Delete the old database files in the `/p4/1/root/save` directory

. Move the live database files (db.*) to the save directory.

. Use the following command to restore from the most recent checkpoint.

    /p4/1/bin/p4d_1 -r /p4/1/root -jr -z /p4/1/checkpoints/p4_1.ckp.####.gz

. To replay the transactions that occurred after the checkpoint was created, issue the following command:

    /p4/1/bin/p4d_1 -r /p4/1/root -jr /p4/1/logs/journal

[arabic, start=6]
. Restart your Perforce server.

If the Perforce service starts without errors, delete the old database files from `/p4/instance/root/save`.

If problems are reported when you attempt to recover from the most recent checkpoint, try recovering from the preceding checkpoint and journal. If you are successful, replay the subsequent journal. 
If the journals are corrupted, contact mailto:support@perforce.com[Perforce Technical Support]. 
For full details about backup and recovery, refer to the http://perforce.com/perforce/doc.current/manuals/p4sag/02_backup.html#1043336[Perforce System Administrator's Guide].

==== Recovering a replica from a checkpoint

This is very similar to creating a replica in the first place as described above.

If you have been running the replica crontab commands as suggested, then you will have the latest checkpoints from the master already copied across to the replica.

See the steps in the script weekly_sync_replica.sh for details (note that it deletes the state and rdb.lbr files from the replica root directory so that the replica starts replicating from the start of a journal).

Remember to ensure you have logged the service user in to the master server (and that the ticket is stored in the correct location as described when setting up the replica).

==== Recovering from a tape backup

This section describes how to recover from a tape or other offline backup to a new server machine if the server machine fails. The tape backup for the server is made from the hxdepots volume. The new server machine must have the same volume layout and user/group settings as the original server. In other words, the new server must be as identical as possible to the server that failed.

To recover from a tape backup, perform the following steps.

[arabic]
. Recover the hxdepots volume from your backup tape.
. Create the /p4 convenience directory on the OS volume.
. Create the directories /hxmetadata/p4/_instance_/root/save and /hxmetadata/p4/_instance_/offline_db.
. Change ownership of these directories to the OS account that runs the Perforce processes.
. Switch to the Perforce OS account, and create a link in the /p4 directory to /hxdepots/p4/_instance_.
. Create a link in the /p4 directory to /hxdepots/p4/common.
. As a super-user, reinstall and enable the init.d scripts
. Find the last available checkpoint, under /p4/_instance_/checkpoints
. Recover the latest checkpoint by running:

    /p4/_instance_/bin/p4d__instance_ -r /p4/_instance_/root -jr -z _last_ckp_file_

[arabic, start=10]
. Recover the checkpoint to the offline_db directory:

    /p4/_instance_/bin/p4d__instance_ -r /p4/_instance_/offline_db -jr -z _last_ckp_file_

[arabic, start=11]
. Reinstall the Perforce server license to the server root directory.
. Start the Perforce service by running `/p4/1/bin/p4d_1_init start`
. Verify that the server instance is running.
. Reinstall the server crontab or scheduled tasks.
. Perform any other initial server machine configuration.
. Verify the database and versioned files by running the p4verify.sh script. Note that files using the http://perforce.com/perforce/doc.092/manuals/cmdref/o.ftypes.html#1040647[+k] file type modifier might be reported as BAD! after being moved. Contact Perforce Technical Support for assistance in determining if these files are actually corrupt.

==== Failover to a replicated standby machine

See DR-Failover-Steps-Linux.docx

== Server Maintenance

This section describes typical maintenance tasks and best practices for administering server machines.

=== Server upgrades

Upgrading a server instance in the SDP framework is a simple process involving a few steps.

* Download the new p4 and p4d executables for your OS from ftp://ftp.perforce.com[ftp.perforce.com] and place them in /p4/common/bin
* Run:

    /p4/common/bin/upgrade.sh _instance_

* If you are running replicas, upgrade the replicas first, and then the master (outside -> in)

=== Database Modifications

Occasionally modifications are made to the Perforce database from one release to another. For example, server upgrades and some recovery procedures modify the database.

When upgrading the server, replaying a journal patch, or performing any activity that modifies the db.* files, you must restart the offline checkpoint process so that the files in the offline_db directory match the ones in the live server directory. The easiest way to restart the offline checkpoint process is to run the live_checkpoint script after modifying the db.* files, as follows:

    /p4/common/bin/live_checkpoint.sh 1

This script makes a new checkpoint of the modifed database files in the live root directory, then recovers that checkpoint to the offline_db directory so that both directories are in sync. This script can also be used anytime to create a checkpoint of the live database.

This command should be run when an error occurs during offline checkpointing. It restarts the offline checkpoint process from the live database files to bring the offline copy back in sync. If the live checkpoint script fails, contact Perforce Consulting at consulting@perforce.com.

== Maximizing Server Performance

The following sections provide some guidelines for maximizing the performance of the Perforce Server, using tools provided by the SDP. More information on this topic can be found in the http://www.perforce.com/perforce/doc.current/manuals/p4sag/07_perftune.html#1044128[System Administrator's Guide] and in the http://kb.perforce.com/article/762/performance-tuning[Knowledge Base].

=== Optimizing the database files

The Perforce Server's database is composed of b-tree files. The server does not fully rebalance and compress them during normal operation. To optimize the files, you must checkpoint and restore the server. This normally only needs to be done every few months.

To minimize the size of backup files and maximize server performance, minimize the size of the db.have and db.label files. The maintenance scripts supplied with the SDP help achieve this goal. For best server performance, run these scripts weekly via /p4/sdp/Maintenance/maintenance

=== Proactive Performance Maintenance

This section describes some things that can be done proactively to enhance scalability and maintain performance.

==== Limiting large requests

To prevent large requests from overwhelming the server, you can limit the amount of data and time allowed per query by setting the maxresults, maxscanrows and maxlocktime parameters to the lowest setting that does not interfere with normal daily activities. As a good starting point, set maxscanrows to maxresults * 3; set maxresults to slightly larger than the maximum number of files the users need to be able to sync to do their work; and set maxlocktime to 30000 milliseconds. These values must be adjusted up as the size of your server and the number of revisions of the files grow. To simplify administration, assign limits to groups rather than individual users.
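For example, limits could be assigned via a group spec along these lines. The group name, user name, and values here are illustrative starting points only, following the maxscanrows = maxresults * 3 rule of thumb above:

```
Group:       Developers
MaxResults:  60000
MaxScanRows: 180000
MaxLockTime: 30000
Users:
        jsmith
```

Edit the spec with `p4 group Developers`; users inherit the most permissive limits of the groups they belong to.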

To prevent users from inadvertently accessing large numbers of files, define their client view to be as narrow as possible, considering the requirements of their work. Similarly, limit users' access in the protections table to the smallest number of directories that are required for them to do their job.

Finally, keep triggers simple. Complex triggers increase load on the server.

==== Offloading remote syncs

For remote users who need to sync large numbers of files, Perforce offers a http://perforce.com/perforce/doc.current/manuals/p4sag/09_p4p.html#1056059[proxy server]. P4P, the Perforce Proxy, is run on a machine that is on the remote users' local network. The Perforce Proxy caches file revisions, serving them to the remote users and diverting that load from the main server.

P4P is included in the Windows installer. To launch P4P on Unix machines, copy the /p4/common/etc/init.d/p4p_1_init script to /p4/1/bin/p4p_1_init. Then review and customize the script to specify your server volume names and directories.

P4P does not require special hardware, but it can be quite CPU-intensive when handling binary files, which are expensive to attempt to compress. It does not need to be backed up. If the P4P instance isn't working, users can switch their port back to the main server and continue working until the P4P instance is fixed.

== Tools and Scripts

This section describes the various scripts and files provided as part of the SDP package. To run the main scripts, the machine must have Python 2.7, and a few scripts require Perl 5. The Maintenance scripts can be run from the server machine or from client machines.

The following sections describe each script.

=== Core Scripts

The core SDP scripts are those related to checkpoints and other scheduled operations, and all run from /p4/common/bin.

==== p4_vars

Defines the environment variables required by the Perforce server. This script uses a specified instance number as a basis for setting environment variables. It will look for and open the respective p4_<instance>.vars file (see next section).

This script also sets server logging options and configurables.

*Location*: /p4/common/bin

==== p4_<instance>.vars

Defines the environment variables for a specific instance, including P4PORT etc.

*Location*: /p4/common/config

==== p4master_run

This is the wrapper script for other SDP scripts. It ensures that the shell environment is loaded from p4_vars. It provides a '-c' flag for silent operation, used in many crontab entries so that email is sent from the scripts themselves.

*Location*: /p4/common/bin

==== recreate_offline_db

Recovers the offline_db database from the latest checkpoint and replays any journals since then. If you have a problem with the offline database, it is worth running this script before resorting to live_checkpoint, as the latter stops the server while it runs, which can take hours.

Run this script if an error occurs while replaying a journal during weekly or daily checkpoint process.

*Location*: /p4/common/bin

==== live_checkpoint

Stops the server, creates a checkpoint from the live database files, recovers the offline_db database from that checkpoint to rebalance and compress the files, then recovers the checkpoint in the offline_db directory to ensure that the database files are optimized.

Run this script when creating the server and if an error occurs while replaying a journal during the offline checkpoint process. Be aware that it locks the live database for the duration of the checkpoint, which can take hours.

*Location*: /p4/common/bin

==== daily_checkpoint

This script is configured to run six days a week using crontab or the Windows scheduler. The script truncates the journal, replays it into the offline_db directory, creates a new checkpoint from the resulting database files, then recreates the offline_db directory from the new checkpoint.

This procedure rebalances and compresses the database files in the offline_db directory, which are rotated into the live database directory once a week by the weekly_checkpoint script.

*Location*: /p4/common/bin

==== p4verify

Verifies the integrity of the depot files. This script is run by crontab on a regular basis.

*Location*: /p4/common/bin

==== p4review.py

Sends out email containing the change descriptions to users who are configured as reviewers for affected files (done by setting the Reviews: field in the user specification). This script is a version of the p4review.py script that is available on the Perforce Web site, but has been modified to use the server instance number. It relies on a configuration file in /p4/common/config, called p4_<instance>.p4review.cfg. On Windows, a driver called run_p4review.cmd, located in the same directory, allows you to run the review daemon through the http://en.wikipedia.org/wiki/Task_Scheduler[Windows scheduler].

This script is not required if you have installed Swarm, which also performs notification functions and is easier for users to configure.

*Location*: /p4/common/bin

==== p4login

Executes a p4 login command, using the administration password configured in mkdirs.cfg and stored in a text file: /p4/common/config/.p4passwd

*Location*: /p4/common/bin

==== p4d_<instance>_init

Starts the Perforce server. Do not use this script directly if you have configured systemd (systemctl) services, as on Linux distributions such as CentOS 7.x.

This script sources `/p4/common/bin/p4_vars`, then `/p4/common/bin/p4d_base`.

*Note*: In clustered environments, put this script in the `/p4/_instance_/bin` directory and configure your cluster software to launch it from this location.

*Location*: `/p4/_instance_/bin` with a symlink to it from `/etc/init.d` (or a copy in `/etc/init.d` in clustered environments). Templates for init scripts for other Perforce server products exist in `/p4/common/etc/init.d`.

=== More Server Scripts

These scripts are helpful components of the SDP that run on the server, but are not included in the default crontab schedules.

==== upgrade.sh

Runs a typical upgrade process, once new p4 and p4d binaries are available in `/p4/common/bin`.

This script will:

* Rotate the journal (for clean recovery point)
* Apply all necessary journals to offline_db
* Stop the server
* Create an appropriately versioned link for new p4/p4d/p4broker etc
* Link those into `/p4/1/bin` (per instance)
* Run `p4d -xu` on live and offline_db to perform database upgrades
* Restart server

*Location*: /p4/common/bin

==== p4.crontab

Contains crontab entries to run the server maintenance scripts. The p4.crontab.solaris script is for Solaris.

*Location*: /p4/sdp/Server/Unix/p4/common/etc/cron.d

=== Other Files

The following table describes other files in the SDP distribution. These files are usually not invoked directly by you; rather, they are invoked by higher-level scripts.

[cols=",,",options="header",]
|===
|File |Location |Remarks
|dummy_ip.txt |$SDP/Server/config |Instructions for using a license on more than one machine. Typically used to enable a standby server. Contact mailto:licensing@perforce.com[Perforce Licensing] before using.
|backup_functions.sh |/p4/common/bin |Unix/Linux only. Utilities for maintenance scripts.
|p4admin_verify_client.bat |/p4/common/bin |Unix/Linux only. Used by p4verify.sh.
|p4d_base |/p4/common/bin |Unix/Linux only. Template for Unix/Linux init.d scripts.
|template.(pl|sh) |/p4/common/bin |Sample script templates for Bash and Perl scripts.
|mirror_ldap_groups.pl |/p4/common/bin |Script to mirror selected groups from an LDAP server (e.g. Active Directory).
|Perl Modules (*.pm files) |/p4/common/lib |Modules used by some Perl scripts.
|change.txt |$SDP/Maintenance |Template for new pending changelist.
|===

==== SDP Package Contents

The directory structure of the SDP is shown below in Figure 1 - SDP Package Directory Structure. This includes all SDP files, including documentation and maintenance scripts. A subset of these files is deployed to server machines during the installation process.

    sdp
        doc
        Server (Core SDP Files)
            Unix
                setup (unix specific setup)
                p4
                    common
                        bin (Backup scripts, etc)
                            triggers (Example triggers)
                        config
                        etc
                            cron.d
                            init.d
                            lib
                            test
        setup (cross platform setup - typemap, configure, etc)
        test (automated test scripts)

Figure 1 - SDP Package Directory Structure

==== Volume Layout and Server Planning

Figure 2: SDP Runtime Structure and Volume Layout, viewed from the top down, displays a Perforce _application_ administrator's view of the system, which shows how to navigate the directory structure to find databases, log files, and versioned files in the depots. Viewed from the bottom up, it displays a Perforce _system_ administrator's view, emphasizing the physical volume where Perforce data is stored.

===== Memory and CPU

Make sure the server has enough memory to cache the *db.rev* database file and to prevent the server from paging during user queries. Maximum performance is obtained if the server has enough memory to keep all of the database files in memory.

Below are some approximate guidelines for allocating memory.

* 1.5 kilobytes of RAM per file stored in the server.
* 32 MB of RAM per user.
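As a worked example of these guidelines (the file and user counts below are illustrative only):

```shell
files=10000000                      # e.g. 10 million files stored in the server
users=100
file_kb=$(( files * 3 / 2 ))        # 1.5 KB of RAM per file, in KB
total_mb=$(( file_kb / 1024 + users * 32 ))   # plus 32 MB per user
echo "${total_mb} MB"               # 17848 MB, i.e. roughly 18 GB of RAM
```

This is a floor, not a target: more memory lets the OS cache more of the db.* files, which directly improves query performance.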

Use the fastest processors available, with the fastest available bus speed. Faster processors are typically more desirable than a greater number of cores, since quick bursts of computational speed matter more to Perforce's performance than the number of processors. Have a minimum of two processors so that the offline checkpoint and backup processes do not interfere with your Perforce server. There are log analysis options to diagnose underperforming servers and improve things (contact support/consulting for details).

==== Directory Structure Configuration Script for Linux/Unix

This section describes the steps performed by the mkdirs.sh script on Linux/Unix platforms. Please review it carefully before running these steps manually. Assuming the three-volume configuration described in the Volume Layout and Hardware section is used, the following directories are created. The examples below use "1" as the server instance number.

[cols=",",options="header",]
|===
|_Directory_ |_Remarks_
|`/p4` |Must be under root (`/`) on the OS volume
|`/hxdepots/p4/1/bin` |Files in here are generated by the mkdirs.sh script.
|`/hxdepots/p4/1/depots` |
|`/hxdepots/p4/1/tmp` |
|`/hxdepots/p4/common/config` |Contains p4_<instance>.vars file, e.g. `p4_1.vars`
|`/hxdepots/p4/common/bin` |Files from `$SDP/Server/Unix/p4/common/bin`.
|`/hxdepots/p4/common/etc` |Contains `init.d` and `cron.d`.
|`/hxlogs/p4/1/logs/old` |
|`/hxmetadata2/p4/1/db2` |Contains offline copy of main server databases (linked by `/p4/1/offline_db`).
|`/hxmetadata1/p4/1/db1/save` |Used only during running of `recreate_db_checkpoint.sh` for extra redundancy.
|===

Next, `mkdirs.sh` creates the following symlinks in the `/hxdepots/p4/1` directory:

[cols=",,",options="header",]
|===
|*_Link source_* |*_Link target_* |*_Command_*
|`/hxmetadata1/p4/1/db1` |`/p4/1/root` |`ln -s /hxmetadata1/p4/1/db1 /p4/1/root`
|`/hxmetadata2/p4/1/db2` |`/p4/1/offline_db` |`ln -s /hxmetadata2/p4/1/db2 /p4/1/offline_db`
|`/hxlogs/p4/1/logs` |`/p4/1/logs` |`ln -s /hxlogs/p4/1/logs /p4/1/logs`
|===

Then these symlinks are created in the /p4 directory:

[cols=",,",options="header",]
|===
|*_Link source_* |*_Link target_* |*_Command_*
|`/hxdepots/p4/1` |`/p4/1` |`ln -s /hxdepots/p4/1 /p4/1`
|`/hxdepots/p4/common` |`/p4/common` |`ln -s /hxdepots/p4/common /p4/common`
|===
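The effect of these convenience links can be demonstrated in a scratch directory (paths are relocated under a temp root purely for illustration; on a real server they hang off `/`):

```shell
# Recreate the /p4 -> /hxdepots link structure under a temp root.
root=$(mktemp -d)
mkdir -p "$root/hxdepots/p4/1" "$root/hxdepots/p4/common" "$root/p4"
ln -s "$root/hxdepots/p4/1"      "$root/p4/1"
ln -s "$root/hxdepots/p4/common" "$root/p4/common"
readlink "$root/p4/1"    # prints the hxdepots path the link resolves to
```

This is why administrators can work entirely in terms of `/p4/1/...` paths while the data actually lives on the hxdepots volume.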

Next, mkdirs.sh renames the Perforce binaries to include version and build number, and then creates appropriate symlinks.

The structure is shown in this example, illustrating values for two instances, with instance #1 using Perforce 2018.1 and instance #2 using 2018.2. 

In `/p4/common/bin`:

    p4_2018.1.685046
    p4_2018.1_bin -> p4_2018.1.685046
    p4d_2018.1.685046
    p4d_2018.1_bin -> p4d_2018.1.685046
    p4d_2018.2.700949
    p4_2018.2_bin -> p4_2018.2.700949
    p4d_2018.2_bin -> p4d_2018.2.700949
    p4_1_bin -> p4_2018.1_bin
    p4d_1_bin -> p4d_2018.1_bin
    p4_2_bin -> p4_2018.2_bin
    p4d_2_bin -> p4d_2018.2_bin

In /p4/1/bin:

    p4_1 -> /p4/common/bin/p4_1_bin
    p4d_1 -> /p4/common/bin/p4d_1_bin

In /p4/2/bin:

    p4_2 -> /p4/common/bin/p4_2_bin
    p4d_2 -> /p4/common/bin/p4d_2_bin

==== Frequently Asked Questions/Troubleshooting

This appendix lists common questions and problems encountered by SDP users. Do not hesitate to contact consulting@perforce.com if additional assistance is required.

===== Journal out of sequence

This error is encountered when the offline and live databases are no longer in sync, and will cause the offline checkpoint process to fail. Because the scripts will replay all outstanding journals, this error is much less likely to occur. This error can be fixed by running the live_checkpoint.sh script. Alternatively, if you know that the checkpoints created from previous runs of daily_checkpoint.sh are correct, then restore the offline_db from the last known good checkpoint.

===== Unexpected end of file in replica daily sync

Check the start time and duration of the daily_checkpoint.sh cron job on the master. If this overlaps with the start time of the sync_replica.sh cron job on a replica, a truncated checkpoint may be rsync'd to the replica and replaying this will result in an error.

Adjust the replica's cron job to start later to resolve this.

Default cron job times, as installed by the SDP are initial estimates, and should be adjusted to suit your production environment.
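For instance, the relevant crontab entries might be staggered along these lines. The times and exact script arguments here are illustrative, not the shipped SDP defaults; the point is only that the replica's sync begins well after the master's checkpoint normally completes:

```
# On the master: daily checkpoint starts at 01:01
01 01 * * 1-6 /p4/common/bin/run_if_master.sh 1 /p4/common/bin/daily_checkpoint.sh 1
# On the replica: sync starts at 03:01, after the checkpoint has finished
01 03 * * 1-6 /p4/common/bin/run_if_replica.sh 1 /p4/common/bin/sync_replica.sh 1
```

Measure how long the master's checkpoint actually takes in your environment and leave a comfortable margin before the replica job starts.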
# Change User Description Committed
#138 31463 C. Thomas Tyler Enhanced mkdirs.sh to support complex passwwords.

The method uses base64 to encode/decode password strings, thus
allowing password strings to be complex, e.g. something like:
Complex$u.t$a!#%3$CRx

Deprecated the P4SERVICEPASS variable and removed doc references
to it.
#137 31440 C. Thomas Tyler Fixed typo.
#136 31325 C. Thomas Tyler Minor doc tweaks to add more detail on sync_replica.sh and
keep_offline_db_current.sh.
#135 31270 Robert Cowham Change tuned profile to do bootloader stuff if required
#134 31228 C. Thomas Tyler Fixed 'p4login1' typo in docs.
#133 31185 C. Thomas Tyler Added more detailed information on The Site Directory (/p4/common/site).

Adapted Makefile to generate HTML from Markdown for this special case
where the /p4/common/site/ReadMe.md file deployed with the SDP in a
runtime environment in the site folder generates an HTML elsewhere,
in the SDP doc folder.

Fixes SDP-722.
#132 31063 C. Thomas Tyler Updated revnumber and date for release.
#131 31048 C. Thomas Tyler Adjustment to create `/opt/perforce/helix-sdp/backup` directory.
#130 31046 C. Thomas Tyler Update to SDP Guide for SDP 2024.2 release.

This includes new sections and refactoring.
#129 30989 C. Thomas Tyler Take 2: Fixed broken internal doc reference.
#128 30988 C. Thomas Tyler Fixed broken internal doc reference.
#127 30938 Robert Cowham Minor clarifications for getting started using install_sdp.sh script
Updated some links to new Helix Core doc locations.
#126 30937 Robert Cowham Update p4review2.py to work with Python3
Add basic test harness.
Delete p4review.py which is Python2 and update docs.
#125 30926 C. Thomas Tyler Updated version for release.
#124 30910 C. Thomas Tyler Updated rev{number,date} fields in adoc files for release.
#123 30837 C. Thomas Tyler Added ref to new storage doc.
#122 30835 C. Thomas Tyler Adapted Server Spec Naming Standard section detailing the ServerID of the
commit server to the defacto standard already used in HRA. Changed from:

{commit|master}[.<SDPInstance>[.<OrgName>]]

to:

{commit|master}[.<OrgName>[.<SDPInstance>]]

Various typo fixes and minor changes in SDP Guide.

Updated consulting email address (now consulting-helix-core@perforce.com)
in various files.
#121 30782 C. Thomas Tyler Added new install_sdp.sh script and supporting documentation.

The new install_sdp.sh makes SDP independent of the separate
Helix Installer software (the reset_sdp.sh script).  The new
script greatly improves the installation experience for new
server machines. It is ground up rewrite of the reset_sdp.sh
script. The new script preserves the desired behaviors of the
original Helix Installer script, but is focused on the use
case of a fresh install on a new server machine. With this focus,
the scripts does not have any "reset" logic, making it completely
safe.

Added various files and functionalityfrom Helix Installer into SDP.
* Added firewalld templates to SDP, and added ufw support.
* Improved sudoers generation.
* Added bash shell templates.

This script also installs in the coming SDP Package structure.
New installs use a modified SDP structure that makes it so the
/p4/sdp and /p4/common now point to folders on the local OS
volume rather than the /hxepots volume. The /hxdepots volume,
which is often NFS mounted, is still used for depots and
checkpoints, and for backups.

The new structure uses a new /opt/perforce/helix-sdp structure
under which /p4/sdp and /p4/common point. This structure also
contains the expaneded SDP tarball, downloads, helix_binaries,
etc.

This change represents the first of 3-phase rollout of the new
package structure. In this first phase, the "silent beta" phase,
the new structure is used for new installations only. This phase
requires no changes to released SDP scripts except for mkdirs.sh,
and even that script remains backward-compatible with the old
structure if used independently of install_sdp.sh.  If used with
install_sdp.sh, the new structure is used.

In the second phase (targeted for SPD 2024.2 release), the
sdp_upgrade.sh script will convert existing installations to
the new structure.

In the third phase (targeted for SDP 2025.x), this script will
be incorporated into OS pacakge installations for the helix-sdp
package.

Perforce internal wikis have more detail on this change.

#review-30783
#120 30661 Robert Cowham Exapand description for recreate_offline_db.sh
#119 30656 Robert Cowham Tweak xrefs from failover guide and sdp guide.
#118 30608 C. Thomas Tyler Fixed doc typo in triggers table call; trigger type should be
'change-submit', not 'submit-change'.
#117 30606 C. Thomas Tyler Updated content related to to perforce-p4python3 package.

#review-30607
#116 30531 C. Thomas Tyler Merge down from main to dev with:
p4 merge -b perforce_software-sdp-dev
#115 30516 C. Thomas Tyler Doc corrections and clarifications.
#114 30440 Robert Cowham Add a couple of emphases...
#113 30367 C. Thomas Tyler Updated Server Spec Naming Standard to account for allowing
'commit' to be used as a synonym for 'master', and also allowing
for appending an optional '<OrgName>'.
#112 30335 C. Thomas Tyler Corrected doc typo.
#111 30285 C. Thomas Tyler Updated SDP Guide for Unix to include raw perforce_suoders.t file
for better accuracy and easier update.

Added a copy of perforce_sudoers.t from Helix Installer.  For immediate
purposes, this is to allow this file to be included in SDP
documentation.  However, this change is also part of a larger goal
to move extensive Helix Installer functionality into the SDP.
#110 30205 C. Thomas Tyler Refactored Terminology so we can reference indiviual terms with direct URLs.
#109 30164 Mark Zinthefer moved script usage for mkdirs and mkrep to appendix
#108 30161 Mark Zinthefer Adding section on server maintenance.
#107 30031 C. Thomas Tyler Added doc for ccheck.sh, keep_offline_db_current.sh.
#106 30008 C. Thomas Tyler Doc change and Non-functional updates to CheckCaseTrigger.py:
* Bumped version number for recent changes.
* Fixed doc inconsistencies.

Fixes: SDP-1035

#review-30009
#105 30000 C. Thomas Tyler Refined Release Notes and top-level README.md file in preparation
for coming 2023.2 release.

Adjusted Makefile in doc directory to also generate top-level
README.html from top-level README.md file so that the HTML file is
reliably updated in the SDP release process.

Updated :revnumber: and :revdate: docs in AsciiDoc files to
indicate that the are still current.

Avoiding regen of ReleaseNotes.pdf binary file since that will
need at least one more update before shipping SDP 2023.2.
#104 29923 C. Thomas Tyler Updated HTML hyperlinks to use 'portal.perforce.com'.

This replaces currently broken links to 'answers.perforce.com' and
currently redirected links to 'community.perforce.com'.

#review-29924
#103 29914 Robert Cowham Remove link to Helix Installer until we refactor that to avoid support errors.
#102 29844 C. Thomas Tyler Added sdp_health_check to SDP package.

Updated docs in Guide and Release Notes to reflect this change.

Added more docs for this in the SDP Guide.

#review-29845 @vkanczes
#101 29824 C. Thomas Tyler Added comment that P4SERVICEPASS is not used; it remains in place for
backward compatibility.

Added FAQ: How do I change super user password?

Added FAQ: Can I remove the perforce user?

Added FAQ: Can I clone a VM to create a standby replica?

#review-29825
#100 29727 Robert Cowham Note the need for an extra p4 trust statement for $HOSTNAME
#99 29719 Robert Cowham Fix journal numbering example.
Add section to make replication errors visible.
#98 29715 C. Thomas Tyler Doc correction.
The sample command correctly indicates that
`/home/perforce` should be the home directory, but the text
still says it should be `/p4`, the legacy location.

Also added a note advising against use of automounted home dirs.

#review-29716
#97 29695 C. Thomas Tyler Adjusted curl commands, adding '-L' to support URL redirects.
Removed '-k' from curl commands referencing Perforce sites.
#96 29693 C. Thomas Tyler Adjusted /hxserverlocks recommendations:
* Changed filesystem name from 'tmpfs' to 'HxServerLocks' in /etc/fstab.
* Changed mount permissions from '0755' to '0700' to prevent data leaks.
* Changed mounted filesystem size recommendations.
* Updated info about size of files being 17 or 0 bytes depending on p4d version.
* Indicated change should be done in a maintenance window (as /etc/fstab
is modified).

Also updated limited sudoers to include entries for running setcap and getcap.

#review-29694 @robert_cowham
#95 29608 C. Thomas Tyler Doc updates as part of release cycle.
#94 29567 Andy Boutte Adding option to deliver logs and alerts via PagerDuty
#93 29563 Andy Boutte Adding optional local config directories for both instance and SDP wide configuration.
#92 29483 Robert Cowham Clarify case-insensitive servers
#91 29475 Robert Cowham For SELinux note the yum package to install for basics
#90 29370 C. Thomas Tyler Fixed a single typo.
#89 29311 C. Thomas Tyler Per Thomas Albert, adjusted title on doc page:

From: Perforce Helix Server Deployment Package (for UNIX/Linux)
To:   Perforce Helix Core Server Deployment Package (for UNIX/Linux)

#review-29312 @thomas_albert
#88 29239 C. Thomas Tyler Updated 'sudoers' documentation.
#87 29238 C. Thomas Tyler Fixing two harmless typos.
#86 29236 C. Thomas Tyler Updated all doc rev numbers for supported and unsupported docs to
2022.2 as prep for SDP 2022.2 release.
#85 29137 C. Thomas Tyler Added docs for proxy_rotate.sh, and updated docs for broker_rotate.sh.
#84 29096 Robert Cowham Add a section on installing Swarm triggers
#83 29055 Robert Cowham Update troubleshooting to check ckp_running.txt semaphore
#82 29044 Robert Cowham Update to include troubleshooting 'p4 pull -ls' errors
#81 29019 C. Thomas Tyler Clarified to indicate RHEL 8 is fine; only CentOS 8 is discouraged.

When changes were made to indicate CentOS 8 was discouraged (due to
being upstream of RHEL, and thus a dev/test sandbox for RHEL
(like Fedora) rather than a solid downstream distro suitable for
production use), the text inadvertently gave the impression that
RHEL 8 was not supported.
#80 29002 C. Thomas Tyler Doc correction; tip refers to 'wget' in a sample command that
uses curl instead.
#79 28986 C. Thomas Tyler Clarified text related to mandatory vs.
nomandatory standby replicas.
#78 28980 Robert Cowham Note how to configure Swarm to use postfix
#77 28926 Robert Cowham Added check for Swarm JIRA project access.
#76 28837 C. Thomas Tyler Updated docs for r22.1 release.
#75 28771 C. Thomas Tyler Changed email address for Perforce Support.

#review-28772 @amo @robert_cowham
#74 28767 C. Thomas Tyler SDP Guide Doc Updates:
* Fixed typos.
* Enhanced mandatory/nomandatory description.
* Added detail to instructions on using the `perforce-p4python` package,
and changed reference from Swarm docs to the more general Perforce Packages
page.
* Refactored FAQ, Troubleshooting Guide, and Sample Procedures appendices
for greater clarity.
* Added Appendix on Brokers in Stack Topology

#review-28768
#73 28716 Andy Boutte Correcting path to p4d_<instance>
#72 28686 Robert Cowham Clarify FAQ for replication errors
#71 28667 Robert Cowham Add a note re monitoring.
Add some FAQ appendix questions.
#70 28649 Andy Boutte Documenting alert notifications via AWS SNS
#review
https://jira.perforce.com:8443/browse/CLD-14
#69 28618 C. Thomas Tyler Fixed missing command re: .ssh directory generation.
#68 28606 C. Thomas Tyler Added SDP Health Checks appendix to UNIX/Linux SDP Guide.

Also removed some references to '-k' (insecure) in curl
statements.

#review-28607 @d_benedict
#67 28605 C. Thomas Tyler Fixed cosmetic/rendering issue; added a blank line in *.adoc
so a bulleted list displays correctly.
#66 28604 Robert Cowham Added notes for Python/P4Python and CheckCaseTrigger installation
#65 28534 lbarbier Enhancements to SDP Guide/Unix for adoc version following job 736.
#64 28503 Robert Cowham Add SELinux tip
#63 28496 Robert Cowham Fix typo in journalctl
#62 28493 Robert Cowham Added notes to get systemd SDP scripts working under SELinux
Thanks to Rich Alloway!
#61 28487 Robert Cowham Document Swarm/JIRA cloud link process.
Also Postfix clarification.
#60 28374 C. Thomas Tyler Updated :revnumber: and :revdate: fields for *.adoc files
for release.
#59 28351 Robert Cowham Tweaked sdp upgrades docs.
#58 28261 C. Thomas Tyler Fixed one-character doc typo (curk -> curl).
#57 28246 C. Thomas Tyler Enhanced the 'Upgrading the SDP' section of the SDP Guide:
* Added sample command to deal with possibly existing tarball.
* Added tips to enable less technical users to get past basic snags.
* Added detail on how to find your /hxdepots directory if not default.
#56 28230 C. Thomas Tyler Minor doc corrections.
#55 28225 C. Thomas Tyler Enhanced info on upgrading the SDP.
#54 28222 C. Thomas Tyler Fixed broken link/ref.
#53 28197 C. Thomas Tyler Partially functional version of sdp_upgrade.sh, with doc updates.
#52 28195 C. Thomas Tyler Refined location of SiteTags.cfg.sample file.
#51 28193 C. Thomas Tyler Renamed sample files (e.g.
SiteTags.cfg) in SDP tarball tree, appending
a .sample tag, to make rsync overlay of /p4/common/config safe.

Updated related docs in mkrep.sh referring to the sample file, to improve
an error message to guide the user to find the sample file.

#review-28194
#50 28180 C. Thomas Tyler Fixed oversight in documentation, describing how to check the
SDP Version file.
#49 28162 C. Thomas Tyler Typo/line add.
#48 28158 C. Thomas Tyler Made former '-n' preview mode the default behavior.
Added new '-y' flag to execute a real upgrade.

Added a message in preview mode to clarify that no actual
upgrade is performed.

Removed the '-n' preview option, as preview mode is now the
default behavior.

#review-28159
#47 28154 C. Thomas Tyler Added new Sample Procedures section.
Added Sample Procedure: Reseeding an Edge Server

Corrected terminology re: 'instance' and 'process' and 'server' to
be in line with other documentation and common usage.

Other minor fixes.

#review-28155
#46 28104 C. Thomas Tyler Fixed typo.
#45 28102 C. Thomas Tyler Clarified "breathing" comment (as in "breathing room") with clearer
and more translatable language.

#review-28103 @thomas_albert
#44 28100 C. Thomas Tyler Updated SDP Guide for UNIX/Linux:
* Filled in missing information re: new upgrades.
* Expanded on definition of vague "Exceptionally large" term.

Generating HTML for easy review; holding off on PDF as it will
be generated during the release.

#review-28101 @roadkills_r_us
#43 28071 Robert Cowham Clarify some notes re setting up Gmail
#42 27978 Robert Cowham Clarifications and warnings around load_checkpoint.sh
Mention recreate_offline_db.sh a little more prominently
Recommend installing postfix for mail.
#41 27890 C. Thomas Tyler Updated Release Notes and SDP Guide to clarify SDP r20.1 supports
Helix Core binaries up to r21.1, in advance of the coming SDP r21.1
release that will make it more obvious.

In get_helix_binaries.sh:
* Changed default Helix Core binary version to r21.1.
* Changed examples of getting a different version to reference r20.2.

#review-27891 @amo
#40 27875 C. Thomas Tyler Changes to SDP Guide:
* Changed location of home dir for `perforce` OSUSER for manual
installations to be the same as it is for Helix Installer installations,
changing from '/p4' to '/home/perforce'.  The /home/perforce location is
preferred for several reasons:
 - To work with common SSHD configs that require home directories to be
   under /home.
 - To keep /p4 clean, used only for SDP things, and not have user
  files like ~/.vimrc, ~/.Xauthority, Desktop, ~/.ssh, ~/.bashrc,
  ~/.p4enviro, ~/.p4tickets, ~/.p4config, etc.  (The original decision
to use '/p4' was made in 2007, long before any ~/.p4* files existed).
 - To have a separate home directory, which has many benefits including
  simplifying operational procedures that rely on having a separate
  home directory available.
 - That said, having the home directory in /p4 was the standard for
a long while, and isn't really broken despite no longer being
preferred.

* Replaced high-byte quotes with low-byte quotes in several places.

* Corrected case of Max* settings (e.g. maxresults -> MaxResults) to
match how they appear in the group spec.

* Fixed a few typos (one in doc, one in a suggested crontab).
#39 27779 C. Thomas Tyler Fixed typo found by a customer.
Thanks!
#38 27764 C. Thomas Tyler Updated Version to release SDP 2020.1.27763.
Re-generated docs.
#37 27722 C. Thomas Tyler Refinements to @27712:
* Resolved one out-of-date file (verify_sdp.sh).
* Added missing adoc file for which HTML file had a change (WorkflowEnforcementTriggers.adoc).
* Updated revdate/revnumber in *.adoc files.
* Additional content updates in Server/Unix/p4/common/etc/cron.d/ReadMe.md.
* Bumped version numbers on scripts with Version= def'n.
* Generated HTML, PDF, and doc/gen files:
  - Most HTML and all PDF are generated using Makefiles that call an AsciiDoc utility.
  - HTML for Perl scripts is generated with pod2html.
  - doc/gen/*.man.txt files are generated with .../tools/gen_script_man_pages.sh.

#review-27712
#36 27710 Robert Cowham Another tweak to tmpfs settings
#35 27709 Robert Cowham Note check for serverlocks.
Fix typo in path in failover.
#34 27643 7ecapilot Doc errors and corrections
#33 27536 C. Thomas Tyler Legacy Upgrade Guide doc updates:
* Added 'Put New SDP in Place' section.
* Added 'Set SDP Counters' section to set SDP_VERSION and SDP_DATE counters.
* Covered updating depot spec Map fields.
* Covered adding server.id files.
* Added missing content on putting new SDP directory in place.

SDP_Guide.Unix doc updates:
* Added Legacy Upgrade Scripts section w/clear_depot_Map_fields.sh.

Updated Makefile with new doc build dependencies.

Regenerated docs.
#32 27518 C. Thomas Tyler Merged Robert's SDP Guide changes in review 27490 with my latest edits.
#31 27505 C. Thomas Tyler Enhanced doc for Systemd/SysV services management and configuration
docs, separating basic configuration for start/stop/status from
enabling for start on boot (with Systemd/SysV variations for each).

Added doc coverage for using systemd to enable multiple broker configs.

Added doc coverage for applying limited sudo.

Spell check.
#30 27321 C. Thomas Tyler General review completed.

Removed links that downloaded scripts rather than referencing doc
content in HTML form.  A future SDP release may restore links of
a different kind.
#29 27250 C. Thomas Tyler Adjusted JournalPrefix standard to account for shared /hxdepots.

The JournalPrefix standard now allows for unfiltered replicas
(such as HA/DR replicas) to use same journalPrefix value as filtered
replicas and edge servers, using per-ServerID checkpoints folder,
if they share the same /hxdepots (e.g. NFS-mounted) with the master
(e.g. when lbr.replication=shared).

Related code change made to mkdirs.sh and mkrep.sh to support the
tweaks to the standard.

#review-27251
#28 27156 C. Thomas Tyler Consolidated SDP Standards into the SDP Guide for UNIX/Linux.

Added references to those sections in the Windows SDP Guide.

Normalized doc titles.

Various other doc updates.
#27 27096 C. Thomas Tyler Refactored SDP Legacy Upgrade content into a separate doc.
The SDP
Guide will remain comprehensive and cover how to upgrade the SDP
itself forward from the current version (2020.1) using the new,
p4d-like incremental upgrade mechanism.

The content for manual upgrade procedures needed to get older
SDP installations to 2020.1 is only useful until sites are on
2020.1. This content is extensive, narrowly focused, and of value
only once per installation, and thus the legacy upgrade content
is separated into its own document.

Regenerated work-in-progress HTML files for easier review.
#26 27059 Robert Cowham Update section link to mention Section 8
#25 27058 Robert Cowham Added direct links to the various scripts where they are explained.
Tweak some wording in SDP upgrade section
#24 27055 C. Thomas Tyler Pulled the SDP Upgrade Guide for Linux into the main SDP Guide,
and deleted the separate upgrade doc. Also other minor refinements.

Pulled in updated mkrep.sh v2.5.0 docs.

This version is still in progress.  Search for EDITME to find
areas requiring additional content.
#23 27041 Robert Cowham Windows Guide directly includes chunks of the Unix guide for replication etc, with a little
ifdef to avoid Unix only comments.
Fix Makefile and add missing generated man page.
#22 27039 Robert Cowham Minor tweaks for ease of use - docs not generated.
#21 27033 C. Thomas Tyler Work in progress updates to SDP_Guide.Unix.
#20 27021 C. Thomas Tyler Re-ordered so `systemd` info comes first (as it is more likely
to be relevant), and older SysV docs deferred.

Various other tweaks.
#19 27013 C. Thomas Tyler Updated adoc includes to reference generated manual pages in 'gen'.

The 'gen' subdirectory contains man pages generated from scripts
gen_script_man_pages.sh.

Made various other minor doc tweaks.
#18 26992 Robert Cowham Document SiteTags.cfg file
#17 26981 C. Thomas Tyler Added Appendix on starting and stopping the server.

#review @d_benedict
#16 26851 Robert Cowham Fix typo in tmpfs /etc/fstab entry which stopped it working in the doc.
Mention in pre-requisites for failover and failover guide the need to review
OS Config for your failover server.
Document Ubuntu 20.04 LTS and CentOS/RHEL 8 support. Note performance
has been observed to be better with CentOS.
Document pull.sh and submit.sh in main SDP guide (remove from Unsupported doc).
Update comments in triggers to reflect that they are reference implementations, not just examples. No code change.
#15 26780 Robert Cowham Complete rename of P4DNSNAME -> P4MASTERHOST
#14 26755 Robert Cowham Include p4verify.sh man page in SDP Guide automatically for usage section.
#13 26748 Robert Cowham Add recommended performance tweaks:
- THP off
- server.locks directory into RAM
#12 26747 Robert Cowham Update with some checklists for failover to ensure valid.
Update to v2020.1
Add Usage sections where missing to Unix guide
Refactor the content in Unix guide to avoid repetition and make things read more sensibly.
#11 26727 Robert Cowham Add section on server host naming conventions
Clarify HA and DR, and update links across docs
Fix doc structure for Appendix numbering
#10 26674 C. Thomas Tyler Removed reference to deleted file.
#9 26661 Robert Cowham Tidying up cross references.
Added missing sync_replica.sh docs.
#8 26656 Robert Cowham Fix typo
#7 26654 Robert Cowham First draft of new Failover Guide using "p4 failover"
Linked from SDP Unix Guide
#6 26649 Robert Cowham More SDP Doc tidy up.
Removed some command summary files.
#5 26644 Robert Cowham SDP Doc Update to address jobs.
Mainly documents scripts which lacked any mention.
#4 26637 Robert Cowham Include script help within doc
Requires a couple of tags in the scripts themselves.
#3 26631 Robert Cowham New AsciiDoc version of Windows SDP guide
#2 26628 Robert Cowham Basically have things working for AsciiDoc
#1 26627 Robert Cowham First version of AsciiDoc with associated PDF