HMS v1.0 System Components
===

HMS v1.0 consists of the following components:

* The [SDP](https://swarm.workshop.perforce.com/projects/perforce-software-sdp) software, on which HMS builds.

* A dedicated SDP instance named "hms" (`/p4/hms`), which is effectively the HMS Server. This instance of the SDP manages and synchronizes SDP and HMS scripts and configuration files on all Perforce server machines managed with the SDP, be they simple proxy machines, edge servers, or the master/commit server. SDP scripts detect unversioned changes to backup and failover scripts that might otherwise go unnoticed. The HMS Server also tracks the many small details that can foil a failover, such as incorrect crontab files for the perforce user on various machines, and much more. This provides greater visibility into all the small details that matter during HMS operations such as a topology-wide upgrade or a full ecosystem DR failover -- for example, disabling crontabs for backups during a maintenance window and re-enabling them afterward, things easily forgotten when human admins have to do a lot of typing to make them happen. The job of the HMS Server is to keep scripts and configuration files in sync during routine operation. The HMS Server plays a role in coordinating *planned* failover, but is not required to be available to execute an *unplanned* HA or DR failover. An unplanned failover is executed directly on the machine that is to become the new master.

* A Swarm instance associated with the hms SDP instance, for the usual things Swarm provides (web interface, code review, email notification, etc.).

* A JSON object data model that defines the global topology. It knows every machine that contains Helix topology components of any kind, and knows about all the SDP instances (or *data sets*). The server will maintain an internal map of the global topology that can be updated from the various Helix instances, each of which is aware of its own topology (maintained with `p4 server` specs for each instance). The JSON objects are stored using `p4 keys` in the HMS Server. Thus, like Swarm, HMS uses p4d itself as a data store, benefiting from the data protection available to p4d. (See the example commands following this list.)

* A set of standards and naming conventions. Extending the SDP, which defines standards for things like P4ROOT and P4JOURNAL paths, HMS defines standards for naming *server specs*, a key element of Helix server topology. See the [Server Spec Naming Standard](https://swarm.workshop.perforce.com/projects/perforce_software-hms/files/main/ServerSpecNamingStandard.md).

* A custom p4broker, which adds new commands. For example:
  - p4 hms start *fgs*
  - p4 hms failover -ha *fgs*
  - p4 hms main start *fgs*

* SSH keys. The SSH keys for the 'perforce' user must be maintained so that the perforce user can ssh without a password (as needed for automation) among all machines. (See the setup sketch following this list.)
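
The topology JSON objects are stored as ordinary keys in the hms instance. The commands below are standard `p4 key`/`p4 keys` commands; the key name and JSON fields shown are illustrative assumptions, not the actual HMS schema.

```shell
# Store a hypothetical topology object for the "fgs" data set as a key
# in the hms instance. Key name and JSON fields are illustrative only.
p4 -p hms-server:1666 key hms.topology.fgs \
  '{"instance":"fgs","commit":"bos-helix-01","edges":["syd-helix-01"]}'

# List all topology keys held by the HMS Server.
p4 -p hms-server:1666 keys -e "hms.topology.*"

# Retrieve one topology object for inspection.
p4 -p hms-server:1666 key hms.topology.fgs
```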
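
Each Helix instance's awareness of its own topology comes from standard `p4 server` specs. The commands below are standard p4d commands; the spec name shown is a hypothetical example, and real names should follow the Server Spec Naming Standard linked above.

```shell
# List the server specs known to an instance; each spec describes one
# p4d, replica, edge, proxy, or broker in that instance's topology.
p4 servers

# Examine a single server spec (hypothetical name shown).
p4 server -o master.fgs
```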
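
One common way to satisfy the SSH key requirement is sketched below, run as the perforce user on the HMS Server machine. The host names are placeholders; sites may manage and distribute keys differently.

```shell
# Generate a key pair for the perforce user if one does not already exist,
# then push the public key to each managed machine (placeholder host names).
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
for host in helix-master helix-edge-01 helix-proxy-01; do
    ssh-copy-id perforce@$host
done

# Verify passwordless access works, as required for automation.
for host in helix-master helix-edge-01 helix-proxy-01; do
    ssh -o BatchMode=yes perforce@$host hostname
done
```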