HMS v1.0 System Components
===
HMS v1.0 consists of the following components.
* The [SDP](https://swarm.workshop.perforce.com/projects/perforce-software-sdp) software, on which HMS builds.
* A dedicated SDP instance named "hms" (`/p4/hms`), which is effectively the HMS Server. This instance of the SDP manages and synchronizes SDP and HMS scripts and configuration files on all Perforce server machines managed with the SDP, be they simple proxy machines, edge servers, or the master/commit server. SDP scripts detect unversioned changes to backup and failover scripts that might otherwise go unnoticed. HMS also tracks the many small details that can foil failover, such as incorrect crontab files for the perforce user on various machines, and much more. This provides greater visibility into all the tiny details that matter and need to be considered when doing HMS operations, such as a topology-wide upgrade or a full ecosystem DR failover -- things like disabling crontabs for backups during maintenance windows and turning them back on later, which are easily forgotten when human admins have to do a lot of typing to make them happen. The job of the HMS server is to keep the scripts and configuration files in sync during routine operation. The HMS server plays a role in coordinating *planned* failover, but is not required to be available to execute an *unplanned* HA or DR failover. Unplanned failover is executed directly on the machine that is to be the new master.
* A Swarm instance associated with the hms SDP instance, for the usual things Swarm provides (web interface, code review, email notification, etc.).
* A JSON object data model that defines the global topology. It knows every machine that contains Helix topology components of any kind, and knows about all the SDP instances (or *data sets*). The server will maintain an internal map of the global topology that can be updated from the various Helix instances, each of which is aware of its own topology (maintained with `p4 server` specs for each instance). The JSON objects are stored using `p4 keys` in the HMS Server (see the sketch after this list). Thus, like Swarm, HMS uses p4d itself as a data store, benefiting from the data protection available to p4d.
* A set of standards and naming conventions. Extending the SDP, which defines standards for things like P4ROOT and P4JOURNAL paths, HMS defines standards for naming *server specs*, a key element of Helix server topology. See the [Server Spec Naming Standard](https://swarm.workshop.perforce.com/projects/perforce_software-hms/files/main/ServerSpecNamingStandard.md).
* A custom p4broker, which adds new commands. For example:
  - p4 hms start *fgs*
  - p4 hms failover -ha *fgs*
  - p4 hms main start *fgs_*
* SSH Keys. The SSH keys for the 'perforce' user must be maintained such that the perforce user can ssh without a password (as needed for automation) among all machines. A minimal setup sketch appears after this list.
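The exact key names and JSON schema used by HMS are defined by HMS itself and are not reproduced here. As a purely illustrative sketch (the key name, JSON fields, hostnames, and the *fgs* instance name below are hypothetical), topology objects could be written and read with the standard `p4 key` and `p4 keys` commands against the hms instance:

```shell
# Hypothetical example only: store a topology object for an SDP instance
# named "fgs" as a p4 key on the HMS Server (the p4d of the hms instance).
# The key name and JSON fields are illustrative, not the actual HMS schema.
p4 key hms.topology.fgs '{"instance":"fgs","commit":"p4commit01","edges":["p4edge01"],"proxies":["p4proxy01"]}'

# List HMS-related keys and read one back.
p4 keys -e "hms.*"
p4 key hms.topology.fgs
```

Because the objects live in p4d, they are covered by the same checkpoint and journal protections the SDP already provides for the hms instance.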
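The SSH key requirement in the last bullet can be met with standard OpenSSH tooling. A minimal sketch, assuming the perforce OS user exists on every machine and using hypothetical hostnames (key type, passphrase policy, and the host list are site-specific decisions):

```shell
# As the perforce user, generate a key with no passphrase so that
# automated scripts can use it (hypothetical example only).
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519

# Push the public key to each managed machine; hostnames are placeholders.
for host in p4commit01 p4edge01 p4proxy01; do
    ssh-copy-id -i ~/.ssh/id_ed25519.pub perforce@${host}
done

# Confirm passwordless access works.
ssh perforce@p4edge01 hostname
```

Since the requirement is passwordless ssh among all machines, equivalent access must be set up from each machine that initiates connections, not just one.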