#!/bin/bash
set -u

# Get EBS volumes with Name tag of <host>-root and <host>-hxdepots,
# e.g. perforce-01-root and perforce-01-hxdepots. Take snapshots, and
# tag them with the current journal counter.

function msg () { echo -e "$*"; }
function errmsg () { msg "\\nError: ${1:-Unknown Error}\\n"; ExitCode=1; }
function bail () { errmsg "${1:-Unknown Error}"; exit "${2:-1}"; }

declare ThisScript="${0##*/}"
declare Version="1.0.2"
declare ThisHost=$(hostname -s)
declare VolumeBaseName=
declare VolumeName=
declare VolumeId=
declare -i ExitCode=0

msg "Started $ThisScript v$Version at $(date)."

for VolumeBaseName in root hxdepots; do
   VolumeName="${ThisHost}-${VolumeBaseName}"

   # Look up the volume by its Name tag. Explicit text output yields a bare
   # vol-xxxx value; the default JSON output is never an empty string, which
   # would defeat the -z check below.
   VolumeId=$(aws ec2 describe-volumes \
      --filters "Name=tag:Name,Values=$VolumeName" \
      --query 'Volumes[*].VolumeId' --output text)

   if [[ -z "$VolumeId" ]]; then
      errmsg "Could not determine VolumeId for $VolumeName. Skipping it."
      continue
   fi

   msg "Snapshotting volume $VolumeName [$VolumeId]."

   # Display the command about to be run, then run it.
   msg aws ec2 create-snapshot --description "Snapshot of $VolumeName on $(date)." --volume-id "$VolumeId" --tag-specifications "ResourceType=snapshot,Tags=[{Key=Host,Value=$ThisHost},{Key=Contents,Value=CheckpointsAndArchives}]"

   aws ec2 create-snapshot --description "Snapshot of $VolumeName on $(date)." --volume-id "$VolumeId" --tag-specifications "ResourceType=snapshot,Tags=[{Key=Host,Value=$ThisHost},{Key=Contents,Value=CheckpointsAndArchives}]" ||
      errmsg "Failed to create snapshot for $VolumeName."
done

if [[ "$ExitCode" -eq 0 ]]; then
   msg "Processing completed OK."
else
   msg "Processing completed WITH ERRORS. Review the output above."
fi

exit $ExitCode
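For context, the SDP convention (per change 25104 in the history below) keeps cloud-provider scripts under /p4/common/cloud/<provider>. A hypothetical crontab entry that runs this script nightly might look like the following; the exact schedule, script path, and log location are assumptions, and in practice the script is best invoked immediately after the daily journal rotation completes:

```
# Hypothetical crontab entry for the perforce OS user (paths are assumptions).
# Run nightly at 02:30, after journal rotation, appending output to a log.
30 2 * * * /p4/common/cloud/aws/snapshot.sh >> /p4/1/logs/snapshot.log 2>&1
```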
Change history (# | Change | User | Description):
#6 | 29942 | C. Thomas Tyler

Customer-contributed changes:

1. I use a slightly different method to get the latest journal counter (for
   appending to the snapshot name). The approach I had been using was running
   into issues with the Perforce account not being logged into the Perforce
   server, so I borrowed the approach from the existing SDP scripts to log in
   and retrieve that value.

2. I've added a section at the end to automatically delete old snapshots when
   they age past a value that's configurable in the script. If snapshots were
   being generated automatically by AWS automation, this aging-off process
   would be part of that; since we're pushing snapshot creation from the
   Perforce server, I needed to add this functionality so I don't have to
   manually delete old versions from AWS occasionally.
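A rough sketch of both changes, if added inside snapshot.sh (this is not the contributed code; the SDP paths, the `SnapshotRetentionDays` name, and the Host tag filter are assumptions, and the sketch reuses the script's msg/errmsg helpers and ThisHost variable):

```bash
# (1) Log in and fetch the current journal counter, following the SDP
# convention of sourcing p4_vars and using the admin password file.
# These paths reflect common SDP layout and are assumptions here.
source /p4/common/bin/p4_vars 1
"$P4BIN" -p "$P4PORT" -u "$P4USER" login \
   < "/p4/common/config/.p4passwd.${P4SERVER}.admin"
JournalCounter=$("$P4BIN" -p "$P4PORT" -u "$P4USER" counter journal)

# (2) Age off old snapshots for this host. JMESPath string comparison on
# StartTime works because EC2 timestamps are ISO 8601. GNU date assumed.
declare -i SnapshotRetentionDays=30
Cutoff=$(date -d "-${SnapshotRetentionDays} days" +%Y-%m-%d)
for SnapshotId in $(aws ec2 describe-snapshots --owner-ids self \
      --filters "Name=tag:Host,Values=$ThisHost" \
      --query "Snapshots[?StartTime<'${Cutoff}'].SnapshotId" --output text); do
   msg "Deleting snapshot $SnapshotId (older than $SnapshotRetentionDays days)."
   aws ec2 delete-snapshot --snapshot-id "$SnapshotId" ||
      errmsg "Failed to delete snapshot $SnapshotId."
done
```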
#5 | 29941 | C. Thomas Tyler

Contributed changes to snapshot.sh:

* Added /hxlogs to the list of snapshotted volumes.
* Added explicit output formatting (text) to AWS CLI calls.
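In terms of the script, those two changes amount to roughly the following (a standalone sketch, not the actual diff):

```bash
#!/bin/bash
# Sketch of the 29941 changes: /hxlogs joins the volume list, and the AWS
# CLI query uses explicit text output so VolumeId is a bare vol-xxxx value
# rather than a JSON document (which an emptiness test would never catch).
ThisHost=$(hostname -s)
for VolumeBaseName in root hxdepots hxlogs; do
   VolumeName="${ThisHost}-${VolumeBaseName}"
   VolumeId=$(aws ec2 describe-volumes \
      --filters "Name=tag:Name,Values=$VolumeName" \
      --query 'Volumes[*].VolumeId' --output text)
   echo "$VolumeName -> ${VolumeId:-NOT FOUND}"
done
```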
#4 | 27722 | C. Thomas Tyler

Refinements to @27712:

* Resolved one out-of-date file (verify_sdp.sh).
* Added missing adoc file for which the HTML file had a change
  (WorkflowEnforcementTriggers.adoc).
* Updated revdate/revnumber in *.adoc files.
* Additional content updates in Server/Unix/p4/common/etc/cron.d/ReadMe.md.
* Bumped version numbers on scripts with Version= def'n.
* Generated HTML, PDF, and doc/gen files:
  - Most HTML and all PDF are generated using Makefiles that call an
    AsciiDoc utility.
  - HTML for Perl scripts is generated with pod2html.
  - doc/gen/*.man.txt files are generated with
    .../tools/gen_script_man_pages.sh.

#review-27712
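For reference, the generation steps listed above look roughly like this (command forms are assumptions; in the SDP they are wrapped by Makefiles and the tool script named above):

```bash
# AsciiDoc sources to HTML and PDF (assuming the asciidoctor toolchain):
asciidoctor WorkflowEnforcementTriggers.adoc
asciidoctor-pdf WorkflowEnforcementTriggers.adoc

# HTML for Perl scripts via pod2html:
pod2html --infile=SomeScript.pl --outfile=SomeScript.html
```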
#3 | 26843 | C. Thomas Tyler | Updated to adapt to changes in AWS CLI.

#2 | 25108 | C. Thomas Tyler | Corrected comments; no functional change.

#1 | 25104 | C. Thomas Tyler
Added sample script to create EBS snapshots of volumes with a Name tag of
<host>-root and <host>-hxdepots, e.g. perforce-01-root and
perforce-01-hxdepots.

This is intended to be called at the optimal time to reduce risk exposure.
The optimal time is immediately after a journal rotation completes near the
start of the overall daily checkpoint process, or optionally immediately
after the offline checkpoint is created.

This script creates 2 EBS snapshots with appropriate resource tagging each
time it is run. Note that a full recovery would entail mounting these 2
volumes, creating new hxdepots and hxmetadata volumes, finalizing the SDP
structure, etc. This is fairly straightforward, but not trivial, and is
needed only as a Plan B for recovery. Plan A is to use Perforce replication
to a secondary instance for faster and easier recovery.

Basic data retention policies can be implemented with EBS Data Lifecycle
Policies. Custom automation can copy recovery assets to S3 Glacier for
long-term storage.

In addition to this script, a new high-level SDP structure is created,
/p4/common/cloud. Under the new cloud directory is a directory for the
cloud provider, e.g. one of aws, azure, gcp, rackspace, etc.
Cloud-provider specific files can go in there.
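As a concrete illustration of the retention options mentioned above (a hedged sketch only; the role ARN, account ID, tag values, bucket name, and schedule are placeholders, not part of the SDP):

```bash
# EBS Data Lifecycle Manager policy: daily snapshots of volumes tagged
# Host=perforce-01, retaining the most recent 14.
aws dlm create-lifecycle-policy \
   --description "Daily EBS snapshots of Perforce volumes" \
   --state ENABLED \
   --execution-role-arn arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole \
   --policy-details '{
      "ResourceTypes": ["VOLUME"],
      "TargetTags": [{"Key": "Host", "Value": "perforce-01"}],
      "Schedules": [{
         "Name": "DailySnapshots",
         "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
         "RetainRule": {"Count": 14},
         "CopyTags": true
      }]
   }'

# Copy recovery assets (e.g. checkpoints) to S3 using the Glacier storage
# class for long-term storage; bucket and paths are placeholders.
aws s3 cp /p4/1/checkpoints/ s3://my-perforce-dr-bucket/checkpoints/ \
   --recursive --storage-class GLACIER
```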