It is suggested that you have appropriate entries in ~/.aws/config and ~/.aws/credentials so that Terraform can authenticate with AWS.
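For example (a minimal sketch; the profile name and values are placeholders, substitute your own):

# ~/.aws/credentials
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>

# ~/.aws/config
[default]
region = eu-west-1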
Download the project:
mkdir /work/sdp-cloud-deploy
cd /work/sdp-cloud-deploy
p4 -u <workshop-user> clone -p public.perforce.com:1666 -f //guest/perforce_software/sdp-cloud-deploy/main/...
In the root directory of the project, run:
terraform init
Review and edit the following files to suit your environment:
env/dev/eu-west-1.tfvars (rename as appropriate and edit; see the sketch after this list)
provider.tf
variables.tf
instance_1.tf
instance_2.tf
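For illustration only, a .tfvars file is a set of name = value assignments; the actual variable names to set are those declared in variables.tf, so the ones below are purely placeholders:
aws_region    = "eu-west-1"
instance_type = "t2.medium"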
Ensure your public key is in files/id_rsa.pub
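If you do not already have a suitable key pair, one can be generated with ssh-keygen (a sketch; private/id_rsa is the path the ssh command later in this document expects):
ssh-keygen -t rsa -f private/id_rsa
cp private/id_rsa.pub files/id_rsa.pub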
Then preview the resources to be provisioned with:
terraform plan -var-file=env/dev/eu-west-1.tfvars
Note it is possible to have multiple workspaces, e.g. dev and prod.
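For example, separate workspaces can be created and switched between with the standard Terraform commands:
terraform workspace new dev
terraform workspace new prod
terraform workspace select dev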
To actually create:
terraform apply -var-file=env/dev/eu-west-1.tfvars
If you want to know the endpoints of resources created by this stack (e.g. EFS), run:
terraform output
(all outputs are defined in outputs.tf)
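An individual value can also be requested by name (as defined in outputs.tf), e.g.:
terraform output <output-name>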
Get one of the two public DNS records via:
terraform output
And then connect via SSH:
ssh -i private/id_rsa ec2-user@[publicDnsRecord]
Please note that SSH public keys such as files/id_rsa.pub are used to configure SSH access for the ec2-user account on the VMs.
You can add additional public keys to the files directory as desired.
Next, run:
./update_hosts.py
This will use "terraform output" to get the IP addresses etc. and will update the two files used by the Ansible steps below (including the hosts inventory).
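As an optional check that the updated inventory is usable, Ansible's ping module can be run against all hosts:
ansible -i hosts all -m ping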
The playbooks referred to below are in the sdp directory. An SSH key pair is installed for the perforce user account on both boxes, and SSH is configured to allow SSH between master and replica without a password prompt. This key pair can be regenerated, so you can replace these key files if you wish, but the replacement key must not have a passphrase.
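For example, a passphrase-less key pair can be generated with (a sketch; the output path is a placeholder and should match the key file names shipped with the project):
ssh-keygen -t rsa -N "" -f <path-to-replacement-key>

Then run the playbooks in order: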
ansible-playbook -i hosts sdp/filesystems.yaml
ansible-playbook -i hosts sdp/install_sdp.yaml
ansible-playbook -i hosts sdp/create_replica.yaml
Then check that you have access to the replica and that replication is running:
p4 -p <IPaddress-of-replica>:1666 -u perforce pull -lj
You will be prompted for a password, which was set from the entry in the file sdp/passwords.yaml.
When you have finished, make sure the instances have been stopped; you will not be able to destroy them otherwise.
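The instances can be stopped from the AWS console or with the AWS CLI, e.g. (the instance IDs are placeholders; they are visible in the console or in the output of terraform show):
aws ec2 stop-instances --instance-ids <instance-id-1> <instance-id-2>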
Then, run:
terraform destroy -var-file=env/dev/eu-west-1.tfvars