# HSM (Helix Services Management)

HSM is a Docker container management system that hides away potentially error-prone details.
## Setup

Yes, I may automate this in the future. Until I understand these steps better, they're manual.
1. Install docker-machine: https://docs.docker.com/machine/
2. `docker-machine create --driver virtualbox hsm`
3. `$(docker-machine env hsm)`
4. Install docker-compose:

        curl -L https://github.com/docker/compose/releases/download/1.1.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
        chmod +x /usr/local/bin/docker-compose
## Design

Right now, Rake appears to be the best mechanism for HSM. It's likely that we'd want task dependencies: tasks like "build this project if it doesn't exist", "start it up if it's not running", etc. A list of tasks should probably be associated with each of the major Docker systems in use.
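As a sketch, the machine-level tasks with that "only if not running" behavior might look like the following Rakefile fragment. The `machine_running?` helper is an assumption, not an existing API; docker-machine is driven through its CLI.

```ruby
require 'rake'
extend Rake::DSL # only needed when running outside a Rakefile

# Sketch: returns true if `docker-machine ls` reports the named machine
# as Running. Parsing the table output is an assumption.
def machine_running?(name)
  `docker-machine ls`.lines.any? { |l| l.include?(name) && l.include?('Running') }
end

namespace :hsm do
  namespace :machine do
    desc 'List current machine states'
    task :status do
      sh 'docker-machine ls'
    end

    desc 'Start the local VM (no-op if already running)'
    task :start do
      sh 'docker-machine start hsm' unless machine_running?('hsm')
    end

    desc 'Stop the local VM'
    task :stop do
      sh 'docker-machine stop hsm' if machine_running?('hsm')
    end
  end
end
```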
* `hsm:machine:status` List current machine states
* `hsm:machine:start` Start a local VM
* `hsm:machine:stop` Stop the local VM

### HSM Configuration

We probably want to expose HSM configuration as a single Ruby file that
currently describes every project, and slowly migrate the docker-compose
setup into the HSM configuration. In the future, the `docker-compose.yml`
files will likely need to be autogenerated in order to get the right IDs
in the `external_links` configuration, for example.
We'll very likely want a project configuration that allows for local overrides of each component. A CI automation system will need to use the local machine (not a VM) as the Docker host. Development environments may want to expose one or more remote debugging sessions for different services.
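One way local overrides could work is a second config file merged over the defaults. A minimal sketch, assuming a plain nested-hash representation; the file split and the `deep_merge` helper are assumptions:

```ruby
# Sketch: merge a local override hash over the default configuration.
# A recursive merge lets an override change one nested value (e.g. the
# machine driver on a CI host) without restating everything else.
def deep_merge(base, override)
  base.merge(override) do |_key, old, new|
    old.is_a?(Hash) && new.is_a?(Hash) ? deep_merge(old, new) : new
  end
end

defaults  = { machines: { hsm: { driver: :virtualbox } } }
local     = { machines: { hsm: { driver: :none } } } # e.g. CI uses the host, not a VM
effective = deep_merge(defaults, local)
```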
Here's an example of what might be the first pass of the configuration:

    machines {
      hsm: {
        driver: :virtualbox
      }
    }

    services {
      postgres: {
        path: 'hsm/library/postgres'
      },
      perforce: {
        path: 'hsm/library/perforce'
      },
      p4webapi: {
        path: 'p4_web_api/p4_web_api'
      }
    }
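Note that for the snippet above to be valid Ruby, the settings hashes would need to be method arguments, e.g. `machines(hsm: { driver: :virtualbox })`. A minimal sketch of a reader for that form; `HsmConfig` and its hash-based storage are assumptions:

```ruby
# Sketch of a loader for the configuration DSL above. The `machines` and
# `services` entry points match the example; HsmConfig is an assumption.
class HsmConfig
  def initialize
    @machines = {}
    @services = {}
  end

  # Evaluate a config file in the context of a fresh HsmConfig.
  def self.load(path)
    config = new
    config.instance_eval(File.read(path), path)
    config
  end

  # DSL entry points: with a hash argument they register definitions;
  # without one they act as readers.
  def machines(defs = nil)
    @machines.merge!(defs) if defs
    @machines
  end

  def services(defs = nil)
    @services.merge!(defs) if defs
    @services
  end
end
```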
Notes on Docker Usage of Different Components
---------------------------------------------

### Perforce

    $(docker-machine env hsm)
    cd hsm/library/perforce
    docker-compose build
    docker-compose up
Note: this creates a session called 'perforce_p4'. External links will probably
need to reference `perforce_p4_1:p4`.
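For example, a dependent service's `docker-compose.yml` might reference the running Perforce container like this. The service name and `build` setting are hypothetical; only the `perforce_p4_1:p4` link name comes from above:

```yaml
# Hypothetical fragment: a service that talks to the already-running
# perforce_p4_1 container under the alias "p4".
someservice:
  build: .
  external_links:
    - perforce_p4_1:p4
```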
### Postgres

    $(docker-machine env hsm)
    cd hsm/library/postgres
    docker-compose up
### geminabox

See notes below on why I'm depending on this. Should be run before any local projects.

    $(docker-machine env hsm)
    cd hsm/library/geminabox
    docker-compose up
### p4_web_services_auth

    $(docker-machine env hsm)
    cd p4_web_services_auth
    docker-compose build --no-cache
    docker-compose up
    docker-compose rm
### p4_web_api

    $(docker-machine env hsm)
    cd p4_web_api/p4_web_api
    docker-compose build --no-cache
    docker-compose up
### p4_web_api_client

    $(docker-machine env hsm)
    cd p4_web_api/clients/ruby/p4_web_api_client
    docker-compose build --no-cache
    docker-compose up
    docker-compose rm
### p4_project_services_data

    $(docker-machine env hsm)
    cd p4_project_services/p4_project_services_data
    docker-compose build --no-cache
    docker-compose up
    docker-compose rm
### p4_project_services

    $(docker-machine env hsm)
    cd p4_project_services/p4_project_services_data
    docker-compose build --no-cache
    docker-compose up
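Each section above repeats the same env/cd/build/up cycle, which is exactly the kind of thing the Rake layer could absorb. A sketch that just assembles the command sequence; the helper name and option defaults are assumptions:

```ruby
# Sketch: build the repeated docker-compose command sequence for one
# service directory. Returning strings (rather than executing them)
# keeps the sequence easy to inspect; a Rake task could feed each
# through `sh`.
def compose_cycle(path, no_cache: true, rm: false)
  build = no_cache ? 'docker-compose build --no-cache' : 'docker-compose build'
  cmds  = [build, 'docker-compose up']
  cmds << 'docker-compose rm' if rm
  cmds.map { |cmd| "cd #{path} && #{cmd}" }
end
```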
Notes
-----

### Stack trace running 'docker-compose build'

If you get a stack trace when running `docker-compose build`, this is likely
due to DOCKER_HOST not being set. (Ergo, you need to run
`$(docker-machine env hsm)`.) Double check `docker-machine ls` to make sure
the `hsm` machine is running.
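The Rake tasks could fail fast on this instead of surfacing a stack trace. A minimal sketch; the function name is an assumption:

```ruby
# Sketch: raise a readable error when DOCKER_HOST is missing, instead of
# letting docker-compose blow up with a stack trace.
def assert_docker_env!(env = ENV)
  host = env['DOCKER_HOST']
  return true if host && !host.empty?
  raise 'DOCKER_HOST is not set; run $(docker-machine env hsm) first'
end
```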
### Sharing gems between containers

Mounting a "persistent gem cache" is possible using data volumes, but it really should be a per-application gem cache, since you still run into good ol' Ruby version conflicts.

For the most part, I'm less concerned about the speed of `bundle install`, and more just trying to figure out how to set up a development environment. I'd like gem project G1 to be used by services hosted in containers C1 and C2. If I created a persistent volume for C1 and C2, do we set GEM_HOME to that persistent volume? Do we add the volume to GEM_PATH? Or do we run a custom geminabox server?

* GEM_HOME: the entire cache is shared, which is not great, because we do want container isolation between C1 and C2. This technique is described here: http://www.atlashealth.com/blog/2014/09/persistent-ruby-gems-docker-container/#.VRAxwVw-BBw
* GEM_PATH: this would be fast to pick up local dependencies, though `bundle install` on all third-party deps for each container would still hit the external website.
* geminabox: this actually starts up a persistent cache that we have to reconfigure the Gemfiles to use. (The line `source https://rubygems.org` becomes `source http://geminabox`.)

I'm inclined towards running geminabox, since that allows container isolation. Plus, you can ship a known set of dependencies, which might make it faster to set up a known system.
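For reference, the Gemfile change under the geminabox approach looks roughly like this; the `geminabox` hostname is whatever the container link exposes, and the dependency is illustrative only:

```ruby
# Gemfile fragment: resolve gems against the internal geminabox host.
# The hostname comes from the docker link, not public DNS.
source 'http://geminabox'

gem 'sinatra' # hypothetical dependency, served from the internal cache
```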
### Cleaning up exited containers

    docker ps -a | grep 'Exited' | awk '{print $1}' | xargs docker rm
### Incremental testing of containers

Initially, I started testing individual services by creating specs in the "client" projects for each service. This creates a weird issue where the client project often needs the main service to be a deployed gem, since, for integrated testing, it's actually a dependency.

I sense that, in the long haul, there may be three Ruby projects per service:

1. The main service
2. The client API
3. Integrated tests