Benchmarking tool for Perforce and SVN
===

Based on Locust: Locust is an easy-to-use, distributed, user load testing tool. It is intended for load testing web sites (or other systems) and figuring out how many concurrent users a system can handle.

* Write user test scenarios in plain-old Python
* Distributed & scalable - supports hundreds of thousands of users
* Web-based UI
* Can test any system

For more info:

* Docs: http://docs.locust.io/en/latest/what-is-locust.html
* GitHub source: https://github.com/locustio/locust

Background
--

A customised version of Locust which supports Perforce (and SVN) benchmarking with a configurable number of users executing basic tasks (e.g. sync/edit/add/delete/submit). It performs random numbers of adds/edits/deletes on files which are randomly text or binary (according to some currently hard-coded relative distributions).

The basic measure in the output is the number of submits/commits per second/minute. The tool is easily extended for more involved benchmarking tasks.

Configuration
--

Edit the file benchmark_config.yaml, according to the comments in it, to specify your local P4D (or SVN server) and the repository paths to use:

```yaml
general:
  min_wait: 1000
  max_wait: 10000
  workspace_root: /tmp/bench_tests

# Perforce benchmark testing parameters
# Specify password if required
perforce:
  port: localhost:2001
  user: bruno
  charset:
  password:
  repoPath: //depot/Jam/...
```

Examples of running the tool
--

Running in no-web mode:

```
locust -f p4_locust.py --no-web
locust -f svn_locust.py --no-web
```

Running in web mode (defaults to http://localhost:8089):

```
locust -f p4_locust.py
locust -f svn_locust.py
```

Web output
--

Note that the key measure is the "commit" line, as every commit contains adds/edits/deletes and is preceded by a sync. Unfortunately the init_sync values are currently reset (a hangover from the web app's initial design); this needs a tweak to leave them intact, since they are useful for understanding what happens the first time you sync a workspace as opposed to just syncing updates.
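The random add/edit/delete mix described under Background above can be sketched roughly as below. This is an illustrative sketch only: the helper names and the 70/30 text/binary split are assumptions, not the tool's actual hard-coded values.

```python
import random

# Hypothetical relative weights for the file-type mix; the real values
# are hard coded inside the benchmark scripts.
TEXT_WEIGHT = 0.7


def pick_file_type(rng=random):
    """Randomly choose 'text' or 'binary' according to TEXT_WEIGHT."""
    return "text" if rng.random() < TEXT_WEIGHT else "binary"


def pick_change_counts(rng=random, max_files=10):
    """Random numbers of adds/edits/deletes for one simulated commit."""
    return {
        "add": rng.randint(0, max_files),
        "edit": rng.randint(0, max_files),
        "delete": rng.randint(0, max_files),
    }
```

Each simulated user would draw counts like these before every submit, which is why the commit line in the output aggregates all three operation types.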
![alt text](images/running_locust.png "Locust running in browser")

Running with multiple client machines
--

As per the Locust documentation, it is easy to run a single master Locust instance with multiple child instances on different machines, each communicating results to the master for reporting.

* Locust distributed: http://docs.locust.io/en/latest/running-locust-distributed.html

Pre-requisites
===

Tested on Windows (with 32-bit Python) and Linux/Mac.

* Python 2.7.11+ or 3.5+

The following packages should be installed via pip:

```
pip install -r requirements.txt
```

Generating test data
===

There is a script to generate large numbers of test files of varying sizes.

```
# python3.6 createfiles.py
usage: createfiles.py [-h] [-m MAX] [-l LEVELS [LEVELS ...]] [-s SIZE]
                      [-d ROOTDIR] [-c] [-t] [-b]

optional arguments:
  -h, --help            show this help message and exit
  -m MAX, --max MAX     Number of files to create (default 100)
  -l LEVELS [LEVELS ...], --levels LEVELS [LEVELS ...]
                        Directories to create at each level, e.g. -l 5 10
  -s SIZE, --size SIZE  Average size of files (default 20000)
  -d ROOTDIR, --rootdir ROOTDIR
                        Directory where to start
  -c, --create          Create the files as specified instead of just
                        printing names
  -t, --textonly        Only create text files
  -b, --binaryonly      Only create binary files
```

Example (allows multi-threading of several jobs in parallel):

```
nohup python3.6 createfiles.py --max 2000 --size 5000000 --levels 40 40 --rootdir /test/ws --binaryonly --create > f1.out 2>&1 &
nohup python3.6 createfiles.py --max 2000 --size 100000 --levels 40 40 --rootdir /test/ws --create > f2.out 2>&1 &
```

Once you have created the number of files/total size you wish ("du -sh" is useful), add them to your Perforce server in the usual way from the command line (although you may wish to create smaller batches and submit them in parallel).
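The kind of file generation createfiles.py performs can be sketched as follows. This is a simplified illustration under assumed behaviour, not the script's actual implementation: binary files get random bytes written in `'wb'` mode, text files get random printable lines, and the size is varied around the requested average.

```python
import os
import random
import string


def make_test_file(path, avg_size=20000, binary=False, rng=random):
    """Create one test file of roughly avg_size bytes (illustrative only)."""
    size = max(1, int(avg_size * rng.uniform(0.5, 1.5)))
    if binary:
        # Binary mode ('wb') avoids any newline translation of the raw bytes.
        with open(path, "wb") as f:
            f.write(os.urandom(size))
    else:
        with open(path, "w") as f:
            written = 0
            while written < size:
                line = "".join(rng.choice(string.ascii_letters) for _ in range(79))
                f.write(line + "\n")
                written += 80
```

Running such a helper in a loop over a directory tree, with the text/binary choice driven by `--textonly`/`--binaryonly`, reproduces the general shape of the generated data set.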
To do list
---

Things like:

* Make sure binary files are actually written as binary
* Review the resetting of init_sync values
* Review other changes people have created pull requests for on GitHub, as there is some useful stuff there
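For the first to-do item, one quick way to verify what actually landed on disk is the common NUL-byte heuristic (the same kind of check many version control tools use for filetype detection). The helper below is hypothetical, not part of the tool:

```python
def looks_binary(path, sample_size=8192):
    """Heuristic: treat a file as binary if its first chunk contains a NUL byte."""
    with open(path, "rb") as f:
        chunk = f.read(sample_size)
    return b"\x00" in chunk
```

Running this over a sample of generated files (and comparing against the filetype Perforce assigned on submit) would confirm whether the binary files were really written in binary mode.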