= Helix Web Services Internal Guide: 2016.1 Alpha
Tristan Juricek <tjuricek@perforce.com>
v2016.1.alpha, January 2016
:toc: right
:sectlinks:

WARNING: This is a reference guide for Perforce employees. Do not share it with anyone outside the company. Much of the information requires access to internal resources.

== Software Design

=== Java 8 Traits

The initial Java 8 codebase made extensive use of default methods on interfaces to create a kind of _trait_-like concept. My first experience with this technique comes from Scala - in fact, it was pretty much the *main* Scala feature I liked. Other features of Scala tend to make your head melt in complexity, initiate infinite bikeshedding fights, etc. But not traits: simple to use, fantastic at composition.

NOTE: After I started writing this, I found https://opencredo.com/traits-java-8-default-methods/[this article from OpenCredo]. It's a pretty great overview of what's going on. You'll find that most of what I'm doing falls in the "shared function" category.

Now, if you read the http://www.scala-lang.org/old/node/126[overview of Scala's traits], you'll find that they are pretty darn close to a Java 8 interface. You have an interface, and that interface can include default methods. This is different from Java 7 and before, where you almost always have interface and implementation "pairs" that make up a reusable pool of functions. To wire all of these components together, dependency injection frameworks like Spring and Guice are used.

I've found a couple of things to be true in practice:

1. 99% of dependency injection usage is "static" in scope (basically, the scoping capability is never used)
2. Trait-style composition is far, far less verbose

Let's compare the older style of composition with the newer one by checking out the `P4Methods` implementation. This class handles calling the `IServer#execMapCmd` method of P4Java and converting the output to our handy `ResultMap` class.

The older Java 7 style would probably split the interface and implementation into separate components. You'd likely reference the interface in a consuming class, and then wire it together via your DI system.

[source,java]
....
public interface P4Methods {
    List<ResultMap> exec(IServer server, String command, String... args);
}

public class P4MethodsImpl implements P4Methods {
    private static final Logger logger = LoggerFactory.getLogger(P4MethodsImpl.class);

    @Override
    public List<ResultMap> exec(IServer server, String command, String... args) {
        // Implementation
    }
}

public interface UserService {
    List<UserCommand> listUsers(IServer server);
}

// Now, let's use it somewhere
public class UserServiceImpl implements UserService {
    private P4Methods p4Methods;

    // You'll probably have constructor variations
    public UserServiceImpl(P4Methods p4Methods) {
        this.p4Methods = p4Methods;
    }

    // Of course, you'd have getters/setters too
    public void setP4Methods(P4Methods p) { /* ... */ }

    // And a method to consume the interface
    @Override
    public List<UserCommand> listUsers(IServer server) {
        List<ResultMap> resultMaps = p4Methods.exec(server, "users");
        // convert to UserCommand
    }
}

// And your DI system's configuration object
@Configuration
public class MyAppConfig {
    @Bean
    public P4Methods p4Methods() {
        return new P4MethodsImpl();
    }

    @Bean
    public UserService userService() {
        return new UserServiceImpl(p4Methods());
    }
}
....

(Remember when people celebrated not having to use XML for this?)

How I've done this via these Java 8 "traits":

[source,java]
....
public interface P4Methods {
    default List<ResultMap> exec(Supplier<IServer> supplier, String command, String... args) {
        // Implementation
    }
}

public interface UserService extends P4Methods {
    default List<UserCommand> listUsers(Supplier<IServer> supplier) {
        List<ResultMap> resultMaps = exec(supplier, "users");
        // convert to UserCommand
    }
}
....

A few thoughts:

* Both of these implementations are equally testable and mockable
* You might want to rename `exec` to something else (see my next section), but this is the only case where I've used it
* The use of `Supplier<IServer>` instead of just `IServer` is really important (and I'll cover this in a different section)

==== A word of warning: don't overload default method names (with other default methods)

As pointed out, Java 8 default methods http://zeroturnaround.com/rebellabs/how-your-addiction-to-java-8-default-methods-may-make-pandas-sad-and-your-teammates-angry/[can cause pandas to be sad]. It basically happens when you have multiple default methods with the same name. Don't do this.

When it comes to overloading, you get a weird resolution mechanism in Java:

1. A concrete method in a class wins.
2. The most specific (the "lowest") default implementation among the implemented interfaces wins.
3. If there are implementations on multiple unrelated inheritance paths, it's a compile error.

So, if you write a really generic default method, like `void sort(List<String> l)`, I will pretty much guarantee that someone will get confused. Someone will write a `void sort(List l)` and, depending on the environment, could run into a strange compiler error or a surprise override by a base class.

Frequently, a developer will run into this when doing function decomposition. A lot of the time you'll have your "main method" that holds your logic, then you'll have an idea: "oh, I'll just move these lines into another method". That helper method is what ends up with the simple, generic name. In practice, I tend to find that this need for helper methods is usually ripe for revision using Java 8 lambda callbacks anyway.

==== Decompose complex default methods by using helper classes

Say you really, really want to break down a complex function in a trait you have, and you really don't want your helper to be accessible. Here, we use a "functional class", where the default method just delegates to a class that does all the real work.

[source,java]
....
public interface MagicMachine {
    default Magic performMagic(InputData inputData) {
        return new RealImplementation(inputData).get();
    }
}

// Note: RealImplementation is package-private.
class RealImplementation implements Supplier<Magic> {
    InputData inputData;
    Magic magic;

    public RealImplementation(InputData inputData) {
        this.inputData = inputData;
    }

    public Magic get() {
        doSomethingInteresting1();
        doSomethingInteresting2();
        return magic;
    }

    private void doSomethingInteresting1() {
        // magic!
    }

    private void doSomethingInteresting2() {
        // magic!
    }
}
....

The whole point is that `MagicMachine` is the only interface the consuming bits really need. `RealImplementation` is complicated, but its individual methods provide no real help to users of the framework. It's a way of simply restricting the "surface area" of your packages, so that other parts of the system have no need to look inside. Here I did it by making the class package-private.

I've done this in the past with the Helix Sync submission algorithm, which involved making lots of queries and doing different things based on the results.
So, you might have some different command calls depending on the state of the system, maybe trying to create a workspace or not, and so on.

=== Use `Supplier` for "lazy resource access"

One important aspect of the `UsesServerHandles` interface is that it provides a way to get a `Supplier<IServer>`, and not just an `IServer`. Why? So that your code doesn't actually go out and make a connection to the server until it absolutely needs to. This way you can put validation logic that might not need the server handle ahead of it without really needing to think about it.

Say we have a method that has a validation check in our callback block from `withServerHandle`:

[source,java]
....
class MyRoute implements Route, UsesServerHandles, ProbablySomethingElse {
    @Override
    public Object handle(Request request, Response response) throws Exception {
        // Initialize serverId, sessionData, settings
        withServerHandle(serverId, sessionData, settings, serverSupplier -> {
            if (!someValidationLogic()) {
                throw new IllegalStateException("invalid");
            }
            // OK go do something with serverSupplier
        });
    }
}
....

When `someValidationLogic()` decides to halt via exception, we won't even have tried to connect to the p4d. But you know that if you did connect, by the time you reach outside the `withServerHandle` block the p4d connection is closed and gone.

=== In Java 8, anonymous inner classes should probably be replaced by lambdas

You'll find that developers will likely fall back to old habits, which may be reinforced by their IDE:

[source,java]
....
List<String> loadAllTheThings(String id) {
    List<String> theThings = // something that actually fetches the things

    // hey I'm not used to lambdas so I'll do this Java 6 style
    Collections.sort(theThings, new Comparator<String>() {
        public int compare(String s1, String s2) {
            return s1.toLowerCase().compareTo(s2.toLowerCase());
        }
    });

    return theThings;
}
....

What does that look like with Java 8?

[source,java]
....
List<String> loadAllTheThings(String id) {
    List<String> theThings = // something that actually fetches the things

    theThings.sort((s1, s2) -> s1.toLowerCase().compareTo(s2.toLowerCase()));

    return theThings;
}
....

=== Prefer unchecked exceptions, otherwise your lambdas will get really noisy

One thing I haven't done consistently yet, but might do more strongly, is to wrap all checked exceptions with unchecked exceptions. The main reason is that Java 8 lambda functions rarely allow for any kind of checked exception. Thus, in code like this:

[source,java]
....
list.forEach(x -> doSomething(x));
....

that `doSomething` can't throw any kind of checked exception. If that method happens to have a signature like `void doSomething(Value x) throws TheException`, well, then you need to do something like this in your lambda:

[source,java]
....
list.forEach(x -> {
    try {
        doSomething(x);
    } catch (TheException ex) {
        throw new RuntimeException(ex);
    }
});
....

You'll find there are lots of variations on how to handle this sort of thing. Personally, my recommendation is to take your exception hierarchy and create "Unchecked" variations, like the new `java.io.UncheckedIOException`. This keeps everything from just becoming an anonymous `RuntimeException`. Expect to create lots of proxy interfaces around older APIs.

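To make that concrete, here's a minimal sketch of what one of those "Unchecked" variations could look like, modeled on `java.io.UncheckedIOException`. `TheException`, `UncheckedTheException`, `list`, and `doSomething` are just the placeholder names from the example above, not real HWS types.

[source,java]
....
// Hypothetical sketch: an unchecked wrapper in the style of java.io.UncheckedIOException.
public class UncheckedTheException extends RuntimeException {
    public UncheckedTheException(TheException cause) {
        super(cause);
    }

    @Override
    public TheException getCause() {
        // The only constructor takes a TheException, so this cast is safe.
        return (TheException) super.getCause();
    }
}

// The lambda stays readable, and callers can still catch a specific type:
list.forEach(x -> {
    try {
        doSomething(x);
    } catch (TheException ex) {
        throw new UncheckedTheException(ex);
    }
});
....

Callers that care about the original failure can catch `UncheckedTheException` and call `getCause()`; everything else just sees a `RuntimeException`.
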
== Infrastructure

=== Internal Documentation Build Locations

Documentation builds are accessible internally:

Mainline:

* Main HTML: https://swarm.perforce.com/files/builds/main/p4-bin/doc/internal/helix-web-services-doc.tgz
* Main PDF: https://swarm.perforce.com/files/builds/main/p4-bin/doc/internal/helix-web-services.pdf
* Unstyled HTML: https://swarm.perforce.com/view/builds/main/p4-bin/doc/internal/hws.html[//builds/main/p4-bin/doc/internal/hws.html]
* (This internal guide) https://swarm.perforce.com/view/builds/main/p4-bin/doc/internal/hws-internal.html[//builds/main/p4-bin/doc/internal/hws-internal.html]

Candidate:

* Old location, until the next sweep to candidate: https://swarm.perforce.com/view/builds/candidate/p4-doc/internal/hws.html[//builds/candidate/p4-doc/internal/hws.html]
* New location, updated for release with RNA: https://swarm.perforce.com/view/builds/candidate/p4-bin/doc/internal/hws.html[//builds/candidate/p4-bin/doc/internal/hws.html]

=== Obtaining Pre-release Builds of Helix Web Services

Internal locations for mainline builds of the different distributables:

* Binary archive (Windows) - https://swarm.perforce.com/downloads/builds/main/p4-bin/bin.noarch/helix-web-services-bin.zip[//builds/main/p4-bin/bin.noarch/helix-web-services-bin.zip]
* Binary archive (Linux, OS X) - https://swarm.perforce.com/downloads/builds/main/p4-bin/bin.noarch/helix-web-services-bin.tar.gz[//builds/main/p4-bin/bin.noarch/helix-web-services-bin.tar.gz]
* .deb package - Ubuntu 12 - https://swarm.perforce.com/downloads/builds/main/p4-bin/bin.ubuntu12x86_64/helix-web-services-deb.tgz[//builds/main/p4-bin/bin.ubuntu12x86_64/helix-web-services-deb.tgz]
* .deb package - Ubuntu 14 - https://swarm.perforce.com/downloads/builds/main/p4-bin/bin.ubuntu14x86_64/helix-web-services-deb.tgz[//builds/main/p4-bin/bin.ubuntu14x86_64/helix-web-services-deb.tgz]
* .rpm package - CentOS 6 - https://swarm.perforce.com/downloads/builds/main/p4-bin/bin.centos6x86_64/helix-web-services-rpm.tgz[//builds/main/p4-bin/bin.centos6x86_64/helix-web-services-rpm.tgz]

== System Configuration

=== "Undoc" HWS Configuration

These configuration values are intended for internal use and are not documented in the main user guide.

[cols="5*", options="header"]
|===
| Variable | Type | Overridable | Description | Default

| `ENABLE_MAN_IN_MIDDLE_ATTACKS`
| Boolean
| No
| When true, we always accept the SSL certificate from any SSL connection to p4d. (Which basically makes the whole point of certificates moot.)
| False

| `REQUEST_FILTER_PATH`
| String
| No
| Path to JavaScript logic that can filter requests on this instance.
|
|===

== Development Environment Notes

=== Environment Setup

==== Requirements

1. Java 8 JDK with JCE: see <<Installing Oracle JDK 8 with JCE>>
2. A classic Perforce client that mimics our build structure: see <<Setup of the Perforce Client>>
3. (Optional) To rebuild installers, a licensed installation of https://www.ej-technologies.com/products/install4j/overview.html[install4j].
4. (Optional) To build/run the Ruby client SDK, or test the Helix Cloud configuration, you'll need Ruby 2.2 installed via RVM: https://rvm.io/
5. (Optional) To build/run the JavaScript client SDK, you will need to <<Setup Node for JavaScript Development>>.
6. (Optional) To run the PHP client SDK, you'll need to <<Setup PHP for Client Development>>.

It is *strongly* recommended that you use an IDE that can generate its project setup from the Gradle build files. IntelliJ IDEA will work, and the free community edition should be fine for our needs.

==== Setup of the Perforce Client

You should mimic the organization of how our EC system sets up its job environments. The final reference resides in the file https://swarm.perforce.com/files/depot/main/tools/build/conf/helix-web-services/module-hws.conf#100[//depot/main/tools/build/conf/helix-web-services/module-hws.conf].

It is important that you *do not remap directories*. Several build tools - tools *not* in HWS - have been written assuming a particular directory structure. If you try to simplify your directory structure, your builds will likely fail in ways we can't easily predict.

Here's an example client mapping you can probably sync with abandon:

....
//builds/main/p4-bin/bin.{platform}/p4d[.exe] //CLIENT/builds/main/p4-bin/bin.{platform}/p4d[.exe]
//depot/main/helix-web-services/source/... //CLIENT/depot/main/helix-web-services/source/...
//depot/main/p4-doc/user/helix-web-services-notes.txt //CLIENT/depot/main/p4-doc/user/helix-web-services-notes.txt
//depot/main/p4-doc/manuals/_build/... //CLIENT/depot/main/p4-doc/manuals/_build/...
//depot/main/tools/build/bin/... //CLIENT/depot/main/tools/build/bin/...
//depot/main/tools/build/lib/... //CLIENT/depot/main/tools/build/lib/...
//depot/main/tools/build/conf/build.conf //CLIENT/depot/main/tools/build/conf/build.conf
//depot/main/tools/build/conf/helix-web-services/* //CLIENT/depot/main/tools/build/conf/helix-web-services/*
//depot/r14.1/p4-bin/bin.{platform}/p4d[.exe] //CLIENT/depot/r14.1/p4-bin/bin.{platform}/p4d[.exe]
//depot/r14.2/p4-bin/bin.{platform}/p4d[.exe] //CLIENT/depot/r14.2/p4-bin/bin.{platform}/p4d[.exe]
//depot/r15.1/p4-bin/bin.{platform}/p4d[.exe] //CLIENT/depot/r15.1/p4-bin/bin.{platform}/p4d[.exe]
//depot/r15.2/p4-bin/bin.{platform}/p4d[.exe] //CLIENT/depot/r15.2/p4-bin/bin.{platform}/p4d[.exe]
//depot/r16.1/p4-bin/bin.{platform}/p4d[.exe] //CLIENT/depot/r16.1/p4-bin/bin.{platform}/p4d[.exe]
....

NOTE: The `{platform}` should map to your platform's p4d folders, and if you're on Windows, you'll probably need the .exe extension. Of course, if you're on Windows, I'm sure something else will break, because I rarely run automation on Windows.

You can always use a "wide open" client without exclusions; just make sure to map the main source locations.

==== Installing Oracle JDK 8 with JCE

HWS requires a Java 8 JDK, and if you use the Oracle-based JDKs, you will need to deploy the Java Cryptography Extensions (JCE) for P4 SSL support. It's not always obvious _where_ you should put the jar files.

===== Installing JDK 8 on Linux

It looks like CentOS can just use the Oracle-provided RPMs; here's how:

* https://www.digitalocean.com/community/tutorials/how-to-install-java-on-centos-and-fedora

RPM downloads:

* http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

Debian-based Java 8 package repo:

* http://www.webupd8.org/2014/03/how-to-install-oracle-java-8-in-debian.html

===== Installing JDK 8 on OS X

Use the Oracle-provided .dmg: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

NOTE: You will need to set your `JAVA_HOME`. In my `.bash_profile`, this is set like so: `export JAVA_HOME=$(/usr/libexec/java_home)`.

===== Installing JDK 8 on Windows

Use the Oracle-provided installers: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

===== Installing the Java Cryptography Extensions

You basically copy two jar files from the .zip file into the `lib/security` folder of the JRE embedded in your JDK. Since they're .jar files, you don't need to download them for each OS: it's the same for all platforms.

JCE extension download page: http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html

Why we need the JCE extensions: http://answers.perforce.com/articles/KB/2620

On OS X, you can use the `/usr/libexec/java_home` program to print out the `JAVA_HOME` of your JDK. The `jre` directory of this path is the JRE, so the path you copy the JCE .jar files into is `$(/usr/libexec/java_home)/jre/lib/security`.

On Windows, this is likely a path like `C:\Program Files\Java\jdk1.8.0_65`.

On Linux, many of the packages install under `/usr/lib/jvm`. Double check, however, whether the installation method set the `JAVA_HOME` environment variable, which should point to your JDK.

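If you want a quick way to confirm that the unlimited-strength policy actually took effect for the JDK you're running, a tiny throwaway class like this (not part of HWS, just a local sanity check) will tell you:

[source,java]
....
import javax.crypto.Cipher;

public class JceCheck {
    public static void main(String[] args) throws Exception {
        // Prints 128 with the default (limited) policy files, and
        // 2147483647 once the JCE unlimited-strength policy is installed.
        System.out.println("Max AES key length: " + Cipher.getMaxAllowedKeyLength("AES"));
    }
}
....

Run it with the same JDK your `JAVA_HOME` points at; if it still prints 128, the .jar files probably went into the wrong `lib/security` directory.
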
=== Using the Gradle Build System

Familiarize yourself with the basics of Gradle via their https://gradle.org/getting-started-gradle-java/[user guide].

We use the Gradle wrapper to bootstrap Gradle. When calling Gradle, you don't run the `gradle` command; you call the `gradlew` script in the root of the `source` directory.

In general, you will probably not interact with Gradle tasks frequently, but a few common tasks are:

* `./gradlew :server:jar` - create the main hws.jar
* `./gradlew tasks` - list available tasks you can run
** Can be run on specific projects too, e.g., `./gradlew swagger:tasks`
* `./gradlew automation:cleanup` - shut down the testing environment
* `./gradlew install4j:media` - rebuild installers (except for RPMs because they suck)

There are *tons* of individual tasks in the system, however.

==== Running a local development mode server via Gradle

Run `./gradlew server:jar automation:basic16_1EnvironmentSetup`

NOTE: This runs in the foreground, dumping p4d output to the terminal, so you'll leave this open. To close this process, you can run `./gradlew automation:cleanup` in a new tab, or Ctrl+C should work.

This setup initializes and seeds a 16.1 p4d, along with launching HWS.

WARNING: You do not need to run this to execute tests. Each test class run will rebuild the p4d, and will probably shut your server process down to re-initialize it.

==== Running a local development mode server via IntelliJ debugger

This just launches the web application under a debugger, which is typically what you do during development.

1. Create an execution configuration that launches a plain ol' Java Application.
2. Set the Main class to: `com.perforce.hws.server.HelixWebServices`
3. Ensure the current working directory is the `server` subdirectory of the project
4. Set the Java system property: `-Dlog4j.configurationFile=/Users/tristan/dev/p4/depot/main/helix-web-services/source/server/debug-log4j.xml` (adjust the path for your own workspace)
5. Set this environment variable: `HWS_CONFIG_PATH=./config/dev.yaml`

NOTE: When running HWS under a debugger, it *WILL NOT* be restarted in each test class run. Tasks like `./gradlew automation:basic16_1EnvironmentSetup` will skip HWS initialization and just run p4d.

==== Running tests via Gradlew

There are lots of testing runs, but the big one is:

....
./gradlew testing:runAll
....

This will seed and spin up both p4d and HWS for every development-mode test class. We test multiple versions of p4d this way, as well as the different client SDKs.

NOTE: You *can* run HWS under a debugger, in which case the tests just restart p4d. Some, but not all, suites have Gradle tasks associated with them, because it's assumed you'll usually run individual suites under a debugger.

==== Running test suites via the IntelliJ debugger

1. Create a TestNG runtime configuration
2. Use the `Suite` configuration option (in the radio selector)
3. Specify the TestNG XML file to run, typically in the `./testing` subdirectory of the project source
4. Use the `testing_test` classpath, and probably set the working directory to `testing`

NOTE: If you're not running HWS in the debugger, this will attempt to start the most recently built `hws.jar` in the build tree. If you do not have the file `server/build/libs/hws.jar`, run `./gradlew server:jar` in the build root to create it.

==== Managing Third Party Dependencies in Gradle

Our use of Gradle is very simple, except for how we leverage third party software. HWS versions all third party software in with the source code. We use Gradle to download software as needed, based on `vendor` declarations in projects.

Example vendor block:

[source,gradle]
.build.gradle
....
dependencies {
    vendor 'com.esotericsoftware.yamlbeans:yamlbeans:1.09'
    vendor 'com.esotericsoftware.yamlbeans:yamlbeans:1.09:sources'
    vendor 'com.google.http-client:google-http-client:1.21.0'
    vendor 'com.google.http-client:google-http-client:1.21.0:sources'
    vendor 'com.google.http-client:google-http-client-gson:1.21.0'
    vendor 'com.google.http-client:google-http-client-gson:1.21.0:sources'
    vendor 'com.google.http-client:google-http-client-jackson2:1.21.0'
    vendor 'com.google.http-client:google-http-client-jackson2:1.21.0:sources'
    vendor 'com.sparkjava:spark-core:2.3'
    vendor 'com.sparkjava:spark-core:2.3:sources'
}
....

===== Including new dependencies

. Add `vendor` declarations to the `dependencies` block for the jar *and the sources*
. Execute the `getdeps` task for your project, e.g., `./gradlew :server:getdeps` from the command line
. Version all new files downloaded to the `vendor` directory in that project.

===== Updating dependencies

. Use the `dependencies` task for your project to print out the dependencies.
. Remove the files in the `vendor` directory that are related to your update.
+
WARNING: Do not just remove all files! Some dependencies are not available via Maven.

. Re-run the `getdeps` task and version the new files downloaded to `vendor`.

=== Generating Installers via Install4j

We are using https://www.ej-technologies.com/products/install4j/overview.html[install4j] to create almost all packages and distributables for the project. (We don't generate RPMs this way because install4j doesn't let us set some required fields for publishing to our package manager.)

We generally edit installer logic using the install4j UI, and perform actual builds via the `./gradlew install4j:media` task. The Gradle wrapper sets several paths used by the installation.

WARNING: The `install4j:media` tasks don't depend directly on all of the documentation tasks. You'll need to run the `doc:pdf` and `doc:publicsite` tasks directly within the `doc/` directory before creating installers. This is due to some strange issues within the Ant build system that converts DocBook XML to our internal PDF and HTML formats. If those tasks are not run directly from the `doc/` directory, output tends to go wherever the current directory is, which limits what we can do from Gradle.

The UI of install4j is mostly straightforward, and well documented. The only notable convention I've used is to set up a few compiler variables that reference the locations of the different artifacts. In general, nearly everything should reference some kind of `${compiler:...}` variable. (Don't hard-code absolute paths to your own machine!)

For the build machines, we only need the install4j command line installation, typically installed to `/opt/install4j6` on Linux machines. If you install the product somewhere else, you'll need to adjust the rules in `install4j/build.gradle` to find the new location. (If you need to use it on Windows, for example, you'll probably need to add those rules.)

=== Ruby Setup Basics

https://rvm.io/[RVM] was selected to manage Ruby versions on the CI machines, so it's what you'll need in your local development environment. Setup of RVM is generally well documented on the main page; we won't reproduce that here.

From there, you'll need to set up a Ruby version and install third party dependencies for the project:

....
$ rvm install ruby-2.2.3
$ cd clients/2016.1.0/ruby
$ bundle install
$ cd ../../../mock_raymond
$ bundle install
....

From there, you should be good to go to work on either the Ruby client SDK or test the Helix Cloud setup.

=== Setup Node for JavaScript Development

https://nodejs.org/[Node] needs to be installed on your system. We are not particularly sensitive to the version, as Node is mostly used for a few basic automation tasks. On OS X, I've used `brew install node`, which uses the "stable" branch.

Node dependencies you need to install globally:

....
$ npm install -g gulp-cli
....

=== Setup PHP for Client Development

==== Ubuntu 14

The following packages were installed to get PHP set up for executing tests:

....
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php7.0 php7.0-curl php7.0-mbstring php7.0-simplexml
....

==== OS X

I found using Homebrew to be relatively straightforward:

....
$ brew install homebrew/php/php70
....

On OS X, I've been using the default PHP installation, which seemed to only require creating this config file to get Composer to run:

[source,json]
.~/.composer/config.json
....
{
    "config": {
        "secure-http": false
    }
}
....

=== Setup Python for Client Development

==== Ubuntu 14

I've used the following PPA to grab Python via packages instead of compiling it myself:

....
sudo add-apt-repository ppa:fkrull/deadsnakes
sudo apt-get update
sudo apt-get install python2.7 python-pip
....

Then you need to set up virtualenv:

....
sudo pip install virtualenv
....

==== OS X

The default Python installation isn't really recommended: it's out of date and doesn't come with tools like pip or virtualenv to manage third party software cleanly. Using Homebrew is relatively simple, and will set up Python 2.7:

....
brew install python
....

Get virtualenv, which we use to manage third party dependencies:

....
pip install virtualenv
....

== How-Tos

=== How to generate a new "environment" for a test suite

Say you want a new configuration or data setup for your p4d or other servers - we've also included Git Fusion or Helix Cloud mocking servers on occasion. You'll want to be able to run that setup both manually (just for debugging) and automatically for tests. Here's a checklist of conventions:

. Create a wrapper "application" class in the `automation` project.
.. One example of this is our `BasicMainlineEnvironmentSetup`.
.. There's lots more on these conventions below.
. (Optional) Create a Gradle `JavaExec` task to call this application class.
. Create a `TestSetup` class in the `testing` project that contains `@BeforeSuite` and `@AfterSuite` annotations to call your new `automation` class.
. Create a new `testng.xml` file in the `testing` project that points to your new `TestSetup` initializer.

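To make steps 3 and 4 a little more concrete, here's a rough sketch of what such a `TestSetup` class tends to look like. `MyEnvironmentSetup` and its method names are hypothetical placeholders (only `BasicMainlineEnvironmentSetup` is a real class in the `automation` project), so treat this as the shape of the convention rather than working HWS code.

[source,java]
....
import org.testng.annotations.AfterSuite;
import org.testng.annotations.BeforeSuite;

public class MyEnvironmentTestSetup {

    // Hypothetical wrapper "application" class from the automation project
    // (see BasicMainlineEnvironmentSetup for a real example).
    private MyEnvironmentSetup environment;

    @BeforeSuite
    public void startEnvironment() throws Exception {
        environment = new MyEnvironmentSetup();
        environment.run(); // seed p4d, launch HWS, start any mock servers
    }

    @AfterSuite
    public void stopEnvironment() throws Exception {
        environment.cleanup(); // shut the server processes back down
    }
}
....

The new `testng.xml` then lists this class alongside the suite's test classes, so TestNG runs the environment setup once before the suite and tears it down afterwards.
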
==== Conventions on creating automation projects

When creating a new automation environment, there are several issues to keep in mind:

* Each new "server process" should probably have an interface to manage it
* Try to create pid files where we can
* Let the process run and log itself to stdout/stderr
* Create output logger threads to monitor the process while it's active, and forward the output to a Java logger
** You can use classes like `ManageP4dMethods.P4DOutputLogger` to do this on the fly, or `StreamReaderThread` to buffer the entire output.

If you do all of these things, it should be pretty easy to debug, automatically test, and trace the output of tests back to each of the different services you run.

== Troubleshooting/FAQ

=== I'm in IntelliJ, and the project doesn't seem to recognize my third party dependency

IntelliJ won't automatically reload anything that newly pops up in a `vendor/` directory of a project. You'll have to reload the project from Gradle.

1. Open the Gradle tool window via the menu option `View` / `Tool Windows` / `Gradle`
2. Select the "Refresh All" projects icon, which looks like two arrows in a circle, in the top left.

The new .jar file should now be recognized. You should be able to navigate the jar file by expanding it in the project tree.