= Server Deployment Package (SDP) for Perforce Helix: Unsupported Scripts and Triggers
Perforce Professional Services <consulting@perforce.com>
:revnumber: v2019.3
:revdate: 2020-08-21
:doctype: book
:icons: font
:toc:
:toclevels: 3
:xrefstyle: full

== Preface

The Server Deployment Package (SDP) is the implementation of Perforce's recommendations for operating and managing a production Perforce Helix Core Version Control System. It is intended to provide the Helix Core administration team with tools to help:

* Simplify Management
* High Availability (HA)
* Disaster Recovery (DR)
* Fast and Safe Upgrades
* Production Focus
* Best Practice Configurables
* Optimal Performance, Data Safety, and Simplified Backup

This guide documents some scripts and triggers which are categorised as *Unsupported*. All the scripts and triggers referred to by the SDP Guides for Unix and Windows are fully supported by Perforce Support, assuming your license entitles you to support.

All triggers and scripts in the `Unsupported` folder are provided as examples. They are Community supported only. They are known to work in customer environments, but are not maintained and tested to the same quality levels as the main scripts.

*Please Give Us Feedback*

Perforce welcomes feedback from our users. Please send any suggestions for improving this document or the SDP to consulting@perforce.com.

:sectnums:

== Samples

Scripts and utilities in this folder are examples which were part of the SDP in the past.

=== bin/htd_move_logs.sh

Script to compress and move Helix Server structured audit logs.

Implementation assumptions and suggestions:

* Assumes the rotated log files are named `audit-nnn.csv`
* Do NOT configure your log files to be placed in `$P4ROOT`
* Set `TARGETDIR` in the script

=== bin/p4web_base

P4Web base init script for running a p4web instance, similar in function to `/p4/common/bin/p4d_base`. Please refer to SDP_Guide.Unix for more details.
It can be used from service files or even systemd service definitions. It is located here because *P4Web* is a deprecated and unsupported product.

=== broker/one_per_user.sh

This broker filter script limits commands of a given type to one process running at a time for the same user. It relies on `p4 monitor` having been enabled with `p4 configure set monitor=1` (or higher).

This script is called by the `p4broker` process. The p4broker provides an array of fields on STDIN, e.g. "command: populate" and "user: joe", which get parsed to inform the business logic of this script.

It is enabled by adding a block like the following to a broker config file, with this example limiting the `p4 populate` command:

    command: ^populate$
    {
        action = filter;
        execute = /p4/common/bin/broker/one_per_user.sh;
    }

=== Triggers

==== Workflow Enforcement Triggers

These triggers are documented in link:WorkflowEnforcementTriggers.html[HTML doc] or link:WorkflowEnforcementTriggers.pdf[PDF doc]. Where appropriate below, reference is made to this section.

==== CheckCaseTrigger.py

This trigger checks for attempts to add new files (including via branching or renaming) which differ only in case, in *some part of their path*, from existing files. This avoids issues in these scenarios:

* for a case-insensitive server:
** Windows clients (case insensitive) can accidentally create new directories or files which sync into different directories on Unix (case sensitive)
* for a case-sensitive server:
** Unix clients can create 2 (or more) files which differ only in case, and when synced to a Windows client, only one of them appears

[source]
.Usage
----
include::../Samples/triggers/CheckCaseTrigger.py[tags=includeManual]
----

==== CheckChangeDesc.py

This trigger parses change descriptions to ensure that they conform to the required format. See the source for an example of Python regexes.
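As a sketch of this kind of regex-based format check: the pattern below is a hypothetical example requiring a JIRA-style issue key at the start of the description, not the pattern the shipped trigger uses.

```python
import re

# Hypothetical policy (for illustration only): descriptions must start
# with an issue key such as "PROJ-123: summary text".
DESC_RE = re.compile(r"[A-Z][A-Z0-9]+-\d+: \S")

def desc_ok(description: str) -> bool:
    """Return True if the changelist description matches the required format."""
    return bool(DESC_RE.match(description.strip()))
```

In a real change-submit or change-content trigger the script would read the description from the changelist and exit non-zero (with a helpful message) when the check fails.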
[source]
.Usage
----
include::../Samples/triggers/CheckChangeDesc.py[tags=includeManual]
----

==== CheckFixes.py

Part of <<_workflow_enforcement_triggers>>

This trigger is intended for use with P4DTG (Defect Tracking Replication) installations. It allows fixes to be added or deleted depending on the values of job fields. Thus you can control workflow and disallow fixes if jobs are in a particular state. For example, if the field JiraStatus is closed, then you are not allowed to add or delete a fix.

[source]
.Usage
----
include::../Samples/triggers/CheckFixes.py[tags=includeManual]
----

==== CheckFixes.yaml

Sample YAML config file for <<_checkfixes_py>>

==== CheckFolderStructure.py

Part of <<_workflow_enforcement_triggers>>

This trigger ensures that any attempt to add (or branch) a file does not create a new folder name at specific levels. With the new stream protections options in p4d 2020.1, this trigger can be removed.

[source]
.Usage
----
include::../Samples/triggers/CheckFolderStructure.py[tags=includeManual]
----

==== CheckJobEditTrigger.py

For enforcement of job editing with P4DTG usage (e.g. JIRA).

[source]
.Usage
----
include::../Samples/triggers/CheckJobEditTrigger.py[tags=includeManual]
----

==== CheckStreamNameFormat.py

[source]
.Usage
----
include::../Samples/triggers/CheckStreamNameFormat.py[tags=includeManual]
----

==== CheckSubmitHasReview.py

Part of <<_workflow_enforcement_triggers>>

As of Swarm 2019.1 this trigger is no longer required, since it can be replaced with Swarm's Workflow functionality.

[source]
.Usage
----
include::../Samples/triggers/CheckSubmitHasReview.py[tags=includeManual]
----

==== ControlStreamCreation.py

[source]
.Usage
----
include::../Samples/triggers/ControlStreamCreation.py[tags=includeManual]
----

==== CreateSwarmReview.py

Part of <<_workflow_enforcement_triggers>>

This trigger creates Swarm reviews automatically for committed changes. It is deprecated since Swarm now supports this as part of its workflow.
[source]
.Usage
----
include::../Samples/triggers/CreateSwarmReview.py[tags=includeManual]
----

==== DefaultChangeDesc.py

Creates a default changelist description.

[source]
.Usage
----
include::../Samples/triggers/DefaultChangeDesc.py[tags=includeManual]
----

==== DefaultSwarmReviewDesc.py

Updates a Swarm review changelist with suitable template text. This can encourage users to record information against the review, or use a checklist.

Part of <<_workflow_enforcement_triggers>>

[source]
.Usage
----
include::../Samples/triggers/DefaultSwarmReviewDesc.py[tags=includeManual]
----

==== DefaultSwarmReviewDesc.yaml

Example YAML config file for <<_defaultswarmreviewdesc_py>>

==== JobIncrement.pl

Trigger to increment jobs with custom names. Enable this as a form-in trigger on the job form, for example:

    Triggers:
        JobIncrement form-in job "/p4/common/bin/triggers/JobIncrement.pl %formfile%"

The default job naming convention with Perforce Jobs has an auto-increment feature, so that if you create a job with a `Job:` field value of `new`, it will be changed to jobNNNNNN, with the six-digit value being automatically incremented.

Perforce jobs also support custom job naming, e.g. to name jobs PROJX-123, where the name itself is more meaningful. But if you use custom job names, you forego the convenience of automatic generation of a new job number.

Typically, if the default job naming feature isn't used, it's because new issues originate in an external issue tracking system, so there's no need for incrementing by Perforce; the custom job names just mirror the values in the external system.

This script aims to make it easier to use custom job names with Perforce even when there is no external issue tracker integration, by providing the ability to generate new job names automatically.

The `Project:` field in the Jobspec has a `select` field with pre-defined values for project names.
Projects desiring to use custom job names will define a counter named JobPrefix-<project_name>, with the value being a tag name, a short form of the project name, to be used as a prefix for job names. For example, a project named joe_schmoe-wicked-good-thing might have a prefix of WGT. Jobs will be named WGT-1, WGT-2, etc.

By convention, job prefixes are comprised only of uppercase letters, matching the pattern `^[A-Z]+$`. No spaces, commas, dashes, underbars, etc. are allowed. (There is no mechanism for mechanical enforcement of this convention, nor is one needed, as tags are defined and created by Perforce Admins.)

To define a prefix for a project, an admin defines a value for the appropriate counter, e.g. by doing:

    p4 counter JobPrefix-some-cool-project SCT

*High Number Counter*

For projects with defined tags, there will also be a high number counter tracking the highest numbered job with a given prefix. This counter is created automatically and maintained by this script.

This trigger script fires as a `form-in` trigger on job forms, i.e. it fires on jobs that are on their way into the server. If the `Job:` field value is `new` and the `Project:` field value has an associated JobPrefix counter, then the name of the job is determined and set by incrementing the High Number counter, ultimately replacing the value `new` with something like SCT-201 before it ever gets to the server. If no High Number counter exists for the project, it gets set to `1`.

Usage:

    JobIncrement.pl -h|-man

Display a usage message. The `-h` option displays a short synopsis only, while `-man` displays the full message.

Return status: zero indicates normal completion; non-zero indicates an error.

==== JobsCmdFilter.py

This script is designed to run as a jobs command filter trigger on the server.
It ensures that multiple wildcards are not specified in any search string (which can cause the server to perform excessive processing and impact performance).

Usage:

    jobs-cmd-filter command pre-user-jobs "/p4/common/bin/triggers/JobsCmdFilter.py %args%"

==== P4Triggers.py

Base class for many of the Python triggers.

==== PreventWsNonAscii.py

This script is designed to run as a form-save trigger on the server. It will cause a workspace save to fail if any non-ASCII characters are present in the workspace spec. It also blocks odd characters in the workspace name.

Usage:

    PreventWSNonASCII form-save client "/p4/common/bin/triggers/PreventWsNonAscii.py %formfile%"

==== RequireJob.py

Allows admins to require jobs to be associated with all submits in particular areas of the repository.

Part of <<_workflow_enforcement_triggers>>

[source]
.Usage
----
include::../Samples/triggers/RequireJob.py[tags=includeManual]
----

==== SetLabelOptions.py

This script is designed to run as a form-in and form-out trigger on the server. It sets the autoreload option for static labels and doesn't allow the user to change it.

Usage:

    setlabelopts form-out label "/p4/common/bin/triggers/SetLabelOptions.py %formfile%"
    setlabelopts form-in label "/p4/common/bin/triggers/SetLabelOptions.py %formfile%"

==== SwarmReviewTemplate.py

Part of <<_workflow_enforcement_triggers>>

[source]
.Usage
----
include::../Samples/triggers/SwarmReviewTemplate.py[tags=includeManual]
----

==== TFSJobCheck.py

Simple trigger to ensure jobs always contain a particular string. Note that <<_checkchangedesc_py>> is a more general form of this trigger.

Trigger table entry:

    TFSJobCheck change-submit //depot/path/... "/p4/common/bin/triggers/TFSJobCheck.py %changelist%"

==== ValidateContentFormat.py

This trigger is intended for file content validation as part of a change-content trigger. It works for YAML and XML, and only checks files matching the required file extensions - see below.
It is particularly useful to ensure that the `Workflow.yaml` file itself doesn't get accidentally corrupted!

Part of <<_workflow_enforcement_triggers>>

[source]
.Usage
----
include::../Samples/triggers/ValidateContentFormat.py[tags=includeManual]
----

==== Workflow.yaml

Sample generic configuration file for use with <<_workflow_enforcement_triggers>>

==== WorkflowTriggers.py

Base class for use by various Workflow triggers as per <<_workflow_enforcement_triggers>>

Imports <<_p4triggers_py>>

==== archive_long_name_trigger.pl

This script unconverts the ASCII expansions of `#@*%` in archive files. The reason for doing this is that, when expanded, the names can overflow the Unix file path length.

Usage:

    $file_prefix -op <operation> -rev <revision> -lbr <path/file> < stdin

Trigger:

    arch archive //... "/p4/common/bin/triggers/archive_long_name_trigger.pl -op %op% -lbr %file% -rev %rev%"

==== command_block.py

This is a command trigger to allow you to block commands from all but listed users.

Trigger table entry examples:

    command-block command pre-user-obliterate "/p4/common/bin/triggers/command_block.py %user%"
    command-block command pre-user-(obliterate|protect$) "/p4/common/bin/triggers/command_block.py %user%"

==== dictionary

This file contains dictionary translations for parsing requests and generating responses. Used by <<_rad_authcheck_py>>

==== externalcopy.txt

Documents how to use externalcopy programs to transfer data between commit and edge.

==== otpauthcheck.py

This trigger will first check to see if the userid is in the LOCL_PASSWD_FILE and authenticate with that if found. If the user isn't in the local file, it checks to see if the user is a service user, and if so, it will authenticate against just the auth server. Finally, it will check the user's password and authenticator token if the other two conditions don't match.
The trigger table entry is:

    authtrigger auth-check auth "/p4/common/bin/triggers/vip_authcheckTrigger.py %user% %serverport%"

==== otpkeygen.py

This script generates a key for a user and stores the key in the Perforce server.

Used together with <<_otpauthcheck_py>>

==== pull.sh

Example pull trigger for https://community.perforce.com/s/article/15337[External Archive Transfer using pull-archive and edge-content triggers]

It reads a filename to get the list of files to copy from commit to edge, and does the copy using `ascp` (Aspera file copy).

The configurable `pull.trigger.dir` should be set to a temp folder like `/p4/1/tmp`.

Startup commands look like:

    startup.2=pull -i 1 -u --trigger --batch=1000

The trigger entry for the pull commands looks like this:

    pull_archive pull-archive pull "/p4/common/bin/triggers/pull.sh %archiveList%"

There are some pull trigger options, but they are not necessary with Aspera. Aspera works best if you give it the max batch size of 1000 and set up 1 or more threads. Note that each thread will use the max bandwidth you specify, so a single pull-trigger thread is probably all you will want.

The ascp user needs to have SSH public keys set up or export `ASPERA_SCP_PASS`. The ascp user should be set up with the target as / with full write access to the volume where the depot files are located. The easiest way to do that is to use the same user that is running the p4d service.

TIP: ensure ascp is correctly configured and working in your environment: https://www-01.ibm.com/support/docview.wss?uid=ibm10747281 (search for "ascp connectivity testing")

A standard SDP environment is assumed, e.g. P4USER, P4PORT, OSUSER, P4BIN, etc. are set, PATH is appropriate, and a super user is logged in with a non-expiring ticket.

==== pull_test.sh

IMPORTANT: THIS IS A TEST SCRIPT - it substitutes for <<_pull_sh>>, which uses Aspera. IT IS NOT INTENDED FOR PRODUCTION USE!!!!

If you don't have an Aspera license, then you can test with this script to understand the process.
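The essential job of such a pull trigger — reading the archive list file passed via `%archiveList%` and transferring each named archive — can be sketched in Python. This is a minimal stand-in for an `ascp`-based copy, assuming the list file contains one archive path per line (the real list format and the real scripts use `ascp`; all paths here are illustrative).

```python
import shutil
from pathlib import Path

def transfer_archives(archive_list: Path, src_root: Path, dst_root: Path) -> int:
    """Copy each archive named in archive_list from src_root to dst_root.

    A local-copy stand-in for an ascp transfer; returns the number of
    files copied. Assumes one relative archive path per line.
    """
    copied = 0
    for line in archive_list.read_text().splitlines():
        rel = line.strip()
        if not rel:
            continue  # skip blank lines
        dst = dst_root / rel
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src_root / rel, dst)
        copied += 1
    return copied
```

A production trigger would replace the `shutil.copy2` call with an invocation of `ascp` (or, for testing, `scp`/`rsync`), and would exit non-zero on any transfer failure so the server retries the pull.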
==== rad_authcheck.py

This trigger will first check to see if the userid is in the LOCL_PASSWD_FILE and authenticate with that if found. If the user isn't in the local file, it checks to see if the user is a service user, and if so, it will authenticate against LDAP. Finally, it will check the user against the Radius server if the other two conditions don't match.

You need to install the python-pyrad package and python-six package for it to work. It also needs the file named dictionary in the same directory as the script.

* Set the Radius servers in RAD_SERVERS below
* Set the shared secret
* Pass in the user name as a parameter and the password from stdin

The trigger table entry is:

    authtrigger auth-check auth "/p4/common/bin/triggers/rad_authcheck.py %user% %serverport% %clientip%"

NOTE: The script currently is set such that the Perforce user names should match the RSA IDs. In the case of one customer, the RSA IDs were all numeric, so we made the Perforce usernames be realname_RSAID and had this script just strip off the realname_ part. An example is commented out in the main function.

==== radtest.py

This is a radius test script. You need to install the python-pyrad package and python-six package for it to work. It also needs the file named dictionary in the same directory as the script.

* Set the Radius servers in radsvrs below
* Set the shared secret
* Pass in the user name as a parameter and the password from stdin

==== submit.sh

Example submit trigger for https://community.perforce.com/s/article/15337[External Archive Transfer using pull-archive and edge-content triggers]

Partner script to <<_pull_sh>>

It uses `fstat -Ob` with some filtering to generate a list of files to be copied, creates a temp file with the filename pairs expected by ascp, and then performs the copy.

This configurable must be set:

    rpl.submit.nocopy=1

The edge-content trigger looks like this:

    EdgeSubmit edge-content //...
        "/p4/common/bin/triggers/ascpSubmit.sh %changelist%"

The `ascp` user needs to have SSH public keys set up or export `ASPERA_SCP_PASS`. The `ascp` user should be set up with the target as / with full write access to the volume where the depot files are located. The easiest way to do that is to use the same user that is running the p4d service.

TIP: ensure ascp is correctly configured and working in your environment: https://www-01.ibm.com/support/docview.wss?uid=ibm10747281 (search for "ascp connectivity testing")

A standard SDP environment is assumed, e.g. P4USER, P4PORT, OSUSER, P4BIN, etc. are set, PATH is appropriate, and a super user is logged in with a non-expiring ticket.

==== submit_form_1.py

This script modifies the description of the change form for the Perforce users listed in the submit_form_1_users group.

Trigger table entry:

    submitform1 form-out change "/p4/common/bin/triggers/submit_form_1.py %formfile% %user%"

==== submit_form_1_in.py

This script checks the input of the description form for the path specified in the triggers table.

Trigger table entry:

    submitform1_in change-submit //depot/somepath/... "/p4/common/bin/triggers/submit_form_1_in.py %changelist% %user%"

==== submit_test.sh

IMPORTANT: THIS IS A TEST SCRIPT - it substitutes for <<_submit_sh>>, which uses Aspera. IT IS NOT INTENDED FOR PRODUCTION USE!!!!

If you don't have an Aspera license, then you can test with this script to understand the process. See the script for details.

=== triggers / tests

This directory contains test harnesses for various triggers documented in the section above. They import a common base class `p4testutils.py` and then run various unit tests.

The tests are one of two types:

* simple unit tests
* integration tests using a real `p4d` instance (running in DVCS-style mode)

The tests make it straightforward to ensure that you haven't broken anything. You need to ensure you have a `p4d` executable in your PATH.
Run any individual test:

    python3 TestRequireJob.py

== Maintenance

These are example scripts which have proven useful in the past. They are NOT fully tested and NOT guaranteed to work, although in most cases they are fairly simple and reliable. Treat with some caution!!!

=== accessdates.py

This script is normally called by another script, such as <<_unload_clients_py>>

However, if run standalone, it will generate 4 files with a list of specs that should be archived, based on the number of weeks in <<_maintenance_cfg>>

The files generated are:

    branches.txt
    clients.txt
    labels.txt
    users.txt

=== add_users.sh

This script adds a set of users from a `users_to_add.csv` file of the form:

    <user>,<email>,<full_name>[,<group1 group2 group3 ...]

The first line of the `users_to_add.csv` file is assumed to be a header and is always ignored.

    vi users_to_add.csv
    ./add_users.sh 2>&1 | tee add_users.$(date +'%Y%m%d-%H%M').log

=== addusertogroup.py

This script adds a user or users to the specified group.

Usage:

    python addusertogroup.py [instance] user group

* instance defaults to 1 if not given.
* user = user_name or a file containing a list of user names, one per line.
* group = name of the Perforce group to add the user(s) to.

=== checkusers.py

This script will generate a list of all user names that are listed in any group, but do not have a user account on the server. The results are written to `removeusersfromgroups.txt`.

Usage:

    python checkusers.py

You can pass that file to `removeuserfromgroups.py` to do the cleanup.

=== checkusers_not_in_group.py

This script will generate a list of all standard Perforce user names that are not listed in any group. It prints the results to the screen, so you may want to redirect the output to a file.

Usage:

    python checkusers_not_in_group.py

=== clean_protect.py

This script will drop all lines in the protect table that reference a group listed in the `groups.txt` file passed into the script.
The list of groups to drop is passed in as the first parameter and the protect table is passed in as the second parameter.

Usage:

    python clean_protect.py remove_groups.txt p4.protect

`remove_groups.txt` is generated using <<_protect_groups_py>> - see that script for details. Run `p4 protect -o > p4.protect` to generate the protections table.

You can redirect the output of this script to a file called `new.p4.protect` and then compare the original `p4.protect` and the `new.p4.protect`. If everything looks okay, you can update the protections table by running:

    p4 protect -i < new.p4.protect

=== convert_label_to_autoreload.py

Converts a label to use the `autoreload` option: https://www.perforce.com/manuals/cmdref/Content/CmdRef/p4_label.html#Form_Fields_..503[see Command Reference]

Usage:

    convert_label_to_autoreload.py <label or file_with_list_of_labels>

=== convert_rcs_to_unix.sh

Executes a command which converts Windows RCS files with CRLF line endings to Unix LF endings:

    find . -type f -name '*,v' -print -exec perl -p -i -e 's/\r\n/\n/' {} \;

=== countrevs.py

This script counts the total number of files and revisions from a list of files in the Perforce server.

Usage:

    p4 files //... > files.txt

then run:

    python countrevs.py files.txt

=== creategroups.py

This script creates groups on the server based on the entries in a local file called `groups.txt`, which contains lines in this format:

    group,groupname1 username1 username2
    :
    group,groupname2 username2 username3

Run:

    python creategroups [instance]

Instance defaults to 1 if not given.

=== createusers.py

This script will create a group of users all at once based on an input file. The input file should contain the users in the following format, one per line:

    user,email,fullname

Run:

    python createusers.py userlist.csv <instance>

Instance defaults to 1 if not passed in.
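The heart of a script like createusers.py is turning each input line into a user spec that can be fed to `p4 user -i -f`. A minimal sketch of that conversion (field names are from the standard user spec; how the real script builds the form may differ):

```python
def user_spec(line: str) -> str:
    """Build a 'p4 user -i' input form from a 'user,email,fullname' line.

    Splits on the first two commas only, so full names containing
    commas are preserved intact.
    """
    user, email, fullname = (f.strip() for f in line.split(",", 2))
    return (f"User: {user}\n"
            f"Email: {email}\n"
            f"FullName: {fullname}\n")
```

Each generated form would then be piped to `p4 user -i -f` (e.g. via `subprocess.run`) to create the account.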
=== del_shelve.py

This script will delete shelves and clients that have not been accessed in the number of weeks defined by the weeks variable in `maintenance.cfg`.

Run the script as:

    del_shelve.py [instance]

If no instance is given, it defaults to 1.

=== delusers.py

Calls the <<_p4deleteusers_py>> automain module, which will remove all users that haven't accessed Perforce in the number of weeks set by the userweeks variable in `maintenance.cfg`.

Usage:

    delusers.py

=== edge_maintenance

Runs regular tasks such as:

* removing the server.locks dir
* unloading clients

=== email.bat

This script is used to send email to all of your Perforce users. Create a file called `message.txt` that contains the body of your message. Then call this script and pass the subject in quotes as the only parameter.

It makes a copy of the previous email list, then calls make_email_list.py to generate a new one from Perforce. The reason for making the copy is so that you will always have an email list that you can use to send email with. Just comment out the call to `python make_email_list.py` in the script and run it. It will use the current list to send email from. This is handy in case your server goes off-line.

=== email.sh

Unix version of <<_email_bat>>

=== email_pending_client_deletes.py

This script will email users whose clients haven't been accessed in the number of weeks defined in the weeks variable (of `maintenance.cfg`), and warn them that the clients will be deleted in one week if they do not use them.

=== email_pending_user_deletes.py

This script will email users whose user accounts haven't been accessed in the number of weeks defined in the weeks variable (of `maintenance.cfg`), and warn them that the accounts will be deleted in one week if they do not use them.

=== EvilTwinDetector.sh

Detects "evil twins": files at the same relative path in related branches which were added independently on each branch, rather than created by branching, and which can therefore cause surprises when integrating.
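The core comparison can be sketched as a pure function: given, for each of two related branches, a map from relative path to how its first revision arrived, candidate evil twins are the paths independently added on both sides. This is only an illustration of the idea; the actual shell script gathers its data differently via `p4` commands.

```python
def evil_twins(src_files: dict, tgt_files: dict) -> list:
    """Given {relpath: first_action} maps for two related branches,
    return paths that were independently added on both sides
    (candidate evil twins), sorted for stable output."""
    return sorted(p for p in src_files.keys() & tgt_files.keys()
                  if src_files[p] == "add" and tgt_files[p] == "add")
```

In practice the first-revision action for each file could be obtained from `p4 filelog` output ("add" vs. "branch").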
[source]
.Usage
----
include::../Maintenance/EvilTwinDetector.sh[tags=includeManual]
----

=== group_audit.py

This script emails the owners of each group with instructions on how to validate the membership of the groups they own.

=== isitalabel.py

Determines if a label exists in Perforce.

Usage:

    isitalabel.py labelname

The program prints out the results of the search. It sets ISALABEL in the environment to 0 if the label is found, 1 if not found, and also exits with errorlevel 1 if the label is not found.

=== license_status_check.sh

Determines how many days/hours etc. are remaining for a license.

Usage:

    license_status_check.sh <instance>

=== lowercp.py

This script will make a lowercase copy of the source folder, and report any conflicts found during the copy.

Run this script from the source path and adjust the target path in the script. Pass in the folder to be copied in lower case to the target. For example, to copy `/p4/1/depots/depot` to `/depotdata2/p4/1/depots`, run:

    cd /p4/1/depots
    lowercp.py depot

=== lowertree.py

This script is used to rename a tree to all lowercase. It is helpful if you are trying to convert your server from case sensitive to case insensitive. It takes the directory to convert as the first parameter.

=== maintain_user_from_groups.py

Usage:

    maintain_user_from_groups.py [instance]

Defaults to instance 1 if the parameter is not given.

What this script does:

* Reads users from groups
* Creates any missing user accounts
* Removes accounts that are not in the group

=== maintenance

This is an example maintenance script to run the recommended maintenance scripts on a weekly basis. You need to make sure you update the hard-coded locations to match yours.

=== maintenance.cfg

Created from <<_template_maintenance_cfg>>

=== make_email_list.py

This script creates a list of email addresses for your users directly from their Perforce user accounts. It is intended to be used as part of email.bat, but can be used with any mail program that can read addresses from a list.
This can be replaced by a single command using recent options for the p4 CLI:

    p4 -F "%email%|%user%" users

=== mirroraccess.py

This script will add a user to all the groups of another user in Perforce.

Usage:

    python mirroraccess.py instance user1 user2 <user3> <user4> ... <userN>

* user1 = user to mirror access from.
* user2 = user to mirror access to.
* <user3> ... <userN> = additional users to mirror access to.

=== p4deleteuser.py

Removes a user and any clients/shelves owned by that user.

Note: This script will pull all the server addresses from the `p4 servers` output. In order for that to work, you must fill in the Address field on each server form.

Usage:

    p4deleteuser.py [instance] user_to_remove
    p4deleteuser.py [instance] file_with_users_to_remove

=== p4lock.py

This script locks a Perforce label. The only command line parameter is the label:

    p4lock.py LABEL

See <<_p4unlock_py>>

=== p4unlock.py

This script unlocks a Perforce label. The only command line parameter is the label:

    p4unlock.py labelname

=== protect_groups.py

This script will list all the groups mentioned in `p4 protect` that are not a Perforce group. You need to pull the groups from p4 protect with:

    p4 protect -o | grep group | cut -d " " -f 3 | sort | uniq > protect_groups.txt

and pass protect_groups.txt to this script.

Usage:

    python protect_groups.py [instance] protect_groups.txt > remove_groups.txt

* instance defaults to 1 if not given.

=== proxysearch.py

This script will search your server log file and find any proxy servers that are connecting to the server. The server log file needs to be set to debug level 1 or higher in order to record all the commands coming into the server. Just pass the log file in as a parameter to this script and it will print out a list of the proxy servers.

=== pymail.py

Reads values from `maintenance.cfg` for things like `mailhost`. Sends email according to the parameters below.
Usage:

    pymail.py -t <to-address or address file> -s <subject> -i <input file>

=== remove_empty_pending_changes.py

This script will remove all of the empty pending changelists on the server.

Usage:

    remove_empty_pending_changes.py

=== remove_jobs.py

This script will remove all of the fixes associated with jobs and then delete the jobs listed in the file passed as the first argument. The list can be created with `p4 jobs > jobs.txt`. The script handles the extra text in the lines.

Usage:

    remove_jobs.py [SDP_INSTANCE] list_of_jobs_to_remove

=== removeuserfromgroups.py

This script will look for the specified users in all groups and remove them.

Usage:

    removeuserfromgroups.py [instance] USER
    removeuserfromgroups.py [instance] FILE

* USER can be a single username, or it can be a FILE with a list of users.
* instance defaults to 1 if not given.

=== removeusersfromgroup.py

This script will remove the specified users from the given group.

Usage:

    removeusersfromgroup.py [instance] USER groupname
    removeusersfromgroup.py [instance] FILE groupname

* USER can be a single username, or it can be a FILE with a list of users.
* instance defaults to 1 if not given.

=== sample_cron_entries.txt

This is a sample crontab entry to call the `maintenance` script from cron.

=== sdputils.py

This module is a library for other modules in this directory and is imported by some of them.

=== server_status.sh

This script is used to check the status of all the instances running on this machine. It can handle named and numbered instances.

=== setpass.py

This script will set the password for a user to the value set in the password variable in the main function. The name of the user to set the password for is passed as a parameter to the script.

Usage:

    python setpass.py [instance] user

* instance defaults to 1 if not given.

=== template.maintenance.cfg

Example for editing and copying to `maintenance.cfg`. Read by many of the scripts.
Values include things like the default number of weeks used to detect *old* users/clients/labels.

=== unload_clients.py

This script will unload clients not accessed since the number of weeks specified in `maintenance.cfg`.

=== unload_clients_with_delete.py

This script will unload clients that have not been accessed since the number of weeks specified in the `maintenance.cfg` file.

This version of unload clients will delete clients with exclusive checkouts, since unload will not unload those clients. It also deletes shelves from the clients to be deleted, since delete will not delete a client with a shelf.

=== unload_labels.py

This script will unload labels that have not been accessed since the number of weeks specified in the `maintenance.cfg` entry.
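The age test shared by these unload scripts can be sketched as a small Python helper: given label names with their `Access` dates (Perforce reports dates as `YYYY/MM/DD`), select those older than the configured number of weeks. The function names and data shapes here are illustrative, not taken from the actual scripts.

```python
from datetime import datetime, timedelta

def stale_labels(labels, weeks, now=None):
    """Given [(name, access_date 'YYYY/MM/DD')] pairs, return the names
    of labels not accessed within the given number of weeks."""
    now = now or datetime.now()
    cutoff = now - timedelta(weeks=weeks)
    return [name for name, access in labels
            if datetime.strptime(access, "%Y/%m/%d") < cutoff]
```

Each returned name would then be passed to `p4 unload -l <label>` to move it to the unload depot.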