<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta name="generator" content="Asciidoctor 2.0.22"> <meta name="author" content="Perforce Professional Services"> <title>Perforce Helix Core Server Deployment Package (for UNIX/Linux)</title> <link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Open+Sans:300,300italic,400,400italic,600,600italic%7CNoto+Serif:400,400italic,700,700italic%7CDroid+Sans+Mono:400,700"> <style> /*! Asciidoctor default stylesheet | MIT License | https://asciidoctor.org */ /* Uncomment the following line when using as a custom stylesheet */ /* @import "https://fonts.googleapis.com/css?family=Open+Sans:300,300italic,400,400italic,600,600italic%7CNoto+Serif:400,400italic,700,700italic%7CDroid+Sans+Mono:400,700"; */ html{font-family:sans-serif;-webkit-text-size-adjust:100%} a{background:none} a:focus{outline:thin dotted} a:active,a:hover{outline:0} h1{font-size:2em;margin:.67em 0} b,strong{font-weight:bold} abbr{font-size:.9em} abbr[title]{cursor:help;border-bottom:1px dotted #dddddf;text-decoration:none} dfn{font-style:italic} hr{height:0} mark{background:#ff0;color:#000} code,kbd,pre,samp{font-family:monospace;font-size:1em} pre{white-space:pre-wrap} q{quotes:"\201C" "\201D" "\2018" "\2019"} small{font-size:80%} sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline} sup{top:-.5em} sub{bottom:-.25em} img{border:0} svg:not(:root){overflow:hidden} figure{margin:0} audio,video{display:inline-block} audio:not([controls]){display:none;height:0} fieldset{border:1px solid silver;margin:0 2px;padding:.35em .625em .75em} legend{border:0;padding:0} button,input,select,textarea{font-family:inherit;font-size:100%;margin:0} button,input{line-height:normal} button,select{text-transform:none} button,html 
input[type=button],input[type=reset],input[type=submit]{-webkit-appearance:button;cursor:pointer} button[disabled],html input[disabled]{cursor:default} input[type=checkbox],input[type=radio]{padding:0} button::-moz-focus-inner,input::-moz-focus-inner{border:0;padding:0} textarea{overflow:auto;vertical-align:top} table{border-collapse:collapse;border-spacing:0} *,::before,::after{box-sizing:border-box} html,body{font-size:100%} body{background:#fff;color:rgba(0,0,0,.8);padding:0;margin:0;font-family:"Noto Serif","DejaVu Serif",serif;line-height:1;position:relative;cursor:auto;-moz-tab-size:4;-o-tab-size:4;tab-size:4;word-wrap:anywhere;-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased} a:hover{cursor:pointer} img,object,embed{max-width:100%;height:auto} object,embed{height:100%} img{-ms-interpolation-mode:bicubic} .left{float:left!important} .right{float:right!important} .text-left{text-align:left!important} .text-right{text-align:right!important} .text-center{text-align:center!important} .text-justify{text-align:justify!important} .hide{display:none} img,object,svg{display:inline-block;vertical-align:middle} textarea{height:auto;min-height:50px} select{width:100%} .subheader,.admonitionblock td.content>.title,.audioblock>.title,.exampleblock>.title,.imageblock>.title,.listingblock>.title,.literalblock>.title,.stemblock>.title,.openblock>.title,.paragraph>.title,.quoteblock>.title,table.tableblock>.title,.verseblock>.title,.videoblock>.title,.dlist>.title,.olist>.title,.ulist>.title,.qlist>.title,.hdlist>.title{line-height:1.45;color:#7a2518;font-weight:400;margin-top:0;margin-bottom:.25em} div,dl,dt,dd,ul,ol,li,h1,h2,h3,#toctitle,.sidebarblock>.content>.title,h4,h5,h6,pre,form,p,blockquote,th,td{margin:0;padding:0} a{color:#2156a5;text-decoration:underline;line-height:inherit} a:hover,a:focus{color:#1d4b8f} a img{border:0} p{line-height:1.6;margin-bottom:1.25em;text-rendering:optimizeLegibility} p 
aside{font-size:.875em;line-height:1.35;font-style:italic} h1,h2,h3,#toctitle,.sidebarblock>.content>.title,h4,h5,h6{font-family:"Open Sans","DejaVu Sans",sans-serif;font-weight:300;font-style:normal;color:#ba3925;text-rendering:optimizeLegibility;margin-top:1em;margin-bottom:.5em;line-height:1.0125em} h1 small,h2 small,h3 small,#toctitle small,.sidebarblock>.content>.title small,h4 small,h5 small,h6 small{font-size:60%;color:#e99b8f;line-height:0} h1{font-size:2.125em} h2{font-size:1.6875em} h3,#toctitle,.sidebarblock>.content>.title{font-size:1.375em} h4,h5{font-size:1.125em} h6{font-size:1em} hr{border:solid #dddddf;border-width:1px 0 0;clear:both;margin:1.25em 0 1.1875em} em,i{font-style:italic;line-height:inherit} strong,b{font-weight:bold;line-height:inherit} small{font-size:60%;line-height:inherit} code{font-family:"Droid Sans Mono","DejaVu Sans Mono",monospace;font-weight:400;color:rgba(0,0,0,.9)} ul,ol,dl{line-height:1.6;margin-bottom:1.25em;list-style-position:outside;font-family:inherit} ul,ol{margin-left:1.5em} ul li ul,ul li ol{margin-left:1.25em;margin-bottom:0} ul.circle{list-style-type:circle} ul.disc{list-style-type:disc} ul.square{list-style-type:square} ul.circle ul:not([class]),ul.disc ul:not([class]),ul.square ul:not([class]){list-style:inherit} ol li ul,ol li ol{margin-left:1.25em;margin-bottom:0} dl dt{margin-bottom:.3125em;font-weight:bold} dl dd{margin-bottom:1.25em} blockquote{margin:0 0 1.25em;padding:.5625em 1.25em 0 1.1875em;border-left:1px solid #ddd} blockquote,blockquote p{line-height:1.6;color:rgba(0,0,0,.85)} @media screen and (min-width:768px){h1,h2,h3,#toctitle,.sidebarblock>.content>.title,h4,h5,h6{line-height:1.2} h1{font-size:2.75em} h2{font-size:2.3125em} h3,#toctitle,.sidebarblock>.content>.title{font-size:1.6875em} h4{font-size:1.4375em}} table{background:#fff;margin-bottom:1.25em;border:1px solid #dedede;word-wrap:normal} table thead,table tfoot{background:#f7f8f7} table thead tr th,table thead tr td,table tfoot tr 
th,table tfoot tr td{padding:.5em .625em .625em;font-size:inherit;color:rgba(0,0,0,.8);text-align:left} table tr th,table tr td{padding:.5625em .625em;font-size:inherit;color:rgba(0,0,0,.8)} table tr.even,table tr.alt{background:#f8f8f7} table thead tr th,table tfoot tr th,table tbody tr td,table tr td,table tfoot tr td{line-height:1.6} h1,h2,h3,#toctitle,.sidebarblock>.content>.title,h4,h5,h6{line-height:1.2;word-spacing:-.05em} h1 strong,h2 strong,h3 strong,#toctitle strong,.sidebarblock>.content>.title strong,h4 strong,h5 strong,h6 strong{font-weight:400} .center{margin-left:auto;margin-right:auto} .stretch{width:100%} .clearfix::before,.clearfix::after,.float-group::before,.float-group::after{content:" ";display:table} .clearfix::after,.float-group::after{clear:both} :not(pre).nobreak{word-wrap:normal} :not(pre).nowrap{white-space:nowrap} :not(pre).pre-wrap{white-space:pre-wrap} :not(pre):not([class^=L])>code{font-size:.9375em;font-style:normal!important;letter-spacing:0;padding:.1em .5ex;word-spacing:-.15em;background:#f7f7f8;border-radius:4px;line-height:1.45;text-rendering:optimizeSpeed} pre{color:rgba(0,0,0,.9);font-family:"Droid Sans Mono","DejaVu Sans Mono",monospace;line-height:1.45;text-rendering:optimizeSpeed} pre code,pre pre{color:inherit;font-size:inherit;line-height:inherit} pre>code{display:block} pre.nowrap,pre.nowrap pre{white-space:pre;word-wrap:normal} em em{font-style:normal} strong strong{font-weight:400} .keyseq{color:rgba(51,51,51,.8)} kbd{font-family:"Droid Sans Mono","DejaVu Sans Mono",monospace;display:inline-block;color:rgba(0,0,0,.8);font-size:.65em;line-height:1.45;background:#f7f7f7;border:1px solid #ccc;border-radius:3px;box-shadow:0 1px 0 rgba(0,0,0,.2),inset 0 0 0 .1em #fff;margin:0 .15em;padding:.2em .5em;vertical-align:middle;position:relative;top:-.1em;white-space:nowrap} .keyseq kbd:first-child{margin-left:0} .keyseq kbd:last-child{margin-right:0} .menuseq,.menuref{color:#000} .menuseq 
b:not(.caret),.menuref{font-weight:inherit} .menuseq{word-spacing:-.02em} .menuseq b.caret{font-size:1.25em;line-height:.8} .menuseq i.caret{font-weight:bold;text-align:center;width:.45em} b.button::before,b.button::after{position:relative;top:-1px;font-weight:400} b.button::before{content:"[";padding:0 3px 0 2px} b.button::after{content:"]";padding:0 2px 0 3px} p a>code:hover{color:rgba(0,0,0,.9)} #header,#content,#footnotes,#footer{width:100%;margin:0 auto;max-width:62.5em;*zoom:1;position:relative;padding-left:.9375em;padding-right:.9375em} #header::before,#header::after,#content::before,#content::after,#footnotes::before,#footnotes::after,#footer::before,#footer::after{content:" ";display:table} #header::after,#content::after,#footnotes::after,#footer::after{clear:both} #content{margin-top:1.25em} #content::before{content:none} #header>h1:first-child{color:rgba(0,0,0,.85);margin-top:2.25rem;margin-bottom:0} #header>h1:first-child+#toc{margin-top:8px;border-top:1px solid #dddddf} #header>h1:only-child{border-bottom:1px solid #dddddf;padding-bottom:8px} #header .details{border-bottom:1px solid #dddddf;line-height:1.45;padding-top:.25em;padding-bottom:.25em;padding-left:.25em;color:rgba(0,0,0,.6);display:flex;flex-flow:row wrap} #header .details span:first-child{margin-left:-.125em} #header .details span.email a{color:rgba(0,0,0,.85)} #header .details br{display:none} #header .details br+span::before{content:"\00a0\2013\00a0"} #header .details br+span.author::before{content:"\00a0\22c5\00a0";color:rgba(0,0,0,.85)} #header .details br+span#revremark::before{content:"\00a0|\00a0"} #header #revnumber{text-transform:capitalize} #header #revnumber::after{content:"\00a0"} #content>h1:first-child:not([class]){color:rgba(0,0,0,.85);border-bottom:1px solid #dddddf;padding-bottom:8px;margin-top:0;padding-top:1rem;margin-bottom:1.25rem} #toc{border-bottom:1px solid #e7e7e9;padding-bottom:.5em} #toc>ul{margin-left:.125em} #toc ul.sectlevel0>li>a{font-style:italic} #toc 
ul.sectlevel0 ul.sectlevel1{margin:.5em 0} #toc ul{font-family:"Open Sans","DejaVu Sans",sans-serif;list-style-type:none} #toc li{line-height:1.3334;margin-top:.3334em} #toc a{text-decoration:none} #toc a:active{text-decoration:underline} #toctitle{color:#7a2518;font-size:1.2em} @media screen and (min-width:768px){#toctitle{font-size:1.375em} body.toc2{padding-left:15em;padding-right:0} body.toc2 #header>h1:nth-last-child(2){border-bottom:1px solid #dddddf;padding-bottom:8px} #toc.toc2{margin-top:0!important;background:#f8f8f7;position:fixed;width:15em;left:0;top:0;border-right:1px solid #e7e7e9;border-top-width:0!important;border-bottom-width:0!important;z-index:1000;padding:1.25em 1em;height:100%;overflow:auto} #toc.toc2 #toctitle{margin-top:0;margin-bottom:.8rem;font-size:1.2em} #toc.toc2>ul{font-size:.9em;margin-bottom:0} #toc.toc2 ul ul{margin-left:0;padding-left:1em} #toc.toc2 ul.sectlevel0 ul.sectlevel1{padding-left:0;margin-top:.5em;margin-bottom:.5em} body.toc2.toc-right{padding-left:0;padding-right:15em} body.toc2.toc-right #toc.toc2{border-right-width:0;border-left:1px solid #e7e7e9;left:auto;right:0}} @media screen and (min-width:1280px){body.toc2{padding-left:20em;padding-right:0} #toc.toc2{width:20em} #toc.toc2 #toctitle{font-size:1.375em} #toc.toc2>ul{font-size:.95em} #toc.toc2 ul ul{padding-left:1.25em} body.toc2.toc-right{padding-left:0;padding-right:20em}} #content #toc{border:1px solid #e0e0dc;margin-bottom:1.25em;padding:1.25em;background:#f8f8f7;border-radius:4px} #content #toc>:first-child{margin-top:0} #content #toc>:last-child{margin-bottom:0} #footer{max-width:none;background:rgba(0,0,0,.8);padding:1.25em} #footer-text{color:hsla(0,0%,100%,.8);line-height:1.44} #content{margin-bottom:.625em} .sect1{padding-bottom:.625em} @media screen and (min-width:768px){#content{margin-bottom:1.25em} .sect1{padding-bottom:1.25em}} .sect1:last-child{padding-bottom:0} .sect1+.sect1{border-top:1px solid #e7e7e9} #content 
h1>a.anchor,h2>a.anchor,h3>a.anchor,#toctitle>a.anchor,.sidebarblock>.content>.title>a.anchor,h4>a.anchor,h5>a.anchor,h6>a.anchor{position:absolute;z-index:1001;width:1.5ex;margin-left:-1.5ex;display:block;text-decoration:none!important;visibility:hidden;text-align:center;font-weight:400} #content h1>a.anchor::before,h2>a.anchor::before,h3>a.anchor::before,#toctitle>a.anchor::before,.sidebarblock>.content>.title>a.anchor::before,h4>a.anchor::before,h5>a.anchor::before,h6>a.anchor::before{content:"\00A7";font-size:.85em;display:block;padding-top:.1em} #content h1:hover>a.anchor,#content h1>a.anchor:hover,h2:hover>a.anchor,h2>a.anchor:hover,h3:hover>a.anchor,#toctitle:hover>a.anchor,.sidebarblock>.content>.title:hover>a.anchor,h3>a.anchor:hover,#toctitle>a.anchor:hover,.sidebarblock>.content>.title>a.anchor:hover,h4:hover>a.anchor,h4>a.anchor:hover,h5:hover>a.anchor,h5>a.anchor:hover,h6:hover>a.anchor,h6>a.anchor:hover{visibility:visible} #content h1>a.link,h2>a.link,h3>a.link,#toctitle>a.link,.sidebarblock>.content>.title>a.link,h4>a.link,h5>a.link,h6>a.link{color:#ba3925;text-decoration:none} #content h1>a.link:hover,h2>a.link:hover,h3>a.link:hover,#toctitle>a.link:hover,.sidebarblock>.content>.title>a.link:hover,h4>a.link:hover,h5>a.link:hover,h6>a.link:hover{color:#a53221} details,.audioblock,.imageblock,.literalblock,.listingblock,.stemblock,.videoblock{margin-bottom:1.25em} details{margin-left:1.25rem} details>summary{cursor:pointer;display:block;position:relative;line-height:1.6;margin-bottom:.625rem;outline:none;-webkit-tap-highlight-color:transparent} details>summary::-webkit-details-marker{display:none} details>summary::before{content:"";border:solid transparent;border-left:solid;border-width:.3em 0 .3em .5em;position:absolute;top:.5em;left:-1.25rem;transform:translateX(15%)} details[open]>summary::before{border:solid transparent;border-top:solid;border-width:.5em .3em 0;transform:translateY(15%)} 
details>summary::after{content:"";width:1.25rem;height:1em;position:absolute;top:.3em;left:-1.25rem} .admonitionblock td.content>.title,.audioblock>.title,.exampleblock>.title,.imageblock>.title,.listingblock>.title,.literalblock>.title,.stemblock>.title,.openblock>.title,.paragraph>.title,.quoteblock>.title,table.tableblock>.title,.verseblock>.title,.videoblock>.title,.dlist>.title,.olist>.title,.ulist>.title,.qlist>.title,.hdlist>.title{text-rendering:optimizeLegibility;text-align:left;font-family:"Noto Serif","DejaVu Serif",serif;font-size:1rem;font-style:italic} table.tableblock.fit-content>caption.title{white-space:nowrap;width:0} .paragraph.lead>p,#preamble>.sectionbody>[class=paragraph]:first-of-type p{font-size:1.21875em;line-height:1.6;color:rgba(0,0,0,.85)} .admonitionblock>table{border-collapse:separate;border:0;background:none;width:100%} .admonitionblock>table td.icon{text-align:center;width:80px} .admonitionblock>table td.icon img{max-width:none} .admonitionblock>table td.icon .title{font-weight:bold;font-family:"Open Sans","DejaVu Sans",sans-serif;text-transform:uppercase} .admonitionblock>table td.content{padding-left:1.125em;padding-right:1.25em;border-left:1px solid #dddddf;color:rgba(0,0,0,.6);word-wrap:anywhere} .admonitionblock>table td.content>:last-child>:last-child{margin-bottom:0} .exampleblock>.content{border:1px solid #e6e6e6;margin-bottom:1.25em;padding:1.25em;background:#fff;border-radius:4px} .sidebarblock{border:1px solid #dbdbd6;margin-bottom:1.25em;padding:1.25em;background:#f3f3f2;border-radius:4px} .sidebarblock>.content>.title{color:#7a2518;margin-top:0;text-align:center} .exampleblock>.content>:first-child,.sidebarblock>.content>:first-child{margin-top:0} .exampleblock>.content>:last-child,.exampleblock>.content>:last-child>:last-child,.exampleblock>.content .olist>ol>li:last-child>:last-child,.exampleblock>.content .ulist>ul>li:last-child>:last-child,.exampleblock>.content 
.qlist>ol>li:last-child>:last-child,.sidebarblock>.content>:last-child,.sidebarblock>.content>:last-child>:last-child,.sidebarblock>.content .olist>ol>li:last-child>:last-child,.sidebarblock>.content .ulist>ul>li:last-child>:last-child,.sidebarblock>.content .qlist>ol>li:last-child>:last-child{margin-bottom:0} .literalblock pre,.listingblock>.content>pre{border-radius:4px;overflow-x:auto;padding:1em;font-size:.8125em} @media screen and (min-width:768px){.literalblock pre,.listingblock>.content>pre{font-size:.90625em}} @media screen and (min-width:1280px){.literalblock pre,.listingblock>.content>pre{font-size:1em}} .literalblock pre,.listingblock>.content>pre:not(.highlight),.listingblock>.content>pre[class=highlight],.listingblock>.content>pre[class^="highlight "]{background:#f7f7f8} .literalblock.output pre{color:#f7f7f8;background:rgba(0,0,0,.9)} .listingblock>.content{position:relative} .listingblock code[data-lang]::before{display:none;content:attr(data-lang);position:absolute;font-size:.75em;top:.425rem;right:.5rem;line-height:1;text-transform:uppercase;color:inherit;opacity:.5} .listingblock:hover code[data-lang]::before{display:block} .listingblock.terminal pre .command::before{content:attr(data-prompt);padding-right:.5em;color:inherit;opacity:.5} .listingblock.terminal pre .command:not([data-prompt])::before{content:"$"} .listingblock pre.highlightjs{padding:0} .listingblock pre.highlightjs>code{padding:1em;border-radius:4px} .listingblock pre.prettyprint{border-width:0} .prettyprint{background:#f7f7f8} pre.prettyprint .linenums{line-height:1.45;margin-left:2em} pre.prettyprint li{background:none;list-style-type:inherit;padding-left:0} pre.prettyprint li code[data-lang]::before{opacity:1} pre.prettyprint li:not(:first-child) code[data-lang]::before{display:none} table.linenotable{border-collapse:separate;border:0;margin-bottom:0;background:none} table.linenotable td[class]{color:inherit;vertical-align:top;padding:0;line-height:inherit;white-space:normal} 
table.linenotable td.code{padding-left:.75em} table.linenotable td.linenos,pre.pygments .linenos{border-right:1px solid;opacity:.35;padding-right:.5em;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none} pre.pygments span.linenos{display:inline-block;margin-right:.75em} .quoteblock{margin:0 1em 1.25em 1.5em;display:table} .quoteblock:not(.excerpt)>.title{margin-left:-1.5em;margin-bottom:.75em} .quoteblock blockquote,.quoteblock p{color:rgba(0,0,0,.85);font-size:1.15rem;line-height:1.75;word-spacing:.1em;letter-spacing:0;font-style:italic;text-align:justify} .quoteblock blockquote{margin:0;padding:0;border:0} .quoteblock blockquote::before{content:"\201c";float:left;font-size:2.75em;font-weight:bold;line-height:.6em;margin-left:-.6em;color:#7a2518;text-shadow:0 1px 2px rgba(0,0,0,.1)} .quoteblock blockquote>.paragraph:last-child p{margin-bottom:0} .quoteblock .attribution{margin-top:.75em;margin-right:.5ex;text-align:right} .verseblock{margin:0 1em 1.25em} .verseblock pre{font-family:"Open Sans","DejaVu Sans",sans-serif;font-size:1.15rem;color:rgba(0,0,0,.85);font-weight:300;text-rendering:optimizeLegibility} .verseblock pre strong{font-weight:400} .verseblock .attribution{margin-top:1.25rem;margin-left:.5ex} .quoteblock .attribution,.verseblock .attribution{font-size:.9375em;line-height:1.45;font-style:italic} .quoteblock .attribution br,.verseblock .attribution br{display:none} .quoteblock .attribution cite,.verseblock .attribution cite{display:block;letter-spacing:-.025em;color:rgba(0,0,0,.6)} .quoteblock.abstract blockquote::before,.quoteblock.excerpt blockquote::before,.quoteblock .quoteblock blockquote::before{display:none} .quoteblock.abstract blockquote,.quoteblock.abstract p,.quoteblock.excerpt blockquote,.quoteblock.excerpt p,.quoteblock .quoteblock blockquote,.quoteblock .quoteblock p{line-height:1.6;word-spacing:0} .quoteblock.abstract{margin:0 1em 1.25em;display:block} .quoteblock.abstract>.title{margin:0 0 
.375em;font-size:1.15em;text-align:center} .quoteblock.excerpt>blockquote,.quoteblock .quoteblock{padding:0 0 .25em 1em;border-left:.25em solid #dddddf} .quoteblock.excerpt,.quoteblock .quoteblock{margin-left:0} .quoteblock.excerpt blockquote,.quoteblock.excerpt p,.quoteblock .quoteblock blockquote,.quoteblock .quoteblock p{color:inherit;font-size:1.0625rem} .quoteblock.excerpt .attribution,.quoteblock .quoteblock .attribution{color:inherit;font-size:.85rem;text-align:left;margin-right:0} p.tableblock:last-child{margin-bottom:0} td.tableblock>.content{margin-bottom:1.25em;word-wrap:anywhere} td.tableblock>.content>:last-child{margin-bottom:-1.25em} table.tableblock,th.tableblock,td.tableblock{border:0 solid #dedede} table.grid-all>*>tr>*{border-width:1px} table.grid-cols>*>tr>*{border-width:0 1px} table.grid-rows>*>tr>*{border-width:1px 0} table.frame-all{border-width:1px} table.frame-ends{border-width:1px 0} table.frame-sides{border-width:0 1px} table.frame-none>colgroup+*>:first-child>*,table.frame-sides>colgroup+*>:first-child>*{border-top-width:0} table.frame-none>:last-child>:last-child>*,table.frame-sides>:last-child>:last-child>*{border-bottom-width:0} table.frame-none>*>tr>:first-child,table.frame-ends>*>tr>:first-child{border-left-width:0} table.frame-none>*>tr>:last-child,table.frame-ends>*>tr>:last-child{border-right-width:0} table.stripes-all>*>tr,table.stripes-odd>*>tr:nth-of-type(odd),table.stripes-even>*>tr:nth-of-type(even),table.stripes-hover>*>tr:hover{background:#f8f8f7} th.halign-left,td.halign-left{text-align:left} th.halign-right,td.halign-right{text-align:right} th.halign-center,td.halign-center{text-align:center} th.valign-top,td.valign-top{vertical-align:top} th.valign-bottom,td.valign-bottom{vertical-align:bottom} th.valign-middle,td.valign-middle{vertical-align:middle} table thead th,table tfoot th{font-weight:bold} tbody tr th{background:#f7f8f7} tbody tr th,tbody tr th p,tfoot tr th,tfoot tr th p{color:rgba(0,0,0,.8);font-weight:bold} 
p.tableblock>code:only-child{background:none;padding:0} p.tableblock{font-size:1em} ol{margin-left:1.75em} ul li ol{margin-left:1.5em} dl dd{margin-left:1.125em} dl dd:last-child,dl dd:last-child>:last-child{margin-bottom:0} li p,ul dd,ol dd,.olist .olist,.ulist .ulist,.ulist .olist,.olist .ulist{margin-bottom:.625em} ul.checklist,ul.none,ol.none,ul.no-bullet,ol.no-bullet,ol.unnumbered,ul.unstyled,ol.unstyled{list-style-type:none} ul.no-bullet,ol.no-bullet,ol.unnumbered{margin-left:.625em} ul.unstyled,ol.unstyled{margin-left:0} li>p:empty:only-child::before{content:"";display:inline-block} ul.checklist>li>p:first-child{margin-left:-1em} ul.checklist>li>p:first-child>.fa-square-o:first-child,ul.checklist>li>p:first-child>.fa-check-square-o:first-child{width:1.25em;font-size:.8em;position:relative;bottom:.125em} ul.checklist>li>p:first-child>input[type=checkbox]:first-child{margin-right:.25em} ul.inline{display:flex;flex-flow:row wrap;list-style:none;margin:0 0 .625em -1.25em} ul.inline>li{margin-left:1.25em} .unstyled dl dt{font-weight:400;font-style:normal} ol.arabic{list-style-type:decimal} ol.decimal{list-style-type:decimal-leading-zero} ol.loweralpha{list-style-type:lower-alpha} ol.upperalpha{list-style-type:upper-alpha} ol.lowerroman{list-style-type:lower-roman} ol.upperroman{list-style-type:upper-roman} ol.lowergreek{list-style-type:lower-greek} .hdlist>table,.colist>table{border:0;background:none} .hdlist>table>tbody>tr,.colist>table>tbody>tr{background:none} td.hdlist1,td.hdlist2{vertical-align:top;padding:0 .625em} td.hdlist1{font-weight:bold;padding-bottom:1.25em} td.hdlist2{word-wrap:anywhere} .literalblock+.colist,.listingblock+.colist{margin-top:-.5em} .colist td:not([class]):first-child{padding:.4em .75em 0;line-height:1;vertical-align:top} .colist td:not([class]):first-child img{max-width:none} .colist td:not([class]):last-child{padding:.25em 0} .thumb,.th{line-height:0;display:inline-block;border:4px solid #fff;box-shadow:0 0 0 1px #ddd} 
.imageblock.left{margin:.25em .625em 1.25em 0} .imageblock.right{margin:.25em 0 1.25em .625em} .imageblock>.title{margin-bottom:0} .imageblock.thumb,.imageblock.th{border-width:6px} .imageblock.thumb>.title,.imageblock.th>.title{padding:0 .125em} .image.left,.image.right{margin-top:.25em;margin-bottom:.25em;display:inline-block;line-height:0} .image.left{margin-right:.625em} .image.right{margin-left:.625em} a.image{text-decoration:none;display:inline-block} a.image object{pointer-events:none} sup.footnote,sup.footnoteref{font-size:.875em;position:static;vertical-align:super} sup.footnote a,sup.footnoteref a{text-decoration:none} sup.footnote a:active,sup.footnoteref a:active,#footnotes .footnote a:first-of-type:active{text-decoration:underline} #footnotes{padding-top:.75em;padding-bottom:.75em;margin-bottom:.625em} #footnotes hr{width:20%;min-width:6.25em;margin:-.25em 0 .75em;border-width:1px 0 0} #footnotes .footnote{padding:0 .375em 0 .225em;line-height:1.3334;font-size:.875em;margin-left:1.2em;margin-bottom:.2em} #footnotes .footnote a:first-of-type{font-weight:bold;text-decoration:none;margin-left:-1.05em} #footnotes .footnote:last-of-type{margin-bottom:0} #content #footnotes{margin-top:-.625em;margin-bottom:0;padding:.75em 0} div.unbreakable{page-break-inside:avoid} .big{font-size:larger} .small{font-size:smaller} .underline{text-decoration:underline} .overline{text-decoration:overline} .line-through{text-decoration:line-through} .aqua{color:#00bfbf} .aqua-background{background:#00fafa} .black{color:#000} .black-background{background:#000} .blue{color:#0000bf} .blue-background{background:#0000fa} .fuchsia{color:#bf00bf} .fuchsia-background{background:#fa00fa} .gray{color:#606060} .gray-background{background:#7d7d7d} .green{color:#006000} .green-background{background:#007d00} .lime{color:#00bf00} .lime-background{background:#00fa00} .maroon{color:#600000} .maroon-background{background:#7d0000} .navy{color:#000060} .navy-background{background:#00007d} 
.olive{color:#606000} .olive-background{background:#7d7d00} .purple{color:#600060} .purple-background{background:#7d007d} .red{color:#bf0000} .red-background{background:#fa0000} .silver{color:#909090} .silver-background{background:#bcbcbc} .teal{color:#006060} .teal-background{background:#007d7d} .white{color:#bfbfbf} .white-background{background:#fafafa} .yellow{color:#bfbf00} .yellow-background{background:#fafa00} span.icon>.fa{cursor:default} a span.icon>.fa{cursor:inherit} .admonitionblock td.icon [class^="fa icon-"]{font-size:2.5em;text-shadow:1px 1px 2px rgba(0,0,0,.5);cursor:default} .admonitionblock td.icon .icon-note::before{content:"\f05a";color:#19407c} .admonitionblock td.icon .icon-tip::before{content:"\f0eb";text-shadow:1px 1px 2px rgba(155,155,0,.8);color:#111} .admonitionblock td.icon .icon-warning::before{content:"\f071";color:#bf6900} .admonitionblock td.icon .icon-caution::before{content:"\f06d";color:#bf3400} .admonitionblock td.icon .icon-important::before{content:"\f06a";color:#bf0000} .conum[data-value]{display:inline-block;color:#fff!important;background:rgba(0,0,0,.8);border-radius:50%;text-align:center;font-size:.75em;width:1.67em;height:1.67em;line-height:1.67em;font-family:"Open Sans","DejaVu Sans",sans-serif;font-style:normal;font-weight:bold} .conum[data-value] *{color:#fff!important} .conum[data-value]+b{display:none} .conum[data-value]::after{content:attr(data-value)} pre .conum[data-value]{position:relative;top:-.125em} b.conum *{color:inherit!important} .conum:not([data-value]):empty{display:none} dt,th.tableblock,td.content,div.footnote{text-rendering:optimizeLegibility} h1,h2,p,td.content,span.alt,summary{letter-spacing:-.01em} p strong,td.content strong,div.footnote strong{letter-spacing:-.005em} p,blockquote,dt,td.content,td.hdlist1,span.alt,summary{font-size:1.0625rem} p{margin-bottom:1.25rem} .sidebarblock p,.sidebarblock dt,.sidebarblock td.content,p.tableblock{font-size:1em} 
.exampleblock>.content{background:#fffef7;border-color:#e0e0dc;box-shadow:0 1px 4px #e0e0dc} .print-only{display:none!important} @page{margin:1.25cm .75cm} @media print{*{box-shadow:none!important;text-shadow:none!important} html{font-size:80%} a{color:inherit!important;text-decoration:underline!important} a.bare,a[href^="#"],a[href^="mailto:"]{text-decoration:none!important} a[href^="http:"]:not(.bare)::after,a[href^="https:"]:not(.bare)::after{content:"(" attr(href) ")";display:inline-block;font-size:.875em;padding-left:.25em} abbr[title]{border-bottom:1px dotted} abbr[title]::after{content:" (" attr(title) ")"} pre,blockquote,tr,img,object,svg{page-break-inside:avoid} thead{display:table-header-group} svg{max-width:100%} p,blockquote,dt,td.content{font-size:1em;orphans:3;widows:3} h2,h3,#toctitle,.sidebarblock>.content>.title{page-break-after:avoid} #header,#content,#footnotes,#footer{max-width:none} #toc,.sidebarblock,.exampleblock>.content{background:none!important} #toc{border-bottom:1px solid #dddddf!important;padding-bottom:0!important} body.book #header{text-align:center} body.book #header>h1:first-child{border:0!important;margin:2.5em 0 1em} body.book #header .details{border:0!important;display:block;padding:0!important} body.book #header .details span:first-child{margin-left:0!important} body.book #header .details br{display:block} body.book #header .details br+span::before{content:none!important} body.book #toc{border:0!important;text-align:left!important;padding:0!important;margin:0!important} body.book #toc,body.book #preamble,body.book h1.sect0,body.book .sect1>h2{page-break-before:always} .listingblock code[data-lang]::before{display:block} #footer{padding:0 .9375em} .hide-on-print{display:none!important} .print-only{display:block!important} .hide-for-print{display:none!important} .show-for-print{display:inherit!important}} @media amzn-kf8,print{#header>h1:first-child{margin-top:1.25rem} .sect1{padding:0!important} .sect1+.sect1{border:0} 
#footer{background:none} #footer-text{color:rgba(0,0,0,.6);font-size:.9em}} @media amzn-kf8{#header,#content,#footnotes,#footer{padding:0}} </style> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css"> </head> <body class="book"> <div id="header"> <h1>Perforce Helix Core Server Deployment Package (for UNIX/Linux)</h1> <div class="details"> <span id="author" class="author">Perforce Professional Services</span><br> <span id="email" class="email"><a href="mailto:consulting@perforce.com">consulting@perforce.com</a></span><br> <span id="revnumber">version v2024.1,</span> <span id="revdate">2024-05-30</span> </div> <div id="toc" class="toc"> <div id="toctitle">Table of Contents</div> <ul class="sectlevel1"> <li><a href="#_preface">Preface</a></li> <li><a href="#_overview">1. Overview</a> <ul class="sectlevel2"> <li><a href="#_using_this_guide">1.1. Using this Guide</a></li> <li><a href="#_getting_the_sdp">1.2. Getting the SDP</a></li> <li><a href="#_checking_the_sdp_version">1.3. Checking the SDP Version</a></li> </ul> </li> <li><a href="#_setting_up_the_sdp">2. Setting up the SDP</a> <ul class="sectlevel2"> <li><a href="#_terminology_definitions">2.1. Terminology Definitions</a> <ul class="sectlevel3"> <li><a href="#_process">2.1.1. Process</a></li> <li><a href="#_instance">2.1.2. Instance</a></li> <li><a href="#_server_machine">2.1.3. Server machine</a></li> <li><a href="#_server_spec">2.1.4. Server spec</a></li> <li><a href="#_server">2.1.5. Server</a></li> </ul> </li> </ul> </li> <li><a href="#_pre_requisites">3. Pre-Requisites</a> <ul class="sectlevel2"> <li><a href="#_volume_layout_and_hardware">3.1. Volume Layout and Hardware</a></li> </ul> </li> <li><a href="#_maintaining_the_sdp_on_unix_linux">4. Maintaining the SDP on Unix / Linux</a> <ul class="sectlevel2"> <li><a href="#_backup_procedures">4.1. Backup procedures</a> <ul class="sectlevel3"> <li><a href="#_metadata_checkpoints">4.1.1. 
Metadata checkpoints</a></li> <li><a href="#_backup_of_the_partition_containing_depots_checkpoints_and_the_sdp_configuration">4.1.2. Backup of the partition containing depots, checkpoints, and the SDP configuration</a></li> </ul> </li> <li><a href="#_notifications">4.2. Notifications</a> <ul class="sectlevel3"> <li><a href="#_configuration">4.2.1. Configuration</a></li> <li><a href="#_notifications_to_monitor">4.2.2. Notifications to monitor</a> <ul class="sectlevel4"> <li><a href="#_daily_checkpoint">4.2.2.1. Daily Checkpoint</a></li> <li><a href="#_verify">4.2.2.2. Verify</a></li> <li><a href="#_sync_replica">4.2.2.3. Sync Replica</a></li> </ul> </li> </ul> </li> <li><a href="#_disk_usage">4.3. Disk usage</a></li> </ul> </li> <li><a href="#_installing_the_sdp_on_unix_linux">5. Installing the SDP on Unix / Linux</a> <ul class="sectlevel2"> <li><a href="#_manual_install">5.1. Manual Install</a> <ul class="sectlevel3"> <li><a href="#_manual_install_initial_setup">5.1.1. Manual Install Initial setup</a> <ul class="sectlevel4"> <li><a href="#_use_of_ssl">5.1.1.1. Use of SSL</a></li> <li><a href="#_configuration_script_mkdirs_cfg">5.1.1.2. Configuration script mkdirs.cfg</a></li> </ul> </li> <li><a href="#_sdp_init_scripts">5.1.2. SDP Init Scripts</a> <ul class="sectlevel4"> <li><a href="#_configuring_systemd">5.1.2.1. Configuring systemd</a> <ul class="sectlevel5"> <li><a href="#_configuring_systemd_for_p4d">Configuring systemd for p4d</a></li> <li><a href="#_configuring_systemd_for_p4p">Configuring systemd for p4p</a></li> <li><a href="#_configuring_systemd_for_p4dtg">Configuring systemd for p4dtg</a></li> <li><a href="#_configuring_systemd_p4broker_multiple_configs">Configuring systemd p4broker - multiple configs</a></li> </ul> </li> <li><a href="#_enabling_systemd_under_selinux">5.1.2.2. Enabling systemd under SELinux</a></li> <li><a href="#_configuring_sysv_init_scripts">5.1.2.3. 
Configuring SysV Init Scripts</a></li> </ul> </li> <li><a href="#_configuring_automatic_service_start_on_boot">5.1.3. Configuring Automatic Service Start on Boot</a> <ul class="sectlevel4"> <li><a href="#_automatic_start_for_systems_using_systemd">5.1.3.1. Automatic Start for Systems using systemd</a></li> <li><a href="#_for_systems_using_the_sysv_init_mechanism">5.1.3.2. For systems using the SysV init mechanism</a></li> </ul> </li> <li><a href="#_sdp_crontab_templates">5.1.4. SDP Crontab Templates</a></li> <li><a href="#_completing_your_server_configuration">5.1.5. Completing Your Server Configuration</a></li> <li><a href="#_validating_your_sdp_installation">5.1.6. Validating your SDP installation</a></li> </ul> </li> <li><a href="#_local_sdp_configuration">5.2. Local SDP Configuration</a> <ul class="sectlevel3"> <li><a href="#_load_order">5.2.1. Load Order</a></li> </ul> </li> <li><a href="#_setting_your_login_environment_for_convenience">5.3. Setting your login environment for convenience</a></li> <li><a href="#_configuring_protections_file_types_monitoring_and_security">5.4. Configuring protections, file types, monitoring and security</a></li> <li><a href="#_operating_system_configuration">5.5. Operating system configuration</a> <ul class="sectlevel3"> <li><a href="#_configuring_email_for_notifications">5.5.1. Configuring email for notifications</a></li> <li><a href="#_swarm_email_configuration">5.5.2. Swarm Email Configuration</a></li> <li><a href="#_configuring_pagerduty_for_notifications">5.5.3. Configuring PagerDuty for notifications</a> <ul class="sectlevel4"> <li><a href="#_prerequisites">5.5.3.1. Prerequisites</a></li> <li><a href="#_sdp_configuration">5.5.3.2. SDP Configuration</a></li> <li><a href="#_optional_variables">5.5.3.3. 
Optional variables</a> <ul class="sectlevel5"> <li><a href="#_example_additional_context_configuration">Example Additional Context Configuration</a></li> </ul> </li> </ul> </li> <li><a href="#_configuring_aws_simple_notification_service_sns_for_notifications">5.5.4. Configuring AWS Simple Notification Service (SNS) for notifications</a> <ul class="sectlevel4"> <li><a href="#_prerequisites_2">5.5.4.1. Prerequisites</a></li> <li><a href="#_sdp_configuration_2">5.5.4.2. SDP Configuration</a></li> <li><a href="#_example_iam_policy">5.5.4.3. Example IAM Policy</a></li> </ul> </li> </ul> </li> <li><a href="#_other_server_configurables">5.6. Other server configurables</a></li> <li><a href="#_archiving_configuration_files">5.7. Archiving configuration files</a></li> <li><a href="#_installing_swarm_triggers">5.8. Installing Swarm Triggers</a></li> </ul> </li> <li><a href="#_backup_replication_and_recovery">6. Backup, Replication, and Recovery</a> <ul class="sectlevel2"> <li><a href="#_typical_backup_procedure">6.1. Typical Backup Procedure</a></li> <li><a href="#_planning_for_ha_and_dr">6.2. Planning for HA and DR</a> <ul class="sectlevel3"> <li><a href="#_further_resources">6.2.1. Further Resources</a></li> <li><a href="#_creating_a_failover_replica_for_commit_or_edge_server">6.2.2. Creating a Failover Replica for Commit or Edge Server</a></li> <li><a href="#_what_is_a_failover_replica">6.2.3. What is a Failover Replica?</a></li> <li><a href="#_mandatory_vs_non_mandatory_standbys">6.2.4. Mandatory vs Non-mandatory Standbys</a></li> <li><a href="#_server_host_naming_conventions">6.2.5. Server host naming conventions</a></li> </ul> </li> <li><a href="#_full_one_way_replication">6.3. Full One-Way Replication</a> <ul class="sectlevel3"> <li><a href="#_replication_setup">6.3.1. Replication Setup</a></li> <li><a href="#_replication_setup_for_failover">6.3.2. Replication Setup for Failover</a></li> <li><a href="#_pre_requisites_for_failover">6.3.3. 
Pre-requisites for Failover</a></li> <li><a href="#_using_mkrep_sh">6.3.4. Using mkrep.sh</a> <ul class="sectlevel4"> <li><a href="#_sitetags_cfg">6.3.4.1. SiteTags.cfg</a></li> <li><a href="#_output_of_mkrep_sh">6.3.4.2. Output of <code>mkrep.sh</code></a></li> </ul> </li> <li><a href="#_addition_replication_setup">6.3.5. Addition Replication Setup</a></li> <li><a href="#_sdp_installation">6.3.6. SDP Installation</a> <ul class="sectlevel4"> <li><a href="#_ssh_key_setup">6.3.6.1. SSH Key Setup</a></li> </ul> </li> </ul> </li> <li><a href="#_recovery_procedures">6.4. Recovery Procedures</a> <ul class="sectlevel3"> <li><a href="#_recovering_a_master_server_from_a_checkpoint_and_journals">6.4.1. Recovering a master server from a checkpoint and journal(s)</a></li> <li><a href="#_recovering_a_replica_from_a_checkpoint">6.4.2. Recovering a replica from a checkpoint</a></li> <li><a href="#_recovering_from_a_tape_backup">6.4.3. Recovering from a tape backup</a></li> <li><a href="#_failover_to_a_replicated_standby_machine">6.4.4. Failover to a replicated standby machine</a></li> </ul> </li> </ul> </li> <li><a href="#_upgrades">7. Upgrades</a> <ul class="sectlevel2"> <li><a href="#_upgrade_order_sdp_first_then_helix_p4d">7.1. Upgrade Order: SDP first, then Helix P4D</a></li> <li><a href="#_sdp_and_p4d_version_compatibility">7.2. SDP and P4D Version Compatibility</a></li> <li><a href="#_upgrading_the_sdp">7.3. Upgrading the SDP</a> <ul class="sectlevel3"> <li><a href="#_sample_sdp_upgrade_procedure">7.3.1. Sample SDP Upgrade Procedure</a></li> <li><a href="#_sdp_legacy_upgrade_procedure">7.3.2. SDP Legacy Upgrade Procedure</a></li> </ul> </li> <li><a href="#_upgrading_helix_software_with_the_sdp">7.4. Upgrading Helix Software with the SDP</a> <ul class="sectlevel3"> <li><a href="#_get_latest_helix_binaries">7.4.1. Get Latest Helix Binaries</a></li> <li><a href="#_upgrade_each_instance">7.4.2. 
Upgrade Each Instance</a></li> <li><a href="#_global_topology_upgrades_outer_to_inner">7.4.3. Global Topology Upgrades - Outer to Inner</a></li> </ul> </li> <li><a href="#_database_modifications">7.5. Database Modifications</a></li> </ul> </li> <li><a href="#_maximizing_server_performance">8. Maximizing Server Performance</a> <ul class="sectlevel2"> <li><a href="#_ensure_transparent_huge_pages_thp_is_turned_off">8.1. Ensure Transparent Huge Pages (THP) is turned off</a></li> <li><a href="#_putting_server_locks_directory_into_ram">8.2. Putting server.locks directory into RAM</a></li> <li><a href="#_installing_monitoring_packages">8.3. Installing monitoring packages</a></li> <li><a href="#_optimizing_the_database_files">8.4. Optimizing the database files</a></li> <li><a href="#_p4v_performance_settings">8.5. P4V Performance Settings</a></li> <li><a href="#_proactive_performance_maintenance">8.6. Proactive Performance Maintenance</a> <ul class="sectlevel3"> <li><a href="#_limiting_large_requests">8.6.1. Limiting large requests</a></li> <li><a href="#_offloading_remote_syncs">8.6.2. Offloading remote syncs</a></li> </ul> </li> </ul> </li> <li><a href="#_tools_and_scripts">9. Tools and Scripts</a> <ul class="sectlevel2"> <li><a href="#_general_sdp_usage">9.1. General SDP Usage</a> <ul class="sectlevel3"> <li><a href="#_linux">9.1.1. Linux</a></li> <li><a href="#_monitoring_sdp_activities">9.1.2. Monitoring SDP activities</a></li> </ul> </li> <li><a href="#_upgrade_scripts">9.2. Upgrade Scripts</a> <ul class="sectlevel3"> <li><a href="#_get_helix_binaries_sh">9.2.1. get_helix_binaries.sh</a></li> <li><a href="#_upgrade_sh">9.2.2. upgrade.sh</a></li> <li><a href="#_sdp_upgrade_sh">9.2.3. sdp_upgrade.sh</a></li> </ul> </li> <li><a href="#_legacy_upgrade_scripts">9.3. Legacy Upgrade Scripts</a> <ul class="sectlevel3"> <li><a href="#_clear_depot_map_fields_sh">9.3.1. clear_depot_Map_fields.sh</a></li> </ul> </li> <li><a href="#_core_scripts">9.4. 
Core Scripts</a> <ul class="sectlevel3"> <li><a href="#_p4_vars">9.4.1. p4_vars</a></li> <li><a href="#_p4_instance_vars">9.4.2. p4_<instance>.vars</a></li> <li><a href="#_p4master_run">9.4.3. p4master_run</a></li> <li><a href="#_daily_checkpoint_sh">9.4.4. daily_checkpoint.sh</a></li> <li><a href="#_keep_offline_db_current_sh">9.4.5. keep_offline_db_current.sh</a></li> <li><a href="#_live_checkpoint_sh">9.4.6. live_checkpoint.sh</a></li> <li><a href="#_mkrep_sh">9.4.7. mkrep.sh</a></li> <li><a href="#_p4verify_sh">9.4.8. p4verify.sh</a></li> <li><a href="#_p4login">9.4.9. p4login</a></li> <li><a href="#_p4d_instance_init">9.4.10. p4d_<instance>_init</a></li> <li><a href="#_recreate_offline_db_sh">9.4.11. recreate_offline_db.sh</a></li> <li><a href="#_refresh_p4root_from_offline_db_sh">9.4.12. refresh_P4ROOT_from_offline_db.sh</a></li> <li><a href="#_run_if_master_sh">9.4.13. run_if_master.sh</a></li> <li><a href="#_run_if_edge_sh">9.4.14. run_if_edge.sh</a></li> <li><a href="#_run_if_replica_sh">9.4.15. run_if_replica.sh</a></li> <li><a href="#_run_if_masteredgereplica_sh">9.4.16. run_if_master/edge/replica.sh</a></li> <li><a href="#_sdp_health_check_sh">9.4.17. sdp_health_check.sh</a></li> </ul> </li> <li><a href="#_more_server_scripts">9.5. More Server Scripts</a> <ul class="sectlevel3"> <li><a href="#_p4_crontab">9.5.1. p4.crontab</a></li> <li><a href="#_verify_sdp_sh">9.5.2. verify_sdp.sh</a></li> </ul> </li> <li><a href="#_other_scripts_and_files">9.6. Other Scripts and Files</a> <ul class="sectlevel3"> <li><a href="#_backup_functions_sh">9.6.1. backup_functions.sh</a></li> <li><a href="#_broker_rotate_sh">9.6.2. broker_rotate.sh</a></li> <li><a href="#_ccheck_sh">9.6.3. ccheck.sh</a></li> <li><a href="#_edge_dump_sh">9.6.4. edge_dump.sh</a></li> <li><a href="#_edge_vars">9.6.5. edge_vars</a></li> <li><a href="#_edge_shelf_replicate_sh">9.6.6. edge_shelf_replicate.sh</a></li> <li><a href="#_load_checkpoint_sh">9.6.7. 
load_checkpoint.sh</a></li> <li><a href="#_gen_default_broker_cfg_sh">9.6.8. gen_default_broker_cfg.sh</a></li> <li><a href="#_journal_watch_sh">9.6.9. journal_watch.sh</a></li> <li><a href="#_kill_idle_sh">9.6.10. kill_idle.sh</a></li> <li><a href="#_mkdirs_sh">9.6.11. mkdirs.sh</a></li> <li><a href="#_p4d_base">9.6.12. p4d_base</a></li> <li><a href="#_p4broker_base">9.6.13. p4broker_base</a></li> <li><a href="#_p4ftpd_base">9.6.14. p4ftpd_base</a></li> <li><a href="#_p4p_base">9.6.15. p4p_base</a></li> <li><a href="#_p4pcm_pl">9.6.16. p4pcm.pl</a></li> <li><a href="#_p4review_py">9.6.17. p4review.py</a></li> <li><a href="#_p4review2_py">9.6.18. p4review2.py</a></li> <li><a href="#_proxy_rotate_sh">9.6.19. proxy_rotate.sh</a></li> <li><a href="#_p4sanity_check_sh">9.6.20. p4sanity_check.sh</a></li> <li><a href="#_p4dstate_sh">9.6.21. p4dstate.sh</a></li> <li><a href="#_ps_functions_sh">9.6.22. ps_functions.sh</a></li> <li><a href="#_pull_sh">9.6.23. pull.sh</a></li> <li><a href="#_pull_test_sh">9.6.24. pull_test.sh</a></li> <li><a href="#_purge_revisions_sh">9.6.25. purge_revisions.sh</a></li> <li><a href="#_recover_edge_sh">9.6.26. recover_edge.sh</a></li> <li><a href="#_replica_cleanup_sh">9.6.27. replica_cleanup.sh</a></li> <li><a href="#_replica_status_sh">9.6.28. replica_status.sh</a></li> <li><a href="#_request_replica_checkpoint_sh">9.6.29. request_replica_checkpoint.sh</a></li> <li><a href="#_rotate_journal_sh">9.6.30. rotate_journal.sh</a></li> <li><a href="#_submit_sh">9.6.31. submit.sh</a></li> <li><a href="#_submit_test_sh">9.6.32. submit_test.sh</a></li> <li><a href="#_sync_replica_sh">9.6.33. sync_replica.sh</a></li> <li><a href="#_templates_directory">9.6.34. templates directory</a></li> <li><a href="#_update_limits_py">9.6.35. update_limits.py</a></li> </ul> </li> </ul> </li> <li><a href="#_sample_procedures">10. Sample Procedures</a> <ul class="sectlevel2"> <li><a href="#_installing_python3_and_p4python">10.1. 
Installing Python3 and P4Python</a></li> <li><a href="#_installing_checkcasetrigger_py">10.2. Installing CheckCaseTrigger.py</a></li> <li><a href="#_swarm_jira_link">10.3. Swarm JIRA Link</a></li> <li><a href="#_reseeding_an_edge_server">10.4. Reseeding an Edge Server</a></li> <li><a href="#_edge_reseed_scenario">10.5. Edge Reseed Scenario</a> <ul class="sectlevel3"> <li><a href="#_step_0_preflight_checks">10.5.1. Step 0: Preflight Checks</a></li> <li><a href="#_step_1_create_new_edge_seed_checkpoint">10.5.2. Step 1: Create New Edge Seed Checkpoint</a></li> <li><a href="#_step_2_transfer_edge_seed">10.5.3. Step 2: Transfer Edge Seed</a></li> <li><a href="#_step_3_reseed_the_edge">10.5.4. Step 3: Reseed the Edge</a></li> </ul> </li> </ul> </li> <li><a href="#_sdp_package_contents_and_planning">Appendix A: SDP Package Contents and Planning</a> <ul class="sectlevel2"> <li><a href="#_volume_layout_and_server_planning">A.1. Volume Layout and Server Planning</a> <ul class="sectlevel3"> <li><a href="#_memory_and_cpu">A.1.1. Memory and CPU</a></li> <li><a href="#_directory_structure_configuration_script_for_linuxunix">A.1.2. Directory Structure Configuration Script for Linux/Unix</a></li> <li><a href="#_p4d_versions_and_links">A.1.3. P4D versions and links</a></li> <li><a href="#_case_insensitive_p4d_on_unix">A.1.4. Case Insensitive P4D on Unix</a></li> </ul> </li> </ul> </li> <li><a href="#_the_journalprefix_standard">Appendix B: The journalPrefix Standard</a> <ul class="sectlevel2"> <li><a href="#_sdp_scripts_that_set_journalprefix">B.1. SDP Scripts that set <code>journalPrefix</code></a></li> <li><a href="#_first_form_of_journalprefix_value">B.2. First Form of <code>journalPrefix</code> Value</a> <ul class="sectlevel3"> <li><a href="#_detail_on_completely_unfiltered">B.2.1. Detail on "Completely Unfiltered"</a></li> </ul> </li> <li><a href="#_second_form_of_journalprefix_value">B.3. 
Second Form of <code>journalPrefix</code> Value</a></li> <li><a href="#_scripts_for_maintaining_the_offline_db">B.4. Scripts for Maintaining the <code>offline_db</code></a></li> <li><a href="#_sdp_structure_and_journalprefix">B.5. SDP Structure and <code>journalPrefix</code></a></li> <li><a href="#_replicas_of_edge_servers">B.6. Replicas of Edge Servers</a></li> <li><a href="#_goals_of_the_journalprefix_standard">B.7. Goals of the <code>journalPrefix</code> Standard</a></li> </ul> </li> <li><a href="#_server_spec_naming_standard">Appendix C: Server Spec Naming Standard</a> <ul class="sectlevel2"> <li><a href="#_general_form">C.1. General Form</a> <ul class="sectlevel3"> <li><a href="#_commit_server_spec">C.1.1. Commit Server Spec</a></li> <li><a href="#_helix_server_tags">C.1.2. Helix Server Tags</a></li> <li><a href="#_replica_type_tags">C.1.3. Replica Type Tags</a> <ul class="sectlevel4"> <li><a href="#_replication_notes">C.1.3.1. Replication Notes</a></li> </ul> </li> <li><a href="#_site_tags">C.1.4. Site Tags</a></li> </ul> </li> <li><a href="#_example_server_specs">C.2. Example Server Specs</a></li> <li><a href="#_implications_of_replication_filtering">C.3. Implications of Replication Filtering</a></li> <li><a href="#_other_replica_types">C.4. Other Replica Types</a></li> <li><a href="#_the_sdp_mkrep_sh_script">C.5. The SDP <code>mkrep.sh</code> script</a></li> </ul> </li> <li><a href="#_frequently_asked_questions">Appendix D: Frequently Asked Questions</a> <ul class="sectlevel2"> <li><a href="#_how_do_i_tell_what_version_of_the_sdp_i_have">D.1. How do I tell what version of the SDP I have?</a></li> <li><a href="#_how_do_i_change_super_user_password">D.2. How do I change super user password?</a></li> <li><a href="#_can_i_remove_the_perforce_user">D.3. Can I remove the perforce user?</a></li> <li><a href="#_can_i_clone_a_vm_to_create_a_standby_replica">D.4. 
Can I clone a VM to create a standby replica?</a></li> </ul> </li> <li><a href="#_troubleshooting_guide">Appendix E: Troubleshooting Guide</a> <ul class="sectlevel2"> <li><a href="#_daily_checkpoint_sh_fails">E.1. Daily_checkpoint.sh fails</a> <ul class="sectlevel3"> <li><a href="#_last_checkpoint_not_complete_check_the_backup_process_or_contact_support">E.1.1. Last checkpoint not complete. Check the backup process or contact support.</a></li> </ul> </li> <li><a href="#_replication_appears_to_be_stalled">E.2. Replication appears to be stalled</a> <ul class="sectlevel3"> <li><a href="#_resolution">E.2.1. Resolution</a></li> <li><a href="#_make_errors_visible">E.2.2. Make Errors Visible</a></li> <li><a href="#_remove_state_file">E.2.3. Remove state file</a></li> </ul> </li> <li><a href="#_archive_pull_queue_appears_to_be_stalled">E.3. Archive pull queue appears to be stalled</a> <ul class="sectlevel3"> <li><a href="#_resolutions">E.3.1. Resolutions</a> <ul class="sectlevel4"> <li><a href="#_remove_and_re_queue">E.3.1.1. Remove and re-queue</a></li> <li><a href="#_check_for_verify_errors_on_the_parent_server">E.3.1.2. Check for verify errors on the parent server</a></li> </ul> </li> </ul> </li> <li><a href="#_cant_login_to_edge_server">E.4. Can’t login to edge server</a> <ul class="sectlevel3"> <li><a href="#_resolution_2">E.4.1. Resolution</a></li> </ul> </li> <li><a href="#_updating_offline_db_for_an_edge_server">E.5. Updating offline_db for an edge server</a> <ul class="sectlevel3"> <li><a href="#_resolution_3">E.5.1. Resolution</a></li> </ul> </li> <li><a href="#_journal_out_of_sequence_in_checkpoint_log_file">E.6. Journal out of sequence in checkpoint.log file</a></li> <li><a href="#_unexpected_end_of_file_in_replica_daily_sync">E.7. 
Unexpected end of file in replica daily sync</a></li> </ul> </li> <li><a href="#_starting_and_stopping_services">Appendix F: Starting and Stopping Services</a> <ul class="sectlevel2"> <li><a href="#_sdp_service_management_with_the_systemd_init_mechanism">F.1. SDP Service Management with the systemd init mechanism</a> <ul class="sectlevel3"> <li><a href="#_brokers_and_proxies">F.1.1. Brokers and Proxies</a></li> <li><a href="#_root_or_sudo_required_with_systemd">F.1.2. Root or sudo required with systemd</a></li> </ul> </li> <li><a href="#_sdp_service_management_with_sysv_init_mechanism">F.2. SDP Service Management with SysV init mechanism</a></li> </ul> </li> <li><a href="#_brokers_in_stack_topology">Appendix G: Brokers in Stack Topology</a></li> <li><a href="#_sdp_health_checks">Appendix H: SDP Health Checks</a></li> </ul> </div> </div> <div id="content"> <div class="sect1"> <h2 id="_preface">Preface</h2> <div class="sectionbody"> <div class="paragraph"> <p>The Server Deployment Package (SDP) is the implementation of Perforce’s recommendations for operating and managing a production Perforce Helix Core Version Control System. It is intended to provide the Helix Core administration team with tools to help:</p> </div> <div class="ulist"> <ul> <li> <p>Simplify Management</p> </li> <li> <p>Simplify Upgrades</p> </li> <li> <p>High Availability (HA)</p> </li> <li> <p>Disaster Recovery (DR)</p> </li> <li> <p>Fast and Safe Upgrades</p> </li> <li> <p>Production Focus</p> </li> <li> <p>Best Practice Configurables</p> </li> <li> <p>Optimal Performance, Data Safety, and Simplified Backup</p> </li> </ul> </div> <div class="paragraph"> <p>This guide is intended to provide instructions for setting up the SDP to provide users of Helix Core with the above benefits.</p> </div> <div class="paragraph"> <p>This guide assumes some familiarity with Perforce and does not duplicate the basic information in the Perforce user documentation. 
This document only relates to the Server Deployment Package (SDP). All other Helix Core documentation can be found here: <a href="https://www.perforce.com/support/self-service-resources/documentation">Perforce Support Documentation</a>.</p> </div> <div class="paragraph"> <p><strong>Please Give Us Feedback</strong></p> </div> <div class="paragraph"> <p>Perforce welcomes feedback from our users. Please send any suggestions for improving this document or the SDP to <a href="mailto:consulting@perforce.com">consulting@perforce.com</a>.</p> </div> </div> </div> <div class="sect1"> <h2 id="_overview">1. Overview</h2> <div class="sectionbody"> <div class="paragraph"> <p>The SDP has four main components:</p> </div> <div class="ulist"> <ul> <li> <p>Hardware and storage layout recommendations for Perforce.</p> </li> <li> <p>Scripts to automate critical maintenance activities.</p> </li> <li> <p>Scripts to aid the setup and management of replication (including failover for DR/HA).</p> </li> <li> <p>Scripts to assist with routine administration tasks.</p> </li> </ul> </div> <div class="paragraph"> <p>Each of these components is covered, in detail, in this guide.</p> </div> <div class="sect2"> <h3 id="_using_this_guide">1.1. 
Using this Guide</h3> <div class="paragraph"> <p><a href="#_setting_up_the_sdp">Chapter 2, <em>Setting up the SDP</em></a> describes concepts, terminology, and pre-requisites.</p> </div> <div class="paragraph"> <p><a href="#_maintaining_the_sdp_on_unix_linux">Chapter 4, <em>Maintaining the SDP on Unix / Linux</em></a> covers administrative duties associated with keeping an installation of the SDP in good shape.</p> </div> <div class="paragraph"> <p><a href="#_installing_the_sdp_on_unix_linux">Chapter 5, <em>Installing the SDP on Unix / Linux</em></a> covers what you need to know to set up a Helix Core server on a Unix platform.</p> </div> <div class="paragraph"> <p><a href="#_backup_replication_and_recovery">Chapter 6, <em>Backup, Replication, and Recovery</em></a> covers the backup, restoration, and replication of Helix Core, including guidance on planning for HA (High Availability) and DR (Disaster Recovery).</p> </div> <div class="paragraph"> <p><a href="#_upgrades">Chapter 7, <em>Upgrades</em></a> covers upgrades of <code>p4d</code> and related Helix Core executables.</p> </div> <div class="paragraph"> <p><a href="#_upgrading_the_sdp">Section 7.3, “Upgrading the SDP”</a> covers upgrading the SDP itself.</p> </div> <div class="paragraph"> <p><a href="#_maximizing_server_performance">Chapter 8, <em>Maximizing Server Performance</em></a> covers optimizations and proactive actions.</p> </div> <div class="paragraph"> <p><a href="#_tools_and_scripts">Chapter 9, <em>Tools and Scripts</em></a> covers all the scripts used within the SDP in detail.</p> </div> <div class="paragraph"> <p><a href="#_sdp_package_contents_and_planning">Appendix A, <em>SDP Package Contents and Planning</em></a> describes the details of the SDP package.</p> </div> <div class="paragraph"> <p><a href="#_the_journalprefix_standard">Appendix B, <em>The journalPrefix Standard</em></a> describes the standard for setting the <code>journalPrefix</code> configurable.</p> </div> <div 
class="paragraph"> <p><a href="#_server_spec_naming_standard">Appendix C, <em>Server Spec Naming Standard</em></a> describes the standard for naming 'server' specs created with the <code>p4 server</code> command.</p> </div> <div class="paragraph"> <p><a href="#_frequently_asked_questions">Appendix D, <em>Frequently Asked Questions</em></a> and <a href="#_troubleshooting_guide">Appendix E, <em>Troubleshooting Guide</em></a> are useful for other questions.</p> </div> <div class="paragraph"> <p><a href="#_starting_and_stopping_services">Appendix F, <em>Starting and Stopping Services</em></a> gives an overview of starting and stopping services with common init mechanisms, <code>systemd</code> and SysV.</p> </div> </div> <div class="sect2"> <h3 id="_getting_the_sdp">1.2. Getting the SDP</h3> <div class="paragraph"> <p>The SDP is downloaded as a single zipped tar file. The latest version can be found at: <a href="https://swarm.workshop.perforce.com/projects/perforce-software-sdp/files/downloads" class="bare">https://swarm.workshop.perforce.com/projects/perforce-software-sdp/files/downloads</a></p> </div> <div class="paragraph"> <p>The file to download containing the latest SDP is consistently named <code>sdp.Unix.tgz</code>. A copy of this file also exists with a version-identifying name, e.g. <code>sdp.Unix.2021.2.28649.tgz</code>.</p> </div> <div class="paragraph"> <p>The direct download link to use with <code>curl</code> or <code>wget</code> is illustrated with this command:</p> </div> <div class="literalblock"> <div class="content"> <pre>curl -L -O https://swarm.workshop.perforce.com/projects/perforce-software-sdp/download/downloads/sdp.Unix.tgz</pre> </div> </div> </div> <div class="sect2"> <h3 id="_checking_the_sdp_version">1.3. Checking the SDP Version</h3> <div class="paragraph"> <p>Once installed, the SDP <code>Version</code> file exists as <code>/p4/sdp/Version</code>. This is a simple text file that contains the SDP version string. 
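<div class="paragraph"> <p>Before the SDP is installed, the same version string can be read directly from a freshly extracted tarball. This is a sketch, assuming the tarball has been downloaded to the current directory as described in the previous section:</p> </div> <div class="literalblock"> <div class="content"> <pre>tar -xzf sdp.Unix.tgz
cat sdp/Version</pre> </div> </div>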
The version can be checked using a command like <code>cat</code>, as in this sample command:</p> </div> <div class="literalblock"> <div class="content"> <pre>$ cat /p4/sdp/Version
Rev. SDP/MultiArch/2020.1/27955 (2021/08/13)</pre> </div> </div> <div class="paragraph"> <p>That string can be found in the Change History section of the <a href="ReleaseNotes.html">SDP Release Notes</a>. This can be useful for determining whether your SDP is the latest available, and for seeing what features are included.</p> </div> <div class="paragraph"> <p>When an SDP tarball is extracted, the <code>Version</code> file appears in the top-level <code>sdp</code> directory.</p> </div> </div> </div> </div> <div class="sect1"> <h2 id="_setting_up_the_sdp">2. Setting up the SDP</h2> <div class="sectionbody"> <div class="paragraph"> <p>This section tells you how to configure the SDP to set up a new Helix Core server.</p> </div> <div class="paragraph"> <p>The SDP can be installed on multiple server machines, and each server machine can host one or more Helix Core server instances. See <a href="#_terminology_definitions">Section 2.1, “Terminology Definitions”</a> for detailed definitions of terms.</p> </div> <div class="paragraph"> <p>The SDP implements a standard logical directory structure which can be implemented flexibly on one or many physical server machines.</p> </div> <div class="paragraph"> <p>Additional relevant information is available in the <a href="https://www.perforce.com/perforce/doc.current/manuals/p4sag/Content/P4SAG/Home-p4sag.html">System Administrator Guide</a>.</p> </div> <div class="sect2"> <h3 id="_terminology_definitions">2.1. Terminology Definitions</h3> <div class="paragraph"> <p>Key terms are defined in this section.</p> </div> <div class="sect3"> <h4 id="_process">2.1.1. Process</h4> <div class="paragraph"> <p>A <em>process</em> is a running operating system process with a process identifier (PID) known to the operating system. 
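<div class="paragraph"> <p>On a running SDP machine, such processes and their PIDs can be listed with standard operating system tools. A sketch (the process-name pattern is illustrative):</p> </div> <div class="literalblock"> <div class="content"> <pre>ps -ef | grep -E 'p4d_|p4p_|p4broker_' | grep -v grep</pre> </div> </div>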
It should normally be qualified as to what type of process it is:</p> </div> <div class="ulist"> <ul> <li> <p><strong>p4d process</strong> - a running p4d process with its own copy of db.* files. P4D processes may be of any one of the standard types, e.g. standard or commit-server, and any of the valid replica types: standby, forwarding-replica, edge-server, etc.</p> </li> <li> <p><strong>p4p process</strong> - a proxy process talking to a single upstream p4d instance.</p> </li> <li> <p><strong>p4broker process</strong> - a p4broker process talking to a single upstream p4d instance.</p> </li> </ul> </div> </div> <div class="sect3"> <h4 id="_instance">2.1.2. Instance</h4> <div class="paragraph"> <p>An <em>instance</em> is a logically independent set of Helix Core data and metadata, represented by entities such as changelist numbers and depot paths, and existing on a storage device in the form of db.* files (metadata) and versioned files (archive files). Thus, the instance is a reference to the logical data set, with its set of users, files, and changelists.</p> </div> <div class="paragraph"> <p>Some facts about SDP instance names:</p> </div> <div class="ulist"> <ul> <li> <p>The default SDP instance name is simply <code>1</code> (the digit 'one').</p> </li> <li> <p>The instance name is mainly of interest to administrators, not regular users.</p> </li> <li> <p>Because instance names are typed often in various admin operational tasks:</p> <div class="ulist"> <ul> <li> <p>Instance names are best kept short. A length of 1-5 characters is recommended, with a maximum of 32 characters.</p> </li> <li> <p>Lowercase letters are preferred, and required at some sites, but not required by the SDP.</p> </li> </ul> </div> </li> <li> <p>SDP instance names can be any alphanumeric name. Underscores (<code>_</code>) and dashes (<code>-</code>) are also allowed. 
Dots, spaces, and other special characters should not be used in SDP instance names.</p> </li> <li> <p>An <strong>instance</strong> has a well-defined name, embedded in its P4ROOT value. If the P4ROOT is <code>/p4/ace/root</code>, for example, <code>ace</code> is the instance name.</p> </li> <li> <p>An <strong>instance</strong> must operate with at least one p4d process on a master server machine. The instance may also extend to many machines running additional p4d, p4broker, and p4p processes. The additional p4d processes can be replicas of various types, including standby, edge, and filtered forwarding replicas (to name a few).</p> </li> <li> <p>On all machines on which an instance is physically extended, including proxy, broker, and replica machines, the instance exists as <code>/p4/N</code>, where <code>N</code> is the instance name.</p> </li> <li> <p>There can be more than one instance on a machine.</p> </li> </ul> </div> </div> <div class="sect3"> <h4 id="_server_machine">2.1.3. Server machine</h4> <div class="paragraph"> <p>A <em>server machine</em> is a host machine (virtual or physical) with an operating system, on which any number of p4d or other processes may be running.</p> </div> </div> <div class="sect3"> <h4 id="_server_spec">2.1.4. Server spec</h4> <div class="paragraph"> <p>A <em>server spec</em> (or <em>server specification</em>) is the entity managed using the <code>p4 server</code> command (and the plural <code>p4 servers</code> to list all of them).</p> </div> </div> <div class="sect3"> <h4 id="_server">2.1.5. Server</h4> <div class="paragraph"> <p>A <em>server</em> is an ambiguous term. It should be fully qualified, as its meaning when used on its own (unadorned) depends on context. 
It may mean any one of:</p> </div> <div class="ulist"> <ul> <li> <p>Server machine</p> </li> <li> <p>p4d process (this is the most common usage; assume this meaning unless otherwise specified)</p> </li> <li> <p>Any other type of instance!</p> </li> </ul> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> The phrase "p4d server" is unclear as to whether you are talking about a p4d process, or a server machine on which the p4d process runs, or a combination of both (since there may be a single instance on a single machine, or many instances on a machine, etc). Make sure you understand what is being referred to! </td> </tr> </table> </div> </div> </div> </div> </div> <div class="sect1"> <h2 id="_pre_requisites">3. Pre-Requisites</h2> <div class="sectionbody"> <div class="olist arabic"> <ol class="arabic"> <li> <p>The Helix Core binaries (p4d, p4, p4broker, p4p) have been downloaded (see <a href="#_installing_the_sdp_on_unix_linux">Chapter 5, <em>Installing the SDP on Unix / Linux</em></a>)</p> </li> <li> <p><em>sudo</em> access is required</p> </li> <li> <p>A system administrator is available for configuration of drives / volumes (especially if on network storage, a SAN, or similar)</p> </li> <li> <p>A supported Linux version. Currently these versions are fully supported; for other versions, please speak with Perforce Support.</p> <div class="ulist"> <ul> <li> <p>Ubuntu 18.04 LTS (bionic)</p> </li> <li> <p>Ubuntu 20.04 LTS (focal)</p> </li> <li> <p>Red Hat Enterprise Linux (RHEL) 7.x</p> </li> <li> <p>Red Hat Enterprise Linux (RHEL) 8.x</p> </li> <li> <p>CentOS 7</p> </li> <li> <p>CentOS 8 (not recommended for production; Rocky Linux replaces CentOS 8)</p> </li> <li> <p>Rocky Linux 8.x</p> </li> <li> <p>SUSE Linux Enterprise Server 12</p> </li> </ul> </div> </li> </ol> </div> <div class="sect2"> <h3 id="_volume_layout_and_hardware">3.1. 
Volume Layout and Hardware</h3> <div class="paragraph"> <p>As can be expected from a version control system, good disk (storage) management is key to maximizing data integrity and performance. Perforce recommends using multiple physical volumes for <strong>each</strong> p4d server instance. Using three or four volumes per instance reduces the chance of hardware failure affecting more than one instance. When naming volumes and directories, the SDP assumes the "hx" prefix is used to indicate Helix volumes. Your own naming conventions/standards can be used instead, though this is discouraged as it will create inconsistency with documentation. For optimal performance on UNIX machines, the XFS file system is recommended, but not mandated. The EXT4 filesystem is also considered proven and widely used.</p> </div> <div class="ulist"> <ul> <li> <p></p> <div class="paragraph"> <p><strong>Depot data, archive files, scripts, and checkpoints</strong>: Use a large volume, with RAID 6 on its own controller with a standard amount of cache or a SAN or NAS volume (NFS access is fine).</p> </div> </li> </ul> </div> <div class="paragraph"> <p>This volume is the only volume that <strong>must</strong> be backed up. The SDP backup scripts place the metadata snapshots on this volume.</p> </div> <div class="paragraph"> <p>This volume is normally called <code>/hxdepots</code>.</p> </div> <div class="ulist"> <ul> <li> <p></p> <div class="paragraph"> <p><strong>Perforce metadata (database files), 1 or 2 volumes:</strong> Use the fastest volume possible, ideally SSD or RAID 1+0 on a dedicated controller with the maximum cache available on it. Typically a single volume is used, <code>/hxmetadata</code>. In some sites with exceptionally large metadata, 2 volumes are used for metadata, <code>/hxmetadata</code> and <code>/hxmetadata2</code>. 
Exceptionally large in this case means the metadata size on disk is such that (2 x (size of db.* files) + room for growth) approaches or exceeds the storage capacity of the storage device used for metadata. That threshold is thus driven by the size of the <code>/hxmetadata</code> volume. For example, if you have a 16T storage volume and the total size of your db.* files is around ~7T (so ~14T when doubled), that is probably a reasonable cutoff for the definition of "exceptionally large" in this context.</p> </div> </li> </ul> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> Do not run anti-virus tools or backup tools against the <code>hxmetadata</code> volume(s) or <code>hxlogs</code> volume(s), because they can interfere with the operation of the Perforce server executable. </td> </tr> </table> </div> <div class="ulist"> <ul> <li> <p></p> <div class="paragraph"> <p><strong>Journals and logs:</strong> Use a fast volume, ideally SSD or RAID 1+0 on its own controller with the standard amount of cache on it. This volume is normally called <code>/hxlogs</code> and can optionally be backed up.</p> </div> <div class="paragraph"> <p>If a separate logs volume is not available, put the logs on the <code>/hxmetadata</code> or <code>/hxmetadata1</code> volume, as metadata and logs have similar performance needs that differ from <code>/hxdepots</code>.</p> </div> </li> </ul> </div> <div class="admonitionblock warning"> <table> <tr> <td class="icon"> <i class="fa icon-warning" title="Warning"></i> </td> <td class="content"> Storing metadata and logs on the same volume is discouraged, since the redundancy benefit of the P4JOURNAL (stored on <code>/hxlogs</code>) is greatly reduced if P4JOURNAL is on the same volume as the metadata in the P4ROOT directory.
</td> </tr> </table> </div> <div class="admonitionblock note"> <table> <tr> <td class="icon"> <i class="fa icon-note" title="Note"></i> </td> <td class="content"> If multiple controllers are not available, put the <code>/hxlogs</code> and <code>/hxdepots</code> volumes on the same controller. </td> </tr> </table> </div> <div class="paragraph"> <p>On all SDP machines, a <code>/p4</code> directory will exist, containing a subdirectory for each instance; each instance subdirectory contains symlinks into the appropriate volumes. The volume layout is shown in <a href="#_sdp_package_contents_and_planning">Appendix A, <em>SDP Package Contents and Planning</em></a>. This <code>/p4</code> directory enables easy access to the different parts of the file system for each instance.</p> </div> <div class="paragraph"> <p>For example:</p> </div> <div class="ulist"> <ul> <li> <p><code>/p4/1/root</code> contains the database files for instance <code>1</code></p> </li> <li> <p><code>/p4/1/logs</code> contains the log files for instance <code>1</code></p> </li> <li> <p><code>/p4/1/bin</code> contains the binaries and scripts for instance <code>1</code></p> </li> <li> <p><code>/p4/common/bin</code> contains the binaries and scripts common to all instances</p> </li> </ul> </div> </div> </div> </div> <div class="sect1"> <h2 id="_maintaining_the_sdp_on_unix_linux">4. Maintaining the SDP on Unix / Linux</h2> <div class="sectionbody"> <div class="sect2"> <h3 id="_backup_procedures">4.1. Backup procedures</h3> <div class="paragraph"> <p>Helix Core’s purpose is to maintain a long-running history of all your development. As such, it is important to take reliable backups to preserve the integrity of your dataset.</p> </div> <div class="sect3"> <h4 id="_metadata_checkpoints">4.1.1. Metadata checkpoints</h4> <div class="paragraph"> <p>The SDP contains scripts and a default crontab which will create daily checkpoints with no downtime.
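</p> </div> <div class="paragraph"> <p>The underlying approach is to rotate the live journal, then replay and checkpoint an <em>offline</em> copy of the databases, so the live server is never paused for the lengthy checkpoint itself. A simplified sketch of the commands involved, assuming instance <code>1</code> (the journal counter <code>NNN</code> and file names here are illustrative; the SDP scripts add locking, logging, and error handling):</p> </div> <div class="listingblock"> <div class="content"> <pre># Rotate the live journal (fast; does not scan the databases)
/p4/1/bin/p4d_1 -jj

# Replay the rotated journal into the offline databases
/p4/1/bin/p4d_1 -r /p4/1/offline_db -jr /p4/1/checkpoints/p4_1.jnl.NNN

# Take a compressed checkpoint of the offline databases
/p4/1/bin/p4d_1 -r /p4/1/offline_db -z -jd /p4/1/checkpoints/p4_1.ckp.NNN.gz</pre> </div> </div> <div class="paragraph"> <p>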
The script <a href="#_daily_checkpoint_sh">Section 9.4.4, “daily_checkpoint.sh”</a> accomplishes this by rotating the journal, replaying it into the <code>offline_db</code> directory, and checkpointing the <code>offline_db</code> directory. The resulting checkpoints, rotated journals, and checkpoint checksum files can be found in <code>/p4/<instance>/checkpoints</code>.</p> </div> <div class="paragraph"> <p>It is difficult to overstate the importance of regular checkpoints. Perforce metadata (the <code>db.*</code> files) is in a constant state of flux, and a checkpoint is the most reliable point of recovery for a commit server. Attempts to back up the <code>root</code> directory with <code>cp</code> or <code>rsync</code> will result in a metadata set that is probably inconsistent and corrupt. Simple backups of the root directory are insufficient.</p> </div> </div> <div class="sect3"> <h4 id="_backup_of_the_partition_containing_depots_checkpoints_and_the_sdp_configuration">4.1.2. Backup of the partition containing depots, checkpoints, and the SDP configuration</h4> <div class="paragraph"> <p>There are three important parts to an SDP installation of Perforce: metadata, archive storage (back-end version file storage), and configuration. A standard SDP installation will have all three of these on the <code>/hxdepots</code> partition or equivalent. Whatever your server backup strategy is, ensure that you are taking regular snapshots of <code>/hxdepots</code>.</p> </div> </div> </div> <div class="sect2"> <h3 id="_notifications">4.2. Notifications</h3> <div class="paragraph"> <p>The SDP contains the framework to allow your server to communicate its automated maintenance activities, both successes and failures. It is important to ensure that the SDP is properly configured to send emails to the right people, and that the right people are monitoring their emails.</p> </div> <div class="sect3"> <h4 id="_configuration">4.2.1.
Configuration</h4> <div class="paragraph"> <p>Setting up mailx, postfix, or mailutils will allow your server to send out emails to your administrative team. Details can be found in <a href="#_configuring_email_for_notifications">Section 5.5.1, “Configuring email for notifications”</a>.</p> </div> <div class="paragraph"> <p>To tell the SDP whom to mail, you will need to set the recipients in the file <code>/p4/common/config/p4_<instance>.vars</code> on a per-instance basis. The relevant lines are:</p> </div> <div class="paragraph"> <p><code>export MAILTO=P4AdminList@p4demo.com</code></p> </div> <div class="paragraph"> <p><code>export MAILFROM=P4Admin@p4demo.com</code></p> </div> <div class="paragraph"> <p>The <code>MAILTO</code> value can be a distribution group like <code>administrators@company.net</code>, a single recipient like <code>bruno@company.net</code>, or a comma-delimited list like <code>bruno@company.net,mary@company.net,pat@company.net</code>.</p> </div> <div class="paragraph"> <p>The <code>MAILFROM</code> value can be a valid email address, or a placeholder like <code>do-not-reply@company.net</code>.</p> </div> </div> <div class="sect3"> <h4 id="_notifications_to_monitor">4.2.2. Notifications to monitor</h4> <div class="paragraph"> <p>Your administrator should be aware of the emails that the SDP will be sending on a regular basis. Be careful not to simply redirect them into an unmonitored folder.</p> </div> <div class="sect4"> <h5 id="_daily_checkpoint">4.2.2.1. Daily Checkpoint</h5> <div class="paragraph"> <p>Probably the most important notification to follow, the daily checkpoint job lets you know that your metadata is backed up. Any error messages should be investigated.</p> </div> </div> <div class="sect4"> <h5 id="_verify">4.2.2.2.
Verify</h5> <div class="paragraph"> <p>By default, the SDP will run a verify on all your back-end versioned file storage on a weekly basis. It is possible that errors or warnings will creep into an instance as time goes on. These should be investigated, but they are often not mission-critical.</p> </div> </div> <div class="sect4"> <h5 id="_sync_replica">4.2.2.3. Sync Replica</h5> <div class="paragraph"> <p>If you are in a Helix topology that contains replicas or edges, those machines will have their own automated jobs that synchronize checkpoints from the commit server and keep the metadata in sync. To maintain a healthy topology, these emails should also be investigated if they contain errors.</p> </div> </div> </div> </div> <div class="sect2"> <h3 id="_disk_usage">4.3. Disk usage</h3> <div class="paragraph"> <p>Running out of disk is never fun. You should keep an eye on your disk usage, expanding when needed. A default SDP instance has the following configurables set:</p> </div> <div class="paragraph"> <p><code>filesys.P4JOURNAL.min = 5G</code></p> </div> <div class="paragraph"> <p><code>filesys.P4ROOT.min = 5G</code></p> </div> <div class="paragraph"> <p><code>filesys.depot.min = 5G</code></p> </div> <div class="paragraph"> <p>These settings will cause Perforce to halt when it discovers that free disk space has fallen below 5G on the specified partition. This will spare you from corruption if Perforce tries to write to a database and isn’t able to finish. <em>However</em>, there are some edge cases where disk usage can still be disruptive. For example, if a partition’s total size is 5G or lower, Perforce will halt automatically even if 5G was your intended partition size. Monitoring and expanding your storage space is an important part of maintenance.</p> </div> </div> </div> </div> <div class="sect1"> <h2 id="_installing_the_sdp_on_unix_linux">5. Installing the SDP on Unix / Linux</h2> <div class="sectionbody"> <div class="sect2"> <h3 id="_manual_install">5.1.
Manual Install</h3> <div class="paragraph"> <p>The following documentation covers internal details of how the SDP can be deployed manually.</p> </div> <div class="paragraph"> <p>To install Perforce Helix Core server and the SDP, perform the steps laid out below:</p> </div> <div class="ulist"> <ul> <li> <p>Set up a user account, file system, and configuration scripts.</p> </li> <li> <p>Run the configuration script.</p> </li> <li> <p>Start the p4d process and configure the required file structure for the SDP.</p> </li> </ul> </div> <div class="olist arabic"> <ol class="arabic"> <li> <p>If it doesn’t already exist, create a group called <code>perforce</code>:</p> <div class="literalblock"> <div class="content"> <pre>sudo groupadd perforce</pre> </div> </div> </li> <li> <p>Create a user called <code>perforce</code> and set the user’s home directory to <code>/home/perforce</code> on a local disk. We recommend using a local rather than an automounted home directory for the <code>perforce</code> OS user, as an automounted home directory introduces new failure modes for p4d, as well as potential performance issues.
(If the <code>/home</code> directory is always automounted, consider using something else, like <code>/usr/local/home/perforce</code> in the example below):</p> <div class="literalblock"> <div class="content"> <pre>sudo useradd -d /home/perforce -s /bin/bash -m perforce -g perforce</pre> </div> </div> </li> <li> <p>Allow the perforce user sudo access - Option 1 (full sudo). Note that the <code>echo</code> is piped to <code>sudo tee</code>; a plain <code>sudo echo … > file</code> would fail, because the output redirection is performed by the unprivileged shell:</p> <div class="literalblock"> <div class="content"> <pre>sudo touch /etc/sudoers.d/perforce
sudo chmod 0600 /etc/sudoers.d/perforce
echo "perforce ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/perforce
sudo chmod 0400 /etc/sudoers.d/perforce</pre> </div> </div> </li> <li> <p>Allow the perforce user sudo access - Option 2 (limited sudo)</p> <div class="literalblock"> <div class="content"> <pre>sudo touch /etc/sudoers.d/perforce
sudo chmod 0600 /etc/sudoers.d/perforce
sudo vi /etc/sudoers.d/perforce</pre> </div> </div> </li> <li> <p>In the text editor, make the file look like this to give limited sudo, replacing <code><em>__OSUSER__</em></code> with <code>perforce</code>, and replacing <code><em>__HOSTNAME__</em></code> with the name of the current machine, as returned by the <code>hostname</code> command:</p> </li> </ol> </div> <div class="listingblock"> <div class="title">Template for <code>/etc/sudoers.d/perforce</code>:</div> <div class="content"> <pre>Cmnd_Alias P4_SVC = /usr/bin/systemctl start node_exporter, \
    /usr/bin/systemctl stop node_exporter, \
    /usr/bin/systemctl restart node_exporter, \
    /usr/bin/systemctl status node_exporter, \
    /usr/bin/systemctl cat node_exporter, \
    /usr/bin/systemctl enable node_exporter, \
    /usr/bin/systemctl disable node_exporter, \
    /usr/bin/systemctl is-enabled node_exporter, \
    /usr/bin/systemctl start p4d_*, \
    /usr/bin/systemctl stop p4d_*, \
    /usr/bin/systemctl restart p4d_*, \
    /usr/bin/systemctl status p4d_*, \
    /usr/bin/systemctl cat p4d_*, \
    /usr/bin/systemctl enable p4d_*, \
    /usr/bin/systemctl disable p4d_*, \
    /usr/bin/systemctl is-enabled p4d_*, \
    /usr/bin/systemctl start p4dtg_*, \
    /usr/bin/systemctl stop p4dtg_*, \
    /usr/bin/systemctl restart p4dtg_*, \
    /usr/bin/systemctl status p4dtg_*, \
    /usr/bin/systemctl cat p4dtg_*, \
    /usr/bin/systemctl enable p4dtg_*, \
    /usr/bin/systemctl disable p4dtg_*, \
    /usr/bin/systemctl is-enabled p4dtg_*, \
    /usr/bin/systemctl start p4broker_*, \
    /usr/bin/systemctl stop p4broker_*, \
    /usr/bin/systemctl restart p4broker_*, \
    /usr/bin/systemctl status p4broker_*, \
    /usr/bin/systemctl cat p4broker_*, \
    /usr/bin/systemctl enable p4broker_*, \
    /usr/bin/systemctl disable p4broker_*, \
    /usr/bin/systemctl is-enabled p4broker_*, \
    /usr/bin/systemctl start p4p_*, \
    /usr/bin/systemctl stop p4p_*, \
    /usr/bin/systemctl restart p4p_*, \
    /usr/bin/systemctl status p4p_*, \
    /usr/bin/systemctl cat p4p_*, \
    /usr/bin/systemctl enable p4p_*, \
    /usr/bin/systemctl disable p4p_*, \
    /usr/bin/systemctl is-enabled p4p_*, \
    /usr/bin/systemctl start p4prometheus, \
    /usr/bin/systemctl stop p4prometheus, \
    /usr/bin/systemctl restart p4prometheus, \
    /usr/bin/systemctl status p4prometheus, \
    /usr/bin/systemctl cat p4prometheus, \
    /usr/bin/systemctl enable p4prometheus, \
    /usr/bin/systemctl disable p4prometheus, \
    /usr/bin/systemctl is-enabled p4prometheus, \
    /usr/bin/systemctl start helix-auth, \
    /usr/bin/systemctl stop helix-auth, \
    /usr/bin/systemctl restart helix-auth, \
    /usr/bin/systemctl status helix-auth, \
    /usr/bin/systemctl cat helix-auth, \
    /usr/bin/systemctl enable helix-auth, \
    /usr/bin/systemctl disable helix-auth, \
    /usr/bin/systemctl is-enabled helix-auth, \
    /usr/bin/lslocks, \
    /usr/bin/getcap, \
    /usr/bin/setcap, \
    /usr/sbin/setcap, \
    /usr/sbin/getcap, \
    /sbin/getcap, \
    /sbin/setcap, \
    /bin/getcap, \
    /bin/setcap

__OSUSER__ __HOSTNAME__ = (root) NOPASSWD: P4_SVC</pre> </div> </div> <div class="olist arabic"> <ol class="arabic"> <li> <p>Then lock down the file:</p> <div class="literalblock"> <div class="content"> <pre>sudo chmod 0400 /etc/sudoers.d/perforce</pre> </div> </div> </li> <li> <p>Create or mount the OS server
file system volumes (per layout in previous section)</p> <div class="ulist"> <ul> <li> <p><code>/hxdepots</code></p> </li> <li> <p><code>/hxlogs</code></p> <div class="paragraph"> <p>and either:</p> </div> </li> <li> <p><code>/hxmetadata</code></p> </li> </ul> </div> <div class="paragraph"> <p>or</p> </div> <div class="ulist"> <ul> <li> <p><code>/hxmetadata1</code></p> </li> <li> <p><code>/hxmetadata2</code></p> </li> </ul> </div> </li> <li> <p>These directories should be owned by <code>perforce:perforce</code>:</p> <div class="literalblock"> <div class="content"> <pre>sudo chown -R perforce:perforce /hx*</pre> </div> </div> </li> <li> <p>(Optional) If you have different root directories, or are putting all files into one mounted filesystem (only recommended for small repositories), then do something like the following:</p> <div class="paragraph"> <p>Option 1, all under a single directory <code>/data</code>:</p> </div> <div class="literalblock"> <div class="content"> <pre>cd /data
mkdir hxmetadata hxlogs hxdepots
sudo chown -R perforce:perforce /data/hx*
cd /
sudo ln -s /data/hx* .
sudo chown -h perforce:perforce /hx*</pre> </div> </div> <div class="paragraph"> <p>Option 2, different mounted root folders, e.g.
<code>/P4metadata</code>, <code>/P4logs</code>, <code>/P4depots</code>:</p> </div> <div class="literalblock"> <div class="content"> <pre>sudo chown -R perforce:perforce /P4metadata /P4logs /P4depots
sudo ln -s /P4metadata /hxmetadata
sudo ln -s /P4logs /hxlogs
sudo ln -s /P4depots /hxdepots
sudo chown -h perforce:perforce /hx*</pre> </div> </div> </li> <li> <p>Extract the SDP tarball.</p> <div class="literalblock"> <div class="content"> <pre>cd /hxdepots
tar -xzf /WhereYouDownloaded/sdp.Unix.tgz</pre> </div> </div> </li> <li> <p>Set the environment variable SDP.</p> <div class="literalblock"> <div class="content"> <pre>export SDP=/hxdepots/sdp</pre> </div> </div> </li> <li> <p>Make the entire $SDP (<code>/hxdepots/sdp</code>) directory writable by <code>perforce:perforce</code> with this command:</p> <div class="literalblock"> <div class="content"> <pre>chmod -R +w $SDP</pre> </div> </div> </li> <li> <p>Download the appropriate p4, p4d and p4broker binaries for your release and platform:</p> <div class="literalblock"> <div class="content"> <pre>cd /hxdepots/sdp/helix_binaries
./get_helix_binaries.sh</pre> </div> </div> <div class="paragraph"> <p>If you want to specify a particular release, use the <code>-r</code> option as in this example specifying the r20.2 release:</p> </div> <div class="literalblock"> <div class="content"> <pre>cd /hxdepots/sdp/helix_binaries
./get_helix_binaries.sh -r r20.2</pre> </div> </div> </li> </ol> </div> <div class="sect3"> <h4 id="_manual_install_initial_setup">5.1.1. Manual Install Initial setup</h4> <div class="paragraph"> <p>The next steps highlight the setup and configuration of a new Helix Core instance using the <code>mkdirs.sh</code> script included in the SDP.
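</p> </div> <div class="paragraph"> <p>As an illustration, a typical sequence for creating instance <code>1</code> might look like the following sketch (the copied config file name must match the instance name, and <code>mkdirs.sh</code> is run as root):</p> </div> <div class="listingblock"> <div class="content"> <pre>cd /hxdepots/sdp/Server/Unix/setup
cp mkdirs.cfg mkdirs.1.cfg
vi mkdirs.1.cfg          # review and adjust the variables for your site
sudo ./mkdirs.sh 1</pre> </div> </div> <div class="paragraph"> <p>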
Please refer to <a href="#_mkdirs_sh">Section 9.6.11, “mkdirs.sh”</a> for the full usage statement.</p> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> If you use a "name" for the instance (not an integer), you MUST modify the P4PORT variable in the <code>mkdirs.<em>instance</em>.cfg</code> file. </td> </tr> </table> </div> <div class="admonitionblock note"> <table> <tr> <td class="icon"> <i class="fa icon-note" title="Note"></i> </td> <td class="content"> The instance name must match the name of the cfg file, or the default file will be used, with potentially unexpected results. </td> </tr> </table> </div> <div class="paragraph"> <p>Examples:</p> </div> <div class="ulist"> <ul> <li> <p><code>mkdirs.sh 1</code> requires <code>mkdirs.1.cfg</code></p> </li> <li> <p><code>mkdirs.sh ion</code> requires <code>mkdirs.ion.cfg</code></p> </li> </ul> </div> <div class="olist arabic"> <ol class="arabic" start="3"> <li> <p>Put the Perforce license file for the p4d server instance into <code>/p4/1/root</code></p> </li> </ol> </div> <div class="admonitionblock note"> <table> <tr> <td class="icon"> <i class="fa icon-note" title="Note"></i> </td> <td class="content"> If you have multiple instances and have been provided with port-specific licenses by Perforce, the appropriate license file must be stored in the appropriate <code>/p4/<instance>/root</code> folder. </td> </tr> </table> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> The license file must be named simply <code>license</code>. </td> </tr> </table> </div> <div class="paragraph"> <p>Your Helix Core instance is now set up, but not running.
The next steps detail how to make the Helix Core p4d instance a system service.</p> </div> <div class="paragraph"> <p>You are then free to start up the <code>p4d</code> instance as documented in <a href="#_starting_and_stopping_services">Appendix F, <em>Starting and Stopping Services</em></a>.</p> </div> <div class="paragraph"> <p>Note that if you have configured SSL, refer to <a href="#_use_of_ssl">Section 5.1.1.1, “Use of SSL”</a>.</p> </div> <div class="sect4"> <h5 id="_use_of_ssl">5.1.1.1. Use of SSL</h5> <div class="paragraph"> <p>As documented in the comments in mkdirs.cfg, if you are planning to use SSL you need to set the value of:</p> </div> <div class="literalblock"> <div class="content"> <pre>SSL_PREFIX=ssl:</pre> </div> </div> <div class="paragraph"> <p>Then you need to put certificates in <code>/p4/ssl</code> after the SDP install, or you can generate a self-signed certificate as follows:</p> </div> <div class="paragraph"> <p>Edit <code>/p4/ssl/config.txt</code> to put in the info for your company.
Then run:</p> </div> <div class="literalblock"> <div class="content"> <pre>/p4/common/bin/p4master_run <instance> /p4/<instance>/bin/p4d_<instance> -Gc</pre> </div> </div> <div class="paragraph"> <p>For example, using instance 1:</p> </div> <div class="literalblock"> <div class="content"> <pre>/p4/common/bin/p4master_run 1 /p4/1/bin/p4d_1 -Gc</pre> </div> </div> <div class="paragraph"> <p>In order to validate that SSL is working correctly:</p> </div> <div class="literalblock"> <div class="content"> <pre>source /p4/common/bin/p4_vars 1</pre> </div> </div> <div class="paragraph"> <p>Check that P4TRUST is appropriately set in the output of:</p> </div> <div class="literalblock"> <div class="content"> <pre>p4 set</pre> </div> </div> <div class="paragraph"> <p>Update the P4TRUST values:</p> </div> <div class="literalblock"> <div class="content"> <pre>p4 trust -y
p4 -p ssl:$HOSTNAME:1666 trust -y   # Assuming correct port
p4 -p $P4MASTERPORT trust -y</pre> </div> </div> <div class="paragraph"> <p>Check the stored P4TRUST values:</p> </div> <div class="literalblock"> <div class="content"> <pre>p4 trust -l</pre> </div> </div> <div class="paragraph"> <p>You need to have an entry for both the loopback address (<code>127.0.0.1</code>) and the IP address of the current machine.</p> </div> <div class="paragraph"> <p>Check that you are not prompted for trust:</p> </div> <div class="literalblock"> <div class="content"> <pre>p4 login
p4 info</pre> </div> </div> </div> <div class="sect4"> <h5 id="_configuration_script_mkdirs_cfg">5.1.1.2. Configuration script mkdirs.cfg</h5> <div class="paragraph"> <p>The <code>mkdirs.sh</code> script executed above resides in <code>$SDP/Server/Unix/setup</code>. It sets up the basic directory structure used by the SDP. Carefully review the config file <code>mkdirs.<strong><em>instance</em></strong>.cfg</code> for this script before running it, and adjust the values of the variables as required.
The important parameters are:</p> </div> <table class="tableblock frame-all grid-all stretch"> <colgroup> <col style="width: 50%;"> <col style="width: 50%;"> </colgroup> <thead> <tr> <th class="tableblock halign-left valign-top">Parameter</th> <th class="tableblock halign-left valign-top">Description</th> </tr> </thead> <tbody> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">DB1</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Name of the hxmetadata1 volume (can be same as DB2)</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">DB2</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Name of the hxmetadata2 volume (can be same as DB1)</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">DD</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Name of the hxdepots volume</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">LG</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Name of the hxlogs volume</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">CN</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Volume for /p4/common</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">SDP</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Path to SDP distribution file tree</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">SHAREDDATA</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">TRUE or FALSE - whether sharing the /hxdepots volume with a replica - normally this is FALSE</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">ADMINUSER</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">P4USER value 
of a Perforce super user that operates SDP scripts, typically <code>perforce</code>.</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">OSUSER</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Operating system user that will run the Perforce instance, typically perforce.</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">OSGROUP</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Operating system group that OSUSER belongs to, typically perforce.</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">CASE_SENSITIVE</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Indicates if the p4d server instance has special case sensitivity settings</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">SSL_PREFIX</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Set to "ssl:" if SSL is required, or leave blank for no SSL</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">P4ADMINPASS</p></td> <td class="tableblock halign-left valign-top"><div class="content"><div class="paragraph"> <p>Password to use for the Perforce superuser account - can be edited later in /p4/common/config/.p4password.p4_1.admin</p> </div></div></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">P4SERVICEPASS</p></td> <td class="tableblock halign-left valign-top"><div class="content"><div class="paragraph"> <p>This value is not used by any SDP scripts or standard procedures. It is left in place for backward compatibility.</p> </div></div></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">P4MASTERHOST</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Fully qualified DNS name of the Perforce master server machine for this instance.
For replicas of an edge server, this should refer to the DNS name of the edge server machine. Otherwise, replicas should refer to the commit server machine.</p></td> </tr> </tbody> </table> <div class="paragraph"> <p>This config file is fully documented via detailed in-file comments.</p> </div> </div> </div> <div class="sect3"> <h4 id="_sdp_init_scripts">5.1.2. SDP Init Scripts</h4> <div class="paragraph"> <p>The SDP includes templates for initialization scripts ("init scripts") that provide basic service <code>start</code>/<code>stop</code>/<code>status</code> functionality for a variety of Perforce server products, including:</p> </div> <div class="ulist"> <ul> <li> <p>p4d</p> </li> <li> <p>p4broker</p> </li> <li> <p>p4p</p> </li> <li> <p>p4dtg</p> </li> </ul> </div> <div class="paragraph"> <p>During initialization for an SDP instance, the SDP <code>mkdirs.sh</code> script creates a set of initialization scripts based on the templates, and writes them in the instance-specific bin folder (the "Instance Bin" directory), <code>/p4/<em>N</em>/bin</code>. For example, the <code>/p4/1/bin</code> folder for instance <code>1</code> might contain any of the following:</p> </div> <div class="literalblock"> <div class="content"> <pre>p4d_1_init
p4broker_1_init
p4p_1_init
p4dtg_1_init</pre> </div> </div> <div class="paragraph"> <p>The set of <code>*_init</code> files in the Instance Bin directory defines which services (p4d, p4broker, p4p, and/or p4dtg) are active for the given instance on the current machine. A common configuration is to run both p4d and p4broker together, or to run only a p4p on a machine. Unused init scripts must be removed from the Instance Bin directory. For example, if a p4p is not needed for instance 1 on the current machine, then <code>/p4/1/bin/p4p_1_init</code> should be removed.</p> </div> <div class="paragraph"> <p>For example, the init script for starting p4d for instance 1 is <code>/p4/1/bin/p4d_1_init</code>.
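</p> </div> <div class="paragraph"> <p>For instance, the p4d init script for instance 1 can be run directly, on machines where systemd is not configured for the service (see the systemd discussion below):</p> </div> <div class="literalblock"> <div class="content"> <pre>/p4/1/bin/p4d_1_init status
/p4/1/bin/p4d_1_init start
/p4/1/bin/p4d_1_init stop</pre> </div> </div> <div class="paragraph"> <p>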
All init scripts accept at least <code>start</code>, <code>stop</code>, and <code>status</code> arguments. How the init scripts are called depends on whether your operating system uses the systemd or older SysV init mechanism. This is detailed in sections specific to each init mechanism below.</p> </div> <div class="paragraph"> <p>Templates for the init scripts are stored in:</p> </div> <div class="literalblock"> <div class="content"> <pre>/p4/common/etc/init.d</pre> </div> </div> <div class="sect4"> <h5 id="_configuring_systemd">5.1.2.1. Configuring systemd</h5> <div class="sect5"> <h6 id="_configuring_systemd_for_p4d">Configuring systemd for p4d</h6> <div class="paragraph"> <p>RHEL/CentOS 7 or 8, SuSE 12, Ubuntu (>= v16.04), Amazon Linux 2, and other Linux distributions utilize <strong>systemd / systemctl</strong> as the mechanism for controlling services, replacing the earlier SysV init process. Templates for systemd *.service files are included in the SDP distribution in <code>$SDP/Server/Unix/p4/common/etc/systemd/system</code>.</p> </div> <div class="paragraph"> <p>Note that using <code>systemd</code> is strongly recommended on systems that support it, for safety reasons. However, enabling services to start automatically on boot is optional.</p> </div> <div class="paragraph"> <p>To configure p4d for systemd, run these commands as the root user:</p> </div> <div class="literalblock"> <div class="content"> <pre>I=1</pre> </div> </div> <div class="paragraph"> <p>Replace the <code>1</code> on the right side of the <code>=</code> with your SDP instance name, e.g. xyz if your P4ROOT is /p4/xyz/root. 
Then:</p> </div> <div class="literalblock"> <div class="content"> <pre>cd /etc/systemd/system
sed -e "s:__INSTANCE__:$I:g" -e "s:__OSUSER__:perforce:g" $SDP/Server/Unix/p4/common/etc/systemd/system/p4d_N.service.t > p4d_${I}.service
chmod 644 p4d_${I}.service
systemctl daemon-reload</pre> </div> </div> <div class="paragraph"> <p>If you are configuring p4d for more than one instance, repeat the <code>I=</code> command with each instance name on the right side of the <code>=</code>, and then repeat the block of commands above.</p> </div> <div class="paragraph"> <p>Once configured, the following are sample management commands to start, stop, and status the service. The following commands are typically run as the <code>perforce</code> OSUSER, using <code>sudo</code> where needed:</p> </div> <div class="literalblock"> <div class="content"> <pre>systemctl cat p4d_1
systemctl status p4d_1
sudo systemctl start p4d_1
sudo systemctl stop p4d_1</pre> </div> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> If running with SELinux in enforcing mode, see <a href="#_enabling_systemd_under_selinux">Section 5.1.2.2, “Enabling systemd under SELinux”</a> </td> </tr> </table> </div> <div class="sidebarblock"> <div class="content"> <div class="title">Systemd Required if Configured</div> <div class="paragraph"> <p>If you are using <code>systemd</code> and you have configured services as above, then you can no longer run the <code>*_init</code> scripts directly for normal service <code>start</code>/<code>stop</code>, though they can still be used for <code>status</code>. The <code>sudo systemctl</code> commands <strong>must</strong> be used for <code>start</code>/<code>stop</code>. Attempting to run the underlying scripts directly will result in an error message if systemd is configured.
This is for safety: systemd’s concept of service status (up or down) is only reliable when systemd starts and stops the service itself. The SDP init scripts require the systemd mechanism (using the <code>systemctl</code> command) to be used if it is configured. This ensures that systemd will gracefully stop the service on reboot (which would otherwise present a risk of data corruption for p4d on reboot).</p> </div> <div class="paragraph"> <p>The SDP requires systemd to be used if it is configured, and we strongly recommend using systemd on systems that support it. We recommend this to eliminate the risk of corruption on reboot, and also for consistency of operations. However, the SDP does not require that systemd be configured in the first place. The SDP uses <code>systemctl cat</code> of the service name (e.g. <code>p4d_1</code>) to determine if systemd is configured for any given service.</p> </div> </div> </div> <div class="sect5"> <h6 id="_configuring_systemd_for_p4p">Configuring systemd for p4p</h6> <div class="paragraph"> <p>Configuring p4p for systemd is identical to the configuration for p4d, except that you replace <code>p4d</code> with <code>p4p</code> in the sample commands above for configuring p4d.</p> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> Note that the SELinux fix (<a href="#_enabling_systemd_under_selinux">Section 5.1.2.2, “Enabling systemd under SELinux”</a>) may be similarly required.
</td> </tr> </table> </div> </div> <div class="sect5"> <h6 id="_configuring_systemd_for_p4dtg">Configuring systemd for p4dtg</h6> <div class="paragraph"> <p>Configuring p4dtg for systemd is identical to the configuration for p4d, except that you would replace <code>p4d</code> with <code>p4dtg</code> in the sample commands above for configuring p4d.</p> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> Note that the SELinux fix (<a href="#_enabling_systemd_under_selinux">Section 5.1.2.2, “Enabling systemd under SELinux”</a>) may be similarly required. </td> </tr> </table> </div> </div> <div class="sect5"> <h6 id="_configuring_systemd_p4broker_multiple_configs">Configuring systemd p4broker - multiple configs</h6> <div class="paragraph"> <p>Configuring p4broker for systemd is similar to the configuration for p4d, but there are extra options because you may choose to run multiple broker configurations. For example, you may have:</p> </div> <div class="ulist"> <ul> <li> <p>a default p4broker configuration that runs when the service is live,</p> </li> <li> <p>a "Down for Maintenance" (DFM) broker used in place of the default broker during maintenance to help lock out users while broadcasting a friendly message like "Perforce is offline for scheduled maintenance."</p> </li> <li> <p>an SSL broker config enabling an SSL-encrypted connection to a server that might not yet require SSL encryption for all users.</p> </li> </ul> </div> <div class="paragraph"> <p>The service name for the default broker configuration is always <code>p4broker_N</code>, where <code>N</code> is the instance name, e.g. <code>p4broker_1</code> for instance <code>1</code>.
This uses the default broker config file, <code>/p4/common/config/p4_1.broker.cfg</code>.</p> </div> <div class="sidebarblock"> <div class="content"> <div class="title">Host Specific Broker Config</div> <div class="paragraph"> <p>For circumstances where host-specific broker configuration is required, the default broker will use a <code>/p4/common/config/p4_N.broker.<short-hostname>.cfg</code> if it exists, where <code><short-hostname></code> is whatever is returned by the command <code>hostname -s</code>. The logic in the broker init script will favor the host-specific config if found, otherwise it will use the standard broker config.</p> </div> </div> </div> <div class="paragraph"> <p>When alternate broker configurations are used, each alternate configuration file must have a separate systemd unit file associated with managing that configuration. The service file must specify a configuration tag name, such as 'dfm' or 'ssl'. That tag name is used to identify both the broker config file and the systemd unit file for that broker. If the broker config is intended to run concurrently with the default broker config, it must listen on a different port number than the one specified in the default broker config. If it is only intended to run in place of the standard config, as with a 'dfm' config, then it should listen on the same port number as the default broker if a default broker is used, or else the same port as the p4d server if brokers are used only for dfm. The systemd service for a broker intended to run only during maintenance should not be enabled, and thus only manually started/stopped as part of maintenance procedures.</p> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> If maintenance procedures involve a reboot of a server machine, you may also want to disable all services during maintenance and re-enable them afterward. 
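For example (assuming instance <code>1</code> with the default and SSL broker services described in this section; adjust the list to the services actually configured on your machine): <div class="literalblock"> <div class="content"> <pre>sudo systemctl disable p4d_1 p4broker_1 p4broker_1_ssl
# ... reboot and perform maintenance ...
sudo systemctl enable p4d_1 p4broker_1 p4broker_1_ssl</pre> </div> </div>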
</td> </tr> </table> </div> <div class="paragraph"> <p>For example, say you want a default broker, a DFM broker, and an SSL broker for instance 1. The default and SSL brokers will run continuously, and the DFM broker only during scheduled maintenance. The following broker config files would be needed in <code>/p4/common/config</code>:</p> </div> <div class="ulist"> <ul> <li> <p><code>p4_1.broker.cfg</code> - default broker, targets p4d on port 1999, listens on port 1666</p> </li> <li> <p><code>p4_1.broker.ssl.cfg</code> - SSL broker, targets p4d on port 1999, listens on port 1667</p> </li> <li> <p><code>p4_1.broker.dfm.cfg</code> - DFM broker, targets p4d on port 1999, listens on port 1666.</p> </li> </ul> </div> <div class="paragraph"> <p>Then, create a systemd *.service file that references each config. For the default broker, use the template just as with p4d above. Do the following as the <code>root</code> user:</p> </div> <div class="literalblock"> <div class="content"> <pre>I=1</pre> </div> </div> <div class="paragraph"> <p>Replace the <code>1</code> on the right side of the <code>=</code> with your SDP instance name, e.g. <code>xyz</code> if your P4ROOT is <code>/p4/xyz/root</code>. Then:</p> </div> <div class="literalblock"> <div class="content"> <pre>cd /etc/systemd/system
sed -e "s:__INSTANCE__:$I:g" -e "s:__OSUSER__:perforce:g" $SDP/Server/Unix/p4/common/etc/systemd/system/p4broker_N.service.t > p4broker_$I.service
chmod 644 p4broker_$I.service
systemctl daemon-reload</pre> </div> </div> <div class="paragraph"> <p>Once configured, the following are sample management commands to start, stop, and status the service.
These commands are typically run as the <code>perforce</code> OSUSER, using <code>sudo</code> where needed:</p> </div> <div class="literalblock"> <div class="content"> <pre>systemctl cat p4broker_1
systemctl status p4broker_1
sudo systemctl start p4broker_1
sudo systemctl stop p4broker_1</pre> </div> </div> <div class="paragraph"> <p>For the non-default broker configs for the SSL and DFM brokers, start by copying the default broker service file to a new *.service file with <code>_ssl</code> or <code>_dfm</code> inserted into the name, like so:</p> </div> <div class="literalblock"> <div class="content"> <pre>cd /etc/systemd/system
cp p4broker_1.service p4broker_1_dfm.service
cp p4broker_1.service p4broker_1_ssl.service</pre> </div> </div> <div class="paragraph"> <p>Next, modify the <code>p4broker_1_dfm.service</code> and <code>p4broker_1_ssl.service</code> files with a text editor, making the following edits:</p> </div> <div class="ulist"> <ul> <li> <p>Find the string that says <code>using default broker config</code>, and change the word <code>default</code> to <code>dfm</code> or <code>ssl</code> as appropriate, so it reads something like <code>using dfm broker config</code>.</p> </li> <li> <p>Change the ExecStart and ExecStop definitions by appending the <code>dfm</code> or <code>ssl</code> tag.
For example, change these two lines:</p> <div class="literalblock"> <div class="content"> <pre>ExecStart=/p4/1/bin/p4broker_1_init start
ExecStop=/p4/1/bin/p4broker_1_init stop</pre> </div> </div> </li> </ul> </div> <div class="paragraph"> <p>to look like this for the <code>dfm</code> broker:</p> </div> <div class="literalblock"> <div class="content"> <pre>ExecStart=/p4/1/bin/p4broker_1_init start dfm
ExecStop=/p4/1/bin/p4broker_1_init stop dfm</pre> </div> </div> <div class="paragraph"> <p>After any modifications to systemd *.service files are made, reload them with:</p> </div> <div class="literalblock"> <div class="content"> <pre>systemctl daemon-reload</pre> </div> </div> <div class="paragraph"> <p>At this point, the services <code>p4broker_1</code>, <code>p4broker_1_dfm</code>, and <code>p4broker_1_ssl</code> can be started and stopped normally.</p> </div> <div class="paragraph"> <p>Finally, enable those services you want to start on boot. In our example here, we will enable the default and ssl broker services to start on boot, but not the DFM broker:</p> </div> <div class="literalblock"> <div class="content"> <pre>systemctl enable p4broker_1
systemctl enable p4broker_1_ssl</pre> </div> </div> <div class="paragraph"> <p>You must be aware of which configurations listen on the same port, and not try to run those configurations concurrently. In this case, ensure the default and dfm brokers don’t run at the same time.
So, for example, you might start a maintenance window with:</p> </div> <div class="literalblock"> <div class="content"> <pre>sudo systemctl stop p4broker_1 p4d_1
sudo systemctl start p4broker_1_dfm</pre> </div> </div> <div class="paragraph"> <p>and end maintenance in the opposite order:</p> </div> <div class="literalblock"> <div class="content"> <pre>sudo systemctl stop p4broker_1_dfm
sudo systemctl start p4broker_1 p4d_1</pre> </div> </div> <div class="paragraph"> <p>Details may vary depending on what is occurring during maintenance.</p> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> Note that the SELinux fix (<a href="#_enabling_systemd_under_selinux">Section 5.1.2.2, “Enabling systemd under SELinux”</a>) may be similarly required. </td> </tr> </table> </div> </div> </div> <div class="sect4"> <h5 id="_enabling_systemd_under_selinux">5.1.2.2. Enabling systemd under SELinux</h5> <div class="paragraph"> <p>If you have <code>SELinux</code> in <code>Enforcing</code> mode, then you may get an error message when you try to start the service:</p> </div> <div class="listingblock"> <div class="content"> <pre class="highlight"><code>$ systemctl start p4d_1
$ systemctl status p4d_1
:
Active: failed
Process: 1234 ExecStart=/p4/1/bin/p4d_1_init start (code=exited, status=203/EXEC)
:
$ journalctl -u p4d_1 --no-pager | tail
:
...
p4d_1.service: Failed to execute command: Permission denied
...
p4d_1.service: Failed at step EXEC spawning p4d_1_init: Permission denied</code></pre> </div> </div> <div class="paragraph"> <p>This can be easily fixed (as <code>root</code>):</p> </div> <div class="literalblock"> <div class="content"> <pre>semanage fcontext -a -t bin_t /p4/1/bin/p4d_1_init
restorecon -vF /p4/1/bin/p4d_1_init</pre> </div> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> If not already installed, <code>yum install policycoreutils-python-utils</code> provides the commands mentioned above; you don’t need the full <code>setools</code> package, which comes with a GUI. </td> </tr> </table> </div> <div class="paragraph"> <p>Then try again:</p> </div> <div class="literalblock"> <div class="content"> <pre>systemctl start p4d_1
systemctl status p4d_1</pre> </div> </div> <div class="paragraph"> <p>The status command should show <code>Active: active</code>.</p> </div> <div class="paragraph"> <p>For troubleshooting SELinux, we recommend <a href="https://www.serverlab.ca/tutorials/linux/administration-linux/troubleshooting-selinux-centos-red-hat/">the setroubleshoot utility</a>.</p> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> Look for <code>denied</code> in <code>/var/log/audit/audit.log</code>, then run <code>ls -alZ <file></code> on any file that triggered the denied message, and go from there. </td> </tr> </table> </div> </div> <div class="sect4"> <h5 id="_configuring_sysv_init_scripts">5.1.2.3.
Configuring SysV Init Scripts</h5> <div class="paragraph"> <p>To configure services for an instance on systems using the SysV init mechanism, run these commands as the <code>root</code> user, repeating for each instance init script you wish to configure as a system service:</p> </div> <div class="literalblock"> <div class="content"> <pre>cd /etc/init.d
ln -s /p4/1/bin/p4d_1_init
chkconfig --add p4d_1_init</pre> </div> </div> <div class="paragraph"> <p>With that done, you can <code>start</code>/<code>stop</code>/<code>status</code> the service as <code>root</code> by running commands like:</p> </div> <div class="literalblock"> <div class="content"> <pre>service p4d_1_init status
service p4d_1_init start
service p4d_1_init stop</pre> </div> </div> <div class="paragraph"> <p>On SysV systems, you can also run the underlying init scripts directly as either the <code>root</code> or <code>perforce</code> user. If run as <code>root</code>, the script immediately switches to the <code>perforce</code> user, so that no processing occurs as root.</p> </div> </div> </div> <div class="sect3"> <h4 id="_configuring_automatic_service_start_on_boot">5.1.3. Configuring Automatic Service Start on Boot</h4> <div class="paragraph"> <p>You may want to configure your server machine such that the Helix Core Server for any given instance (and/or Proxy and/or Broker) will start automatically when the machine boots.</p> </div> <div class="paragraph"> <p>This is done using Systemd or Init scripts as covered below.</p> </div> <div class="sect4"> <h5 id="_automatic_start_for_systems_using_systemd">5.1.3.1.
Automatic Start for Systems using systemd</h5> <div class="paragraph"> <p>Once systemd services are configured, you can enable the service to start on boot with a command like this, run as <code>root</code>:</p> </div> <div class="literalblock"> <div class="content"> <pre>systemctl enable p4d_1</pre> </div> </div> <div class="paragraph"> <p>The <code>enable</code> command configures the services to start automatically when the machine reboots, but does not immediately start the service. <em>Enabling services is optional</em>; you can start and stop the services manually regardless of whether they are enabled for automatic start on boot.</p> </div> </div> <div class="sect4"> <h5 id="_for_systems_using_the_sysv_init_mechanism">5.1.3.2. For systems using the SysV init mechanism</h5> <div class="paragraph"> <p>Once SysV services are configured, you can enable the service to start on boot with a command like this, run as <code>root</code>:</p> </div> <div class="literalblock"> <div class="content"> <pre>chkconfig p4d_1_init on</pre> </div> </div> </div> </div> <div class="sect3"> <h4 id="_sdp_crontab_templates">5.1.4. SDP Crontab Templates</h4> <div class="paragraph"> <p>The SDP includes basic crontab templates for master, replica, and edge servers in:</p> </div> <div class="literalblock"> <div class="content"> <pre>/p4/common/etc/cron.d</pre> </div> </div> <div class="paragraph"> <p>These define schedules for routine checkpoint operations, replica status checks, and email reviews.</p> </div> </div> <div class="sect3"> <h4 id="_completing_your_server_configuration">5.1.5.
Completing Your Server Configuration</h4> <div class="olist arabic"> <ol class="arabic"> <li> <p>Ensure that the admin user configured above has the correct password defined in <code>/p4/common/config/.p4passwd.p4_1.admin</code>, and then run the <code>p4login1</code> script (which calls the <code>p4 login</code> command using the <code>.p4passwd.p4_1.admin</code> file).</p> </li> <li> <p>For new server instances, run this script, which sets several recommended configurables:</p> <div class="literalblock"> <div class="content"> <pre>cd /p4/sdp/Server/setup
./configure_new_server.sh 1</pre> </div> </div> </li> </ol> </div> <div class="paragraph"> <p>For existing servers, examine this file, and manually apply the <code>p4 configure</code> command to set configurables on your Perforce server instance.</p> </div> <div class="paragraph"> <p>Initialize the perforce user’s crontab with a command like:</p> </div> <div class="literalblock"> <div class="content"> <pre>crontab /p4/p4.crontab</pre> </div> </div> <div class="paragraph"> <p>Then customize execution times for the commands within the crontab files to suit the specific installation.</p> </div> <div class="paragraph"> <p>The SDP uses wrapper scripts in the crontab: <code>run_if_master.sh</code>, <code>run_if_edge.sh</code>, <code>run_if_replica.sh</code>. We suggest you ensure these are working as desired, e.g.</p> </div> <div class="literalblock"> <div class="content"> <pre>/p4/common/bin/run_if_master.sh 1 echo yes
/p4/common/bin/run_if_replica.sh 1 echo yes
/p4/common/bin/run_if_edge.sh 1 echo yes</pre> </div> </div> <div class="paragraph"> <p>The above should output <code>yes</code> if you are on the master (commit) machine (or replica/edge as appropriate), but otherwise nothing. Any issues with the above indicate incorrect values for <code>$MASTER_ID</code>, or for other values within <code>/p4/common/config/p4_1.vars</code> (assuming instance <code>1</code>).
You can debug this with:</p> </div> <div class="literalblock"> <div class="content"> <pre>bash -xv /p4/common/bin/run_if_master.sh 1 echo yes</pre> </div> </div> <div class="paragraph"> <p>If in doubt, contact support.</p> </div> </div> <div class="sect3"> <h4 id="_validating_your_sdp_installation">5.1.6. Validating your SDP installation</h4> <div class="paragraph"> <p>Source your SDP environment variables and check that they look appropriate - for <instance> <code>1</code>:</p> </div> <div class="literalblock"> <div class="content"> <pre>source /p4/common/bin/p4_vars 1</pre> </div> </div> <div class="paragraph"> <p>The output of <code>p4 set</code> should then be something like:</p> </div> <div class="literalblock"> <div class="content"> <pre>P4CONFIG=/p4/1/.p4config (config 'noconfig')
P4ENVIRO=/dev/null/.p4enviro
P4JOURNAL=/p4/1/logs/journal
P4LOG=/p4/1/logs/log
P4PCACHE=/p4/1/cache
P4PORT=ssl:1666
P4ROOT=/p4/1/root
P4SSLDIR=/p4/ssl
P4TICKETS=/p4/1/.p4tickets
P4TRUST=/p4/1/.p4trust
P4USER=perforce</pre> </div> </div> <div class="paragraph"> <p>There is a script <code>/p4/common/bin/verify_sdp.sh</code>. Run this specifying the <instance> id, e.g.</p> </div> <div class="literalblock"> <div class="content"> <pre>/p4/common/bin/verify_sdp.sh 1</pre> </div> </div> <div class="paragraph"> <p>The output should be something like:</p> </div> <div class="literalblock"> <div class="content"> <pre>verify_sdp.sh v5.6.1
Starting SDP verification on host helixcorevm1 at Fri 2020-08-14 17:02:45 UTC with this command line:
/p4/common/bin/verify_sdp.sh 1</pre> </div> </div> <div class="literalblock"> <div class="content"> <pre>If you have any questions about the output from this script, contact support-helix-core@perforce.com.
------------------------------------------------------------------------------
Doing preflight sanity checks.
Preflight Check: Ensuring these utils are in PATH: date ls grep awk id head tail
Verified: Essential tools are in the PATH.
Preflight Check: cd /p4/common/bin
Verified: cd works to: /p4/common/bin
Preflight Check: Checking current user owns /p4/common/bin
Verified: Current user [perforce] owns /p4/common/bin
Preflight Check: Checking /p4 and /p4/<instance> are local dirs.
Verified: P4HOME has expected value: /p4/1
Verified: This P4HOME path is not a symlink: /p4/1
Verified: cd to /p4 OK.
Verified: Dir /p4 is a local dir.
Verified: cd to /p4/1 OK.
Verified: P4HOME dir /p4/1 is a local dir.</pre> </div> </div> <div class="paragraph"> <p>Finishing with:</p> </div> <div class="literalblock"> <div class="content"> <pre>Verifications completed, with 0 errors and 0 warnings detected in 57 checks.</pre> </div> </div> <div class="paragraph"> <p>If it mentions something like:</p> </div> <div class="literalblock"> <div class="content"> <pre>Verifications completed, with 2 errors and 1 warnings detected in 57 checks.</pre> </div> </div> <div class="paragraph"> <p>then review the details. If in doubt, contact Perforce Support: <a href="mailto:support-helix-core@perforce.com">support-helix-core@perforce.com</a></p> </div> </div> </div> <div class="sect2"> <h3 id="_local_sdp_configuration">5.2. Local SDP Configuration</h3> <div class="paragraph"> <p>There are many scenarios where you may need to override a default value that the SDP provides. These changes must be done in specific locations so that your changes persist across SDP upgrades.
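</p> </div> <div class="paragraph"> <p>For example, to override an SDP default for every instance on the machine, append the new value to the SDP-wide override file. This sketch assumes the SDP <code>KEEPCKPS</code> variable (which controls how many checkpoints are kept); substitute whatever setting you actually need to change:</p> </div> <div class="literalblock"> <div class="content"> <pre># Appended to /p4/common/site/config/p4_vars.local:
export KEEPCKPS=10</pre> </div> </div> <div class="paragraph"> <p>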
There are two different scopes of configuration to be aware of and two locations you can place your configuration in:</p> </div> <table class="tableblock frame-all grid-all stretch"> <colgroup> <col style="width: 33.3333%;"> <col style="width: 33.3333%;"> <col style="width: 33.3334%;"> </colgroup> <thead> <tr> <th class="tableblock halign-left valign-top">Location</th> <th class="tableblock halign-left valign-top">Scope</th> <th class="tableblock halign-left valign-top">Description</th> </tr> </thead> <tbody> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">/p4/common/site/config/$P4SERVER.vars.local</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">SDP Instance Specific</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Single configuration file that is scoped to a single SDP Instance</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">/p4/common/site/config/$P4SERVER.vars.local.d/*</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">SDP Instance Specific</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Directory of configuration files that are scoped to a single SDP Instance</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">/p4/common/site/config/p4_vars.local</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">SDP Wide</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Single configuration file that is scoped to all SDP Instances</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">/p4/common/site/config/p4_vars.local.d/*</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">SDP Wide</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Directory of configuration files that are scoped to all SDP Instances</p></td> </tr> </tbody> 
</table> <div class="sect3"> <h4 id="_load_order">5.2.1. Load Order</h4> <div class="olist arabic"> <ol class="arabic"> <li> <p><code>/p4/common/bin/p4_vars</code></p> </li> <li> <p><code>/p4/common/site/config/p4_vars.local</code></p> </li> <li> <p><code>/p4/common/site/config/p4_vars.local.d/*</code></p> </li> <li> <p><code>/p4/common/config/$P4SERVER.vars</code></p> </li> <li> <p><code>/p4/common/site/config/$P4SERVER.vars.local</code></p> </li> <li> <p><code>/p4/common/site/config/$P4SERVER.vars.local.d/*</code></p> </li> </ol> </div> </div> </div> <div class="sect2"> <h3 id="_setting_your_login_environment_for_convenience">5.3. Setting your login environment for convenience</h3> <div class="paragraph"> <p>Consider adding this to the perforce user’s <code>.bashrc</code> as a convenience for when you log in:</p> </div> <div class="literalblock"> <div class="content"> <pre>echo "source /p4/common/bin/p4_vars 1" >> ~/.bashrc</pre> </div> </div> <div class="paragraph"> <p>If you have multiple instances on the same machine, you might want to set up an alias or two to switch between them quickly.</p> </div> </div> <div class="sect2"> <h3 id="_configuring_protections_file_types_monitoring_and_security">5.4. Configuring protections, file types, monitoring and security</h3> <div class="paragraph"> <p>After the server instance is installed and configured, either with the Helix Installer or a manual installation, most sites will want to modify server permissions ("Protections") and security settings. Other common configuration steps include modifying the file type map and enabling process monitoring. To configure permissions, perform the following steps:</p> </div> <div class="olist arabic"> <ol class="arabic"> <li> <p>To set up protections, issue the <code>p4 protect</code> command. The protections table is displayed.</p> </li> <li> <p>Delete the following line:</p> <div class="literalblock"> <div class="content"> <pre>write user * * //depot/...</pre> </div> </div> </li> <li> <p>Define protections for your repository using groups.
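</p> <p>For illustration, a group-based protections table might contain entries like these (the group names and depot paths here are hypothetical):</p> <div class="literalblock"> <div class="content"> <pre>super user p4admin * //...
write group Dev1 * //depot/projA/...
read group Contractors * //depot/shared/...</pre> </div> </div> <p>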
Perforce uses an inclusionary model: no access is given by default; you must specifically grant access to users/groups in the protections table. It is best for performance to grant users specific access to the areas of the depot that they need, rather than granting everyone open access and then trying to remove access via exclusionary mappings in the protect table, even if that means you end up generating a larger protect table.</p> </li> <li> <p>To set the default file types, run the <code>p4 typemap</code> command and define typemap entries to override Perforce’s default behavior.</p> </li> <li> <p>Add any file type entries that are specific to your site. Suggestions:</p> <div class="ulist"> <ul> <li> <p>For already-compressed file types (such as <code>.zip</code>, <code>.gz</code>, <code>.avi</code>, <code>.gif</code>), assign a file type of <code>binary+Fl</code> to prevent p4d from attempting to compress them again before storing them.</p> </li> <li> <p>For regular binary files, add <code>binary+l</code> so that only one person at a time can check them out.</p> </li> </ul> </div> <div class="paragraph"> <p>A sample file is provided in <code>$SDP/Server/setup/typemap</code>.</p> </div> </li> </ol> </div> <div class="paragraph"> <p>If you are doing things like games development with <code>Unreal Engine</code> or <code>Unity</code>, there are specific recommended typemap entries described in Knowledge Base articles: <a href="https://portal.perforce.com/s/">Search the Knowledge Base</a></p> </div> <div class="olist arabic"> <ol class="arabic"> <li> <p>To make your changelists default to restricted (for high security environments):</p> <div class="literalblock"> <div class="content"> <pre>p4 configure set defaultChangeType=restricted</pre> </div> </div> </li> </ol> </div> </div> <div class="sect2"> <h3 id="_operating_system_configuration">5.5.
Operating system configuration</h3> <div class="paragraph"> <p>Check <a href="#_maximizing_server_performance">Chapter 8, <em>Maximizing Server Performance</em></a> for detailed recommendations.</p> </div> <div class="sect3"> <h4 id="_configuring_email_for_notifications">5.5.1. Configuring email for notifications</h4> <div class="paragraph"> <p>Use Postfix, which integrates easily with Gmail, Office 365, and other providers; just search for "postfix" together with the name of your email provider. Examples:</p> </div> <div class="ulist"> <ul> <li> <p><a href="https://www.howtoforge.com/tutorial/configure-postfix-to-use-gmail-as-a-mail-relay/" class="bare">https://www.howtoforge.com/tutorial/configure-postfix-to-use-gmail-as-a-mail-relay/</a></p> </li> <li> <p><a href="https://support.google.com/accounts/answer/185833?hl=en#zippy=%2Cwhy-you-may-need-an-app-password" class="bare">https://support.google.com/accounts/answer/185833?hl=en#zippy=%2Cwhy-you-may-need-an-app-password</a></p> </li> <li> <p><a href="https://www.middlewareinventory.com/blog/postfix-relay-office-365/#3_Office_365_SMTP_relay_Discussed_in_this_Post" class="bare">https://www.middlewareinventory.com/blog/postfix-relay-office-365/#3_Office_365_SMTP_relay_Discussed_in_this_Post</a></p> </li> </ul> </div> <div class="paragraph"> <p>Please note that for Gmail:</p> </div> <div class="ulist"> <ul> <li> <p>You must turn on 2FA for the account which is trying to create an app password.</p> </li> <li> <p>The organization must allow 2FA (2-Step Verification); this is normally turned off in Google Workspace (formerly known as G Suite).</p> </li> </ul> </div> <div class="paragraph"> <p>To test email once configured:</p> </div> <div class="literalblock"> <div class="content"> <pre>echo "Test email" | mail -s "Test email subject" user@example.com</pre> </div> </div> <div class="paragraph"> <p>If there are problems sending email, the following may help find the cause:</p> </div> <div class="literalblock"> <div class="content"> <pre>grep postfix /var/log/*
cat 
/var/log/maillog</pre> </div> </div> </div> <div class="sect3"> <h4 id="_swarm_email_configuration">5.5.2. Swarm Email Configuration</h4> <div class="paragraph"> <p>The advantage of installing Postfix is that it is easily testable from the command line as above.</p> </div> <div class="paragraph"> <p>The Swarm configuration is then a matter of editing <code>config.php</code> as below (the sender address is optional) and restarting Swarm in the normal way (resetting its cache first):</p> </div> <div class="listingblock"> <div class="content"> <pre class="highlight"><code class="language-php" data-lang="php">// this block should be a peer of 'p4'
'mail' => array(
  // 'sender' => 'swarm@my.domain', // defaults to 'notifications@hostname'
  'transport' => array(
    'name' => 'localhost', // name of SMTP host
    'host' => 'localhost', // host/IP of SMTP host
  ),
),
),</code></pre> </div> </div> <div class="paragraph"> <p>Restarting Swarm (on CentOS):</p> </div> <div class="literalblock"> <div class="content"> <pre>cd /opt/perforce/swarm/data
rm cache/*cache.php
systemctl restart httpd</pre> </div> </div> </div> <div class="sect3"> <h4 id="_configuring_pagerduty_for_notifications">5.5.3. Configuring PagerDuty for notifications</h4> <div class="paragraph"> <p>The default behavior of the SDP is to use email for delivering alerts and log files. This section details replacing email with <a href="https://www.pagerduty.com/">PagerDuty</a>.</p> </div> <div class="sect4"> <h5 id="_prerequisites">5.5.3.1.
Prerequisites</h5> <div class="ulist"> <ul> <li> <p><a href="https://www.pagerduty.com/">PagerDuty Account</a></p> </li> <li> <p><a href="https://support.pagerduty.com/docs/service-directory">PagerDuty Service</a> where SDP/Helix Core incidents will be created</p> </li> <li> <p>Events API V2 Integration added to the PagerDuty Service; this will produce an Integration Key, which will be used later</p> </li> <li> <p><a href="https://github.com/martindstone/pagerduty-cli/wiki/PagerDuty-CLI-User-Guide#installation-and-getting-started">Install PagerDuty CLI</a></p> </li> </ul> </div> </div> <div class="sect4"> <h5 id="_sdp_configuration">5.5.3.2. SDP Configuration</h5> <div class="paragraph"> <p>The following can be added to <code>/p4/common/site/config/p4_vars.local</code> to configure the SDP to use PagerDuty:</p> </div> <div class="literalblock"> <div class="content"> <pre># set this environment variable to the Integration Key that was created when adding the
# Events API V2 Integration to your PagerDuty Service
export PAGERDUTY_ROUTING_KEY="2ac2....e5c3"</pre> </div> </div> </div> <div class="sect4"> <h5 id="_optional_variables">5.5.3.3. Optional variables</h5> <div class="paragraph"> <p>The SDP will automatically set the Title of the PagerDuty Incident based on the exception that occurred.
The SDP will also include the log file from the exception (for example, the checkpoint log, p4verify log, etc.).</p> </div> <div class="paragraph"> <p>If you have multiple Helix Core servers, it will be helpful to include some additional context with the incident so you know which server the alert is coming from.</p> </div> <div class="paragraph"> <p>The following environment variable can optionally be used to add additional context to the PagerDuty Incident:</p> </div> <div class="literalblock"> <div class="content"> <pre># export PAGERDUTY_CUSTOM_FIELD=""</pre> </div> </div> <div class="sect5"> <h6 id="_example_additional_context_configuration">Example Additional Context Configuration</h6> <div class="paragraph"> <p>The following snippet will create environment variables in <code>p4_vars.local</code> that will provide additional context in each PagerDuty Incident:</p> </div> <div class="literalblock"> <div class="content"> <pre>curl -s -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2021-02-01" > /tmp/azure_metadata
cat <<-EOF >> /p4/common/site/config/p4_vars.local
export PAGERDUTY_ROUTING_KEY="2ac2....e5c3"
export VM_ID="$(jq -r '.compute.vmId' /tmp/azure_metadata)"
export REGION="$(jq -r '.compute.location' /tmp/azure_metadata)"
export AZURE_SUBSCRIPTION_ID="$(jq -r '.compute.subscriptionId' /tmp/azure_metadata)"
export PAGERDUTY_CUSTOM_FIELD=\$(cat <<-END
#############################################
Azure Subscription: \$AZURE_SUBSCRIPTION_ID
Region: \$REGION
Azure VM ID: \$VM_ID
#############################################
END
)
EOF</pre> </div> </div> <div class="paragraph"> <p>The following context will be added as a field on the PagerDuty Incident:</p> </div> <div class="literalblock"> <div class="content"> <pre>#############################################
Azure Subscription: f306878d-d321-4731-4cd3-f3afafbbd3ac
Region: eastus
Azure VM ID: 5ee13bfe-8a0c-486f-ae08-c43e44255d15
#############################################</pre>
</div> </div> </div> </div> </div> <div class="sect3"> <h4 id="_configuring_aws_simple_notification_service_sns_for_notifications">5.5.4. Configuring AWS Simple Notification Service (SNS) for notifications</h4> <div class="paragraph"> <p>The default behavior of the SDP is to use email for delivering alerts and log files. This section details replacing email with AWS SNS.</p> </div> <div class="sect4"> <h5 id="_prerequisites_2">5.5.4.1. Prerequisites</h5> <div class="ulist"> <ul> <li> <p>AWS CLI installed</p> </li> <li> <p>Authorization to <code>publish</code> to an AWS SNS topic</p> </li> </ul> </div> </div> <div class="sect4"> <h5 id="_sdp_configuration_2">5.5.4.2. SDP Configuration</h5> <div class="paragraph"> <p>The following can be added to <code>/p4/common/config/p4_1.vars</code> to configure the SDP to use SNS:</p> </div> <div class="literalblock"> <div class="content"> <pre># SNS Alert Configurations
# Two methods of authentication are supported: key pair (on prem, Azure, etc.) and IAM role (AWS deployment)
# In the case of an IAM role, the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment
# variables must not be set, not even as empty strings</pre> </div> </div> <div class="literalblock"> <div class="content"> <pre># To test SNS delivery use the following command:
aws sns publish --topic-arn $SNS_ALERT_TOPIC_ARN --subject test --message "this is a test"</pre> </div> </div> <div class="literalblock"> <div class="content"> <pre># export AWS_ACCESS_KEY_ID=""
# export AWS_SECRET_ACCESS_KEY=""</pre> </div> </div> <div class="literalblock"> <div class="content"> <pre>export AWS_DEFAULT_REGION="us-east-1"
export SNS_ALERT_TOPIC_ARN="arn:aws:sns:us-east-1:541621974560:Perforce-Notifications-SnsTopic-1FIRH0KEAXTU"</pre> </div> </div> </div> <div class="sect4"> <h5 id="_example_iam_policy">5.5.4.3.
Example IAM Policy</h5> <div class="paragraph"> <p>The following is an example policy that could be used for either an IAM Role or an IAM user with key/secret:</p> </div> <div class="literalblock"> <div class="content"> <pre>{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "sns:Publish",
            "Resource": "arn:aws:sns:us-east-1:541621974560:Perforce-Notifications-*",
            "Effect": "Allow"
        }
    ]
}</pre> </div> </div> </div> </div> </div> <div class="sect2"> <h3 id="_other_server_configurables">5.6. Other server configurables</h3> <div class="paragraph"> <p>There are various configurables that you should consider setting for your server instance.</p> </div> <div class="paragraph"> <p>Some suggestions are in the file: <code>$SDP/Server/setup/configure_new_server.sh</code></p> </div> <div class="paragraph"> <p>Review the contents and either apply individual settings manually, or edit the file and apply the newly edited version. If you have any questions, please see the <a href="https://www.perforce.com/manuals/cmdref/Content/CmdRef/configurables.configurables.html">configurables section in the Command Reference Guide appendix</a> (get the right version for your server!). You can also contact support regarding questions.</p> </div> </div> <div class="sect2"> <h3 id="_archiving_configuration_files">5.7. Archiving configuration files</h3> <div class="paragraph"> <p>Now that the server instance is running properly, copy the following configuration files to the hxdepots volume for backup:</p> </div> <div class="ulist"> <ul> <li> <p>Any init scripts used in <code>/etc/init.d</code> or any systemd scripts in <code>/etc/systemd/system</code>.</p> </li> <li> <p>A copy of the crontab file, obtained using <code>crontab -l</code>.</p> </li> <li> <p>Any other relevant configuration scripts, such as cluster configuration scripts, failover scripts, or disk failover configuration files.</p> </li> </ul> </div> </div> <div class="sect2"> <h3 id="_installing_swarm_triggers">5.8.
Installing Swarm Triggers</h3> <div class="paragraph"> <p>On the commit server (<strong>NOT</strong> the Swarm machine), set it up to connect to the Perforce package repo (if not already done). See: <a href="https://www.perforce.com/perforce-packages" class="bare">https://www.perforce.com/perforce-packages</a></p> </div> <div class="paragraph"> <p>Install the trigger package, e.g.:</p> </div> <div class="ulist"> <ul> <li> <p><code>yum install helix-swarm-triggers</code> (for the Red Hat family, i.e. RHEL, Rocky Linux, CentOS, Amazon Linux)</p> </li> <li> <p><code>apt install helix-swarm-triggers</code> (for Ubuntu)</p> </li> </ul> </div> <div class="paragraph"> <p>Then (for convenience in SDP environments):</p> </div> <div class="literalblock"> <div class="content"> <pre>sudo chown -R perforce:perforce /opt/perforce/etc</pre> </div> </div> <div class="paragraph"> <p>Then configure the triggers on the p4d server. Something like:</p> </div> <div class="literalblock"> <div class="content"> <pre>vi /opt/perforce/etc/swarm-triggers.conf</pre> </div> </div> <div class="paragraph"> <p>Make it look something like (in an SDP environment):</p> </div> <div class="literalblock"> <div class="content"> <pre>SWARM_HOST='https://swarm.p4.p4bsw.com'
SWARM_TOKEN='MY-UUID-STYLE-TOKEN'
ADMIN_USER='swarm'
ADMIN_TICKET_FILE='/p4/1/.p4tickets'
P4_PORT='ssl:1666'
P4='/p4/1/bin/p4_1'
EXEMPT_FILE_COUNT=0
EXEMPT_EXTENSIONS=''
VERIFY_SSL=1
TIMEOUT=30
IGNORE_TIMEOUT=1
IGNORE_NOSERVER=1</pre> </div> </div> <div class="paragraph"> <p>Then test that config file:</p> </div> <div class="literalblock"> <div class="content"> <pre>chmod +x /p4/sdp/Unsupported/setup/swarm_triggers_test.sh
/p4/sdp/Unsupported/setup/swarm_triggers_test.sh</pre> </div> </div> <div class="paragraph"> <p>Get that to be happy.
May require iteration of the conf file, trigger install, etc.</p> </div> <div class="paragraph"> <p>Then install the triggers on the server:</p> </div> <div class="listingblock"> <div class="content"> <pre class="highlight"><code>cd /p4/1/tmp
p4 triggers -o > temp_file.txt
/opt/perforce/swarm-triggers/bin/swarm-trigger.pl -o >> temp_file.txt
vi temp_file.txt   # Clean up formatting, make it syntactically correct.
p4 triggers -i &lt; temp_file.txt
p4 triggers -o     # Make sure it's there.</code></pre> </div> </div> <div class="paragraph"> <p>Then test!</p> </div> <div class="listingblock"> <div class="content"> <pre class="highlight"><code>mkdir /p4/1/tmp/swarm_test
cd /p4/1/tmp/swarm_test
export P4CONFIG=.p4config
echo P4CLIENT=swarm_test.$(hostname -s) >> .p4config
# Make a workspace, map View to some location where we can edit harmlessly,
# or use a stream like //sandbox/main
p4 client
p4 add chg.txt
# The important thing is '#review', which the trigger will process
p4 change -o | sed 's:&lt;enter description here&gt;:#review:' > chg.txt
p4 change -i &lt; chg.txt
p4 shelve -c CL    # Use the CL listed in output from the prior command
p4 describe -s CL  # If #review gets replaced by something like #review-12345, you're done!</code></pre> </div> </div> </div> </div> </div> <div class="sect1"> <h2 id="_backup_replication_and_recovery">6. Backup, Replication, and Recovery</h2> <div class="sectionbody"> <div class="paragraph"> <p>Perforce server instances maintain <em>metadata</em> and <em>versioned files</em>. The metadata contains all the information about the files in the depots. Metadata resides in database (db.*) files in the server instance’s root directory (P4ROOT). The versioned files contain the file changes that have been submitted to the repository. Versioned files reside on the hxdepots volume.</p> </div> <div class="paragraph"> <p>This section assumes that you understand the basics of Perforce backup and recovery.
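</p> </div> <div class="paragraph"> <p>As a quick refresher on those basics, the dry-run sketch below echoes the underlying <code>p4d</code> commands at the heart of the SDP backup cycle for instance 1 (the journal and checkpoint numbers are illustrative; remove the <code>echo</code> prefixes to run the commands for real on a server machine):</p> </div>

```shell
#!/bin/bash
# Dry-run sketch of the basic p4d operations behind the SDP backup cycle.
# Instance number (1) and journal/checkpoint numbers are illustrative.
# The "echo" prefixes make this safe to run anywhere; remove them to
# execute the commands for real.

# 1. Rotate (truncate) the active journal on the live server:
echo /p4/1/bin/p4d_1 -r /p4/1/root -jj

# 2. Replay the rotated journal into the offline database:
echo /p4/1/bin/p4d_1 -r /p4/1/offline_db -jr /p4/1/checkpoints/p4_1.jnl.1234

# 3. Create a compressed checkpoint from the offline database,
#    without stopping the live server:
echo /p4/1/bin/p4d_1 -r /p4/1/offline_db -jd -z /p4/1/checkpoints/p4_1.ckp.1235.gz
```

<div class="paragraph"> <p>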
For more information, consult the Perforce <a href="https://www.perforce.com/perforce/doc.current/manuals/p4sag/Content/P4SAG/chapter.backup.html">System Administrator’s Guide</a> and <a href="https://www.perforce.com/perforce/doc.current/manuals/p4sag/Content/P4SAG/failover.html#Failover">failover</a>.</p> </div> <div class="sect2"> <h3 id="_typical_backup_procedure">6.1. Typical Backup Procedure</h3> <div class="paragraph"> <p>The SDP’s maintenance scripts, run as <code>cron</code> tasks, periodically back up the metadata. The weekly sequence is described below.</p> </div> <div class="paragraph"> <p><strong>Seven nights a week, perform the following tasks:</strong></p> </div> <div class="olist arabic"> <ol class="arabic"> <li> <p>Truncate the active journal.</p> </li> <li> <p>Replay the journal to the offline database. (Refer to Figure 2: SDP Runtime Structure and Volume Layout for more information on the location of the live and offline databases.)</p> </li> <li> <p>Create a checkpoint from the offline database.</p> </li> <li> <p>Recreate the offline database from the last checkpoint.</p> </li> </ol> </div> <div class="paragraph"> <p><strong>Once a week, perform the following tasks:</strong></p> </div> <div class="olist arabic"> <ol class="arabic"> <li> <p>Verify all depot files.</p> </li> </ol> </div> <div class="paragraph"> <p><strong>Once every few months, perform the following tasks:</strong></p> </div> <div class="olist arabic"> <ol class="arabic"> <li> <p>Stop the live server instance.</p> </li> <li> <p>Truncate the active journal.</p> </li> <li> <p>Replay the journal to the offline database. 
(Refer to Figure 2: SDP Runtime Structure and Volume Layout for more information on the location of the live and offline databases.)</p> </li> <li> <p>Archive the live database.</p> </li> <li> <p>Move the offline database to the live database directory.</p> </li> <li> <p>Start the live server instance.</p> </li> <li> <p>Create a new checkpoint from the archive of the live database.</p> </li> <li> <p>Recreate the offline database from the last checkpoint.</p> </li> <li> <p>Verify all depots.</p> </li> </ol> </div> <div class="paragraph"> <p>This normal maintenance procedure puts the checkpoints (metadata snapshots) on the hxdepots volume, which contains the versioned files. Backing up the hxdepots volume with a normal backup utility like <em>rsync</em> preserves the critical assets necessary for recovery.</p> </div> <div class="paragraph"> <p>To ensure that the backup does not interfere with the metadata backups (checkpoints), coordinate backup of the hxdepots volume using the SDP maintenance scripts.</p> </div> <div class="paragraph"> <p>The preceding maintenance procedure minimizes service outage, because checkpoints are created from offline or saved databases while the live p4d server process is running on the live databases in P4ROOT.</p> </div> <div class="admonitionblock note"> <table> <tr> <td class="icon"> <i class="fa icon-note" title="Note"></i> </td> <td class="content"> With no additional configuration, the normal maintenance prevents loss of more than one day’s metadata changes. To provide an optimal <a href="http://en.wikipedia.org/wiki/Recovery_point_objective">Recovery Point Objective</a> (RPO), the SDP provides additional tools for replication. </td> </tr> </table> </div> </div> <div class="sect2"> <h3 id="_planning_for_ha_and_dr">6.2. 
Planning for HA and DR</h3> <div class="paragraph"> <p>The concepts for HA (High Availability) and DR (Disaster Recovery) are fairly similar - they are both types of Helix Core replica.</p> </div> <div class="paragraph"> <p>When you have server specs with the <code>Services</code> field set to <code>commit-server</code>, <code>standard</code>, or <code>edge-server</code> (see <a href="https://www.perforce.com/perforce/doc.current/manuals/p4sag/Content/P4SAG/deployment-architecture.html">deployment architectures</a>), you should consider your requirements for how to recover from a failure of any such server.</p> </div> <div class="paragraph"> <p>See also <a href="https://portal.perforce.com/s/article/5434">Replica types and use cases</a>.</p> </div> <div class="paragraph"> <p>The key issues are around ensuring that you have appropriate values for the following measures for your Helix Core installation:</p> </div> <div class="ulist"> <ul> <li> <p>RTO - Recovery Time Objective - how long will it take you to restore service from a backup?</p> </li> <li> <p>RPO - Recovery Point Objective - how much data are you prepared to risk losing if you have to fail over to a backup server?</p> </li> </ul> </div> <div class="paragraph"> <p>We need to consider planned vs unplanned failover. A planned failover may be due to upgrading the core Operating System or some other dependency in your infrastructure, or a similar activity.</p> </div> <div class="paragraph"> <p>Unplanned covers risks you are seeking to mitigate with failover:</p> </div> <div class="ulist"> <ul> <li> <p>loss of a machine, or some machine-related hardware failure (e.g.
network)</p> </li> <li> <p>loss of a VM cluster</p> </li> <li> <p>failure of storage</p> </li> <li> <p>loss of a data center or machine room</p> </li> <li> <p>etc…​</p> </li> </ul> </div> <div class="paragraph"> <p>So, if your main <code>commit-server</code> fails, how fast should you be able to be up and running again, and how much data might you be prepared to lose? What is the potential disruption to your organization if the Helix Core repository is down? How many people would be impacted in some way?</p> </div> <div class="paragraph"> <p>You also need to consider the costs of your mitigation strategies. For example, these can range from:</p> </div> <div class="ulist"> <ul> <li> <p>taking a backup once per 24 hours and requiring maybe an hour or two to restore it. Thus you might lose up to 24 hours of work for an unplanned failure, and require several hours to restore.</p> </li> <li> <p>having a high availability replica which is a mirror of the server hardware and ready to take over within minutes if required.</p> </li> </ul> </div> <div class="paragraph"> <p>Having a replica for HA or DR is likely to reduce your RPO and RTO to well under an hour (&lt;10 minutes if properly prepared for) - at the cost of the resources to run such a replica, and the management overhead to monitor it appropriately.</p> </div> <div class="paragraph"> <p>Typically we would define:</p> </div> <div class="ulist"> <ul> <li> <p>An HA replica is close to its upstream server, e.g. in the same Data Center - this minimizes the latency for replication, and reduces RPO.</p> </li> <li> <p>A DR replica is in a more remote location, so it may risk being further behind in replication (thus higher RPO), but it mitigates against catastrophic loss of a data center or similar. Note that "further behind" is still typically seconds for metadata, but can be minutes for submits with many GB of files.</p> </li> </ul> </div> <div class="sect3"> <h4 id="_further_resources">6.2.1.
Further Resources</h4> <div class="ulist"> <ul> <li> <p><a href="https://portal.perforce.com/s/article/3166">High Reliability Solutions</a></p> </li> </ul> </div> </div> <div class="sect3"> <h4 id="_creating_a_failover_replica_for_commit_or_edge_server">6.2.2. Creating a Failover Replica for Commit or Edge Server</h4> <div class="paragraph"> <p>A commit server instance is the ultimate store for submitted data, and also for any workspace state (WIP - work in progress) for users directly working with the commit server (part of the same "data set")</p> </div> <div class="paragraph"> <p>An edge server instance maintains its own copy of workspace state (WIP). If you have people connecting to an edge server, then any workspaces they create (and files they open for some action) will be only stored on the edge server. Thus it is normally recommended to have an HA backup server, so that users don’t lose their state in case of failover.</p> </div> <div class="paragraph"> <p>There is a concept of a "build edge" which is an edge server which only supports build farm users. In this scenario it may be deemed acceptable to not have an HA backup server, since in the case of failure of the edge, it can be re-seeded from the commit server. All build farm clients would be recreated from scratch so there would be no problems.</p> </div> </div> <div class="sect3"> <h4 id="_what_is_a_failover_replica">6.2.3. What is a Failover Replica?</h4> <div class="paragraph"> <p>A Failover is the hand off of the role of a master/primary/commit server from a primary server machine to a standby replica (typically on a different server machine). As part of failover processing the secondary/backup is promoted to become the new master/primary/commit server.</p> </div> <div class="paragraph"> <p>As of 2018.2 release, p4d supports a <code>p4 failover</code> command that performs a failover to a <code>standby</code> replica (i.e. 
a replica with the <code>Services:</code> field value set to <code>standby</code> or <code>forwarding-standby</code>). Such a replica performs a <code>journalcopy</code> replication of metadata, with a local pull thread to update its <code>db.*</code> files. After the failover is complete, traffic must be redirected to the server machine where the newly promoted standby server operates, e.g. with a DNS change (possibly automated with a post-failover trigger).</p> </div> <div class="paragraph"> <p>See also: <a href="https://portal.perforce.com/s/article/16462">Configuring a Helix Core Standby</a>.</p> </div> <div class="paragraph"> <p>On Linux, the SDP script <code>mkrep.sh</code> greatly simplifies the process of setting up a replica suitable for use with the <code>p4 failover</code> command. See: <a href="#_using_mkrep_sh">Section 6.3.4, “Using mkrep.sh”</a>.</p> </div> </div> <div class="sect3"> <h4 id="_mandatory_vs_non_mandatory_standbys">6.2.4. Mandatory vs Non-mandatory Standbys</h4> <div class="paragraph"> <p>You can modify the <code>Options:</code> field of the server spec of a <code>standby</code> or <code>forwarding-standby</code> replica to make it <code>mandatory</code>. This setting affects the mechanics of how failover works.</p> </div> <div class="paragraph"> <p>When a <code>standby</code> server instance is configured as mandatory, the master/commit server will wait until this server confirms it has processed journal data before allowing that journal data to be released to other replicas.
This can simplify failover if the master server is unavailable to participate in the failover, since it provides a guarantee that no downstream servers are <strong>ahead</strong> of the replica.</p> </div> <div class="paragraph"> <p>This guarantee is important, as it ensures downstream servers can simply be re-directed to point to the standby after the master server has failed over to its standby, and will carry on working without problems or need for human intervention on the servers.</p> </div> <div class="paragraph"> <p>Failovers in which the master does not participate are generally referred to as <em>unscheduled</em> or <em>reactive</em>, and are generally done in response to an unexpected situation. Failovers in which the master server is alive and well at the start of processing, and in which the master server participates in the failover, are referred to as <em>scheduled</em> or <em>planned</em>.</p> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> If a server which is marked as <code>mandatory</code> goes offline for any reason, replication to other replicas will stop. In this scenario, the server spec of the replica can be changed to <code>nomandatory</code>, and then replication will immediately resume, so long as the replication has not been offline for so long that the master server has removed numbered journals that would be needed to catch up (typically several days or weeks depending on the KEEPJNLS setting). If this happens, the p4d server logs of all impacted servers will clearly indicate the root cause, so long as the p4d versions are 2019.2 or later.
</td> </tr> </table> </div> <div class="paragraph"> <p>If set to <code>nomandatory</code>, there is no risk of delaying downstream replicas; however, there is no guarantee that they will be able to switch seamlessly over to the new server in the event of an unscheduled failover.</p> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> We recommend creating <code>mandatory</code> standby replica(s) if the server is local to its commit server. We also recommend having active monitoring in place to quickly detect replication lag or other issues. </td> </tr> </table> </div> <div class="paragraph"> <p>To change a server spec to be <code>mandatory</code> or <code>nomandatory</code>, modify the server spec with a command like <code>p4 server p4d_ha_bos</code> to edit the form, change the value in the <code>Options:</code> field to <code>mandatory</code> or <code>nomandatory</code> as desired, and then save and exit the editor.</p> </div> </div> <div class="sect3"> <h4 id="_server_host_naming_conventions">6.2.5. Server host naming conventions</h4> <div class="paragraph"> <p>A naming convention is recommended, but it is not a requirement, for SDP scripts to implement failover.</p> </div> <div class="ulist"> <ul> <li> <p>Use a name that does not indicate switchable roles, e.g. don’t indicate in the name whether a host is a master/primary or backup, or an edge server and its backup. This might otherwise lead to confusion once you have performed a failover and the host name is no longer appropriate.</p> </li> <li> <p>Use names ending in numeric designators, e.g. -01 or -05. The goal is to avoid being in a post-failover situation where a machine with <code>master</code> or <code>primary</code> in its name is actually the backup. Also, the assumption is that host names will never need to change.</p> </li> <li> <p>While you don’t want switchable roles baked into the hostname, you can have static roles, e.g. use p4d vs.
p4p in the host name (as those generally don’t change). The p4d could be primary, standby, edge, or an edge’s standby (switchable roles).</p> </li> <li> <p>Using a short geographic site tag is sometimes helpful/desirable. If used, use the same site tag used in the ServerID, e.g. aus.</p> <div class="paragraph"> <p>Valid site tags should be listed in: <code>/p4/common/config/SiteTags.cfg</code> - see <a href="#_sitetags_cfg">Section 6.3.4.1, “SiteTags.cfg”</a></p> </div> </li> <li> <p>Using a short tag to indicate the major OS version is <strong>sometimes</strong> helpful/desirable, e.g. c7 for CentOS 7, or r8 for RHEL 8. This is based on the idea that when the major OS is upgraded, you either move to new hardware, or change the host name (an exception to the rule above about never changing the hostname). This option may be overkill for many sites.</p> </li> <li> <p>End users should reference a DNS name that may include the site tag, but would exclude the number, OS indicator, and server type (<code>p4d</code>/<code>p4p</code>/<code>p4broker</code>), replacing all that with just <code>perforce</code> or optionally just <code>p4</code>.
The general idea is that users needn’t be bothered by the under-the-covers tech of whether something is a proxy or replica.</p> </li> <li> <p>For edge servers, it is advisable to include <code>edge</code> in both the host and DNS name, as users and admins need to be aware of the functional differences due to a server being an edge server.</p> </li> </ul> </div> <div class="paragraph"> <p>Examples:</p> </div> <div class="ulist"> <ul> <li> <p><code>p4d-aus-r7-03</code>, a master in Austin on RHEL 7, pointed to by a DNS name like <code>p4-aus</code>.</p> </li> <li> <p><code>p4d-aus-03</code>, a master in Austin (no indication of server OS), pointed to by a DNS name like <code>p4-aus</code>.</p> </li> <li> <p><code>p4d-aus-r7-04</code>, a standby replica in Austin on RHEL 7, not pointed to by a DNS name until failover, at which point it gets pointed to by <code>p4-aus</code>.</p> </li> <li> <p><code>p4p-syd-r8-05</code>, a proxy in Sydney on RHEL 8, pointed to by a DNS name like <code>p4-syd</code>.</p> </li> <li> <p><code>p4d-syd-r8-04</code>, a replica that replaced the proxy in Sydney, on RHEL 8, pointed to by a DNS name like <code>p4-syd</code> (same as the proxy it replaced).</p> </li> <li> <p><code>p4d-edge-tok-s12-03</code>, an edge in Tokyo running SuSE 12, pointed to by a DNS name like <code>p4edge-tok</code>.</p> </li> <li> <p><code>p4d-edge-tok-s12-04</code>, a replica of an edge in Tokyo running SuSE 12, not pointed to by a DNS name until failover, at which point it gets pointed to by <code>p4edge-tok</code>.</p> </li> </ul> </div> <div class="paragraph"> <p>FQDNs (fully qualified DNS names) of the short DNS names used in these examples would also exist, and would be based on the same short names.</p> </div> </div> </div> <div class="sect2"> <h3 id="_full_one_way_replication">6.3.
Full One-Way Replication</h3> <div class="paragraph"> <p>Perforce supports a full one-way <a href="https://www.perforce.com/perforce/doc.current/manuals/p4sag/Content/P4SAG/replication.html">replication</a> of data from a master server to a replica, including versioned files. The <a href="https://www.perforce.com/manuals/cmdref/Content/CmdRef/p4_pull.html#p4_pull">p4 pull</a> command is the replication mechanism, and a replica server can be configured to know it is a replica and use the replication command. The p4 pull mechanism requires very little configuration and no additional scripting. As this replication mechanism is simple and effective, we recommend it as the preferred replication technique. Replica servers can also be configured to only contain metadata, which can be useful for reporting or offline checkpointing purposes. See the Distributing Perforce Guide for details on setting up replica servers.</p> </div> <div class="paragraph"> <p>If you wish to use the replica as a read-only server, you can use the <a href="https://www.perforce.com/perforce/doc.current/manuals/p4sag/Content/P4SAG/chapter.broker.html">P4Broker</a> to direct read-only commands to the replica or you can use a forwarding replica. The broker can do load balancing to a pool of replicas if you need more than one replica to handle your load.</p> </div> <div class="sect3"> <h4 id="_replication_setup">6.3.1. Replication Setup</h4> <div class="paragraph"> <p>To configure a replica server, first configure a machine identically to the master server (at least as regards the link structure such as <code>/p4</code>, <code>/p4/common/bin</code> and <code>/p4/<strong><em>instance</em></strong>/*</code>), then install the SDP on it to match the master server installation. 
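</p> </div> <div class="paragraph"> <p>Before configuring replication, it can be worth sanity-checking that the expected SDP link structure is actually present on the replica machine. The following is a small sketch (not an SDP script; the paths listed assume instance 1 and should be adjusted for your instance names/numbers) that reports which of the key paths exist:</p> </div>

```shell
#!/bin/bash
# Sketch: verify the SDP link structure expected on the replica machine
# before configuring replication. Paths assume instance 1; adjust the
# list for your instance names/numbers.
missing=0
for p in /p4 /p4/common/bin /p4/1/bin /p4/1/root /p4/1/logs; do
    if [ -e "$p" ]; then
        echo "OK:      $p"
    else
        echo "MISSING: $p"
        missing=$((missing + 1))
    fi
done
echo "$missing path(s) missing"
```

<div class="paragraph"> <p>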
Once the machine and SDP install is in place, you need to configure the master server for replication.</p> </div> <div class="paragraph"> <p>Perforce supports many types of replicas suited to a variety of purposes, such as:</p> </div> <div class="ulist"> <ul> <li> <p>Real-time backup,</p> </li> <li> <p>Providing a disaster recovery solution,</p> </li> <li> <p>Load distribution to enhance performance,</p> </li> <li> <p>Distributed development,</p> </li> <li> <p>Dedicated resources for automated systems, such as build servers, and more.</p> </li> </ul> </div> <div class="paragraph"> <p>We always recommend first setting up the replica as a read-only replica and ensuring that everything is working. Once that is the case you can easily modify server specs and configurables to change it to a forwarding replica, or an edge server etc.</p> </div> </div> <div class="sect3"> <h4 id="_replication_setup_for_failover">6.3.2. Replication Setup for Failover</h4> <div class="paragraph"> <p>This is just a special case of replication, but implementing <a href="#_what_is_a_failover_replica">Section 6.2.3, “What is a Failover Replica?”</a></p> </div> <div class="paragraph"> <p>Please note the section below <a href="#_using_mkrep_sh">Section 6.3.4, “Using mkrep.sh”</a> which implements many details.</p> </div> </div> <div class="sect3"> <h4 id="_pre_requisites_for_failover">6.3.3. 
Pre-requisites for Failover</h4> <div class="paragraph"> <p>These are vital as part of your planning.</p> </div> <div class="ulist"> <ul> <li> <p>Obtain and install a license for your replica(s)</p> <div class="paragraph"> <p>Your commit or standard server has a license file (tied to IP address), while your replicas do not require one to function as replicas.</p> </div> <div class="paragraph"> <p>However, in order for a replica to function as a replacement for a commit or standard server, it must have a suitable license installed.</p> </div> <div class="paragraph"> <p>This should be requested when the replica is first created. See the form: <a href="https://www.perforce.com/support/duplicate-server-request" class="bare">https://www.perforce.com/support/duplicate-server-request</a></p> </div> </li> <li> <p>Review your authentication mechanism (LDAP etc) - is the LDAP server contactable from the replica machine (firewalls etc configured appropriately).</p> </li> <li> <p>Review all your triggers and how they are deployed - will they work on the failover host?</p> <div class="paragraph"> <p>Is the right version of Perl/Python etc correctly installed and configured on the failover host with all imported libraries?</p> </div> </li> </ul> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> TEST, TEST, TEST!!! It is important to test the above issues as part of your planning. For peace of mind you don’t want to be finding problems at the time of trying to failover for real, which may be in the middle of the night! 
</td> </tr> </table> </div> <div class="paragraph"> <p>On Linux:</p> </div> <div class="ulist"> <ul> <li> <p>Check that options such as <a href="#_ensure_transparent_huge_pages_thp_is_turned_off">Section 8.1, “Ensure Transparent Huge Pages (THP) is turned off”</a> and <a href="#_putting_server_locks_directory_into_ram">Section 8.2, “Putting server.locks directory into RAM”</a> are correctly configured for your HA server machine - otherwise you <strong>risk reduced performance</strong> after failover.</p> </li> </ul> </div> </div> <div class="sect3"> <h4 id="_using_mkrep_sh">6.3.4. Using mkrep.sh</h4> <div class="paragraph"> <p>The SDP <code>mkrep.sh</code> script should be used to expand your Helix topology, e.g. adding replicas and edge servers. For the detailed usage statement, go to <a href="#_mkrep_sh">Section 9.4.7, “mkrep.sh”</a>.</p> </div> <div class="admonitionblock note"> <table> <tr> <td class="icon"> <i class="fa icon-note" title="Note"></i> </td> <td class="content"> When creating server machines to be used as Helix servers, the server machines should be named following a well-designed host naming convention. The SDP has no dependency on the convention used, and so any existing local naming convention can be applied. The SDP includes a suggested naming convention in <a href="#_server_host_naming_conventions">Section 6.2.5, “Server host naming conventions”</a>. </td> </tr> </table> </div> <div class="sect4"> <h5 id="_sitetags_cfg">6.3.4.1. SiteTags.cfg</h5> <div class="paragraph"> <p>The <code>mkrep.sh</code> documentation references a SiteTags.cfg file used to register short tag names for geographic sites. The location is: <code>/p4/common/config/SiteTags.cfg</code></p> </div> <div class="paragraph"> <p>Your tags should use abbreviations that are meaningful to your organization.</p> </div> <div class="listingblock"> <div class="title">Example/Format</div> <div class="content"> <pre># Valid Geographic site tags.
# Each is intended to indicate a geography, and optionally a specific Data
# Center (or Computer Room, or Computer Closet) within a given geographic
# location.
#
# The format is:
# Name: Description
# The Name must be alphanumeric only. The Description may contain spaces.
# Lines starting with # and blank lines are ignored.
bej: Beijing, China
bos: Boston, MA, USA
blr: Bangalore, India
chi: Chicago greater metro area
cni: Chennai, India
pune: Pune, India
lv: Las Vegas, NV, USA
mlb: Melbourne, Australia
syd: Sydney, Australia
awsuseast1: AWS US-East-1
azuksouth: Azure UK South</pre> </div> </div> <div class="paragraph"> <p>A sample file exists at <code>/p4/common/config/SiteTags.cfg.sample</code>.</p> </div> </div> <div class="sect4"> <h5 id="_output_of_mkrep_sh">6.3.4.2. Output of <code>mkrep.sh</code></h5> <div class="paragraph"> <p>The output of <code>mkrep.sh</code> (which is also written to a log file in <code>/p4/&lt;instance&gt;/logs/mkrep.*</code>) describes a number of steps required to continue setting up the replica after the metadata configuration performed by the script is done.</p> </div> </div> </div> <div class="sect3"> <h4 id="_addition_replication_setup">6.3.5. Additional Replication Setup</h4> <div class="paragraph"> <p>In addition to the steps recommended by <code>mkrep.sh</code>, there are other steps to be aware of to prepare a replica server machine.</p> </div> </div> <div class="sect3"> <h4 id="_sdp_installation">6.3.6. SDP Installation</h4> <div class="paragraph"> <p>The SDP must first be installed on the replica server machine. If the SDP already exists on the machine but not for the current instance, then <code>mkdirs.sh</code> must be used to add a new instance to the machine.</p> </div> <div class="sect4"> <h5 id="_ssh_key_setup">6.3.6.1.
SSH Key Setup</h5> <div class="paragraph"> <p>SSH keys for the <code>perforce</code> operating system user should be set up to allow the <code>perforce</code> user to <code>ssh</code> and <code>rsync</code> among the Helix server machines in the topology. If no <code>~perforce/.ssh</code> directory exists on a machine, it can be created with this command:</p> </div> <div class="literalblock"> <div class="content"> <pre>ssh-keygen -t rsa -b 4096</pre> </div> </div> </div> </div> </div> <div class="sect2"> <h3 id="_recovery_procedures">6.4. Recovery Procedures</h3> <div class="paragraph"> <p>There are three scenarios that require you to recover server data:</p> </div> <table class="tableblock frame-all grid-all stretch"> <colgroup> <col style="width: 33.3333%;"> <col style="width: 33.3333%;"> <col style="width: 33.3334%;"> </colgroup> <thead> <tr> <th class="tableblock halign-left valign-top">Metadata</th> <th class="tableblock halign-left valign-top">Depotdata</th> <th class="tableblock halign-left valign-top">Action required</th> </tr> </thead> <tbody> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">Lost or corrupt</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Intact</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Recover metadata as described below</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">Intact</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Lost or corrupt</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Call Perforce Support</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock">Lost or corrupt</p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Lost or corrupt</p></td> <td class="tableblock halign-left valign-top"><div class="content"><div class="paragraph"> <p>Recover metadata as described below.</p> </div> <div 
class="paragraph"> <p>Recover the hxdepots volume using your normal backup utilities.</p> </div></div></td> </tr> </tbody> </table> <div class="paragraph"> <p>Restoring the metadata from a backup also optimizes the database files.</p> </div> <div class="sect3"> <h4 id="_recovering_a_master_server_from_a_checkpoint_and_journals">6.4.1. Recovering a master server from a checkpoint and journal(s)</h4> <div class="paragraph"> <p>The checkpoint files are stored in the <code>/p4/<strong><em>instance</em></strong>/checkpoints</code> directory, and the most recent checkpoint is named <code>p4_<strong><em>instance</em></strong>.ckp.<strong><em>number</em></strong>.gz</code>. Recreating up-to-date database files requires the most recent checkpoint, from <code>/p4/<strong><em>instance</em></strong>/checkpoints</code> and the journal file from <code>/p4/<strong><em>instance</em></strong>/logs</code>.</p> </div> <div class="paragraph"> <p>To recover the server database manually, perform the following steps from the root directory of the server (/p4/instance/root).</p> </div> <div class="paragraph"> <p>Assuming instance 1:</p> </div> <div class="olist arabic"> <ol class="arabic"> <li> <p>Stop the Perforce Server by issuing the following command:</p> <div class="literalblock"> <div class="content"> <pre>/p4/1/bin/p4_1 admin stop</pre> </div> </div> </li> <li> <p>Delete the old database files in the <code>/p4/1/root/save</code> directory</p> </li> <li> <p>Move the live database files (db.*) to the save directory.</p> </li> <li> <p>Use the following command to restore from the most recent checkpoint.</p> <div class="literalblock"> <div class="content"> <pre>/p4/1/bin/p4d_1 -r /p4/1/root -jr -z /p4/1/checkpoints/p4_1.ckp.####.gz</pre> </div> </div> </li> <li> <p>To replay the transactions that occurred after the checkpoint was created, issue the following command:</p> <div class="literalblock"> <div class="content"> <pre>/p4/1/bin/p4d_1 -r /p4/1/root -jr /p4/1/logs/journal</pre> 
</div> </div> </li> </ol> </div> <div class="olist arabic"> <ol class="arabic" start="6"> <li> <p>Restart your Perforce server.</p> </li> </ol> </div> <div class="paragraph"> <p>If the Perforce service starts without errors, delete the old database files from <code>/p4/instance/root/save</code>.</p> </div> <div class="paragraph"> <p>If problems are reported when you attempt to recover from the most recent checkpoint, try recovering from the preceding checkpoint and journal. If you are successful, replay the subsequent journal. If the journals are corrupted, contact <a href="mailto:support-helix-core@perforce.com">Perforce Technical Support</a>. For full details about backup and recovery, refer to the <a href="https://www.perforce.com/perforce/doc.current/manuals/p4sag/Content/P4SAG/chapter.backup.html">Perforce System Administrator’s Guide</a>.</p> </div> </div> <div class="sect3"> <h4 id="_recovering_a_replica_from_a_checkpoint">6.4.2. Recovering a replica from a checkpoint</h4> <div class="paragraph"> <p>This is very similar to creating a replica in the first place as described above.</p> </div> <div class="paragraph"> <p>If you have been running the replica crontab commands as suggested, then you will have the latest checkpoints from the master already copied across to the replica through the use of <a href="#_sync_replica_sh">Section 9.6.33, “sync_replica.sh”</a>.</p> </div> <div class="paragraph"> <p>See the steps in the script <a href="#_sync_replica_sh">Section 9.6.33, “sync_replica.sh”</a> for details (note that it deletes the state and rdb.lbr files from the replica root directory so that the replica starts replicating from the start of a journal).</p> </div> <div class="paragraph"> <p>Remember to ensure you have logged the service user in to the master server (and that the ticket is stored in the correct location as described when setting up the replica).</p> </div> </div> <div class="sect3"> <h4 id="_recovering_from_a_tape_backup">6.4.3. 
Recovering from a tape backup</h4> <div class="paragraph"> <p>This section describes how to recover from a tape or other offline backup to a new server machine if the server machine fails. The tape backup for the server is made from the hxdepots volume. The new server machine must have the same volume layout and user/group settings as the original server. In other words, the new server must be as identical as possible to the server that failed.</p> </div> <div class="paragraph"> <p>To recover from a tape backup, perform the following steps (assuming instance <code>1</code>):</p> </div> <div class="olist arabic"> <ol class="arabic"> <li> <p>Recover the hxdepots volume from your backup tape.</p> </li> <li> <p>Create the <code>/p4</code> convenience directory on the OS volume.</p> </li> <li> <p>Create the directories <code>/hxmetadata/p4/1/db1/save</code> and <code>/hxmetadata/p4/1/offline_db</code>.</p> </li> <li> <p>Create the directories <code>/hxmetadata/p4/1/db2/save</code> and <code>/hxmetadata/p4/2/offline_db</code>.</p> </li> <li> <p>Change ownership of these directories to the OS account that runs the Perforce processes.</p> </li> <li> <p>Switch to the Perforce OS account, and create a link in the <code>/p4</code> directory to <code>/hxdepots/p4/1</code>.</p> </li> <li> <p>Create a link in the <code>/p4</code> directory to <code>/hxdepots/p4/common</code>.</p> </li> <li> <p>As a super-user, reinstall and enable the Systemd service files or SysV init scripts.</p> </li> <li> <p>Find the last available checkpoint under <code>/p4/1/checkpoints</code>.</p> </li> <li> <p>Recover the latest checkpoint by running:</p> <div class="literalblock"> <div class="content"> <pre>/p4/1/bin/p4d_1 -r /p4/1/root -jr -z <last_ckp_file></pre> </div> </div> </li> <li> <p>Recover the checkpoint to the offline_db directory (assuming instance 1):</p> <div class="literalblock"> <div class="content"> <pre>/p4/1/bin/p4d_1 -r /p4/1/offline_db -jr -z <last_ckp_file></pre> </div> </div> 
</li> <li> <p>Reinstall the Perforce server license to the server root directory.</p> </li> <li> <p>Start the Perforce service by running <code>/p4/1/bin/p4d_1_init start</code>.</p> </li> <li> <p>Verify that the server instance is running.</p> </li> <li> <p>Reinstall the server crontab or scheduled tasks.</p> </li> <li> <p>Perform any other initial server machine configuration.</p> </li> <li> <p>Verify the database and versioned files by running the <code>p4verify.sh</code> script. Note that files using the <a href="https://www.perforce.com/manuals/cmdref/Content/CmdRef/file.types.synopsis.modifiers.html">+k</a> file type modifier might be reported as BAD! after being moved. Contact Perforce Technical Support for assistance in determining if these files are actually corrupt.</p> </li> </ol> </div> </div> <div class="sect3"> <h4 id="_failover_to_a_replicated_standby_machine">6.4.4. Failover to a replicated standby machine</h4> <div class="paragraph"> <p>See <a href="SDP_Failover_Guide.pdf">SDP Failover Guide (PDF)</a> or <a href="SDP_Failover_Guide.html">SDP Failover Guide (HTML)</a> for detailed steps.</p> </div> </div> </div> </div> </div> <div class="sect1"> <h2 id="_upgrades">7. Upgrades</h2> <div class="sectionbody"> <div class="paragraph"> <p>This section describes both upgrades of the SDP itself, as well as upgrades of Helix software such as p4d, p4broker, p4p, and the p4 command line client in the SDP structure.</p> </div> <div class="sect2"> <h3 id="_upgrade_order_sdp_first_then_helix_p4d">7.1. Upgrade Order: SDP first, then Helix P4D</h3> <div class="paragraph"> <p>The SDP should normally be upgraded prior to the upgrade of Helix Core (P4D). If you are upgrading P4D to or beyond P4D 2019.1 from a prior version of P4D, you <em>must</em> upgrade the SDP first. 
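</p> </div> <div class="paragraph"> <p>Before upgrading P4D, you can confirm which SDP version is currently installed by examining the SDP <code>Version</code> file (a quick sketch, assuming the standard SDP installation path):</p> </div> <div class="literalblock"> <div class="content"> <pre>cat /p4/sdp/Version</pre> </div> </div> <div class="paragraph"> <p>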
If you run multiple instances of P4D on a given machine (potentially each running different versions of P4D), upgrade the SDP first before upgrading any of the instances.</p> </div> <div class="paragraph"> <p>The SDP should also be upgraded before upgrading other Helix software on machines using the SDP, including p4d, p4p, p4broker, and p4 (the command line client).</p> </div> <div class="paragraph"> <p>Upgrading a Helix Core server instance in the SDP framework is a simple process involving a few steps.</p> </div> </div> <div class="sect2"> <h3 id="_sdp_and_p4d_version_compatibility">7.2. SDP and P4D Version Compatibility</h3> <div class="paragraph"> <p>Starting with the SDP 2020.1 release, the released versions of SDP match the released versions of P4D. So SDP r20.1 is guaranteed to work with P4D r20.1. In addition, the <a href="ReleaseNotes.html">SDP Release Notes</a> clarify all the specific versions of P4D supported.</p> </div> <div class="paragraph"> <p>The SDP is often forward- and backward-compatible with P4D versions, but for best results they should be kept in sync by upgrading SDP before P4D. This is partly because the SDP contains logic that helps upgrade P4D, which can change as P4D evolves (most recently for 2019.1).</p> </div> <div class="paragraph"> <p>The SDP is aware of the P4D version, and has backward-compatibility logic to support older versions of P4D. This is guaranteed for supported versions of P4D. Backward compatibility of SDP with older versions of P4D may extend farther back, though without the "officially supported" guarantee.</p> </div> </div> <div class="sect2"> <h3 id="_upgrading_the_sdp">7.3. Upgrading the SDP</h3> <div class="paragraph"> <p>Starting with the SDP 2021.1 release, upgrades of the SDP from 2020.1 and later use a new mechanism: the <code>sdp_upgrade.sh</code> script. 
Some highlights of the new upgrade mechanism:</p> </div> <div class="ulist"> <ul> <li> <p><strong>Automated</strong>: Upgrades from SDP 2020.1 are automated with <code>sdp_upgrade.sh</code> provided with each new version of the SDP.</p> </li> <li> <p><strong>Continuous</strong>: Each new SDP version, starting from SDP 2021.1, will maintain the capability to upgrade from all prior versions, so long as the starting version is SDP 2020.1 or later.</p> </li> <li> <p><strong>Independent</strong>: SDP upgrades will enable upgrades to new Helix Core versions, but will not directly cause Helix Core upgrades to occur immediately. Each Helix Core instance can be upgraded independently of the SDP on its own schedule.</p> </li> </ul> </div> <div class="sect3"> <h4 id="_sample_sdp_upgrade_procedure">7.3.1. Sample SDP Upgrade Procedure</h4> <div class="paragraph"> <p>For complete information, see: <a href="#_sdp_upgrade_sh">Section 9.2.3, “sdp_upgrade.sh”</a>.</p> </div> <div class="paragraph"> <p>A basic set of commands is:</p> </div> <div class="literalblock"> <div class="content"> <pre>cd /hxdepots
[[ -d downloads ]] || mkdir downloads
cd downloads
[[ -d new ]] && mv new old.$(date +'%Y%m%d-%H%M%S')
[[ -e sdp.Unix.tgz ]] && mv sdp.Unix.tgz sdp.Unix.old.$(date +'%Y%m%d-%H%M%S')
curl -L -s -O https://swarm.workshop.perforce.com/projects/perforce-software-sdp/download/downloads/sdp.Unix.tgz
ls -l sdp.Unix.tgz
mkdir new
cd new
tar -xzf ../sdp.Unix.tgz</pre> </div> </div> <div class="paragraph"> <p>After extracting the SDP tarball, cd to the directory where the <code>sdp_upgrade.sh</code> script resides, and execute it from there:</p> </div> <div class="literalblock"> <div class="content"> <pre>cd /hxdepots/downloads/new/sdp/Server/Unix/p4/common/sdp_upgrade
./sdp_upgrade.sh -man</pre> </div> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> If the <code>curl</code> command cannot be used 
(perhaps due to lack of outbound internet access), replace that step with some other means of acquiring the SDP tarball such that it lands as <code>/hxdepots/downloads/sdp.Unix.tgz</code>, and then proceed from that point forward. </td> </tr> </table> </div> <div class="sidebarblock"> <div class="content"> <div class="title">What if there is no <code>/hxdepots</code> ?</div> <div class="paragraph"> <p>If the existing SDP does not have a <code>/hxdepots</code> directory, find the correct value with this command:</p> </div> <div class="literalblock"> <div class="content"> <pre>bash -c 'cd /p4/common; d=$(pwd -P); echo ${d%/p4/common}'</pre> </div> </div> <div class="paragraph"> <p>This can be run from any shell (bash, tcsh, zsh, etc.)</p> </div> </div> </div> </div> <div class="sect3"> <h4 id="_sdp_legacy_upgrade_procedure">7.3.2. SDP Legacy Upgrade Procedure</h4> <div class="paragraph"> <p>If your current SDP is older than the 2020.1 release, see the <a href="SDP_Legacy_Upgrades.Unix.html">SDP Legacy Upgrade Guide (for Unix)</a> for information on upgrading SDP to SDP 2020.1 from any prior version (dating back to 2007).</p> </div> </div> </div> <div class="sect2"> <h3 id="_upgrading_helix_software_with_the_sdp">7.4. Upgrading Helix Software with the SDP</h3> <div class="paragraph"> <p>The following outlines the procedure for upgrading Helix binaries using the SDP scripts.</p> </div> <div class="sect3"> <h4 id="_get_latest_helix_binaries">7.4.1. 
Get Latest Helix Binaries</h4> <div class="paragraph"> <p>Acquire the latest Perforce Helix binaries to stage them for upgrade using the <a href="#_get_helix_binaries_sh">Section 9.2.1, “get_helix_binaries.sh”</a> script.</p> </div> <div class="paragraph"> <p>If you have multiple server machines with SDP, staging can be done with this script on one machine first, and then the <code>/hxdepots/sdp/helix_binaries</code> folder can be rsync’d to other machines.</p> </div> <div class="paragraph"> <p>Alternately, this script can be run on each machine, but as patches can be released at any time, running it once and then distributing the helix_binaries directory internally via rsync is preferred to ensure all machines at your site deploy with the same binary versions.</p> </div> <div class="paragraph"> <p>See <a href="#_get_helix_binaries_sh">Section 9.2.1, “get_helix_binaries.sh”</a></p> </div> </div> <div class="sect3"> <h4 id="_upgrade_each_instance">7.4.2. Upgrade Each Instance</h4> <div class="paragraph"> <p>Use the SDP <code>upgrade.sh</code> script to upgrade each instance of Helix on the current machine, using the staged binaries. The upgrade process handles all aspects of upgrading, including adjusting the database structure, executing commands to upgrade the p4d database schema, and managing the SDP symlinks in <code>/p4/common/bin</code>.</p> </div> <div class="paragraph"> <p>Instances can be upgraded independently of each other.</p> </div> <div class="paragraph"> <p>See <a href="#_upgrade_sh">Section 9.2.2, “upgrade.sh”</a>.</p> </div> </div> <div class="sect3"> <h4 id="_global_topology_upgrades_outer_to_inner">7.4.3. Global Topology Upgrades - Outer to Inner</h4> <div class="paragraph"> <p>For any given instance, be aware of the Helix topology when performing upgrades, specifically whether that instance has replicas and/or edge servers. 
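</p> </div> <div class="paragraph"> <p>One way to review the topology for an instance is to list its registered server specs (a sketch; this assumes the SDP shell environment for the instance is sourced and you are logged in as a suitably privileged user, and the <code>-J</code> option, available on newer p4d versions, adds replication status):</p> </div> <div class="literalblock"> <div class="content"> <pre>p4 servers
p4 servers -J</pre> </div> </div> <div class="paragraph"> <p>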
When replicas and edge servers exist (and are active), the order in which <code>upgrade.sh</code> must be run on different server machines matters. Perform upgrades following an "outer to inner" strategy.</p> </div> <div class="paragraph"> <p>For example, say for SDP instance 1, your site has the following server machines:</p> </div> <div class="ulist"> <ul> <li> <p>bos-helix-01 - The master (in Boston, USA)</p> </li> <li> <p>bos-helix-02 - Replica of master (in Boston, USA)</p> </li> <li> <p>nyc-helix-03 - Replica of master (in New York, USA)</p> </li> <li> <p>syd-helix-04 - Edge Server (in Sydney, AU)</p> </li> <li> <p>syd-helix-05 - Replica of Sydney edge (in Sydney)</p> </li> </ul> </div> <div class="paragraph"> <p>Envision the above topology with the master server in the center, and two concentric circles.</p> </div> <div class="paragraph"> <p>The Replica of the Sydney edge would be done first, as it is by itself in the outermost circle.</p> </div> <div class="paragraph"> <p>The Edge server and two Replicas of the master are all at the next inner circle. 
So bos-helix-02, nyc-helix-03, and syd-helix-04 could be upgraded in any order with respect to each other, or even simultaneously, as they are in the same circle.</p> </div> <div class="paragraph"> <p>The master is the innermost, and would be upgraded last.</p> </div> <div class="paragraph"> <p>A few standards need to be in place to make this straightforward:</p> </div> <div class="ulist"> <ul> <li> <p>The <code>perforce</code> operating system user would have properly configured SSH keys to allow passwordless ssh from the master to all other servers.</p> </li> <li> <p>The <code>perforce</code> user shell environment (<code>~/.bash_profile</code> and <code>~/.bashrc</code>) would ensure that the SDP shell environment is automatically sourced.</p> </li> </ul> </div> <div class="paragraph"> <p>The Helix global topology upgrade could then be done as follows, starting as <code>perforce@bos-helix-01</code>:</p> </div> <div class="literalblock"> <div class="content"> <pre>cd /p4/sdp/helix_binaries
./get_helix_binaries.sh
rsync -a /p4/sdp/helix_binaries/ syd-helix-05:/p4/sdp/helix_binaries
rsync -a /p4/sdp/helix_binaries/ syd-helix-04:/p4/sdp/helix_binaries
rsync -a /p4/sdp/helix_binaries/ nyc-helix-03:/p4/sdp/helix_binaries
rsync -a /p4/sdp/helix_binaries/ bos-helix-02:/p4/sdp/helix_binaries</pre> </div> </div> <div class="paragraph"> <p>Then do a preview of the upgrade on all machines, in outer-to-inner order:</p> </div> <div class="literalblock"> <div class="content"> <pre>ssh syd-helix-05 upgrade.sh
ssh syd-helix-04 upgrade.sh
ssh nyc-helix-03 upgrade.sh
ssh bos-helix-02 upgrade.sh
ssh bos-helix-01 upgrade.sh</pre> </div> </div> <div class="paragraph"> <p>On each machine, check for a message in the output that contains <code>Success: Finished</code>. 
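</p> </div> <div class="paragraph"> <p>Since the same outer-to-inner host list drives both the preview and the actual upgrade, it can help to generate both command sets from one place. The following is an illustrative sketch using the example hostnames above; <code>print_upgrade_plan</code> is a hypothetical helper, not an SDP script:</p> </div> <div class="listingblock"> <div class="content"> <pre class="highlight"><code># Emit the upgrade commands in outer-to-inner order for the example topology.
# Pass "-y" to generate the actual (non-preview) commands.
print_upgrade_plan() {
  for host in syd-helix-05 syd-helix-04 nyc-helix-03 bos-helix-02 bos-helix-01; do
    echo "ssh $host upgrade.sh${1:+ $1}"
  done
}
print_upgrade_plan       # preview commands
print_upgrade_plan -y    # actual upgrade commands</code></pre> </div> </div> <div class="paragraph"> <p>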
If that looks good, then proceed to execute the actual upgrades:</p> </div> <div class="literalblock"> <div class="content"> <pre>ssh syd-helix-05 upgrade.sh -y
ssh syd-helix-04 upgrade.sh -y
ssh nyc-helix-03 upgrade.sh -y
ssh bos-helix-02 upgrade.sh -y
ssh bos-helix-01 upgrade.sh -y</pre> </div> </div> <div class="paragraph"> <p>As with the preview, check for a message in the output that contains <code>Success: Finished</code>.</p> </div> </div> </div> <div class="sect2"> <h3 id="_database_modifications">7.5. Database Modifications</h3> <div class="paragraph"> <p>Occasionally modifications are made to the Perforce database from one release to another. For example, server upgrades and some recovery procedures modify the database.</p> </div> <div class="paragraph"> <p>When upgrading the server, replaying a journal patch, or performing any activity that modifies the db.* files, you must ensure that the offline checkpoint process is functioning correctly so that the files in the offline_db directory match the ones in the live server directory.</p> </div> <div class="paragraph"> <p>Normally, upgrades to the offline_db after a P4D upgrade are applied by rotating the journal in the normal way and applying it to the offline_db.</p> </div> <div class="paragraph"> <p>In some cases it is necessary to restart the offline checkpoint process, and the easiest way is to run the live_checkpoint script after modifying the db.* files, as follows:</p> </div> <div class="literalblock"> <div class="content"> <pre>/p4/common/bin/live_checkpoint.sh 1</pre> </div> </div> <div class="paragraph"> <p>This script makes a new checkpoint of the modified database files in the live <code>root</code> directory, then recovers that checkpoint to the <code>offline_db</code> directory so that both directories are in sync. 
This script can also be used anytime to create a checkpoint of the live database.</p> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> Please note the warnings about how long this process may take at <a href="#_live_checkpoint_sh">Section 9.4.6, “live_checkpoint.sh”</a>. </td> </tr> </table> </div> <div class="paragraph"> <p>This command should be run when an error occurs during offline checkpointing. It restarts the offline checkpoint process from the live database files to bring the offline copy back in sync. If the live checkpoint script fails, contact Perforce Consulting at <a href="mailto:consulting@perforce.com">consulting@perforce.com</a>.</p> </div> </div> </div> </div> <div class="sect1"> <h2 id="_maximizing_server_performance">8. Maximizing Server Performance</h2> <div class="sectionbody"> <div class="paragraph"> <p>The following sections provide some guidelines for maximizing the performance of the Perforce Helix Core Server, using tools provided by the SDP. More information on this topic can be found in the <a href="https://portal.perforce.com/s/article/2529">Knowledge Base</a>.</p> </div> <div class="sect2"> <h3 id="_ensure_transparent_huge_pages_thp_is_turned_off">8.1. Ensure Transparent Huge Pages (THP) is turned off</h3> <div class="paragraph"> <p>For reference, see the <a href="https://portal.perforce.com/s/article/3005">KB Article on Platform Notes</a>.</p> </div> <div class="paragraph"> <p>There is a (now deprecated) script in the SDP which will do this:</p> </div> <div class="literalblock"> <div class="content"> <pre>/p4/sdp/Server/Unix/setup/os_tweaks.sh</pre> </div> </div> <div class="paragraph"> <p>It needs to be run as <code>root</code> or using <code>sudo</code>. 
This will not persist after the system is rebooted, and is thus no longer the recommended option.</p> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> We recommend the use of <code>tuned</code> instead of the above, since it will persist after reboots. </td> </tr> </table> </div> <div class="paragraph"> <p>Install as appropriate for your Linux distribution (as <code>root</code>):</p> </div> <div class="literalblock"> <div class="content"> <pre>yum install tuned</pre> </div> </div> <div class="paragraph"> <p>or</p> </div> <div class="literalblock"> <div class="content"> <pre>apt-get install tuned</pre> </div> </div> <div class="olist arabic"> <ol class="arabic"> <li> <p>Create a customized <code>tuned</code> profile with THP disabled. Create a new directory under <code>/etc/tuned</code> with the desired profile name:</p> <div class="literalblock"> <div class="content"> <pre>mkdir /etc/tuned/nothp_profile</pre> </div> </div> </li> <li> <p>Then create a new <code>tuned.conf</code> file for <code>nothp_profile</code>, and insert the new tuning info:</p> <div class="listingblock"> <div class="content"> <pre class="highlight"><code>cat <<EOF > /etc/tuned/nothp_profile/tuned.conf
[main]
include= throughput-performance

[vm]
transparent_hugepages=never
EOF</code></pre> </div> </div> </li> <li> <p>Make the <code>tuned.conf</code> file executable:</p> <div class="literalblock"> <div class="content"> <pre>chmod +x /etc/tuned/nothp_profile/tuned.conf</pre> </div> </div> </li> <li> <p>Enable <code>nothp_profile</code> using the <code>tuned-adm</code> command:</p> <div class="literalblock"> <div class="content"> <pre>tuned-adm profile nothp_profile</pre> </div> </div> </li> <li> <p>This change will immediately take effect and persist after reboots. 
To verify whether THP is disabled, run the command below:</p> <div class="literalblock"> <div class="content"> <pre>cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]</pre> </div> </div> </li> </ol> </div> </div> <div class="sect2"> <h3 id="_putting_server_locks_directory_into_ram">8.2. Putting server.locks directory into RAM</h3> <div class="paragraph"> <p>The <code>server.locks</code> directory is maintained in the $P4ROOT (so <code>/p4/1/root</code>) for a running server instance. This directory contains a tree of 0-length files (or 17 byte files in earlier p4d versions) used for lock coordination amongst p4d processes.</p> </div> <div class="paragraph"> <p>This directory can be removed every time the p4d instance is restarted, so it is safe to put it into a tmpfs filesystem (which by its nature does not survive a reboot).</p> </div> <div class="paragraph"> <p>Even on a large installation with many hundreds or thousands of users, this directory is unlikely to exceed 64M. The files in this directory are 17 or 0 bytes depending on the p4d version; space is also needed for inodes.</p> </div> <div class="paragraph"> <p>To do this, first determine if the setting will be global for all p4d servers at your site, or will be determined on a per-server machine basis. 
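</p> </div> <div class="paragraph"> <p>Before changing anything, you can check whether the configurable is already set, and at which level, with a query like this (a sketch; it assumes the SDP shell environment for the instance is sourced):</p> </div> <div class="literalblock"> <div class="content"> <pre>p4 configure show server.locks.dir</pre> </div> </div> <div class="paragraph"> <p>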
If set globally, the per-machine configuration described below MUST be done on all p4d server machines.</p> </div> <div class="paragraph"> <p>This should be done in a scheduled maintenance window.</p> </div> <div class="paragraph"> <p>For each p4d server machine (<strong>all</strong> server machines if you intend to make this a global setting), do the following as user <code>root</code>:</p> </div> <div class="olist arabic"> <ol class="arabic"> <li> <p>Create a local directory mount point, and change owner/group to <code>perforce:perforce</code> (or <code>$OSUSER</code> if SDP config specifies a different OS user, and whatever group is used):</p> <div class="literalblock"> <div class="content"> <pre>mkdir /hxserverlocks
chown perforce:perforce /hxserverlocks</pre> </div> </div> </li> <li> <p>Add a line to <code>/etc/fstab</code> (adjusting appropriately if <code>$OSUSER</code> and group are set to something other than <code>perforce:perforce</code>):</p> <div class="literalblock"> <div class="content"> <pre>HxServerLocks /hxserverlocks tmpfs uid=perforce,gid=perforce,size=64M,mode=0700 0 0</pre> </div> </div> </li> </ol> </div> <div class="paragraph"> <p>Note: The <code>64M</code> in the above example is suitable for many sites, including large ones. 
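</p> </div> <div class="paragraph"> <p>To sanity-check a size choice, you can measure the space and file count of the existing <code>server.locks</code> directory on a busy server (a sketch; instance <code>1</code> assumed):</p> </div> <div class="literalblock"> <div class="content"> <pre>du -sh /p4/1/root/server.locks
find /p4/1/root/server.locks -type f | wc -l</pre> </div> </div> <div class="paragraph"> <p>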
For servers with less available RAM, a smaller value is recommended, but no less than 128K.</p> </div> <div class="paragraph"> <p>If multiple SDP instances are operated on the machine, the value must be large enough for all instances.</p> </div> <div class="olist arabic"> <ol class="arabic"> <li> <p>Mount the storage volume:</p> <div class="literalblock"> <div class="content"> <pre>mount -a</pre> </div> </div> </li> <li> <p>Check that it looks correct and has correct ownership (<code>perforce</code> or <code>$OSUSER</code>):</p> <div class="literalblock"> <div class="content"> <pre>df -h
ls -la /hxserverlocks</pre> </div> </div> </li> </ol> </div> <div class="paragraph"> <p>As user <code>perforce</code> (or <code>$OSUSER</code>), set the configurable <code>server.locks.dir</code>. This will be set in one of two ways, depending on whether it is to apply globally or on a per-server-machine basis.</p> </div> <div class="paragraph"> <p>First, set the shell environment for your instance:</p> </div> <div class="literalblock"> <div class="content"> <pre>source /p4/common/bin/p4_vars N</pre> </div> </div> <div class="paragraph"> <p>Replace <code>N</code> with your instance name; <code>1</code> by default.</p> </div> <div class="paragraph"> <p>To set <code>server.locks.dir</code> globally, do:</p> </div> <div class="literalblock"> <div class="content"> <pre>p4 configure set server.locks.dir="/hxserverlocks${P4HOME}/server.locks"</pre> </div> </div> <div class="paragraph"> <p>Or, to set it on a per-server basis:</p> </div> <div class="literalblock"> <div class="content"> <pre>p4 configure set ${SERVERID}#server.locks.dir=/hxserverlocks${P4HOME}/server.locks</pre> </div> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> If you set this globally (without the <code>serverid#</code> prefix), then you must ensure that all server machines running p4d, including replicas and edge servers, have a similarly named 
directory available (or bad things will happen!). </td> </tr> </table> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> Consider failover options. A failover will, by nature, change the ServerID on a given machine. If <code>server.locks.dir</code> is set globally, and all machines have the HxServerLocks configuration done as noted above, then the <code>server.locks.dir</code> setting is fully accounted for, and will not cause a problem in a failover situation. </td> </tr> </table> </div> <div class="paragraph"> <p>If <code>server.locks.dir</code> is set on a per-machine basis, then you should ensure that every standby server has the same configuration with respect to <code>server.locks.dir</code> and the HxServerLocks filesystem as its target server. So any standby servers replicating from a commit server should have the same configuration as the commit server, and any standby servers replicating from an edge server should have the same configuration as the target edge server. For simplicity, using a global setting should be considered.</p> </div> <div class="paragraph"> <p>If you are defining server machine templates (such as an AMI in AWS or with Terraform or similar), the HxServerLocks configuration can and should be accounted for in the system template.</p> </div> </div> <div class="sect2"> <h3 id="_installing_monitoring_packages">8.3. 
Installing monitoring packages</h3> <div class="paragraph"> <p>The <code>sysstat</code> and <code>sos</code> packages are recommended for helping investigate any performance issues on a server.</p> </div> <div class="literalblock"> <div class="content"> <pre>yum install sysstat sos</pre> </div> </div> <div class="paragraph"> <p>or</p> </div> <div class="literalblock"> <div class="content"> <pre>apt install sysstat sos</pre> </div> </div> <div class="paragraph"> <p>Then enable it:</p> </div> <div class="literalblock"> <div class="content"> <pre>systemctl enable --now sysstat</pre> </div> </div> <div class="paragraph"> <p>The reports are text based, but you can use kSar (<a href="https://github.com/vlsi/ksar" class="bare">https://github.com/vlsi/ksar</a>) to visualize the data. If installed before <code>sosreport</code> is run, <code>sosreport</code> will include the <code>sysstat</code> data.</p> </div> <div class="paragraph"> <p>We also recommend <code>P4prometheus</code> - <a href="https://github.com/perforce/p4prometheus" class="bare">https://github.com/perforce/p4prometheus</a>. See the <a href="https://github.com/perforce/p4prometheus/blob/master/INSTALL.md#automated-script-installation">automated script installer for SDP instances</a>, which makes it easy to install <code>node_exporter</code>, <code>p4prometheus</code>, and monitoring scripts in the <code>crontab</code>.</p> </div> <div class="paragraph"> <p>See an example of <a href="https://brian-candler.medium.com/interpreting-prometheus-metrics-for-linux-disk-i-o-utilization-4db53dfedcfc">interpreting Prometheus metrics</a>.</p> </div> </div> <div class="sect2"> <h3 id="_optimizing_the_database_files">8.4. Optimizing the database files</h3> <div class="paragraph"> <p>The Perforce Server’s database is composed of b-tree files. The server does not fully rebalance and compress them during normal operation. To optimize the files, you must checkpoint and restore the server. 
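</p> </div> <div class="paragraph"> <p>To see which database tables are largest (and thus which would benefit most from a checkpoint and restore), list the db.* files by size (a sketch; instance <code>1</code> assumed):</p> </div> <div class="literalblock"> <div class="content"> <pre>ls -lhS /p4/1/root/db.* | head</pre> </div> </div> <div class="paragraph"> <p>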
This normally only needs to be done every few months.</p> </div> <div class="paragraph"> <p>To minimize the size of backup files and maximize server performance, minimize the size of the db.have and db.label files.</p> </div> </div> <div class="sect2"> <h3 id="_p4v_performance_settings">8.5. P4V Performance Settings</h3> <div class="paragraph"> <p>These are covered in: <a href="https://portal.perforce.com/s/article/2878" class="bare">https://portal.perforce.com/s/article/2878</a></p> </div> </div> <div class="sect2"> <h3 id="_proactive_performance_maintenance">8.6. Proactive Performance Maintenance</h3> <div class="paragraph"> <p>This section describes some things that can be done proactively to enhance scalability and maintain performance.</p> </div> <div class="sect3"> <h4 id="_limiting_large_requests">8.6.1. Limiting large requests</h4> <div class="paragraph"> <p>To prevent large requests from overwhelming the server, you can limit the amount of data and time allowed per query by setting the MaxResults, MaxScanRows, and MaxLockTime parameters to the lowest settings that do not interfere with normal daily activities. As a good starting point, set MaxScanRows to MaxResults * 3; set MaxResults to slightly larger than the maximum number of files the users need to be able to sync to do their work; and set MaxLockTime to 30000 milliseconds. These values must be adjusted up as the size of your server and the number of revisions of the files grow. To simplify administration, assign limits to groups rather than individual users.</p> </div> <div class="paragraph"> <p>To prevent users from inadvertently accessing large numbers of files, define their client view to be as narrow as possible, considering the requirements of their work. Similarly, limit users' access in the protections table to the smallest number of directories that are required for them to do their job.</p> </div> <div class="paragraph"> <p>Finally, keep triggers simple. 
Complex triggers increase load on the server.</p> </div> </div> <div class="sect3"> <h4 id="_offloading_remote_syncs">8.6.2. Offloading remote syncs</h4> <div class="paragraph"> <p>For remote users who need to sync large numbers of files, Perforce offers a <a href="https://www.perforce.com/perforce/doc.current/manuals/p4sag/Content/P4SAG/chapter.proxy.html">proxy server</a>. P4P, the Perforce Proxy, is run on a machine that is on the remote users' local network. The Perforce Proxy caches file revisions, serving them to the remote users and diverting that load from the main server.</p> </div> <div class="paragraph"> <p>P4P is included in the Windows installer. To launch P4P on Unix machines, copy the <code>/p4/common/etc/init.d/p4p_1_init</code> script to <code>/p4/1/bin/p4p_1_init</code>. Then review and customize the script to specify your server volume names and directories.</p> </div> <div class="paragraph"> <p>P4P does not require special hardware, but it can be quite CPU-intensive when working with binary files, which are expensive to attempt to compress. It doesn’t need to be backed up. If the P4P instance isn’t working, users can switch their port back to the main server and continue working until the instance of P4P is fixed.</p> </div> </div> </div> </div> <div class="sect1"> <h2 id="_tools_and_scripts">9. Tools and Scripts</h2> <div class="sectionbody"> <div class="paragraph"> <p>This section describes the various scripts and files provided as part of the SDP package.</p> </div> <div class="sect2"> <h3 id="_general_sdp_usage">9.1. General SDP Usage</h3> <div class="paragraph"> <p>This section presents an overview of the SDP scripts and tools, with details covered in subsequent sections.</p> </div> <div class="sect3"> <h4 id="_linux">9.1.1. Linux</h4> <div class="paragraph"> <p>Most scripts and tools reside in <code>/p4/common/bin</code>. The <code>/p4/<instance>/bin</code> directory (e.g. 
<code>/p4/1/bin</code>) contains scripts or links that are specific to that instance, such as wrappers for the p4d executable.</p> </div> <div class="paragraph"> <p>Older versions of the SDP required you to always run important administrative commands using the <code>p4master_run</code> script, and specify fully qualified paths. This script loads environment information from <code>/p4/common/bin/p4_vars</code>, the central environment file of the SDP, ensuring a controlled environment. The <code>p4_vars</code> file includes instance-specific environment data from <code>/p4/common/config/p4_<strong><em>instance</em></strong>.vars</code>, e.g. <code>/p4/common/config/p4_1.vars</code>. The <code>p4master_run</code> script is still used when running p4 commands against the server unless you set up your environment first by sourcing p4_vars with the instance as a parameter (for bash shell: <code>source /p4/common/bin/p4_vars 1</code>). Administrative scripts, such as <code>daily_checkpoint.sh</code>, no longer need to be called with <code>p4master_run</code>; they just need the instance number passed to them as a parameter.</p> </div> <div class="paragraph"> <p>When invoking a Perforce command directly on the server machine, use the p4_<strong><em>instance</em></strong> wrapper that is located in <code>/p4/<strong><em>instance</em></strong>/bin</code>. This wrapper invokes the correct version of the p4 client for the instance. The use of these wrappers enables easy upgrades, because the wrapper is a link to the correct version of the p4 client. There is a similar wrapper for the p4d executable, called p4d_<strong><em>instance</em></strong>.</p> </div> <div class="admonitionblock note"> <table> <tr> <td class="icon"> <i class="fa icon-note" title="Note"></i> </td> <td class="content"> This wrapper is important to handle case sensitivity in a consistent manner, e.g. when running a Unix server in case-insensitive mode. 
If you just execute <code>p4d</code> directly when it should be case-insensitive, then you may cause problems, or commands will fail. </td> </tr> </table> </div> <div class="paragraph"> <p>Below are some usage examples for instance 1.</p> </div> <table class="tableblock frame-all grid-all stretch"> <colgroup> <col style="width: 50%;"> <col style="width: 50%;"> </colgroup> <thead> <tr> <th class="tableblock halign-left valign-top"><em>Example</em></th> <th class="tableblock halign-left valign-top"><em>Remarks</em></th> </tr> </thead> <tbody> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/p4/common/bin/p4master_run 1 /p4/1/bin/p4_1 admin stop</code></p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Run <code>p4 admin stop</code> on instance 1</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/p4/common/bin/live_checkpoint.sh 1</code></p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Take a checkpoint of the live database on instance 1</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/p4/common/bin/p4login 1</code></p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Log in as the perforce user (superuser) on instance 1.</p></td> </tr> </tbody> </table> <div class="paragraph"> <p>Some maintenance scripts can be run from any client workspace, if the user has administrative access to Perforce.</p> </div> </div> <div class="sect3"> <h4 id="_monitoring_sdp_activities">9.1.2. 
Monitoring SDP activities</h4> <div class="paragraph"> <p>The important SDP maintenance and backup scripts generate email notifications when they complete.</p> </div> <div class="paragraph"> <p>For further monitoring, you can consider options such as:</p> </div> <div class="ulist"> <ul> <li> <p>Making the SDP log files available via a password-protected HTTP server.</p> </li> <li> <p>Directing the SDP notification emails to an automated system that interprets the logs.</p> </li> </ul> </div> </div> </div> <div class="sect2"> <h3 id="_upgrade_scripts">9.2. Upgrade Scripts</h3> <div class="sect3"> <h4 id="_get_helix_binaries_sh">9.2.1. get_helix_binaries.sh</h4> <div class="listingblock"> <div class="title">Usage</div> <div class="content"> <pre class="highlight"><code>USAGE for get_helix_binaries.sh v1.4.0: get_helix_binaries.sh [-r <HelixMajorVersion>] [-b <Binary1>,<Binary2>,...] [-sbd <StageBinDir>] [-n] [-D] or get_helix_binaries.sh -h|-man DESCRIPTION: This script acquires Perforce Helix binaries from the Perforce FTP server. The four Helix binaries that can be acquired are: * p4, the command line client * p4d, the Helix Core server * p4p, the Helix Proxy * p4broker, the Helix Broker This script gets the latest patch of binaries for the current major Helix version. It is intended to acquire the latest patch for an existing install, or to get initial binaries for a fresh new install. It must be run from the /hxdepots/sdp/helix_binaries directory (or similar; the /hxdepots directory is the default but is subject to local configuration). The helix_binaries directory is used for staging binaries for later upgrade with the SDP 'upgrade.sh' script (documented separately). This helix_binaries directory is used to stage binaries on the current machine, while the 'upgrade.sh' script updates a single SDP instance (of which there might be several on a machine). The helix_binaries directory may not be in the PATH. 
As a safety feature, the 'verify_sdp.sh' will report an error if the 'p4d' binary is found outside /p4/common/bin in the PATH. The SDP 'upgrade.sh' check uses 'verify_sdp.sh' as part of its preflight checks, and will refuse to upgrade if any 'p4d' is found outside /p4/common/bin. When a newer major version of Helix binaries is needed, this script should not be modified directly. Instead, the recommended approach is to upgrade the SDP to get the latest version of SDP first, which will include a newer version of this script, as well as the latest 'upgrade.sh'. The 'upgrade.sh' script is updated with each major SDP version to be aware of any changes in the upgrade procedure for the corresponding p4d version. Upgrading SDP first ensures you have a version of the SDP that works with newer versions of p4d and other Helix binaries. OPTIONS: -r <HelixMajorVersion> Specify the Helix Version, using the short form. The form is rYY.N, e.g. r21.2 to denote the 2021.2 release. The default is: r23.2 -b <Binary1>[,<Binary2>,...] Specify a comma-delimited list of Helix binaries. The default is: p4 p4d p4broker p4p -sbd <StageBinDir> Specify the staging directory to install downloaded binaries. By default, this script downloads files into the current directory, which is expected and required to be /p4/sdp/helix_binaries. Documented workflows for using this script involve first cd'ing to that directory. Using this option disables the expected directory check and allows binaries to be installed in any directory. -n Specify the '-n' (No Operation) option to show the commands needed to fetch the Helix binaries from the Perforce FTP server without attempting to execute them. -D Set extreme debugging verbosity using bash 'set -x' mode. HELP OPTIONS: -h Display short help message -man Display this manual page EXAMPLES: Note: All examples assume the SDP is in the standard location, /hxdepots/sdp. 
Example 1 - Typical Usage with no arguments: cd /hxdepots/sdp/helix_binaries ./get_helix_binaries.sh This acquires the latest patch of all 4 binaries for the r23.2 release (aka 2023.2). Example 2 - Specifying the major version: cd /hxdepots/sdp/helix_binaries ./get_helix_binaries.sh -r r21.2 This gets the latest patch for the 2021.2 release of all 4 binaries. Note: Only supported Helix binaries are guaranteed to be available from the Perforce FTP server. Note: Only the latest patch of any given binary is available from the Perforce FTP server. Example 3 - Sample getting r22.2 and skipping the proxy binary (p4p): cd /hxdepots/sdp/helix_binaries ./get_helix_binaries.sh -r r22.2 -b p4,p4d,p4broker Example 4 - Install r23.2 in a non-default directory. cd /any/directory/you/want ./get_helix_binaries.sh -r r23.2 -sbd . or: ./get_helix_binaries.sh -r r23.2 -sbd /any/directory/you/want DEPENDENCIES: This script requires outbound internet access. Depending on your environment, it may also require HTTPS_PROXY to be defined, or may not work at all. If this script doesn't work due to lack of outbound internet access, it is still useful for illustrating the locations on the Perforce FTP server where Helix Core binaries can be found. If outbound internet access is not available, use the '-n' flag to see where on the Perforce FTP server the files must be pulled from, and then find a way to get the files from the Perforce FTP server to the correct directory on your local machine, /hxdepots/sdp/helix_binaries by default. EXIT CODES: An exit code of 0 indicates no errors were encountered. A non-zero exit code indicates errors were encountered.</code></pre> </div> </div> </div> <div class="sect3"> <h4 id="_upgrade_sh">9.2.2. 
upgrade.sh</h4> <div class="paragraph"> <p>The <code>upgrade.sh</code> script is used to upgrade <code>p4d</code> and other Perforce Helix binaries on a given server machine.</p> </div> <div class="paragraph"> <p>The links for different versions of <code>p4d</code> are described in <a href="#_p4d_versions_and_links">Section A.1.3, “P4D versions and links”</a>.</p> </div> <div class="listingblock"> <div class="title">Usage</div> <div class="content"> <pre class="highlight"><code>USAGE for upgrade.sh v4.12.2: upgrade.sh <instance> [-p|-I] [-M] [-Od] [-Osp] [-c] [-y] [-L <log>] [-d|-D] or upgrade.sh [-h|-man] DESCRIPTION: This script upgrades the following Helix Core software: * p4d, the Perforce Helix Core server * p4broker, the Helix Broker server * p4p, the Helix Proxy server * p4, the command line client The preferred process for using this script is to start with the services to be upgraded (p4d, p4broker, and/or p4p) up and running at the start of processing. The p4d service must be online if it is to be upgraded. Details of each upgrade are described below. Prior to executing any upgrades, a preflight check is done to help ensure upgrades will go smoothly. Also, checks are done to determine what (if any) of the above software products need to be updated. To prepare for an upgrade, new binaries must be updated in the /p4/sdp/helix_binaries directory. This is generally done using the get_helix_binaries.sh script in that directory. Binaries in this directory are not referenced by live running servers, and so it is safe to upgrade files in this directory to stage for a future upgrade at any time. Also, the SDP standard PATH does not include this directory, as verified by the verify_sdp.sh script. THE INSTANCE BIN DIR The 'instance bin' directory, /p4/<instance>/bin, (e.g. /p4/1/bin for instance 1), is expected to contain *_init scripts for services that operate on the current machine. 
For example, a typical commit server machine for instance 1 might have the following in /p4/1/bin: * p4broker_1_init script * p4broker_1 symlink * p4d_1_init script * p4d_1 symlink or script * p4_1 symlink (a reference to the 'p4' command line client) A server machine for instance 1 that runs only the proxy server would have the following in /p4/1/bin: * p4p_1_init script * p4p_1 symlink * p4_1 symlink The instance bin directory is never modified by the 'upgrade.sh' script. The addition of new binaries and update of symlinks occur in . The existence of *_init scripts for any given binary determines whether this script attempts to manage the service on a given machine, stopping it before upgrades, restarting it afterward, and other processing in the case of p4d. Note that Phase 2, adding new binaries and updating symlinks, will occur for all binaries for which new staged versions are available, regardless of whether they are operational on the given machine. THE COMMON DIR This script performs its operations in the SDP common bin dir, . Unlike the instance bin directory, the directory is expected to be identical across all machines in a topology. Scripts and symlinks should always be the same, with only temporary differences while global topology upgrades are in progress. Thus, all binaries available to be upgraded will be upgraded in Phase 2, even if the binary does not operate on the current machine. For example, if a new version of 'p4p' binary is available, a new version will be copied to and symlink references updated there. However, the p4p binary will not be stopped/started. GENERAL UPGRADE PROCESS This script determines what binaries need to be upgraded, based on what new binaries are available in the /p4/sdp/helix_binaries directory compared to what binaries the current instance uses. There are 5 potential phases. Which phases execute depend on the set of binaries being upgraded. The phases are: * PHASE 1 - Establish a clean rollback point. 
This phase executes on the master if p4d is upgraded. * PHASE 2 - Install new binaries and update SDP symlinks in . This phase executes for all upgrades. * PHASE 3 - Stop services to be upgraded. This phase executes for all upgrades involving p4d, p4p, p4broker. Only a 'p4' client-only upgrade skips this phase. * PHASE 4 - Perforce p4d schema upgrades This step involves the 'p4d -xu' processing. It executes if p4d is upgraded to a new major version, and occurs on the master as well as all replicas/edge servers. The behavior of 'p4d -xu' differs depending on whether the server is the master or a replica. This phase is skipped if upgrading to a patch of the same major version, as patches do not require 'p4d -xu' processing. * PHASE 5 - Start upgraded services. This phase executes for all upgrades involving p4d, p4p, p4broker. Only a 'p4' client-only upgrade skips this phase. SPECIAL CASE - TO OR THRU P4D 2019.1 If you are upgrading from a version that is older than 2019.1, services are NOT restarted after the upgrade in Phase 5, except on the master. Services must be restarted manually on all other servers. For these 'to-or-thru' 2019.1 upgrades, after ensuring all replicas/edges are caught up (per 'p4 pull -lj'), shut down all servers other than the master. Proceeding outer-to-inner, execute this script like so on all machines except the master: 1. Deploy new executables in /p4/sdp/helix_binaries 2. Stop p4d. 3. Run 'verify_sdp.sh -skip cron,version'; fix problems if needed until it reports clean. 4. Run 'upgrade.sh -M' to update symlinks. 5. Do the upgrade manually with: p4d -xu 6. Leave the server offline. On the master, execute like this: 1. Deploy new executables in /p4/sdp/helix_binaries 2. Run 'verify_sdp.sh -skip cron,version'; fix problems if needed until it reports clean. 3. upgrade.sh When the script completes (it will wait for 'p4 storage' upgrades), restart services manually after the upgrade in the 'inner-to-outer' direction. 
Restart services on replicas/edges going inner-to-outer This procedure requiring extra steps is specific to 'to-or-thru' P4D 2019.1 upgrades. For upgrades starting from P4D 2019.1 or later, things are simpler. UPGRADES FOR P4D 2019.1+ For upgrades where the P4D start version is 2019.1 and going to any subsequent version, run this script going outer-to-inner. On each machine, it leaves the services online and running. Going in the outer-to-inner direction on all servers, do: 1. Deploy new executables in /p4/sdp/helix_binaries 2. Run 'verify_sdp.sh -skip cron,version'; fix problems if needed until it reports clean. 3. upgrade.sh UPGRADE PREPARATION The steps for deploying new binaries to server machines and running verify_sdp.sh (and potentially correcting any issues it discovers) can and should be done before the time or even day of any planned upgrade. UPGRADING HELIX CORE - P4D The p4d process, the Perforce Helix Core Server, is the center of the Perforce Helix universe, and the only server with a significant database component. Most of the upgrade phases above are about performing the p4d upgrade. This 'upgrade.sh' script requires that the 'p4d' service be running at the beginning of processing if p4d is to be upgraded, and will abort if p4d is not running. ORDER OF UPGRADES Any given Perforce Helix installation will have at least one p4d master server, and may have several other p4d servers deployed on different machines as replicas and edge servers. When upgrading multiple p4d servers for any given instance (i.e. any given data set, with a unique set of changelist numbers and users), the order in which upgrades are performed matters. Upgrades must be done in "outer to inner" order. The master server, at the center of the topology, is the innermost server and must be upgraded last. Any replicas or edge servers connected directly to the master constitute the next outer circle. 
These can be upgraded in any order relative to each other, but must be done before the master and after any replicas farther out from the master in the topology. So this 'upgrade.sh' script should be run first on the server machines that are "outermost" from the master from a replication perspective, and moving inward. The last run is done on the master server machine. Server machines running only proxies and brokers do not have a strict order dependency for upgrades. These are commonly done in the same "outer to inner" methodology as p4d for process consistency rather than strict technical need. See the SDP_Guide.Unix.html for more information related to performing global topology upgrades. COMMIT SERVER JOURNAL ROTATIONS This script helps minimize downtime for upgrades by taking advantage of the SDP offline checkpoint mechanism. Rather than wait for a full checkpoint, a journal is rotated and replayed to the offline_db. This typically takes very little time compared to a checkpoint, reducing downtime needed for the overall upgrade. It also prepares the offline_db in case a rollback is needed. When the commit server is upgraded, two rotations of the commit server's journal occur during processing for major upgrades, and a single journal rotation is done for patch upgrades. The first journal rotation occurs before any upgrade processing occurs, i.e. before the new binaries are added and symlinks are updated. This gives a clean rollback point. This journal is immediately replayed to the offline_db. Later, after p4d has started and performed its journaled upgrade processing, a second journal rotation occurs in Phase 5 if a major upgrade was done. This second journal rotation captures all upgrade-related processing in a separately numbered journal. This second journal is not applied to the offline_db by this script. Instead, the replay of the second journal to the offline_db will occur the next time a call is made to the daily_checkpoint.sh or rotate_journal.sh, e.g. 
via routine crontab. For a p4d patch upgrade, there will not be any upgrade processing. In the very unlikely event that a rollback were to ever be needed, the offline_db is left in a state that it could be used for a fast rollback on the commit server. MULTI-SERVER OUTER-TO-INNER UPGRADES Before starting an outer-to-inner upgrade involving multiple p4d servers (standby, edge, and other p4d replica servers), a manual journal rotation should be done on the commit server before starting to call upgrade.sh on each of the p4d servers in outer-to-inner order. Take note of the journal counter used for this pre-start journal rotation. It can be useful in the event of a rollback. That journal may need to be replayed to the offline_db on all servers other than the commit in a rollback scenario. In preparation, in the days or weeks before an upgrade, every p4d server in the topology should be checked to ensure its offline_db is healthy and current. ROLLBACK In the very unlikely event that a rollback is needed, bear in mind the following: * There is no standard procedure for rolling back, because a procedure would need to take into account the reason a decision was made to roll back. Presumably the decision would be driven by some kind of failure. A large factor in determining whether rollback is practical is the point in the process at which a rollback is needed. In some situations, a 'Fix and Roll Forward' approach may be more pragmatic than a rollback, and should always be considered. * This script and supporting documentation will help prepare your data for as smooth a rollback as possible should it ever become necessary. * To best prepare for a rollback, it is essential to manage user lockout as part of your overall maintenance procedure. Then let users back in after you have confirmed you are moving forward. 
User lockout is outside the scope of this script, but can be managed using several possible methods such as: - Crafting a special Protections table to be used during maintenance, - Using "Down for Maintenance" brokers, - Using network and/or on-host firewall rules, - Using temporary ports for maintenance. * If Phase 2 (update of symlinks and binaries) completed and must be undone, that can be achieved by putting the pre-upgrade binaries in place in the directory /p4/sdp/helix_binaries, named simply p4, p4d, p4broker, and p4p. Then run a command like this example for Instance 1: upgrade.sh 1 -M -I -y This will change symlinks back to reference the older versions. The new binaries will still exist in /p4/common/bin, but will no longer be referenced for Instance 1. UPGRADING HELIX BROKER Helix Broker (p4broker) servers are commonly deployed on the same machine as a Helix Core server, and can also be deployed on stand-alone machines (e.g. deployed to a DMZ host to provide secure access outside a corporate firewall). Helix Brokers configured in the SDP environment can use a default configuration file, and may have other configurations. The default configuration is the one defined in /p4/common/config/p4_N.broker.cfg (or a host-specific override file if it exists named /p4/common/config/p4_N.broker.<short_hostname>.cfg). Other broker configurations may exist, such as a DFM (Down for Maintenance) broker config /p4/common/config/p4_N.broker.dfm.cfg. During upgrade processing, this 'upgrade.sh' script only stops and restarts the broker with the default configuration. Thus, if coordinating DFM brokers, first manually shut down the default broker and start the DFM brokers before calling this script. This script will leave the DFM brokers running while adding the new binaries and updating the symlinks. (Note: Depending on how services are configured, this DFM configuration might not survive a machine reboot. 
Typically the default broker will come online after a machine reboot). This 'upgrade.sh' script will stop the p4broker service if it is running at the beginning of processing. If it was stopped, it will be restarted after the new binaries are in place and symlinks are updated. If p4broker was not running at the start of processing, new binaries are added and symlinks updated, but the p4broker server will not be started. UPGRADING HELIX PROXY Helix Proxy (p4p) servers are commonly deployed on machines by themselves, with no p4d and no broker. P4P may also be run on the same machine as p4d. This 'upgrade.sh' script will stop the p4p service if it is running at the beginning of processing. If it was stopped, it will be restarted after the new binaries are in place and symlinks are updated. If p4p was not running at the start of processing, new binaries are added and symlinks updated, but the p4p server will not be started. UPGRADING HELIX P4 COMMAND LINE CLIENT The command line client, 'p4', is upgraded in Phase 2 by addition of new binaries and updating of symlinks. STAGING HELIX BINARIES If your server can reach the Perforce FTP server over the public internet, a script can be used from the /p4/sdp/helix_binaries directory to get the latest binaries: $ cd /p4/sdp/helix_binaries $ ./get_helix_binaries.sh If your server cannot reach the Perforce FTP server, perhaps due to outbound network firewall restrictions or operating on an "air gapped" network, use the '-n' option to see where Helix binaries can be acquired from: $ cd /p4/sdp/helix_binaries $ ./get_helix_binaries.sh -n OPTIONS: <instance> Specify the SDP instance name. This is a reference to the Perforce Helix Core data set. This defaults to the current instance based on the $SDP_INSTANCE shell environment variable. If the SDP shell environment is not loaded, this option is required. -p Specify '-p' to halt processing after preflight checks are complete, and before actual processing starts. 
By default, processing starts immediately upon successful completion of preflight checks. -Od Specify '-Od' to override the rule preventing downgrades. WARNING: This is an advanced option intended for use by or with the guidance of Perforce Support or Perforce Consulting. -Osp Specify '-Osp' to override the sudo preflight, skipping that check. WARNING: This is an advanced option intended for use by or with the guidance of Perforce Support or Perforce Consulting. -I Specify '-I' to ignore preflight errors. Use of this flag is STRONGLY DISCOURAGED, as the preflight checks are essential to ensure a safe and smooth migration. If used, preflight checks are still done so their errors are recorded in the upgrade log, and then the migration will attempt to proceed. WARNING: This is an advanced option intended for use by or with the guidance of Perforce Support or Perforce Consulting. -M Specify '-M' if you plan to do a manual upgrade. With this option, only Phase 2 processing, adding new staged binaries and updating symlinks, is done by this script. If '-M' is used, this script does not check that services to be upgraded are online at the start of processing, nor does it attempt to start or stop services. If '-M' is used, the services should be stopped manually before calling this script, and then started manually after. WARNING: This is an advanced option intended for use by or with the guidance of Perforce Support or Perforce Consulting. -c Specify '-c' to execute a command to upgrade the Protections table comment format after the p4d upgrade, by using a command like: p4 protect --convert-p4admin-comments -o | p4 -s protect -i By default, this Protections table conversion is not performed. In some environments with custom policies related to update of the protections table, this command may not work. The new style of comments and the '--convert-p4admin-comments' option was introduced in P4D 2016.1. 
-L <log> Specify the path to a log file, or the special value 'off' to disable logging. By default, all output (stdout and stderr) goes to this file in the /p4/N/logs directory (where N is the SDP instance name): upgrade.p4_N.<datestamp>.log NOTE: This script is self-logging. That is, output displayed on the screen is simultaneously captured in the log file. Redirection operators like '> log' and '2>&1' are not required, nor is 'tee'. Logging can only be disabled with '-L off' if the '-n' or '-p' flags are used. Disabling logging for actual upgrades is not allowed. -y Specify the '-y' option to confirm that the upgrade should be done. By default, this script operates in No-Op mode, meaning no actions that affect data or structures are taken. Instead, commands that would be run are displayed. This mode can be educational, showing various steps that will occur during an actual upgrade. DEBUGGING OPTIONS: -d Increase verbosity for debugging. -D Set extreme debugging verbosity, using bash '-x' mode. Also implies -d. HELP OPTIONS: -h Display short help message -man Display man-style help message EXAMPLES: EXAMPLE 1: Preflight Only To see if an upgrade is needed for this instance, based on binaries staged in /p4/sdp/helix_binaries, use the '-p' flag to execute only the preflight checks, and disable logging, as in this example: $ cd /p4/common/bin $ ./upgrade.sh 1 -p -L off EXAMPLE 2: Typical Usage Typical usage is with just the SDP instance name as an argument, e.g. instance '1', and no other parameters, as in this example: $ cd /p4/common/bin $ ./upgrade.sh 1 This first runs preflight checks, and aborts if preflight checks detected any issues. Then it gives a preview of the upgrade. A successful preview completes with a line near the end that looks like this sample: Success: Finished p4_1 Upgrade. 
If the preview is successful, then proceed with the real upgrade using the -y flag: $ ./upgrade.sh 1 -y EXAMPLE 3: Simplified If the standard SDP shell environment is loaded, upgrade.sh will be in the path, so the 'cd' command to /p4/common/bin is not needed. Also, the SDP_INSTANCE shell environment variable will be defined, so the 'instance' parameter can be dropped, with simply a call to the script needed. First do a preview: $ upgrade.sh Review the output of the preview, looking for the 'Success: Finished' message near the end of the output. If that exists, then execute again with the '-y' flag to execute the actual migration: $ upgrade.sh -y CUSTOM PRE- AND POST- UPGRADE AUTOMATION HOOKS: This script can execute custom pre- and post- upgrade scripts. This can be useful to incorporate site-specific elements of an upgrade. If the file /p4/common/site/upgrade/pre-upgrade.sh exists and is executable, it will be executed as a pre-upgrade script. If the file /p4/common/site/upgrade/post-upgrade.sh exists and is executable, it will be executed as a post-upgrade script. Pre- and post- upgrade scripts are called with an SDP instance parameter, and an optional '-y' flag to confirm actual processing is to be done. Custom scripts are expected to operate in preview mode by default, taking no actions that affect data (just as this script behaves). If this upgrade.sh script is given the '-y' flag, that option is passed to the custom script as well, indicating active processing should occur. Pre- and post- upgrade scripts are expected to exit with a zero exit code to indicate success, and non-zero to indicate failure. The custom pre-upgrade script is executed after standard preflight checks complete successfully. If the '-I' flag is used to ignore the status of preflight checks, the custom pre-upgrade script is executed regardless of the status of preflight checks. Preflight checks are executed before actual upgrade processing commences. 
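For illustration, a minimal custom pre-upgrade script honoring the contract above might look like the following sketch. This is not shipped with the SDP; the free-space check, the check_free_kb helper, and the HXDEPOTS_VOL and MIN_FREE_KB overrides are hypothetical examples of a site-specific policy.

```shell
#!/bin/bash
# Hypothetical sketch of /p4/common/site/upgrade/pre-upgrade.sh (illustrative
# only, not part of the SDP): previews by default, acts only when '-y' is
# passed, and exits non-zero to abort the overall upgrade.

# Succeed only if volume $1 has at least $2 KB free.
check_free_kb () {
    local vol="$1" min_kb="$2" free_kb
    free_kb=$(df -Pk "$vol" 2>/dev/null | awk 'NR==2 {print $4}')
    [[ -n "$free_kb" && "$free_kb" -ge "$min_kb" ]]
}

main () {
    local instance="${1:-1}" run_mode=preview
    [[ "${2:-}" == "-y" ]] && run_mode=live
    # Example site-specific check: require free space on the depot volume.
    if ! check_free_kb "${HXDEPOTS_VOL:-/hxdepots}" "${MIN_FREE_KB:-10485760}"; then
        echo "Error: insufficient free space for instance $instance."
        return 1
    fi
    echo "$run_mode: pre-upgrade checks passed for instance $instance."
}

# Run main only when executed directly, so the functions can also be sourced.
if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
    main "$@"
fi
```

In preview mode (no '-y') the sketch reports what it checked without taking action, matching the behavior upgrade.sh expects of its hooks.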
If a custom pre-upgrade script indicates a failure, the overall upgrade process aborts. The post-upgrade custom script is executed after the main upgrade is successful. Success or failure of pre- and post- upgrade scripts is reported in the log. These scripts do not require independent logging, as all standard and error output is captured in the log of this upgrade.sh script. TIP: Be sure to fully test custom scripts in a test environment before incorporating them into an upgrade on production systems. SEE ALSO: The /verify_sdp.sh script is used for preflight checks. The /p4/sdp/helix_binaries/get_helix_binaries.sh script acquires new binaries for upgrades. Both scripts sport the same '-h' (short help) and '-man' (full manual) usage options as this script. LIMITATIONS: This script does not handle upgrades of 'p4dtg', Helix Swarm, Helix4Git, or any other software. It only handles upgrades of p4d, p4p, p4broker, and the p4 client binary on the SDP-managed server machine on which it is executed.</code></pre> </div> </div> </div> <div class="sect3"> <h4 id="_sdp_upgrade_sh">9.2.3. sdp_upgrade.sh</h4> <div class="paragraph"> <p>This script will perform an upgrade of the SDP itself - see <a href="#_upgrading_the_sdp">Section 7.3, “Upgrading the SDP”</a></p> </div> <div class="listingblock"> <div class="title">Usage</div> <div class="content"> <pre class="highlight"><code>USAGE for sdp_upgrade.sh v1.7.9: sdp_upgrade.sh [-y] [-p] [-L <log>|off] [-D] or sdp_upgrade.sh -h|-man This script must be executed from the 'sdp_upgrade' directory in the extracted SDP tarball. Typical operation starts like this: cd /hxdepots/downloads/new/sdp/Server/Unix/p4/common/sdp_upgrade ./sdp_upgrade.sh -h DESCRIPTION: This script upgrades the Perforce Helix Server Deployment Package (SDP) from SDP 2020.1 to the version included in the latest SDP version, SDP 2022.2. 
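Before planning an upgrade, it can help to confirm which SDP version is currently installed. On a standard install this is recorded in a Version file; the path below, the SDP_VERSION_FILE override, and the fallback message are assumptions for illustration.

```shell
# Print the installed SDP version, tolerating a missing file.
# The default path /p4/sdp/Version assumes a standard SDP install.
show_sdp_version () {
    local f="${SDP_VERSION_FILE:-/p4/sdp/Version}"
    if [[ -r "$f" ]]; then
        cat "$f"
    else
        echo "No SDP Version file found at $f"
    fi
}

show_sdp_version
```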
== Pre-Upgrade Planning == This script will upgrade the SDP if the pre-upgrade starting SDP version is SDP 2020.1 or later, including any/all patches of SDP 2020.1. If the current SDP version is older than 2020.1, it must first be upgraded to SDP 2020.1 using the SDP Legacy Upgrade Guide. For upgrading from pre-20.1 versions dating back to 2007, in-place or migration-style upgrades can be done. See: https://swarm.workshop.perforce.com/projects/perforce-software-sdp/view/main/doc/SDP_Legacy_Upgrades.Unix.html The SDP should always be upgraded to the latest version first before Helix Core binaries p4d/p4broker/p4p are upgraded using the SDP upgrade.sh script. Upgrading the SDP first ensures the version of the SDP you have is compatible with the latest versions of p4d/p4broker/p4p/p4, and will always be compatible with all supported versions of these Helix Core binaries. When this script is used, i.e. when the current SDP version is 2020.1 or newer, the SDP upgrade procedure does not require downtime for any running Perforce Helix services, such as p4d, p4broker, or p4p. This script is safe to run in environments where live p4d instances are running, and does not require p4d, p4broker, p4p, or any other services to be stopped or upgraded. Upgrade of the SDP is cleanly separate from the upgrade of the Helix Core binaries. The upgrade of the SDP can be done immediately prior to Helix Core upgrades, or many days prior. There can be multiple SDP instances on a given server machine. This script will upgrade the SDP on the machine, and thus after the upgrade all instances will immediately use new SDP scripts and updated instance configuration files, e.g. the /p4/common/config/p4_N.vars files. However, all instances will continue running the same Helix Core binaries. Any live running Helix Core server processes on the machine are unaffected by the upgrade of SDP. This script will upgrade the SDP on a single machine. 
If your Perforce Helix topology has multiple machines, the SDP should be upgraded on all machines. The upgrade of SDP on multiple machines can be done in any order, as there is no cross-machine dependency requiring the SDP to be the same version. (The order of upgrade of Helix Core services and binaries such as p4d in global topologies with replicas and edge servers does matter, but is outside the scope of this script). Planning Recap: 1. The SDP can be upgraded without downtime when this script is used, i.e. when the starting SDP version is 2020.1 or later. 2. Upgrade SDP on all machines, in any order, before upgrading p4d and other Helix binaries. == NFS Sharing of HxDepots == In some environments, the HxDepots volume is shared across multiple server machines with NFS, typically mounted as /hxdepots. This script updates the /hxdepots/p4/common and /hxdepots/sdp directories, both of which are on the NFS mount. Thus upgrading SDP on a single machine will effectively and immediately upgrade the SDP on all machines that share /hxdepots from the same NFS-mounted storage. This is a safe and valid configuration, as upgrading the SDP does not affect any live running p4d servers. == Acquiring the SDP Package == This script is part of the SDP package (tarball). It must be run from an extracted tarball directory. Acquiring the SDP tarball is a manual operation. The SDP tarball must be extracted such that the 'sdp' directory appears as <HxDepots>/downloads/new/sdp, where <HxDepots> defaults to /hxdepots. 
To determine the value for <HxDepots> at your site you can run the following: bash -c 'cd /p4/common; d=$(pwd -P); echo ${d%/p4/common}' On this machine, that value is: /hxdepots Following are sample commands to acquire the latest SDP, to be executed as the user perforce: cd /hxdepots [[ -d downloads ]] || mkdir downloads cd downloads [[ -d new ]] && mv new old.$(date +'%Y%m%d-%H%M') curl -s -k -O https://swarm.workshop.perforce.com/projects/perforce-software-sdp/download/downloads/sdp.Unix.tgz mkdir new cd new tar -xzf ../sdp.Unix.tgz After extracting the SDP tarball, cd to the directory where this sdp_upgrade.sh script resides, and execute it from there. cd /hxdepots/downloads/new/sdp/Server/Unix/p4/common/sdp_upgrade ./sdp_upgrade.sh -man == Preflight Checks == Prior to upgrading, preflight checks are performed to ensure the upgrade can be completed successfully. If the preflight checks fail, the upgrade will not start. Sample Preflight Checks: * The existing SDP version is verified to be SDP 2020.1+. * Various basic SDP structural checks are done. * The /p4/common/bin/p4_vars is checked to confirm it can be upgraded. * All /p4/common/config/p4_N.vars files are checked to confirm they can be upgraded. == Automated Upgrade Processing == Step 1: Backup /p4/common. The existing <HxDepots>/p4/common structure is backed up to: <HxDepots>/p4/common.bak.<YYYYMMDD-hhmm> Step 2: Update /p4/common. The existing SDP /p4/common structure is updated with new versions of SDP files. Step 3: Generate the SDP Environment File. Regenerate the SDP general environment file, /p4/common/bin/p4_vars. The template is /p4/common/config/p4_vars.template. Step 4: Generate the SDP Instance Files. Regenerate the SDP instance environment files for all instances based on the new template. The template is /p4/common/config/instance_vars.template. For Steps 3 and 4, the re-generation logic will preserve current settings. 
If upgrading from SDP r20.1, any custom logic that exists below the '### MAKE LOCAL CHANGES HERE' tag will be split into separate files. Custom logic in p4_vars will be moved to /p4/common/site/config/p4_vars.local. Custom logic in p4_N.vars files will be moved to /p4/common/site/config/p4_N.vars.local. Note: Despite these changes, the mechanism for loading the SDP shell environment remains unchanged since 2007, so it looks like: $ source /p4/common/bin/p4_vars N Changes to the right side of assignments are preserved for all defined SDP settings. For p4_vars, preserved settings are: - OSUSER (determined by current owner of /p4/common) - KEEPLOGS - KEEPCKPS - KEEPJNLS For instance_vars files, preserved settings are: - MAILTO - MAILFROM - P4USER - P4MASTER_ID - SSL_PREFIX - P4PORTNUM - P4BROKERPORTNUM - P4MASTERHOST - PROXY_TARGET - PROXY_PORT - PROXY_MON_LEVEL - PROXY_V_FLAGS - P4DTG_CFG - SNAPSHOT_SCRIPT - SDP_ALWAYS_LOGIN - SDP_AUTOMATION_USERS - SDP_MAX_START_DELAY_P4D - SDP_MAX_START_DELAY_P4BROKER - SDP_MAX_START_DELAY_P4P - SDP_MAX_STOP_DELAY_P4D - SDP_MAX_STOP_DELAY_P4BROKER - SDP_MAX_STOP_DELAY_P4P - VERIFY_SDP_SKIP_TEST_LIST - The 'umask' setting. - KEEPLOGS (if set) - KEEPCKPS (if set) - KEEPJNLS (if set) Note that the above list excludes any values that are calculated. Step 5: Remove Deprecated Files. Deprecated files will be purged from the SDP structure. The files to be cleaned are listed in this file: /hxdepots/downloads/new/sdp/Server/Unix/p4/common/sdp_upgrade/deprecated_files.txt Paths listed in this file are relative to the '/p4' directory (or more accurately the SDP Install Root directory, which is always '/p4' except in SDP test production environments). Step 6: Update SDP crontabs. No crontab updates are required for this SDP upgrade. == Post-Upgrade Processing == This script provides guidance on any post-processing steps. For some releases, this may include upgrades to crontabs. 
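A hypothetical post-upgrade spot check (not part of sdp_upgrade.sh): diff the regenerated config files against the Step 1 backup to review exactly what the upgrade changed or preserved. The latest_common_backup helper and the HXDEPOTS override are illustrative assumptions; the paths follow the steps above.

```shell
# Print the most recent common.bak.<YYYYMMDD-hhmm> backup directory under $1.
latest_common_backup () {
    ls -d "$1"/common.bak.* 2>/dev/null | sort | tail -1
}

bak=$(latest_common_backup "${HXDEPOTS:-/hxdepots}/p4")
if [[ -n "$bak" ]]; then
    # Review what the Step 3/4 regeneration changed; diff exits non-zero when
    # files differ, so tolerate that for an interactive spot check.
    diff -r "$bak/config" /p4/common/config || true
else
    echo "No common.bak.* backup found."
fi
```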
OPTIONS: -y Specify the '-y' option to confirm that the SDP upgrade should be done. By default, this script operates in No-Op mode, meaning no actions that affect data or structures are taken. Instead, commands that would be run are displayed. This mode can be educational, showing various steps that will occur during an actual upgrade. -p Specify '-p' to halt processing after preflight checks are complete, and before actual processing starts. By default, processing starts immediately upon successful completion of preflight checks. -L <log> Specify the log file to use. The default is /tmp/sdp_upgrade.<timestamp>.log The special value 'off' disables logging to a file. This cannot be specified if '-y' is used. -d Enable debugging verbosity. -D Set extreme debugging verbosity. HELP OPTIONS: -h Display short help message -man Display man-style help message FILES AND DIRECTORIES: Name: SDPCommon Path: /p4/common Notes: This sdp_upgrade.sh script updates files in and under this folder. Name: HxDepots Default Path: /hxdepots Notes: The folder containing versioned files, checkpoints, and numbered journals, and the SDP itself. This is commonly a mount point. Name: DownloadsDir Default Path: /hxdepots/downloads Name: SDPInstallRoot Path: /p4 EXAMPLES: This script must be executed from the 'sdp_upgrade' directory in the extracted SDP tarball. Typical operation starts like this: cd /hxdepots/downloads/new/sdp/Server/Unix/p4/common/sdp_upgrade ./sdp_upgrade.sh -h All following examples assume operation from that directory. Example 1: Preflight check only: sdp_upgrade.sh -p Example 2: Preview mode: sdp_upgrade.sh Example 3: Live operation: sdp_upgrade.sh -y LOGGING: This script generates a log file, ~/sdp_upgrade.<timestamp>.log by default. See the '-L' option above. CUSTOM PRE- AND POST- UPGRADE AUTOMATION HOOKS: This script can execute custom pre- and post- upgrade scripts. This can be useful to incorporate site-specific elements of an SDP upgrade. 
If the file /p4/common/site/upgrade/pre-sdp_upgrade.sh exists and is executable, it will be executed as a pre-upgrade script. If the file /p4/common/site/upgrade/post-sdp_upgrade.sh exists and is executable, it will be executed as a post-upgrade script. Pre- and post- upgrade scripts are passed the '-y' flag to confirm actual processing is to be done. Custom scripts are expected to operate in preview mode by default, taking no actions that affect data (just as this script behaves). If this sdp_upgrade.sh script is given the '-y' flag, that option is passed to the custom script as well, indicating active processing should occur. Pre- and post- upgrade scripts are expected to exit with a zero exit code to indicate success, and non-zero to indicate failure. The custom pre-upgrade script is executed after standard preflight checks complete successfully. Preflight checks are executed before actual upgrade processing commences. If a custom pre-upgrade script indicates a failure, the overall upgrade process aborts. The post-upgrade custom script is executed after the main SDP upgrade is successful. Success or failure of pre- and post- upgrade scripts is reported in the log. These scripts do not require independent logging, as all standard and error output is captured in the log of this sdp_upgrade.sh script. TIP: Be sure to fully test custom scripts in a test environment before incorporating them into an upgrade on production systems. EXIT CODES: An exit code of 0 indicates no errors were encountered during the upgrade. A non-zero exit code indicates the upgrade was aborted or failed.</code></pre> </div> </div> </div> </div> <div class="sect2"> <h3 id="_legacy_upgrade_scripts">9.3. Legacy Upgrade Scripts</h3> <div class="sect3"> <h4 id="_clear_depot_map_fields_sh">9.3.1. clear_depot_Map_fields.sh</h4> <div class="paragraph"> <p>The <code>clear_depot_Map_fields.sh</code> script is used when upgrading to SDP from versions earlier than SDP 2020.1. 
Its usage is discussed in <a href="SDP_Legacy_Upgrades.Unix.html">SDP Legacy Upgrade Guide (for Unix)</a>.</p> </div> <div class="listingblock"> <div class="title">Usage</div> <div class="content"> <pre class="highlight"><code>USAGE for clear_depot_Map_fields.sh v1.2.0: clear_depot_Map_fields.sh [-i <instance>] [-L <log>] [-v<n>] [-p|-n] [-D] or clear_depot_Map_fields.sh [-h|-man|-V] DESCRIPTION: This script obsoletes the SetDefaultDepotSpecMapField.py trigger. It does so by following a series of steps. First, it ensures that the configurable server.depot.root is set correctly, setting it if it is not already set. Next, the Triggers table is checked to ensure the SetDefaultDepotSpecMapField.py trigger is not called; it is deleted from the Triggers table if found. Last, it resets the 'Map:' field of depot specs for depot types where that is appropriate, setting it to the default value of '<DepotName>/...', so that it honors the server.depot.root configurable. This is done for depots of these types: * stream * local * spec * unload * graph but not these: * archive * remote If an unknown depot type is encountered, the 'Map:' field is reset as well if it is set. This script does a preflight check first, reporting any cases where the starting conditions are not as expected. These conditions are treated as Errors, and will abort processing: * Depot Map field set to something other than the default. * Configurable server.depot.root is set, but to something other than what it should be. The following are treated as Warnings, and will be reported but will not prevent processing: * Configurable server.depot.root is already set. * SetDefaultDepotSpecMapField.py not found in triggers. * Depot already has 'Map:' field set to the default value: <DepotName>/... OPTIONS: -v<n> Set verbosity 1-5 (-v1 = quiet, -v5 = highest). -L <log> Specify the path to a log file, or the special value 'off' to disable logging. 
By default, all output (stdout and stderr) goes to EDITME_DEFAULT_LOG NOTE: This script is self-logging. That is, output displayed on the screen is simultaneously captured in the log file. Do not run this script with redirection operators like '> log' or '2>&1', and do not use 'tee.' -p Run preflight checks only, and then stop. By default, actual changes occur if preflight checks find no issues. -n No-Op. No actions are taken that would affect data significantly; instead commands are displayed rather than executed. -D Set extreme debugging verbosity. HELP OPTIONS: -h Display short help message -man Display man-style help message -V Display version info for this script and its libraries. EXAMPLES: A typical flow for this script is to do a preflight first, and then a live run, for any given instance: clear_depot_Map_fields.sh -i 1 -p clear_depot_Map_fields.sh -i 1 Note that if using '-n', the '-v5' flag should also be used.</code></pre> </div> </div> </div> </div> <div class="sect2"> <h3 id="_core_scripts">9.4. Core Scripts</h3> <div class="paragraph"> <p>The core SDP scripts are those related to checkpoints and other scheduled operations, and all run from <code>/p4/common/bin</code>.</p> </div> <div class="paragraph"> <p>If you <code>source /p4/common/bin/p4_vars <instance></code> then the <code>/p4/common/bin</code> directory will be added to your $PATH.</p> </div> <div class="sect3"> <h4 id="_p4_vars">9.4.1. p4_vars</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/p4_vars</code> defines the SDP shell environment, as required by the Perforce Helix server process. This script uses a specified instance number as a basis for setting environment variables. 
It will look for and open the respective p4_<instance>.vars file (see next section).</p> </div> <div class="paragraph"> <p>This script also sets server logging options and configurables.</p> </div> <div class="paragraph"> <p>It is intended to be used by other scripts for common environment settings, and also by users for setting the environment of their Bash shell.</p> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>source /p4/common/bin/p4_vars 1</pre> </div> </div> <div class="paragraph"> <p>See also: <a href="#_setting_your_login_environment_for_convenience">Section 5.3, “Setting your login environment for convenience”</a></p> </div> </div> <div class="sect3"> <h4 id="_p4_instance_vars">9.4.2. p4_<instance>.vars</h4> <div class="paragraph"> <p>Defines the environment variables for a specific instance, including P4PORT etc.</p> </div> <div class="paragraph"> <p>This script is called by <a href="#_p4_vars">Section 9.4.1, “p4_vars”</a> - it is not intended to be called directly by a user.</p> </div> <div class="paragraph"> <p>For instance <code>1</code>:</p> </div> <div class="literalblock"> <div class="content"> <pre>p4_1.vars</pre> </div> </div> <div class="paragraph"> <p>For instance <code>art</code>:</p> </div> <div class="literalblock"> <div class="content"> <pre>p4_art.vars</pre> </div> </div> <div class="paragraph"> <p>Occasionally you may need to edit this script to update variables such as <code>P4MASTERHOST</code> or similar.</p> </div> <div class="paragraph"> <p><strong>Location</strong>: /p4/common/config</p> </div> </div> <div class="sect3"> <h4 id="_p4master_run">9.4.3. p4master_run</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/p4master_run</code> is a wrapper script to other SDP scripts. This ensures that the shell environment is loaded from <code>p4_vars</code> before executing the script. 
It provides a '-c' flag for silent operation, used in many crontabs so that email is sent from the scripts themselves.</p> </div> <div class="paragraph"> <p>This is especially useful for calling scripts that do not set their own shell environment, such as Python or Perl scripts. Historically it was used as a wrapper for all SDP scripts.</p> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> Many of the bash shell scripts in the SDP set their own environment (by doing <code>source /p4/common/bin/p4_vars N</code> for their instance); those bash shell scripts do <strong>not</strong> need to be called with the <code>p4master_run</code> wrapper. </td> </tr> </table> </div> </div> <div class="sect3"> <h4 id="_daily_checkpoint_sh">9.4.4. daily_checkpoint.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/daily_checkpoint.sh</code> script is configured by default to run six days a week using crontab. The script:</p> </div> <div class="ulist"> <ul> <li> <p>truncates the journal</p> </li> <li> <p>replays it into the <code>offline_db</code> directory</p> </li> <li> <p>creates a new checkpoint from the resulting database files</p> </li> <li> <p>recreates the <code>offline_db</code> database from the new checkpoint.</p> </li> </ul> </div> <div class="paragraph"> <p>This procedure rebalances and compresses the database files in the <code>offline_db</code> directory.</p> </div> <div class="paragraph"> <p>These can be rotated into the live (<code>root</code>) database, by the script <a href="#_refresh_p4root_from_offline_db_sh">Section 9.4.12, “refresh_P4ROOT_from_offline_db.sh”</a></p> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/daily_checkpoint.sh <instance> /p4/common/bin/daily_checkpoint.sh 1</pre> </div> </div> </div> <div class="sect3"> <h4 id="_keep_offline_db_current_sh">9.4.5. 
keep_offline_db_current.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/keep_offline_db_current.sh</code> script is for use only on a standby replica. It will not run on any other type of replica.</p> </div> <div class="paragraph"> <p>This script ensures the offline_db has the most current journals replayed.</p> </div> <div class="paragraph"> <p>It is intended for use on standby replicas as an alternative to sync_replica.sh or replica_cleanup.sh. It is ideal for use in an environment where the checkpoints folder of the P4TARGET server is shared (e.g. via NFS) with this server.</p> </div> <div class="paragraph"> <p>This script does NOT do full checkpoint operations, and requires that the offline_db be in a good state before starting — this is verified with a call to verify_sdp.sh.</p> </div> <div class="paragraph"> <p>This uses checkpoint.log as its primary log. It is only intended for use on a machine where other scripts that update checkpoint.log don’t run, e.g. daily_checkpoint.sh, live_checkpoint.sh, or rotate_journal.sh.</p> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/keep_offline_db_current.sh <instance> /p4/common/bin/keep_offline_db_current.sh 1</pre> </div> </div> </div> <div class="sect3"> <h4 id="_live_checkpoint_sh">9.4.6. live_checkpoint.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/live_checkpoint.sh</code> script is used to initialize the SDP <code>offline_db</code>. It must be run once, typically manually during initial installation, before any other scripts that rely on the <code>offline_db</code> can be used, such as <code>daily_checkpoint.sh</code> and <code>rotate_journal.sh</code>.</p> </div> <div class="paragraph"> <p>This script can also be used in some cases to repair the <code>offline_db</code> if it has become corrupt, e.g. 
due to a sudden power loss while checkpoint processing was running.</p> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> Be aware this script locks the live database for the duration of the checkpoint which can take hours for a large installation (please check the <code>/p4/1/logs/checkpoint.log</code> for the most recent output of <code>daily_checkpoint.sh</code> to see how long checkpoints take to create/restore). </td> </tr> </table> </div> <div class="paragraph"> <p>Note that when a <code>live_checkpoint.sh</code> runs, the server will be unresponsive to users for a time. On a new installation this "hang time" will be imperceptible, but over time it can grow to minutes and eventually hours. The idea is that <code>live_checkpoint.sh</code> should be used only very sparingly, and is not scheduled as a routine operation.</p> </div> <div class="paragraph"> <p>This performs the following actions:</p> </div> <div class="ulist"> <ul> <li> <p>Does a journal rotation, so the active P4JOURNAL file becomes numbered.</p> </li> <li> <p>Creates a checkpoint from the live database db.* files in the P4ROOT.</p> </li> <li> <p>Recovers the <code>offline_db</code> database from that checkpoint to rebalance and compress the files</p> </li> </ul> </div> <div class="paragraph"> <p>Run this script when creating the server instance and if an error occurs while replaying a journal during the off-line checkpoint process.</p> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/live_checkpoint.sh <instance> /p4/common/bin/live_checkpoint.sh 1</pre> </div> </div> </div> <div class="sect3"> <h4 id="_mkrep_sh">9.4.7. mkrep.sh</h4> <div class="paragraph"> <p>The SDP <code>mkrep.sh</code> script should be used to expand your Helix Topology, e.g. 
adding replicas and edge servers.</p> </div> <div class="listingblock"> <div class="title">Usage</div> <div class="content"> <pre class="highlight"><code>USAGE for mkrep.sh v3.3.0: mkrep.sh -t <Type> -s <Site_Tag> -r <Replica_Host> [-f <From_ServerID>] [-os] [-p] [-N <N>] [-i <SDP_Instance>] [-L <log>] [-v<n>] [-n] [-D] or mkrep.sh [-h|-man|-V] DESCRIPTION: This script simplifies the task of creating Helix Core replicas and edge servers, and helps ensure they are set up with best practices. This script executes as two phases. In Phase 1, this script does all the metadata configuration to be executed on the master server that must be baked into a seed checkpoint for creating the replica/edge. This essentially captures the planning for a new replica, and can be done before the physical infrastructure (e.g. hardware, storage, and networking) is ready. Phase 1, fully automated by this script, takes only seconds to run. In Phase 2, this script provides information for the manual steps needed to create, transfer, and load seed checkpoints onto the replica/edge. The guidance is specific to the type of replica created, based on the command line flags provided to this script. This processing can take a while for large data sets, as it involves creating and transporting checkpoints. Before using this script, a set of geographic site tags must be defined. See the FILES: below for details on site tags. This script adheres to these SDP Standards: * Server Spec Naming Standard: https://swarm.workshop.perforce.com/projects/perforce-software-sdp/view/main/doc/SDP_Guide.Unix.html#_server_spec_naming_standard * Journal Prefix Standard: https://swarm.workshop.perforce.com/projects/perforce-software-sdp/view/main/doc/SDP_Guide.Unix.html#_the_journalprefix_standard In Phase 1, this script does the following to help create a replica or edge server: * Generates the server spec for the new replica. * Generates a server spec for the master server (if needed). 
* Sets configurables ('p4 configure' settings) for replication. * Selects the correct 'Services' based on replica type. * Creates service user for the replica, and sets a password. * Creates service user for the master (if needed), and sets a password. * Adds newly created service user(s) to the group 'ServiceUsers'. * Verifies the group ServiceUsers is granted super access in the protections table (and with '-p', also updates Protections). After these steps are completed, in Phase 2, detailed instructions are presented to guide the user through the remaining steps needed to complete the deployment of the replica. This starts with creating a new checkpoint to capture all the metadata changes made by this script in Phase 1. SERVICE USERS: Service users created by this script are always of type 'service', and so will not consume a licensed seat. Service users also have an 'AuthMethod' of 'perforce' (not 'ldap') as is required by 'p4d' for 'service' users. Passwords set for service users are long 32 character random strings that are not stored, as they are never needed. Login tickets for service users are generated using: p4login -service -v OPTIONS: -t <Type>[N] Specify the replica type tag. The type corresponds to the 'Type:' and 'Services:' field of the server spec, which describes the type of services offered by a given replica. Valid type values are: * ha: High Availability standby replica, for 'p4 failover' (P4D 2018.2+) * ham: High Availability metadata-only standby replica, for 'p4 failover' (P4D 2018.2+) * ro: Read-Only standby replica. (Discouraged; Use 'ha' instead for 'p4 failover' support.) * rom: Read-Only standby replica, Metadata only. (Discouraged; Use 'ham' instead for 'p4 failover' support.) * fr: Forwarding Replica (Unfiltered). * fs: Forwarding Standby (Unfiltered). * frm: Forwarding Replica (Unfiltered, Metadata only). * fsm: Forwarding Standby (Unfiltered, Metadata only). * ffr: Filtered Forwarding Replica. Not a valid failover target. 
* edge: Edge Server. Filtered by definition. Replicas with 'standby' are always unfiltered, and use the 'journalcopy' method of replication, which copies a byte-for-byte verbatim journal file rather than one that is merely logically equivalent. The tag has several purposes: 1. Short Hand. Each tag represents a combination of 'Type:' and fully qualified 'Services:' values used in server specs. 2. Distillation. Only the most useful Type/Services combinations have a shorthand form. 3. For forwarding replicas, the name includes the critical distinction of whether any replication filtering is used; as filtering of any kind disqualifies a replica from being a potential failover target. (No such distinction is needed for edge servers, which are filtered by definition). -s <Site_Tag> Specify a geographic site tag indicating the location and/or data center where the replica will physically be located. Valid site tags are defined in the site tags file: /p4/common/config/SiteTags.cfg A sample SiteTags.cfg file is here: /p4/common/config/SiteTags.cfg.sample -r <Replica_Host> Specify the DNS name of the server machine on which the new replica will run. This is used in the 'ExternalAddress:' field of the replica's ServerID, and also used in instructions to the user for steps after metadata configuration is done by this script. -f <From_ServerID> Specify ServerID of the P4TARGET server from which we are replicating. This is used to populate the 'ReplicatingFrom' field of the server spec. The value must be a valid ServerID. This option should be used if the target is something other than the master. For example, to create an HA replica of an edge server, you might specify something like '-f p4d_edge_syd'. -os Specify the '-os' option to overwrite an existing server spec. By default, this script will abort if the server spec to be generated already exists on the Helix Core server. Specify this option to overwrite the existing server spec. 
-p This script always performs a check to ensure that the Protections table grants super access to the group ServiceUsers. By default, an error is displayed if the check fails, i.e. if super user access for the group ServiceUsers cannot be verified. This is because, by default, we want to avoid making changes to the Protections table. Some sites have local policies or custom automation that require site-specific procedures to update the Protections table. If '-p' is specified, an attempt is made to append to the Protections table an entry like: super group ServiceUsers * //... This option may not be suitable for use on servers that have custom automation managing the Protections table. -N <N> Specify '-N <N>', where N is an integer. This is used to indicate that multiple replicas of the same type are to be created at the same site. The value specified with '-N' must be a numeric value. Left-padding with zeroes is allowed. For example, '-N 04' is allowed, and '-N A7' is not (as it is not numeric). This affects the ServerID to be generated. For example, the options '-t edge -s syd' would result in a ServerID of p4d_edge_syd. To create a second edge in the same site, use '-t edge -s syd -N 2' to generate p4d_edge2_syd. -i <SDP_Instance> Specify the SDP Instance. If not specified and the SDP_INSTANCE environment variable is defined, that value is used. If SDP_INSTANCE is not defined, the '-i <SDP_Instance>' argument is required. -v<n> Set verbosity 1-5 (-v1 = quiet, -v5 = highest). -L <log> Specify the path to a log file, or the special value 'off' to disable logging. By default, all output (stdout and stderr) goes in the logs directory referenced by the $LOGS environment variable, in a file named mkrep.<timestamp>.log NOTE: This script is self-logging. That is, output displayed on the screen is simultaneously captured in the log file. Using redirection operators like '> log' or '2>&1' is not necessary, nor is using 'tee.' -n No-Op. Prints commands instead of running them. 
-D Set extreme debugging verbosity. HELP OPTIONS: -h Display short help message -man Display man-style help message -V Display version info for this script and its libraries. FILES: The Site Tags file defines the list of valid geographic site tags: /p4/common/config/SiteTags.cfg The file contains one-line entries of the form: <tag>: <description> where <tag> is a short alphanumeric tag name for a geographic location, data center, or other useful distinction. This tag is incorporated into the ServerID of replicas or edge servers created by this script. Tag names should be kept short, ideally no more than about 5 characters in length. The <description> is a one-line text description of what the tag refers to, which may contain spaces and ASCII punctuation. Blank lines and lines starting with a '#' are considered comments and are ignored. REPLICA SERVER MACHINE SETUP: The replica/edge server machine must have the SDP structure installed, either using the mkdirs.sh script included in the SDP, or the Helix Installer for 'green field' installations. When setting up an edge server, a replica of an edge server, or a filtered replica, confirm that the JournalPrefix Standard (see URL above) structure has the separate checkpoints folder as identified in the 'Second Form' in the standard. A baseline SDP structure can typically be extended by running commands like these samples (assuming a ServerID of p4d_edge_syd or p4d_ha_edge_syd): mkdir /hxdepots/p4/1/checkpoints.edge_syd cd /p4/1 ln -s /hxdepots/p4/1/checkpoints.edge_syd CUSTOM PRE- AND POST- OPERATION AUTOMATION HOOKS: This script can execute custom pre- and post- processing scripts. This can be useful to incorporate site-specific elements of replica setup. If the file /p4/common/site/mkrep/pre-mkrep.sh exists and is executable, it will be executed before mkrep.sh processing. If the file /p4/common/site/mkrep/post-mkrep.sh exists and is executable, it will be executed after mkrep.sh processing. 
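To illustrate the hook contract, here is a minimal hedged sketch of what a pre-processing hook body might do. This is not part of the stock SDP; the function name and messages are invented for illustration. Hooks receive the same arguments as mkrep.sh and must honor '-n' preview mode:

```shell
# Hypothetical sketch of a /p4/common/site/mkrep/pre-mkrep.sh hook body.
# Hooks receive the same arguments as mkrep.sh; '-n' means preview mode.
pre_mkrep_hook () {
   local arg no_op=0
   # Scan the arguments for the '-n' preview flag.
   for arg in "$@"; do
      [ "$arg" = "-n" ] && no_op=1
   done
   if [ "$no_op" -eq 1 ]; then
      echo "pre-mkrep.sh: preview mode, no actions taken."
      return 0
   fi
   # Site-specific setup actions would go here.
   echo "pre-mkrep.sh: site-specific pre-processing done."
   return 0
}

# Sample invocation, as mkrep.sh would call it in preview mode:
pre_mkrep_hook -i 1 -t ha -s bos -r bos-helix-02 -n
```

A return code of 0 indicates success; non-zero aborts mkrep.sh processing, per the contract described here.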
Pre- and post- processing scripts are called with the same command line arguments passed to this mkrep.sh script. The pre- and post- processing scripts can use or ignore arguments as needed, though they are required to implement the '-n' flag to operate in preview mode, taking no actions that affect data (just as this script behaves). Pre- and post- processing scripts are expected to exit with a zero exit code to indicate success, and non-zero to indicate failure. The custom pre-processing script is executed after standard preflight checks complete successfully. If a custom pre-processing script indicates a failure, processing is aborted before standard mkrep.sh processing occurs. The custom post-processing script is executed after the standard mkrep.sh processing is successful. If a custom post-processing script is detected, the instructions that would be provided to the user in Phase 2 are not displayed, as it is expected that the custom post- processing will alter or handle these steps. Success or failure of pre- and post- processing scripts is reported in the log. These scripts do not require independent logging, as all standard and error output is captured in the log of this mkrep.sh script. TIP: Be sure to fully test custom scripts in a test environment before incorporating them into production systems. EXAMPLES: EXAMPLE 1 - Set up a High Availability (HA) Replica of the master. Add an HA replica to instance 1 to run on host bos-helix-02: mkrep.sh -i 1 -t ha -s bos -r bos-helix-02 EXAMPLE 2 - Add an Edge Server to the topology. Add an Edge server to instance acme to run on host syd-helix-04: mkrep.sh -i acme -t edge -s syd -r syd-helix-04 EXAMPLE 3 - Set up an HA replica of an edge server. Add an HA replica of the edge server to instance acme to run on host syd-helix-05: mkrep.sh -i acme -t ha -f p4d_edge_syd -s syd -r syd-helix-05 EXAMPLE 4 - Add a second edge server in the same site as another edge. 
mkrep.sh -i acme -t edge -N 2 -s syd -r syd-helix-04</code></pre> </div> </div> </div> <div class="sect3"> <h4 id="_p4verify_sh">9.4.8. p4verify.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/p4verify.sh</code> script verifies the integrity of the 'archive' files, i.e. all versioned files in your repository. This script is run by crontab on a regular basis, typically weekly.</p> </div> <div class="paragraph"> <p>It verifies <a href="https://www.perforce.com/manuals/cmdref/Content/CmdRef/p4_verify.html">both shelves and submitted archive files</a>.</p> </div> <div class="paragraph"> <p>Any errors in the log file (e.g. <code>/p4/1/logs/p4verify.log</code>) should be handled according to KB articles:</p> </div> <div class="ulist"> <ul> <li> <p><a href="https://portal.perforce.com/s/article/3186">MISSING! errors from p4 verify</a></p> </li> <li> <p><a href="https://portal.perforce.com/s/article/2404">BAD! error from p4 verify</a></p> </li> </ul> </div> <div class="paragraph"> <p>If in doubt, contact <a href="mailto:support-helix-core@perforce.com">support-helix-core@perforce.com</a>.</p> </div> <div class="paragraph"> <p>Our recommendation is that you should expect this to be without error, and you should address errors sooner rather than later. This may involve obliterating unrecoverable errors.</p> </div> <div class="admonitionblock note"> <table> <tr> <td class="icon"> <i class="fa icon-note" title="Note"></i> </td> <td class="content"> When run on replicas, this will also append the <code>-t</code> flag to the <code>p4 verify</code> command to ensure that MISSING files are scheduled for transfer. This is useful to keep replicas (including edge servers) up-to-date. 
</td> </tr> </table> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/p4verify.sh <instance> /p4/common/bin/p4verify.sh 1</pre> </div> </div> <div class="listingblock"> <div class="content"> <pre class="highlight"><code>USAGE for v5.20.0: p4verify.sh [<instance>] [-N] [-nu] [-nr] [-ns] [-nS] [-a] [-nt] [-nz] [-no_z] [-o BAD|MISSING] [-p4config=<PathToFile>] [-chunks <ChunkSize>|-paths <paths_file>] [-w <Wait>] [-q <MaxActivePullQueueSize>] [-Q <MaxTotalPullQueueSize>] [-recent | -recent=N] [-dlf <depot_list_file>] [-I|-ignores <regex_ignores_file>] [-Ocache] [-n] [-L <log>] [-v] [-d] [-D] or p4verify.sh -h|-man DESCRIPTION: This script performs a 'p4 verify' of all submitted and shelved versioned files in depots of all types except 'remote' and 'archive' type depots. The singular Extensions depot is also verified, if present. The singular Traits depot is also verified, if present. Archive depots are not verified by default, but can be with the '-a' option. If run on a replica, it schedules archive failures for transfer to the replica. OPTIONS: <instance> Specify the SDP instance. If not specified, the SDP_INSTANCE environment variable is used instead. If the instance is not defined by a parameter and SDP_INSTANCE is not defined, p4verify.sh exits immediately with an error message. -N Specify '-N' (Notify Only On Failure) to disable the default behavior, which always sends a notification that includes a report of the p4 verify status. Specifying '-N' will change the behavior to only send a notification if there is an error during the p4 verify execution. Notification methods are email, AWS SNS, and PagerDuty. Details on configuration can be found in the SDP documentation. Providing the environment variable NOTIFY_ONLY_ON_FAILURE=1 is equivalent to the '-N' command line argument. -nu Specify '-nu' (No Unload) to skip verification of the singleton depot of type 'unload' (if created). 
The 'unload' depot is verified by default. -nr Specify '-nr' (No Regular) to skip verification of regular submitted archive files. The '-nr' option is not compatible with '-recent'. Regular submitted archive files are verified by default. This option also causes the Extensions and Traits depots (if present) not to be verified. -ns Specify '-ns' (No Spec Depot) to skip verification of the singleton depot of type 'spec' (if created). The 'spec' depot is verified by default. -nS Specify '-nS' (No Shelves) to skip verification of shelved archive files, i.e. to skip the 'p4 verify -qS'. -a Specify '-a' (Archive Depots) to do verification of depots of type 'archive'. Depots of type 'archive' are not verified by default, as archive depots are often physically removed from the server's storage subsystem for long-term cold storage. -nt Specify the '-nt' option to avoid passing the '-t' flag to 'p4 verify' on a replica. By default, p4verify.sh detects if it is running on a replica, and if so automatically applies the '-t' flag to 'p4 verify'. That causes the replica to attempt to self-heal, as files that fail verification are scheduled for transfer from the P4TARGET server. This default behavior results in 'Transfer scheduled' messages in the log, and MISSING/BAD files are listed as 'info:' rather than 'error:'. There is no clear indication of whether or which of the scheduled transfers complete successfully, and so there may be a mix of transient/correctable and "real"/persistent transfer errors for files that are also BAD/MISSING on the master server. Specify '-nt' to ensure the log contains a list of files that currently fail a 'p4 verify' without attempting to transfer them from the master. -nz Specify '-nz' to skip the gzip of the old log file. By default, if a log with the default name or the name specified with '-L' exists at the start of processing, the old log is rotated and gzipped. With this option the old log is not zipped when rotated. 
-no_z Specify '-no_z' to avoid passing the '-z' option to 'p4 verify' commands. Typically, verifies are done with '-qz'; with this option, '-q' is used instead. See 'p4 help verify' for more information. -o BAD|MISSING Specify '-o MISSING' to check only whether expected archive files exist or not, skipping the checksum calculation of existing files. This results in dramatically faster, if less comprehensive, verification. This is particularly well suited when verification is being used to schedule archive file transfers of missing files on replicas. This translates into passing the '--only MISSING' option to 'p4 verify'. Specify '-o BAD' to check only for BAD revisions. This translates into passing the '--only BAD' option to 'p4 verify'. This option requires p4d to be 2021.1 or newer. For older p4d versions, this option is silently ignored. -p4config <PathToFile> Use the '-p4config' option to use this SDP p4verify.sh script to verify an arbitrary p4d server. That arbitrary server can be any p4d version, operating on any platform, and need not be managed with SDP. To use this option, first create a P4CONFIG file that defines settings needed to access the other server. As a convention, identify a short tag name for the other server to use in the P4CONFIG file. In the example below, we use 'mot' for "my other server". Create a P4CONFIG text file named /p4/common/site/config/.p4config.mot that contains these settings: P4PORT=ssl:my_other_server:1666 P4USER=p4admin P4TICKETS=/p4/common/site/config/.p4tickets.mot P4TRUST=/p4/common/site/config/.p4trust.mot The P4TRUST setting is only needed if the port is SSL-enabled. If it is SSL-enabled, next trust the port: p4 -E P4CONFIG=/p4/common/site/config/.p4config.mot trust -y Next, generate a ticket on that connection: p4 -E P4CONFIG=/p4/common/site/config/.p4config.mot login -a Provide the password if prompted. Finally, call p4verify.sh and specify the config file. 
When using this option, using '-L' to specify a non-default log file name is useful to keep logs from external servers cleanly separated. p4verify.sh -p4config /p4/common/site/config/.p4config.mot -L /p4verify.mot.log This will run the verify against the server specified in that P4CONFIG file. -chunks <ChunkSize> Specify the maximum amount of content by size to verify at once. If this is specified, the depot_verify_chunks.py script is used to break up depots into chunks of a given size, e.g. 100M or 4G. The <ChunkSize> parameter must be a size value valid to pass to the depot_verify_chunks.py script with the '-m' option. That is, specifying '-chunks 200M' translates to calling depot_verify_chunks.py with '-m 200M'. This requires the perforce-p4python3 module to be installed, and the python3 in the PATH must be the correct one that uses the P4 module. Using '-chunks' is likely to result in a significantly slower overall verify operation, though chunking can make it less impactful when it runs. Using the '-chunks' option may be necessary on very large data sets, e.g. if there are insufficient resources to process the largest depots. The '-recent' and '-chunks' options are mutually exclusive. The '-chunks' and '-paths' options can be used together; see the description of the '-paths' option below. Chunking logic applies only in depots of type 'stream' or 'local'. -paths <paths_file> Specify a file containing a list of depot paths to verify, with one line per entry. Valid entries in the file start with '//', e.g. //mydepot/main/src/... In this example, when the //mydepot depot is processed, only specified paths will be verified. All other depots will be processed in full. To verify only specified paths, combine '-paths <paths_file>' with '-dlf <depot_list_file>' where the depot list file contains only 'mydepot' (per the example above). The '-chunks' and '-paths' options can be used together for combined effects. 
If both options are specified, depots that contain specified paths are chunked based on the specified paths rather than the entire depot, and other paths in that depot are not processed. Depots that do not have any specified paths listed in the <paths_file> are chunked at the top/depot level directory. The '-paths' option can be combined with '-recent' to verify only recent changelists in the specified paths. This option disables processing of the Extensions and Traits depots by default, though '-paths' can specify paths in those depots. Paths specified must be in depots of type 'stream' or 'local', or the singular Extensions or Traits depots. -w <Wait> Specify the '-w' option, where <Wait> is a positive integer indicating the number of seconds to sleep between individual calls to 'p4 verify' commands. For example, specifying '-w 300' results in a delay of 5 minutes between verify commands. This can be used with '-chunks' to inject a delay between chunked depot paths. Otherwise, the delay is injected between each depot processed. This can significantly lengthen the overall duration of 'p4verify.sh' operation, but can also spread out the resource consumption load on a server machine. If shelves are processed (regardless of whether '-chunks' is used), the delay is injected between each individual shelved changelist, as shelved changes are verified one changelist at a time. For data sets with a large number of shelves, it may be wise to process shelves separately from submitted files if '-w' is used, as a delay value appropriate between depots may be different from that applied to individual changelists. See the '-q' option for a description of how '-q' and '-w' can be used together. -q <MaxActivePullQueueSize> Specify the '-q' option, where <MaxActivePullQueueSize> is a positive integer indicating the maximum number of active pulls allowed before a 'p4 verify' command will be executed to transfer archives. 
The absolute maximum number of possible active pulls is affected by the number of 'startup.N' threads configured to pull archive files, and whether those threads indicate batching. The threads that pull archive files are those configured to use the 'pull' command with the '-u' option. Typically, a small number of pull threads are configured, between 2 and 10 or perhaps 20. If '-q 1' is specified, new 'p4 verify' commands will only be run when the active pull queue is quiet. Specifying a too-high value, e.g. '-q 50' if only 3 'pull -u' archive pull threads are configured, will be ineffective, as the active pull threads will never exceed 3 (let alone 50). The current active pull queue on a replica is reported by: p4 -ztag -F %replicaTransfersActive% pull -ls This option can be useful if using this p4verify.sh script to pull many or even all archives on a new replica server machine from its target server. The injected delays can give the server time to transfer archives scheduled in one call to 'p4 verify' before calling it again, with the goal of avoiding overloading the pull queue. If '-w' and '-q' options are both used, the delay specified by '-w' is ignored unless the active pull queue size is greater than or equal to the specified maximum active pull queue size. The '-w' then essentially determines how frequently the 'p4 pull -ls' is run to check the active pull queue size. A reasonable set of values might be '-q 1 -w 10'. The '-q' option is mutually exclusive with '-nt'. The '-q' option is mutually exclusive with '-Q'. -Q <MaxTotalPullQueueSize> Specify the '-Q' option, where <MaxTotalPullQueueSize> is a positive integer indicating the maximum number of total pulls allowed before a 'p4 verify' command will be executed to transfer archives. In certain scenarios, the pull queue can become quite massive. 
For example, if a fresh standby replica is seeded from a checkpoint but has no archive files, and then a 'p4verify.sh' is run, the verify will schedule all files to be transferred, perhaps millions. If the pull queue gets too large, it can impact metadata replication. Setting this value may help mitigate issues related to scheduling too many archive pulls at once, by delaying scheduling new archive pulls until enough previously scheduled pulls are completed. This option can be useful in such scenarios, if this p4verify.sh script is used to pull many or even all archives on a new replica server machine from its target server. The injected delays can give the server time to transfer archives scheduled in one call to 'p4 verify' before calling it again, with the goal of avoiding overloading the pull queue. If individual depots contain large numbers of files, such that a verify on a single depot will schedule too many files to be transferred at once, it may be necessary to combine this option with the '-chunks' option to avoid overloading the transfer queue. **WARNING**: If there are files that cannot be transferred from the replica's target server, the value of '-Q' must be set higher than that number, or an infinite loop may occur. For example, if there are 500 permanent "legacy" verify errors on the commit server from 10 years ago that have long since been abandoned, those files can never be transferred to any replica. Running p4verify.sh on the replica will cause those files to be scheduled, but as they cannot be pulled, they will land in the total pull queue. In this scenario, the value set with '-Q' must be greater than 500, or an infinite loop is possible. Specify '-Q 0' to disable checking the total pull queue. The current total pull queue on a replica is reported by: p4 -ztag -F %replicaTransfersTotal% pull -ls This option can be useful if using this p4verify.sh script to pull many or even all archives on a new replica server machine from its target server. 
The injected delays can give the server time to transfer archives scheduled in one call to 'p4 verify' before calling it again, with the goal of avoiding overloading the pull queue. If '-w' and '-Q' options are both used, the delay specified by '-w' is ignored unless the total pull queue size is greater than or equal to the specified maximum total pull queue size. The '-w' then essentially determines how frequently the 'p4 pull -ls' is run to check the total pull queue size. A reasonable set of values might be '-Q 50000 -w 10'. The '-Q' option is mutually exclusive with '-nt'. The '-Q' option is mutually exclusive with '-q'. -recent[=N] Specify that only recent changelists should be verified. This can be specified as '-recent' or '-recent=N', where N is an integer indicating the number of recent changelists to verify. If '-recent' is used without the optional '=N' syntax, the $SDP_RECENT_CHANGES_TO_VERIFY variable defines how many changelists are considered recent; the default is 200. If the default is not appropriate for your site, add "export SDP_RECENT_CHANGES_TO_VERIFY=<N>" to /p4/common/site/config/p4_N.vars.local to change the default for an instance, or to /p4/common/site/config/p4_vars.local to change it globally. If $SDP_RECENT_CHANGES_TO_VERIFY is unset, the default is 200. When -recent is used, files in the unload depot are not verified. -dlf <depot_list_file> Specify a file containing a list of depots to process in the desired order. By default, all depots in the order reported by 'p4 depots' are processed, which effectively results in depots being processed in alphabetical order, with the singleton Extensions and Traits depots (if present) being processed after other depots. This '-dlf' option can be useful in time-sensitive situations where the order of processing can be prioritized, and/or to prevent processing certain depots. 
The format of the depot list file is straightforward, one line per depot, without the leading '//' or trailing /..., so a list might look like this sample: ProjA ProjB spec .swarm unload archive ProjC Blank lines and lines starting with a '#' are treated as comments and ignored. WARNING: This is not intended to be the primary method of verification, because it would be easy to forget to add new depots to the list file. If the depot list file is not readable, processing aborts. This option disables processing of the singleton Extensions and Traits depots unless those depots are explicitly included in the depot list file. -ignores <regex_ignores_file> Specify the 'verify ignores' file, a file containing a series of regular expression patterns representing files or file revisions to ignore when scanning for verify errors. Errors matching the pattern will be suppressed from the output captured in the log, and will not be considered a verification error. If '-ignores' is not specified, the default verify ignores file is: /p4/common/config/p4verify.N.ignores where 'N' is the SDP instance name. If this file exists, it is considered the 'verify ignores' file. Specify '-ignores none' to avoid processing the standard ignores file. The patterns can be specific files, specific file paths, or broader patterns (e.g. in the case of entirely abandoned depots). The file provided is passed as the '-f <file>' option to the 'grep' utility, and is expected to contain a series of one-line entries, each containing an expression to exclude from being considered as verify errors reported by this script. 
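As a self-contained illustration of this filtering mechanism (the file paths and log lines below are invented sample data, not SDP output), the suppression behaves like this:

```shell
# Hypothetical demonstration of 'verify ignores' filtering on sample data.
# Build a tiny sample log and a one-pattern ignores file.
printf '%s\n' \
    '//Alpha/main/docs/old.xls#3 - BAD!' \
    '//Beta/main/src/new.c#1 - MISSING!' > /tmp/sample_p4verify.log
printf '%s\n' '//Alpha/main/docs/old.xls#3' > /tmp/sample_ignores.regex
# Lines matching any pattern are suppressed; remaining lines still count
# as verify errors.
grep -Ev -f /tmp/sample_ignores.regex /tmp/sample_p4verify.log
# → //Beta/main/src/new.c#1 - MISSING!
```

Only the unmatched line survives the filter, which is exactly how ignored errors disappear from the report while new errors remain visible.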
You can test your expression by first using it with grep to ensure it suppresses errors by using a command like this sample, providing an older log from this script that contains errors to be suppressed: grep -Ev -f /path/to/regex_file /path/to/old/p4verify.log If your server is case-insensitive, change that command to use '-i': grep -a -Evi -f /path/to/regex_file /path/to/old/p4verify.log This sample entry ignores a single file revision: //Alpha/main/docs/Expenses from February 1999.xls#3 This sample entry ignores all revisions of a single file: //Alpha/main/docs/Expenses from February 1999.xls This sample entry ignores all entries in the spec depot related to client specs: //spec/client This sample uses the MD5 checksum from the verify error, just to illustrate that this can be used as an alternative to specifying file paths: D34989BFB8D9B0FB9866C4A604A05410 This sample ignores BAD! (but not MISSING!) errors under the //Beta/main/src directory tree: //Beta/main/src/.* BAD! WARNING: Ensure that the regex file provided does NOT contain any blank lines or comments. The file should contain only tested regex patterns. This option is intended to provide a way to ignore unrecoverably lost file revisions from things like past infrastructure failures, for which search and recovery efforts have been abandoned. This option subtly changes the question answered by this script from "Are there any verify errors?" to "Are there any new verify errors, errors we don't already know about?" WARNING: This option is not intended to be incorporated into the primary method of verification, because ignoring archive errors in this script does not solve the problem at its source. Ideally, the root cause of the verify errors should be addressed by recovering lost archives, injecting replacement content, or other means. 
So long as verify errors remain, even if ignored by this option, users attempting to access the revisions will still see Librarian errors, and replicas will encounter errors trying to pull the missing archives. This option could increase the risk that such revisions are never dealt with. -Ocache Specify '-Ocache' to attempt a verification on a replica configured with a 'lbr.replication' configurable value of 'cache'. By default, if the 'lbr.replication' configurable is set to 'cache', this script aborts, as replication of such a depot will schedule transfers that are likely unintended. This is a safety feature. The 'cache' mode is generally used on replicas or edge servers with limited disk space. Because running a verify will cause transfers of any missing files, this could result in filling up the disk. Use of '-Ocache' is strongly discouraged unless combined with other options to ensure that only targeted paths are scheduled for transfer. -v Verbose. Show output of verify attempts, which is suppressed by default. Setting SDP_SHOW_LOG=1 in the shell environment has the same effect as -v. The default behavior of this script is to generate no terminal output, but instead to write output into a log file -- see LOGGING below. If '-v' is specified, the generated log is sent to stdout at the end of processing. This flag is not recommended for routine cron operation or for large data sets. -L <log> Specify the log file to use. The default is /p4/N/logs/p4verify.log Log rotation and old log cleanup logic does not apply to log files specified with -L. Thus, using -L is not recommended for routine scheduled operation, e.g. via crontab. DEBUGGING OPTIONS: -n No-Operation (NO_OP) mode, for debugging. Display certain commands that would be executed without executing them. When '-n' is used, commands that might take a long time to run or affect data are only displayed. 
Even in '-n' mode, some information-gathering commands, such as listing shelved CLs, are executed, which may cause the script to take a bit of time to run on a large data set even in dry run mode. -d Specify that debug messages should be displayed. -D Use bash 'set -x' extreme debugging verbosity, and imply '-d'. -L off The special value '-L off' disables logging. This can only be used with '-n' for debugging. HELP OPTIONS: -h Display short help message -man Display man-style help message USAGE TIPS: On a p4d server machine on which this script runs, the P4USER usually has an unlimited ticket in the P4TICKETS file. If this is not the case, ensure that the ticket duration is sufficient for the verify operation to complete. If the '-p4config' option is used, ensure the defined P4USER references a P4TICKETS file with a sufficiently far-out expiration to prevent issues with ticket expiration. Depending on the scale of data and system resources, this p4verify.sh script may run for hours or even days. A ticket duration of less than a defined minimum results in a warning being displayed in the log (but does not prevent the script from attempting the verify). The minimum ticket duration is 31 days 0 hours 0 minutes 0 seconds. EXAMPLES: Example 1: Full Verify This script is typically called via cron with only the instance parameter as an argument, e.g.: p4verify.sh 1 Example 2: Fast Verify A "fast" verify is one in which only the check for MISSING archives is done, while the resource-intensive checksum calculation of potentially BAD existing archives is skipped. This is especially useful when used on a replica. p4verify.sh 1 -o MISSING Example 3: Fast and Recent Verify The '-o MISSING' and '-recent' flags can be combined for a very fast check. This check might be incorporated into a failover procedure. 
p4verify.sh 1 -o MISSING -recent Example 4: Submitted Files Only This will verify only submitted files, ignoring shelves and the spec and unload depots, putting the results in a specified log: p4verify.sh 1 -ns -nS -nu -L /p4/1/logs/p4verify.submitted.log Example 5: Shelved Files Only This will verify only shelved files, ignoring submitted files and the spec and unload depots, putting the results in a specified log: p4verify.sh 1 -nr -ns -nu -L /p4/1/logs/p4verify.shelved.log Example 6: A Dry Run The '-n' option can be used for a dry run. Output may also be displayed to the screen ('-v') for a dry run and the log file optionally discarded: p4verify.sh 1 -n -nS -L off -v Example 7: Archive File Load for New Replica The p4verify.sh script can be used to schedule transfers of a large number of files to a new replica from its target server. When doing so, however, overloading the new replica's pull queue with too many files may impact metadata replication. This can be addressed by combining a variety of options, such as '-chunks' and '-Q'. For example: p4verify.sh 1 -chunks 200M -Q 10000 -w 20 -o MISSING NOHUP USAGE: Because archive verification is typically a long-running task, it is advisable to use 'nohup' to call each command, and to combine that with running the command as a background process. Alternately, use 'screen' or similar. Any of the examples above can be used with 'nohup', with output redirected to /dev/null (i.e. to "the void", as this script handles logging and output redirection). To use 'nohup', start the command line with 'nohup', and then after the command, add this text exactly: < /dev/null > /dev/null 2>&1 & As an example, Example 2 above, called with nohup, would look like: nohup /p4/common/bin/p4verify.sh 1 -o MISSING < /dev/null > /dev/null 2>&1 & With the ampersand '&' at the end, the command will appear to return immediately as the process continues to run in the background. 
Then optionally monitor the log: tail -f /p4/1/logs/p4verify.log LOGGING: This script generates no output by default. All output (stdout and stderr) is logged to /p4/N/logs/p4verify.log. The exception is usage errors, which result in an error being sent to stderr followed by usage info on stdout, followed by an immediate exit. NOTIFICATIONS: In addition to logging, a short summary of the verify is sent as a notification. The summary is reliably short even if the output of the verifications done by this script results in a large log file. There are two notification schemes with this script: * Email notification is always attempted. * AWS SNS notification is attempted if the SNS_ALERT_TOPIC_ARN custom setting is defined. This is typically set in: /p4/common/site/config/p4_N.vars.local TIMING: The log file captures various timing information, including the time required to verify each depot, or each chunk or path if '-paths' or '-chunks' are used. The time to verify shelves in all depots is reported separately from submitted files. Timing indications all start with the text 'Time: ' at the beginning of a line of output in the log file, and can be extracted with a command like this example (adjusting the log file name as needed): grep -a ^Time: /p4/1/logs/p4verify.log EXIT CODES: An exit code of 0 indicates no errors were encountered attempting to perform verifications, AND that all verifications attempted reported no problems. An exit status of 1 indicates that verifications could not be attempted for some reason. An exit status of 2 indicates that verifications were successfully performed, but that problems such as BAD or MISSING files were detected, or else system limits prevented verification.</code></pre> </div> </div> </div> <div class="sect3"> <h4 id="_p4login">9.4.9. 
p4login</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/p4login</code> script is a convenience wrapper to execute a series of <code>p4 login</code> commands, using the administration password configured in <code>mkdirs.cfg</code> and subsequently stored in a text file: <code>/p4/common/config/.p4passwd.p4_<instance>.admin</code>.</p> </div> <div class="listingblock"> <div class="title">Usage</div> <div class="content"> <pre class="highlight"><code>USAGE for p4login v4.4.4: p4login [<instance>] [-p <port> | -service] [-automation] [-all] or p4login -h|-man DESCRIPTION: In its simplest form, this script simply logs in P4USER to P4PORT using the defined password access mechanism. It generates a login ticket for the SDP super user, defined by P4USER when sourcing the SDP standard shell environment. It is called from cron scripts, and so does not normally generate any output. If run on a replica with the -service option, the serviceUser defined for the given replica is logged in. The $SDP_AUTOMATION_USERS variable can be defined in /p4_N.vars. If defined, this should contain a comma-delimited list of automation users to be logged in when the -automation option is used. A definition might look like: export SDP_AUTOMATION_USERS=builder,trigger-admin,p4review Login behavior is affected by external factors: 1. P4AUTH, if defined, affects login behavior on replicas. 2. The auth.id setting, if defined, affects login behaviors (and generally simplifies them). 3. The $SDP_ALWAYS_LOGIN variable. If set to 1, this causes p4login to always execute a 'p4 login' command to generate a login ticket, even if a 'p4 login -s' test indicates none is needed. By default, the login is skipped if a 'p4 login -s' test indicates a long-term ticket is available that expires 31+ days in the future. Add "export SDP_ALWAYS_LOGIN=1" to /p4_N.vars to change the default for an instance, or to /p4/common/bin/p4_vars to change it globally. If unset, the default is 0. 4. 
If the P4PORT contains an ssl: prefix, the P4TRUST relationship is checked, and if necessary, a p4 trust -f -y is done to establish trust. OPTIONS: <instance> Specify the SDP instance. If not specified, the SDP_INSTANCE environment variable is used instead. If the instance is not defined by a parameter and SDP_INSTANCE is not defined, p4login exits immediately with an error message. -service Specify -service when run on a replica or edge server to login the super user and the replication service user. This option is not compatible with '-p <port>'. -p <port> Specify a P4PORT value to login to, overriding the default defined by the P4PORT setting in the environment. If operating on a host other than the master, and auth.id is set, this flag is ignored; the P4TARGET for the replica is used instead. This option is not compatible with '-service'. -automation Specify -automation to login external automation users defined by the $SDP_AUTOMATION_USERS variable. -v Show output of login attempts, which is suppressed by default. Setting SDP_SHOW_LOG=1 in the shell environment has the same effect as -v. -L <log> Specify the log file to use. The default is /p4/N/logs/p4login.log -d Set debugging verbosity. -D Set extreme debugging verbosity. HELP OPTIONS: -h Display short help message -man Display man-style help message EXAMPLES: 1. Typical usage for automation, with instance SDP_INSTANCE defined in the environment by sourcing p4_vars, and logging in only the super user P4USER to P4PORT: source /p4/common/bin/p4_vars abc p4login Log in only P4USER to the specified port, P4MASTERPORT in this example: p4login -p $P4MASTERPORT Log in the super user P4USER, and then log in the replication serviceUser for the current ServerID: p4login -service Log in external automation users (see SDP_AUTOMATION_USERS above): p4login -automation Log in all users: p4login -all Or: p4login -service -automation LOGGING: This script generates no output by default. 
All output (stdout and stderr) is logged to /p4/N/logs/p4login.log. The exception is usage errors, which result in an error being sent to stderr followed by usage info on stdout, followed by an immediate exit. If the '-v' flag is used, the contents of the log are displayed to stdout at the end of processing. EXIT CODES: An exit code of 0 indicates a valid login ticket exists, while a non-zero exit code indicates a failure to login.</code></pre> </div> </div> </div> <div class="sect3"> <h4 id="_p4d_instance_init">9.4.10. p4d_<instance>_init</h4> <div class="paragraph"> <p>Starts the Perforce server instance. Can be called directly or as described in <a href="#_configuring_automatic_service_start_on_boot">Section 5.1.3, “Configuring Automatic Service Start on Boot”</a> - it is created by <code>mkdirs.sh</code> when SDP is installed.</p> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> Do not use directly if you have configured systemctl for systemd Linux distributions such as CentOS 7.x. This risks database corruption if <code>systemd</code> does not think the service is running when it actually is running (for example, on shutdown systemd will just kill processes without waiting for them). </td> </tr> </table> </div> <div class="paragraph"> <p>This script sources <code>/p4/common/bin/p4_vars</code>, then runs <code>/p4/common/bin/p4d_base</code> (<a href="#_p4d_base">Section 9.6.12, “p4d_base”</a>).</p> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/<instance>/bin/p4d_<instance>_init [ start | stop | status | restart ] /p4/1/bin/p4d_1_init start</pre> </div> </div> </div> <div class="sect3"> <h4 id="_recreate_offline_db_sh">9.4.11. 
recreate_offline_db.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/recreate_offline_db.sh</code> script recovers the offline_db database from the latest checkpoint and replays any journals since then. If you have a problem with the offline database then it is worth running this script first before running <a href="#_live_checkpoint_sh">Section 9.4.6, “live_checkpoint.sh”</a>, as the latter will stop the server while it is running, which can take hours for a large installation.</p> </div> <div class="paragraph"> <p>Run this script if an error occurs while replaying a journal during the daily checkpoint process.</p> </div> <div class="paragraph"> <p>This script recreates offline_db files from the latest checkpoint. If it fails, then check to see if the most recent checkpoint in the <code>/p4/<instance>/checkpoints</code> directory is bad (i.e. it doesn’t look like the right size compared to the others), and if so, delete it and rerun this script. If the error you are getting is that the journal replay failed, then the only option may be to run the <a href="#_live_checkpoint_sh">Section 9.4.6, “live_checkpoint.sh”</a> script.</p> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/recreate_offline_db.sh <instance> /p4/common/bin/recreate_offline_db.sh 1</pre> </div> </div> </div> <div class="sect3"> <h4 id="_refresh_p4root_from_offline_db_sh">9.4.12. 
refresh_P4ROOT_from_offline_db.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/refresh_P4ROOT_from_offline_db.sh</code> script is intended to be used occasionally, perhaps monthly, quarterly, or on-demand, to help ensure that your live (<code>root</code>) database files are defragmented.</p> </div> <div class="paragraph"> <p>It will:</p> </div> <div class="ulist"> <ul> <li> <p>stop p4d</p> </li> <li> <p>truncate/rotate live journal</p> </li> <li> <p>replay journals to offline_db</p> </li> <li> <p>switch the links between <code>root</code> and <code>offline_db</code></p> </li> <li> <p>restart p4d</p> </li> </ul> </div> <div class="paragraph"> <p>It also knows how to do similar processes on edge servers and standby servers or other replicas.</p> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/refresh_P4ROOT_from_offline_db.sh <instance> /p4/common/bin/refresh_P4ROOT_from_offline_db.sh 1</pre> </div> </div> </div> <div class="sect3"> <h4 id="_run_if_master_sh">9.4.13. run_if_master.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/run_if_master.sh</code> script is explained in <a href="#_run_if_masteredgereplica_sh">Section 9.4.16, “run_if_master/edge/replica.sh”</a></p> </div> </div> <div class="sect3"> <h4 id="_run_if_edge_sh">9.4.14. run_if_edge.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/run_if_edge.sh</code> script is explained in <a href="#_run_if_masteredgereplica_sh">Section 9.4.16, “run_if_master/edge/replica.sh”</a></p> </div> </div> <div class="sect3"> <h4 id="_run_if_replica_sh">9.4.15. run_if_replica.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/run_if_replica.sh</code> script is explained in <a href="#_run_if_masteredgereplica_sh">Section 9.4.16, “run_if_master/edge/replica.sh”</a></p> </div> </div> <div class="sect3"> <h4 id="_run_if_masteredgereplica_sh">9.4.16. 
run_if_master/edge/replica.sh</h4> <div class="paragraph"> <p>The SDP uses wrapper scripts in the crontab: <code>run_if_master.sh</code>, <code>run_if_edge.sh</code>, <code>run_if_replica.sh</code>. We suggest you ensure these are working as desired, e.g.</p> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/run_if_master.sh 1 echo yes /p4/common/bin/run_if_replica.sh 1 echo yes /p4/common/bin/run_if_edge.sh 1 echo yes</pre> </div> </div> <div class="paragraph"> <p>It is important to ensure these are returning valid results for the server machine you are on.</p> </div> <div class="paragraph"> <p>Any issues with these scripts are likely configuration issues with <code>/p4/common/config/p4_1.vars</code> (for instance <code>1</code>).</p> </div> </div> <div class="sect3"> <h4 id="_sdp_health_check_sh">9.4.17. sdp_health_check.sh</h4> <div class="paragraph"> <p>This script is described in the appendix <a href="#_sdp_health_checks">Appendix H, <em>SDP Health Checks</em></a>.</p> </div> <div class="listingblock"> <div class="content"> <pre class="highlight"><code>USAGE for sdp_health_check.sh v1.11.1: sdp_health_check.sh or sdp_health_check.sh -h|-man DESCRIPTION: This script does a health check of the SDP. It generates a report log, which can be emailed to support@perforce.com. It identifies SDP instances and reports on general SDP health. It must be run as the OS user who owns the /p4/common/bin directory. This should be the user account which runs the p4d process, and which owns the /p4/common/bin directory (often 'perforce' or 'p4admin'). Characteristics of this script: * It is always safe to run. It does only analysis and reporting. * It does only fast checks, and has no interactive prompts. Some log files are captured such as checkpoint.log, but not potentially large ones such as the p4d server log. * It requires no command line arguments. * It works for any and all UNIX/Linux SDP versions since 2007. 
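The run-as-owner precondition above can be sketched as a runnable check (a temporary directory stands in for /p4/common/bin so the sketch works anywhere; this is not the actual implementation):

```shell
# Compare the current OS user with the owner of the target directory,
# as sdp_health_check.sh requires. GNU and BSD 'stat' syntax both covered.
dir=$(mktemp -d)    # stand-in for /p4/common/bin
owner=$(stat -c %U "$dir" 2>/dev/null || stat -f %Su "$dir")
me=$(id -un)
[ "$owner" = "$me" ] && echo "ownership check passed for $dir"
rmdir "$dir"
```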
Assumptions: * The SDP has always used /p4/common/bin/p4_vars as the shell environment file. This is consistent across all SDP versions. OPTIONS: -D Set extreme debugging verbosity. HELP OPTIONS: -h Display short help message -man Display man-style help message EXAMPLES: This script is typically called with no arguments. LOGGING: This script generates a log file and also displays it to stdout at the end of processing. By default, the log is: /tmp/sdp_health_check.<datestamp>.log or /tmp/sdp_health_check.log The exception is usage errors, which result in an error being sent to stderr followed by usage info on stdout, followed by an immediate exit. EXIT CODES: An exit code of 0 indicates no errors or warnings were encountered.</code></pre> </div> </div> </div> </div> <div class="sect2"> <h3 id="_more_server_scripts">9.5. More Server Scripts</h3> <div class="paragraph"> <p>These scripts are helpful components of the SDP that run on the server machine, but are not included in the default crontab schedules.</p> </div> <div class="sect3"> <h4 id="_p4_crontab">9.5.1. p4.crontab</h4> <div class="paragraph"> <p>Contains crontab entries to run the server maintenance scripts.</p> </div> <div class="paragraph"> <p><strong>Location</strong>: /p4/sdp/Server/Unix/p4/common/etc/cron.d</p> </div> </div> <div class="sect3"> <h4 id="_verify_sdp_sh">9.5.2. verify_sdp.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/verify_sdp.sh</code> script does basic verification of SDP setup.</p> </div> <div class="listingblock"> <div class="title">Usage</div> <div class="content"> <pre class="highlight"><code>USAGE for verify_sdp.sh v5.25.0: verify_sdp.sh [<instance>] [-online] [-skip <test>[,<test2>,...]] [-warn <test>[,<test2>,...]] [-c] [-si] [-L <log>|off ] [-D] or verify_sdp.sh -h|-man DESCRIPTION: This script verifies the current SDP setup for the specified instance, and also performs basic health checks of configured servers. 
This uses the SDP instance bin directory /p4/N/bin to determine what server binaries (p4d, p4broker, p4p) are expected to be configured on this machine. Existence of the '*_init' script indicates the given binary is expected. For example, for instance 1, if /p4/1/bin/p4d_1_init exists, a p4d server is expected to run on this machine. Checks may be executed or skipped depending on what servers are configured. For example, if a p4d is configured, the $P4ROOT/server.id file should exist. If p4p is configured, the 'cache' directory should exist. OPTIONS: <instance> Specify the SDP instance. If not specified, the SDP_INSTANCE environment variable is used instead. If the instance is not defined by a parameter and SDP_INSTANCE is not defined, exits immediately with an error message. -online Online mode. Does additional checks that expect p4d, p4broker, and/or p4p to be online. Any servers for which there are *_init scripts in the Instance Bin directory are checked. An error is reported if p4d is expected to be online and is not; warnings are displayed if p4broker or p4p are not online. The Instance Bin directory is the /p4/N/bin directory, where N is the SDP instance name. -c Specify '-c' to call ccheck.sh to compare configurables, using the default config file: /configurables.cfg See 'ccheck.sh -man' for more information. This option can only be used in Online mode; if '-c' is specified, '-online' is implied. -skip <test>[,<test2>,...] Specify a comma-delimited list of named tests to skip. Valid test names are: * cron|crontab: Skip crontab check. Use this if you do not expect crontab to be configured, perhaps if you use a different scheduler. * excess: Skip checks for excess copies of p4d/p4p/p4broker in PATH. * init: Skip compare of init scripts w/templates in /p4/common/etc/init.d * license: Skip license related checks. * commitid: Skip check ensuring ServerID of commit starts with 'commit' or 'master'. * masterid: Synonym for commitid. 
* offline_db: Skip checks that require a healthy offline_db. * p4root: Skip checks that require healthy P4ROOT db files. * p4t_files: Skip checks for existence of P4TICKETS and P4TRUST files. * passwd|password: Skip SDP password checks. * version: Skip version checks. As an alternative to using the '-skip' option, the shell environment variable VERIFY_SDP_SKIP_TEST_LIST can be set to a comma-separated list of named tests to skip. Using the command line parameter is the best choice for temporarily skipping tests, while specifying the environment variable is better for making permanent exceptions (e.g. always excluding the crontab check if crontabs are not used at this site). The variable should be set in /p4/common/config/p4_N.vars. If the '-skip' option is provided, the VERIFY_SDP_SKIP_TEST_LIST variable is ignored (not appended to). So it may make sense to reference the variable on the command line. For example, if the value of the variable is 'crontab', to skip crontab and license checks, you could specify: -skip $VERIFY_SDP_SKIP_TEST_LIST,license -warn <test>[,<test2>,...] Specify a comma-delimited list of named tests that will be reported as warnings rather than errors. The list of valid test names is the same as for the '-skip' option. As an alternative to using the '-warn' option, the shell environment variable VERIFY_SDP_WARN_TEST_LIST can be set to a comma-separated list of named tests to treat as warnings. Using the command line parameter is the best choice for temporarily converting errors to warnings, while specifying the environment variable is better for making the conversion to warnings permanent. The variable should be set in the /p4/common/config/p4_N.vars file. If the '-warn' option is provided, the VERIFY_SDP_WARN_TEST_LIST variable is ignored (not appended to). So it may make sense to reference the variable on the command line. 
For example, if the value of the variable is 'crontab', to convert to warnings for the crontab and excess binaries tests, you could specify: -warn $VERIFY_SDP_WARN_TEST_LIST,excess -si Silent mode, useful for cron operation. Both stdout and stderr are still captured in the log. The '-si' option cannot be used with '-L off'. -L <log> Specify the log file to use. The default is /p4/N/logs/verify_sdp.log The special value 'off' disables logging to a file. Note that '-L off' and '-si' are mutually exclusive. -D Set extreme debugging verbosity. HELP OPTIONS: -h Display short help message -man Display man-style help message EXAMPLES: Example 1: Typical usage: This script is typically called after an SDP update with only the instance name or number as an argument, e.g.: verify_sdp.sh 1 Example 2: Skipping some checks. verify_sdp.sh 1 -skip crontab Example 3: Automation Usage If used from automation already doing its own logging, use -L off: verify_sdp.sh 1 -L off LOGGING: This script generates a log file and also displays it to stdout at the end of processing. By default, the log is: /p4/N/logs/verify_sdp.log. The exception is usage errors, which result in an error being sent to stderr followed by usage info on stdout, followed by an immediate exit. If the '-si' (silent) flag is used, the log is generated, but its contents are not displayed to stdout at the end of processing. EXIT CODES: An exit code of 0 indicates no errors were encountered attempting to perform verifications, and that all checks verified cleanly.</code></pre> </div> </div> </div> </div> <div class="sect2"> <h3 id="_other_scripts_and_files">9.6. Other Scripts and Files</h3> <div class="paragraph"> <p>The following sections describe other files in the SDP distribution. These files are usually not invoked directly by you; rather, they are invoked by higher-level scripts.</p> </div> <div class="sect3"> <h4 id="_backup_functions_sh">9.6.1. 
backup_functions.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/backup_functions.sh</code> script contains Bash functions used in other SDP scripts.</p> </div> <div class="paragraph"> <p>It is <strong>sourced</strong> (<code>source /p4/common/bin/backup_functions.sh</code>) by other scripts that use the common shared functions.</p> </div> <div class="paragraph"> <p>It is not intended to be called directly by the user.</p> </div> </div> <div class="sect3"> <h4 id="_broker_rotate_sh">9.6.2. broker_rotate.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/broker_rotate.sh</code> script rotates the broker log file. It is intended for use on a server machine that has only a broker running. When a broker is run on a p4d server machine, the <code>daily_checkpoint.sh</code> script takes care of rotating the broker log.</p> </div> <div class="paragraph"> <p>It can be added to a crontab for e.g. daily log rotation.</p> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/broker_rotate.sh <instance> /p4/common/bin/broker_rotate.sh 1</pre> </div> </div> </div> <div class="sect3"> <h4 id="_ccheck_sh">9.6.3. ccheck.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/ccheck.sh</code> script compares configurables against a set of defined best practices.</p> </div> <div class="listingblock"> <div class="title">Usage</div> <div class="content"> <pre class="highlight"><code>USAGE for ccheck.sh v1.1.0: ccheck.sh [<SDPInstance>] [-p <Profile>] [-c <CfgFile>] [-y] [-v] [-d|-D] or ccheck.sh [-h|-man|-V] DESCRIPTION: This script compares configurables set on the current server with best practices defined in a data file. OPTIONS: -p <Profile> Specify a profile defined in the config file, such as 'demo' or 'hcc'. A profile defines a set of expected configurable values that can differ from the expected values in other profiles. 
For example, for a demo environment, the filesys.P4ROOT.min might have an expected value of 128M, while the expected value in a prod (production) profile might be 5G, and the same value might be 30G for 'prodent', the profile for production at large enterprise scale. The 'always' profile defines settings that always apply whether '-p' is specified or not. The profile specified with '-p' applies in addition to the 'always' configuration, adding to and possibly overriding settings from the 'always' configuration. The default profile is 'prod', the production profile. Specify the special value '-p none' to use only the settings defined in the 'always' profile. -c <CfgFile> Specify an alternate config file that defines best practice configurables. This is intended for testing. -L <log> Specify the path to a log file, or the special value 'off' to disable logging. By default, all output (stdout and stderr) goes to $LOGS/ccheck.log NOTE: This script is self-logging. That is, output displayed on the screen is simultaneously captured in the log file. Using redirection operators like '> log' or '2>&1' is unnecessary, as is using 'tee'. -y Live operation mode. By default, any commands that affect data, such as setting configurables, are displayed, but not executed. With the '-y' option, commands may be executed. This option is included for future needs. This current version of ccheck.sh does not execute any commands that affect data. -d Display debug messages. -D Set extreme debugging verbosity using bash 'set -x' mode. Implies -d. -si Silent Mode. No output is displayed to the terminal (except for usage errors on startup). Output is captured in the log. The '-si' option cannot be used with '-L off'. 
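The 'always'-plus-profile layering described under '-p' can be sketched as follows (values and variable names are illustrative, not the real configurables.cfg contents):

```shell
# The 'always' profile supplies a baseline expected value...
p4root_min=128M
profile=prod    # the default profile
# ...and the selected profile adds to / overrides that baseline.
if [ "$profile" = "prod" ]; then
    p4root_min=5G
fi
echo "expected filesys.P4ROOT.min=$p4root_min"
```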
HELP OPTIONS: -h Display short help message -man Display man-style help message FILES: The standard configurables config file is: /p4/common/config/configurables.cfg EXAMPLES: Example 1: Check configurables with the default profile, and no logging: ccheck.sh -L off Example 2: Check configurables with the 'prod' (Production) profile: ccheck.sh -p prod Example 3: Check configurables with the 'demo' profile, doing a verbose comparison: ccheck.sh -p demo -v FUTURE ENHANCEMENTS: Presently, this ccheck.sh v1.1.0 only reports configurables. It does not support changing configurables. As the script is currently only capable of reporting, the '-y' option has no effect. Some possible future enhancements are: * Extend reporting to suggesting configuration changes. * Provide an option to make changes to configurables that are safe to change immediately, and provide guidance on those configurables that are best set with guidance and planning. * Provide a way to specify custom exemptions for certain configurables. * Add multi-version support for backward compatibility. This version assumes P4D 2023.1+ (though it will be useful for older versions).</code></pre> </div> </div> </div> <div class="sect3"> <h4 id="_edge_dump_sh">9.6.4. edge_dump.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/edge_dump.sh</code> script is designed to create a seed checkpoint for an Edge server.</p> </div> <div class="paragraph"> <p>An edge server is naturally filtered, with certain database tables (e.g. db.have) excluded. In addition to implicit filtering, the server spec may specify additional tables to be excluded, e.g. 
by using the ArchiveDataFilter field of the server spec.</p> </div> <div class="paragraph"> <p>The script requires the SDP instance and the edge ServerID.</p> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/edge_dump.sh <instance> <edge server id> /p4/common/bin/edge_dump.sh 1 p4d_edge_syd</pre> </div> </div> <div class="paragraph"> <p>It will output the full path of the checkpoint to be copied to the edge server and used with <a href="#_recover_edge_sh">Section 9.6.26, “recover_edge.sh”</a></p> </div> </div> <div class="sect3"> <h4 id="_edge_vars">9.6.5. edge_vars</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/edge_vars</code> file is sourced by scripts that work on edge servers.</p> </div> <div class="paragraph"> <p>It sets the correct list of db.* files that are edge-specific in the federated architecture. The list depends on the version of p4d in use; this script accounts for the p4d version.</p> </div> <div class="paragraph"> <p>It is not intended for users to call directly.</p> </div> </div> <div class="sect3"> <h4 id="_edge_shelf_replicate_sh">9.6.6. edge_shelf_replicate.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/edge_shelf_replicate.sh</code> script is intended to be run on an edge server and will ensure that all shelves are replicated to that edge server (by running <code>p4 print</code> on them).</p> </div> <div class="paragraph"> <p>Only use it if directed to by Perforce Support or Perforce Consulting.</p> </div> </div> <div class="sect3"> <h4 id="_load_checkpoint_sh">9.6.7. 
load_checkpoint.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/load_checkpoint.sh</code> script loads a checkpoint into <code>root</code> and <code>offline_db</code> for a commit/edge/replica instance.</p> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> This script will replace your <code>/p4/<instance>/root</code> database files! <strong>Be careful!</strong> </td> </tr> </table> </div> <div class="paragraph"> <p>If you want to create db files in <code>offline_db</code> then use <a href="#_recreate_offline_db_sh">Section 9.4.11, “recreate_offline_db.sh”</a>.</p> </div> <div class="listingblock"> <div class="title">Usage</div> <div class="content"> <pre class="highlight"><code>USAGE for load_checkpoint.sh v2.10.0: load_checkpoint.sh {<checkpoint>|-latest} [<jnl.1> [<jnl.2> ...]] [-R|-F <SafetyFactor>] [-i <instance>] [-s <ServerID>] [-t <Type>] [-no_start | -verify {default|"Verify Options"} [-delay <delay>]] [-c] [-l] [-r] [-b] [-y] [-L <log>] [-si] [-d|-D] or load_checkpoint.sh [-h|-man] DESCRIPTION: This script loads a specified checkpoint into /p4/N/root and /p4/N/offline_db, where 'N' is the SDP instance name. At the start of processing, preflight checks are done. Preflight checks include: * The specified checkpoint and corresponding *.md5 file must exist. * The specified checkpoint can be a file or a directory (for parallel checkpoint processing). * All journal files to replay (if any were specified) must exist. * The $P4ROOT/server.id file must exist, unless '-s' is specified. * If the $P4ROOT/server.id file exists and '-s' is specified, the values must match. * The $P4ROOT/license file must exist, unless '-l' is specified or if the replica type does not require a license. * Basic SDP structure and key files must exist. If the preflight passes, the p4d_N service is shut down, and the p4broker_N service is also shut down if configured. 
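The first preflight check above (the checkpoint and its *.md5 must both exist) can be sketched as a runnable fragment (paths are illustrative; a temporary directory stands in for the checkpoints directory):

```shell
# Verify both the checkpoint and its checksum file exist before
# touching any databases, as load_checkpoint.sh's preflight does.
dir=$(mktemp -d)
ckp="$dir/p4_1.ckp.42.gz"       # illustrative checkpoint name
touch "$ckp" "${ckp}.md5"       # simulate checkpoint + checksum files
if [ -e "$ckp" ] && [ -e "${ckp}.md5" ]; then
    preflight=ok
    echo "preflight: checkpoint and md5 found"
fi
rm -r "$dir"
```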
If a P4LOG file exists, it is moved aside so there is a fresh p4d server log corresponding to operation after the checkpoint load. If a P4JOURNAL file exists, it is moved aside as the old journal data is no longer relevant after a checkpoint replay. (Exception: If the P4JOURNAL is specified in a list of journals to replay, then it is not moved aside). Next, any existing state* files in P4ROOT are removed. Next, any existing database files in P4ROOT are preserved and moved aside, unless '-R' is specified to remove them. Next, the specified checkpoint is loaded. Upon completion, the Helix Core server process, p4d_N, is started. If the server to be started is a replica, the serviceUser configured for the replica is logged into the P4TARGET server. Any needed 'p4 trust' and 'p4 login' commands are done to enable replication. Note that this part of the processing will fail if the correct super user password is not stored in the standard SDP password file, /p4/common/config/.p4passwd.p4_N.admin After starting the server, a local 'p4 trust' is done if needed, and then a 'p4login -service -v' and 'p4login -v'. By default, the p4d_N service is started, but the p4broker_N service is not. Specify '-b' to restart both services. Finally, the offline_db is rebuilt using the same specified checkpoint and journals. ARGUMENTS AND OPTIONS: <checkpoint> Specify the path to the checkpoint file or directory to load. Exactly one checkpoint must be specified. If a checkpoint file is specified, a serial checkpoint replay will be done. If a checkpoint directory is specified, a parallel replay will be done using the individual files in the directory. For checkpoint files: The file may be a compressed or uncompressed checkpoint, and it may be a case-sensitive or case-insensitive checkpoint. 
The checkpoint file must have a corresponding *.md5 checksum file in the same directory, with one of two name variations: If the checkpoint file is /somewhere/foo.gz, the checksum file may be named /somewhere/foo.gz.md5 or /somewhere/foo.md5. For checkpoint directories: This option is required unless the '-latest' option is used. <jnl.1> [<jnl.2> ...] Specify the path to one or more journal files to replay after the checkpoint, in the correct sequence order. -latest Specify this as an alternative to providing a specific checkpoint file or directory. The script will then search for the latest *.md5 file in the $CHECKPOINTS directory, and use that to replay. The most recent *.md5 file determines which checkpoint to load, be it a file or directory. -R Specify '-R' to remove db.* files in P4ROOT rather than moving them aside. By default, databases are preserved for possible future investigation. A folder named 'MovedDBs.<datestamp>' is created under the P4ROOT directory, and databases are moved there. Keeping an extra copy of databases requires sufficient disk space to hold an extra copy of the db.* files. If '-R' is specified, old databases in P4ROOT are removed, along with state* and other files, and the server.locks directory. -F <SafetyFactor> When replacing an existing set of db.* files, a safety factor is used. This is simply the factor by which the size of pre-existing databases is multiplied when comparing against available disk space. Specify '-F 0' to disable the safety factor check. The disk space safety check is only meaningful if P4ROOT was previously populated with a full set of data. Specifying a number greater than 1, say 1.2 (the default), gives more breathing room. Specifying a value lower than 1, say 0.95, may be OK if you are certain the expanded-from-a-checkpoint db.* files are significantly smaller than the size of the prior set of db.* files. This option is mutually exclusive with '-R'. 
If '-R' is used, databases are removed, and there is no need to calculate disk space. -i <instance> Specify the SDP instance. This can be omitted if SDP_INSTANCE is already defined. -s <ServerID> Specify the ServerID. This value is written into the $P4ROOT/server.id file. If no $P4ROOT/server.id file exists, this flag is required. If the $P4ROOT/server.id file exists, this argument is not needed. If '-s <ServerID>' is given and a $P4ROOT/server.id file exists, the value in the file must match the value specified with this argument. -t <Type> Specify the replica type tag if the checkpoint to be loaded is for an edge server or replica. The set of valid values for the replica type tag are defined in the documentation for mkrep.sh. See: mkrep.sh -man If the type is specified, '-s <ServerID>' is required. If the SDP Server Spec Naming Standard is followed, the ServerID specified with '-s' will start with 'p4d_'. In that case, the '-t edge' value is inferred, and '-t' is not required. If the type is specified or inferred, certain behaviors change based on the type: * If the type is edge, only the correct edge-specific subset of database tables is loaded. * The P4ROOT/license file check is suppressed unless the type is ha, ham, fs, or fsm (standby replicas usable with 'p4 failover'). Do not use this '-t <Type>' option if the checkpoint being loaded is for a master server. For an edge server, an edge seed checkpoint created with edge_dump.sh must be used if the edge is filtered, e.g. if any of the *DataFilter fields in the server spec are used. If the edge server is not filtered by means other than being an edge server (for which certain tables are filtered by nature), a standard full checkpoint from the master can be used. For a filtered forwarding replica, a proper seed checkpoint must be loaded. 
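The server.id consistency rule above can be sketched as follows. This is an assumed illustration of the rule, not the actual logic from load_checkpoint.sh:

```shell
# Assumed sketch of the '-s <ServerID>' / $P4ROOT/server.id rule above:
# if no server.id file exists, a ServerID must be supplied; if both are
# present, they must match. Usage: check_server_id <P4ROOT> [<ServerID>]
check_server_id () {
    local p4root="$1" server_id="${2:-}"
    if [ -r "$p4root/server.id" ]; then
        local existing
        existing=$(cat "$p4root/server.id")
        if [ -n "$server_id" ] && [ "$server_id" != "$existing" ]; then
            echo "Error: -s $server_id does not match server.id ($existing)." >&2
            return 1
        fi
    elif [ -z "$server_id" ]; then
        echo "Error: no $p4root/server.id file; '-s <ServerID>' is required." >&2
        return 1
    fi
}
```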
This can be created on the master using key options to p4d, including '-P <ServerID> -jd <SeedCkp>' on the master (possibly using the 'offline_db' to avoid downtime, similar to how edge_dump.sh works for edge servers). WARNING: While this script is useful for seeding a new edge server, this script is NOT to be used for recovering or reseeding an existing edge server, because all edge-local database tables (mostly workspace data) would be lost. To recover an existing edge server, see the recover_edge.sh script. Warning: If this option is specified with the incorrect type for the checkpoint specified, results will be unpredictable. -verify default [-delay <delay>] -verify "Verify Options" [-delay <delay>] Specify '-verify' to initiate a call to 'p4verify.sh' after the server is online. On a replica, this can be useful to cause the server to pull missing archive files from its P4TARGET server. If this load_checkpoint.sh script is used in a recovery situation for a master server, this '-verify' option can be used to discover if archive files are missing after the metadata is recovered. The 'p4verify.sh' script has a rich set of options. See 'p4verify.sh -man' for more info. The options to pass to p4verify.sh can be passed in a quoted list, or '-verify default' can be used to indicate these default options: -o MISSING By default, a fast verify is used if the p4d version is new enough (2021.1+). See 'p4verify.sh -man' for more information, specifically the description of the '-o MISSING' option. In all cases, p4verify.sh is invoked as a background process; this load_checkpoint.sh script does not wait for it to complete. The p4verify.sh script will email as per normal when it completes. The optional delay option specifies how long to wait until kicking off the p4verify.sh command, in seconds. The default is 600 seconds. This is intended to give the replica time to get caught up with metadata before the archive pulls are scheduled. The delay is a workaround for job079842. 
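The delayed, backgrounded verify described above could be sketched like this. This is an assumed illustration; in the real script the command kicked off would be p4verify.sh:

```shell
# Assumed sketch of the '-verify ... -delay <delay>' behavior described
# above: wait <delay> seconds, then run the given command in the
# background, without blocking the caller.
run_verify_later () {
    local delay="$1"
    shift
    ( sleep "$delay" && "$@" ) &
}
```

For example, run_verify_later 600 p4verify.sh would schedule the verify 600 seconds out while the rest of the script continues.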
This option cannot be used with '-no_start'. -c Specify that SSL certificates are required, and not to be generated with 'p4d_N -Gc'. By default, if '-c' is not supplied and SSL certs are not available, certs are generated automatically with 'p4d_N -Gc'. -l Specify that the server is to start without a license file. By default, if there is no $P4ROOT/license file, this script will abort. Note that if '-l' is specified and a license file is actually needed, the attempt this script makes to start the server after loading the checkpoint will fail. If '-t <type>' is specified, the license check is skipped unless the type is 'ha', 'ham', 'fs', or 'fsm'. Replicas that are potential targets for a 'p4 failover' need a license file for a failover to work. -r Specify '-r' to replay only to P4ROOT. By default, this script replays both to P4ROOT and the offline_db. -no_start Specify '-no_start' to avoid starting the p4d service after loading the checkpoint. This option cannot be used with '-verify'. -b Specify '-b' to start the p4broker process (if configured). By default, the p4d process is started after loading the checkpoint, but the p4broker process is not. This can be useful to ensure the human administrator has an opportunity to do sanity checks before enabling the broker to allow access by end users (if the broker is deployed for this usage). -y Use the '-y' flag to bypass an interactive warning and confirmation prompt. -L <log> Specify the path to a log file. By default, all output (stdout and stderr) goes to: /p4/<instance>/logs/load_checkpoint.<timestamp>.log NOTE: This script is self-logging. That is, output displayed on the screen is simultaneously captured in the log file. Do not run this script with redirection operators like '> log' or '2>&1', and do not use 'tee'. -si Operate silently. All output (stdout and stderr) is redirected to the log only; no output appears on the terminal. -d Set debugging verbosity. 
-D Extreme debugging verbosity using bash 'set -x' mode. HELP OPTIONS: -h Display short help message -man Display man-style help message USAGE TIP: All the non-interactive examples below illustrate the practice of using redirects to create an extra log file named 'load.log' in the $LOGS directory for the instance. This load.log file is identical to, and in addition to, the standard timestamped log generated by this script. The intent of this practice is to leave a trail of when a checkpoint was last loaded on any given server machine. EXAMPLES: EXAMPLE 1: Non-interactive Usage Non-interactive usage (bash syntax) to load a checkpoint: nohup /load_checkpoint.sh /p4/1/checkpoints/p4_1.ckp.4025.gz -i 1 -y < /dev/null > /p4/1/logs/load.log 2>&1 & Then, monitor with: tail -f $(ls -t $LOGS/load_checkpoint.*.log|head -1) EXAMPLE 2: Checkpoint Load then Verify, for the SDP Instance alpha. Non-interactive usage (bash syntax) to load a checkpoint followed by a full verify of recent archive files only, with other options passed to p4verify.sh: nohup /load_checkpoint.sh /p4/alpha/checkpoints/p4_alpha.ckp.95442.gz -i alpha -verify -recent -nu -ns -y < /dev/null > /p4/alpha/logs/load.log 2>&1 & EXAMPLE 3: Load Checkpoint and Journals Non-interactive usage (bash syntax) to load a checkpoint and subsequent journals: nohup /load_checkpoint.sh /p4/1/checkpoints/p4_1.ckp.4025.gz /p4/1/checkpoints/p4_1.jnl.4025 /p4/1/checkpoints/p4_1.jnl.4026 -i 1 -y < /dev/null > /p4/1/logs/load.log 2>&1 & Then, monitor with: tail -f $(ls -t $LOGS/load_checkpoint.*.log|head -1) EXAMPLE 4: Interactive usage. Interactive usage to load a checkpoint with no license file. /load_checkpoint.sh /p4/1/checkpoints/p4_1.ckp.4025.gz -i 1 -l With interactive usage, logging still occurs; all output to the screen is captured. Note that non-interactive usage with nohup is recommended for checkpoints with a long replay duration, to make operation more reliable in the event of a shell session disconnect. 
Alternately, running interactively in a 'screen' session (if 'screen' is available) provides similar protection against shell session disconnects. EXAMPLE 5: Seed New Edge Seeding a new edge server. nohup /load_checkpoint.sh /p4/1/checkpoints/p4_1.ckp.4025.gz -i 1 -s p4d_edge_syd < /dev/null > /p4/1/logs/load.log 2>&1 & WARNING: While this script is useful for seeding a new edge server, this script is NOT to be used for recovering or reseeding an existing edge server, because all edge-local database tables (mostly workspace data) would be lost. To recover an existing edge server, see the recover_edge.sh script. EXAMPLE 6: Seed New Edge and Verify Seeding a new edge server and then doing a verify with default options. nohup /load_checkpoint.sh /p4/1/checkpoints/p4_1.ckp.4025.gz -i 1 -s p4d_edge_syd -verify default < /dev/null > /p4/1/logs/load.log 2>&1 & EXAMPLE 7: Load a Parallel Checkpoint on an Edge and Verify Recent This non-interactive example loads a parallel checkpoint directory. The usage difference is that the checkpoint path provided is a parallel checkpoint directory rather than a single checkpoint file. This example loads the checkpoint for a new edge server, and verifies only the most recent 3 changes in each depot. The delay before calling p4verify.sh, 10 minutes (600 seconds) by default, is shortened to 5 seconds in this example. nohup /load_checkpoint.sh /p4/1/checkpoints/p4_1.ckp.4025 -i 1 -s p4d_edge_syd -verify "-o MISSING -recent=3 -ns -L /p4/1/logs/p4verify.fast_and_recent.log" -delay 5 -y < /dev/null > /p4/1/logs/load.log 2>&1 &</code></pre> </div> </div> </div> <div class="sect3"> <h4 id="_gen_default_broker_cfg_sh">9.6.8. gen_default_broker_cfg.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/gen_default_broker_cfg.sh</code> script generates an SDP instance-specific variant of the generic P4Broker config file. 
The output is displayed to standard output.</p> </div> <div class="paragraph"> <p>Usage:</p> </div> <div class="literalblock"> <div class="content"> <pre>cd /p4/common/bin gen_default_broker_cfg.sh 1 > /tmp/p4broker.cfg.ToBeReviewed</pre> </div> </div> <div class="paragraph"> <p>The final p4broker.cfg should end up here:</p> </div> <div class="literalblock"> <div class="content"> <pre>/p4/common/config/p4_${SDP_INSTANCE}.${SERVERID}.broker.cfg</pre> </div> </div> </div> <div class="sect3"> <h4 id="_journal_watch_sh">9.6.9. journal_watch.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/journal_watch.sh</code> script will check disk space available to P4JOURNAL and trigger a journal rotation based on specified thresholds. This is useful in case you are in danger of running out of disk space and your rotated journal files are stored on a separate partition from the active journal.</p> </div> <div class="paragraph"> <p>This script uses the following external variables:</p> </div> <div class="ulist"> <ul> <li> <p>SDP_INSTANCE - The instance of Perforce that is being backed up. 
If not set in environment, pass in as argument to script.</p> </li> <li> <p>P4JOURNALWARN - Amount of space left (K,M,G,%) before min journal space where an email alert is sent</p> </li> <li> <p>P4JOURNALWARNALERT - Send an alert if warn threshold is reached (true/false, default: false)</p> </li> <li> <p>P4JOURNALROTATE - Amount of space left (K,M,G,%) before min journal space to trigger a journal rotation</p> </li> <li> <p>P4OVERRIDEKEEPJNL - Allow script to temporarily override KEEPJNL to retain enough journals to replay against oldest checkpoint (true/false, default: false)</p> </li> </ul> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/journal_watch.sh <P4JOURNALWARN> <P4JOURNALWARNALERT> <P4JOURNALROTATE> <P4OVERRIDEKEEPJNL (Optional)></pre> </div> </div> <div class="paragraph"> <div class="title">Examples</div> <p>Run from CLI that will warn via email if less than 20% is available and rotate journal when less than 10% is available</p> </div> <div class="literalblock"> <div class="content"> <pre>./journal_watch.sh 20% TRUE 10% TRUE</pre> </div> </div> <div class="paragraph"> <p>Cron job that will warn via email if less than 20% is available and rotate journal when less than 10% is available</p> </div> <div class="literalblock"> <div class="content"> <pre>30 * * * * [ -e /p4/common/bin ] && /p4/common/bin/run_if_master.sh ${INSTANCE} /p4/common/bin/journal_watch.sh ${INSTANCE} 20\% TRUE 10\% TRUE</pre> </div> </div> </div> <div class="sect3"> <h4 id="_kill_idle_sh">9.6.10. 
kill_idle.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/kill_idle.sh</code> script runs <code>p4 monitor terminate</code> on all processes showing in the output of <code>p4 monitor show</code> that are in the IDLE state.</p> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/kill_idle.sh <instance> /p4/common/bin/kill_idle.sh 1</pre> </div> </div> </div> <div class="sect3"> <h4 id="_mkdirs_sh">9.6.11. mkdirs.sh</h4> <div class="paragraph"> <p>The <code>mkdirs.sh</code> script is intended for the setup and configuration of a <strong>new</strong> Helix Core instance. It should be run only for adding a new instance, not against an existing instance.</p> </div> <div class="listingblock"> <div class="title">Usage</div> <div class="content"> <pre class="highlight"><code>USAGE for mkdirs.sh v4.10.0: mkdirs.sh <instance> [-s <ServerID>] [-t <ServerType>] [-tp <TargetPort>] [-lp <ListenPort>] [-I <svc>[,<svc2>]] [-MDD /bigdisk] [-MLG /jnl] [-MDB1 /db1] [-MDB2 /db2] [-f] [-p] [-test [-clean]] [-n] [-L <log>] [-d|-D] or mkdirs.sh [-h|-man] DESCRIPTION: == Overview == This script initializes an SDP instance on a single machine. This script is intended to support two scenarios: * First time SDP installation on a given machine. * Adding new SDP instances (separate Helix Core data sets) to an existing SDP installation on a given machine. An SDP instance is a single Helix Core data set, with its own unique set of users, changelist numbers, jobs, labels, versioned files, etc. An organization may run a single instance or multiple instances. This is intended to be run either as root or as the operating system user account (OSUSER) that p4d is configured to run as, typically 'perforce'. It should be run as root for the initial install. Subsequent additions of new instances do not require root. 
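The root-versus-OSUSER rule above can be sketched as a preflight check. This is an assumed illustration, not the actual mkdirs.sh logic:

```shell
# Assumed preflight sketch of the rule above: the first SDP install on a
# machine runs as root; later instance additions run as the OSUSER.
# Usage: check_install_user yes|no  (yes = first install on this machine)
check_install_user () {
    local first_install="$1"
    if [ "$first_install" = "yes" ] && [ "$(id -u)" -ne 0 ]; then
        echo "Initial SDP install should be run as root." >&2
        return 1
    fi
}
```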
== Directory Structure == If an initial install is done by a user other than root, various directories must exist and be writable and owned by 'perforce' before starting: * /p4 * /hxdepots * /hxlogs * /hxmetadata The directories starting with '/hx' are configurable. This script creates an init script in the /p4/N/bin directory. == Crontab == Crontabs are generated for all server types except p4broker. After running this script, set up the crontab based on templates generated in /p4/common/etc/cron.d. For convenience, a sample crontab is generated for the current machine as /p4/p4.crontab.<SDPInstance> (or /p4/p4.crontab.<SDPInstance>.new if the former name exists). These files should be copied or merged into any existing files named with this convention: /p4/common/etc/cron.d/crontab.<osuser>.<host> where <osuser> is the user that services run as (typically 'perforce'), and <host> is the short hostname (as returned by a 'hostname -s' command). REQUIRED PARAMETERS: <instance> Specify the SDP instance name to add. This is a reference to the Perforce Helix Core data set. OPTIONS: -s <ServerID> Specify the ServerID, overriding the REPLICA_ID setting in the configuration file. -S <TargetServerID> Specify the ServerID of the P4TARGET of the server being installed. Use this only when setting up an HA replica of an edge server. -t <ServerType> Specify the server type, overriding the SERVER_TYPE setting in the config file. Valid values are: * p4d_master - A master/commit server. * p4d_replica - A replica with all metadata from the master (not filtered in any way). * p4d_filtered_replica - A filtered replica or filtered forwarding replica. * p4d_edge - An edge server. * p4d_edge_replica - Replica of an edge server. If used, '-S <TargetServerID>' is required. * p4broker - An SDP host running only a standalone p4broker, with no p4d. * p4proxy - An SDP host running only a standalone p4p with no p4d. -tp <TargetPort> Specify the target port. 
Use only if ServerType is p4proxy or p4broker. -lp <ListenPort> Specify the listen port. Use only if ServerType is p4proxy or p4broker. -I [<svc>[,<svc2>]] Specify additional init scripts to be added to /p4/<instance>/bin for the instance. By default, the p4p service is installed only if '-t p4proxy' is specified, and p4dtg is never installed by default. Valid values to specify are 'p4p' and 'dtg' (for the P4DTG init script). If services are not installed by default, they can be added later using templates in /p4/common/etc/init.d. Also, templates for systemd service files are supplied in /p4/common/etc/systemd/system. -MDD /bigdisk -MLG /jnl -MDB1 /db1 -MDB2 /db2 Use the '-M*' options to specify mount points, overriding DD/LG/DB1/DB2 settings in the config file. Sample: -MDD /bigdisk -MLG /jnl -MDB1 /fast If -MDB2 is not specified, it is set to the same value as -MDB1 if that is set, or else it defaults to the same default value as DB1. -f Specify '-f' (fast mode) to skip chown/chmod commands on depot files. This should only be used when you are certain the ownership and permissions are correct, and if you have large amounts of existing data for which the chown/chmod of the directory tree would be slow. -p Specify '-p' to halt processing after preflight checks are complete, and before actual processing starts. By default, processing starts immediately upon successful completion of preflight checks. -L <log> Specify the path to a log file, or the special value 'off' to disable logging. By default, all output (stdout and stderr) goes to this file in the current directory: mkdirs.<instance>.<datestamp>.log NOTE: This script is self-logging. That is, output displayed on the screen is simultaneously captured in the log file. Do not run this script with redirection operators like '> log' or '2>&1', and do not use 'tee'. 
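The -MDB2 defaulting described above amounts to a chained parameter-expansion fallback. A minimal sketch, assuming illustrative directory names:

```shell
# Assumed sketch of the '-MDB2' defaulting above: if -MDB2 is not given,
# fall back to the -MDB1 value if set, else to the DB1 default.
# The default path here is illustrative only.
DB1_DEFAULT=/hxmetadata
MDB2="${MDB2:-${MDB1:-$DB1_DEFAULT}}"
```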
DEBUGGING OPTIONS: -test Specify '-test' to execute a simulated install to /tmp/p4 as the install root (rather than /p4), and with the mount point directories specified in the configuration file prefixed with /tmp/hxmounts, defaulting to: * /tmp/hxmounts/hxdepots * /tmp/hxmounts/hxlogs * /tmp/hxmounts/hxmetadata -clean Specify '-clean' with '-test' to clean up from prior test installs, which will result in removal of files/folders installed under /tmp/hxmounts and /tmp/p4. Do not specify '-clean' if you want to test a series of installs. -n No-Op. In No-Op mode, no actions that affect data or structures are taken. Instead, commands that would be run are displayed. This is an alternative to -test. Unlike '-p' which stops after the preflight checks, with '-n' more processing logic can be exercised, with greater detail about what commands would be executed without '-n'. -d Increase verbosity for debugging. -D Set extreme debugging verbosity, using bash '-x' mode. Also implies -d. HELP OPTIONS: -h Display short help message -man Display man-style help message FILES: The mkdirs.sh script uses a configuration file for many settings. A sample file, mkdirs.cfg, is included with the SDP. After determining your SDP instance name (e.g. '1' or 'abc'), create a configuration file for it named mkdirs.<N>.cfg, replacing 'N' with your instance. Running 'mkdirs.sh N' will load configuration settings from mkdirs.N.cfg. UPGRADING SDP: This script can be useful in testing and upgrading to new versions of the SDP, when the '-test' flag is used. EXAMPLES: Example 1: Setup of first instance Setup of the first instance on a machine using the default instance name, '1', executed after using sudo to become root: $ sudo su - $ cd /hxdepots/sdp/Server/Unix/setup $ vi mkdirs.cfg # Adjust settings as desired, e.g. P4PORT, P4BROKERPORT, etc. $ ./mkdirs.sh 1 A log will be generated, mkdirs.1.<timestamp>.log Example 2: Setup of additional instance named 'abc'. 
Set up a second instance on the machine, which will be a separate Helix Core instance with its own P4ROOT, its own set of users and changelists, and its own license file (copied from the master instance). Note that while the first run of mkdirs.sh on a given machine should be done as root, subsequent instance additions should be done as the 'perforce' user (or whatever operating system user account Perforce Helix services run as). $ sudo su - perforce $ cd /hxdepots/sdp/Server/Unix/setup $ cp -p mkdirs.cfg mkdirs.abc.cfg $ vi mkdirs.abc.cfg # Adjust settings in mkdirs.abc.cfg as desired, e.g. P4PORT, P4BROKERPORT, etc. $ ./mkdirs.sh abc A log will be generated, mkdirs.abc.<timestamp>.log Example 3: Setup of additional instance named 'alpha' to run a standalone p4p: $ ./mkdirs.sh alpha -t p4proxy Example 4: Setup of an instance named '1' to run a standalone p4broker: $ ./mkdirs.sh 1 -t p4broker</code></pre> </div> </div> </div> <div class="sect3"> <h4 id="_p4d_base">9.6.12. p4d_base</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/p4d_base</code> script is the script to start/stop/restart the <code>p4d</code> instance.</p> </div> <div class="paragraph"> <p>It is called by the <code>p4d_<instance>_init</code> script (and thus also <code>systemctl</code> on systemd Linux distributions). It is not intended to be called by users directly.</p> </div> </div> <div class="sect3"> <h4 id="_p4broker_base">9.6.13. p4broker_base</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/p4broker_base</code> script is very similar to <a href="#_p4d_base">Section 9.6.12, “p4d_base”</a> but for the <code>p4broker</code> service instance.</p> </div> <div class="paragraph"> <p>See <a href="https://www.perforce.com/manuals/p4dist/Content/P4Dist/chapter.broker.html">p4broker in SysAdmin Guide</a></p> </div> </div> <div class="sect3"> <h4 id="_p4ftpd_base">9.6.14. 
p4ftpd_base</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/p4ftpd_base</code> script is very similar to <a href="#_p4d_base">Section 9.6.12, “p4d_base”</a> but for the <code>p4ftp</code> service instance. P4FTP has been deprecated; this may be removed in a future SDP release.</p> </div> <div class="paragraph"> <p>This product is very seldom used these days!</p> </div> <div class="paragraph"> <p>See <a href="https://www.perforce.com/manuals/p4ftp/index.html">P4FTP Installation Guide.</a></p> </div> </div> <div class="sect3"> <h4 id="_p4p_base">9.6.15. p4p_base</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/p4p_base</code> script is very similar to <a href="#_p4d_base">Section 9.6.12, “p4d_base”</a> but for the <code>p4p</code> (P4 Proxy) service instance.</p> </div> <div class="paragraph"> <p>See <a href="https://www.perforce.com/manuals/p4dist/Content/P4Dist/chapter.proxy.html">p4proxy in SysAdmin Guide</a></p> </div> </div> <div class="sect3"> <h4 id="_p4pcm_pl">9.6.16. p4pcm.pl</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/p4pcm.pl</code> script is a utility to remove files in the proxy cache if the amount of free disk space falls below the low threshold.</p> </div> <div class="listingblock"> <div class="title">Usage</div> <div class="content"> <pre class="highlight"><code>Usage: p4pcm.pl [-d "proxy cache dir"] [-tlow <low_threshold>] [-thigh <high_threshold>] [-n] or p4pcm.pl -h This utility removes files in the proxy cache if the amount of free disk space falls below the low threshold (default 10GB). It removes files (oldest first) until the high threshold (default 20GB) is reached. Specify the thresholds in kilobyte units (kb). The '-d "proxy cache dir"' argument is required unless $P4PCACHE is defined, in which case it is used. The log is $LOGS/p4pcm.log if $LOGS is defined, else p4pcm.log in the current directory. 
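The oldest-first trimming loop that p4pcm.pl performs can be sketched conceptually in shell. This is an assumed illustration, not the Perl script itself; 'free_kb' is a hypothetical stand-in for a real disk-space query such as 'df':

```shell
# Conceptual sketch (assumed) of the p4pcm.pl trimming loop: while free
# space is below the high threshold, remove the oldest cached file.
# trim_cache <cache_dir> <high_threshold_kb>; 'free_kb' must be defined
# by the caller to report free space in KB.
trim_cache () {
    local dir="$1" thigh_kb="$2"
    local oldest
    while [ "$(free_kb)" -lt "$thigh_kb" ]; do
        oldest=$(ls -1tr "$dir" | head -1)
        [ -n "$oldest" ] || break   # cache empty; nothing left to remove
        rm -f "$dir/$oldest"
    done
}
```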
Use '-n' to show what files would be removed.</code></pre> </div> </div> </div> <div class="sect3"> <h4 id="_p4review_py">9.6.17. p4review.py</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/p4review.py</code> script sends out email containing the change descriptions to users who are configured as reviewers for affected files (done by setting the Reviews: field in the user specification). This script is a version of the <code>p4review.py</code> script that is available on the Perforce Web site, but has been modified to use the server instance number. It relies on a configuration file in <code>/p4/common/config</code>, called <code>p4_<instance>.p4review.cfg</code>.</p> </div> <div class="paragraph"> <p>This is not required if you have installed Swarm which also performs notification functions and is easier for users to configure.</p> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/p4review.py # Uses config file as above</pre> </div> </div> </div> <div class="sect3"> <h4 id="_p4review2_py">9.6.18. 
p4review2.py</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/p4review2.py</code> script is an enhanced version of <a href="#_p4review_py">Section 9.6.17, “p4review.py”</a>.</p> </div> <div class="olist arabic"> <ol class="arabic"> <li> <p>Run p4review2.py --sample-config > p4review.conf</p> </li> <li> <p>Edit the file p4review.conf</p> </li> <li> <p>Add a crontab similar to this:</p> <div class="ulist"> <ul> <li> <p>* * * * * python2.7 /path/to/p4review2.py -c /path/to/p4review.conf</p> </li> </ul> </div> </li> </ol> </div> <div class="paragraph"> <p>Features:</p> </div> <div class="ulist"> <ul> <li> <p>Prevent multiple copies running concurrently with a simple lock file.</p> </li> <li> <p>Logging support built-in.</p> </li> <li> <p>Takes command-line options.</p> </li> <li> <p>Configurable subject and email templates.</p> </li> <li> <p>Use P4Python when available and use P4 (the CLI) as a fallback.</p> </li> <li> <p>Option to send a <em>single</em> email per user per invocation instead of multiple ones.</p> </li> <li> <p>Reads config from an INI-like file using ConfigParser</p> </li> <li> <p>Has command-line options that override environment variables.</p> </li> <li> <p>Handles unicode-enabled server <strong>and</strong> non-ASCII characters on a non-unicode-enabled server.</p> </li> <li> <p>Option to opt-in (--opt-in-path) reviews globally (for migration from old review daemon).</p> </li> <li> <p>Configurable URLs for changes/jobs/users (for swarm).</p> </li> <li> <p>Able to limit the maximum email message size with a configurable limit.</p> </li> <li> <p>SMTP auth and TLS (not SSL) support.</p> </li> <li> <p>Handles P4AUTH (optional; use of P4AUTH is no longer recommended).</p> </li> </ul> </div> </div> <div class="sect3"> <h4 id="_proxy_rotate_sh">9.6.19. proxy_rotate.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/proxy_rotate.sh</code> script rotates the proxy log file. It is intended for use on a server machine that has only a proxy running. 
When a proxy is run on a p4d server machine, the <code>daily_checkpoint.sh</code> script takes care of rotating the proxy log.</p> </div> <div class="paragraph"> <p>It can be added to a crontab for e.g. daily log rotation.</p> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/proxy_rotate.sh <instance> /p4/common/bin/proxy_rotate.sh 1</pre> </div> </div> </div> <div class="sect3"> <h4 id="_p4sanity_check_sh">9.6.20. p4sanity_check.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/p4sanity_check.sh</code> script is a simple script to run:</p> </div> <div class="ulist"> <ul> <li> <p>p4 set</p> </li> <li> <p>p4 info</p> </li> <li> <p>p4 changes -m 10</p> </li> </ul> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/p4sanity_check.sh <instance> /p4/common/bin/p4sanity_check.sh 1</pre> </div> </div> </div> <div class="sect3"> <h4 id="_p4dstate_sh">9.6.21. p4dstate.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/p4dstate.sh</code> is a trouble-shooting script for use when directed by support, e.g. in situations such as server hanging, major locking problems etc.</p> </div> <div class="paragraph"> <p>It is an "SDP-aware" version of the <a href="https://portal.perforce.com/s/article/15261">standard p4dstate.sh</a> so that it only requires the SDP instance to be specified as a parameter (since the location of logs etc are defined by SDP).</p> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>sudo /p4/common/bin/p4dstate.sh <instance> sudo /p4/common/bin/p4dstate.sh 1</pre> </div> </div> </div> <div class="sect3"> <h4 id="_ps_functions_sh">9.6.22. ps_functions.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/ps_functions.sh</code> library file contains common functions for using 'ps' to check on process ids. 
It is not intended to be called by users.</p> </div> <div class="literalblock"> <div class="content"> <pre>get_pids ($exe)</pre> </div> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>Call with an exe name, e.g. /p4/1/bin/p4web_1</pre> </div> </div> <div class="literalblock"> <div class="title">Examples</div> <div class="content"> <pre>p4web_pids=$(get_pids $P4WEBBIN) p4broker_pids=$(get_pids $P4BROKERBIN)</pre> </div> </div> </div> <div class="sect3"> <h4 id="_pull_sh">9.6.23. pull.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/pull.sh</code> is a reference pull trigger implementation for <a href="https://portal.perforce.com/s/article/15337">External Archive Transfer using pull-archive and edge-content triggers</a></p> </div> <div class="paragraph"> <p>It is a fast content transfer mechanism using Aspera (and can be adapted to other similar UDP based products.) An Edge server uses this trigger to pull files from its upstream Commit server. 
It replaces or augments the built-in replication archive pull and is useful in scenarios where there are lots of large (binary) files and commit/edge are geographically distributed with high latency and/or low bandwidth between them.</p> </div> <div class="paragraph"> <p>See also companion trigger <a href="#_submit_sh">Section 9.6.31, “submit.sh”</a>.</p> </div> <div class="paragraph"> <p>It is based around getting a list of files to copy from commit to edge, then doing the file transfer using <code>ascp</code> (Aspera file copy).</p> </div> <div class="paragraph"> <p>The configurable <code>pull.trigger.dir</code> should be set to a temp folder like <code>/p4/1/tmp</code>.</p> </div> <div class="paragraph"> <p>Startup commands look like:</p> </div> <div class="literalblock"> <div class="content"> <pre>startup.2=pull -i 1 -u --trigger --batch=1000</pre> </div> </div> <div class="paragraph"> <p>The trigger entry for the pull commands looks like this:</p> </div> <div class="literalblock"> <div class="content"> <pre>pull_archive pull-archive pull "/p4/common/bin/triggers/pull.sh %archiveList%"</pre> </div> </div> <div class="paragraph"> <p>There are some pull trigger options, but they are not necessary with Aspera. Aspera works best if you give it the max batch size of 1000 and set up 1 or more threads. Note that each thread will use the max bandwidth you specify, so a single pull-trigger thread is probably all you will want.</p> </div> <div class="paragraph"> <p>The <code>ascp</code> user needs to have ssh public keys set up or export <code>ASPERA_SCP_PASS</code>.</p> </div> <div class="paragraph"> <p>The <code>ascp</code> user should be set up with the target as / with full write access to the volume where the depot files are located. 
The easiest way to do that is to use the same user that is running the p4d service.</p> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> ensure ascp is correctly configured and working in your environment: <a href="https://www-01.ibm.com/support/docview.wss?uid=ibm10747281" class="bare">https://www-01.ibm.com/support/docview.wss?uid=ibm10747281</a> (search for "ascp connectivity testing") </td> </tr> </table> </div> <div class="paragraph"> <p>Standard SDP environment is assumed, e.g P4USER, P4PORT, OSUSER, P4BIN, etc. are set, PATH is appropriate, and a super user is logged in with a non-expiring ticket.</p> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> Read the trigger comments for any customization requirements required for your environment. </td> </tr> </table> </div> <div class="paragraph"> <p>See also the test version of the script: <a href="#_pull_test_sh">Section 9.6.24, “pull_test.sh”</a></p> </div> <div class="paragraph"> <p>See the <code>/p4/common/bin/triggers/pull.sh</code> script for details and to customize for your environment.</p> </div> </div> <div class="sect3"> <h4 id="_pull_test_sh">9.6.24. pull_test.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/pull_test.sh</code> script is a test script.</p> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> THIS IS A TEST SCRIPT - it substitutes for <a href="#_pull_sh">Section 9.6.23, “pull.sh”</a> which uses Aspera’s <code>ascp</code> and replaces that with Linux standard <code>scp</code> utility. 
<strong>IT IS NOT INTENDED FOR PRODUCTION USE!!!!</strong> </td> </tr> </table> </div> <div class="paragraph"> <p>If you don’t have an Aspera license, then you can test with this script to understand the process.</p> </div> <div class="paragraph"> <p>See the <code>/p4/common/bin/triggers/pull_test.sh</code> script for details.</p> </div> <div class="paragraph"> <p>There is a demonstrator project showing usage: <a href="https://github.com/rcowham/p4d-edge-pull-demo" class="bare">https://github.com/rcowham/p4d-edge-pull-demo</a></p> </div> </div> <div class="sect3"> <h4 id="_purge_revisions_sh">9.6.25. purge_revisions.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/purge_revisions.sh</code> script allows you to archive files, and optionally purge them, based on a configurable number of days and the minimum number of revisions that you want to keep. This is useful if you want to keep a certain number of days’ worth of files instead of a specific number of revisions.</p> </div> <div class="paragraph"> <p>Note: If you run this script with purge mode disabled, and then enable it after the fact, all previously archived files specified in the configuration file will be purged if the configured criteria are met.</p> </div> <div class="paragraph"> <p>Prior to running this script, you may want to disable server locks for archive to reduce the impact on end users.</p> </div> <div class="paragraph"> <p>See: <a href="https://www.perforce.com/perforce/doc.current/manuals/cmdref/Content/CmdRef/configurables.configurables.html#server.locks.archive" class="bare">https://www.perforce.com/perforce/doc.current/manuals/cmdref/Content/CmdRef/configurables.configurables.html#server.locks.archive</a></p> </div> <div class="paragraph"> <p>Parameters:</p> </div> <div class="ulist"> <ul> <li> <p>SDP_INSTANCE - The instance of Perforce that is being backed up. 
If not set in environment, pass in as argument to script.</p> </li> <li> <p>P4_ARCHIVE_CONFIG - The location of the config file used to determine retention. If not set in environment, pass in as argument to script. This can be stored on a physical disk or in Perforce itself.</p> </li> <li> <p>P4_ARCHIVE_DEPOT - Depot to archive the files in (string)</p> </li> <li> <p>P4_ARCHIVE_REPORT_MODE - Do not archive revisions; report on which revisions would have been archived (bool - default: true)</p> </li> <li> <p>P4_ARCHIVE_TEXT - Archive text files (or other revisions stored in delta format, such as files of type binary+D) (bool - default: false)</p> </li> <li> <p>P4_PURGE_MODE - Enables purging of files after they are archived (bool - default: false)</p> </li> </ul> </div> <div class="paragraph"> <div class="title">Config File Format</div> <p>The config file should contain a list of file paths, the number of days, and the minimum number of revisions to keep, in a tab-delimited format.</p> </div> <div class="literalblock"> <div class="content"> <pre><PATH> <DAYS> <MINIMUM REVISIONS></pre> </div> </div> <div class="paragraph"> <p>Example:</p> </div> <div class="literalblock"> <div class="content"> <pre>//test/1.txt 10 1
//test/2.txt 1 3
//test/3.txt 10 10
//test/4.txt 30 3
//test/5.txt 30 8</pre> </div> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/purge_revisions.sh <SDP_INSTANCE> <P4_ARCHIVE_CONFIG> <P4_ARCHIVE_DEPOT> <P4_ARCHIVE_REPORT_MODE (Optional)> <P4_ARCHIVE_TEXT (Optional)> <P4_PURGE_MODE (Optional)></pre> </div> </div> <div class="paragraph"> <div class="title">Examples</div> <p>Run from the CLI to archive files as defined in the config file:</p> </div> <div class="literalblock"> <div class="content"> <pre>./purge_revisions.sh 1 /p4/common/config/p4_1.p4purge.cfg archive FALSE</pre> </div> </div> <div class="paragraph"> <p>Cron job that will archive files as defined in the config file, including text 
files</p> </div> <div class="literalblock"> <div class="content"> <pre>30 0 * * * [ -e /p4/common/bin ] && /p4/common/bin/run_if_master.sh ${INSTANCE} /p4/common/bin/purge_revisions.sh ${INSTANCE} /p4/common/config/p4_1.p4purge.cfg archive FALSE FALSE</pre> </div> </div> </div> <div class="sect3"> <h4 id="_recover_edge_sh">9.6.26. recover_edge.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/recover_edge.sh</code> script is designed to rebuild an Edge server from a seed checkpoint from the master while keeping the existing edge-specific data.</p> </div> <div class="paragraph"> <p>You must first copy the seed checkpoint from the master, created with <a href="#_edge_dump_sh">Section 9.6.4, “edge_dump.sh”</a>, to the edge server before running this script. (Alternatively, a full checkpoint from the master can be used so long as the edge server spec does not specify any filtering, e.g. does not use ArchiveDataFilter.)</p> </div> <div class="paragraph"> <p>Then run this script on the Edge server host with the instance number and full path of the master seed checkpoint as parameters.</p> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/recover_edge.sh <instance> <absolute path to checkpoint>
/p4/common/bin/recover_edge.sh 1 /p4/1/checkpoints/p4_1.edge_syd.seed.ckp.9188.gz</pre> </div> </div> </div> <div class="sect3"> <h4 id="_replica_cleanup_sh">9.6.27. 
replica_cleanup.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/replica_cleanup.sh</code> script performs the following actions for a replica:</p> </div> <div class="ulist"> <ul> <li> <p>rotate logs</p> </li> <li> <p>remove old checkpoints and journals</p> </li> <li> <p>remove old logs</p> </li> </ul> </div> <div class="paragraph"> <p>This should be used on replicas on which the <code>sync_replica.sh</code> script is not used.</p> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/replica_cleanup.sh <instance>
/p4/common/bin/replica_cleanup.sh 1</pre> </div> </div> </div> <div class="sect3"> <h4 id="_replica_status_sh">9.6.28. replica_status.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/replica_status.sh</code> script is regularly run by crontab on a replica or edge (using <a href="#_run_if_replica_sh">Section 9.4.15, “run_if_replica.sh”</a>).</p> </div> <div class="literalblock"> <div class="content"> <pre>0 8 * * * [ -e /p4/common/bin ] && /p4/common/bin/run_if_replica.sh ${INSTANCE} /p4/common/bin/replica_status.sh ${INSTANCE} > /dev/null
0 8 * * * [ -e /p4/common/bin ] && /p4/common/bin/run_if_edge.sh ${INSTANCE} /p4/common/bin/replica_status.sh ${INSTANCE} > /dev/null</pre> </div> </div> <div class="paragraph"> <p>It performs a <code>p4 pull -lj</code> command on the replica to report current replication status, and emails this to the standard SDP administrator email on a daily basis. 
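</p> </div> <div class="paragraph"> <p>The same check can be run manually on the replica at any time. The output of <code>p4 pull -lj</code> looks something like the following (the journal, sequence, and date values shown here are illustrative only):</p> </div> <div class="literalblock"> <div class="content"> <pre>$ p4 pull -lj
Current replica journal state is:       Journal 1237,   Sequence 2845829.
Current master journal state is:        Journal 1237,   Sequence 2845829.
The statefile was last modified at:     2022/03/29 14:15:16.
The replica server time is currently:   2022/03/29 14:15:18 +0000 UTC</pre> </div> </div> <div class="paragraph"> <p>When the replica journal state matches (or closely trails) the master journal state, replication is up to date.</p> </div> <div class="paragraph"> <p>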
This is useful for monitoring purposes to detect replica lag or similar problems.</p> </div> <div class="paragraph"> <p>If you are using enhanced monitoring such as <a href="https://github.com/perforce/p4prometheus">p4prometheus</a>, then this script may not be required.</p> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/replica_status.sh <instance>
/p4/common/bin/replica_status.sh 1</pre> </div> </div> </div> <div class="sect3"> <h4 id="_request_replica_checkpoint_sh">9.6.29. request_replica_checkpoint.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/request_replica_checkpoint.sh</code> script is intended to be run on a standby replica. It essentially just calls <code>p4 admin checkpoint -Z</code> to request a checkpoint and exits. The actual checkpoint is created on the next journal rotation on the master.</p> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/request_replica_checkpoint.sh <instance>
/p4/common/bin/request_replica_checkpoint.sh 1</pre> </div> </div> </div> <div class="sect3"> <h4 id="_rotate_journal_sh">9.6.30. 
rotate_journal.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/rotate_journal.sh</code> script is a convenience script to perform the following actions for the specified instance (single parameter):</p> </div> <div class="ulist"> <ul> <li> <p>rotate the live journal</p> </li> <li> <p>replay it to the <code>offline_db</code></p> </li> <li> <p>rotate log files according to the settings in <code>p4_vars</code> for things like <code>KEEP_LOGS</code></p> </li> </ul> </div> <div class="paragraph"> <p>It has several use cases:</p> </div> <div class="ulist"> <ul> <li> <p>For sites with large, long-running checkpoints, it can be used to schedule journal rotations to occur more frequently than <code>daily_checkpoint.sh</code> is run.</p> </li> <li> <p>It can be used to trigger checkpoints to run on edge servers.</p> </li> </ul> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/rotate_journal.sh <instance>
/p4/common/bin/rotate_journal.sh 1</pre> </div> </div> </div> <div class="sect3"> <h4 id="_submit_sh">9.6.31. submit.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/submit.sh</code> script is an example submit trigger for <a href="https://portal.perforce.com/s/article/15337">External Archive Transfer using pull-archive and edge-content triggers</a>.</p> </div> <div class="paragraph"> <p>This is a reference edge-content trigger for use with an Edge/Commit server topology - the Edge server uses this trigger to transmit files being submitted to the Commit server instead of using its normal file transfer mechanism. This trigger uses Aspera for fast file transfer; Aspera uses UDP rather than TCP, and is typically much faster, especially over high-latency connections.</p> </div> <div class="paragraph"> <p>Companion trigger/script to <a href="#_pull_sh">Section 9.6.23, “pull.sh”</a>.</p> </div> <div class="paragraph"> <p>It uses <code>fstat -Ob</code> with some filtering to generate a list of files to be copied. 
It creates a temp file with the filename pairs expected by <code>ascp</code>, and then performs the copy.</p> </div> <div class="paragraph"> <p>This configurable must be set:</p> </div> <div class="literalblock"> <div class="content"> <pre>rpl.submit.nocopy=1</pre> </div> </div> <div class="paragraph"> <p>The edge-content trigger looks like this:</p> </div> <div class="literalblock"> <div class="content"> <pre>EdgeSubmit edge-content //... "/p4/common/bin/triggers/ascpSubmit.sh %changelist%"</pre> </div> </div> <div class="paragraph"> <p>The <code>ascp</code> user needs to have SSH public keys set up, or must export <code>ASPERA_SCP_PASS</code>. The <code>ascp</code> user should be set up with the transfer target as <code>/</code>, with full write access to the volume where the depot files are located. The easiest way to do that is to use the same user that is running the p4d service.</p> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> Ensure <code>ascp</code> is correctly configured and working in your environment: <a href="https://www-01.ibm.com/support/docview.wss?uid=ibm10747281" class="bare">https://www-01.ibm.com/support/docview.wss?uid=ibm10747281</a> (search for "ascp connectivity testing") </td> </tr> </table> </div> <div class="paragraph"> <p>A standard SDP environment is assumed, e.g. P4USER, P4PORT, OSUSER, P4BIN, etc. are set, PATH is appropriate, and a super user is logged in with a non-expiring ticket.</p> </div> <div class="paragraph"> <p>See the test version of this script below: <a href="#_submit_test_sh">Section 9.6.32, “submit_test.sh”</a></p> </div> <div class="paragraph"> <p>See the <code>/p4/common/bin/triggers/submit.sh</code> script for details and to customize for your environment.</p> </div> </div> <div class="sect3"> <h4 id="_submit_test_sh">9.6.32. 
submit_test.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/submit_test.sh</code> script is a test script.</p> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> THIS IS A TEST SCRIPT - it substitutes for <a href="#_submit_sh">Section 9.6.31, “submit.sh”</a> (which uses Aspera) and replaces <code>ascp</code> with the standard Linux <code>scp</code> utility. IT IS NOT INTENDED FOR PRODUCTION USE!!!! </td> </tr> </table> </div> <div class="paragraph"> <p>If you don’t have an Aspera license, then you can test with this script to understand the process.</p> </div> <div class="paragraph"> <p>See the <code>/p4/common/bin/triggers/submit_test.sh</code> script for details.</p> </div> <div class="paragraph"> <p>There is a demonstrator project showing usage: <a href="https://github.com/rcowham/p4d-edge-pull-demo" class="bare">https://github.com/rcowham/p4d-edge-pull-demo</a></p> </div> </div> <div class="sect3"> <h4 id="_sync_replica_sh">9.6.33. 
sync_replica.sh</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/sync_replica.sh</code> script is included in the standard crontab for a replica.</p> </div> <div class="paragraph"> <p>It runs <code>rsync</code> to mirror the <code>/p4/1/checkpoints</code> (assuming instance <code>1</code>) directory to the replica machine.</p> </div> <div class="paragraph"> <p>It then uses the latest checkpoint in that directory to update the local <code>offline_db</code> directory for the replica.</p> </div> <div class="paragraph"> <p>This ensures that the replica can be quickly and easily reseeded if required without having to first copy checkpoints locally (which can take hours over slow WAN links).</p> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/sync_replica.sh <instance>
/p4/common/bin/sync_replica.sh 1</pre> </div> </div> </div> <div class="sect3"> <h4 id="_templates_directory">9.6.34. templates directory</h4> <div class="paragraph"> <p>This sub-directory of <code>/p4/common/bin</code> contains some files which can be used as templates for new commands if you wish:</p> </div> <div class="ulist"> <ul> <li> <p>template.pl - Perl</p> </li> <li> <p>template.py - Python</p> </li> <li> <p>template.py.cfg - config file for Python</p> </li> <li> <p>template.sh - Bash</p> </li> </ul> </div> <div class="paragraph"> <p>They are not intended to be run directly.</p> </div> </div> <div class="sect3"> <h4 id="_update_limits_py">9.6.35. update_limits.py</h4> <div class="paragraph"> <p>The <code>/p4/common/bin/update_limits.py</code> script is a Python script which is intended to be called from a crontab entry once per hour. It must be wrapped with the <code>p4master_run</code> script.</p> </div> <div class="paragraph"> <p>It ensures that all current users are added to the <code>limits</code> group. This makes it easy for an administrator to configure global limits on values such as MaxScanRows, MaxSearchResults, etc. 
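</p> </div> <div class="paragraph"> <p>For illustration, a <code>limits</code> group spec managed this way might look something like the following. The fields are standard <code>p4 group</code> spec fields, but the specific limit values and user names shown are examples only, not recommendations:</p> </div> <div class="literalblock"> <div class="content"> <pre>Group:  limits
MaxResults:     50000
MaxScanRows:    250000
MaxLockTime:    30000
Users:
        jsmith
        adoe</pre> </div> </div> <div class="paragraph"> <p>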
This can reduce load on a heavily loaded instance.</p> </div> <div class="paragraph"> <p>For more information:</p> </div> <div class="ulist"> <ul> <li> <p><a href="https://portal.perforce.com/s/article/2529">Maximizing Perforce Helix Core Performance</a></p> </li> <li> <p><a href="https://portal.perforce.com/s/article/2521">Multiple MaxScanRows and similar values</a></p> </li> </ul> </div> <div class="literalblock"> <div class="title">Usage</div> <div class="content"> <pre>/p4/common/bin/update_limits.py <instance>
/p4/common/bin/update_limits.py 1</pre> </div> </div> </div> </div> </div> </div> <div class="sect1"> <h2 id="_sample_procedures">10. Sample Procedures</h2> <div class="sectionbody"> <div class="paragraph"> <p>This section describes sample procedures using the SDP tools described above, given certain scenarios.</p> </div> <div class="sect2"> <h3 id="_installing_python3_and_p4python">10.1. Installing Python3 and P4Python</h3> <div class="paragraph"> <p>Python3 and P4Python are useful for custom automation, including triggers.</p> </div> <div class="paragraph"> <p>Installing Python3 and P4Python is best done using packages. First, set up the machine to download packages from Perforce Software, following the guidance appropriate for your platform on the <a href="https://package.perforce.com">Perforce Packages</a> page.</p> </div> <div class="paragraph"> <p>Then install the Python3 and P4Python packages with the command appropriate for your operating system. 
For the RHEL/Rocky Linux family, use:</p> </div> <div class="literalblock"> <div class="content"> <pre>sudo yum install perforce-p4python3</pre> </div> </div> <div class="paragraph"> <p>For the Debian/Ubuntu family, use:</p> </div> <div class="literalblock"> <div class="content"> <pre>sudo apt update
sudo apt install perforce-p4python3</pre> </div> </div> <div class="paragraph"> <p>It is possible to have multiple versions of Python installed, such as Python 2.7 (the end of the Python 2 line) and various Python 3.x versions, and possibly multiple versions of either or both of Python 2 and Python 3. Whether having multiple versions is desirable or necessary depends on what software on the machine uses Python; that discussion is outside the scope of this document. However, being aware of this possibility is important for installing in various existing environments.</p> </div> <div class="paragraph"> <p>The behaviors of the <code>perforce-p4python3</code> package install vary slightly depending on what is already installed, and are optimized to avoid disrupting existing software.</p> </div> <div class="ulist"> <ul> <li> <p>If no prior version of Python 3 exists on the machine when the <code>perforce-p4python3</code> package is installed, then the newly installed Python 3 will be established as the default, such that calling <code>python3</code> (a symlink) will implicitly refer to the just-installed Python 3 version. <strong>The P4Python module will be available by calling python3</strong>.</p> </li> <li> <p>If Python 3.8 exists on the machine when the <code>perforce-p4python3</code> package is installed, P4Python will be added to the existing Python 3.8 install. 
<strong>The P4Python module will be available by calling python3</strong>.</p> </li> <li> <p>If there is already some other version of Python 3.x installed but not 3.8, such as Python 3.6, installing the <code>perforce-p4python3</code> package will add a new Python 3.8 installation, invoked by its versioned name (e.g. <code>python3.8</code>), but it will <strong>not</strong> adjust the existing <code>python3</code> symlink. <strong>The P4Python module will not be available by calling python3</strong>. You can at that point decide to manually adjust the <code>python3</code> symlink to point to <code>python3.8</code>, though this has some risk of breaking other things (such as custom triggers) that require the other version of Python 3, if it was actively used. Alternatively, you can adjust the shebang lines of specific scripts that use P4Python to refer to <code>python3.8</code> specifically rather than just <code>python3</code>. In any case, avoid using <code>python2</code> or just <code>python</code>, both of which by convention refer to Python 2.</p> </li> </ul> </div> </div> <div class="sect2"> <h3 id="_installing_checkcasetrigger_py">10.2. Installing CheckCaseTrigger.py</h3> <div class="paragraph"> <p>This trigger is very useful to prevent people from accidentally checking in files that differ only in case from an existing file (or directory) on a case-sensitive server.</p> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> This trigger requires <code>python3</code>, and must also have P4Python installed. See: <a href="#_installing_python3_and_p4python">Section 10.1, “Installing Python3 and P4Python”</a>. 
</td> </tr> </table> </div> <div class="paragraph"> <p>The trigger to install is part of the SDP but by default is in <code>/p4/sdp/Unsupported/Samples/triggers</code>.</p> </div> <div class="paragraph"> <p>To install:</p> </div> <div class="olist arabic"> <ol class="arabic"> <li> <p>Install P4Python. See: <a href="#_installing_python3_and_p4python">Section 10.1, “Installing Python3 and P4Python”</a>.</p> </li> <li> <p>Copy the trigger and its dependencies to the appropriate directory:</p> <div class="literalblock"> <div class="content"> <pre>mkdir -p /p4/common/site/bin/triggers
cp /p4/sdp/Unsupported/Samples/triggers/CheckCaseTrigger.py /p4/common/site/bin/triggers/
cp /p4/sdp/Unsupported/Samples/triggers/P4Trigger.py /p4/common/site/bin/triggers/</pre> </div> </div> </li> <li> <p>Edit the <code>shebang</code> line (the first line) of the trigger if necessary, e.g. change it to:</p> <div class="literalblock"> <div class="content"> <pre>#!/bin/env python3</pre> </div> </div> </li> </ol> </div> <div class="paragraph"> <p>Usually <code>python3</code> is appropriate.</p> </div> <div class="olist arabic"> <ol class="arabic"> <li> <p>Test on an existing (small) changelist:</p> <div class="literalblock"> <div class="content"> <pre>p4 changes -s submitted -m 9</pre> </div> </div> <div class="paragraph"> <p>Pick a suitable changelist number, e.g. 1234:</p> </div> <div class="literalblock"> <div class="content"> <pre>/p4/common/site/bin/triggers/CheckCaseTrigger.py 1234</pre> </div> </div> </li> <li> <p>Test that it works:</p> <div class="olist loweralpha"> <ol class="loweralpha" type="a"> <li> <p>Add the appropriate line to the triggers table:</p> <div class="literalblock"> <div class="content"> <pre>CheckCaseTrigger submit-change //test/... 
"/p4/common/site/bin/triggers/CheckCaseTrigger.py %changelist%"</pre> </div> </div> </li> <li> <p>Create a test workspace</p> </li> <li> <p>Submit a simple <code>Test.txt</code></p> </li> <li> <p>Attempt to submit <code>test.txt</code> and check for the expected error</p> </li> </ol> </div> </li> <li> <p>Change the triggers table entry to the production version/path:</p> <div class="literalblock"> <div class="content"> <pre>CheckCaseTrigger submit-change //... "/p4/common/site/bin/triggers/CheckCaseTrigger.py %changelist%"</pre> </div> </div> </li> </ol> </div> </div> <div class="sect2"> <h3 id="_swarm_jira_link">10.3. Swarm JIRA Link</h3> <div class="paragraph"> <p>Here is an example of linking to cloud JIRA in <code>config.php</code>:</p> </div> <div class="literalblock"> <div class="content"> <pre>'jira' => array(
    'host' => 'https://example.atlassian.net/',
    'user' => 'p4jira@example.com',
    'password' => '<API-Token>',
    'link_to_jobs' => 'true',
),</pre> </div> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> There is no need to get complicated with .pem files or an 'http_client_options' section. Just specify the <code>https://</code> prefix as above. 
</td> </tr> </table> </div> <div class="paragraph"> <p>Log in to the user account on the Atlassian URL as above, and then create an API token by going to this URL:</p> </div> <div class="paragraph"> <p><a href="https://id.atlassian.com/manage-profile/security/api-tokens" class="bare">https://id.atlassian.com/manage-profile/security/api-tokens</a></p> </div> <div class="paragraph"> <p>This curl request tests the API:</p> </div> <div class="literalblock"> <div class="content"> <pre>curl https://example.atlassian.net/rest/api/latest/project --user p4jira@example.com:<API-TOKEN></pre> </div> </div> <div class="paragraph"> <p>The above should list all active projects:</p> </div> <div class="listingblock"> <div class="title">Example JSON response</div> <div class="content"> <pre class="highlight"><code class="language-json" data-lang="json">{"expand":"description,lead,issueTypes,url,projectKeys,permissions,insight","self":"https://example.atlassian.net/rest/api/2/project/11904","id":"11904","key":"ULG","name":"Ultimate Game"}</code></pre> </div> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> Check that the provided JIRA account has access to all required projects to be linked (and that it isn’t missing some)! See below. </td> </tr> </table> </div> <div class="listingblock"> <div class="title">Example list of projects accessible to JIRA account</div> <div class="content"> <pre class="highlight"><code class="language-shell" data-lang="shell">$ curl --user 'p4jira@example.com:<API-TOKEN>' https://example.atlassian.net/rest/api/latest/project | jq > projects.txt
$ egrep "name|key" projects.txt
"key": "PRJA",
"name": "Project A",
"key": "PRJB",
"name": "Project B",</code></pre> </div> </div> </div> <div class="sect2"> <h3 id="_reseeding_an_edge_server">10.4. 
Reseeding an Edge Server</h3> <div class="paragraph"> <p>Perforce Helix Edge Servers are a form of replica that replicates "persistent history" data such as submitted changelists from the master server, while maintaining local databases for "work-in-progress" data, to include user workspaces, lists of files checked out in user workspaces, etc. This separation of persistent and work-in-progress data has significant benefits that make edge servers perform optimally for certain use cases.</p> </div> <div class="paragraph"> <p>When a new edge server is deployed for the first time, it is "seeded" with a special seed checkpoint from the master server. This is done using the SDP <code>edge_dump.sh</code> script.</p> </div> <div class="paragraph"> <p>Edge servers need to be reseeded in certain circumstances. When an edge server is reseeded, the latest persistent history from the master server is combined with the latest work-in-progress data from the edge server.</p> </div> <div class="paragraph"> <p>Some occasions that require reseeding include:</p> </div> <div class="ulist"> <ul> <li> <p>When changing the scope of replication filtering, i.e. if the <code>*DataFilter</code> fields of the server spec are changed.</p> </li> <li> <p>In some recovery situations involving hardware or other infrastructure failure.</p> </li> <li> <p>When advised by Perforce Support.</p> </li> </ul> </div> <div class="paragraph"> <p>An article <a href="https://portal.perforce.com/s/article/12127">Edge Server Metadata Recovery</a> discusses the manual process in detail. The process outlined in this article is implemented in the SDP with two scripts, <code>edge_dump.sh</code> and <code>recover_edge.sh</code>.</p> </div> <div class="paragraph"> <p>Key aspects of this implementation:</p> </div> <div class="ulist"> <ul> <li> <p>No downtime is required for the master server process.</p> </li> <li> <p>Downtime for the edge to be reseeded is required. 
This is kept to a minimum.</p> </li> </ul> </div> </div> <div class="sect2"> <h3 id="_edge_reseed_scenario">10.5. Edge Reseed Scenario</h3> <div class="paragraph"> <p>In this sample scenario, an edge server needs to be reseeded.</p> </div> <div class="paragraph"> <p>Sample details about this scenario:</p> </div> <div class="ulist"> <ul> <li> <p>The SDP instance is <code>1</code>.</p> </li> <li> <p>The <code>perforce</code> operating system user runs the p4d process on all machines.</p> </li> <li> <p>The <code>perforce</code> user’s <code>~/.bashrc</code> ensures that the shell environment is set automatically on login, by doing: <code>source /p4/common/bin/p4_vars 1</code></p> </li> <li> <p>The master server has a ServerID of <code>master.1</code> and runs on the machine <code>bos-helix-01</code>.</p> </li> <li> <p>The edge server has a ServerID of <code>p4d_edge_syd</code> and runs on the machine <code>syd-helix-04</code>.</p> </li> <li> <p>Both the master and edge server are online and actively in use at the start of processing.</p> </li> <li> <p>Users of the edge server to be reseeded have been notified about a planned outage.</p> </li> <li> <p>No outage is planned or necessary for the master server.</p> </li> <li> <p>SSH keys are set up for the <code>perforce</code> user.</p> </li> </ul> </div> <div class="sect3"> <h4 id="_step_0_preflight_checks">10.5.1. Step 0: Preflight Checks</h4> <div class="paragraph"> <p>Make sure the start state is healthy.</p> </div> <div class="paragraph"> <p>As <code>perforce@bos-helix-01</code> (the master):</p> </div> <div class="literalblock"> <div class="content"> <pre>verify_sdp.sh 1 -online</pre> </div> </div> <div class="paragraph"> <p>As <code>perforce@syd-helix-04</code> (the edge):</p> </div> <div class="literalblock"> <div class="content"> <pre>verify_sdp.sh 1</pre> </div> </div> </div> <div class="sect3"> <h4 id="_step_1_create_new_edge_seed_checkpoint">10.5.2. 
Step 1: Create New Edge Seed Checkpoint</h4> <div class="paragraph"> <p>On the master server, create a new edge seed checkpoint using <code>edge_dump.sh</code>. This will contain recent persistent history from the master.</p> </div> <div class="paragraph"> <p>This process uses the <code>offline_db</code> rather than P4ROOT, so no downtime is needed.</p> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> Creating an edge seed requires that the <code>offline_db</code> directory not be interfered with. The <code>daily_checkpoint.sh</code> script runs in the crontab of the <code>perforce</code> user on the master, and that script must not be run when <code>edge_dump.sh</code> runs. Ensure that <code>edge_dump.sh</code> is run at a time when it won’t conflict with the operation of <code>daily_checkpoint.sh</code>. If checkpoints take many hours, consider disabling the crontab for <code>daily_checkpoint.sh</code> by commenting it out of the crontab until <code>edge_dump.sh</code> completes — but don’t forget to re-enable it afterward! 
</td> </tr> </table> </div> <div class="paragraph"> <p>Create the edge seed like so, as <code>perforce@bos-helix-01</code> (the master):</p> </div> <div class="literalblock"> <div class="content"> <pre>nohup /p4/common/bin/p4master_run 1 edge_dump.sh 1 p4d_edge_syd < /dev/null > /p4/1/logs/dump.log 2>&1 &</pre> </div> </div> <div class="paragraph"> <p>Then monitor until completion with:</p> </div> <div class="literalblock"> <div class="content"> <pre>tail -f $(ls -t $LOGS/edge_dump.*.log | head -1)</pre> </div> </div> <div class="paragraph"> <p>The edge seed will appear as a pair of files looking something like:</p> </div> <div class="literalblock"> <div class="content"> <pre>/p4/1/checkpoints/p4_1.edge_syd.seed.2035.gz
/p4/1/checkpoints/p4_1.edge_syd.seed.2035.gz.md5</pre> </div> </div> <div class="paragraph"> <p>When the <code>.md5</code> file appears, the edge seed checkpoint is complete.</p> </div> <div class="paragraph"> <p>Notes:</p> </div> <div class="ulist"> <ul> <li> <p>The <code>nohup</code> at the beginning of the command and the <code>&</code> at the end ensure this process will continue to run even if the terminal window in which the command was executed disconnects.</p> </li> </ul> </div> </div> <div class="sect3"> <h4 id="_step_2_transfer_edge_seed">10.5.3. Step 2: Transfer Edge Seed</h4> <div class="paragraph"> <p>Transfer the edge seed from the master to the edge like so, as <code>perforce@bos-helix-01</code> (the master):</p> </div> <div class="literalblock"> <div class="content"> <pre>scp -p /p4/1/checkpoints/p4_1.edge_syd.seed.2035.gz syd-helix-04:/p4/1/checkpoints/.
scp -p /p4/1/checkpoints/p4_1.edge_syd.seed.2035.gz.md5 syd-helix-04:/p4/1/checkpoints/.</pre> </div> </div> </div> <div class="sect3"> <h4 id="_step_3_reseed_the_edge">10.5.4. Step 3: Reseed the Edge</h4> <div class="paragraph"> <p>Reseed the edge. 
As <code>perforce@syd-helix-04</code> (the edge):</p> </div> <div class="literalblock"> <div class="content"> <pre>nohup /p4/common/bin/run_if_edge.sh 1 recover_edge.sh 1 /p4/1/checkpoints/p4_1.edge_syd.seed.2035.gz < /dev/null > /p4/1/logs/rec.log 2>&1 &</pre> </div> </div> <div class="paragraph"> <p>Notes:</p> </div> <div class="ulist"> <ul> <li> <p>The <code>offline_db</code> of the edge server is removed at the start of processing, but is replaced at the end.</p> </li> <li> <p>It is safe for the p4d process of the edge server to be up and running when this process starts. If it is up at the start of processing, it will be shut down by <code>recover_edge.sh</code>, but not immediately. The script allows the p4d service to remain in use while the edge seed checkpoint from the master is replayed into the <code>offline_db</code>.</p> </li> <li> <p>After the edge seed checkpoint has been replayed, the p4d service is shut down, and then the process of combining persistent and work-in-progress data commences; this is the essence of the reseed operation.</p> </li> <li> <p>After the edge reseed is complete, the p4d process is started. It will then start replicating new data from the master since the time of the edge seed checkpoint creation. The p4d service may hang and be unresponsive for several minutes after it is started. If you choose to monitor closely, when a <code>p4 pull -lj</code> on the edge indicates it has caught up to the master, the service is safe to use again.</p> </li> <li> <p>The <code>recover_edge.sh</code> script continues to run after the service is back online, as it rebuilds the <code>offline_db</code> of the edge server.</p> </li> <li> <p>On the edge server, the edge server’s regular checkpoints land in <code>/p4/1/checkpoints.edge_syd</code>. The <code>/p4/1/checkpoints</code> folder is used only for holding edge seed checkpoints transferred from the master.</p> </li> <li> <p>Typically, all steps described in the process are done on the same day. 
However, it is OK if the <code>edge_dump.sh</code> run, the seed checkpoint transfer, and the <code>recover_edge.sh</code> run are separated by some time lag, typically measured in journal rotations or simply days. The lag adds incrementally to the duration of the recovery step, and is acceptable so long as the edge seed is not so far behind that the master no longer has the numbered journals needed to feed the edge once it starts.</p> </li> </ul> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> Reseeding requires that the <code>offline_db</code> directory not be interfered with. The <code>daily_checkpoint.sh</code> script runs in the crontab of the <code>perforce</code> user on the edge server, and that script must not be run when <code>recover_edge.sh</code> runs. Ensure that <code>recover_edge.sh</code> is run at a time when it won’t conflict with the operation of <code>daily_checkpoint.sh</code>. If checkpoints take many hours, consider disabling the crontab for <code>daily_checkpoint.sh</code> by commenting it out of the crontab until <code>recover_edge.sh</code> completes — but don’t forget to re-enable it afterward! </td> </tr> </table> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> This sample procedure does not illustrate using a p4broker service to broadcast a "Down for maintenance" message on the edge server. If your SDP installation uses p4brokers on p4d server machines, they can be used to prevent regular users from attempting to access the edge server during the processing of <code>recover_edge.sh</code>. This can help prevent users from experiencing a hang, for example, in the time after the edge p4d process starts but before it catches up to the master.
</td> </tr> </table> </div> </div> </div> </div> </div> <div class="sect1"> <h2 id="_sdp_package_contents_and_planning">Appendix A: SDP Package Contents and Planning</h2> <div class="sectionbody"> <div class="paragraph"> <p>The directory structure of the SDP is shown below in Figure 1 - SDP Package Directory Structure. This includes all SDP files, including documentation and sample scripts. A subset of these files is deployed to server machines during the installation process.</p> </div> <div class="literalblock"> <div class="content"> <pre>sdp
  doc
  Server (Core SDP Files)
    Unix
      setup (Unix-specific setup)
      p4
        common
          bin (Backup scripts, etc)
            triggers (Example triggers)
          config
          etc
            cron.d
            init.d
            systemd
          lib
          test
  setup (cross platform setup - typemap, configure, etc)
  test (automated test scripts)</pre> </div> </div> <div class="paragraph"> <p>Figure 1 - SDP Package Directory Structure</p> </div> <div class="sect2"> <h3 id="_volume_layout_and_server_planning">A.1. Volume Layout and Server Planning</h3> <div class="paragraph"> <p>Figure 2: SDP Runtime Structure and Volume Layout, viewed from the top down, displays a Perforce <em>application</em> administrator’s view of the system, which shows how to navigate the directory structure to find databases, log files, and versioned files in the depots. Viewed from the bottom up, it displays a Perforce <em>system</em> administrator’s view, emphasizing the physical volume where Perforce data is stored.</p> </div> <div class="sect3"> <h4 id="_memory_and_cpu">A.1.1. Memory and CPU</h4> <div class="paragraph"> <p>Make sure the server has enough memory to cache the <strong>db.rev</strong> database file and to prevent the server from paging during user queries. Maximum performance is obtained if the server has enough memory to keep all of the database files in memory.
While the p4d process itself is frugal with system resources such as RAM, it benefits from an excess of RAM due to modern operating systems using excess RAM as file I/O cache. This is to the great benefit of p4d, even though the p4d process itself may not be seen as consuming much RAM directly.</p> </div> <div class="paragraph"> <p><strong>Below are some approximate guidelines for allocating memory.</strong></p> </div> <div class="ulist"> <ul> <li> <p>1.5 kilobytes of RAM per file revision stored in the server.</p> </li> <li> <p>32 MB of RAM per user.</p> </li> </ul> </div> <div class="paragraph"> <p>INFO: When doing detailed history imports from legacy SCM systems into Perforce, there may be many revisions of files. You want to account for <code>(total files) x (average number of revisions per file)</code> rather than simply the total number of files.</p> </div> <div class="paragraph"> <p>Use the fastest processors available with the fastest available bus speed. Faster processors are typically more desirable than a greater number of cores, since quick bursts of computational speed are more important to Perforce’s performance than the number of processors. Have a minimum of two processors so that the offline checkpoint and backup processes do not interfere with your Perforce server. There are log analysis options to diagnose underperforming servers and identify improvements. Contact Perforce Support/Perforce Consulting for details.</p> </div> </div> <div class="sect3"> <h4 id="_directory_structure_configuration_script_for_linuxunix">A.1.2. Directory Structure Configuration Script for Linux/Unix</h4> <div class="paragraph"> <p>This section describes the steps performed by the mkdirs.sh script on Linux/Unix platforms. Please review this appendix carefully before running these steps manually. Assuming the three-volume configuration described in the Volume Layout and Hardware section is used, the following directories are created.
The following examples are illustrated with "1" as the server instance number.</p> </div> <table class="tableblock frame-all grid-all stretch"> <colgroup> <col style="width: 50%;"> <col style="width: 50%;"> </colgroup> <thead> <tr> <th class="tableblock halign-left valign-top"><em>Directory</em></th> <th class="tableblock halign-left valign-top"><em>Remarks</em></th> </tr> </thead> <tbody> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/p4</code></p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Must be under root (<code>/</code>) on the OS volume</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/hxdepots/p4/1/bin</code></p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Files in here are generated by the mkdirs.sh script.</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/hxdepots/p4/1/depots</code></p></td> <td class="tableblock halign-left valign-top"></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/hxdepots/p4/1/tmp</code></p></td> <td class="tableblock halign-left valign-top"></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/hxdepots/p4/common/config</code></p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Contains p4_<instance>.vars file, e.g. 
<code>p4_1.vars</code></p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/hxdepots/p4/common/bin</code></p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Files from <code>$SDP/Server/Unix/p4/common/bin</code>.</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/hxdepots/p4/common/etc</code></p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Contains <code>init.d</code> and <code>cron.d</code>.</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/hxlogs/p4/1/logs/old</code></p></td> <td class="tableblock halign-left valign-top"></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/hxmetadata2/p4/1/db2</code></p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Contains offline copy of main server databases (linked to by <code>/p4/1/offline_db</code>).</p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/hxmetadata1/p4/1/db1/save</code></p></td> <td class="tableblock halign-left valign-top"><p class="tableblock">Used only during running of <code>refresh_P4ROOT_from_offline_db.sh</code> for extra redundancy.</p></td> </tr> </tbody> </table> <div class="paragraph"> <p>Next, <code>mkdirs.sh</code> creates the following symlinks in the <code>/hxdepots/p4/1</code> directory:</p> </div> <table class="tableblock frame-all grid-all stretch"> <colgroup> <col style="width: 33.3333%;"> <col style="width: 33.3333%;"> <col style="width: 33.3334%;"> </colgroup> <thead> <tr> <th class="tableblock halign-left valign-top"><strong><em>Link source</em></strong></th> <th class="tableblock halign-left valign-top"><strong><em>Link target</em></strong></th> <th class="tableblock halign-left valign-top"><strong><em>Command</em></strong></th> </tr> </thead> <tbody> <tr> <td class="tableblock halign-left
valign-top"><p class="tableblock"><code>/hxmetadata1/p4/1/db1</code></p></td> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/p4/1/root</code></p></td> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>ln -s /hxmetadata1/p4/1/db1 /p4/1/root</code></p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/hxmetadata2/p4/1/db2</code></p></td> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/p4/1/offline_db</code></p></td> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>ln -s /hxmetadata2/p4/1/db2 /p4/1/offline_db</code></p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/hxlogs/p4/1/logs</code></p></td> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/p4/1/logs</code></p></td> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>ln -s /hxlogs/p4/1/logs /p4/1/logs</code></p></td> </tr> </tbody> </table> <div class="paragraph"> <p>Then these symlinks are created in the /p4 directory:</p> </div> <table class="tableblock frame-all grid-all stretch"> <colgroup> <col style="width: 33.3333%;"> <col style="width: 33.3333%;"> <col style="width: 33.3334%;"> </colgroup> <thead> <tr> <th class="tableblock halign-left valign-top"><strong><em>Link source</em></strong></th> <th class="tableblock halign-left valign-top"><strong><em>Link target</em></strong></th> <th class="tableblock halign-left valign-top"><strong><em>Command</em></strong></th> </tr> </thead> <tbody> <tr> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/hxdepots/p4/1</code></p></td> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/p4/1</code></p></td> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>ln -s /hxdepots/p4/1 /p4/1</code></p></td> </tr> <tr> <td class="tableblock halign-left valign-top"><p
class="tableblock"><code>/hxdepots/p4/common</code></p></td> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>/p4/common</code></p></td> <td class="tableblock halign-left valign-top"><p class="tableblock"><code>ln -s /hxdepots/p4/common /p4/common</code></p></td> </tr> </tbody> </table> <div class="paragraph"> <p>Next, <code>mkdirs.sh</code> renames the Perforce binaries to include version and build number, and then creates appropriate symlinks.</p> </div> </div> <div class="sect3"> <h4 id="_p4d_versions_and_links">A.1.3. P4D versions and links</h4> <div class="paragraph"> <p>The versioned binary links in <code>/p4/common/bin</code> are as below.</p> </div> <div class="paragraph"> <p>For the example of instance <code>1</code> we have:</p> </div> <div class="literalblock"> <div class="content"> <pre>ls -l /p4/1/bin
p4d_1 -> /p4/common/bin/p4d_1_bin</pre> </div> </div> <div class="paragraph"> <p>The structure is shown in this example, illustrating values for two instances, with instance #1 using p4d release 2018.1 and instance #2 using release 2018.2.</p> </div> <div class="paragraph"> <p>In /p4/1/bin:</p> </div> <div class="literalblock"> <div class="content"> <pre>p4_1 -> /p4/common/bin/p4_1_bin
p4d_1 -> /p4/common/bin/p4d_1_bin</pre> </div> </div> <div class="paragraph"> <p>In /p4/2/bin:</p> </div> <div class="literalblock"> <div class="content"> <pre>p4_2 -> /p4/common/bin/p4_2_bin
p4d_2 -> /p4/common/bin/p4d_2_bin</pre> </div> </div> <div class="paragraph"> <p>In <code>/p4/common/bin</code>:</p> </div> <div class="literalblock"> <div class="content"> <pre>p4_1_bin -> p4_2018.1_bin
p4_2018.1_bin -> p4_2018.1.685046
p4_2018.1.685046</pre> </div> </div> <div class="literalblock"> <div class="content"> <pre>p4_2_bin -> p4_2018.2_bin
p4_2018.2_bin -> p4_2018.2.700949
p4_2018.2.700949</pre> </div> </div> <div class="literalblock"> <div class="content"> <pre>p4d_1_bin -> p4d_2018.1_bin
p4d_2018.1_bin -> p4d_2018.1.685046
p4d_2018.1.685046</pre>
</div> </div> <div class="literalblock"> <div class="content"> <pre>p4d_2_bin -> p4d_2018.2_bin
p4d_2018.2_bin -> p4d_2018.2.700949
p4d_2018.2.700949</pre> </div> </div> <div class="paragraph"> <p>The naming of the last link in each chain comes from the version string reported by the binary itself:</p> </div> <div class="literalblock"> <div class="content"> <pre>./p4d_2018.2.700949 -V</pre> </div> </div> <div class="literalblock"> <div class="content"> <pre>Rev. P4D/LINUX26X86_64/2018.2/700949 (2019/07/31).</pre> </div> </div> <div class="paragraph"> <p>So we see the build number (<code>700949</code>) included in the name of the p4d executable <code>p4d_2018.2.700949</code>.</p> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> Although this link structure may appear quite complex, it is easy to understand, and it allows different instances on the same server host to be running with different patch levels, or indeed different releases. You can also upgrade those instances independently of each other, which can be very useful. </td> </tr> </table> </div> </div> <div class="sect3"> <h4 id="_case_insensitive_p4d_on_unix">A.1.4. Case Insensitive P4D on Unix</h4> <div class="paragraph"> <p>By default <code>p4d</code> is case sensitive on Unix for filenames, directory names, etc.</p> </div> <div class="paragraph"> <p>It is possible and quite common to run your server in case insensitive mode. This is often done when Windows is the main operating system in use on the client host machines.</p> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> In "case insensitive" mode, you should ALWAYS execute <code>p4d</code> with the <code>-C1</code> flag (or you risk possible table corruption in some circumstances).
</td> </tr> </table> </div> <div class="paragraph"> <p>The SDP achieves this by executing a simple Bash script which (for instance <code>1</code>) is <code>/p4/1/bin/p4d_1</code> with contents:</p> </div> <div class="literalblock"> <div class="content"> <pre>#!/bin/bash
P4D="/p4/common/bin/p4d_1_bin"
exec $P4D -C1 "$@"</pre> </div> </div> <div class="paragraph"> <p>So the above will ensure that <code>/p4/common/bin/p4d_1_bin</code> (for instance <code>1</code>) is executed with the <code>-C1</code> flag.</p> </div> <div class="paragraph"> <p>As noted above, for case sensitive servers, <code>p4d_1</code> is normally just a link:</p> </div> <div class="literalblock"> <div class="content"> <pre>/p4/1/bin/p4d_1 -> /p4/common/bin/p4d_1_bin</pre> </div> </div> <div class="paragraph"> <p>Note for an instance <code>alpha</code> (not <code>1</code>), the file would be <code>/p4/alpha/bin/p4d_alpha</code> with contents:</p> </div> <div class="literalblock"> <div class="content"> <pre>#!/bin/bash
P4D="/p4/common/bin/p4d_alpha_bin"
exec $P4D -C1 "$@"</pre> </div> </div> </div> </div> </div> </div> <div class="sect1"> <h2 id="_the_journalprefix_standard">Appendix B: The journalPrefix Standard</h2> <div class="sectionbody"> <div class="paragraph"> <p>The Perforce Helix configurable <a href="https://www.perforce.com/manuals/cmdref/Content/CmdRef/configurables.configurables.html#journalPrefix"><code>journalPrefix</code></a> determines where the active journal is rotated to when it becomes a numbered journal file during the journal rotation process. It also defines where checkpoints are created.</p> </div> <div class="paragraph"> <p>In the SDP structure, the <code>journalPrefix</code> is set so that numbered journals and checkpoints land on the <code>/hxdepots</code> volume.
This volume contains critical digital assets that should be reliably backed up and should have sufficient storage for large digital assets such as checkpoints.</p> </div> <div class="sect2"> <h3 id="_sdp_scripts_that_set_journalprefix">B.1. SDP Scripts that set <code>journalPrefix</code></h3> <div class="paragraph"> <p>The SDP <code>configure_new_server.sh</code>, which applies SDP standards to fresh new <code>p4d</code> servers, sets the <code>journalPrefix</code> for the master server according to this standard.</p> </div> <div class="paragraph"> <p>The SDP <code>mkrep.sh</code> script, which creates new replicas, sets <code>journalPrefix</code> for replicas according to this standard.</p> </div> <div class="paragraph"> <p>The SDP <code>mkdirs.sh</code> script, which initializes the SDP structure, creates a directory structure for checkpoints based on the <code>journalPrefix</code>.</p> </div> </div> <div class="sect2"> <h3 id="_first_form_of_journalprefix_value">B.2. First Form of <code>journalPrefix</code> Value</h3> <div class="paragraph"> <p>The first form of the <code>journalPrefix</code> value applies to the master server’s metadata set. This value is of this form, where <code>N</code> is replaced with the SDP instance name:</p> </div> <div class="literalblock"> <div class="content"> <pre>/p4/N/checkpoints/p4_N</pre> </div> </div> <div class="paragraph"> <p>If the SDP instance name is the default <code>1</code>, then files with a <code>p4_1</code> prefix would be stored in the <code>/p4/1/checkpoints</code> directory on the filesystem. Journal files in that directory would have names like <code>p4_1.jnl.320</code> and checkpoints would have names like <code>p4_1.ckp.320.gz</code>.</p> </div> <div class="paragraph"> <p>This <code>journalPrefix</code> value and the corresponding <code>/p4/1/checkpoints</code> directory should be used for the master server. It should also be used for any replica that is a valid failover target for the master server.
This includes all <em>completely unfiltered</em> replicas of the master, such as <code>standby</code> and <code>forwarding-standby</code> replicas with a <code>P4TARGET</code> value referencing the master server.</p> </div> <div class="admonitionblock note"> <table> <tr> <td class="icon"> <i class="fa icon-note" title="Note"></i> </td> <td class="content"> A <code>standby</code> replica, also referred to as a <code>journalcopy</code> replica due to the underlying replication mechanisms, cannot be filtered. Standby replicas are commonly deployed for High Availability (HA) and Disaster Recovery (DR) purposes. </td> </tr> </table> </div> <div class="sect3"> <h4 id="_detail_on_completely_unfiltered">B.2.1. Detail on "Completely Unfiltered"</h4> <div class="paragraph"> <p>A "completely unfiltered" replica is one in which:</p> </div> <div class="ulist"> <ul> <li> <p>None of the <code>*DataFilter</code> fields in the replica’s server spec are used</p> </li> <li> <p>The <code>p4 pull</code> command configured to pull metadata from the replica’s <code>P4TARGET</code> server, as defined in the replica’s <code>startup.<em>N</em></code> configurable, does not use filtering options such as <code>-T</code>.</p> </li> <li> <p>The replica is not an Edge server (i.e. one with a <code>Services</code> value in the server spec of <code>edge-server</code>). Edge servers are filtered by their very nature, as they exclude various database tables from being replicated.</p> </li> <li> <p>The replica’s seed checkpoint was created without the <code>-P <em>ServerID</em></code> flag to <code>p4d</code>. The <code>-P</code> flag is used when creating seed checkpoints for filtered replicas and edge servers.</p> </li> <li> <p>The replica’s <code>P4TARGET</code> server references the master server, rather than some other server such as an edge server.</p> </li> </ul> </div> </div> </div> <div class="sect2"> <h3 id="_second_form_of_journalprefix_value">B.3.
Second Form of <code>journalPrefix</code> Value</h3> <div class="paragraph"> <p>A second form of the <code>journalPrefix</code> is used when the replica is filtered, including edge servers. The second form of the <code>journalPrefix</code> value incorporates a shortened form of the <em>ServerID</em> to indicate that the data set is specific to that <em>ServerID</em>. Because the metadata differs from the master, checkpoints for edge servers and filtered replicas are stored in a different directory, and use a prefix that identifies them as separate and divergent from the master’s data set. This second form allows checkpoints from multiple edge servers or filtered replicas to be stored on a shared (e.g. NFS-mounted) <code>/hxdepots</code> volume.</p> </div> <div class="paragraph"> <p>The second form of <code>journalPrefix</code> is also used if the <code>/hxdepots</code> volume, on which checkpoints are stored, is shared (as indicated when the replica’s <code>lbr.replication</code> value is set to <code>shared</code>).</p> </div> <div class="admonitionblock note"> <table> <tr> <td class="icon"> <i class="fa icon-note" title="Note"></i> </td> <td class="content"> Filtered replicas contain a strict subset of the master server’s metadata. Edge servers filter some database tables from the master, but also have their own independent metadata (mainly workspace metadata) that varies from the master server and is potentially larger than the master’s data set for some tables. </td> </tr> </table> </div> <div class="paragraph"> <p>The "shortened form" of the <em>ServerID</em> removes the <code>p4d_</code> prefix (per <a href="#_server_spec_naming_standard">Appendix C, <em>Server Spec Naming Standard</em></a>).
So, for example, an edge server with a <em>ServerID</em> of <code>p4d_edge_uk</code> would use just the <code>edge_uk</code> portion of the <em>ServerID</em> in the <code>journalPrefix</code>, which would look like:</p> </div> <div class="literalblock"> <div class="content"> <pre>/p4/N/checkpoints.edge_uk/p4_N.edge_uk</pre> </div> </div> <div class="paragraph"> <p>If the SDP instance name is the default <code>1</code>, then files with a <code>p4_1.edge_uk</code> prefix would be stored in the <code>/p4/1/checkpoints.edge_uk</code> directory on the filesystem. Journal files in that directory would have names like <code>p4_1.edge_uk.jnl.320</code> and checkpoints would have names like <code>p4_1.edge_uk.ckp.320.gz</code>.</p> </div> </div> <div class="sect2"> <h3 id="_scripts_for_maintaining_the_offline_db">B.4. Scripts for Maintaining the <code>offline_db</code></h3> <div class="paragraph"> <p>The following SDP scripts help maintain the <code>offline_db</code>:</p> </div> <div class="ulist"> <ul> <li> <p><code>daily_checkpoint.sh</code>: The <code>daily_checkpoint.sh</code> is used on the master server. When run on the master server, this script rotates the active journal to a numbered journal file, and then maintains the master’s <code>offline_db</code> using the numbered journal file immediately after it is rotated.</p> </li> </ul> </div> <div class="paragraph"> <p>The <code>daily_checkpoint.sh</code> is also used on edge servers and filtered replicas. When run on edge servers and filtered replicas, this script maintains the replica’s <code>offline_db</code> in a manner similar to the master, except that the journal rotation is skipped (as that can be done only on the master).</p> </div> <div class="ulist"> <ul> <li> <p><code>sync_replica.sh</code>: The SDP <code>sync_replica.sh</code> script is intended to be deployed on unfiltered replicas of the master.
It maintains the <code>offline_db</code> by copying (via rsync) the checkpoints from the master and then replaying those checkpoints into the local <code>offline_db</code>. This keeps the <code>offline_db</code> of the replica current, which is good to have should the replica ever need to take over for the master.</p> </li> </ul> </div> <div class="paragraph"> <p>INFO: For HA/DR and any purpose where replicas are not filtered, replicas of type <code>standby</code> and <code>forwarding-standby</code> should displace replicas of type <code>replica</code> and <code>forwarding-replica</code>.</p> </div> </div> <div class="sect2"> <h3 id="_sdp_structure_and_journalprefix">B.5. SDP Structure and <code>journalPrefix</code></h3> <div class="paragraph"> <p>On every server machine with the SDP structure where a <code>p4d</code> service runs (excluding broker-only and proxy-only hosts), a structure like the following should exist for each instance:</p> </div> <div class="ulist"> <ul> <li> <p>A <code>/hxdepots/p4/N/checkpoints</code> directory</p> </li> <li> <p>In <code>/p4/N</code>, a symlink <code>checkpoints</code> that links to <code>/hxdepots/p4/N/checkpoints</code>, such that it can be referred to as <code>/p4/N/checkpoints</code>.</p> </li> </ul> </div> <div class="paragraph"> <p>In addition, edge servers and filtered replicas will also have a structure like the following for each instance that runs an edge server or filtered replica:</p> </div> <div class="ulist"> <ul> <li> <p>A <code>/hxdepots/p4/N/checkpoints.ShortServerID</code> directory</p> </li> <li> <p>In <code>/p4/N</code>, a symlink <code>checkpoints.ShortServerID</code> that links to <code>/hxdepots/p4/N/checkpoints.ShortServerID</code>, such that it can be referred to as <code>/p4/N/checkpoints.ShortServerID</code>.</p> </li> </ul> </div> <div class="paragraph"> <p>The SDP <code>mkdirs.sh</code> script, which sets up the initial SDP structure, initializes this structure on initial install.</p> </div> </div>
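<div class="paragraph"> <p>The structure described above can be sketched as a small shell snippet. This is a minimal illustration only: it uses a scratch directory in place of the real <code>/hxdepots</code> and <code>/p4</code> volume roots, and <code>edge_syd</code> is a hypothetical ShortServerID. In a real deployment, <code>mkdirs.sh</code> creates the equivalent structure for you.</p> </div>

```shell
#!/bin/bash
# Sketch of the SDP checkpoints layout for instance 1 plus a
# hypothetical edge with ShortServerID "edge_syd". A scratch
# directory stands in for the real filesystem root so this can
# run anywhere without privileges.
ROOT="$(mktemp -d)"

# Physical checkpoint directories live on the /hxdepots volume.
mkdir -p "$ROOT/hxdepots/p4/1/checkpoints"          # master's checkpoints
mkdir -p "$ROOT/hxdepots/p4/1/checkpoints.edge_syd" # edge-specific checkpoints
mkdir -p "$ROOT/p4/1"

# Symlinks make both directories reachable under /p4/1.
ln -s "$ROOT/hxdepots/p4/1/checkpoints" "$ROOT/p4/1/checkpoints"
ln -s "$ROOT/hxdepots/p4/1/checkpoints.edge_syd" "$ROOT/p4/1/checkpoints.edge_syd"

# Show the resulting links.
ls -l "$ROOT/p4/1"
```

<div class="paragraph"> <p>With this layout, scripts can always refer to <code>/p4/N/checkpoints</code> (or <code>/p4/N/checkpoints.ShortServerID</code>) regardless of which physical volume backs the storage.</p> </div>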
<div class="sect2"> <h3 id="_replicas_of_edge_servers">B.6. Replicas of Edge Servers</h3> <div class="paragraph"> <p>As edge servers have unique data, they are commonly deployed with their own <code>standby</code> replica with a <code>P4TARGET</code> value referencing a given edge server rather than the master. This enables a faster recovery option for the edge server.</p> </div> <div class="paragraph"> <p>As a special case, a <code>standby</code> replica of an edge server should have the same <code>journalPrefix</code> value as the edge server it targets. Thus, the <em>ServerID</em> baked into the <code>journalPrefix</code> of a replica of an edge is the ServerID of the target edge server, not the replica.</p> </div> <div class="paragraph"> <p>So for example, an edge server with a <em>ServerID</em> of <code>p4d_edge_uk</code> has a <code>standby</code> replica with a <em>ServerID</em> of <code>p4d_ha_edge_uk</code>. The <code>journalPrefix</code> of that standby replica should be the same as that of the edge server it targets, e.g.</p> </div> <div class="literalblock"> <div class="content"> <pre>/p4/1/checkpoints.edge_uk/p4_1.edge_uk</pre> </div> </div> </div> <div class="sect2"> <h3 id="_goals_of_the_journalprefix_standard">B.7.
Goals of the <code>journalPrefix</code> Standard</h3> <div class="paragraph"> <p>Some design goals of this standard:</p> </div> <div class="ulist"> <ul> <li> <p>Make it so the <code>/p4/N/checkpoints</code> folder is reserved to mean checkpoints created from the master server’s full metadata set.</p> </li> <li> <p>Make the <code>/p4/N/checkpoints</code> folder be safe to rsync from the master to any machine in the topology (as may be needed in certain recovery situations for replicas and edge servers).</p> </li> <li> <p>Make it so the SDP <code>/hxdepots</code> volume can be NFS-mounted across multiple SDP machines safely, such that two or more edge servers (or filtered replicas) could share versioned files, while writing to separate checkpoints directories on a per-ServerID basis.</p> </li> <li> <p>Support all replication use cases, including support for 'Workspace Servers', a name referring to a set of edge servers deployed in the same location, typically sharing <code>/hxdepots</code> via NFS. Workspace Servers can be used to scale Helix Core horizontally for massive user bases (typically several thousand users).</p> </li> </ul> </div> </div> </div> <div class="sect1"> <h2 id="_server_spec_naming_standard">Appendix C: Server Spec Naming Standard</h2> <div class="sectionbody"> <div class="paragraph"> <p>Perforce Helix server specs identify various Helix servers in a topology. Servers can be p4d servers (master, replicas, edges), p4broker, p4p, etc. This appendix defines the standard for server spec names.</p> </div> <div class="sect2"> <h3 id="_general_form">C.1.
General Form</h3> <div class="paragraph"> <p>The general form of a server spec name is:</p> </div> <div class="listingblock"> <div class="content"> <pre class="highlight"><code><HelixServerTag>_<ReplicaTypeTag>[<N>]_<SiteTag></code></pre> </div> </div> <div class="paragraph"> <p>or, for the singular commit server in a data set:</p> </div> <div class="listingblock"> <div class="content"> <pre class="highlight"><code>{commit|master}[.<SDPInstance>[.<OrgName>]]</code></pre> </div> </div> <div class="sect3"> <h4 id="_commit_server_spec">C.1.1. Commit Server Spec</h4> <div class="paragraph"> <p>The server spec name for a commit server starts with the literal token <code>commit</code> or <code>master</code>, followed by an optional SDP instance name (separated by a dot), followed by an optional organization tag name (separated by a dot).</p> </div> <div class="paragraph"> <p>The server spec name for a commit server is intended to be unique to enable certain cross-instance sharing workflows, e.g. using remote depots and Helix native DVCS features (e.g. <code>p4 fetch</code>, <code>p4 push</code>, etc.). The combination of <SDPInstance>.<OrgName> gives a reasonable assurance of uniqueness (without resorting to GUIDs, which are not suitable as names, since names are often typed by humans).</p> </div> <div class="paragraph"> <p>The <SDPInstance> and <OrgName> tags can be any alphanumeric name. Underscores (<code>_</code>) and dashes (<code>-</code>) are also allowed. Dots, spaces, and other special characters are not.</p> </div> <div class="paragraph"> <p>The <SDPInstance> name is typed often in various admin operational tasks, so:</p> </div> <div class="ulist"> <ul> <li> <p>Instance names are best kept short.
A length of 1-5 characters is recommended, with a maximum of 32 characters.</p> </li> <li> <p>Lowercase letters are preferred, and required at some sites, but not required by the SDP.</p> </li> </ul> </div> <div class="paragraph"> <p>The <OrgName> is not typed often and can be longer. A length of 2-10 characters is recommended, with a maximum of 32 characters.</p> </div> <div class="paragraph"> <p>See <a href="#_instance">Section 2.1.2, “Instance”</a> for more information on an SDP Instance.</p> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> The default <code>auth.id</code> configurable value is <code>p4_<SDPInstance>[.<OrgName>]</code>. The <code>auth.id</code> must also be unique across servers that do any cross-server communication using remote depots and/or Helix native DVCS features. </td> </tr> </table> </div> <div class="paragraph"> <p>Sample values for the commit server:</p> </div> <div class="ulist"> <ul> <li> <p><code>master</code> - Simple, but does not guarantee uniqueness.</p> </li> <li> <p><code>commit</code> - Simple, but does not guarantee uniqueness.</p> </li> <li> <p><code>master.1</code> - Commit server for SDP instance 1.</p> </li> <li> <p><code>commit.1</code> - Commit server for SDP instance 1.</p> </li> <li> <p><code>commit.fgs.ExampleCo</code> - Commit server for SDP instance <code>fgs</code> for the organization ExampleCo.</p> </li> </ul> </div> <div class="paragraph"> <p>Note that changing the server spec of a commit server can entail some work, as the <code>ReplicatingFrom:</code> field of any server specs that target the commit server would need to be updated. Also, changing the <code>auth.id</code> involves user impact and thus is best done with communication to users.</p> </div> </div> <div class="sect3"> <h4 id="_helix_server_tags">C.1.2.
Helix Server Tags</h4> <div class="paragraph"> <p>The <em>HelixServerTag</em> is one of:</p> </div> <div class="ulist"> <ul> <li> <p><code>p4d</code>: for a Helix Core server (including all <a href="https://www.perforce.com/perforce/doc.current/manuals/p4sag/Content/P4SAG/deployment-architecture.html">distributed architecture</a> usages such as master/replica/edge).</p> </li> <li> <p><code>p4broker</code>: A <a href="https://www.perforce.com/perforce/doc.current/manuals/p4sag/Content/P4SAG/chapter.broker.html">Helix Broker</a></p> </li> <li> <p><code>p4p</code>: A <a href="https://www.perforce.com/perforce/doc.current/manuals/p4sag/Content/P4SAG/chapter.proxy.html">Helix Proxy</a></p> </li> <li> <p><code>gconn</code>: Helix4Git (H4G) Connector</p> </li> <li> <p><code>swarm</code>: Helix Swarm</p> </li> </ul> </div> <div class="paragraph"> <p>As a special case, the <em>HelixServerTag</em> is omitted for the ServerID of the master server spec.</p> </div> </div> <div class="sect3"> <h4 id="_replica_type_tags">C.1.3. Replica Type Tags</h4> <div class="paragraph"> <p>The <em>ReplicaType</em> is one of:</p> </div> <div class="ulist"> <ul> <li> <p><code>commit</code> or <code>master</code>: The single master/commit server for a given SDP instance. SDP instance names are included in the ServerID for the master, as they are intended to be unique within an enterprise. They must be unique to enable certain cross-instance sharing workflows, e.g. using remote depots and Helix native DVCS features.</p> </li> <li> <p><code>ha</code>: High Availability. This indicates a replica that is specifically intended for HA purposes and for use with the <code>p4 failover</code> command.
It further implies the following:</p> <div class="ulist"> <ul> <li> <p>The Services field value is <code>standby</code>.</p> </li> <li> <p>The <code>rpl.journalcopy.location=1</code> configurable is set, optimized for SDP deployment.</p> </li> <li> <p>The replica is not filtered in any way: no usage of the <code>-T</code> flag to <code>p4 pull</code> in the replica’s startup.<em>N</em> configurables, and no usage of <code>*DataFilter</code> fields in the server spec.</p> </li> <li> <p>Versioned files are replicated (with an <code>lbr.replication</code> value of <code>readonly</code>).</p> </li> <li> <p>An HA replica is assumed to be geographically near its P4TARGET server, which can be a master server or an edge server.</p> </li> <li> <p>It may or may not use the <code>mandatory</code> option in the server spec. The <code>ha</code> tag does not indicate whether the <code>mandatory</code> option is used (as this is a more transient thing, not suitable for baking into a server spec naming standard).</p> </li> </ul> </div> </li> <li> <p><code>ham</code>: A <code>ham</code> replica is the same as an <code>ha</code> replica except that it does not replicate versioned files. Thus it is a <em>metadata-only</em> replica that shares versioned files with its P4TARGET server (master or edge) with an <code>lbr.replication</code> value of <code>shared</code>.</p> </li> <li> <p><code>fr</code>: Forwarding Replica (unfiltered) that replicates versioned files.</p> </li> <li> <p><code>frm</code>: Forwarding replica (unfiltered) that shares versioned files with its target server rather than replicating them.</p> </li> <li> <p><code>fs</code>: Forwarding Standby (unfiltered) that replicates versioned files. This is the same as an <code>ha</code> server, except that it is not necessarily expected to be physically near its P4TARGET server.
This makes it suitable for Disaster Recovery (DR) purposes.</p> </li> <li> <p><code>fsm</code>: Forwarding standby (unfiltered) that shares versioned files with its target server rather than replicating them. This is the same as a <code>ham</code>, except that it is not necessarily expected to be physically near its P4TARGET server.</p> </li> <li> <p><code>ffr</code>: Filtered Forwarding Replica. This replica uses some form of filtering, such as usage of <code>*DataFilter</code> fields of the server spec or the <code>-T</code> flag to <code>p4 pull</code> in the replica’s <code>startup.<N></code> configurables. Filtered replicas are not viable failover targets, as the filtered data would be lost.</p> </li> <li> <p><code>ro</code> - Read Only replica (unfiltered, replicating versioned files).</p> </li> <li> <p><code>rom</code> - Read Only metadata-only replica (unfiltered, sharing versioned files).</p> </li> <li> <p><code>edge</code> - Edge server. (As edge servers are filtered by their nature, they are not valid failover targets.)</p> </li> </ul> </div> <div class="sect4"> <h5 id="_replication_notes">C.1.3.1. Replication Notes</h5> <div class="paragraph"> <p>If a replica does not need to be filtered, we recommend using <code>journalcopy</code> replication, i.e. using a replica with a <code>Services:</code> field value of <code>standby</code> or <code>forwarding-standby</code>. Only use non-journalcopy replication for filtered replicas (and edge servers, where there is no choice).</p> </div> <div class="paragraph"> <p>Some general tips:</p> </div> <div class="ulist"> <ul> <li> <p>The <code>ha</code> and <code>ham</code> replica types are preferred for High Availability (HA) usage.</p> </li> <li> <p>The <code>fs</code> and <code>ro</code> replicas are preferred for Disaster Recovery (DR) usage.</p> </li> <li> <p>Since DR implies the replica is far from its master, replication of archives (rather than sharing e.g.
via NFS) may not be practical, and so <code>rom</code> replicas don’t have common use cases.</p> </li> <li> <p>The <code>fr</code> type replica is obsolete, and should be replaced with <code>fs</code> (using <code>journalcopy</code> replication).</p> </li> </ul> </div> </div> </div> <div class="sect3"> <h4 id="_site_tags">C.1.4. Site Tags</h4> <div class="paragraph"> <p>The site tag needs to distinguish the data centers used by a single enterprise, and so generally short tag names are appropriate. See <a href="#_sitetags_cfg">Section 6.3.4.1, “SiteTags.cfg”</a>.</p> </div> <div class="paragraph"> <p>Each site tag may be understood to be a true data center (Tier 1, Tier 2, etc.), a computer room, a computer closet, or reserved space under a developer’s desk. In some cases organizations will already have their own familiar site tags to refer to different sites or data centers; these can be used.</p> </div> <div class="paragraph"> <p>In public cloud deployments, the public cloud provider’s region names can be used (e.g. <code>us-east-1</code>), or an internal short form (e.g. <code>awsnva1</code> for the AWS us-east-1 data center in Northern Virginia, USA).</p> </div> <div class="paragraph"> <p>As a special case, the <code><SiteTag></code> is omitted for the master server spec.</p> </div> </div> </div> <div class="sect2"> <h3 id="_example_server_specs">C.2.
Example Server Specs</h3> <div class="paragraph"> <p>Here are some sample server spec names based on this convention:</p> </div> <div class="ulist"> <ul> <li> <p><code>master.1</code>: A master server for SDP instance 1.</p> </li> <li> <p><code>p4d_ha_chi</code>: A High Availability (HA) server, suitable for use with <code>p4 failover</code>, located in Chicago, IL.</p> </li> <li> <p><code>p4d_ha2_chi</code>: A second High Availability server, suitable for use with <code>p4 failover</code>, located in Chicago, IL.</p> </li> <li> <p><code>p4d_ffr_pune</code>: A filtered forwarding replica in Pune, India.</p> </li> <li> <p><code>p4d_edge_blr</code>: An edge server located in Bangalore, India.</p> </li> <li> <p><code>p4d_ha_edge_blr</code>: An HA server with P4TARGET pointing to the edge server in Bangalore, India.</p> </li> <li> <p><code>p4d_edge3_awsnva</code>: A third edge server in the AWS us-east-1 (Northern Virginia) region.</p> </li> </ul> </div> </div> <div class="sect2"> <h3 id="_implications_of_replication_filtering">C.3. Implications of Replication Filtering</h3> <div class="paragraph"> <p>Replicas that are filtered in any way are not viable candidate servers to fail over to, because any filtered data would be lost.</p> </div> </div> <div class="sect2"> <h3 id="_other_replica_types">C.4. Other Replica Types</h3> <div class="paragraph"> <p>The naming convention intentionally does not account for all possible server specs available with p4d. The standard accounts only for the distilled list of server spec types supported by the SDP <code>mkrep.sh</code> script, which are the most useful and commonly used ones.</p> </div> </div> <div class="sect2"> <h3 id="_the_sdp_mkrep_sh_script">C.5. The SDP <code>mkrep.sh</code> script</h3> <div class="paragraph"> <p>The SDP script <code>mkrep.sh</code> adheres to this standard and automates the creation of replicas following this naming convention.
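</p> </div> <div class="paragraph"> <p>To illustrate the convention, a name following the general form can be decomposed mechanically. Below is a minimal POSIX shell sketch; the <code>parse_server_spec</code> helper is our illustration, not part of the SDP, and it does not handle compound tags such as <code>ha_edge</code>:</p> </div>

```shell
# Split a server spec name of the form <HelixServerTag>_<ReplicaTypeTag>[<N>]_<SiteTag>.
# Illustrative helper only; not an SDP script.
parse_server_spec() {
  spec="$1"
  server_tag="${spec%%_*}"      # e.g. p4d
  rest="${spec#*_}"
  replica_tag="${rest%%_*}"     # e.g. ha2
  site_tag="${rest#*_}"         # e.g. chi
  echo "$server_tag $replica_tag $site_tag"
}

parse_server_spec p4d_ha2_chi     # p4d ha2 chi
parse_server_spec p4d_ffr_pune    # p4d ffr pune
```

<div class="paragraph"> <p>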
See: <a href="#_using_mkrep_sh">Section 6.3.4, “Using mkrep.sh”</a>.</p> </div> </div> </div> </div> <div class="sect1"> <h2 id="_frequently_asked_questions">Appendix D: Frequently Asked Questions</h2> <div class="sectionbody"> <div class="paragraph"> <p>This FAQ lists common questions about the SDP with answers.</p> </div> <div class="sect2"> <h3 id="_how_do_i_tell_what_version_of_the_sdp_i_have">D.1. How do I tell what version of the SDP I have?</h3> <div class="paragraph"> <p>First, try the standard check. See: <a href="#_checking_the_sdp_version">Section 1.3, “Checking the SDP Version”</a>.</p> </div> <div class="paragraph"> <p>If that does not display the SDP version, as may happen with older SDP installations, run the SDP Health Check, which will report the correct version reliably. See: <a href="#_sdp_health_checks">Appendix H, <em>SDP Health Checks</em></a>.</p> </div> </div> <div class="sect2"> <h3 id="_how_do_i_change_super_user_password">D.2. How do I change the super user password?</h3> <div class="paragraph"> <p>There are two critical accounts to be aware of:</p> </div> <div class="ulist"> <ul> <li> <p>The UNIX/Linux operating system user account with a password managed by the operating system of the machine, referred to as the OSUSER.</p> </li> <li> <p>The Perforce application super user with a password in the Perforce database. The SDP standard shell environment sets P4USER to refer to the super user.</p> </li> </ul> </div> <div class="paragraph"> <p>The user account name <code>perforce</code> is the default for both OSUSER and P4USER, but they can have different values.
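</p> </div> <div class="paragraph"> <p>The two identities are resolved independently. A minimal sketch (illustrative only; the <code>perforce</code> default shown is an assumption of a typical SDP setup, where P4USER is normally set by sourcing the SDP shell environment):</p> </div>

```shell
# OSUSER: the operating system account you are logged into on the server machine.
osuser=$(id -un)

# P4USER: the Perforce super user for the current SDP instance. The SDP
# shell environment normally sets this; 'perforce' is the common default.
p4user="${P4USER:-perforce}"

echo "OSUSER=$osuser P4USER=$p4user"
```

<div class="paragraph"> <p>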
The OSUSER applies to the server machine, while the P4USER can vary on a per-instance basis.</p> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> Some admins choose to use the same password for the <code>perforce</code> OSUSER and P4USER (for convenience and to reduce confusion), and then do routine rotations of both passwords (for enhanced security). </td> </tr> </table> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> The Perforce application super user should always use Perforce password management, even if other accounts are configured to use LDAP, SSO, or some other authentication method. </td> </tr> </table> </div> <div class="paragraph"> <p>To change the OSUSER password, use your standard operating system commands. This may be the <code>passwd</code> command, but may be different depending on your operating system and other factors.</p> </div> <div class="paragraph"> <p>The following describes how to change the Perforce application super user password.</p> </div> <div class="paragraph"> <p>Step 1. Get a Maintenance Window</p> </div> <div class="paragraph"> <p>Plan to do this work in a maintenance window. The procedure can cause disruption if any triggers or extensions rely on a valid ticket for your application super user. Also, much automation, such as the SDP <code>daily_checkpoint.sh</code> script, relies on having a valid ticket.</p> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> If you are fully aware of all the ways the password is used and thus the potential impacts, you can do the work outside of a maintenance window. Changing the password can disrupt triggers, extensions, and various automation, but will not have any impact on Helix Core service itself.
</td> </tr> </table> </div> <div class="paragraph"> <p>Step 2. Pick a Password</p> </div> <div class="paragraph"> <p>Select your new password. Depending on your local policy, you may manually create a password or generate one, and possibly store it in a vault of some kind.</p> </div> <div class="paragraph"> <p>Step 3. Log in as the OSUSER</p> </div> <div class="paragraph"> <p>Log in as the OSUSER (e.g. <code>perforce</code>), and ensure the standard SDP shell environment is set.</p> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> If the OSUSER shell environment files <code>~/.bash_profile</code> and <code>~/.bashrc</code> are set correctly, this step is done just by logging into the <code>perforce</code> OSUSER account. </td> </tr> </table> </div> <div class="paragraph"> <p>Step 4. Get the current password from the admin password file. The shell variable $SDP_ADMIN_PASSWORD_FILE contains the path to the password file for the current instance, something like <code>/p4/common/config/.p4passwd.p4_N.admin</code>. Do:</p> </div> <div class="literalblock"> <div class="content"> <pre>cat $SDP_ADMIN_PASSWORD_FILE</pre> </div> </div> <div class="paragraph"> <p>Take note of the current/old password.</p> </div> <div class="paragraph"> <p>Step 5. Put the new password in the admin password file.</p> </div> <div class="paragraph"> <p>Step 6. Do:</p> </div> <div class="literalblock"> <div class="content"> <pre>p4 passwd</pre> </div> </div> <div class="paragraph"> <p>Provide the old and new password as prompted.</p> </div> <div class="paragraph"> <p>Step 7. Call the <code>p4login</code> script to exercise the new password file:</p> </div> <div class="literalblock"> <div class="content"> <pre>p4login -v</pre> </div> </div> <div class="paragraph"> <p>Confirm you have a valid ticket afterward with:</p> </div> <div class="literalblock"> <div class="content"> <pre>p4 login -s</pre> </div> </div> <div class="paragraph"> <p>Step 8. Copy the password file to any and all replica and edge server machines.</p> </div> <div class="paragraph"> <p>Step 9. On each replica and edge, log in as <code>perforce</code> and also do <code>p4login -v</code> and <code>p4 login -s</code>.</p> </div> </div> <div class="sect2"> <h3 id="_can_i_remove_the_perforce_user">D.3. Can I remove the perforce user?</h3> <div class="paragraph"> <p>No. This account is required for critical operations like checkpoints for backup.</p> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> This account need not occupy a licensed seat. Once a Helix Core server becomes licensed, you can fill out the <a href="https://www.perforce.com/support/vcs/helix-core-request-background-user">Helix Core Request for Background User</a> form to request up to 3 "background users" to support background automation tasks. This accounts for the <code>perforce</code> super user, a <code>swarm</code> user, and typically one named something like <code>builder</code> for automated builds. </td> </tr> </table> </div> </div> <div class="sect2"> <h3 id="_can_i_clone_a_vm_to_create_a_standby_replica">D.4. Can I clone a VM to create a standby replica?</h3> <div class="paragraph"> <p>Yes, cloning a virtual machine (VM) of a Helix Core commit server is a great way to simplify the process of creating a standby replica of the commit server.
Similarly, cloning an edge server is useful in creating a standby replica of the edge.</p> </div> <div class="paragraph"> <p>Cloning can be done with various technologies and in cloud and on-prem environments. For example, in AWS, creating an AMI of an EC2 instance (i.e. a virtual machine) is just different terminology for creating a clone of the virtual machine. Azure, GCP, and other clouds have similar concepts and capabilities, as does on-prem virtual infrastructure such as VMware ESX servers. Even non-virtual infrastructure tools exist for cloning bare metal server machines.</p> </div> <div class="paragraph"> <p>Whether the machine you’re cloning is a commit server (to make a standby of the commit) or an edge (to make a standby of the edge), nothing needs to change other than the <code>server.id</code> file. There is a slight SDP structure difference between a commit and an edge — an edge will have a <code>/hxdepots/p4/N/checkpoints.edge_SITE</code> directory and a <code>/p4/N/checkpoints.edge_SITE</code> symlink to it. As long as you clone the machine that you’re making a standby of, be it commit or edge, you’ll have the correct structure on the standby.</p> </div> <div class="paragraph"> <p>While nothing should need to change, there are a few things to double check before initiating the cloning process:</p> </div> <div class="ulist"> <ul> <li> <p>Check that the SDP Instance Vars file, <code>/p4/common/config/p4_N.vars</code>, has correct values for <strong>P4MASTERHOST</strong> and <strong>P4MASTER_ID</strong>.</p> </li> <li> <p>The <strong>P4MASTER_ID</strong> must always be the <code>server.id</code> of the commit server, and that will be the same regardless of what machine you’re on. The <strong>P4MASTERHOST</strong> should be a DNS name for the commit server that works — i.e. one that is valid to reference from the standby server after cloning.
Using the same DNS name used by regular users is preferred — it can be an FQDN or a short name depending on how DNS is set up locally. If DNS isn’t available in the server environment (as is sometimes the case), Plan B for setting <strong>P4MASTERHOST</strong> is to still use the same DNS name that users know, but to add an <code>/etc/hosts</code> entry (a "hack") on the standby server machine after cloning so that the DNS name works on the standby to reference the commit server. Plan C, which we strongly advise against but do support, is to use an IP address for the <strong>P4MASTERHOST</strong> value. Plan A is preferred because Plans B and C require the admin who executes failover to be aware of the "hacks" — the <code>/etc/hosts</code> entry or the use of an IP address — and account for them in the failover procedure.</p> </li> </ul> </div> <div class="paragraph"> <p>The general idea is that the <code>/p4/common</code> structure in the SDP should be <em>common</em> across all Helix Core server machines in your fleet. Even on the standby replica, the <strong>P4MASTER_ID</strong> and <strong>P4MASTERHOST</strong> values should be exactly the same as on the commit. Cloning the machine is the best way to do it. It’s also nice to have a reasonably current set of archives, and nice to ensure all those little SDP config bits are correct.</p> </div> <div class="paragraph"> <p>Here is a sample procedure for cloning a machine to create a standby replica.</p> </div> <div class="paragraph"> <p>Step 1. Verify <strong>P4MASTER_ID</strong> and <strong>P4MASTERHOST</strong> settings are correct.</p> </div> <div class="paragraph"> <p>Step 2. Use <code>mkrep.sh</code> to create your standby server. See: <a href="#_using_mkrep_sh">Section 6.3.4, “Using mkrep.sh”</a>.</p> </div> <div class="paragraph"> <p>Step 3. Run <code>p4 admin journal</code>.
(Digression: Use the <code>p4 admin journal</code> command if you’re creating a standby or unfiltered edge or replica, but use the <code>rotate_journal.sh</code> script instead if you’re creating a filtered edge or filtered forwarding replica, where <em>filtered</em> here means using the <code>*DataFilter</code> fields in the server spec and/or using the <code>-T</code> option on the configured <code>startup.N</code> thread that does the metadata pull for the ServerID of the new server.)</p> </div> <div class="paragraph"> <p>Step 4. Clone the VM.</p> </div> <div class="paragraph"> <p>Step 5. Start the new VM after the cloning operation. For example, if in AWS, launch an EC2 instance from the AMI.</p> </div> <div class="paragraph"> <p>Step 6. Stop the p4d_N (and p4broker_N) services if running.</p> </div> <div class="paragraph"> <p>Step 7. Use <code>hostname -I</code> to get the local/private IP, and request a new license file for that IP — but don’t wait for it.</p> </div> <div class="paragraph"> <p>Step 8. Remove the <code>$P4ROOT/license</code> file.</p> </div> <div class="paragraph"> <p>Step 9. Remove the <code>$P4ROOT/server.id</code> file.</p> </div> <div class="paragraph"> <p>Step 10. Load the latest checkpoint and numbered journal, and then pull recent archives, e.g. with a command like this sample:</p> </div> <div class="literalblock"> <div class="content"> <pre>nohup load_checkpoint.sh /p4/1/checkpoints/p4_1.ckp.50.gz /p4/1/checkpoints/p4_1.jnl.50 -s p4d_ha_bos -l -r -b -y -verify default < /dev/null > /p4/1/logs/load.log 2>&1 &</pre> </div> </div> <div class="paragraph"> <p>That <code>load_checkpoint.sh</code> command does the rest.
It stops p4d and p4broker services (just in case you forgot), clears P4ROOT, moves P4LOG and P4JOURNAL aside if they exist (which they would after a cloning situation), puts the new correct <code>server.id</code> file in place, reloads from the latest checkpoint and numbered journal (which are sure to have the very latest data due to the <code>p4 admin journal</code> done above just before the cloning), does a <code>p4d -xu</code> (just in case it’s needed, but it shouldn’t be in this situation), starts the service, and then kicks off a <code>p4 verify -t</code> command on all depots to pull over any missing files from the commit.</p> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> The above procedure is merely a sample. Certain details, such as the handling of license files, may vary from one site to another. </td> </tr> </table> </div> </div> </div> </div> <div class="sect1"> <h2 id="_troubleshooting_guide">Appendix E: Troubleshooting Guide</h2> <div class="sectionbody"> <div class="paragraph"> <p>This appendix lists problems sometimes encountered by SDP users, with guidance on how to analyze and resolve each issue.</p> </div> <div class="paragraph"> <p>Do not hesitate to contact <a href="mailto:consulting@perforce.com">consulting@perforce.com</a> if additional assistance is required.</p> </div> <div class="sect2"> <h3 id="_daily_checkpoint_sh_fails">E.1.
daily_checkpoint.sh fails</h3> <div class="olist arabic"> <ol class="arabic"> <li> <p>Check the log file and look for errors:</p> <div class="literalblock"> <div class="content"> <pre>less /p4/1/logs/checkpoint.log</pre> </div> </div> </li> </ol> </div> <div class="paragraph"> <p>Possibilities include:</p> </div> <div class="ulist"> <ul> <li> <p>Errors from <code>verify_sdp.sh</code> - these should be self-explanatory.</p> <div class="ulist"> <ul> <li> <p>Note that it is possible to edit <code>/p4/common/config/p4_1.vars</code> and set the value of <code>VERIFY_SDP_SKIP_TEST_LIST</code> to include any tests you believe should be skipped - don’t overdo this!</p> </li> </ul> </div> </li> <li> <p>See the next section.</p> </li> </ul> </div> <div class="sect3"> <h4 id="_last_checkpoint_not_complete_check_the_backup_process_or_contact_support">E.1.1. Last checkpoint not complete. Check the backup process or contact support.</h4> <div class="paragraph"> <p>If this error occurs it means the script has found a "semaphore" file which is used to prevent multiple checkpoints running at the same time. This file is (for instance 1) <code>/p4/1/logs/ckp_running.txt</code>.</p> </div> <div class="paragraph"> <p>Check if there is a current process running:</p> </div> <div class="literalblock"> <div class="content"> <pre>ps aux | grep daily_checkpoint</pre> </div> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> If you are CERTAIN that there is no checkpoint process running, then you can delete this file and re-run <code>daily_checkpoint.sh</code> (or allow it to be run via nightly crontab). If in doubt, contact support! </td> </tr> </table> </div> </div> </div> <div class="sect2"> <h3 id="_replication_appears_to_be_stalled">E.2.
Replication appears to be stalled</h3> <div class="paragraph"> <p>This can happen for a variety of reasons, most commonly:</p> </div> <div class="ulist"> <ul> <li> <p>Service user is not logged in to the parent</p> <div class="ulist"> <ul> <li> <p>Or there is a problem with the ticket or ticket location</p> </li> </ul> </div> </li> <li> <p>Configurables are incorrect (<code>p4 configure show allservers</code>)</p> </li> <li> <p>Network connectivity to the upstream parent</p> </li> <li> <p>A problem with the state file</p> <div class="olist arabic"> <ol class="arabic"> <li> <p>Check the output of <code>p4 pull -lj</code>, e.g. this shows all is working well:</p> <div class="literalblock"> <div class="content"> <pre>$ p4 pull -lj
Current replica journal state is: Journal 1237, Sequence 2680510310.
Current master journal state is: Journal 1237, Sequence 2680510310.
The statefile was last modified at: 2022/03/29 14:15:16.
The replica server time is currently: 2022/03/29 14:15:18 +0000 GMT</pre> </div> </div> </li> </ol> </div> </li> </ul> </div> <div class="sect3"> <h4 id="_resolution">E.2.1. Resolution</h4> <div class="olist arabic"> <ol class="arabic"> <li> <p>This example shows a password error for the service user:</p> <div class="literalblock"> <div class="content"> <pre>$ p4 pull -lj
Perforce password (P4PASSWD) invalid or unset.
Perforce password (P4PASSWD) invalid or unset.
Current replica journal state is: Journal 1237, Sequence 2568249374.
Current master journal state is: Journal 1237, Sequence -1.
Current master journal state is: Journal 0, Sequence -1.
The statefile was last modified at: 2022/03/29 13:05:46.
The replica server time is currently: 2022/03/29 14:13:21 +0000 GMT</pre> </div> </div> <div class="olist loweralpha"> <ol class="loweralpha" type="a"> <li> <p>In case of a password error, try logging in again:</p> <div class="literalblock"> <div class="content"> <pre>p4login -v 1 -service
p4 pull -lj</pre> </div> </div> </li> <li> <p>If the above reports an error, then copy and paste the command it shows as executing and try it manually, for example (adjust the server/user ids):</p> <div class="literalblock"> <div class="content"> <pre>/p4/1/bin/p4_1 -p p4master:1664 -u p4admin -s login svc_p4d_edge_ldn</pre> </div> </div> </li> </ol> </div> </li> </ol> </div> <div class="paragraph"> <p>If the above is not successful:</p> </div> <div class="olist arabic"> <ol class="arabic" start="3"> <li> <p>Review the output of <code>verify_sdp.sh</code>:</p> <div class="literalblock"> <div class="content"> <pre>/p4/common/bin/verify_sdp.sh 1</pre> </div> </div> <div class="olist loweralpha"> <ol class="loweralpha" type="a"> <li> <p>Check for errors in the resulting log file:</p> <div class="literalblock"> <div class="content"> <pre>grep Error /p4/1/logs/verify_sdp.log</pre> </div> </div> </li> </ol> </div> </li> <li> <p>Check for errors in the p4d log file:</p> <div class="literalblock"> <div class="content"> <pre>grep -A4 error: /p4/1/logs/log | less</pre> </div> </div> </li> <li> <p>Check permissions on the tickets file (env var <code>$P4TICKETS</code>):</p> <div class="literalblock"> <div class="content"> <pre>ls -al $P4TICKETS</pre> </div> </div> <div class="paragraph"> <p>e.g.</p> </div> <div class="literalblock"> <div class="content"> <pre>ls -al /p4/1/.p4tickets</pre> </div> </div> </li> </ol> </div> </div> <div class="sect3"> <h4 id="_make_errors_visible">E.2.2.
Make Errors Visible</h4> <div class="paragraph"> <p>If the above doesn’t help, then make errors visible/easy to find, assuming instance <strong>1</strong> - run this <strong>on the replica (not the commit!)</strong>:</p> </div> <div class="literalblock"> <div class="content"> <pre>sudo systemctl stop p4d_1
cd /p4/1/logs
mv log log.old
sudo systemctl start p4d_1
grep -A4 error: log | less</pre> </div> </div> <div class="paragraph"> <p>Because the log file is now short, any errors should be easy to find. Ask for help (email <code>support-helix-core@perforce.com</code>) if the cause is not obvious.</p> </div> </div> <div class="sect3"> <h4 id="_remove_state_file">E.2.3. Remove state file</h4> <div class="paragraph"> <p>The files <code>state</code> and <code>statejcopy</code> can usually be removed - let the server work out its current state. If you want to know the current journal counter for the replica:</p> </div> <div class="literalblock"> <div class="content"> <pre>p4d -r /p4/1/root -k db.counters -jd - 2>/dev/null | grep @journal@ | cut -d '@' -f 8</pre> </div> </div> <div class="paragraph"> <p>If there is a problem with being able to pull over an old journal which no longer exists on the master, you may need to reseed the replica!</p> </div> <div class="literalblock"> <div class="content"> <pre>sudo systemctl stop p4d_1
cd /p4/1/root
mv state* save/
cd /p4/1/logs
[[ -d save ]] || mkdir save # Create if doesn't exist
mv journal* save/
sudo systemctl start p4d_1</pre> </div> </div> </div> </div> <div class="sect2"> <h3 id="_archive_pull_queue_appears_to_be_stalled">E.3. Archive pull queue appears to be stalled</h3> <div class="paragraph"> <p>This manifests as the output of <code>p4 pull -ls</code> showing an unchanging number of files in the queue - no progress is being made.</p> </div> <div class="literalblock"> <div class="content"> <pre>$ p4 pull -ls
File transfers: 3 active/29 total, bytes: 2338 active/25579 total.
Oldest change with at least one pending file transfer: 1234.</pre> </div> </div> <div class="paragraph"> <p>This can happen for a variety of reasons, most commonly:</p> </div> <div class="ulist"> <ul> <li> <p>Non-existent (purged) files (where the filetype includes <code>+Sn</code>, with <code>n</code> being the number of revisions for which to keep content)</p> </li> <li> <p>Non-existent (shelved) files</p> </li> <li> <p>Non-existent files with a verify problem on the master server</p> </li> <li> <p>Temporary file transfer problems which exceeded the thresholds for auto-retry</p> </li> </ul> </div> <div class="sect3"> <h4 id="_resolutions">E.3.1. Resolutions</h4> <div class="olist arabic"> <ol class="arabic"> <li> <p>Retry pull errors:</p> <div class="listingblock"> <div class="content"> <pre class="highlight"><code>p4 pull -R
<wait a short time>
p4 pull -ls</code></pre> </div> </div> </li> <li> <p>If the above doesn’t fix things then we can check for errors:</p> <div class="literalblock"> <div class="content"> <pre>p4 pull -l | grep -c failed</pre> </div> </div> </li> <li> <p>If the above count is > 0 then we need to investigate in more detail.</p> </li> </ol> </div> <div class="sect4"> <h5 id="_remove_and_re_queue">E.3.1.1.
Remove and re-queue</h5> <div class="paragraph"> <p>Save the list of files with errors to a file, then remove them from the queue - like this, to allow for spaces in filenames:</p> </div> <div class="literalblock"> <div class="content"> <pre>p4 -F "%rev% %file%" pull -l > pull.errs
cat pull.errs | while read -e r f; do p4 pull -d -r $r -f "$f"; done</pre> </div> </div> <div class="paragraph"> <p>Finally we can “re-queue” any for re-transfer (note this can take a while for files with many revs):</p> </div> <div class="literalblock"> <div class="content"> <pre>cut -d' ' -f 2,999 pull.errs | sort | uniq | while read -e f; do echo "$f" && p4 verify -qt --only MISSING "$f"; done</pre> </div> </div> <div class="admonitionblock tip"> <table> <tr> <td class="icon"> <i class="fa icon-tip" title="Tip"></i> </td> <td class="content"> The <code>--only MISSING</code> option requires <code>p4d</code> version >= 2021.1 and is much faster - just remove that option with older versions of <code>p4d</code>. </td> </tr> </table> </div> <div class="paragraph"> <p>Then have another look:</p> </div> <div class="literalblock"> <div class="content"> <pre>p4 pull -l</pre> </div> </div> </div> <div class="sect4"> <h5 id="_check_for_verify_errors_on_the_parent_server">E.3.1.2. Check for verify errors on the parent server</h5> <div class="paragraph"> <p>On the parent server, check the most recent <code>p4verify.log</code> file (typically generated Saturday morning via crontab).</p> </div> <div class="paragraph"> <p>Cross-check any entries in <code>pull.errs</code> above - if they are also verify errors on the parent server then you need to resolve that. Consider contacting <a href="mailto:helix-core-support@perforce.com">helix-core-support@perforce.com</a> if you need help. Resolutions may include obliterating lost revisions, or attempting to restore from backup.</p> </div> </div> </div> </div> <div class="sect2"> <h3 id="_cant_login_to_edge_server">E.4.
Can’t log in to edge server</h3> <div class="paragraph"> <p>This can happen if the edge server replication has stalled as above.</p> </div> <div class="sect3"> <h4 id="_resolution_2">E.4.1. Resolution</h4> <div class="ulist"> <ul> <li> <p>Try the resolution steps for <a href="#_replication_appears_to_be_stalled">Section E.2, “Replication appears to be stalled”</a></p> </li> <li> <p>Restart the edge server</p> </li> <li> <p>Monitor replication and check for any errors</p> </li> </ul> </div> </div> </div> <div class="sect2"> <h3 id="_updating_offline_db_for_an_edge_server">E.5. Updating offline_db for an edge server</h3> <div class="paragraph"> <p>If your <code>daily_checkpoint.sh</code> jobs on the edge server are failing due to a problem with the <code>offline_db</code> or missing edge journals, AND the edge server is otherwise running fine, then consider this option.</p> </div> <div class="admonitionblock important"> <table> <tr> <td class="icon"> <i class="fa icon-important" title="Important"></i> </td> <td class="content"> Checkpointing the edge will take some time, during which the edge will be locked! Schedule this for a convenient time! </td> </tr> </table> </div> <div class="sect3"> <h4 id="_resolution_3">E.5.1.
Resolution</h4> <div class="paragraph"> <p>Assuming instance 1:</p> </div> <div class="ulist"> <ul> <li> <p>ON EDGE SERVER:</p> <div class="literalblock"> <div class="content"> <pre>source /p4/common/bin/p4_vars 1
p4 admin checkpoint -Z</pre> </div> </div> </li> <li> <p>ON COMMIT SERVER (and at a convenient time to lock the edge):</p> <div class="literalblock"> <div class="content"> <pre>source /p4/common/bin/p4_vars 1
p4 admin journal</pre> </div> </div> </li> <li> <p>Monitor the edge server checkpoint being created (on EDGE SERVER):</p> <div class="literalblock"> <div class="content"> <pre>p4 configure show journalPrefix</pre> </div> </div> <div class="paragraph"> <p>Using the output shown by the above command:</p> </div> <div class="literalblock"> <div class="content"> <pre>ls -lhtr /p4/1/checkpoints.&lt;suffix&gt;/*.ckp.*</pre> </div> </div> <div class="paragraph"> <p>You can also check for the edge being locked (the following may hang):</p> </div> <div class="literalblock"> <div class="content"> <pre>p4 monitor show -al</pre> </div> </div> </li> <li> <p>Then replay the new edge checkpoint on the edge server to rebuild the <code>offline_db</code>:</p> <div class="literalblock"> <div class="content"> <pre>cd /p4/1/offline_db
mv db.* save/
nohup /p4/1/bin/p4d_1 -r . -jr /p4/1/checkpoints.&lt;suffix&gt;/p4_1.ckp.NNNN.gz &gt; rec.out &amp;</pre> </div> </div> <div class="paragraph"> <p>When the above has completed, mark the <code>offline_db</code> as usable by creating the semaphore file:</p> </div> <div class="literalblock"> <div class="content"> <pre>touch /p4/1/offline_db/offline_db_usable.txt</pre> </div> </div> </li> </ul> </div> </div> </div> <div class="sect2"> <h3 id="_journal_out_of_sequence_in_checkpoint_log_file">E.6. Journal out of sequence in checkpoint.log file</h3> <div class="paragraph"> <p>This error is encountered when the offline and live databases are no longer in sync, and will cause the offline checkpoint process to fail. Because the scripts will replay all outstanding journals, this error is much less likely to occur.
This error can be fixed by:</p> </div> <div class="ulist"> <ul> <li> <p>recreating the offline_db: <a href="#_recreate_offline_db_sh">Section 9.4.11, “recreate_offline_db.sh”</a></p> </li> <li> <p>alternatively, if that doesn’t work, running the <a href="#_live_checkpoint_sh">Section 9.4.6, “live_checkpoint.sh”</a> script (note the warnings about locking the live database)</p> </li> </ul> </div> </div> <div class="sect2"> <h3 id="_unexpected_end_of_file_in_replica_daily_sync">E.7. Unexpected end of file in replica daily sync</h3> <div class="paragraph"> <p>Check the start time and duration of the <a href="#_daily_checkpoint_sh">Section 9.4.4, “daily_checkpoint.sh”</a> cron job on the master. If this overlaps with the start time of the <a href="#_sync_replica_sh">Section 9.6.33, “sync_replica.sh”</a> cron job on a replica, a truncated checkpoint may be rsync’d to the replica, and replaying it will result in an error.</p> </div> <div class="paragraph"> <p>Adjust the replica’s cron job to start later to resolve this.</p> </div> <div class="paragraph"> <p>Default cron job times, as installed by the SDP, are initial estimates, and should be adjusted to suit your production environment.</p> </div> </div> </div> </div> <div class="sect1"> <h2 id="_starting_and_stopping_services">Appendix F: Starting and Stopping Services</h2> <div class="sectionbody"> <div class="paragraph"> <p>There are a variety of <em>init mechanisms</em> on various Linux flavors. The following describes how to start and stop services using different init mechanisms.</p> </div> <div class="sect2"> <h3 id="_sdp_service_management_with_the_systemd_init_mechanism">F.1. SDP Service Management with the systemd init mechanism</h3> <div class="paragraph"> <p>On modern OSes, such as RHEL 7 &amp; 8, Rocky Linux 8, Ubuntu &gt;= 18.04, and SuSE &gt;= 12, the <code>systemd</code> init mechanism is used.
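</p> </div> <div class="paragraph"> <p>If you are unsure which init mechanism a given machine uses, checking for the <code>/run/systemd/system</code> directory is a quick heuristic (this check is an illustrative sketch, not part of the SDP; systemd creates that directory early in boot, as described in the <code>sd_booted(3)</code> man page):</p> </div> <div class="literalblock"> <div class="content"> <pre># Prints "systemd" on systemd-managed systems, "other (e.g. SysV)" otherwise.
if [ -d /run/systemd/system ]; then
    echo "systemd"
else
    echo "other (e.g. SysV)"
fi</pre> </div> </div> <div class="paragraph"> <p>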
The underlying SDP init scripts are used, but they are wrapped with "unit" files in the <code>/etc/systemd/system</code> directory, and called using the <code>systemctl</code> interface as <code>root</code> (typically using <code>sudo</code> while running as the <code>perforce</code> user).</p> </div> <div class="paragraph"> <p>On systems where systemd is used, <strong>the service can only be started using the <code>sudo systemctl</code> command</strong>, as in this example:</p> </div> <div class="literalblock"> <div class="content"> <pre>sudo systemctl status p4d_N
sudo systemctl start p4d_N
sudo systemctl status p4d_N</pre> </div> </div> <div class="paragraph"> <p>Note that there is no immediate indication from running the start command that it was actually successful, hence the status command is run right after. For best results, wait a few seconds after running the start command before running the status command. (If the start was unsuccessful, a good first step in diagnosing the problem is to run <code>tail /p4/N/logs/log</code> and <code>cat /p4/N/logs/p4d_init.log</code>.)</p> </div> <div class="paragraph"> <p>The service should also be stopped in the same manner:</p> </div> <div class="literalblock"> <div class="content"> <pre>sudo systemctl stop p4d_N</pre> </div> </div> <div class="paragraph"> <p>Status can be checked either with the <code>systemctl</code> command or by calling the underlying SDP init script directly. However, there are cases where the status indication may be different.
Calling the underlying SDP init script for status will always report status accurately, as in this example:</p> </div> <div class="literalblock"> <div class="content"> <pre>/p4/N/bin/p4d_N_init status</pre> </div> </div> <div class="paragraph"> <p>That works reliably even if the service was started with <code>systemctl start p4d_N</code>.</p> </div> <div class="paragraph"> <p>Checking status using the systemctl mechanism is done like so:</p> </div> <div class="literalblock"> <div class="content"> <pre>sudo systemctl status p4d_N</pre> </div> </div> <div class="paragraph"> <p>If this reports that the service is <strong><code>active (running)</code></strong>, that indication is reliable. However, the status indication may falsely indicate that the service is down when it is actually running. This could occur with older init scripts if the underlying init script was used to start the server rather than using <code>sudo systemctl start p4d_N</code> as prescribed. In that case, the status indication would only report the service as running if it was started using the systemctl mechanism. As of SDP 2020.1, a safety feature assures that systemd is always used if configured.</p> </div> <div class="sect3"> <h4 id="_brokers_and_proxies">F.1.1. Brokers and Proxies</h4> <div class="paragraph"> <p>In the above examples for starting, stopping, and status-checking of services using either the SysV or <code>systemd</code> init mechanisms, <code>p4d</code> is the sample service managed. This can be replaced with <code>p4p</code> or <code>p4broker</code> to manage proxy and broker services, respectively. For example, on a <code>systemd</code> system, the broker service, if configured, can be started like so:</p> </div> <div class="literalblock"> <div class="content"> <pre>sudo systemctl status p4broker_1
sudo systemctl start p4broker_1
sudo systemctl status p4broker_1</pre> </div> </div> </div> <div class="sect3"> <h4 id="_root_or_sudo_required_with_systemd">F.1.2.
Root or sudo required with systemd</h4> <div class="paragraph"> <p>For SysV, having sudo is optional, as the underlying SDP init scripts can be called safely as <code>root</code> or <code>perforce</code>; the service runs as <code>perforce</code>.</p> </div> <div class="paragraph"> <p>If <code>systemd</code> is used, by default <code>root</code> access (often granted via <code>sudo</code>) is needed to start and stop the p4d service, effectively making sudo access required for the <code>perforce</code> user. The systemd "unit" files provided with the SDP ensure that the underlying SDP init scripts run under the correct operating system account (typically <code>perforce</code>).</p> </div> </div> </div> <div class="sect2"> <h3 id="_sdp_service_management_with_sysv_init_mechanism">F.2. SDP Service Management with SysV init mechanism</h3> <div class="paragraph"> <p>On older OSes, like RHEL/CentOS 6, the SysV init mechanism is used. For those, you can use the following example commands, replacing <em>N</em> with the actual SDP instance name:</p> </div> <div class="literalblock"> <div class="content"> <pre>sudo service p4d_N_init status</pre> </div> </div> <div class="paragraph"> <p>The service can be checked for status, started, and stopped by calling the underlying SDP init scripts as either <code>root</code> or <code>perforce</code> directly:</p> </div> <div class="literalblock"> <div class="content"> <pre>/p4/N/bin/p4d_N_init status</pre> </div> </div> <div class="paragraph"> <p>Replace <code>status</code> with <code>start</code> or <code>stop</code> as needed.
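</p> </div> <div class="paragraph"> <p>For example, a restart of instance 1 with status checks around the stop and start might look like the following (the instance number is illustrative; substitute your own):</p> </div> <div class="literalblock"> <div class="content"> <pre>/p4/1/bin/p4d_1_init status
/p4/1/bin/p4d_1_init stop
/p4/1/bin/p4d_1_init status
/p4/1/bin/p4d_1_init start
/p4/1/bin/p4d_1_init status</pre> </div> </div> <div class="paragraph"> <p>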
It is common to do a <code>status</code> check immediately before and after a <code>start</code> or <code>stop</code>.</p> </div> <div class="paragraph"> <p>During installation, a symlink is set up such that <code>/etc/init.d/p4d_N_init</code> is a symlink to <code>/p4/N/bin/p4d_N_init</code>, and the proper <code>chkconfig</code> commands are run to register the application as a service that will be started on boot and gracefully shut down on reboot.</p> </div> <div class="paragraph"> <p>On systems using SysV, calling the underlying SDP init scripts is safe and completely interchangeable with using the <code>service</code> command run as <code>root</code>. That is, you can start a service with the underlying SDP init script, and the SysV init mechanism will still safely detect whether the service is running during a system shutdown, and thus will perform a graceful stop if p4d is up and running when you go to reboot. The status indication of the underlying SDP init script is reliable, regardless of how the service was started (i.e. calling the init script directly as <code>root</code> or <code>perforce</code>, or using the <code>service</code> call as <code>root</code>).</p> </div> </div> </div> </div> <div class="sect1"> <h2 id="_brokers_in_stack_topology">Appendix G: Brokers in Stack Topology</h2> <div class="sectionbody"> <div class="paragraph"> <p>A preferred methodology is to deploy p4broker processes to control access to p4d servers. In a typical configuration, 100% of user activity gets to p4d through a p4broker deployed in "stack topology", i.e. a p4broker exists on every machine where p4d is, and access to p4d on any given machine is only via the broker, with a typical setup using firewalls to enforce that concept.
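</p> </div> <div class="paragraph"> <p>As a sketch of the idea (all values are illustrative, not a recommended production configuration), a <code>p4broker</code> configuration file for stack topology listens on the public port and targets a <code>p4d</code> on a private port on the same machine:</p> </div> <div class="literalblock"> <div class="content"> <pre># Illustrative p4broker.cfg sketch: broker on the public port 1666,
# p4d reachable only on the local port 1999.
target      = localhost:1999;
listen      = 1666;
directory   = /p4/1;
logfile     = /p4/1/logs/p4broker.log;
admin-name  = "Perforce Admins";
admin-phone = 999;
admin-email = admins@example.com;</pre> </div> </div> <div class="paragraph"> <p>With this in place, firewall rules would allow inbound connections only to the broker port, keeping the <code>p4d</code> port private to the machine.</p> </div> <div class="paragraph"> <p>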
There are typically only three exceptions:</p> </div> <div class="olist arabic"> <ol class="arabic"> <li> <p>p4d-to-p4d communication (<code>p4 pull</code>, <code>p4 journalcopy</code>) bypasses the broker.</p> </li> <li> <p>Triggers called from p4d run <code>p4</code> commands against the p4d port directly.</p> </li> <li> <p>Admins running <code>p4</code> commands while on the server machine can bypass the broker if they want.</p> </li> </ol> </div> <div class="paragraph"> <p>Everything else (including proxies, Swarm, Jenkins, any systems integrations, etc.) must go through the broker.</p> </div> <div class="paragraph"> <p>Using brokers like this makes it straightforward to implement the "Down for Maintenance" concept across an entire global topology. For example, when upgrading p4d services in a global topology using the outer-to-inner upgrade procedure, it is best to prevent users from loading the system during the upgrade process.</p> </div> <div class="paragraph"> <p>Using brokers in "stack topology" avoids the significant performance impact of brokers deployed on a different machine than the targeted p4d. When running on the same host, the performance impact of a broker is relatively small.</p> </div> <div class="paragraph"> <p>Brokers are preferred over p4d command triggers for certain use cases. They’re independent of p4d and can keep p4d safe from rogue usage patterns.</p> </div> </div> </div> <div class="sect1"> <h2 id="_sdp_health_checks">Appendix H: SDP Health Checks</h2> <div class="sectionbody"> <div class="paragraph"> <p>If you need to contact Perforce Support to analyze an issue with the SDP on UNIX/Linux, you can use the <code>/p4/common/bin/sdp_health_check.sh</code> script. This script is included with the SDP (starting with SDP 2023.1 Patch 3). If your installation does not have this script, it can be downloaded separately.
Every version of the <code>sdp_health_check.sh</code> script can be used with any and all versions of the UNIX/Linux SDP dating back to 2007, so you don’t need to be concerned with version compatibility.</p> </div> <div class="paragraph"> <p>If your Perforce Helix server machine has outbound internet access, execute the following while logged in as the operating system user that owns the <code>/p4/common/bin</code> directory (typically <code>perforce</code> or <code>p4admin</code>):</p> </div> <div class="literalblock"> <div class="content"> <pre>cd /p4/common/bin</pre> </div> </div> <div class="literalblock"> <div class="content"> <pre>[[ -e sdp_health_check.sh ]] &amp;&amp; mv -f sdp_health_check.sh sdp_health_check.sh.moved.$(date +'%Y-%m-%d-%H%M%S')</pre> </div> </div> <div class="literalblock"> <div class="content"> <pre>curl -L -s -O https://swarm.workshop.perforce.com/projects/perforce-software-sdp/download/tools/sdp_health_check.sh
chmod +x sdp_health_check.sh</pre> </div> </div> <div class="literalblock"> <div class="content"> <pre>./sdp_health_check.sh</pre> </div> </div> <div class="paragraph"> <p>If your Perforce Helix server machine does not have outbound internet access, acquire the <code>sdp_health_check.sh</code> file from a machine that does have outbound internet access, and then transfer that file to your Perforce Helix server machine.</p> </div> <div class="paragraph"> <p>If you have multiple server machines with SDP, possibly including machines running P4D replicas or edge servers, P4Proxy or P4Broker servers, run the health check on all machines of interest.</p> </div> <div class="paragraph"> <p>The <code>sdp_health_check.sh</code> script will produce a log file that can be provided to Perforce Support to help diagnose configuration issues and other problems. The script has these characteristics:</p> </div> <div class="ulist"> <ul> <li> <p>It is always safe to run.
It does only analysis and reporting.</p> </li> <li> <p>It does only fast checks, and has no interactive prompts. Some log files are captured (such as checkpoint.log), but not potentially large ones (such as the p4d server log).</p> </li> <li> <p>It requires no command line arguments.</p> </li> <li> <p>It does not transfer sensitive information.</p> </li> <li> <p>It works for any and all UNIX/Linux SDP versions since 2007.</p> </li> </ul> </div> </div> </div> </div> <div id="footer"> <div id="footer-text"> Version v2024.1<br> Last updated 2024-05-30 11:49:48 -0400 </div> </div> </body> </html>
# | Change | User | Description | Committed | |
---|---|---|---|---|---|
#125 | 30938 | Robert Cowham |
Minor clarifications for getting started using install_sdp.sh script Updated some links to new Helix Core doc locations. |
||
#124 | 30937 | Robert Cowham |
Update p4review2.py to work with Python3 Add basic test harness. Delete p4review.py which is Python2 and update docs. |
||
#123 | 30926 | C. Thomas Tyler | Updated version for release. | ||
#122 | 30913 | C. Thomas Tyler | Regnerated docs for release. | ||
#121 | 30837 | C. Thomas Tyler | Added ref to new storage doc. | ||
#120 | 30835 | C. Thomas Tyler |
Adapted Server Spec Naming Standard section detailing the ServerID of the commit server to the defacto standard already used in HRA. Changed from: {commit|master}[.<SDPInstance>[.<OrgName>]] to: {commit|master}[.<OrgName>[.<SDPInstance>]] Various typo fixes and minor changes in SDP Guide. Updated consulting email address (now consulting-helix-core@perforce.com) in various files. |
||
#119 | 30782 | C. Thomas Tyler |
Added new install_sdp.sh script and supporting documentation. The new install_sdp.sh makes SDP independent of the separate Helix Installer software (the reset_sdp.sh script). The new script greatly improves the installation experience for new server machines. It is ground up rewrite of the reset_sdp.sh script. The new script preserves the desired behaviors of the original Helix Installer script, but is focused on the use case of a fresh install on a new server machine. With this focus, the scripts does not have any "reset" logic, making it completely safe. Added various files and functionalityfrom Helix Installer into SDP. * Added firewalld templates to SDP, and added ufw support. * Improved sudoers generation. * Added bash shell templates. This script also installs in the coming SDP Package structure. New installs use a modified SDP structure that makes it so the /p4/sdp and /p4/common now point to folders on the local OS volume rather than the /hxepots volume. The /hxdepots volume, which is often NFS mounted, is still used for depots and checkpoints, and for backups. The new structure uses a new /opt/perforce/helix-sdp structure under which /p4/sdp and /p4/common point. This structure also contains the expaneded SDP tarball, downloads, helix_binaries, etc. This change represents the first of 3-phase rollout of the new package structure. In this first phase, the "silent beta" phase, the new structure is used for new installations only. This phase requires no changes to released SDP scripts except for mkdirs.sh, and even that script remains backward-compatible with the old structure if used independently of install_sdp.sh. If used with install_sdp.sh, the new structure is used. In the second phase (targeted for SPD 2024.2 release), the sdp_upgrade.sh script will convert existing installations to the new structure. In the third phase (targeted for SDP 2025.x), this script will be incorporated into OS pacakge installations for the helix-sdp package. 
Perforce internal wikis have more detail on this change. #review-30783 |
||
#118 | 30661 | Robert Cowham | Exapand description for recreate_offline_db.sh | ||
#117 | 30656 | Robert Cowham | Tweak xrefs from failover guide and sdp guide. | ||
#116 | 30608 | C. Thomas Tyler |
Fixed doc typo in triggers table call; trigger type should be 'change-submit', not 'submit-change'. |
||
#115 | 30606 | C. Thomas Tyler |
Updated content related to to perforce-p4python3 package. #review-30607 |
||
#114 | 30531 | C. Thomas Tyler |
Merge down from main to dev with: p4 merge -b perforce_software-sdp-dev |
||
#113 | 30516 | C. Thomas Tyler | Doc corrections and clarifications. | ||
#112 | 30440 | Robert Cowham | Add a couple of emphases... | ||
#111 | 30385 | C. Thomas Tyler | Regnerated docs for release. | ||
#110 | 30367 | C. Thomas Tyler |
Updated Server Spec Naming Standard to account for allowing 'commit' to be used as a synonym for 'master', and also allowing for appending an optional '<OrgName>'. |
||
#109 | 30294 | C. Thomas Tyler | Updated docs for release. | ||
#108 | 30285 | C. Thomas Tyler |
Updated SDP Guide for Unix to include raw perforce_suoders.t file for better accuracy and easier update. Added a copy of perforce_sudoers.t from Helix Installer. For immediate purposes, this is to allow this file to be included in SDP documentation. However, this change is also part of a larger goal to move extensive Helix Installer functionality into the SDP. |
||
#107 | 30205 | C. Thomas Tyler | Refactored Terminology so we can reference indiviual terms with direct URLs. | ||
#106 | 30168 | Mark Zinthefer | updating the Unix docs | ||
#105 | 30040 | C. Thomas Tyler | Regenerated docs. | ||
#104 | 30031 | C. Thomas Tyler | Added doc for ccheck.sh, keep_offline_db_current.sh. | ||
#103 | 30008 | C. Thomas Tyler |
Doc change and Non-functional updates to CheckCaseTrigger.py: * Bumped version number for recent changes. * Fixed doc inconsistencies. Fixes: SDP-1035 #review-30009 |
||
#102 | 30000 | C. Thomas Tyler |
Refined Release Notes and top-level README.md file in preparation for coming 2023.2 release. Adjusted Makefile in doc directory to also generate top-level README.html from top-level README.md file so that the HTML file is reliably updated in the SDP release process. Updated :revnumber: and :revdate: docs in AsciiDoc files to indicate that the are still current. Avoiding regen of ReleaseNotes.pdf binary file since that will need at least one more update before shipping SDP 2023.2. |
||
#101 | 29953 | C. Thomas Tyler | Regeneratd docs. | ||
#100 | 29912 | Robert Cowham | Remove link to Helix Installer until we refactor that to avoid support errors. | ||
#99 | 29890 | C. Thomas Tyler | Regenerated docs. | ||
#98 | 29844 | C. Thomas Tyler |
Added sdp_health_check to SDP package. Updated docs in Guide and Release Notes to reflect this change. Added more docs for this in the SDP Guide. #review-29845 @vkanczes |
||
#97 | 29826 | C. Thomas Tyler | Regenerated HTML. | ||
#96 | 29824 | C. Thomas Tyler |
Added comment that P4SERVICEPASS is not used; it remains in place for backward compatibility. Added FAQ: How do I change super user password? Added FAQ: Can I remove the perforce user? Added FAQ: Can I clone a VM to create a standby replica? #review-29825 |
||
#95 | 29727 | Robert Cowham | Note the need for an extra p4 trust statement for $HOSTNAME | ||
#94 | 29719 | Robert Cowham |
Fix journal numbering example. Add section to make replication errors visible. |
||
#93 | 29715 | C. Thomas Tyler |
Doc correction. The sample command correctly indicates that `/home/perforce` should be the home directory, but the text still says should be `/p4`, the legacy location. Also added a note advising against user of automounted home dirs. #review-29716 |
||
#92 | 29700 | C. Thomas Tyler |
Updated Version to release SDP 2023.1.29699. Re-generated docs. |
||
#91 | 29693 | C. Thomas Tyler |
Adjusted /hxserverlocks recommendations: * Changed filesystem name from 'tmpfs' to 'HxServerLocks' in /etc/fstab. * Changed mount permissions from '0755' to '0700' to prevent data leaks. * Changed mounted filesystem size recommendations. * Updated info about size of files being 17 or 0 bytes depending on p4d version. * Indicated change should be done in a maintenance window (as /etc/fstab is modified). Also updated limited sudoers to include entries for running setcap and getcap. #review-29694 @robert_cowham |
||
#90 | 29622 | C. Thomas Tyler |
Updated Version to release SDP 2023.1.29621. Re-generated docs. |
||
#89 | 29611 | C. Thomas Tyler |
Updated Version to release SDP 2023.1.29610. Re-generated docs. |
||
#88 | 29483 | Robert Cowham | Clarify case-insensitive servers | ||
#87 | 29475 | Robert Cowham | For SELinux note the yum package to install for basics | ||
#86 | 29442 | C. Thomas Tyler |
Updated Version to release SDP 2022.2.29441. Re-generated docs. |
||
#85 | 29400 | C. Thomas Tyler |
Updated Version to release SDP 2022.2.29399. Re-generated docs. |
||
#84 | 29311 | C. Thomas Tyler |
Per Thomas Albert, adjusted title on doc page: From: Perforce Helix Server Deployment Package (for UNIX/Linux) To: Perforce Helix Core Server Deployment Package (for UNIX/Linux) #review-29312 @thomas_albert |
||
#83 | 29251 | C. Thomas Tyler |
Updated Version to release SDP 2022.2.29250. Re-generated docs. |
||
#82 | 29204 | C. Thomas Tyler |
Updated Version to release SDP 2022.1.29203. Re-generated docs. |
||
#81 | 29142 | C. Thomas Tyler |
Updated Version to release SDP 2022.1.29141. Re-generated docs. |
||
#80 | 29137 | C. Thomas Tyler | Added docs for proxy_rotate.sh, and updated docs for broker_rotate.sh. | ||
#79 | 29096 | Robert Cowham | Add a section on installing Swarm triggers | ||
#78 | 29055 | Robert Cowham | Update troubleshooting to check ckp_running.txt semaphore | ||
#77 | 29044 | Robert Cowham | Update to include troubleshooting 'p4 pull -ls' errors | ||
#76 | 29002 | C. Thomas Tyler |
Doc correction; tip refers to 'wget' in a sample command that uses curl instead. |
||
#75 | 28988 | C. Thomas Tyler |
Updated Version to release SDP 2022.1.28987. Re-generated docs. |
||
#74 | 28986 | C. Thomas Tyler |
Clarified text related to mandatory vs. nomandatory standby replicas. |
||
#73 | 28980 | Robert Cowham | Note how to configure Swarm to use postfix | ||
#72 | 28926 | Robert Cowham | Added check for Swarm JIRA project access. | ||
#71 | 28840 | C. Thomas Tyler |
Updated Version to release SDP 2022.1.28839. Re-generated docs. |
||
#70 | 28767 | C. Thomas Tyler |
SDP Guide Doc Updates: * Fixed typos. * Enhanced mandatory/nomandatory description. * Added detail to instructions on using the `perforce-p4python` packcage, and change reference from Swarm docs to the more general Perforce Packages page. * Refactored FAQ, Troubleshooting Guide, and Sample Procedures appendices for greater clarity. * Added Appendix on Brokers in Stack Topology #review-28768 |
||
#69 | 28686 | Robert Cowham | Clarify FAQ for replication errors | ||
#68 | 28667 | Robert Cowham |
Add a note re monitoring. Add some FAQ appendix questions. |
||
#67 | 28650 | C. Thomas Tyler |
Updated Version to release SDP 2021.2.28649. Re-generated docs. |
||
#66 | 28618 | C. Thomas Tyler | Fixed missing command re: .ssh directory generation. | ||
#65 | 28606 | C. Thomas Tyler |
Added SDP Health Checks appendix to UNIX/Linux SDP Guide. Also removed some references to '-k' (insecure) in curl statements. #review-28607 @d_benedict |
||
#64 | 28604 | Robert Cowham | Added notes for Python/P4Python and CheckCaseTrigger installation | ||
#63 | 28503 | Robert Cowham | Add SELinux tip | ||
#62 | 28496 | Robert Cowham | Fix typo in journalctl | ||
#61 | 28493 | Robert Cowham |
Added notes to get systemd SDP scripts working under SELinux Thanks to Rich Alloway! |
||
#60 | 28411 | C. Thomas Tyler |
Updated Version to release SDP 2021.2.28410. Re-generated docs. |
||
#59 | 28351 | Robert Cowham | Tweaked sdp upgrades docs. | ||
#58 | 28261 | C. Thomas Tyler | Fixed on-character doc typo (curk -> curl). | ||
#57 | 28257 | C. Thomas Tyler |
Updated Version to release SDP 2021.1.28253. Re-generated docs. |
||
#56 | 28246 | C. Thomas Tyler |
Enahnced the 'Upgrading the SDP' section of the SDP Guide: * Added sample command to deal with possibly existing tarball. * Added tips to enable less technical users to get past basic snags. * Added detail on how to find your /hxdepots directory if not default. |
||
#55 | 28239 | C. Thomas Tyler |
Updated Version to release SDP 2021.1.28238. Re-generated docs. |
||
#54 | 28230 | C. Thomas Tyler | Minor doc corrections. | ||
#53 | 28225 | C. Thomas Tyler | Enhanced info on upgrading the SDP. | ||
#52 | 28180 | C. Thomas Tyler |
Fixed oversight in documentation, describing how to check the SDP Version file. |
||
#51 | 28160 | C. Thomas Tyler | Regenerated HTML (no PDF) for easy review. | ||
#50 | 28154 | C. Thomas Tyler |
Added new Sample Procedures section. Added Sample Procedure: Reseeding an Edge Server Corrected teriminology re: 'instance' and 'process' and 'server' to be inline with other documentation and common usage. Other minor fixes. #review-28155 |
||
#49 | 28104 | C. Thomas Tyler | Fixed typo. | ||
#48 | 28102 | C. Thomas Tyler |
Clarified "breathing" comment (as in "breathing room") with more clear and more translatable language. #review-28103 @thomas_albert |
||
#47 | 28100 | C. Thomas Tyler |
Updated SDP Guide for UNIX/Linux: * Filled in missing information re: new upgrades. * Expanded on definition of vague "Exceptionally large" term. Generating HTML for easy review; holding off on PDF as it will be generated during the release. #review-28101 @roadkills_r_us |
||
#46 | 28071 | Robert Cowham | Clarify some notes re setting up Gmail | ||
#45 | 27978 | Robert Cowham |
Clarifications and warnings around load_checkpoint.sh Mention recreate_offline_db.sh a little more prominently Recommend installing postfix for mail. |
||
#44 | 27920 | C. Thomas Tyler |
Updated Version to release SDP 2020.1.27919. Re-generated docs. |
||
#43 | 27900 | C. Thomas Tyler |
Updated Version to release SDP 2020.1.27899. Re-generated docs. |
||
#42 | 27890 | C. Thomas Tyler |
Updated Release Notes and SDP Guide to clarify SDP r20.1 supports Helix Core binaries up to r21.1, in advance of the coming SDP r21.1 release that will make it more obvious. In get_helix_binaries.sh: * Changed default Helix Core binary version to r21.1. * Changed examples of getting a different version to reference r20.2. #review-27891 @amo |
||
#41 | 27821 | C. Thomas Tyler |
Updated Version to release SDP 2020.1.27820. Re-generated docs. |
||
#40 | 27764 | C. Thomas Tyler |
Updated Version to release SDP 2020.1.27763. Re-generated docs. |
||
#39 | 27760 | C. Thomas Tyler |
Updated Version to release SDP 2020.1.27759. Re-generated docs. |
||
#38 | 27725 | C. Thomas Tyler | Re-generated HTML and PDF from adoc files. | ||
#37 | 27722 | C. Thomas Tyler |
Refinements to @27712: * Resolved one out-of-date file (verify_sdp.sh). * Added missing adoc file for which HTML file had a change (WorkflowEnforcementTriggers.adoc). * Updated revdate/revnumber in *.adoc files. * Additional content updates in Server/Unix/p4/common/etc/cron.d/ReadMe.md. * Bumped version numbers on scripts with Version= def'n. * Generated HTML, PDF, and doc/gen files: - Most HTML and all PDF are generated using Makefiles that call an AsciiDoc utility. - HTML for Perl scripts is generated with pod2html. - doc/gen/*.man.txt files are generated with .../tools/gen_script_man_pages.sh. #review-27712 |
||
#36 | 27710 | Robert Cowham | Another tweak to tmpfs settings | ||
#35 | 27709 | Robert Cowham |
Note check for serverlocks. Fix typo in path in failover. |
||
#34 | 27536 | C. Thomas Tyler |
Legacy Upgrade Guide doc updates: * Added 'Put New SDP in Place' section. * Added 'Set SDP Counters' section to set SDP_VERSION and SDP_DATE counters. * Covered updating depot spec Map fields. * Covered adding server.id files. * Added missing content on putting new SDP directory in place. SDP_Guide.Unix doc updates: * Added Legacy Upgrade Scripts section w/clear_depot_Map_fields.sh. Updated Makefile with new doc build dependencies. Regenerated docs. |
||
#33 | 27526 | C. Thomas Tyler |
Updated Version to release SDP 2020.1.27524. Re-generated docs. |
||
#32 | 27505 | C. Thomas Tyler |
Enhanced doc for Systemd/SysV services management and configuration docs, separating basic configuration for start/stop/status from enabling for start on boot (with Systemd/SysV variations for each). Added doc coverage for using systemd to enable multiple broker configs. Added doc coverage for applying limited sudo. Spell check. |
||
#31 | 27462 | C. Thomas Tyler |
Updated Version to release SDP 2020.1.27457. Re-generated docs. |
||
#30 | 27414 | C. Thomas Tyler | Updated SDP Guide. | ||
#29 | 27406 | C. Thomas Tyler | Updated Version to release SDP 2020.1.27403. | ||
#28 | 27398 | C. Thomas Tyler | Refined Makefile for generating docs and regenerated docs. | ||
#27 | 27351 | C. Thomas Tyler | Updated AsciiDoc-generated files. | ||
#26 | 27322 | C. Thomas Tyler | Updated AsciiDoc-generated files. | ||
#25 | 27253 | C. Thomas Tyler | Updated generated docs. | ||
#24 | 27213 | C. Thomas Tyler | Regenerated docs. | ||
#23 | 27156 | C. Thomas Tyler |
Consolidated SDP Standards into the SDP Guide for UNIX/Linux. Added references to those sections in the Windows SDP Guide. Normalized doc titles. Various other doc updates.
||
#22 | 27096 | C. Thomas Tyler |
Refactored SDP Legacy Upgrade content into a separate doc. The SDP Guide will remain comprehensive and cover how to upgrade the SDP itself forward from the current version (2020.1) using the new, p4d-like incremental upgrade mechanism. The content for manual upgrade procedures needed to get older SDP installations to 2020.1 is only useful until sites are on 2020.1. This content is extensive, narrowly focused, and of value only once per installation, and thus the legacy upgrade content is separated into its own document. Regenerated work-in-progress HTML files for easier review.
||
#21 | 27074 | C. Thomas Tyler | Regenerated SDP Guide docs from adoc. | ||
#20 | 27058 | Robert Cowham |
Added direct links to the various scripts where they are explained. Tweaked some wording in the SDP upgrade section.
||
#19 | 27055 | C. Thomas Tyler |
Pulled the SDP Upgrade Guide for Linux into the main SDP Guide, and deleted the separate upgrade doc. Also other minor refinements. Pulled in updated mkrep.sh v2.5.0 docs. This version is still in progress. Search for EDITME to find areas requiring additional content.
||
#18 | 27041 | Robert Cowham |
Windows Guide directly includes chunks of the Unix guide for replication etc., with a little ifdef to avoid Unix-only comments. Fix Makefile and add missing generated man page.
||
#17 | 27033 | C. Thomas Tyler | Work-in-progress updates to SDP_Guide.Unix. | ||
#16 | 27021 | C. Thomas Tyler |
Re-ordered so `systemd` info comes first (as it is more likely to be relevant), and older SysV docs deferred. Various other tweaks. |
||
#15 | 27014 | C. Thomas Tyler | Regenerated AsciiDoc output. | ||
#14 | 26992 | Robert Cowham | Document SiteTags.cfg file | ||
#13 | 26851 | Robert Cowham |
Fix typo in the tmpfs /etc/fstab entry in the doc, which stopped it from working. Mention in the pre-requisites for failover, and in the failover guide, the need to review OS config for your failover server. Document Ubuntu 20.04 LTS and CentOS/RHEL 8 support. Note performance has been observed to be better with CentOS. Document pull.sh and submit.sh in the main SDP guide (remove from Unsupported doc). Update comments in triggers to reflect that they are reference implementations, not just examples. No code change.
||
#12 | 26780 | Robert Cowham | Complete rename of P4DNSNAME -> P4MASTERHOST | ||
#11 | 26755 | Robert Cowham | Include p4verify.sh man page in SDP Guide automatically for usage section. | ||
#10 | 26748 | Robert Cowham |
Add recommended performance tweaks: - THP off - server.locks directory into RAM |
||
#9 | 26747 | Robert Cowham |
Update with some checklists for failover to ensure validity. Update to v2020.1. Add Usage sections where missing to the Unix guide. Refactor the content in the Unix guide to avoid repetition and make things read more sensibly.
||
#8 | 26727 | Robert Cowham |
Add section on server host naming conventions. Clarify HA and DR, and update links across docs. Fix doc structure for Appendix numbering.
||
#7 | 26661 | Robert Cowham |
Tidying up cross references. Added missing sync_replica.sh docs. |
||
#6 | 26654 | Robert Cowham |
First draft of new Failover Guide using "p4 failover". Linked from SDP Unix Guide.
||
#5 | 26649 | Robert Cowham |
More SDP doc tidy-up. Removed some command summary files.
||
#4 | 26644 | Robert Cowham |
SDP Doc Update to address jobs. Mainly documents scripts which lacked any mention. |
||
#3 | 26637 | Robert Cowham |
Include script help within doc. Requires a couple of tags in the scripts themselves.
||
#2 | 26631 | Robert Cowham | New AsciiDoc version of Windows SDP guide | ||
#1 | 26629 | Robert Cowham |
Fixed Makefile to generate HTML. Check in theme. Some notes in README. Remove the .docx!