From: "Mark D. Baushke" <mark.baushke@solipsa.com>
Date: Thu, 29 Jul 1999 16:44:04 -0700
Subject: Looking for Jamrules to deal with java
Has anyone already written the rules and actions needed to compile
.java files into .class files and update .jar files with newly built
.class files? If so, I'd love to get a copy of them.
From: linda_farrenkopf@liebert.com
Date: Mon, 09 Aug 1999 14:36:05 -0400
Subject: Setting Environment Variables
I need to set and export an environment variable during my Jam build. The
compiler I am using does not accept a command-line method of specifying the
include directories, but requires an environment variable setting instead. The
problem is that my project includes multiple trees, each of which needs
slightly different settings for this variable, C51INC. Could someone please
explain how to do this from within a Jamfile?
Date: Mon, 09 Aug 1999 12:48:23 -0700
From: Steve Bennett <steveb@portal.com>
Subject: Re: Setting Environment Variables
The simplest way to do this is to write a wrapper script.
On Unix:
#!/bin/sh
#
# Runs cc with arg1 as INCS and arg2 as LIBS
#
INCS="$1"; shift
LIBS="$1"; shift
export INCS LIBS
cc "$@"
You can do something similar on NT.
From: Laura Wingerd <laura@perforce.com>
Subject: Re: Setting Environment Variables
Date: Mon, 9 Aug 1999 13:41:45 -0700 (PDT)
Is this something you could set right in the compile action? E.g.,
say you use the C++ rule to compile. You could modify your Jambase's
C++ actions to look something like:
if $(UNIX) {
actions C++ {
C51INC="$(C51INC)"
export C51INC
$(C++) -c $(C++FLAGS) $(OPTIM) -I$(HDRS) $(>)
}
}
if $(NT) {
actions C++ {
set C51INC=$(C51INC)&
$(C++) -c $(C++FLAGS) $(OPTIM) -I$(HDRS) -o$(<) $(>)
}
}
Then, somewhere in the Main or Library rule (or your own versions
thereof), you'd set C51INC on each target, e.g.,
C51INC on $(<) = yourvaluehere ;
From: Roark Hennessy <RHennessy@Stac.com>
Date: Tue, 10 Aug 1999 12:28:48 -0700
Subject: NEWBIE:directory dependencies?
Just learning jam and have a basic question...
TOP/
    Jamfile
    Jamrules
    include/
        Xyz.h
    shared/
        s1.cpp
        s2.cpp
        s3.cpp
        Jamfile
    test/
        test.cpp
        Jamfile
In TOP/Jamrules:
GZHDRS = $(TOP)/include /usr/include ;
GZFLAG = -g ;
In TOP/Jamfile:
SubInclude TOP shared ;
SubInclude TOP test ;
In TOP/shared:
SubDir TOP shared ;
SubDirHdrs $(GZHDRS) ;
SubDirC++Flags $(GZFLAG) ;
Library s : s1.cpp s2.cpp s3.cpp ;
In TOP/test
SubDir TOP test ;
SubDirHdrs $(GZHDRS) ;
SubDirC++Flags $(GZFLAG) ;
Main RunMe : test.cpp ;
LinkLibraries RunMe : s ;
The problem is that the TOP/test/Jamfile does not know how to build the
TOP/shared library (s). If I run jam from the TOP/shared subdirectory
manually to produce the s.a file, then switch to the TOP/test subdirectory
and run jam from there, it skips the link because it cannot find the
library.
If I change the LinkLibraries line to LinkLibraries RunMe : ../shared/s ;
then the link succeeds only if s.a is present; if s.a is not there,
then the TOP/test/Jamfile does not know how to build it.
Oh, and I'm running on Red Hat Linux 6.0, Jam/MR version 2.2.1, on Intel.
Date: Tue, 10 Aug 1999 16:26:35 -0500 (CDT)
From: Scott McCaskill <scott@pe-i.com>
Subject: Re: NEWBIE:directory dependencies?
I'm also new to jam, and I just tackled this same problem myself. Here's
what I came up with:
# SubIncludeOnce -- like SubInclude, but will only include each Jamfile once.
# This is handy for specifying dependencies between things in different
# directories. Usually SubIncludeOnce has to go at the end of the Jamfile.
rule SubIncludeOnce {
    local i ;
    local include_marker ;
    include_marker = included ;
    # value of include_marker is the concatenated directory names in the
    # path to the directory being included
    for i in $(<) {
        include_marker = $(include_marker)_$(i) ;
    }
    # if the variable whose name is the value of include_marker does not
    # exist, then we know we haven't included that directory yet.
    if ! $($(include_marker)) {
        # Do not include more than once
        $(include_marker) = TRUE ;
        SubInclude $(<) ;
    # } else {
    #     ECHO "Already included: " $(<) ;
    }
}
I think you'll also need something like this to set up the dependency
between the library and the executable:
Depends RunMe : s ;
The Depends line may have to come before the SubIncludeOnce line in the
Jamfile, and the SubIncludeOnce lines may have to go at the end of the
Jamfile (they did for me).
I use this instead of SubInclude to set up dependencies between
executables and libraries. If you forget to use SubIncludeOnce and use
SubInclude instead, you may see some things get built more than once.
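For illustration, a sketch of how the test/Jamfile from the original message
might use this rule (untested; the Depends line comes before the trailing
SubIncludeOnce, per the ordering described above):

    SubDir TOP test ;
    SubDirHdrs $(GZHDRS) ;
    SubDirC++Flags $(GZFLAG) ;
    Main RunMe : test.cpp ;
    LinkLibraries RunMe : s ;
    Depends RunMe : s ;
    SubIncludeOnce TOP shared ;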
I also have RH6, jam 2.2.5. I haven't modified the Jambase.
BTW, if I don't reply, it's probably because I'll be out for a week
starting Thursday.
Date: Tue, 10 Aug 1999 15:47:19 -0700 (PDT)
From: Diane Holt <dianeh@whistle.com>
Subject: Re: NEWBIE:directory dependencies?
When you're sitting in your top-level dir, your SubInclude's say to read in
the Jamfiles in the directories listed.
When you're sitting in the "test" dir, Jam is only going to read in the
local Jamfile, because you've not given it any directive to read in any
other Jamfile. So if you're only reading in the local Jamfile, then it's
only going to know about what's in that one. Having the executable link
against a library that gets built elsewhere isn't enough to make Jam go
someplace else to try and build it -- how would it know where it's supposed
to go? Using a relative (or even a full) path to specify where the library
can be found still isn't telling Jam where to go to build it (for all it
knows, maybe it gets built one place and installed into where it can be found).
Ordinarily, things are organized from a "top-down" perspective, and when
you're sitting in a subdirectory, you only want to build what's in that
directory (and anything below it). But there's no law that says you can't
go build things in other places if that's what you want to do. You just
need to tell it where to go.
BTW: I've gotten rid of things that aren't relevant to this particular thing,
like the header stuff -- oh, and I've used the -L flag to point to where s.a
lives (I didn't bother to make it libs.a...I was feeling lazy :)
So, given your example...
# Top-level Jamfile
SubInclude TOP shared ;
SubInclude TOP test ;
# Jamfile for shared
SubDir TOP shared ;
# To avoid being multiply included
S_INCLUDED = true ;
Library s : s.c ;
# Jamfile for test
# Note: Put this first so SubDir will still get set correctly
if ! $(S_INCLUDED) {
SubInclude TOP shared ;
}
SubDir TOP test ;
Main RunMe : main.c ;
LINKFLAGS on RunMe = -L $(TOP)/shared ;
LinkLibraries RunMe : s ;
Now, if you're in the "test" dir, and (lib)s.a needs to get built, it will:
% cd test
% jam -n
...found 19 target(s)...
...updating 4 target(s)...
Cc /tmp/roark/shared/s.o
cc -c -O -I/tmp/roark/shared -o /tmp/roark/shared/s.o /tmp/roark/shared/s.c
Archive /tmp/roark/shared/s.a
ar ru /tmp/roark/shared/s.a /tmp/roark/shared/s.o
Ranlib /tmp/roark/shared/s.a
ranlib /tmp/roark/shared/s.a
RmTemps /tmp/roark/shared/s.a
rm -f /tmp/roark/shared/s.o
Cc /tmp/roark/test/main.o
cc -c -O -I/tmp/roark/test -o /tmp/roark/test/main.o /tmp/roark/test/main.c
Link /tmp/roark/test/RunMe
cc -L /tmp/roark/shared -o /tmp/roark/test/RunMe /tmp/roark/test/main.o /tmp/roark/shared/s.a
Chmod /tmp/roark/test/RunMe
chmod 711 /tmp/roark/test/RunMe
...updated 4 target(s)...
And it will also work from the top-level directory:
% cd $TOP
% jam -n
...found 19 target(s)...
...updating 4 target(s)...
Cc /tmp/roark/shared/s.o
cc -c -O -I/tmp/roark/shared -o /tmp/roark/shared/s.o /tmp/roark/shared/s.c
Archive /tmp/roark/shared/s.a
ar ru /tmp/roark/shared/s.a /tmp/roark/shared/s.o
Ranlib /tmp/roark/shared/s.a
ranlib /tmp/roark/shared/s.a
RmTemps /tmp/roark/shared/s.a
rm -f /tmp/roark/shared/s.o
Cc /tmp/roark/test/main.o
cc -c -O -I/tmp/roark/test -o /tmp/roark/test/main.o /tmp/roark/test/main.c
Link /tmp/roark/test/RunMe
cc -L /tmp/roark/shared -o /tmp/roark/test/RunMe /tmp/roark/test/main.o /tmp/roark/shared/s.a
Chmod /tmp/roark/test/RunMe
chmod 711 /tmp/roark/test/RunMe
...updated 4 target(s)...
From: "Binder, Duane" <dbinder@globalmt.com>
Subject: RE: Setting Environment Variables
Date: Tue, 10 Aug 1999 18:05:24 -0500
Is there a reason that '&' is necessary in the NT actions?
It appears to me that Jam adds an extra space that CMD.exe does not ignore.
From: Laura Wingerd [mailto:laura@perforce.com]
Subject: Re: Setting Environment Variables
Is this something you could set right in the compile action? E.g.,
say you use the C++ rule to compile. You could modify your Jambase's
C++ actions to look something like:
if $(UNIX) {
actions C++ {
C51INC="$(C51INC)"
export C51INC
$(C++) -c $(C++FLAGS) $(OPTIM) -I$(HDRS) $(>)
}
}
if $(NT) {
actions C++ {
set C51INC=$(C51INC)&
$(C++) -c $(C++FLAGS) $(OPTIM) -I$(HDRS) -o$(<) $(>)
}
}
From: Roark Hennessy <RHennessy@Stac.com>
Subject: RE: NEWBIE:directory dependencies?
Date: Tue, 10 Aug 1999 18:46:20 -0700
Thanks, that worked.
I understood all of it, but can you explain the:
LINKFLAGS on RunMe = -L $(TOP)/shared ;
Below...
BTW: I've gotten rid of things that aren't relevant to this particular thing,
like the header stuff -- oh, and I've used the -L flag to point to where s.a
lives (I didn't bother to make it libs.a...I was feeling lazy :)
# Jamfile for test
# Note: Put this first so SubDir will still get set correctly
if ! $(S_INCLUDED) {
SubInclude TOP shared ;
}
SubDir TOP test ;
Main RunMe : main.c ;
LINKFLAGS on RunMe = -L $(TOP)/shared ;
LinkLibraries RunMe : s ;
Date: 11 Aug 1999 03:53:30 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: Re: NEWBIE:directory dependencies?
my jamfile:
if ! $(INSTACAST_CLIENT_BACKEND_INCLUDED) {
SubInclude TOP client backend ;
}
SubDir TOP client gtkclient ;
% jam
Top level of source tree has not been set with TOP
If I put "SubDir TOP client gtkclient ;" before and after the if, then all
goes well.
Date: Tue, 10 Aug 1999 23:24:54 -0700 (PDT)
From: Diane Holt <dianeh@whistle.com>
Subject: RE: NEWBIE:directory dependencies?
It was just an illustration of how to point the linker to where
libraries live instead of using a path (you had used ../shared/libs.a
in one of your examples).
You don't actually need it in this case, since the library ends up
with a full-path name. But if you were linking against other libraries
that you weren't trying to build while in your "test" directory but that
weren't in the standard look-for-libraries places, then you'd use it.
Date: Tue, 10 Aug 1999 23:33:42 -0700 (PDT)
From: Diane Holt <dianeh@whistle.com>
Subject: Re: NEWBIE:directory dependencies?
This means you don't have $TOP set.
You wouldn't need to do this if you have $TOP set. My advice would
be to set $TOP.
Date: Thu, 12 Aug 1999 13:30:03 -0700
From: Brendan McCarthy <mccarthy@justintime.com>
Subject: Jam and Java
Has anybody on this list tackled the problem of dependency analysis when
compiling Java source? We currently use a system built on GNU make, and
the system cannot properly detect the targets that need to be built
because of the ambiguity of Java's "import" statement (similar to C's
"#include"). As a result there is no such thing as a "do nothing"
build, and everything is rebuilt every time. Can jam be used to solve
this problem?
Subject: Re: Jam and Java
From: "Mark D. Baushke" <mark.baushke@solipsa.com>
Date: Thu, 12 Aug 1999 13:51:31 -0700
mccarthy> Has anybody on this list tackled the problem of dependency
mccarthy> analysis when compiling Java source?
I've been looking for anyone who has java rules for jam myself. So
far, no one has come forth.
mccarthy> We currently use a system built on GNU make, and the system
mccarthy> cannot properly detect the targets that need to be built
mccarthy> because of the ambiguity of Java's "import" statement
mccarthy> (similar to C's "#include").
Yeah, lines like:
import com.domain.foo.*;
can be parsed by a regexp in the HDRSCAN variable, but it is more
difficult to look in the various .jar files to see which files satisfy
the import, or to notice lines like
package com.domain;
that tell you in which package particular source files are to be
found. It is also difficult to know that a single .java file may
generate multiple .class files, and to keep the rules for cleaning up
derived .class files correct.
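For what it's worth, the HDRSCAN part can be sketched like this, where
$(source) stands for a .java target and the rule name and regexp are
illustrative. The hard parts named above are deliberately left open:
wildcard imports, classes satisfied from .jar files, and mapping a dotted
package name to a file path are not handled.

    HDRSCAN on $(source) = "^[ \t]*import[ \t]+([A-Za-z0-9_.]*)[ \t]*;" ;
    HDRRULE on $(source) = JavaImport ;

    rule JavaImport {
        # $(<) is the scanned .java file, $(>) the dotted import names.
        # Translating com.domain.Foo into com/domain/Foo.java is not
        # expressible with classic Jam variable modifiers, so that step
        # is only indicated here.
        Includes $(<) : $(>) ;
    }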
mccarthy> As a result there is no such thing as a "do nothing" build,
mccarthy> and everything is rebuilt every time. Can jam be used to
mccarthy> solve this problem?
That would be my hope, but I do not yet have any evidence to back up the idea.
Date: 12 Aug 1999 21:22:25 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: shared object rule
this is in my Jamfile:
SUFSO default = .so ;

rule SharedLibrary {
    Library $(<) : $(>) ;
    MainFromObjects $(<)$(SUFSO) ;
    LinkLibraries $(<)$(SUFSO) : $(<) ;
    LINKFLAGS on $(<)$(SUFSO) += -shared ;
    RmTemps $(<)$(SUFSO) : $(<)$(SUFLIB) ;
}

SharedLibrary foo : foo.c ;
and all works, except that every time I build, it rebuilds everything:
% jam
...found 12 target(s)...
...updating 3 target(s)...
C++ chat.o
Archive chat.a
Ranlib chat.a
Link chat.so
Chmod chat.so
...updated 3 target(s)...
% jam
...found 12 target(s)...
...updating 3 target(s)...
C++ chat.o
Archive chat.a
Ranlib chat.a
Link chat.so
Chmod chat.so
...updated 3 target(s)...
I think it's rebuilding the .so because the .a is missing, and rebuilding
the .c's because they are needed to build the .a.
I think what I need is to make the .so depend on the .c's, but I have
no idea how to get that to work.
I tried adding
Depends $(<)$(SUFSO) : $(>) ;
and
Depends $(<) : $(>) ;
as the first line of the rule, but neither worked :(
Date: Thu, 12 Aug 1999 14:59:10 -0700 (PDT)
Subject: Re: Jam and Java
There are various dependency issues with Java that no build
tool has quite caught up with (not even Jam or GNU make) -- and
of course, Java keeps changing on us. :-)
There are two issues with Java that put constraints on what
you can do with a build tool.
The first issue is that the javac compiler's built-in
dependency-checking still doesn't reliably work for a set of
Java files with circular dependencies. That is, if you try to
compile a specific .java file, or a few specific .java files,
it may miss compiling others that these files depend on. The
only way to reliably catch these dependencies is to run javac
with a wildcard to catch all of them:
javac *.java
This is actually pretty fast, even for directories with a lot of Java files.
The second issue is that the javac compiler creates .class
files with names that it generates on the fly. Nobody but javac
knows what these files will be named, so it becomes a headache
to maintain the build files. If you're going to be doing
something with the .class files, such as making a .jar archive,
you'll need to use a wildcard:
jar cf Cookie.jar *.class
These issues have prompted me to set up my Java development
in directories that correspond to packages (a fine idea in its
own right, I think) and to write rules in which the .jar file is
the target and the above commands are the actions. I let javac
and wildcards handle the finer granularity of the build.
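A sketch of such a rule, assuming one directory per package (the rule name
JavaJar and the .java file names are invented; the wildcard commands from
above do the fine-grained work, and the sketch assumes jam is run from the
package directory so the wildcards match):

    rule JavaJar {
        # $(<) is the jar file, $(>) the .java sources in its directory
        Depends $(<) : $(>) ;
        Clean clean : $(<) ;
    }

    actions JavaJar {
        javac *.java
        jar cf $(<) *.class
    }

    JavaJar Cookie.jar : Cookie.java Monster.java ;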
P.S.: BTW, you might be interested in the Java-handling Jambase
that Ames Carlson wrote. You can find it in the list archives:
Date: Thu, 12 Aug 1999 15:38:41 -0700 (PDT)
From: Diane Holt <dianeh@whistle.com>
Subject: Re: shared object rule
Actually, there's lots of reasons, but they all come from the same
basic thing -- you're removing the .a that MainFromObjects uses,
and that Library wants as well, and so does LinkLibraries.
So the short answer is: don't remove the .a.
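In other words, the earlier SharedLibrary rule should work if the RmTemps
line is simply dropped (a sketch, otherwise unchanged from that message):

    rule SharedLibrary {
        Library $(<) : $(>) ;
        MainFromObjects $(<)$(SUFSO) ;
        LinkLibraries $(<)$(SUFSO) : $(<) ;
        LINKFLAGS on $(<)$(SUFSO) += -shared ;
        # no RmTemps here: the .a must survive between builds, since
        # Library, MainFromObjects and LinkLibraries all key off it
    }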
From: "Reddy, Jagannatha" <Jagannatha_Reddy@bmc.com>
Subject: Building Shared Library
Date: Thu, 12 Aug 1999 17:50:58 -0500
I am new to Jam. I have a requirement to build
Shared Libraries on multiple OS (AIX, HP-UX, Solaris).
Please let me know if you have any solutions for the same.
Date: Thu, 12 Aug 1999 18:58:04 -0400
From: Randy McCaskill <Randy_McCaskill@lcc.com>
Subject: Problem with jam dependencies
I am having a problem with jam's generated dependency list. I have
trimmed my tree down to a simple example (attached in a zip file).
I have a subdirectory with a couple of header files in it. The
source file (main.c) includes one of the header files using
<SubDir/file1.h>. That include file then includes another include file,
"file2.h" (using quotation marks, since it is also in SubDir). The source
file (which isn't in SubDir) should be dependent on both files, but jam
doesn't catch the dependency on file2.h since it can't find the file in
the search path. I could add SubDir to my include path so that it is
searched, but outside of my small example I have quite a few
directories in my include tree, and I don't really want to have to add
them all.
Is there a simple solution to my problem? I am not currently on the
mailing list, but am reading via the archives, so please reply directly
as well as to the mailing list.
From: Paul Haffenden <pjh@unisoft.com>
Date: Fri, 13 Aug 1999 11:00:39 BST
Subject: Allowing the rules to determine the target specified on the command line.
We have found it useful to allow our jam rules to find out how jam has
been invoked. We have several extra pseudo targets, and find that when
we are using our 'source' pseudo target, we don't want all the compile
rules to be active.
By adding code in jam.c, we make a symbol $(ARGV) visible that contains
the targets specified on the command line.
In our Jamrules file we have:
for i in $(ARGV) {
    switch $(i) {
    case sourceood :
        TARGET_SOURCE = 1 ; TARGET_NOBUILD = 1 ; TARGET_SOURCEOOD = 1 ;
        Depends sourceood : source ;
    case source* :
        TARGET_SOURCE = 1 ; TARGET_NOBUILD = 1 ;
    case push* :
        TARGET_PUSH = 1 ; TARGET_NOSUBINCS = 1 ; TARGET_NOBUILD = 1 ;
    case chmog* :
        TARGET_CHMOG = 1 ;
    case scen* :
        TARGET_SCEN = 1 ; TARGET_NOBUILD = 1 ;
    case pkgs* :
        TARGET_PKGS = 1 ; TARGET_NOBUILD = 1 ;
    case lang* :
        TARGET_LANG = 1 ; TARGET_NOBUILD = 1 ;
    case install* :
        TARGET_INSTALL = 1 ;
    case root* :
        TARGET_ROOT = 1 ; TARGET_NOBUILD = 1 ;
    case srcpkgs* :
        TARGET_SRCPKGS = 1 ; TARGET_NOBUILD = 1 ;
    case remote :
        TARGET_REMOTE = 1 ;
    }
}
The variables TARGET_* are then tested in our rules to see
if they are required to do something.
e.g:
if $(TARGET_INSTALL) {
    # now call Makei to do the hard work.
    Makei $(<) : $(s) : $(4) :
        $(i)$(t5)T : $(stag) : $(tdir) : $(sdir) ;
}
if $(TARGET_PKGS) {
    Makefilelist pkgs : $(PKG)$(i) :
        abits$(SLASH)$(tdir)$(SLASH)$(<) $(4) f ;
}
Here are the code changes:
    if( strlen( date ) == 25 )
        date[ 24 ] = 0;

    var_set( "JAMDATE", list_new( L0, newstr( date ) ), VAR_SET );
    }

    /* set up ARGV.  This is a UniSoft addition. */

    if( argc )
    {
        LIST *largv = L0;
        int i;

        for( i = 0; i < argc; i++ )
            largv = list_new( largv, newstr( argv[i] ) );

        var_set( "ARGV", largv, VAR_SET );
    }

    /* load up environment variables */

    var_defines( environ );
    var_defines( othersyms );
From: Peter Glasscock <peterg@harlequin.co.uk>
Subject: Re: Allowing the rules to determine the target specified on the command line.
Date: Fri, 13 Aug 1999 11:18:57 +0100 (BST)
With large projects, the scanning of source files is quite
time-consuming and can severely slow down a build when only one or two
source files have changed.
I have implemented some new rules (FILEOPEN, FILEWRITE, FILECLOSE) to
allow Jam to write lines to files whilst processing the jambase (or
equivalent). By using these to write valid jam files, which can be read
in on later invocations, I "cache" the dependency information found with
HDRRULE and HDRSCAN.
At the moment, I am using a wrapper around Jam which sets a variable on
the command-line with a list of all the targets that the user has asked
for. A solution like this, if it was incorporated into the main Jam
source would make this hack unnecessary.
If a large number of other people are interested in my dependency
caching, and/or the FILE* rules I've added, I'll post them. The actual
dependency caching use of the FILE* rules is quite complicated and
took some time to get right with my own replacement for the Jambase.
It would probably take a bit of work to incorporate it into the example
one that comes with the Jam source.
From: "Darrin Edelman" <darrin@aetherworks.com>
Date: Fri, 13 Aug 1999 09:23:18 -0500
Subject: Cross Compiling for VxWorks on NT using Jam
We have run into an issue that we thought others might also have to deal
with or may need to deal with in the future. That is that Jam is not really
cross-compile friendly. It assumes a certain form of library exists on each
platform which is generally a reasonable thing to do since on Windows you
have Windows libraries and on Unix you have Unix style libraries. This
however is not necessarily the case if you are cross-compiling.
The fix is relatively simple -- you need to allow the use of different style
libraries independent of system architecture. Attached are some diffs that
do just this for Unix style libraries under Windows. Note that we haven't
really added any new code -- we have just copied the appropriate code from
fileunix.c as mentioned in the diffs into filent.c to support this feature.
Of course, this is a quick and dirty hack that uses the environment to
determine which libraries should be used. Ideally, this code should be
factored out into a function and supported via a command-line option for
cross-compilation.
We really don't like the notion of using our own private version of Jam, so
we would be more than happy to spend the time to add the command-line option
properly, but hesitate to do the work without assurance that it will be
included in the next release of Jam. If anyone knows how to make this
happen, please contact me.
Also, if someone knows a better way to handle this then please do tell...
From: Temesgen Habtemariam [mailto:temesgen@jeeves.net]
Sent: Thursday, May 13, 1999 11:52 AM
Subject: Cross Compiling for VxWorks on NT
Jam assumes we are using NT-style libraries if we are compiling on the NT
platform. The VxWorks libraries are archived in Unix style and need to be
scanned in that manner. So I had to copy the code for file_archscan() from
the file fileunix.c to filent.c for the case where we are cross-compiling
for VxWorks. I have assumed the variable VXWORKS is defined for VxWorks
compilations (maybe there is a better way of telling whether we are
cross-compiling). Here is a file that has the diff for filent.c
D:/Jeeves\tools\Jam\filent.c ====
***************
*** 16,24 ****
# include "jam.h"
# include "filesys.h"
# include <io.h>
# include <sys/stat.h>
-
/*
* filent.c - scan directories and archives on NT
*
# include "jam.h"
# include "filesys.h"
+ /* Added the following two includes to use var_get function in
+ the VXWORKS HACK in file_archscan function. */
+ # include "lists.h"
+ # include "variable.h"
+
# include <io.h>
# include <sys/stat.h>
/*
* filent.c - scan directories and archives on NT
*
***************
*** 168,173 ****
char *archive;
void (*func)();
{
+
+ /************************* BEGIN HACK *******************************/
+
+ /* VXWORKS uses Unix type libraries. The following code is copied from
+ fileunix.c lines 164 - 248. */
+ /* FIXIT: We are assuming VXWORKS is defined for VXWORKS compiles */
+
+ if(var_get("VXWORKS")) /* cross-compiling for vxworks */
+ {
+ struct ar_hdr ar_hdr;
+ char buf[ MAXJPATH ];
+ long offset;
+ char *string_table = 0;
+ int fd;
+
+ if( ( fd = open( archive, O_RDONLY, 0 ) ) < 0 )
+ return;
+
+ if( read( fd, buf, SARMAG ) != SARMAG ||
+ strncmp( ARMAG, buf, SARMAG ) )
+ {
+ close( fd );
+ return;
+ }
+
+ offset = SARMAG;
+
+ if( DEBUG_BINDSCAN )
+ printf( "scan archive %s\n", archive );
+
+ while( read( fd, &ar_hdr, SARHDR ) == SARHDR &&
+ !memcmp( ar_hdr.ar_fmag, ARFMAG, SARFMAG ) )
+ {
+ char lar_name[256];
+ long lar_date;
+ long lar_size;
+ long lar_offset;
+ char *c;
+ char *src, *dest;
+
+ strncpy( lar_name, ar_hdr.ar_name, sizeof(ar_hdr.ar_name) );
+
+ sscanf( ar_hdr.ar_date, "%ld", &lar_date );
+ sscanf( ar_hdr.ar_size, "%ld", &lar_size );
+
+ if (ar_hdr.ar_name[0] == '/')
+ {
+ if (ar_hdr.ar_name[1] == '/')
+ {
+ /* this is the "string table" entry of the symbol table,
+ ** which holds strings of filenames that are longer than
+ ** 15 characters (ie. don't fit into a ar_name
+ */
+
+ string_table = malloc(lar_size);
+ lseek(fd, offset + SARHDR, 0);
+ if (read(fd, string_table, lar_size) != lar_size)
+ printf("error reading string table\n");
+ }
+ else if (ar_hdr.ar_name[1] != ' ')
+ {
+ /* Long filenames are recognized by "/nnnn" where nnnn is
+ ** the offset of the string in the string table represented
+ ** in ASCII decimals.
+ */
+ dest = lar_name;
+ lar_offset = atoi(lar_name + 1);
+ src = &string_table[lar_offset];
+ while (*src != '/')
+ *dest++ = *src++;
+ *dest = '/';
+ }
+ }
+
+ c = lar_name - 1;
+ while( *++c != ' ' && *c != '/' )
+ ;
+ *c = '\0';
+
+ if ( DEBUG_BINDSCAN )
+ printf( "archive name %s found\n", lar_name );
+
+ sprintf( buf, "%s(%s)", archive, lar_name );
+
+ (*func)( buf, 1 /* time valid */, (time_t)lar_date );
+
+ offset += SARHDR + ( ( lar_size + 1 ) & ~1 );
+ lseek( fd, offset, 0 );
+ }
+
+ if (string_table)
+ free(string_table);
+
+ close( fd );
+ }
+ /*************************** END HACK ************************************/
+ else /* NOT cross-compiling for vxworks */
+ {
struct ar_hdr ar_hdr;
char *string_table = 0;
char buf[ MAXJPATH ];
***************
*** 255,260 ****
}
close( fd );
+ }
}
# endif /* NT */
From: "Hoff, Todd" <Todd.Hoff@LIGHTERA.com>
Subject: RE: Cross Compiling for VxWorks on NT using Jam
Date: Fri, 13 Aug 1999 09:25:47 -0700
What we do is use the directory names to encode what should be built.
Something like: vx-ppc, win32, vx-x86.
Our crack jam expert made these translate into the right build environment
and rules.
From: Karl Klashinsky <klash@cisco.com>
Subject: Re: Cross Compiling for VxWorks on NT using Jam
We do a similar thing here for our cross-compiling product.
Each piece of our product has one "generic" Jamfile that describes the
targets (i.e., contains rules like Main, Library, etc). Then a
"higher level" Jamfile does a "SubDir" into various obj-<targ> dirs,
and does an "include" of the generic Jamfile.
We have intentionally avoided hacking jam source or Jambase.
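A rough sketch of that layout (directory and file names are invented here;
the generic Jamfile declares the Main/Library targets, and the higher-level
Jamfile reads it once per target platform via jam's include statement):

    # higher-level Jamfile
    SubDir TOP obj-vxppc ;
    include $(TOP)/generic/Jamfile.generic ;

    SubDir TOP obj-win32 ;
    include $(TOP)/generic/Jamfile.generic ;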
From: Laura Wingerd <laura@perforce.com>
Date: Mon, 16 Aug 1999 12:28:02 -0700 (PDT)
Subject: Re: last rule for jam?
Well, I searched the jamming archive, and I've searched my personal
mail files, but didn't find anything, although I could have sworn
I've seen postings about this...
Jam doesn't have any concept of "after the build is complete". If
you have a bunch of things that need to be done to stuff after it
is built, just define a rule that makes its outputs dependent on
its inputs, and invoke that rule using your built things as inputs.
E.g., if your last step is to bundle up everything you build in a
tar file, invoke your tar rule on everything you build. It won't
get run until the build is complete.
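For instance, a tar rule along those lines might look like this (the rule
name TarAll and the target names are illustrative):

    rule TarAll {
        # the tar file depends on everything that must be built first,
        # and hanging it off "all" makes it part of the default build
        Depends $(<) : $(>) ;
        Depends all : $(<) ;
    }

    actions TarAll {
        tar cf $(<) $(>)
    }

    TarAll release.tar : exe1 exe2 ;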
In fact, Jambase's Install* rules work this way. Because the Install
rule's outputs (files in the install path) are dependent on its
inputs (files in the build path), everything always gets "installed"
last. Take a look at those rules in Jambase and see if you can use
them as an example.
The problem arises when your last step needs to be run regardless
of whether the build was completely successful. For example, if
the last step is to generate a report that lists what got built,
you can't make that report dependent on built targets -- any failed
target will cause the report not to be generated. But you can't
make it independent of your built targets either, because then Jam
may try to run it before the build is done. In that situation, I
think your best bet is to run a separate post-build script after
jam completes.
From: "Hoff, Todd" <Todd.Hoff@LIGHTERA.com>
Subject: RE: Re: last rule for jam?
The ghost thread. We need a thread sensitive :-)
The problem is that the build builds numerous products. The number
of dependencies would be huge and not maintainable. It would be
more general to have a start and an end action.
This is what I ended up doing.
Date: Mon, 16 Aug 1999 19:49:50 -0700 (PDT)
From: Diane Holt <dianeh@whistle.com>
Subject: RE: Re: last rule for jam?
I sent mail about doing this not long ago. Maybe neither of you found it
in the archive because the Subject wasn't about doing post-"all" targets --
it was "Re: Jam's dependencies are broken" (someone else thinks they are,
not me :). Oh well...
Anyway, you don't need to make your post-"all" targets depend on every
individual target you need built before them -- you just need to make them
depend on "all" (currently, "all" depends on everything else, but nothing
depends on "all").
You can do it in Jambase, like this:
Depends last : all ;
Depends all : shell files.... (etc.)
<snip>
NOTFILE last all first shell....(etc.)
If you don't want to diddle with Jambase, you can just have it in your
Jamrules file (not [necessarily] in a particular rule).
Then in your post-"all" rule(s) for your after-all-is-done targets, you can have:
rule WrapUp {
Depends last : $(<) ;
etc....
}
If you want "last" targets to get built by default -- i.e., whenever you run
just 'jam' (as opposed to running 'jam last', like you do 'jam install', to
get everything built) -- you need to change jam.c as well:
153c153
< char *all = "all";
285c285
< status |= make( 1, &all, anyhow );
As an example: here's a top-level Jamfile:
# Jamfile for $TOP/src ;
SubInclude TOP src a ;
SubInclude TOP src b ;
SubDir TOP src ;
Boot start ;
WrapUp finish ;
Main foo : foo.c ;
And here are the rules for Boot and WrapUp:
rule Boot { Depends first : $(<) ; }
actions quietly Boot { echo ; echo "Starting build at $(JAMDATE)..." ; echo }
rule WrapUp { Depends last : $(<) ; }
actions quietly WrapUp { echo ; echo "Build done at `date`" ; echo }
And running 'jaml' (which is my 'jam' with "last" as the default target) with
my modified Jambase:
% jaml -f ../Jambase
...found 28 target(s)...
...updating 6 target(s)...
Starting build at Mon Aug 16 19:43:56 1999...
Link /tmp/last/src/a/exe1
Chmod /tmp/last/src/a/exe1
Link /tmp/last/src/b/exe2
Chmod /tmp/last/src/b/exe2
Cc /tmp/last/src/foo.o
Link /tmp/last/src/foo
Chmod /tmp/last/src/foo
Build done at Mon Aug 16 19:43:57 PDT 1999
...updated 6 target(s)...
From: "Raymond Wiker" <raymond@orion.no>
Date: Tue, 24 Aug 1999 17:06:09 +0200 (CEST)
Subject: Creating relative paths, outside current tree
I'm working on a project where the project source files are
located under /DevRoot/src/..., while a number of third-party modules
are placed under /DevRoot/ext/...
I have a top-level Jamfile at //DevRoot/src/TS/Jamfile, which
includes Jamfiles for directories at lower levels. At the moment the
only rules I use throughout the Jamfiles are for compiling idl files into
C++ headers, skeletons and stubs, and the rules assume that the
idl file is in the current directory (i.e., the same directory as the
Jamfile that refers to it), and that the generated files should also be
placed in this directory.
In particular, I want to have Jam call the idl generator in such a way
that the generated files are placed in the current directory, while
the IDL source files can be placed outside the tree spanned by the set
of Jamfiles (e.g, under /DevRoot/ext). An example of a valid IDL
command (for a particular idl compiler) is
idl -B -A -I../../../../ext/ACE_wrappers/TAO/orbsvcs \
-out . \
../../../../ext/ACE_wrappers/TAO/orbsvcs/examples/CosEC/Factory/CosEventChannelFactory.idl
It would be quite acceptable to explicitly list the include
paths for the idl generator, as well as for the idl file, but I want
the paths to be relative. Note that TOP is //DevRoot/src/TS, and I
want to access files under //DevRoot/ext.
(Hum... I just realised that I could use a variable DEVROOT,
and make TOP relative to that, and make the dependencies relative to
that... is there a viable alternative?)
My current Jamrules file looks like this:
# Jam rules for compiling idl files to C++, using the Orbix idl compiler.
if $(DEVROOT) {
makeDirName IDL : $(DEVROOT) ext orbix bin "idl.exe" ;
} else {
EXIT Please set the environment variable DEVROOT to the root
of your Perforce client ;
}
IDLFLAGS = "-A" ;
IDLBOAFLAGS = -B ;
NOTFILE idlfiles ;
rule Idl {
MakeLocate $(<) : $(LOCATE_TARGET) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
Depends idlfiles : $(<) ;
Depends $(<) : $(>) ;
}
rule IdlBOA {
MakeLocate $(<) : $(LOCATE_TARGET) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
Depends idlfiles : $(<) ;
Depends $(<) : $(>) ;
}
actions Idl {
$(IDL) $(IDLFLAGS) $(>)
}
actions IdlBOA {
$(IDL) $(IDLFLAGS) $(IDLBOAFLAGS) $(>)
}
rule IdlBOAObject {
local _newExts ;
_newExts = .hh S.CPP C.CPP ;
IdlBOA $(<:B)$(_newExts) : $(<) ;
}
rule IdlObject {
local _newExts ;
_newExts = .hh S.CPP C.CPP ;
Idl $(<:B)$(_newExts) : $(<) ;
}
And the top-level Jamfile:
# Top level Jamfile for TradeSys project(s).
TOP = . ;
SubInclude TOP MarketServer ;
SubInclude TOP TradeSysRM ;
SubInclude TOP TradeSysGW ;
SubInclude TOP SessionManagement ;
Date: Wed, 25 Aug 1999 11:28:16 +0200
From: Igor Boukanov <igor.boukanov@fi.uib.no>
Subject: Re: Creating relative paths, outside current tree
Try putting the following at the top of Jamrules (I assume Jamrules is in
.../DevRoot/src/TS):
if ! $(DEVROOT) {
local tmp ;
# Make tmp a relative path from .../DevRoot/src/TS to .../DevRoot
makeSubDir tmp : src TS ;
# Assuming that TOP is a relative path from the jam invocation directory
# to .../DevRoot/src/TS, prefix TOP by tmp and set the result as DEVROOT.
# Note: does not work if TOP is an absolute path.
DEVROOT = $(TOP:R=$(tmp)) ;
}
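As a hypothetical illustration of what the :R trick computes (the directory names below are invented, not from the thread):

```jam
# Suppose jam is invoked from .../DevRoot/src/TS/MarketServer, so that
#   TOP = ..          # relative path to .../DevRoot/src/TS
# makeSubDir then computes
#   tmp = ../..       # relative path from src/TS back up to DevRoot
# and
#   DEVROOT = $(TOP:R=$(tmp)) ;   # -> ../../..  i.e. .../DevRoot
# (:R prepends tmp as the root; since every component here is "..",
# the combined path climbs from the invocation directory to DevRoot)
```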
Also I would advise replacing "TOP = . ;" in your top Jamfile with
SubDir TOP ;
Date: Wed, 25 Aug 1999 15:29:06 -0700
From: Hayden Ridenour <hridenou@interwoven.com>
Subject: file/directory names with whitespace
While trying to build the latest version of Jam/MR from the sources, I've
encountered the problem that -I<path> expansions of paths with a space
become -I<first-part> -I<second-part>. Does Jam/MR not support filenames
with spaces?
From: Laura Wingerd <laura@perforce.com>
Subject: Re: file/directory names with whitespace
Date: Wed, 25 Aug 1999 16:35:02 -0700 (PDT)
When jam parses a Jamfile, it treats spaces as delimiters.
Thus, the assignment:
HDRS = /some/where/over the/rainbow ;
sets HDRS to a list of two values, "/some/where/over" and "the/rainbow".
Luckily, you can use quotes to tell jam that a space is a data value, not
a delimiter:
HDRS = '/some/where/over the/rainbow' ;
Date: Wed, 25 Aug 1999 18:39:45 -0500 (CDT)
Subject: Re: file/directory names with whitespace
Sort of. Jam variables do a kind of mix and match when you concatenate a
string with a variable: each item in the variable's list is concatenated
with the string, as explained in the manual. Spaces determine what
constitutes the items in a variable.
For example, given:
includes = /usr/local/include /usr/me/include /usr/project/include ;
then specifying -I$(includes) gives you
-I/usr/local/include -I/usr/me/include -I/usr/project/include
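This element-wise expansion is easy to check with a scratch Jamfile (a hypothetical snippet, just to illustrate):

```jam
includes = /usr/local/include /usr/me/include /usr/project/include ;
ECHO -I$(includes) ;
# prints: -I/usr/local/include -I/usr/me/include -I/usr/project/include
```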
From: Yariv Sheizaf <yarivs@cimatron.co.il>
Date: Tue, 31 Aug 1999 17:29:40 +0200
Subject: Jam/MR & Visual C++
Does anybody have experience with Jam/MR in the MS Visual C++ environment?
From: David.Buscher@durham.ac.uk
Date: Thu, 2 Sep 1999 14:27:46 +0100 (BST)
Subject: InstallFile broken on IRIX 6.2?
I have the following Jamfile and an empty Jamrules:
### My jamfile
SubDir STAGING c40Comms libsrc ;
InstallFile /software/Electra/include : c40string.h stdio40.h ;
### End of jamfile
When I try to do a 'jam install' under IRIX 6.2, I get the following output:
...found 9 target(s)...
...updating 2 target(s)...
Install /software/Electra/include/c40string.h
Chmod /software/Electra/include/c40string.h /software/Electra/include/stdio40.h
Cannot access /software/Electra/include/stdio40.h: No such file or directory
chmod 644 /software/Electra/include/c40string.h /software/Electra/include/stdio40.h
...failed Chmod /software/Electra/include/c40string.h /software/Electra/include/stdio40.h ...
...removing /software/Electra/include/c40string.h
Install /software/Electra/include/stdio40.h
...failed updating 2 target(s)...
It seems as though InstallFile is trying to do a Chmod on all the files
after only the first file has been copied. This Jamfile works fine on
Solaris 2.5 but not on IRIX 6.2. Is this a known bug? I am using jam-2.2.
Date: Thu, 2 Sep 1999 09:47:07 -0700 (PDT)
From: Diane Holt <dianeh@whistle.com>
Subject: Re: InstallFile broken on IRIX 6.2?
Yep, you found a bug. Looks like when INSTALL isn't set, all bets are off:
[From InstallInto]:
for i in $(>) {
Install $(i:G=installed) : $(i) ;
}
if ! $(INSTALL) {
Chmod $(t) ;
if $(OWNER) { Chown $(t) ; OWNER on $(t) = $(OWNER) ; }
if $(GROUP) { Chgrp $(t) ; GROUP on $(t) = $(GROUP) ; }
}
since "t" is set to all the source files.
The fix is to do the Chmod'ing in a for-loop the same way the Install'ing is done:
if ! $(INSTALL) {
for i in $(t) {
Chmod $(i) ;
if $(OWNER) { Chown $(i) ; OWNER on $(i) = $(OWNER) ; }
if $(GROUP) { Chgrp $(i) ; GROUP on $(i) = $(GROUP) ; }
}
}
% jam -n install
...found 8 target(s)...
...updating 2 target(s)...
Install /tmp/install/foo.tmp
cp foo.tmp /tmp/install/foo.tmp
Chmod /tmp/install/foo.tmp
chmod 644 /tmp/install/foo.tmp
Install /tmp/install/bar.tmp
cp bar.tmp /tmp/install/bar.tmp
Chmod /tmp/install/bar.tmp
chmod 644 /tmp/install/bar.tmp
...updated 2 target(s)...
Since this is an actual bug, someone from Perforce should probably pick
this fix up for real.
From: David.Buscher@durham.ac.uk
Date: Thu, 2 Sep 1999 18:19:04 +0100 (BST)
Subject: Re: InstallFile broken on IRIX 6.2?
Having looked at the code, though, I realise that, newbie that I am, I
don't understand why the original code was broken, or why the new code is
better. I think I am confused by the order in which jam executes actions:
obviously it is not in the order they appear in the rules - so what order
is it?
Date: Thu, 02 Sep 1999 15:32:08 -0700
Subject: jam on Solaris and install rule
I've encountered a problem with the Install rule when moving from Linux
to Solaris. This is using an unmodified Jambase. For both of these
platforms, Jambase uses the "install" executable to perform installs.
However, the Linux (GNU) "install" has the source and destination
arguments in a different order than the Solaris "install". Jam expects
the GNU convention, and breaks under Solaris. Has anyone else
encountered this?
My current fix is to define the INSTALL variable for Solaris to point at
a local shell script that swaps the arguments.
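A minimal sketch of such a wrapper script (this is not the poster's actual script; it assumes the Jambase calls it GNU-style as "install -m MODE SRC DEST", and falls back to cp plus chmod so neither vendor install is needed):

```shell
#!/bin/sh
# Hypothetical portable stand-in for install(1): accept an optional
# "-m MODE" followed by a source file and a destination path, then
# copy the file and set its mode.
do_install() {
    MODE=644
    if [ "$1" = "-m" ]; then
        MODE="$2"
        shift 2
    fi
    SRC="$1"
    DEST="$2"
    cp "$SRC" "$DEST" && chmod "$MODE" "$DEST"
}
# A real wrapper script would end with:  do_install "$@"
```

Pointing the Jambase INSTALL variable at a script like this sidesteps the argument-order question entirely.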
Subject: Re: jam on Solaris and install rule
From: "Mark D. Baushke" <mark.baushke@solipsa.com>
Date: Thu, 02 Sep 1999 16:45:14 -0700
I believe that you will find that a Solaris box (cf. both SunOS 5.6
and SunOS 5.7) has two different install executables. One is part of
the optional ucb package and lives in /usr/ucb/install, and the other
is /usr/sbin/install.
I have never seen any problems with the order of the arguments; rather,
I have seen problems with the additional -m$(MODE) -o$(OWNER) and
-g$(GROUP) arguments.
% /usr/ucb/install
usage: install [-cs] [-g group] [-m mode] [-o owner] file ... destination
install -d [-g group] [-m mode] [-o owner] dir
% /usr/sbin/install
usage: install [options] file [dir1 ...]
%
The one that lives in /usr/ucb/install will tend to work like the GNU
version. I suppose that the Jambase should be changed to use a full
pathname for SOLARIS hosts, but that would assume the administrator
had installed the /usr/ucb tools unless the Jambase was changed to
specify both the /usr/sbin/install pathname as well as its arguments.
From: "Dowdy, Mark" <mark@ciena.com>
Date: Tue, 7 Sep 1999 15:33:02 -0700
Subject: Dependency Generation "Broken"
We're using Jam to build a fairly large, multi-directory
project using GNU tools and have uncovered a problem with
Jam's dependency generation. It appears that the compiler's
algorithm used to locate included files differs from Jam's
method in a way that causes some dependencies to be missed.
According to the GNU documentation, the compiler first
searches the directory where the current input file came
from and then searches the -I directories (the directories
in the HDRS variable). The problem arises when a file is
included with a directory name (i.e. #include "foo/fooFile.h")
and this header file includes another file from its own directory
(i.e. #include "fooFile2.h"). If the .../foo directory is not
in the HDRS list, the dependencies for fooFile2.h are missed
even though the compiler doesn't find any problems.
Has anyone else seen this problem? If so, do you have a
Jam fix? The obvious workaround for us is to include
directories on any file included in a header file, but
it would certainly be better if Jam behaved the way the
compiler does.
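The workaround mentioned above might look like this in the Jamfile that compiles the including source (a sketch only; the directory components are invented, and it assumes the stock SubDirHdrs rule from the Jambase):

```jam
# Make .../src/foo visible both to the compiler (via -I) and to jam's
# header scanner, so that fooFile2.h included from foo/fooFile.h is
# found even though the scanner does not mimic gcc's search order.
SubDirHdrs $(TOP) src foo ;
```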
Date: Tue, 7 Sep 1999 17:42:08 -0500 (CDT)
Subject: Re: Dependency Generation "Broken"
We had a related problem where the header directories were not being
searched in the same order the compiler would, so a duplicate include
file was being found. We fixed up the order and that solved that problem.
Your problem is a little different.
Date: Tue, 07 Sep 1999 17:07:19 -0700
Subject: Finding the project libraries
Help -- my executable Jamfiles can't find my libraries.
I have a project which builds multiple executable and multiple libraries
(shared among the executables). The source is distributed among various
directories, and I've set up a series of related Jamfiles to build the
directory tree. The output files (libraries, .o, and executables) are
placed in a build/<platform> subdirectory below each project, via
ALL_LOCATE_TARGET. So far, so good.
But the system only works when I build from within the topmost directory
(via "jam" or "jam myprog".) If I go to, say, the "myprog" directory
and type "jam", it cannot find any libraries previously built for the
project, claiming that they're missing. Thus I effectively cannot do
partial rebuilds at all.
Imagine "myproj" links in "mylib", like so:
SubDir TOP myprog ;
Main myprog : myprog.cpp ;
LinkLibraries myprog : libxxx ;
libxxx has been built, and is in the directory
$(TOP)/libxxx/build/linux-x86. The LinkLibraries rule makes myprog
depend on libxxx, but Jam can't find libxxx when invoked from inside the
myprog directory. If I invoke it from the top-level directory via say
"jam myprog", it finds the libxxx fine and links it in.
I'm trying to figure out how Jam expects to solve this problem, so if
this isn't the right way to do things, please let me know. I have
various modules in various directories, some of which depend on the
targets of other modules. How do I set things up to allow modules to
find the built libraries they need during local builds, while still
allowing for a consistent global build?
I do not want to add all the module build directories to the link search
paths, if possible.
I have tried using an install rule to install libraries to a single
external directory, then making the LinkLibraries arguments contain the
path to that directory. This seemed to cause dependency problems when
doing a top-level build. It also feels clunky.
Are there any examples of Jamfile trees like this out there? I've
browsed the jamming archives, but haven't been able to find a jamfile
that does this. It seems like a very basic thing, so I have the feeling
I'm missing something.
Date: Tue, 7 Sep 1999 21:40:34 -0700 (PDT)
From: Diane Holt <dianeh@whistle.com>
Subject: Re: Finding the project libraries
Jam only knows where to find stuff based on two things: where it
currently is, or where it's been told to look.
If you have:
LinkLibraries myprog : libxxx ;
and libxxx isn't in the local directory, then you need to tell Jam where
to find it. Since it sounded like you're setting ALL_LOCATE_TARGET to
the build/<platform> subdir of <whatever> directory, then, when you're
in myprog, Jam will look for everything in ..../myprog/build/<platform> --
which isn't where libxxx lives -- it lives in ..../libxxx/build/<platform>.
So you need to have Jam look for it where it does live.
One way would be to have a symbol (e.g., LIBDIRS) that lists all the
library directories, then use SEARCH to include that list.
For example, in Jamrules (or if you'd rather, in a separate file that you
include from Jamrules), you could have:
LIBDIRS = $(TOP)/libxxx/build/$(PLATFORM)
$(TOP)/libyyy/build/$(PLATFORM)
$(TOP)/libzzz/build/$(PLATFORM)
;
Then, in the Jamfiles where you need to, you would add:
SEARCH = $(LOCATE_TARGET) $(LIBDIRS) ;
Note that doing it this way will mean that if you're in, say, the myprog
directory, and any of the libraries that myprog links against are out-of-date
with their source, they *won't* be rebuilt -- Jam will simply link against
whatever's there. But since it sounded like you didn't want it rebuilding
the libraries when you were in myprog anyway...
Even if you have a zillion library directories, the list should be pretty
trivial to generate.
If this idea won't work for you, let me know -- there are other ways of
doing it.
Date: Wed, 08 Sep 1999 13:05:09 +0200
From: Igor Boukanov <igor.boukanov@fi.uib.no>
Subject: Re: Finding the project libraries
The trick is to have a rule similar to SubInclude that processes the given
subdir only once, even if you use it several times on the same dir. I
call this rule ImportDir, and in your case to use it you simply add at
the end of your TOP myprog Jamfile:
ImportDir TOP libxxx build linux-x86 ;
You will also need to replace SubInclude by ImportDir in the topmost
Jamfile. Then you can run jam from any directory you like and it will
build only targets in that subdirectory and everything they depend on.
I put ImportDir (plus some other useful/useless stuff) into Jambase, but
you can add them to your topmost Jamrules file -- see the attached
Jambase-ext:
# Additions to Jambase by Igor Boukanov, Igor.Boukanov@fi.uib.no
# new rules
# ImportDir TOP d1 d2 ... ; include a subdirectory Jamfile
# if not already included
# ImportFile TOP d1 d2 ... file ; include the given file
# if not already included
# IncludeFile TOP d1 ... dn file ; include the given file
#
# new utilities
# addFileName var : d1 d2 ... file ; $(var) += path from root to file
# makeFileName var : d1 d2 ... file ; $(var) = path from root to file
rule ImportFile {
if ! $(<[1]) {
EXIT "ImportFile syntax error: TOP should be given" ;
}
if ! $($(<[1])) {
EXIT "ImportFile syntax error: TOP should be already set" ;
}
if ! $(<[2]) {
EXIT "ImportFile syntax error: should have at least 2 arguments" ;
}
local ImportFile__marker ImportFile__i ;
ImportFile__marker = "imported__" ;
for ImportFile__i in $(<) {
ImportFile__marker = $(ImportFile__marker)__$(ImportFile__i) ;
}
if ! $($(ImportFile__marker)) {
# Do not include more than once
$(ImportFile__marker) = TRUE ;
local ImportFile__path ;
makeFileName ImportFile__path : $($(<[1])) $(<[2-]) ;
include $(ImportFile__path) ;
}
}
rule ImportDir { ImportFile $(<) $(JAMFILE) ; }
rule IncludeFile {
if ! $(<[1]) {
EXIT "IncludeFile syntax error: TOP should be given" ;
}
if ! $(<[2]) {
EXIT "IncludeFile syntax error: should have at least 2 arguments" ;
}
local IncludeFile__path ;
makeFileName IncludeFile__path : $($(<[1])) $(<[2-]) ;
include $(IncludeFile__path) ;
}
rule addFileName {
if ! $(>) {
EXIT "Second argument in addFileName should have at least 1 component to form a file name" ;
}
if ! $(>[2]) { $(<) += $(>) ; }
else {
# In Jam I can not get $(>[all except last]) directly
local addFileName__list addFileName__base addFileName__i ;
addFileName__list = $(>[1]) ;
addFileName__base = $(>[2]) ;
for addFileName__i in $(>[3-]) {
addFileName__list += $(addFileName__base) ;
addFileName__base = $(addFileName__i) ;
}
local addFileName__dir_path ;
makeDirName addFileName__dir_path : $(addFileName__list) ;
$(<) += $(addFileName__base:D=$(addFileName__dir_path)) ;
}
}
rule makeFileName { $(<) = ; addFileName $(<) : $(>) ; }
From: Paul Bleisch <PBleisch@digitalanvil.com>
Date: Tue, 7 Sep 1999 16:31:28 -0500
Subject: New to Jam
I am new to Jam, but it appears that I need to tweak
a lot of variables/rules to get anything to build on
NT. For instance, the default Jambase does not add
/D"WIN32" /D"_WINDOWS" to the default CCFLAGS (and
derivatives), the default linker command line does
not allow one to add library search directories
(-L/SomeDir) similar to the -I include directives, and
there is no default .rc build rule. I've hacked up
a Jamfile that attends to these problems and I
understand why they are not defaults, but...
Has anyone done all of this before? Is there a public
repository of rule files and Jambases for more extensive
support of compiler features? I've also noticed that the
default Jambase does not enable any of the more useful
compiler options for MSVC (optimizations, debugging,
etc). While I am not sure this is something that
should be in the default distribution, I would think
someone has done this already. If not...
What would be the "most correct" way to handle things like:
1) Building .PDB, .BSC files? (MS debug and browse info
files) I would think this is some kind of implicit
target for a Main target. i.e. Main foo would actually
build foo.exe, foo.bsc, and foo.pdb
2) How do I handle precompiled headers. Specifically, MSVC
(and other cc's I assume) allow one to specify a precompiled
header file on the command line (/Fp, and the /Y switches).
Is there an elegant way to handle "MSVC style" project
configurations in a Jamfile? i.e. I would like to build multiple
versions of the same target with different options (optimizations,
debugging, etc) depending on the "configuration" chosen. I
am thinking currently of multiple Main targets and somehow using
per-target binding to set the per-configuration settings.
From: David.Buscher@durham.ac.uk
Date: Wed, 8 Sep 1999 14:55:45 +0100 (BST)
Subject: Re: New to Jam
I would just like to echo the plea for (a) a FAQ (even one which says
"There is no FAQ" - I've spent a lot of time looking for one) and (b) a
repository of examples. The lack of these two is the biggest hurdle for us
newbies getting started with Jam. You have the feeling that you are
re-solving problems that countless others have already solved, but you
can't find out how they did it. Surely Open Source is all about not having
to re-invent wheels?
Date: Fri, 10 Sep 1999 18:58:01 -0700 (PDT)
From: Diane Holt <dianeh@whistle.com>
Subject: Re: New to Jam
What kinds of questions would you like to see answered in an FAQ? And
what types of things would you like to see examples of? (There are
actually probably lots of questions answered and examples of things in
the mailing-list archive -- is there a problem finding what you're
looking for that way? I haven't looked through it, so I don't know how
useful (or not) the archive is.)
From: David.Buscher@durham.ac.uk
Date: Sun, 12 Sep 1999 21:17:59 +0100 (BST)
Subject: Re: New to Jam
The question *I* would most like to see answered in a FAQ is about
cross-compiling & multiple variants in general, i.e. making multiple
versions of libraries and executables from a single set of sources. There
have been quite a few questions on this sort of topic in the archive, but
I've had to piece the answers together, and I'm still not sure I know how
to do it in a general way, i.e. catering for both libraries and
executables, and extending in an easy way to a multiple-directory project.
An ideal answer would include examples of the actual Jamfiles in a real
project that builds several variants.
I think the mailing-list archive is useful, but when you see the same type
of question recurring, in slightly modified forms, it seems to me that a
FAQ is a better way to address it.
As far as a repository goes, I'd imagine it would have example Jamrules
and Jamfiles from projects on various platforms, perhaps examples on
platforms which the standard Jambase does not cater for, e.g. the
compilers mentioned in the original post on this thread. Another sort of
thing in the repository might be examples of neat ways of solving
particular problems, and general 'tricks of the trade'. The
include-only-once rule that was recently posted would be a good example of this.
The way I learned to do Makefiles was to read other people's makefiles
(e.g. the Linux kernel makefiles) and see how they tackled the problems I
was having. There aren't a lot of publicly-available Jamfiles out there at
the moment (that I'm aware of), so a repository would help to make the
learning curve a little less steep.
From: "Joan Yuen" <joany@ecdirect.com>
Date: Fri, 10 Sep 1999 09:53:51 -0700
Subject: Jam vs other tools
We are a startup Java shop and I'm looking into various make tools for our
build system here. Currently everything is built either with DOS batch
scripts or UNIX shell scripts. What are the pros and cons with using Jam vs
something like gnu make? Our number one requirement is cross-platform
development, as we have developers on both NT and Linux. Any feedback will
be appreciated.
Date: Mon, 13 Sep 1999 12:27:55 -0700
From: sweeney@informix.com (Tony Sweeney)
Subject: Re: Jam vs other tools
The problem with GNU _anything_ is that it assumes GNU _everything_. GNU
is Not Unix, but a replacement for it, so if you go the GNU route, you will
end up spending a considerable amount of time building GNU utilities and
installing all over the shop. Jam's big advantage is that it is idempotent,
comparatively speaking. You simply need a functioning C compiler, and an
understanding of how Jam works. Add in your specific rules for your own
environment(s), and you're done. Easy peasy. ;-)
Date: Mon, 13 Sep 1999 14:43:12 -0700
From: Brendan McCarthy <mccarthy@justintime.com>
Subject: an UPDATE variable?
I've been combing through the list archives for a couple of weeks now,
and I must say that the feedback and response times on this list are
remarkably good. Kudos, praises, etc. to all!
Question: (using jam 2.2.5 on Solaris)
Does a variable exist that tells whether a source is marked for
updating? In other words, I'd like to say in my rule definition:
if ( $(this source is marked for an updating action) ) {
$(list of sources that will be updated) += $(this source) }
Essentially, I'd like to call an action that changes a variable, but
from what I see you can't muck with variables from within actions.
I've written some rules and actions to handle java compilations in a
large directory tree. Basically, I'm establishing a one to one
correspondence between each java source and its compiled object (we
aren't planning to allow multiple class files generated from a single
source, so this is a safe assumption for now). If the compiled object
is out of date, then I'd like to add the source to a list variable for
an updating action which occurs as the last step of the build. This
way, all of the java files (not from a single directory, but from the
whole directory tree!) are fed to the compiler at once. This is a
pretty crude solution to the problem of dependency analysis and may not
scale well, but it's doing the trick for now. The only problem is that
it's updating every target every time.
It seems as though the together modifier would be appropriate, but
this works for multiple sources going into a single target. I've also
considered just writing the names of the sources to a temporary file,
but I'm thinking someone might know of a better way...
Date: Mon, 13 Sep 1999 16:56:56 -0500 (CDT)
Subject: Re: an UPDATE variable?
I ended up having them go to a temporary file...
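A hedged sketch of that temp-file approach (all rule, variable, and file names here are invented, and javac's @file argument-file syntax is assumed to be available):

```jam
NOTFILE javacompile ;
rule JavaFile {
    # foo.class depends on foo.java; when the pair is out of date, the
    # action appends the source name to a response file instead of
    # compiling immediately. Since the action never creates the .class
    # file itself, every target rebuilds each run -- the same symptom
    # described above; a real version would also truncate sources.list.
    Depends $(<) : $(>) ;
    Depends javacompile : $(<) ;
}
actions JavaFile {
    echo $(>) >> sources.list
}
actions JavaCompile {
    $(JAVAC) @sources.list
}
JavaCompile javacompile ;
```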
Date: Tue, 14 Sep 1999 14:57:21 -0700 (PDT)
From: Diane Holt <dianeh@whistle.com>
Subject: Re: an UPDATE variable?
I'm not sure what you mean by:
>from what I see you can't muck with variables from within actions.
If you mean can you change the value of a variable inside an actions
block, the answer is yes -- you can change the value of it -within-
that actions block (that's how I did the "sparse-tree" thing someone
asked for). But if you mean, can you change the value of a variable
within an actions block and then have some rule that also references
that variable see the value as what you changed it to, the answer is no.
You can do that with rules (to non-local variables), but not with actions.
As to if there's a special variable that holds the names of the sources
that are out-of-date (e.g., like gmake's "$?"), no -- a file that's newer
is marked internally, with T_FATE_NEWER, but that's not something you
have access to (but I can't think of why you'd want to even if you could).
And even gmake's $? only holds what's newer for an individual target --
it doesn't accumulate across the entire tree. It's not clear to me what
you'd do with a variable that kept a list of all the source-files over the
whole tree that were newer. (But there's no law that says you couldn't
create one. :) As long as your dependencies are set up correctly, then
all that should be getting passed to the actions are those files that need building.
I've sent mail about that a couple of times. You can have a "last" target
(or call it whatever you choose) that depends on "all". (Although from
what you've described, I'm not sure if you'd even need that -- if you
have some big target at the end that depends on all the individual
objects being up-to-date first, then you should be able to just say
that, without having to have anything special.)
But I'm not up on java, so I'm not sure what it is you're trying to do.
Maybe you could be a little more specific about what it is, and how a
list of all the newer sources would help you?
Date: Thu, 16 Sep 1999 09:05:25 -0700 (PDT)
Subject: Re: Jam vs other tools
I strongly disagree. GNU follows the same "software tools"
methodology as Unix itself, and you can use GNU Make with any
set of tools you'd like to. I've used it with native Unix
tools, VMS DCL commands, and even DOS.
Now, it might be *advisable* to install GNU tools all over
the shop, because they're usually better than the native tools
and the same tools work exactly the same way from platform to
platform; but it certainly isn't necessary.
I do agree with you that Jam is a better solution overall
for multi-platform development, because of that extra level of
abstraction it provides.
Date: Sat, 18 Sep 1999 14:26:10 -0500
From: "Frot" <frot@earthling.net>
Subject: Jam, development trees and executable/library/headers finding
I have a general question concerning the use of a development tree structure.
Till now I have been using JAM with a 'flat'
directory structure (e.g. all sources, headers,
objects, libraries & executables in one directory).
This was OK and worked all of the time.
But currently I am working on a bigger project
with more sources & deliverables, so I decided to
introduce a development tree structure to jam.
First of all, all objects will be
put in a subdir called "bin.<OS>" (as used in the JAMDOS jamfile).
I also decided to split up independent sources
into different directories.
Example:
library abc
<deliverable 1>
tool xyz
<deliverable 2>
tool 123
<deliverable 3, uses library abc>
example code
<deliverable 4, uses tools xyz>
In my development tree this would look like:
<project-x>\
\abc\.......
(sources for library abc)
\abc\bin.<OS>\.......
(objects for library abc, incl. the library itself)
\xyz\.......
(sources for tool xyz)
\xyz\bin.<OS>\.......
(objects for tool xyz, incl. the executable)
\123\.......
(sources for tool 123)
\123\bin.<OS>\.......
(objects for tool 123, incl. the executable)
\exp\.......
(source for example code)
\exp\bin.<OS>\.......
(objects for example code, incl. the executable)
I have been able to use such a structure by using
the jam SubDir rule in my jam files, and by tweaking the SubDir rule
so that the LOCATE_TARGET variable always gets a bin.<OS> portion attached.
So far so good. Now, what is my question?
Well, problems start when trying to build tool 123
and the example code. The reason the other two have no problems is that
they are independent of other deliverables in the
development tree (library abc only needs its own
sources to build, so does tool xyz).
In the above tree the following problems arise:
1. tool 123 has problems linking, as the linker does not find library abc
2. the example code has problems, as it does not find executable xyz
Possible solutions could be:
1. Never use such a structure (don't want that)
2. Adapt the Jambase so that all bin.<OS> directories are added as search
paths for libraries (HOW ???)
3. Adapt the Jambase so that all bin.<OS> directories are added to the
search path for starting executables (HOW ???)
4. Upon creation of executables/libraries, copy them to a fixed place in
the tree (e.g. <project-x>\lib) and add this path to the search path for
both libraries and executables (AGAIN HOW ??)
5. Upon creation of executables/libraries, copy libraries to a
LINKER-aware directory outside of the dev. tree and executables to an
OS-aware directory outside of the dev. tree, so finding libraries will
be a task for the linker, and finding executables a task for the shell
I have looked at all of these possibilities and
they all have some dirty tricks and consequences
attached that I would rather avoid.
My question to you is now to help me decide what
to take (preferably how), and whether you might
have some other ideas on how to solve this.
I also would like to know how you all tackle this
problem of development trees & interdependent deliverables?
Date: Sat, 18 Sep 1999 18:08:15 -0700
Subject: Re: Jam, development trees and executable/library/headers finding
This is very similar to a problem I was just having. My solution was to
define a new rule, "Uses":
rule Uses {
local LOCATE_LIBDIRS ;
#
# Uses <target> : <Dir1> <Dir2> ...
#
# This modifies variables that a SubDir rule has set up.
LOCATE_LIBDIRS = $(>[1-])/$(BUILT) ;
# Make generated files go into a platform-specific
# subdirectory
LOCATE_TARGET = $(SUBDIR)/$(BUILT) ;
# Look for needed things in subdirectory first, then
# in local built/xxx dir, then in remote built/xxx dirs
# mentioned with Uses:
SEARCH_SOURCE = $(SUBDIR) $(LOCATE_TARGET) $(LOCATE_LIBDIRS) ;
ECHO "SEARCH_SOURCE for " $(SUBDIR) " = " $(SEARCH_SOURCE) ;
}
In this rule, BUILT is a variable that resolves to
built.<os-specific-string>. The rule is invoked right after SubDir for
every Jamfile that needs the output of some other Jamfile, and lists
explicitly the absolute path to each of the other project directories:
SubDir src services dm ;
Uses dmd : $(KUDZU_ROOT)/src/emlib/delivery
$(KUDZU_ROOT)/src/emlib/socket
$(KUDZU_ROOT)/src/emlib/util ;
I don't know if this is the best way to do things, but it seems to work
out OK (so far).
Date: Sat, 18 Sep 1999 18:51:36 -0700 (PDT)
From: Diane Holt <dianeh@whistle.com>
Subject: Re: Dependency Generation "Broken"
The short answer is: Jam uses SubDirHdrs to find header-files like your
example shows. SubDirHdrs is just a list of the directories you want Jam
to look in -- and it does include a -I<dir> for each one you list. From what
you said, you didn't want that, and you didn't want to have to provide the
list of directories in the first place -- you wanted Jam to be able to figure
it out for you. (That's why it took me a while to think about it.)
What I came up with was to add a new variable (which gets set in headers.c)
that allows you to see the boundname of the file Jam is scanning for includes.
Then I modified HdrRule to use that variable and include the directory of it
in HDRSEARCH (if it's not already in there). (BTW: I'm not thrilled with the
name of the variable I added, but I couldn't think of a better one -- can you?)
headers.c diff:
120,122d119
< /* Add a variable that holds the full-pathname of the file being scanned. */
< var_set( "SCANFILE", list_new( L0, newstr( file ) ), VAR_SET );
<
HdrRule diff (in Jambase):
<
< if ! $(SCANFILE:D) in $(HDRSEARCH)
< {
< HDRSEARCH = $(HDRSEARCH) $(SCANFILE:D) ;
< }
<
In case the diff line numbers from Jambase are off: I put this in after the
INCLUDES $(<) : $(s) ;
and before the
SEARCH on $(s) = $(HDRSEARCH) ;
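For orientation, here is roughly where the added lines sit in a jam 2.2-style
HdrRule. This is a sketch: the surrounding lines are paraphrased from the
stock Jambase and may differ from yours.

```jam
rule HdrRule
{
    # $(<) is the file that was scanned; $(>) is the headers found in it.
    local s = $(>) ;
    if $(HDRGRIST) { s = $(>:G=$(HDRGRIST)) ; }

    INCLUDES $(<) : $(s) ;

    # The addition: fold the scanned file's own directory into HDRSEARCH,
    # using the SCANFILE variable set in headers.c.
    if ! $(SCANFILE:D) in $(HDRSEARCH)
    {
        HDRSEARCH = $(HDRSEARCH) $(SCANFILE:D) ;
    }

    SEARCH on $(s) = $(HDRSEARCH) ;
    NOCARE $(s) ;

    # Propagate the scanning settings to the headers just found.
    HDRRULE on $(s) = $(HDRRULE) ;
    HDRSCAN on $(s) = $(HDRSCAN) ;
    HDRSEARCH on $(s) = $(HDRSEARCH) ;
    HDRGRIST on $(s) = $(HDRGRIST) ;
}
```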
I put together a slightly more elaborate example than the one you provided
so I could test things out and make sure everything else still worked okay:
% grep "include" xxx.c
#include "hdr/xxx.h"
#include <stdio.h>
#include "../hdr/ggg.h"
%
% cat hdr/*.h
/* xxx.h */
#include "yyy.h"
/* yyy.h */
#include "zzz.h"
/* zzz.h */
#include "hdr1/aaa.h"
%
% cat hdr/hdr1/*.h
/* aaa.h */
#include "bbb.h"
/* bbb.h */
#include "hdr2/ccc.h"
%
% cat hdr/hdr1/hdr2/*.h
/* ccc.h */
#include "ddd.h"
/* ddd.h */
#include "hdr3/eee.h"
%
% cat hdr/hdr1/hdr2/hdr3/*.h
/* eee.h */
#include "fff.h"
%
% cat ../hdr/*.h
/* ggg.h */
#include "hhh.h"
I included ECHO statements in my Jambase so you can see what file is being
scanned, and how HDRSEARCH is modified accordingly (don't include them in
real life :).
% myjam -f Jambase.echo -d2
SCANFILE is /tmp/mark/lib/xxx.c
HDRSEARCH is now /tmp/mark/lib /usr/include
SCANFILE is /tmp/mark/lib/hdr/xxx.h
HDRSEARCH is now /tmp/mark/lib /usr/include /tmp/mark/lib/hdr
SCANFILE is /tmp/mark/lib/hdr/yyy.h
HDRSEARCH is now /tmp/mark/lib /usr/include /tmp/mark/lib/hdr
SCANFILE is /tmp/mark/lib/hdr/zzz.h
HDRSEARCH is now /tmp/mark/lib /usr/include /tmp/mark/lib/hdr
SCANFILE is /tmp/mark/lib/hdr/hdr1/aaa.h
HDRSEARCH is now /tmp/mark/lib /usr/include /tmp/mark/lib/hdr /tmp/mark/lib/hdr/hdr1
SCANFILE is /tmp/mark/lib/hdr/hdr1/bbb.h
HDRSEARCH is now /tmp/mark/lib /usr/include /tmp/mark/lib/hdr /tmp/mark/lib/hdr/hdr1
SCANFILE is /tmp/mark/lib/hdr/hdr1/hdr2/ccc.h
HDRSEARCH is now /tmp/mark/lib /usr/include /tmp/mark/lib/hdr /tmp/mark/lib/hdr/hdr1 /tmp/mark/lib/hdr/hdr1/hdr2
SCANFILE is /tmp/mark/lib/hdr/hdr1/hdr2/ddd.h
HDRSEARCH is now /tmp/mark/lib /usr/include /tmp/mark/lib/hdr /tmp/mark/lib/hdr/hdr1 /tmp/mark/lib/hdr/hdr1/hdr2
SCANFILE is /tmp/mark/lib/hdr/hdr1/hdr2/hdr3/eee.h
HDRSEARCH is now /tmp/mark/lib /usr/include /tmp/mark/lib/hdr /tmp/mark/lib/hdr/hdr1 /tmp/mark/lib/hdr/hdr1/hdr2 /tmp/mark/lib/hdr/hdr1/hdr2/hdr3
SCANFILE is /usr/include/stdio.h
HDRSEARCH is now /tmp/mark/lib /usr/include
SCANFILE is /usr/include/sys/types.h
HDRSEARCH is now /tmp/mark/lib /usr/include /usr/include/sys
SCANFILE is /usr/include/machine/endian.h
HDRSEARCH is now /tmp/mark/lib /usr/include /usr/include/sys /usr/include/machine
SCANFILE is /tmp/mark/lib/../hdr/ggg.h
HDRSEARCH is now /tmp/mark/lib /usr/include /tmp/mark/lib/../hdr
...found 32 target(s)...
...updating 2 target(s)...
Cc /tmp/mark/lib/xxx.o
cc -c -O -I/tmp/mark/lib -o /tmp/mark/lib/xxx.o /tmp/mark/lib/xxx.c
Archive /tmp/mark/lib/libxxx.a
[etc...]
...updated 2 target(s)...
%
% touch hdr/hdr1/hdr2/hdr3/fff.h
% myjam -f Jambase -d7
bind -- fff.h: /tmp/mark/lib/hdr/hdr1/hdr2/hdr3/fff.h
time -- fff.h: Sat Sep 18 17:08:51 1999
made* newer fff.h
...found 32 target(s)...
...updating 2 target(s)...
Cc /tmp/mark/lib/xxx.o
[etc...]
...updated 2 target(s)...
%
% touch ../hdr/hhh.h
% myjam -f Jambase -d7
bind -- hhh.h: /tmp/mark/lib/../hdr/hhh.h
time -- hhh.h: Sat Sep 18 17:17:28 1999
made* newer hhh.h
...found 32 target(s)...
...updating 2 target(s)...
Cc /tmp/mark/lib/xxx.o
Archive /tmp/mark/lib/libxxx.a
Ranlib /tmp/mark/lib/libxxx.a
RmTemps /tmp/mark/lib/libxxx.a
...updated 2 target(s)...
%
From: "Scotte Zinn" <szinn@sentex.net>
Date: Fri, 17 Sep 1999 22:29:18 -0400
Subject: Request for FAQ
I'm trying to set up the following kind of environment
/thirdparty - top level of third-party package
/thirdparty/include - include files for third-party package
/thirdparty/lib - library files for third-party package
/root - top level of system
/root/include - include files for system (source files)
/root/lib - libraries that are built (output files)
/root/src - top level of source
/root/src/dir1 - one module to be built (produces objects and libraries)
/root/src/dir2 - another module
/root/src/dir... - etc
/root/object/dir1 - objects for module dir1
/root/object/dir2 - objects for module dir2
/root/object/dir... - etc
/root/bin - resulting binaries from the build
I'd like to be able to execute jam in /root to build the complete system
I'd like to be able to execute jam in /root/src/dir1 to build only module in dir1
Some of the modules in dirXXX may depend on other modules in dirXXX
A set of Jamrules / Jambase / Jamfile files for this kind of hierarchy
would be a good addition to an FAQ. I am very new to Jam and don't know
where to start with this. I have tried using MakeLocate and setting
LOCATE_SOURCE, and I have managed to get the binaries and libraries to go
to the correct location; however, if the objects go to a directory other
than the one where the source file resides, Jam seems to keep wanting to
rebuild the libraries each time I execute it.
Date: Sun, 19 Sep 1999 20:27:26 +0200 (MEST)
From: Igor Boukanov <boukanov@sentef3.fi.uib.no>
Subject: Re: Jam, development trees and executable/library/headers finding
At the end of your Jamfile in 123 directory add
SubInclude TOP abd ;
and at the end of your Jamfile in example dir add
SubInclude TOP xyz ;
Then remove references to example and 123 directories from your TOP
directory Jamfile.
Now of course you need to 'cd 123; jam' and 'cd examples; jam'
to build them. If you would prefer to also be able to build everything,
including 123 and examples, from the TOP, then modify SubInclude so it
includes the given dir only once -- see my ImportDir rule that I posted
to the jam mailing list a couple of weeks ago.
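A once-only SubInclude wrapper can be sketched as below. This is only an
illustration, not Igor's ImportDir rule; the SubIncludeOnce name and the
SEEN_SUBDIRS variable are invented here.

```jam
# Include a subdirectory's Jamfile at most once per jam invocation.
rule SubIncludeOnce
{
    # Join the directory tokens into a single key, e.g. TOP!src!123 .
    local key = $(<:J=!) ;
    if ! $(key) in $(SEEN_SUBDIRS)
    {
        SEEN_SUBDIRS += $(key) ;   # global marker list
        SubInclude $(<) ;
    }
}
```

With something like this, both the TOP Jamfile and the example Jamfile can
pull in 123 without its Jamfile being parsed twice.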
From: "Harry Callahan" <boner_ear@hotmail.com>
Date: Fri, 01 Oct 1999 11:45:20 EDT
Subject: Rsh to NT
Is anyone using Jam via a remote shell to NT?
We're primarily a Unix shop, but we have a requirement to
generate some targets on NT. I've got a Jamfile functioning
just fine on Unix and NT (via cmd prompt); however, if I
remsh in and run jam.exe, I get an illegal memory reference.
Wait, it gets better ... Only targets with corresponding
"actions" procedures give me the error. So, 'jam.exe -n'
and 'jam.exe clean' and 'jam.exe install' all work fine.
I even have a simple help action that just does some
echoes ... and it aborts ... Funny thing is, if I jump
to the NT box and issue the jam command natively, it works like a charm.
I've tried a couple of flavors of third-party rshd for NT. Each runs as
a service on NT, allowing remote shell connections.
From: Stephen Dennis <stephen.dennis@onyx.ca>
Subject: RE: Rsh to NT
Date: Fri, 1 Oct 1999 12:02:02 -0400
The problem is that most of the rshd's for NT run as services, so they do not have
the same access to 'desktop' resources as a logged-in user and are usually running
on a different 'WindowStation'. The same goes for the telnetd's, smbrun and others.
See the docs on 'CreateWindowStation' for more information about this mess.
My solution was to build an rshd from the publicly available code (I used the
BSD 4.4 rshd), wire in the appropriate NT process-starting stuff, and run it on
the desktop of a logged-in NT machine. Ultimately I ended up building a custom
client as well, just to make sure everything was semantically correct.
Thus jam (or make, or whatever) has access to all the network connections,
memory, files, and whatever else you would expect.
Ugly, but functional.
Write me for the code for the custom client and server if you would like.
From: Peter Glasscock <peterg@harlequin.co.uk>
Date: Wed, 13 Oct 1999 12:03:56 +0000 (GMT)
Subject: An option some of you may find useful...
Some colleagues have been complaining that Jam continues to build
non-dependent targets when one fails. I myself consider this a feature,
but they would like a way of stopping the build on the first error.
In response to this, I have made a tiny change to the Jam sources to
select this behaviour. I am posting the change here, so that others who
might have had similar thoughts can use it. You can also find the
change in my guest branch //guest/peter_glasscock, which also contains
some important fixes that haven't yet made it into the "official" Jam
sources.
If you think it's a useful feature, lobby Perforce to include this in
the "official" sources too.
Change 233:
edit //guest/peter_glasscock/jam/src/jam.c#5
edit //guest/peter_glasscock/jam/src/jam.h#3
edit //guest/peter_glasscock/jam/src/make1.c#5
Here is the change, for those of you not familiar with (or without) the p4
client software:
==== //guest/peter_glasscock/jam/src/jam.c#4 - c:\users\peterg\perforce\guest\jam\src\jam.c ====
129a130
173c174
< if( ( n = getoptions( argc, argv, "d:j:f:s:t:ano:v", optv ) ) < 0 )
182a184
206a209,211
==== //guest/peter_glasscock/jam/src/jam.h#2 - c:\users\peterg\perforce\guest\jam\src\jam.h ====
309a310
==== //guest/peter_glasscock/jam/src/make1.c#4 - c:\users\peterg\perforce\guest\jam\src\make1.c ====
409a410,411
From: Peter Glasscock <peterg@harlequin.co.uk>
Subject: Re: An option some of you may find useful...
Date: Thu, 14 Oct 1999 08:44:02 +0000 (GMT)
True, but DEBUG_MAKE is on for all debug levels above 0. Since 1 is the
default, you would have to explicitly choose to have no output at all in
order to turn this behaviour off.
The reason I put it within this conditional is that the main reason
people want the build to stop after the first error is to see what
failed. In order for Jam to print out the command that failed, you must
be at least at debug level 1 (i.e. DEBUG_MAKE). In fact, the line that
prints out the failed command is just a few lines above the one I added
(in the same block!).
There doesn't seem to be any point in quitting after the first error if
you can't see what it was. But I suppose you'd be right in saying that
the behaviour isn't entirely consistent.
This is only a suggestion, so you are free to put the line wherever you
like. I don't have any use whatsoever for the 0 debug level, so I
really don't mind what happens at that level :-)
Date: Wed, 20 Oct 1999 19:50:02 +0200
From: Igor Boukanov <igor.boukanov@fi.uib.no>
Subject: Too small updated action list or do I have to modify jam for this?
To write a set of rules to compile Java sources, I had to change the jam
sources, for the following reason:
It is rather difficult to predict all Java '.class' file names, so I
decided to avoid introducing any dependencies between Java sources and
class files, and I wrote something like
rule JavaMain {
Depends $(<) : $(>) ;
Depends all : $(<) ;
}
actions together updated JavaMain {
javac $(>) && touch $(<)
}
with usage:
JavaMain project-name : java-sources ;
The JavaMain rule touches the 'project-name' file to change its time
stamp so it is >= any of its sources.
I use the 'together' and 'updated' modifiers to compile all sources in a
single command, and only those that are newer. But then I faced a
well-known problem:
Although javac tries to compile not only the given sources but also other
files that depend in some sense on them, javac can still miss some.
For example, given a.java with single line
class a { }
and b.java with
class b extends a { }
the command 'javac a.java' does not compile 'b.java' even if a.java is
newer. Thus I have to tell Jam to compile b.java if anything in a.class
that b.class depends on has changed. To make life easier, I translated
this into a requirement to tell Jam to issue
javac a.java b.java
even if only a.java is modified, and at the same time simply
javac b.java
for changes only in b.java
First I added:
Depends b.java : a.java ;
But this does not work: when only a.java changes, jam will constantly
recompile everything, because nothing ever makes b.java newer than
a.java. So instead I added a new rule:
rule JavaDepends {
Depends $(<) : $(>) ;
}
actions JavaDepends {
touch $(<)
}
to make b.java newer than a.java after
JavaDepends b.java : a.java ;
but of course it has the very annoying side effect of changing b.java's
time stamp. So I needed something like JavaDepends that does not modify
b.java. After some attempts to implement this in jam-2.2, I gave up and
added a new built-in rule (see the attached patch for jam-2.2; apply it
via 'patch -lp0 < jam-2.2.AsIfUpdated.patch'):
AsIfUpdated targets : sources ;
### If any of the targets is stable (needs no update) but any of the
### AsIfUpdated sources requires an update or is newer, do not skip this
### target from the source lists of actions with the "updated" modifier
I can write with it:
JavaMain my-project : a.java b.java ;
AsIfUpdated b.java : a.java ;
which will compile both a.java and b.java for a-modifications and only
b.java for b-modifications.
And now the question is:
Do I really need to patch jam for this?
diff -r -bBdc jam-2.2.orig/compile.c jam-2.2/compile.c
*** jam-2.2.orig/compile.c Wed Nov 12 10:22:24 1997
--- jam-2.2/compile.c Wed Oct 20 18:43:24 1999
***************
*** 43,48 ****
* builtin_echo() - ECHO rule
* builtin_exit() - EXIT rule
* builtin_flags() - NOCARE, NOTFILE, TEMPORARY rule
+ * builtin_as_if_updated() - ASIFUPDATED rule
*
* 02/03/94 (seiwald) - Changed trace output to read "setting" instead of
* the awkward sounding "settings".
***************
*** 66,71 ****
static void builtin_echo();
static void builtin_exit();
static void builtin_flags();
+ static void builtin_as_if_updated();
int glob();
***************
*** 121,126 ****
bindrule( "Temporary" )->procedure =
bindrule( "TEMPORARY" )->procedure =
parse_make( builtin_flags, P0, P0, C0, C0, L0, L0, T_FLAG_TEMP );
+
+ bindrule( "ASIFUPDATED" )->procedure =
+ bindrule( "AsIfUpdated" )->procedure =
+ parse_make( builtin_as_if_updated, P0, P0, C0, C0, L0, L0, 0 );
+
}
/*
***************
*** 765,770 ****
}
/*
+ * builtin_as_if_updated() - ASIFUPDATED rule
+ *
+ * If one of targets is stable but any of ASIFUPDATED sources requires
+ * update, do not skip this target from source lists of actions with
+ * "updated" modifier
+ */
+
+ static void
+ builtin_as_if_updated( parse, args )
+ PARSE *parse;
+ LOL *args;
+ {
+ LIST *targets = lol_get( args, 0 );
+ LIST *sources = lol_get( args, 1 );
+ LIST *l;
+
+ for( l = targets; l; l = list_next( l ) )
+ {
+ TARGET *t = bindtarget( l->string );
+ t->as_if_update_deps = targetlist( t->as_if_update_deps, sources );
+ }
+ }
+
+ /*
* debug_compile() - printf with indent to show rule expansion.
*/
diff -r -bBdc jam-2.2.orig/make1.c jam-2.2/make1.c
*** jam-2.2.orig/make1.c Wed Nov 12 10:22:36 1997
--- jam-2.2/make1.c Wed Oct 20 18:43:24 1999
***************
*** 577,583 ****
continue;
if( ( flags & RULE_NEWSRCS ) && t->fate <= T_FATE_STABLE )
! continue;
/* Prohibit duplicates for RULE_TOGETHER */
continue;
if( ( flags & RULE_NEWSRCS ) && t->fate <= T_FATE_STABLE )
! {
! /* Skip only if all t->as_if_update_deps are also stable or unknown
! */
! int should_skip = 1 ;
! TARGETS *cursor;
! for( cursor = t->as_if_update_deps; cursor; cursor = cursor->next )
! {
! if( cursor->target->fate > T_FATE_STABLE )
! {
! should_skip = 0;
! break;
! }
! }
! if( should_skip ) continue;
! }
/* Prohibit duplicates for RULE_TOGETHER */
diff -r -bBdc jam-2.2.orig/rules.h jam-2.2/rules.h
*** jam-2.2.orig/rules.h Tue Nov 18 18:32:17 1997
--- jam-2.2/rules.h Wed Oct 20 18:43:24 1999
***************
*** 157,162 ****
int asynccnt; /* child deps outstanding */
TARGETS *parents; /* used by make1() for completion */
char *cmds; /* type-punned command list */
+
+ TARGETS *as_if_update_deps; /* If this target is stable but
+ * any of as_if_update_deps targets requires
+ * update, do not skip this target from
+ * source lists of actions with "updated"
+ * modifier
+ */
+
} ;
RULE *bindrule();
Date: Fri, 22 Oct 1999 12:22:20 -0700
Subject: Q on "on"
I just ran into a problem using "on", and I'd like to validate my
solution. Here's the problem: I want to add a library to LINKLIBS for a
particular target, and no others. The following code sets LINKLIBS for
ex to be _just_ -limsdk, ignoring the global LINKLIBS variable completely:
LINKLIBS on ex += -limsdk ;
Main ex : ex.cpp ;
If I manually include the global LINKLIBS, it seems to work:
LINKLIBS on ex = $(LINKLIBS) -limsdk ;
I had naively expected that "LINKLIBS on ex" would have a default value
equal to $(LINKLIBS), but it looks like the default is no value. Is
this the right thing to do? As a side question, is there a way to print
out the value of LINKLIBS on ex for debugging purposes? I usually use
ECHO to see selected variables, but I can't get ECHO to print out
$(LINKLIBS on ex) or "$(LINKLIBS) on ex".
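One way to inspect a target-specific variable is to attach a throwaway
action to the target, since 'on' settings are in scope while a target's
actions run (ECHO, by contrast, runs at parse time). A hedged sketch: the
ShowLibs name is invented here, and the action only fires when ex is
actually updated.

```jam
# Echo LINKLIBS as the target 'ex' will see it during its update.
actions ShowLibs
{
    echo "LINKLIBS for $(<): $(LINKLIBS)"
}

LINKLIBS on ex = $(LINKLIBS) -limsdk ;
Main ex : ex.cpp ;
ShowLibs ex ;
```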
Date: Fri, 22 Oct 1999 13:46:32 -0700
Subject: Q on multi-Jamfile projects (again)
I just spent a few hours tracking down an annoying bug. I had a header
file which (for some reason) #included itself:
#ifndef FOO_H
#define FOO_H
#include "foo.h"
#endif
This caused Jam's bind phase to get totally out of whack; the symptom
was an inability to find things which were supposed to exist on the
current SEARCH path. Looking at the debugging output, it seemed that
additions to SEARCH weren't getting added (they did not show up in the
debug output). Comment out the above #include, however, and everything
works fine.
IWBNI Jam protected itself against such (twisted but legal) #include issues :)
Date: Fri, 05 Nov 1999 18:07:24 -0600
From: "Stanford S. Guillory" <guillory@vignette.com>
Subject: Setting a Jam variable and not getting a space at the end
I have the following rule:
actions vEncryptFile1 {
$(ENCRYPTFILE_EXPORT)
set ICU_DATA="$(THIRDPARTY)$(SLASH)icu$(SLASH)data$(SLASH)winnt"
$(ENCRYPT) $(>)
}
The encrypt invocation uses the environment variable to attach additional
directory components to the path. However, jam insists on putting a space
at the end of the line, so the actual environment variable is:
[ICU_DATA="$(THIRDPARTY)$(SLASH)icu$(SLASH)data$(SLASH)winnt" ],
so there is this space in the path. How does one get rid of this?
Date: Sat, 6 Nov 1999 11:57:43 -0800 (PST)
Subject: Re: Setting a Jam variable and not getting a space at the end
It wasn't clear to me from your example how having the trailing space was
getting in the way, but I can tell you where it comes from:
lists.c:
list_print( l )
LIST *l; {
for( ; l; l = list_next( l ) )
printf( "%s ", l->string );
}
The space is there so things don't get mooshed together:
Echo foo bar blat ;
results in:
foo bar blat
instead of:
foobarblat
I suppose you could swap it around and have a leading space, if that would be
better for you -- or you could get rid of the space in list_print, and be
responsible for it in your rules/targets (but that could get messy pretty fast).
Alternatively, since your example was an "actions", you could just use the
shell to get rid of it.
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Mon, 8 Nov 1999 10:21:20 +0100
Subject: Re: Setting a Jam variable and not getting a space at the end
I also had the same problem.
It comes from the var_string() function, which expands strings containing variables.
This function adds a space after each item in the expanded variable, even for the last.
I modified variable.c and added a line before line 203:
for( ; l; l = list_next( l ) ) {
int so = strlen( l->string );
if( out + so >= oute ) return -1;
strcpy( out, l->string );
out += so;
==> if( list_next( l ) ) *out++ = ' ';
}
list_free( l );
This removes the last space in variable expansion.
Date: Mon, 08 Nov 1999 14:11:09 +0100
From: Sven Havemann <s.havemann@tu-bs.de>
Subject: Jam/Irix 6.5
The jam Makefile contains a target
all: jam0
	jam0
For careful people who don't have . in their $PATH (like me) this should
be changed to
all: jam0
	./jam0
Date: Mon, 8 Nov 1999 10:32:44 +1100
From: Graeme Gill <graeme@colorbus.com.au>
Subject: Re: Re: Setting a Jam variable and not getting a space at the end
void
list_print( l )
LIST *l; {
int ptd = 0;
for( ; l; ptd = 1, l = list_next( l ) )
printf( "%s%s", ptd ? " " : "", l->string );
}
From: "Hoff, Todd" <Todd.Hoff@ciena.com>
Date: Mon, 29 Nov 1999 08:47:20 -0800
Subject: simultaneous builds in jam?
Can jam handle builds from multiple CPUs in the same
directory tree? I've parallelized our build into 3 simultaneous
phases, and I'm seeing build failures I don't see when
the build is done serially. I'm wondering if this is a jam
issue or something we have done.
The first step is to completely sync the build area with sources.
Then 3 build targets are executed in jam simultaneously
on 3 different hosts. So, multiple builds are running
at the same time in the same tree. The build targets
should be non-overlapping. For example, windoze libraries
and vxworks libraries are built at the same time. There
shouldn't be a conflict.
The problem I'm seeing is:
MkDir1 Z:\x\build\obj\Actor
mkdir Z:\x\build\obj\Actor
A subdirectory or file Z:\x\build\obj\Actor already exists.
mkdir Z:\x\build\obj\Actor
...failed MkDir1 Z:\x\build\obj\Actor ...
...skipped Z:\x\build\obj\Actor\win32 for lack of Z:\x\build\obj\Actor...
...skipped Z:\x\build\obj\Actor\win32\debug for lack of
Z:\x\build\obj\Actor\win32...
...skipped <Build!obj!Actor!win32!debug>Actor.obj for lack of
Z:\x\build\obj\Actor\win32\debug...
...skipped lib_Actor_win32_dbg.lib for lack of
<Build!obj!Actor!win32!debug>Actor.obj...
What seems to be happening is that the first target to reach a certain
directory wins. The other two targets will not be built. In the above
example the vxworks release target was built, but the win32 and vxworks
debug targets were skipped.
Time is a debt i pay only momentarily.
Date: Mon, 29 Nov 1999 10:37:19 -0800
From: "Olivier Brand" <olivier@intraware.com>
Subject: Jam and Java
I am working on a way to define makefiles for big projects. I have come
up with a solution mixing gmake, jikes and perl (to build dependencies).
Everything works pretty well, but I cannot express the following:
- Circular dependencies.
- Ordering the files to compile (building a hierarchy tree).
Is it possible to do these 2 things with Jam?
Where can I find Java resources for Jam ?
From: "Hoff, Todd" <Todd.Hoff@ciena.com>
Subject: RE: simultaneous builds in jam?
Date: Wed, 1 Dec 1999 15:19:55 -0800
Steve Babiak suggested changing the make directory rule to:
actions MkDir1 {
if not exist $(1)\nul $(MKDIR) $(1)
}
This gates the make directory and works for me! Steve's explanation is:
The "if not exist" is understood to test for existence of a file, not
a directory, on NT. That is why the "nul" file is tacked onto the end
of $(1). So, if the nul file does not exist in the directory, then
call MKDIR. MKDIR on NT creates a directory, _and_ that directory
will always contain a nul file!
Date: Wed, 08 Dec 1999 16:46:44 -0700
From: Lance Johnston <lance@scmlabs.com>
Subject: Future Plans for Jam?
Does anyone know what the future plans for Jam are? Will there be
ongoing releases, and if so, what's the planned functionality? Does any
future functionality include adding some string processing capabilities?
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Fri, 10 Dec 1999 11:00:54 -0800
Subject: IBM MVS Open Edition (OE) aka OS/390 Unix System Services (USS)
I see in the http://public.perforce.com/public/jam/src/RELNOTES that Jam
has been ported to MVS OE. Does anybody have any tips or pointers on using
Jam on MVS? Thanks.
From: "Dowdy, Mark" <mark@ciena.com>
Date: Tue, 21 Dec 1999 13:13:03 -0800
Subject: Maximum "actions" length
Could anyone familiar with the internals of Jam tell
me whether or not there is a maximum line length for
an action? The reason I ask is that we have a java
compilation action that has a line that just reached 1023
characters (before variable expansion). When we recently
modified the action increasing the length of this very long
line, Jam would no longer run the action, even if a target
was out of date. When we remove our new additions to the
long line in the action, the action is run on the out-of-date targets.
Date: Tue, 21 Dec 1999 13:37:32 -0800 (PST)
Subject: Re: Maximum "actions" length
I believe it's MAXLINE in jam.h.
From: Karl Klashinsky <klash@cisco.com>
Subject: Re: Maximum "actions" length
Date: Mon, 27 Dec 1999 15:12:54 -0800
What version of Jam are you running? I'll assume you're running the
"official" 2.2 release.
Is your java compilation action defined with PIECEMEAL? If not, you
might want to re-engineer it so that it uses PIECEMEAL.
We had this problem when running "jam clean". Basically, jam used a
heuristic to decide how many targets to pass to each action invocation --
more or less saying "yeah, that should be < NN chars, do it" -- but the
expanded action would actually be > NN chars, and Jam would catch it
and abort.
Mark Baushke <mdb@acm.org> found this problem, and contributed a patch
to the perforce archive that improved the heuristic so that it would
take a few stabs at coming up with an actions line that didn't
overflow the buffer. It's been working like a charm for us.
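For reference, the modifier combination Karl describes is the one the stock
Jambase uses for cleaning; a minimal illustration, paraphrased from the
jam 2.2 Jambase:

```jam
# "piecemeal" splits $(>) across as many command invocations as needed to
# stay under the command-buffer limit; "together" merges the source lists
# of all Clean invocations into one; "existing" drops sources that aren't
# currently on disk.
actions piecemeal together existing Clean
{
    $(RM) $(>)
}
```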
From: "Randy Roesler" <rroesler@mdsi.bc.ca>
Date: Wed, 29 Dec 1999 14:37:01 -0800
Subject: Possible Bug in Jam
Jam version 2.2.5
I think I might have discovered a bug in the Jam engine itself.
I can post my Jamrules if anybody thinks that will help.
Here is the scenario.
a) A source file exists in some source directory.
Let's call this file X.h
The SEARCH for X.h is correctly set up so that
the target X.h binds to $(TOP)/src/Cmp/X.h
where Cmp is the component that contains X
b) We want to export X.h to some include directory using
symbolic links. To do this:
Depends Cmp/X.h : <exported>X.h ;
Depends <exported>X.h : <source>X.h ;
Depends exports : <exported>X.h ;
LOCATE on Cmp/X.h = $(TOP)/include ;
LOCATE on <exported>X.h = $(TOP)/include/Cmp ;
# create the symbolic link.
SymLink <exported>X.h : <source>X.h ;
c) Some source file (X.cpp) includes Cmp/X.h
HDRRULE on <src!Cmp>X.cpp = HdrRule ;
<etc as required for dependency scanning>
So, we now have two targets referencing the same file,
$(TOP)/src/Cmp/X.h, and two referencing the exported file,
$(TOP)/include/Cmp/X.h.
Dependency scanning should scan X.cpp and propagate the HDR* variables
to Cmp/X.h (which, in turn, might propagate them to other included
files). But there is no reason for the HDR* variables to be propagated
to <exported>X.h or <source>X.h.
Now the trouble is that somehow HDRRULE and HDRSCAN are getting defined
on <exported>X.h and <source>X.h, even though the Jamrules never
explicitly request dependency analysis on these files. It's also not
added by HdrRule.
[Why am I doing this? Because if dependency analysis were allowed on
<exported>X.h, it would notice that X.h includes Y.h (say). Y.h might be
newer than X.h, which would cause Jam to think that it needed to rebuild
Cmp/X.h, which would cause the whole system to rebuild. <exported>X.h is
separated from Cmp/X.h so that the make process can bind <exported>X.h
and leave Cmp/X.h unbound. We need to delay the binding of Cmp/X.h so
that the HDR* variables can be propagated to it BEFORE it is bound.
Note that "jam exports obj" will bind the <exported>* targets first, and
the Cmp/X.h targets are part of the dependency analysis of the obj
target.]
The following is the output from
jam -n -d5 exports | grep QueueTestHandlerCallback
If you trace through it, you can see the HdrRule being called
with <exported>QueueTestHandlerCallback.idl even though
HDRRULE is never set on this target.
(sorry about the line wrapping)
<exported>QueueTestHandlerCallback.idl
/u/rroesler/continous/Top/include
<Srvs!Queue!tests>QueueTestHandlerCallback_c.cpp
QueueTestHandlerCallback_s.h
<Srvs!Queue!tests>QueueTestHandlerCallback_s.cpp :
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
<Srvs!Queue!tests>QueueTestHandlerCallback_c.cpp
QueueTestHandlerCallback_s.h
<Srvs!Queue!tests>QueueTestHandlerCallback_s.cpp :
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
/u/rroesler/continous/Top/Srvs/Queue/tests
/u/rroesler/continous/Top/Srvs/Queue/gen
/u/rroesler/continous/Top/Srvs/Queue/gen/Queue
<Srvs!Queue!tests>QueueTestHandlerCallback_c.cpp
QueueTestHandlerCallback_s.h
<Srvs!Queue!tests>QueueTestHandlerCallback_s.cpp :
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
<Srvs!Queue!tests>QueueTestHandlerCallback.idl = HdrRule
<Srvs!Queue!tests>QueueTestHandlerCallback.idl = ^[ ]*#[ ]*include[ ]*[<"](.*)[">].*$
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
/u/rroesler/continous/Top/include /opt/wle/include
/u/oracle/product/8.1.5/rdbms/demo /u/oracle/product/8.1.5/plsql/public
/u/oracle/product/8.1.5/network/public
/u/rroesler/continous/Top/Srvs/Queue/tests
/u/rroesler/continous/Top/Srvs/Queue/gen
/u/rroesler/continous/Top/Srvs/Queue/gen/Queue /usr/include
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
<Srvs!Queue!tests>QueueTestHandlerCallback_c.cpp
QueueTestHandlerCallback_s.h
<Srvs!Queue!tests>QueueTestHandlerCallback_s.cpp :
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
<source>QueueTestHandlerCallback.idl
/u/rroesler/continous/Top/Srvs/Queue/tests
/u/rroesler/continous/Top/Srvs/Queue/gen
/u/rroesler/continous/Top/Srvs/Queue/gen/Queue
/u/rroesler/continous/Top/include/Queue
<source>QueueTestHandlerCallback.idl
QueueTestHandlerCallback.idl QueueTestHandlerCallback_obj.cpp
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
<Srvs!Queue!tests>QueueTestHandlerCallback_obj.cpp
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
/u/rroesler/continous/Top/Srvs/Queue/tests
/u/rroesler/continous/Top/Srvs/Queue/gen
/u/rroesler/continous/Top/Srvs/Queue/gen/Queue
<Srvs!Queue!tests>QueueTestHandlerCallback.idl = HdrRule
<Srvs!Queue!tests>QueueTestHandlerCallback.idl = ^[ ]*#[ ]*include[ ]*[<"](.*)[">].*$
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
/u/rroesler/continous/Top/include /opt/wle/include
/u/oracle/product/8.1.5/rdbms/demo /u/oracle/product/8.1.5/plsql/public
/u/oracle/product/8.1.5/network/public
/u/rroesler/continous/Top/Srvs/Queue/tests
/u/rroesler/continous/Top/Srvs/Queue/gen
/u/rroesler/continous/Top/Srvs/Queue/gen/Queue /usr/include
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
/u/rroesler/continous/Top/Srvs/Queue/tests
/u/rroesler/continous/Top/Srvs/Queue/gen
/u/rroesler/continous/Top/Srvs/Queue/gen/Queue
<Srvs!Queue!tests>QueueTestHandlerCallback.idl = HdrRule
<Srvs!Queue!tests>QueueTestHandlerCallback.idl = ^[ ]*#[ ]*include[ ]*[<"](.*)[">].*$
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
/u/rroesler/continous/Top/include /opt/wle/include
/u/oracle/product/8.1.5/rdbms/demo /u/oracle/product/8.1.5/plsql/public
/u/oracle/product/8.1.5/network/public
/u/rroesler/continous/Top/Srvs/Queue/tests
/u/rroesler/continous/Top/Srvs/Queue/gen
/u/rroesler/continous/Top/Srvs/Queue/gen/Queue /usr/include
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
make -- <exported>QueueTestHandlerCallback.idl
Queue/QueueHandlerCallback.idl
bind -- <exported>QueueTestHandlerCallback.idl:
/u/rroesler/continous/Top/include/Queue/QueueTestHandlerCallback.idl
time -- <exported>QueueTestHandlerCallback.idl: Tue Nov 16
make -- <source>QueueTestHandlerCallback.idl
Queue/QueueHandlerCallback.idl
bind -- <source>QueueTestHandlerCallback.idl:
/u/rroesler/continous/Top/Srvs/Queue/tests/QueueTestHandlerCallback.idl
time -- <source>QueueTestHandlerCallback.idl: Tue Nov 16
made stable <source>QueueTestHandlerCallback.idl
made stable <exported>QueueTestHandlerCallback.idl
From: "Dowdy, Mark" <mark@ciena.com>
Subject: RE: Maximum "actions" length
Date: Wed, 29 Dec 1999 16:50:23 -0800
FYI, the problem turned out to be a stack corruption
because a line in one of our actions contained a token
with a pair of variables that, when expanded, were longer
than MAXSYM. When var_expand() was doing strcpy's, there
wasn't a check to see if the size of out_buf was exceeded
and strcpy happily scribbled all over the stack.
From: mzukowski@bco.com
Date: Tue, 4 Jan 2000 10:47:05 -0800
Subject: backwards rule...
I use noweb which takes a .nw file and makes any number of other source
files, such as .c and .h files. When I change a .nw file, it may only
change one of the .c files, but not all of them. I'm not sure how to handle
that with Jam.
What I really need to do is have a rule which says to always run noweb on
the .nw files and then check to see if any of the .c or .h files have
changed. I don't want to say that the .c files depend on the .nw file
because sometimes they do and sometimes they don't, depending on which part
of the .nw file has changed. It's kind of a conditional dependency.
Has anyone dealt with a similar situation before?
From: "John Avery" <javery@taxcut.com>
Date: Tue, 4 Jan 2000 14:00:10 -0500
Subject: Re: backwards rule...
For a somewhat similar situation, I run jam twice, the first time with a
"setup" target that may write files needed by the second, main, build.
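Another approach sometimes seen for generated sources (a sketch, not something from this thread; the notangle invocation is illustrative) is to always rerun the generator into a temporary file, and only overwrite each real .c/.h output whose contents actually changed, so timestamps move only when the file differs:

```jam
# Sketch: regenerate unconditionally, but touch each output only if it changed.
NOTFILE regen ;
ALWAYS regen ;

rule NowebExtract {      # NowebExtract output.c : input.nw ;
    Depends $(<) : regen ;
}
actions NowebExtract {
    notangle -R$(<) $(>) > $(<).tmp
    cmp -s $(<).tmp $(<) || cp $(<).tmp $(<)
    rm -f $(<).tmp
}
```

Downstream objects then depend on the .c files as usual, and only recompile when a regenerated file really changed.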
From: "Johnston, Keith" <johnston@vignette.com>
Date: Thu, 6 Jan 2000 10:52:26 -0600
Subject: Running Jam on NT in a Samba-Mounted Directory
I'm new to this list and to Jam, but I can't find anything about this in the
archives or the FAQ.
In the environment here, we have Jam files that are used to build both on
Unix and NT. I was hoping I could set up a shared directory on Unix, and
then run Jam on NT from inside that shared directory.
However, when I try this, Jam says it cannot find the source files:
$ jam
don't know how to make <foo!bar!fileA>fileA.cpp
don't know how to make <foo!bar!fileA>fileB.cpp
...
Jam works fine when I run it in a directory that really contains the source
files on NT instead of the mounted directory.
Any ideas? Is this a Samba configuration problem or a Jam problem?
Date: Mon, 10 Jan 2000 18:05:48 -0800 (PST)
Subject: Re: Future Plans for Jam?
Well, I can't speak for anyone at Perforce (or anywhere else for that
matter), but October/November/December were exceedingly busy months for
me. I have several letters in my Jam mail folder that I've been meaning to
respond to -- including Lance's original one on this subject. I usually
try to respond immediately to any mail on this list that I have a response
to (I did manage to for one that only took two seconds to do) -- but since
I'm only now starting my Christmas shopping, any response that's going to
take a little more thought is still going to have to wait until time gets
a little less crunched (including Lance's original one on this subject :)
From: "Eric Johnson" <ejohnson@metrotools.com>
Date: Tue, 25 Jan 2000 19:04:03 -0500
Subject: Proteus - An alternative to make
I posted this to comp.software.config-mgmt., but I thought
some folks on this list might find it interesting. I hope
this isn't viewed as intrusive. So far the reaction has
been mildly lukewarm. Someone's bound to like it.
Proteus - A New Approach To Make
* A Criticism of Make
Proteus was born out of frustration. A make utility is a crucial part
of the software development process. Yet make alone is never
enough. Most developers and source code managers build up a warehouse
of scripts and utilities to squeeze out the desired behavior. But even
after building up such an arsenal, development groups must continually
wage war with their ad hoc make system.
The costs of this silent war build through the years. Most developers
within an organization are incapable of making bug fixes or
enhancements to their build process. Those that can fear the complex
dependencies of the various utilities cobbled together. Educating new
developers about the vagaries of the build process becomes a rite of
passage. As the source code base grows, the build system fails to
scale and creaks along like band-aids applied to a sinking ship.
While there are a number of flavors of make, this will focus on
critiquing the functionality common to Unix make, GNU make, Digital's
MMS, and Microsoft's NMAKE. The following critique will refer to the
collective common feature set as make.
The only prior knowledge required is an understanding of the
prototypical make relationship, which is this:
target : dependent1 dependent2 dependent3 ...
shell action1
shell action2
In words, the relationship operates like so - if any dependent on the
right hand side of the relationship is out of date with respect to the
target, the shell actions are invoked. Each dependent's out of date
state is determined by recursively locating a target and analyzing its
out of date relationship to its dependents.
* Out of Date Relationship
The simplest hardwired assumption is the method of determining if
something is out of date. In general, make assumes that the target and
dependent have a direct file counterpart from which a timestamp can be
extracted. From here, make performs a time stamp comparison between
the target file and dependent file. Should the dependent's time stamp
be newer than the target's time stamp, the target is deemed out of
date. When the target is out of date, the shell actions are invoked to
bring the target up to date.
For this discussion, this relationship will be called the out of date
relationship. To put it formally, make uses a timestamp out of date
relationship, which is a grave mistake. Consider the following
development scenario.
Suppose there are two development teams. The Tools Team designs a
low-level class framework for implementing an object persistence
model. The Tools Team has its own build and release cycle separate
from others in the organization. The release process consists of
delivering their source code base in its entirety to other internal
development groups.
Now consider the timestamp out of date relationship from the
perspective of those receiving the work of the Tools Team. Suppose the
Applications Team has been hard at work with v1.0 of the Tools Team's
object persistence framework. Since the Applications Team has been
under much pressure, they have moved to a daily build system to speed
the QA process and fold in the rest of the final development. The effect
of this cycle is to cause each object module and executable to have a
very recent timestamp.
Let's say the Tools Team has finished work on v1.5 of their object
persistence framework. After testing their library of source code,
they release it to others within the organization. For sake of
completeness, let's say the Tools Team's last build was on October 1st
with all timestamps for their source code base dating from the
previous day, September 30th. On October 10th, the Applications Team
receives the release, they decide to rebuild their system against
these new changes.
Now comes the challenge. The source code that the Tools Team delivered
is in some sense old. It dates from September 30th. But given that the
Applications Team rebuilds everyday, the Applications Team will have
object modules and executables that date from after October 1st. Yet,
those binaries were built using the previous version of the Tools
Team's library.
As a result of this situation, when one goes to rebuild the system
with the new object persistence framework source files, nothing will
be rebuilt. That's because, from a timestamp perspective, all of the
resulting object modules and executables are in fact up to date in
relation to the files they depend on. From a conceptual perspective,
the files have changed.
A common solution to this problem is to modify the timestamp of all
the source files of the Tools Team's library. This would cause all
object modules and executables to rebuild which would guarantee a
correct build. Yet this is very unappealing for two significant
reasons: it is hackish and it is terribly inefficient. Surely not
everything in the source code base has changed, so why waste so much
time rebuilding when perhaps very little is different? The timestamp
out of date relationship is simply not an accurate way to communicate
changed source files, yet make does not allow one to control the
behavior of the out of date relationship. There are of course better
out of date algorithms to use; rather than hardwiring one choice,
make should let the developer choose.
* Dynamic Dependency Generation
Any large-scale system implemented in C or C++ will have a substantial
source to header file dependency structure. No development group can
be expected to manually create this information. Yet as a vital part
of the build process, make does not offer a simple mechanism to
conveniently generate this information. The root of the problem is
the way in which make evaluates the makefile. Make has two phases of
operation. The first phase is the syntactical parsing of the
makefile. During this phase, the underlying tree of target to
dependent information is built up. In the second phase, the tree is
evaluated and then brought up to date.
In order for make to operate, the complete dependency tree needs to be
computed as per the first phase. After all, there's no way to know
what to actually rebuild in the second phase unless one has the whole
dependency tree. But to compute the whole dependency tree can be time
consuming, especially for a large system. Thus, we're left with a
system where the second phase imposes a high cost on the execution of
the first phase.
A large source code base makes it too expensive to compute the whole
dependency tree each time. A more efficient process would be to
compute the dependencies only for any files that have changed. But
this is the very type of operation that only the second phase of make
can perform. The result is a situation where we need the features of
the evaluation phase to help us generate the information for the tree
population phase.
Some utilities solve this issue via a recursive invocation mechanism,
but this is an inefficient hack. The recursive invocations are
inherently unable to share information without further hacks. The
effect here is that each visited header file must be recursively
evaluated for its complete chain of headers. More machinery would need
to be introduced to eliminate this inefficiency.
The basic cause is that the two phases should really exist as one
integrated phase. Why not allow one to populate the tree as it's
evaluated? Not only would this result in greater flexibility, but it
would result in greater efficiency too. The tree is grown only to the
size that is needed when it is needed. The theme here is lazy
evaluation.
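For what it's worth, jam already leans this way for header
dependencies: it scans sources for #include lines during its binding
phase, growing that part of the tree on the fly rather than up front.
A minimal sketch of the mechanism (the grist and regexp here are
illustrative):

```jam
# jam's lazy include scanning: set per target, applied as files are bound
HDRSCAN on <myapp>foo.c = "^[ ]*#[ ]*include[ ]*[<\"](.*)[\">].*$" ;
HDRRULE on <myapp>foo.c = HdrRule ;

rule HdrRule {
    # $(<) is the scanned file, $(>) the headers found in it
    Includes $(<) : $(>) ;
}
```

The scan runs only on files jam actually visits, which is the lazy
evaluation the essay is asking for, at least for this one case.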
* Parallel Build Capabilities
As the complexity of the products created grows, so grows the source
code base. With this growth comes the increased build time associated
with it. While great strides have been made in distributed systems and
multi-tasking operating systems, make is largely unable to put those
resources to use. A make system should be able to independently build
multiple components, but also have the skill to synchronize those
components that do depend on each other. A make system is more than
just file dependency; it's that plus conceptual dependency at the
global scope.
* Shell Commands Are Inadequate Actions
As part of every make process, one needs to implement the actions that
actually bring the target up to date with respect to the
dependents. Unfortunately, make implements these update actions as
shell operations. For those operating systems with a strong shell,
this direct access to the shell is a blessing given make's weak
variable handling tools. But for those make developers running under
weaker shells, this turns into a hideous curse. Apart from the
Unix-based shells, OpenVMS's DCL and Microsoft's infamous "dos box"
offer a very weak feature set.
Even if these shells were stronger, the point still remains - direct
invocation of shell commands is the wrong approach. This hampers
portability of the make system. In order to accommodate the
ever-growing list of operating systems and Byzantine shells, the make
developer has to push their makefile through mind-numbing contortions.
Rather than rely on the shell as a warmed over interpreter, the good
make system should offer its own notion of a programmable
function. This would allow the make developer to implement more
complex behavior than could be achieved directly in the shell. In
addition, by using an actual programming language, the update actions
can become more operating system independent.
* A VPATH that Works
The goal of any large-scale make system is to minimize the amount of
work that needs to be done for local development. Actually reaching
this goal proves to be frustratingly difficult. One of the ways to
reach this goal is to have a make system that draws upon centrally
built binaries so that local development only rebuilds what's been
changed locally.
To make the issue clear, let's consider the following example. This
isn't the ideal way or the common way in which source code is shared,
but it will give you an idea as to the issue involved here.
/shared_directory/src ; Complete set of source files
/shared_directory/src/bin ; Binaries produced from the above
In the /shared_directory/src directory, we find a complete set of
source files that forms a complete library. The src directory contains
all headers and sources needed to build the library. And under the bin
directory, we'll find all of the binaries that are to be produced from
that source code base. Now, if a developer were to perform any local
development, they would have a directory chain that looks something
like this.
/usr/ejohnson/devel/src
/usr/ejohnson/devel/src/bin
Note that the hierarchy of directories for local development is the
same as that of the centralized development. In order for the user to
do any useful local development, the library must be completely
rebuilt. Thus the local directory becomes a complete mirror of the
central directory. If the library is sufficiently large, this will
consume a considerable amount of time and disk space. This becomes
particularly painful for the developer when the change is minuscule
compared to the overall size of the package.
The ideal way to handle this issue is to allow the local developer to
invoke a make system that can draw upon the centrally built binaries
when possible. More importantly though, the make system should have
enough smarts to know when the local changes require rebuilding of
central source files. To concretize this last point, let's assume a
simple source code base. The library to be built consists of three
source files, foo.h, foo.cpp, and bar.cpp. Furthermore, let's assume
that both source files, foo.cpp and bar.cpp, include foo.h. Thus, any
changes to foo.h would require the recompiling of foo.cpp and
bar.cpp.
For simplicity's sake, let's assume that the centrally built library
is completely up to date. Furthermore, let's assume that the developer
would like to modify foo.h without directly modifying any other source
file. This means that in the local directory, the developer will only
have foo.h and no other source file.
In this scenario, the make system should recompile both source files
from the central directory and place the binary output into the local
directory. The desired source to be recompiled should not be placed
into the local directory nor should the binary be placed into the
central directory. Once both binaries are placed into the local binary
directory, the library would be relinked. The difficulty in the above
scenario is in handling the recompilation of the source file. That's
because when the source file is recompiled, the binary is placed into
the local directory. This means that the target we were looking at
has been given a new home. To put it differently, the link
action for the library will need to be told that the object module was
placed into the local directory rather than the central directory.
This point is particularly important to grasp, yet difficult to
convey, so let's reconsider this issue from make's perspective. When
rebuilding the library, the make system will first consider a
relationship like this.
foo.exe : foo.obj bar.obj
[link actions]
The foo.exe will be produced locally, but the object modules, foo.obj
and bar.obj, could be found either centrally or locally. Let's
suppose that there are no copies locally, so when the make system goes
to look for them, the object modules will be found in the central
directory.
Thus the make system is now effectively working with a relationship
like this.
/central/bin/foo.obj : foo.cpp foo.h
[compiler actions]
As with foo.obj and bar.obj, foo.cpp and foo.h will need to be
searched for in a similar path-like way. Thus, we'll look for those
source files locally and then centrally. In this case, we'll discover
that foo.h (the local copy) is out of date with respect to the object
module in the central directory. This will result in the source file
being recompiled into the local binary directory.
This is all well and good, but we're left with the odd effect of the
target not actually being built. In some sense, foo.obj was
rebuilt. But the actual target, as reported or known to the make
system, /central/bin/foo.obj, is still out of date. With a little hand
waving, we've brought a different foo.obj up to date.
In order to be completely correct though, the make system needs to
have a way to push the new name of that target back up the
evaluation sequence. This means that the dependencies for the link
relationship, the one with foo.obj and bar.obj on the right hand side
would need to be informed of their target's new home. This push back
of new target home information is critical for the success of a shared
directory build system.
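In jam terms, part of this scenario maps onto the SEARCH and LOCATE
binding variables, which control where a target's file is found and
where a built target is placed (a sketch; the directory variables are
hypothetical):

```jam
# Look for sources locally first, then in the central tree
SEARCH on foo.cpp foo.h = $(LOCALDIR) $(CENTRALDIR) ;

# But always place newly built objects in the local bin directory
LOCATE on foo$(SUFOBJ) bar$(SUFOBJ) = $(LOCALDIR)/bin ;
```

This covers finding sources and relocating outputs, though not the
"push back the new home" propagation the essay describes.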
* Make is a Lousy Programming Language
What sums up all of the previous criticisms against make is this
simple observation. A makefile really needs to be thought of as a
program. It's a tool to be used and customized by developers for their
own needs. But as a language in which to write a program, make's suite
of functionality leaves much to be desired.
Thus, the final and fundamental criticism of make is that it is not a
professional language in which one could write any large scale,
portable program. The flow control constructs are weak. There's no
support for procedures with return values thus preventing any top-down
design. Error handling is cryptic and handicapped. In addition,
there's no real notion of structures to create aggregated data
units. Make, as a development language, is simply deficient.
The birth of Proteus really began with the above observations. It
began with the goal of implementing a good make utility that had a
real language in it. But rather than write yet another scripting
language, a simple laundry list of must-have language features was
drawn up.
* Broad based support across popular and fringe operating systems
* Intelligent variable handling - including scoped variables
* Primitive OOP support - classes and polymorphism
* Thread support to implement parallel evaluation of build trees
* Strong ties to an operating system's shell
* Some measure of error handling
The two most popular scripting languages that match most of the above
criteria are Python and Perl. Unfortunately, Perl's thread support is
experimental and incomplete. Thus, the only scripting language that
satisfies the above requirements is Python.
To summarize -
Proteus is a framework for building a make system. It's written entirely
in Python and is freely distributable. If you'd like a copy, send me email.
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Date: Thu, 3 Feb 2000 10:54:07 -0500
Subject: Compiling Debug and Product targets in the same Jamfile.
We're in the process of migrating to JAM, with some difficulty I might add.
In the old environment, we recursively called the same makefile to generate
product (non-debug) and debug variants of the objects, libraries and executables.
The different variants had their own destination directories so that they
could coexist in harmony.
With JAM I tried creating ProdApp and DebugApp rules. I was expecting the
debug objects and executable to end up in a bin/debug directory and the
product objects and executable to end up in a bin/prod directory.
rule ProdApp {
LOCATE_TARGET = $(TARGET_PROD) ;
Depends $(<) : $(>) ;
Main $(<) : $(>) ;
}
rule DebugApp {
LOCATE_TARGET = $(TARGET_DEBUG) ;
Depends $(<) : $(>) ;
Main $(<) : $(>) ;
}
HDRS += ..$(sep)include ;
source = file1.c file2.c file3.c ;
DebugApp progd : $(source) ;
ProdApp prog : $(source) ;
Results with:
...found n target(s)...
...updating n target(s)...
MkDir1 bin.nt
MkDir1 bin.nt\debug
MkDir1 bin.nt\release
Cc bin.nt\release\file1.obj
file1.c
Cc bin.nt\release\file1.obj
file1.c
Cc bin.nt\release\file2.obj
file2.c
Cc bin.nt\release\file2.obj
file2.c
Cc bin.nt\release\file3.obj
file3.c
Cc bin.nt\release\file3.obj
file3.c
Link bin.nt\debug\progd.exe
Chmod bin.nt\debug\progd.exe
Link bin.nt\release\prog.exe
Chmod bin.nt\release\prog.exe
...updated n target(s)...
Do I have to resort to changing the names of the object files (i.e. appending
a 'd' to the debug variants). That seemed to do the trick with the executables.
If I have to run JAM twice with different settings, so be it, but I wished
to avoid that.
Date: Thu, 3 Feb 2000 10:54:13 -0600 (CST)
Subject: Re: Compiling Debug and Product targets in the same Jamfile.
Without getting into detail, my guess is that both executables are
stated to depend upon the same sources rather than on distinct
binaries, so once jam builds the objects from those sources it can
link both exes from them.
If you run with debug output, you can nail it down exactly, although
it's a lot of output to look at.
We do a similar thing here, but I seem to remember a rule MainFromObjects
that expresses the exe to obj dependency directly.
What we do is run jam twice with a BUILD_TYPE=debug and BUILD_TYPE=release
and use the same rules.
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Thu, 3 Feb 2000 17:55:58 +0100
Subject: Re: Compiling Debug and Product targets in the same Jamfile.
This doesn't work, because the 2 invocations of the rule "Main" refer to the
same target.
And this target, of course, has only one location and one set of variables.
We run JAM twice, with and without a "-sDEBUG=1" option.
But you might try to distinguish the targets with a GRIST :
rule ProdApp {
SOURCE_GRIST = prod ;
LOCATE_TARGET = $(TARGET_PROD) ;
CCFLAGS = ... ;
LINKFLAGS = ... ;
Main $(<:G=prod) : $(>) ;
}
rule DebugApp {
SOURCE_GRIST = debug ;
LOCATE_TARGET = $(TARGET_DEBUG) ;
CCFLAGS = .... ;
LINKFLAGS = ... ;
Main $(<:G=debug) : $(>) ;
}
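A hypothetical Jamfile invocation of rules like these, for what it's
worth: with the grist attached, jam treats the two variants as distinct
targets even though the file names match, so both build in one run:

```jam
source = file1.c file2.c file3.c ;

DebugApp prog : $(source) ;   # builds <debug>prog into $(TARGET_DEBUG)
ProdApp prog : $(source) ;    # builds <prod>prog into $(TARGET_PROD)
```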
From: Randy Roesler <rroesler@mdsi.bc.ca>
Subject: RE: Compiling Debug and Product targets in the same Jam file.
Date: Thu, 3 Feb 2000 10:30:18 -0800
Use GRIST: change the Main rule (and other
rules as required) so that the executable and
object files have different GRIST.
For example, add '-release' or '-debug' to the GRIST.
Date: Thu, 3 Feb 2000 15:57:37 -0800 (PST)
Subject: Re: Compiling Debug and Product targets in the same Jamfile.
So far everyone seems to be suggesting you try using GRIST, but I'm not
sure it would do it for you (but maybe I'm just being slow today). For
things like this, I generally just use full paths -- that way, you end up
with two different targets (when you run 'jam', not in your Jamfile).
So, here's some stuff that works. It's basically just a quickie, so you'll
probably need to adjust things for your specific needs. For one thing, I
didn't bother using $(SLASH) (I was being lazy), so you'll need to change
any /'s (I did it on a FreeBSD machine, but it should work on Windoze once
you change those.) I also assumed you always want both flavors built -- if
you want to be able to do one or the other, you can always add that.
Dir struct is:
/tmp/jason/
Jamfile Jamrules src/
Jamfile foo.c
Jamrules:
# Build both debug and production executables
rule myMain {
local i t ;
for i in debug prod {
t = $(SUBDIR)/bin/$(i)/$(<) ;
#Because we aren't using MakeLocate, which ordinarily does the mkdir's
myMkDir $(SUBDIR)/bin/$(i) ;
# Use full-path targets
myMainFromObjects $(t) : $(>:S=$(SUFOBJ)) ;
myObjects $(t:D) : $(>) ;
}
}
rule myMkDir {
Depends first : dirs ;
Depends dirs : $(<) ;
MkDir $(<) ;
}
rule myMainFromObjects {
local d s t ;
#Set directory, source, and target names
d = $(<:D) ;
s = $(d)/$(>) ;
makeSuffixed t $(SUFEXE) : $(<) ;
#Link -s for production executables
if $(d:B) = prod { LINKFLAGS on $(t) += -s ; }
#If executables have suffixes...
if $(t) != $(<) {
Depends $(<) : $(t) ;
NOTFILE $(<) ;
}
# Make compiled sources a dependency of target
Depends exe : $(t) ;
Depends $(t) : $(s) ;
Clean clean : $(t) ;
Link $(t) : $(s) ;
}
rule myObjects {
for i in $(>) { #Compile -g for debug
if $(<:B) = debug { CCFLAGS on $(<)/$(i:S=$(SUFOBJ)) += -g ; }
Object $(<)/$(i:S=$(SUFOBJ)) : $(i) ;
Depends obj : $(<)/$(i:S=$(SUFOBJ)) ;
}
}
Jamfiles:
# Top-level Jamfile
SubInclude TOP src ;
# Jamfile for src
SubDir TOP src ;
myMain foo : foo.c ;
'jam -d2' output (lightly edited so it's not so long):
...found 22 target(s)...
...updating 7 target(s)...
mkdir /tmp/jason/src/bin
mkdir /tmp/jason/src/bin/debug
mkdir /tmp/jason/src/bin/prod
Cc /tmp/jason/src/bin/debug/foo.o
cc -c -g -O -I/tmp/jason/src -o /tmp/jason/src/bin/debug/foo.o /tmp/jason/src/foo.c
Link /tmp/jason/src/bin/debug/foo
cc -o /tmp/jason/src/bin/debug/foo /tmp/jason/src/bin/debug/foo.o
chmod 711 /tmp/jason/src/bin/debug/foo
Cc /tmp/jason/src/bin/prod/foo.o
cc -c -O -I/tmp/jason/src -o /tmp/jason/src/bin/prod/foo.o /tmp/jason/src/foo.c
Link /tmp/jason/src/bin/prod/foo
cc -s -o /tmp/jason/src/bin/prod/foo /tmp/jason/src/bin/prod/foo.o
chmod 711 /tmp/jason/src/bin/prod/foo
...updated 7 target(s)...
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Fri, 4 Feb 2000 10:00:02 +0100
Subject: Re: Compiling Debug and Product targets in the same Jamfile.
You are right:
t = $(SUBDIR)/bin/$(i)/$(<) ; (where $(i) is either 'debug' or 'prod')
will do the same thing as
t = $(<:G=$(i)) ;
LOCATE on $(t) = $(SUBDIR)/bin/$(i) ;
Grists and complete paths are two ways to make a target unique.
(A grist appears on the target's name in Jam, but not in the target's
file name used in actions)
The advantage of grists is that you don't have to rewrite all the rules,
and the directory structure can be more complex. It is simpler to extract
the grist from a target's name than from a $(SUBDIR)/bin/$(i) construct.
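For example, recovering the variant from a gristed target name takes a
single modifier, whereas a full path would need string surgery (a small
sketch; 'prog' is a hypothetical target name):

```jam
t = $(<:G=debug) ;     # attach grist: the target <debug>prog
variant = $(t:G) ;     # select just the grist component of the name
plain = $(t:G=) ;      # strip the grist, leaving the plain file name
```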
Date: Fri, 4 Feb 2000 08:23:33 -0800 (PST)
Subject: Re: Compiling Debug and Product targets in the same Jamfile.
I stand corrected (fighting the flu the past two weeks has clearly clouded my brain).
So, here's a much more elegant solution (Jason, I'd recommend you use this
one instead since it's a lot cleaner):
# Build both debug and production executables
rule myMain {
local i s t ;
for i in debug prod {
SOURCE_GRIST = $(i) ;
t = $(<:G=$(i)) ;
s = $(>:G=$(i)) ;
if $(i) = debug { ObjectCcFlags $(s) : -g ; }
if $(i) = prod { LINKFLAGS on $(t) += -s ; }
Main $(t) : $(s) ;
LOCATE on $(s:S=$(SUFOBJ)) = $(SUBDIR)/bin/$(i) ;
LOCATE on $(t) = $(SUBDIR)/bin/$(i) ;
}
}
Date: Fri, 4 Feb 2000 09:16:13 -0800 (PST)
Subject: Re: Compiling Debug and Product targets in the same Jamfile.
Ack! I'm obviously not functioning well. I forgot about making the
directories. So...here's what should be the final answer (famous last
words, right?)
# Build both debug and production executables
rule myMain {
local i s t ;
for i in debug prod {
SOURCE_GRIST = $(i) ;
t = $(<:G=$(i)) ;
s = $(>:G=$(i)) ;
if $(i) = debug { ObjectCcFlags $(s) : -g ; }
if $(i) = prod { LINKFLAGS on $(t) += -s ; }
Main $(t) : $(s) ;
Depends $(s:S=$(SUFOBJ)) : $(SUBDIR)/bin/$(i) ;
MkDir $(SUBDIR)/bin/$(i) ;
LOCATE on $(s:S=$(SUFOBJ)) = $(SUBDIR)/bin/$(i) ;
LOCATE on $(t) = $(SUBDIR)/bin/$(i) ;
}
}
Date: Thu, 03 Feb 2000 11:22:37 -0800
From: Iain McClatchie <iain@10xinc.com>
Subject: Re: Compiling Debug and Product targets in the same Jamfile.
Jason> In the old environment, we recursively called the same makefile
Jason> to generate product(non debug) and debug variants of the
Jason> objects, libraries and executables.
You can and should handle this in Jam with a single Jam run.
The rule is: every file you wish to build should exist as a seperate
"target" in the Jamfile hierarchy. That means you should have
seperate targets for the debug and production versions of your app,
and those files should be built from seperate debug and production
versions of each object file.
It is perfectly reasonable to have these target names distinguished by
different grist, or by different subdirectory names, or by adding ".d"
before the suffix of every debug object file or executable. We use
suffixes.
It is totally unreasonable to have to specify your build process twice
over in order to maintain these parallel builds. Instead, you write
rules in Jam to do it for you.
At 10x, we build three versions of every executable: a debug version,
an optimized "production" version, and a profiling version used for
performance tuning. This takes no more code in our Jamfiles than
building a single production version would. Here's how we do it:
"Jamrules" contains:
CC = /usr/local/bin/gcc ;
C++ = /usr/local/bin/gcc ;
CC_OPT_FLAGS = -m486 -malign-functions=4 -O2 -g2 ;
C++_OPT_FLAGS = -m486 -malign-functions=4 -O2 -g2
-fno-exceptions -fno-implicit-templates ;
CC_DBG_FLAGS = -m486 -malign-functions=4 -g -gstabs+ ;
C++_DBG_FLAGS = -m486 -malign-functions=4 -g -gstabs+
-fno-exceptions -fno-implicit-templates ;
CC_PROF_FLAGS = -m486 -malign-functions=4 -O2 -g2 -pg ;
C++_PROF_FLAGS = -m486 -malign-functions=4 -O2 -g2 -pg
-fno-exceptions -fno-implicit-templates ;
LINKFLAGS_OPT = -lm ;
LINKFLAGS_DBG = -lm ;
LINKFLAGS_PROF = -lm -pg ;
# MyMain basename : sources : libraries
# generates basename -- fast executable
# basename.dbg -- debuggable executable
# basename.prof -- fast executable with profiling
# libraries listed should be package libraries
rule MyMain {
local s t_fast t_dbg t_prof ;
makeGristedName s : $(>) ;
makeGristedName t_fast : $(<) ;
makeGristedName t_dbg : $(<).dbg ;
makeGristedName t_prof : $(<).prof ;
MainFromObjects $(t_fast) : $(s:S=.opt$(SUFOBJ)) ;
MainFromObjects $(t_dbg) : $(s:S=.dbg$(SUFOBJ)) ;
MainFromObjects $(t_prof) : $(s:S=.prof$(SUFOBJ)) ;
LinkPackageLibraries $(t_fast) : $(3) ;
LinkPackageLibraries $(t_dbg) : $(3) ;
LinkPackageLibraries $(t_prof) : $(3) ;
Depends dbg : $(t_dbg) ;
Depends profile : $(t_prof) ;
LINKFLAGS on $(t_fast) = $(LINKFLAGS_OPT) ;
LINKFLAGS on $(t_dbg) = $(LINKFLAGS_DBG) ;
LINKFLAGS on $(t_prof) = $(LINKFLAGS_PROF) ;
for i in $(s) {
CCFLAGS on $(i:S=.opt$(SUFOBJ)) += $(CC_OPT_FLAGS) ;
C++FLAGS on $(i:S=.opt$(SUFOBJ)) += $(C++_OPT_FLAGS) ;
CCFLAGS on $(i:S=.dbg$(SUFOBJ)) += $(CC_DBG_FLAGS) ;
C++FLAGS on $(i:S=.dbg$(SUFOBJ)) += $(C++_DBG_FLAGS) ;
CCFLAGS on $(i:S=.prof$(SUFOBJ)) += $(CC_PROF_FLAGS) ;
C++FLAGS on $(i:S=.prof$(SUFOBJ)) += $(C++_PROF_FLAGS) ;
Object $(i:S=.opt$(SUFOBJ)) : $(i) ;
Object $(i:S=.dbg$(SUFOBJ)) : $(i) ;
Object $(i:S=.prof$(SUFOBJ)) : $(i) ;
}
}
Here's a typical Jamfile: (in $WORKAREA/gemini/src)
SubDir TOP gemini src ;
MAINSRC = alloc.c chains.c deduce.c equate.c
fingers.c gemini.c hash.c match.c
nxtarg.c properties.c queue.c readgraph.c
simalloc.c simnet.c simread.c skipto.c
userdef.c ;
MyMain <gemini!src>gemini : $(MAINSRC) ;
Here's another: (in $WORKAREA/timepath/src)
SubDir TOP timepath src ;
SubDirHdrs $(TOP)/timepath/include ;
SUBDIRCCFLAGS = -W -Wall ;
SUBDIRC++FLAGS = -W -Wall ;
MAINSRC =
main.cc sigslex.cc sigsgram.cc
sigs.cc hashloc.cc testcase.cc
testcaselex.cc testcasegram.cc
;
MyMain <timepath!src>timepath : $(MAINSRC) : libappbase.a ;
Yacc sigsgram.cc : sigs.y : sigs ;
Lex sigslex.cc : sigs.l : sigs ;
Yacc testcasegram.cc : testcase.y : testcase ;
Lex testcaselex.cc : testcase.l : testcase ;
Note that we've also redefined the Yacc and Lex rules to use the GNU
versions of the tools, and also to cleanly handle multiple scanners
and parsers in the same executable:
# Lex lex.c : lex.l : lexprefix
rule Lex {
defaultGrist csource : $(<) ;
defaultGrist lsource : $(>) ;
Depends $(csource) : $(lsource) ;
MakeLocate $(csource) : $(LOCATE_SOURCE) ;
SEARCH on $(lsource) = $(SEARCH_SOURCE) ;
Clean clean : $(csource) ;
if $(3) { LEXOPTS on $(csource) = -P$(3) ;
} else { LEXOPTS on $(csource) = ; }
Lex1 $(csource) : $(lsource) ;
}
actions Lex1 { $(LEX) $(LEXFLAGS) $(LEXOPTS) -o$(<) $(>) }
# Yacc yacc.c : yacc.y : yaccprefix
rule Yacc {
defaultGrist csource : $(<) ;
defaultGrist ysource : $(>) ;
defaultGrist others : $(<:S=.cc.h) $(<:S=.cc.output) ;
Depends $(csource) $(others) : $(ysource) ;
MakeLocate $(csource) : $(LOCATE_SOURCE) ;
MakeLocate $(others) : $(LOCATE_SOURCE) ;
SEARCH on $(ysource) = $(SEARCH_SOURCE) ;
Clean clean : $(csource) $(others) ;
if $(3) { YACCPREFIX on $(csource) = -p $(3) ;
} else { YACCPREFIX on $(csource) = ; }
Yacc1 $(csource) : $(ysource) ;
}
actions Yacc1 { $(YACC) -o $(<) $(YACCFLAGS) $(YACCPREFIX) $(>) }
# defaultGrist targvarname : targnames
rule defaultGrist {
local i res ;
res = ;
for i in $(>) {
if $(i:G) { res += $(i) ;
} else { res += $(i:G=$(SOURCE_GRIST)) ; }
}
$(<) = $(res) ;
}
(Yes, Laura, I know I'm supposed to check these into the Jam public
repository, and I haven't yet...)
Date: Thu, 03 Feb 2000 13:12:03 -0800
From: Iain McClatchie <iain@10xinc.com>
Subject: Re: Compiling Debug and Product targets in the same Jamfile.
My example referenced a rule called LinkPackageLibraries. You
should change that to use the LinkLibraries rule, unless you want
to pick up the 10x package system as well. In the latter case,
I've posted it to the Jam mailing list before; you should get it
from the archives.
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Fri, 4 Feb 2000 18:52:38 +0100
Subject: Re: Compiling Debug and Product targets in the same Jamfile.
Or shorter, using the LOCATE_TARGET variable :
# Build both debug and production executables
rule myMain {
local i s t ;
for i in debug prod {
SOURCE_GRIST = $(i) ;
LOCATE_TARGET = $(SUBDIR)/bin/$(i) ;
t = $(<:G=$(i)) ;
s = $(>:G=$(i)) ;
if $(i) = debug { ObjectCcFlags $(s) : -g ; }
if $(i) = prod { LINKFLAGS on $(t) += -s ; }
Main $(t) : $(s) ;
}
}
Date: Fri, 4 Feb 2000 13:36:12 -0800 (PST)
Subject: Re: Compiling Debug and Product targets in the same Jamfile.
Yes, that's cleaner -- and takes care of the directories getting made, too.
(I guess not working with Jam for so long [can it really be six years? --
ouch!] has caused me to lose even more of it than I thought I
had...bummer. Oh well, at least Jason now knows lots of different ways to
not do it :)
Date: Fri, 11 Feb 2000 15:26:26 -0500
From: Stefan Vorkoetter <stefan@freedomintelligence.com>
Subject: Jam is jamming my Win98 box
I use Jam at work on a Win2000 box to build a large database product,
implemented primarily in C++. We use MSVC 5 at the moment. I have an
identical setup at home _except_ that the OS is Win98 2nd Edition
instead of Win2000.
Whenever I run Jam, I encounter two types of problems:
1. Jam doesn't seem to see that certain directories are already
there. On my work machine it creates them only when they are
missing, but on my home machine it always attempts to create
these directories, whether they are there or not.
2. When Jam tries to build something, after a number of commands
have been issued by Jam, the Command Prompt window hangs.
I don't know if these two problems are related, but they are keeping
me from being able to use Jam.
I tried downloading the source, in hopes of being able to debug
things, and the same thing happens once the initial Makefile starts up jam0.
Before I spend a whole lot of time debugging, I was wondering if
anyone has had (and hopefully solved) this problem.
From: "Robert Morgan" <robertm@captivation.com>
Subject: RE: Jam is jamming my Win98 box
Date: Fri, 11 Feb 2000 12:58:15 -0800
If I recall, this is caused by Jam not reading timestamps due to a
potentially malformed pathname. I fixed this in my local source -- I
suppose it should be folded into the main source.
The change to FILENT.C, file_dirscan():
sprintf( filespec, "%s/*", dir );
becomes:
if (dir[strlen(dir)-1] == '\\')
sprintf( filespec, "%s*", dir );
else
sprintf( filespec, "%s/*", dir );
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Date: Mon, 14 Feb 2000 16:05:09 -0500
Subject: Fairly new to JAM
I've been looking through the documentation and looked at the mail archives,
but I cannot figure out why this doesn't work:
rule GenAppProj { Depends $(<) : $(>) ; }
actions GenAppProj { genproj.py -tguiapp $(<) $(>) }
source = file1.c file2.c file3.c ;
GenAppProj prog.dsp : $(source) ;
The rule is referenced, but the action is never called. Looking at an earlier
example in the mailing list, I found a reference to generating man pages from
header files. If I add "Depends files : $(<) ;" to the rule, my action is
called, but I don't know why. I can't find any reference to "Depends files"
anywhere in the documentation.
Thanks. In case you're wondering, I'm trying to generate a Visual C++ project
file from within jam. If we let the programmers do this by hand, they tend
to set the warning level too low (Gee! It worked for me).
Date: Mon, 14 Feb 2000 13:22:05 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: Fairly new to JAM
Jam builds the 'all' target by default. If you want something built
when you just type 'jam' you need to make 'all' depend on it.
Your GenAppProj makes the .dsp depend on all the .c files, but the
'all' target doesn't depend on the .dsp.
Does it work when you run 'jam proj.dsp'?
The 'files' target that you used is part of the Jambase file built
into jam. If you look at the Jambase file (included with the source
distribution of jam) you'll see that it makes 'files' depend on 'all'.
So you had the following dependency when you had "Depends files ..."
in there:
all <- files <- proj.dsp
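Putting Matt's advice back into the original rule, a minimal sketch (the 'files' hookup is the Jambase pseudo-target discussed above; genproj.py is the poster's own script):

```jam
rule GenAppProj {
    Depends $(<) : $(>) ;
    Depends files : $(<) ;   # reachable from 'all' via 'files'
}
actions GenAppProj {
    genproj.py -tguiapp $(<) $(>)
}
```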
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Subject: RE: Fairly new to JAM
Date: Tue, 15 Feb 2000 09:43:58 -0500
Yes that is the answer. Using the jam proj.dsp does execute
the GenAppProj action. I mistakenly thought 'jam -a'
would achieve the same effect, but I guess it only rebuilds
the targets associated with 'all'.
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Date: Tue, 15 Feb 2000 16:51:26 +0100
Subject: FW: Jam/MR questions
Hi, I just subscribed to the JAM mailing list and I'm pretty new to
JAM. As I have bothered Christopher with a problem, I think it's
better suited for the mailing list. Perhaps someone can help me with it.
From: Robert M. Muench [mailto:robert.muench@robertmuench.de]
Sent: Tuesday, February 15, 2000 11:25 AM
Subject: Jam/MR questions
I'm building my OpenAmulet library project with Jam, but I have some problems with it.
I have a source tree structure like:
.../source/XYZ
Where XYZ are functional parts of the library. Inside .../source I
have a 'jamrules' file which sets some variables like C++FLAGS etc.
Then I have a 'jamfile' in this directory, which contains a line for
each XYZ directory of the following form:
SubInclude TOP XYZ;
Inside each XYZ directory I have a further 'jamfile' like this, with
'Object ...' for every source file in the directory XYZ:
SubDir TOP XYZ ;
Object ABC : ABC.cpp ;
Then I started JAM with:
set TOP=f:\openamulet\source
jam -a -n
...found 7 target(s)...
Why does nothing get built? It seems that JAM doesn't recognize the
'Object' directive. I wanted to use 'Objects' but don't know where to
put this command (.../source/jamfile ?) or how to specify it.
I am trying to build a DLL; this means that the source files need to be
compiled to obj files and then linked together into a DLL.
Date: Tue, 15 Feb 2000 10:48:37 -0600 (CST) (robert.muench@robertmuench.de)
Subject: Re: FW: Jam/MR questions
Object is a pretty low-level rule, and I believe that the dependencies
it (actually the C++ rule) sets are only between the object and the .cpp
file. (Objects would probably be a better choice, or one of the library rules.)
The upshot is that if you just say jam, or jam all it won't get built
because the all target does not depend upon ABC.
I just looked up -a, and it says to build all targets, even if they are
up to date. The default target is still 'all', so jam rebuilds only
targets reachable from 'all', and since none are, it does nothing.
jam is very dependency driven. Nothing will happen if a dependency does
not exist or need to be updated.
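As a sketch of that reachability point (the file names here are assumptions): the Jambase Objects rule hooks each object under the 'obj' pseudo-target, which 'all' depends on, whereas a bare Object by itself does not.

```jam
# Hypothetical Jamfile fragment; ABC.cpp is a placeholder name.
SubDir TOP XYZ ;
Objects ABC.cpp ;               # hooks the object under 'obj', so plain 'jam' builds it
# Object ABC.obj : ABC.cpp ;    # by itself, reachable only as 'jam ABC.obj'
```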
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Subject: RE: FW: Jam/MR questions
Date: Wed, 16 Feb 2000 13:15:28 +0100
Hi, first I can say that I have solved the "nothing happens" problem.
I have written a script which generates Jamfiles automatically for all
subdirectories. Unfortunately the directory names given in SubDir were
shifted by one directory :-| I finally used the Objects rule; pretty
neat and easy to set up.
However, what I now have is that JAM builds all the objects but
nothing more. That's OK, as there is no target, library etc. specified yet.
Here are some more questions:
1. I would like to call the compiler with a bunch (around 10) of
filenames to compile at once, and not for each single file... speeds
up compilation a lot. Is this possible?
2. The project I have at hand here uses macros for includes (I know
not very nice but it's old code). The problem is that JAM won't
recognize this and therefore doesn't find all dependencies. Is there
any other solution than replacing all macros with real #include statements?
3. Is it possible to start several compiler threads to speed up compiling?
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Date: Fri, 18 Feb 2000 10:35:19 -0500
Subject: Has anyone written a JAM syntax file for VIM?
Before I start writing one myself, I was wondering if anyone had already
done this.
Date: Fri, 18 Feb 2000 09:08:19 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: Has anyone written a JAM syntax file for VIM?
" Vim syntax file
" Language: Jam M/R file
" URL: none
" Last Change: Feb 10, 2000
" Remove any old syntax stuff hanging around syn clear
syn case match
" Jam keywords
syn keyword jamstyleKeywords all if else for in actions rule local switch include case on default
syn keyword jamstyleActionModifiers bind existing ignore piecemeal quietly together updated
syn keyword jamstyleBuiltinVars UNIX NT VMS MAC OS2 OS OSVER OSPLAT
	\ JAMVERSION JAMUNAME HDRSCAN
syn keyword jamstyleBuiltins ALWAYS Depends ECHO EXIT DIE INCLUDES LEAVES
	\ NOCARE NOTFILE NOUPDATE TEMPORARY SEARCH LOCATE HDRRULE
" Jambase stuff
syn keyword jamstyleJambase first shell files lib exe obj dirs clean uninstall
	\ Archive As BULK Bulk C++ Cc CcMv Chgrp Chmod Chown Clean CreLib FILE File Fortran
	\ GenFile GenFile1 HDRRULE HardLink HdrRule Install InstallBin InstallFile
	\ InstallInto InstallLib InstallMan InstallShell Lex Library LibraryFromObjects
	\ Link LinkLibraries Main MainFromObjects MakeLocate MkDir MkDir1 Object
	\ ObjectC++Flags ObjectCcFlags ObjectHdrs Objects Ranlib RmTemps Setuid Shell
	\ SubDir SubDirC++Flags SubDirCcFlags SubDirHdrs SubInclude Undefines
	\ UserObject Yacc Yacc1 addDirName makeCommon makeDirName makeGrist
	\ makeGristedName makeRelPath makeString makeSubDir makeSuffixed
	\ unmakeDir AR ARFLAGS AS ASFLAGS AWK BINDIR C++ C++FLAGS CC CCFLAGS
	\ CHMOD CP CRELIB CW CWGUSI CWMAC CWMSL DOT DOTDOT EXEMODE FILEMODE
	\ FORTRAN FORTRANFLAGS HDRS INSTALL JAMFILE JAMRULES JAMSHELL LEX
	\ LIBDIR LINK LINKFLAGS LINKLIBS LN MANDIR MKDIR MSIMPLIB MSLIB
	\ MSLINK MSRC MV NOARSCAN OPTIM RANLIB RCP RM RSH RUNVMS SED
	\ SHELLHEADER SHELLMODE SLASH STDHDRS STDLIBPATH SUFEXE SUFLIB SUFOBJ
	\ UNDEFFLAG WATCOM YACC YACCFILES YACCFLAGS SUBDIRCCFLAGS
	\ RELOCATE SEARCH_SOURCE SUBDIRHDRS MODE OSFULL SUBDIRASFLAGS SUBDIR_TOKENS
	\ LOCATE_SOURCE LOCATE_TARGET UNDEFS SOURCE_GRIST GROUP OWNER NEEDLIBS SLASHINC
" Comments
syn match jamstyleComment "^\s*#.*"
" Errors
syn match jamstyleError "[^ \t];"hs=s+1
if !exists("did_jamstyle_syntax_inits")
let did_jamstyle_syntax_inits = 1
hi link jamstyleKeywords Keyword
hi link jamstyleActionModifiers Keyword
hi link jamstyleBuiltins Special
hi link jamstyleBuiltinVars Special
hi link jamstyleError Error
hi link jamstyleComment Comment
hi link jamstyleJambase Identifier
"hi link jamstyleOption String
"hi link jamstyleTag Special
"hi link jamstyleTagN Identifier
"hi link jamstyleTagError Error
endif
let b:current_syntax = "jamstyle"
Date: Mon, 21 Feb 2000 14:59:55 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Prob with spaces in variables
I have this Jamfile:
TEST = "foo bar" ;
TEST += baz ;
ECHO \"$(TEST)\" ;
TEST2 = "foo bar baz" ;
ECHO \"$(TEST2)\" ;
It produces this:
"foo bar" "baz"
"foo bar baz"
Is there any way I can convince Jam at variable expansion time that
the variable being expanded is a scalar and not a list? I'd like a
way to make the TEST variable expand just like the TEST2 variable
regardless of how it got its value.
I'm running into this with the MSVCNT variable on WinNT. It has a
"Program Files" component, and since it comes from the environment Jam
doesn't know not to break it up. So it is impossible to use the default
MSVCNT value (without using the space-free 8.3 version of the name,
which is undesirable).
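One workaround (hedged; the path below is an assumption, not from the thread) is to re-assign the variable inside Jamrules, since a quoted assignment stays a single list element:

```jam
# Hypothetical Jamrules fragment: override the environment's MSVCNT
# so the embedded space survives as part of one list element.
MSVCNT = "C:\\Program Files\\DevStudio\\VC" ;
```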
Date: Mon, 21 Feb 2000 17:26:04 -0600 (CST)
Subject: Re: Prob with spaces in variables
We do not install MSVC++ in a path with blanks in it; this is part of our SOP.
P.S. But isn't TEST the way you'd want it, so that a path with blanks would
keep the "'s around it?
Date: Mon, 21 Feb 2000 21:59:29 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Referencing target specific variables within rules
Say I set a variable on a target:
VAR on target = value ;
Then later on, in some other rule I want to see what the value of "VAR
on target" is. Is this possible?
I know that VAR will get bound to value within actions that update
target, but I want to examine the value within a rule.
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Date: Tue, 22 Feb 2000 10:51:09 -0500
Subject: A couple of questions: Recursive Jam, the Clean rule.
Does Jam support a hierarchy of Jamfiles, where a parent Jamfile calls its
child Jamfiles? Our old build environment was made up of a large tree of
directories and subdirectories. Previously we used scripts and a directory
list to determine the order in which the makefiles were called. It would be
nice to add a simple rule in our Jamfile describing the relationship between
it and its children. This way it would be a simple matter to go to the root
of the product and build everything, or go to a subdirectory and only build
that branch.
In our old build environment, we used to have a release rule that would
wipe/clean all targets and rebuild the product and debug targets. When I
tried to implement the same rule in Jam (Depends release : clean prod debug ;)
Jam would stop after the clean rule. This makes sense, but it would be nice
to have it continue on and rebuild the targets. Is this moot since 'jam -a'
will rebuild everything? Does it clean the existing targets first?
Date: Tue, 22 Feb 2000 11:03:53 -0500
From: Donald Sharp <sharpd@cisco.com>
Subject: Re: A couple of questions: Recursive Jam, the Clean rule.
Yep, Look at:
http://public.perforce.com/public/jam/src/Jamfile.html
Specifically the section labeled, Handling Directory trees.
Subject: Re: Referencing target specific variables within rules
From: "Mark D. Baushke" <mark.baushke@solipsa.com>
Date: Mon, 21 Feb 2000 23:50:28 -0800
Nope. Once you have set the value for the target, you won't be able
to fetch it out for later processing. If you need a copy of it, you
will have to set up your own copy:
VAR on $(>) = $(VALUE) ;
VAR_$(>) = $(VALUE) ;
VAR_saved += $(>) ;
or something might do the trick. Then later, you can play with going
through $(VAR_saved) to find all of the VAR_$(xxx) values again:
for xxx in $(VAR_saved) {
local myvar ;
myvar = $(VAR_$(xxx)) ;
# examine $(myvar) as desired
# do something weird like add a new element to the front
VAR on $(xxx) = something weird to the front $(myvar) ;
}
Of course, you must be careful that the VAR_saved value is never
a valid target as in 'VAR on saved = $(VALUE) ;'
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Date: Thu, 24 Feb 2000 15:57:44 -0500
Subject: SubDir, SubInclude and relative paths.
Following the documentation, I've started to use SubInclude to setup a tree
of Jamfiles.
I had originally setup a Jamfile for testing purposes in
$(TOP)/common/utilities:
#
# ... and the directory hierarchy.
#
SubDir TOP common utility bin.nt ;
#HDRS += $(DOTDOT)$(SLASH)include ;
HDRS += $(COMMONINC) ;
source = cxdrcout.c cxdrhout.c cxdrmain.c
cxdrpars.c cxdrscan.c cxdrutil.c ;
#
# EXE Targets
#
Main cxdrgend : $(source) ;
This Jamfile worked fine, and since I added 'bin.nt' to the end of the
SubDir clause, objects and the executable ended up in that subdirectory.
Hurray! And even though the source was in $(TOP)/common/utility, Jam was
able to locate it.
I've now added a parent jamfile in $(TOP)/common:
SubInclude TOP common utility ;
#
# ... and the directory hierarchy.
#
SubDir TOP common ;
Because of this, I was forced to use absolute paths in my original Jamfile's
HDRS definition.
When I run jam from $(TOP)/common/utility everything works fine, but when I
run it from one directory up ( $(TOP)/common ) I get the following errors:
D:\ws\main\common>jam -a
don't know how to make <common!utility!bin.nt>cxdrcout.c
don't know how to make <common!utility!bin.nt>cxdrhout.c
don't know how to make <common!utility!bin.nt>cxdrmain.c
don't know how to make <common!utility!bin.nt>cxdrpars.c
don't know how to make <common!utility!bin.nt>cxdrscan.c
don't know how to make <common!utility!bin.nt>cxdrutil.c
...found 26 target(s)...
...can't find 6 target(s)...
...can't make 7 target(s)...
...skipped <common!utility!bin.nt>cxdrcout.obj for lack of
<common!utility!bin.nt>cxdrcout.c...
...skipped <common!utility!bin.nt>cxdrhout.obj for lack of
<common!utility!bin.nt>cxdrhout.c...
...skipped <common!utility!bin.nt>cxdrmain.obj for lack of
<common!utility!bin.nt>cxdrmain.c...
...skipped <common!utility!bin.nt>cxdrpars.obj for lack of
<common!utility!bin.nt>cxdrpars.c...
...skipped <common!utility!bin.nt>cxdrscan.obj for lack of
<common!utility!bin.nt>cxdrscan.c...
...skipped <common!utility!bin.nt>cxdrutil.obj for lack of
<common!utility!bin.nt>cxdrutil.c...
...skipped cxdrgend.exe for lack of
<common!utility!bin.nt>cxdrcout.obj...
...skipped 7 target(s)...
Why is the SubInclude screwing up the search for my source? Do I have
to prefix my source with absolute paths?
I would appreciate any help ... I thought I had a handle on Jam, but this
one throws me.
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Subject: RE: SubDir, SubInclude and relative paths.
Date: Thu, 24 Feb 2000 16:05:27 -0500
Adding the absolute path to the source fixes the immediate
issue, but creates another one.
While the executable ends up in bin.nt subdirectory, the
object files are generated in the same directory as the source.
Date: Thu, 24 Feb 2000 13:20:15 -0800 (PST)
Subject: RE: SubDir, SubInclude and relative paths.
I thought we already went through this stuff and, after several roundabout
attempts on my part, eventually gave you the succinct way to get things built
into where you need them to go. Is that not working for you?
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Date: Thu, 2 Mar 2000 17:10:13 -0500
Subject: Is this a Bug?
I've been writing Jam files recently (with your help) with the intent that
they are to be used across many platforms.
Recently, I stopped using grist to identify target directories, because
objects, libraries and executables are all going to different directories.
In our Jamfile, I've been generating names for object files by combining
LOCATE_TARGET (../../common/utility/bin.nt) and source names, e.g.:
rule debugMain {
t_objs = $(>:S=d$(SUFOBJ)) ;
ourMainFromObjects $(t_dbg) : $(t_objs) ;
}
rule ourMainFromObjects {
local s t ;
s = $(LOCATE_TARGET)$(SLASH)$(>) ;
}
The rules work well under NT, but on UNIX (Solaris and Linux) the object
name picks up an extra copy of $(LOCATE_TARGET). So instead of the C++ action
being told to compile target ../../utility/common/bin.linux/file.o, the C++
action is being told to compile target
../../utility/common/bin.linux/../../utility/common/bin.linux/file.o
By temporarily replacing the Object and C++ rules I've been able to confirm
that those rules are receiving the correct target name, but somehow the C++
action is receiving the screwed-up target name as $(<).
Has anyone run across this problem before?
What the problem boils down to is that the object target picks up an extra
copy of $(LOCATE_TARGET).
Date: Thu, 02 Mar 2000 22:32:51 +0000
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: Is this a Bug?
It sounds like you may be fully specifying the target filenames as well
as setting the LOCATE var on them. You should probably stick to doing
one or the other.
Read up on the built in LOCATE variable and maybe take a look at the use
of the MakeLocate rule in the default Jambase.
Basically, the deal with LOCATE is if you do this (which is one of the
things the Jambase MakeLocate does):
LOCATE on foo.exe = some/directory/somewhere ;
Then you can (and should) refer to foo.exe by "foo.exe" in all your
rules. But when an action gets run, the foo.exe gets magically bound to
the "some/directory/somewhere/foo.exe" filename.
Date: Fri, 03 Mar 2000 10:40:10 -0800
From: "andy nguyen" <aknguyen@onebox.com>
Subject: how to lock a branch in P4 (newbie)
I'm totally new to Perforce. Question: how can we lock a branch in
perforce. I tried to lock it, and P4 came back with a message "locked
0 files". Also there is no online help for lock.
From: "Dowdy, Mark" <mark@ciena.com>
Date: Fri, 3 Mar 2000 11:48:32 -0800
Subject: Meaning of "parents" in Debug Output
Could someone explain to me what the "parents" status means in
the debug output (i.e. "time -- <filename> : parents"). It
seems that files with this designation are later marked "stable"
even if they don't exist (and need to be built). Thanks.
Date: Fri, 3 Mar 2000 14:21:02 -0600 (CST)
Subject: Re: how to lock a branch in P4 (newbie)
If you want to keep anybody from changing it, it's probably better to
do a 'p4 protect' and set the branch to read-only.
online help on lock says:
$ p4 help lock
lock -- Lock an opened file against changelist submission
p4 lock [ -c changelist# ] [ file ... ]
The open files named are locked in the depot, preventing any
user other than the current user on the current client from
submitting changes to the files. If a file is already locked
then the lock request is rejected. If no file names are given
then lock all files currently open in the changelist number given
or in the 'default' changelist if no changelist number is given.
which doesn't sound quite like what you want, I think.
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Date: Sun, 5 Mar 2000 13:30:23 +0100
Subject: Jam & includes by macro
The project I have at hand here uses macros for includes (I know
not very nice but it's old code). The problem is that JAM won't
recognize this and therefore doesn't find all dependencies. Is there
any other solution than replacing all macros with real #include
statements?
Can Jam be configured to recognize such a macro usage?
Date: Sun, 05 Mar 2000 22:56:57 +0000
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: Jam & includes by macro
That depends -- do the header filenames appear when the macros are
used? If so, you might be able to tweak HDRPATTERN to suit your needs.
Check out how HDRPATTERN is used in the built in Jambase.
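For reference, the stock pattern looks roughly like the following (a simplified sketch; the exact regexp varies between jam versions, so check the Jambase shipped with your jam). A tweak would widen the bracketed part, though it can only help when a literal filename appears at the point of use:

```jam
# Approximate Jambase-style header scanning setup (simplified):
HDRPATTERN = "^[ ]*#[ ]*include[ ]*[<\"]([^\">]*)[\">].*$" ;
HDRSCAN on $(>) = $(HDRPATTERN) ;   # applied to each scanned source file
HDRRULE on $(>) = HdrRule ;         # HdrRule records the found dependency
```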
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Subject: RE: Jam & includes by macro
Date: Mon, 6 Mar 2000 09:44:32 +0100
Hi, unfortunately not, it looks like this:
#include MYINCLUDE__H
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Date: Thu, 9 Mar 2000 10:34:21 -0500
Subject: String deconstruction.
Jam is quite capable of constructing strings, but I can't find anything
related to pulling a string apart like strtok().
I would love to be able to create a rule that generates a grist from
an actual directory specification.
Rather than:
makeGrist gristVar : dir1 dir2 dir3 ;
I would like to be able write a function:
myMakeGrist gristVar : /dir1/dir2/dir3 ;
Am I missing something? I've been all through the documentation and
haven't seen anything related to pulling strings apart.
Date: Thu, 09 Mar 2000 00:13:37 +0100
From: Ullrich Koethe <koethe@informatik.uni-hamburg.de>
Subject: Newbie Q: Automated unit test
I'm trying to build Jam rules which ensure that a package is only built
if a number of unit tests have been executed successfully. However, in
my Jamfiles, the package gets built even if the unit tests fail. What
needs to be changed to make the idea work?
# Jamrules
Depends obj lib exe : unittest ;
NOTFILE unittest ;
rule TestedLibrary {
UnitTest $(<) : $(3) ;
# this rule shouldn't succeed if the UnitTest failed
Library $(<) : $(>) ;
}
rule UnitTest { Depends unittest : $(<) ; }
actions UnitTest { $(>) # produces nonzero exit code upon failure
}
# Jamfile
Main test : test.c ;
TestedLibrary libmod1 : mod1.c : test ;
# Output
UnitTest libmod1
/home/koethe/C++/make/sandbox/src/mod1/test
...failed UnitTest libmod1 ...
Cc /home/koethe/C++/make/sandbox/src/mod1/mod1.o
Archive /home/koethe/C++/make/sandbox/src/mod1/libmod1.a
(The last line shouldn't be there)
Date: Thu, 09 Mar 2000 10:01:22 -0700
From: Lance Johnston <lance@scmlabs.com>
Subject: Re: String deconstruction.
Sorry, there ain't no way. You've encountered one of Jam's biggest limitations.
Date: Thu, 9 Mar 2000 10:27:32 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: String deconstruction.
If you want to split at the directory separator, then you're in luck.
The :P variable expansion modifier gives just enough functionality to
get what you want. Here is what I've written:
# splitDir var : dir ;
#
# Split variable 'dir' containing a filename into its component
# parts and assign it to 'var'.
#
rule splitDir {
$(1) = ;
splitDirImpl $(1) : $(2) ;
}
rule splitDirImpl {
local parent ;
parent = $(2:P) ;
if $(parent) && $(parent) != $(SLASH) && $(parent) != $(2) {
splitDirImpl $(1) : $(parent) ;
$(1) += $(2:BS) ;
} else {
$(1) += $(2) ;
}
}
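A hypothetical wrapper combining splitDir with the Jambase makeGrist rule (the name myMakeGrist comes from the question above; note that the leading component keeps its slash, so some trimming may be needed):

```jam
rule myMakeGrist {
    # myMakeGrist gristVar : /dir1/dir2/dir3 ;
    local parts ;
    splitDir parts : $(>) ;
    makeGrist $(<) : $(parts) ;
}
```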
Date: Thu, 9 Mar 2000 11:08:01 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: Newbie Q: Automated unit test
You need to make the library depend on the unit test. You probably
want to create a fake intermediate target for the unit test and then
have the library depend on it. Something like:
rule TestedLibrary {
local test library ;
test = $(<:S=.unittest) ;
NOTFILE $(test) ;
library = $(<:S=$(SUFLIB)) ;
Depends $(library) : $(test) ;
UnitTest $(test) : $(3) ;
Library $(library) : $(>) ;
}
Since the library depends on the fake "libname.unittest" target, the
library shouldn't get built if the unit test fails.
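For completeness, a UnitTest rule/action pair that fits the sketch above (hedged; $(>) is assumed to name the test executable target):

```jam
rule UnitTest {
    Depends $(<) : $(>) ;      # run the tests after the test program builds
    Depends unittest : $(<) ;  # optional: 'jam unittest' runs them all
}
actions UnitTest {
    $(>)
}
```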
Date: Thu, 9 Mar 2000 16:54:30 -0800 (PST)
Subject: Re: String deconstruction.
Unless you specify something else for it to use, Jam does generate grist
from directory paths (well, subdirectory paths anyway). If you need it to
contain the full path for some reason, you could always just prepend $TOP to it.
But, you know, grist isn't really meaningful -- it's just purposeful. I
suppose you could try to make it be, but I'm not sure why you'd need to.
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Date: Fri, 10 Mar 2000 10:58:48 -0500
Subject: Is there a bug in Header file dependencies?
I've settled on using fully qualified target names with rooted directories.
Everything seems to be working well, but there is one fly in the ointment.
The situation is as follows:
- Jamfile #1 uses the SubInclude rule to include Jamfiles #2 and #3.
- Jamfile #2 uses a custom rule to generate a Header and a C++ file,
based on some text files. This rule also associates the generated
files with the 'files' pseudo target.
- Jamfile #3 builds a library from several static C++ files and the
generated C++ file in Jamfile #2. Some of the static C++ files also
include the Header file generated in Jamfile #2.
If I force Jamfile #2 to regenerate the C++ and header file by touching
one or more of their dependent text files, Jamfile #3 sees that the
generated C++ file has been updated, so Jam compiles it and puts it in
the library. For some reason Jam fails to realize that the static C++
files, which depend on the generated header file, also need to be
recompiled.
If I run Jam a second time it finally clues in and realizes that some of
the static C++ files need to be recompiled.
Why is this happening? I don't want to have to start writing complex
header file dependency trees. The automatic handling of that chore was
one of the major reasons we picked Jam. It also defeats the purpose of
SubInclude and of treating all the Jamfiles as one.
I've tried adding the generated files to the 'first' pseudo target instead,
but this doesn't help.
Any suggestions? I'm currently running Jam 2.2.1 on Linux.
Date: Fri, 10 Mar 2000 10:22:55 -0800 (PST)
Subject: Re: Is there a bug in Header file dependencies?
Try running jam at a high enough debug level to get the dependencies shown
(I think they come in at d5...I just always go to d7), and see what shows
up depending on what. Then go from there.
Date: Fri, 10 Mar 2000 22:46:17 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: Is there a bug in Header file dependencies?
My guess is that you're running into a problem caused by fully
qualifying your filenames everywhere. Since you're using fully
qualified target names I'm assuming you've somehow ditched grist.
The header targets found through jam's implicit dependency rule have
no grist or path names attached to them. If your rules are adding
grist or pathnames to the *generated* header file targets, then jam
will not know that the two files are the same.
So you might get this dependency tree:
all
files
/home/jason/src/gen/generated.cc
/home/jason/src/gen/source.txt
/home/jason/src/gen/generated.h
/home/jason/src/gen/source.txt
libs
/home/jason/src/libs/mylib.a
/home/jason/src/libs/generated.o
/home/jason/src/gen/generated.cc
generated.h
/home/jason/src/libs/static.o
/home/jason/src/somewhere/static.c
generated.h
Notice that Jam thinks both generated.cc and static.c depend on
generated.h, not on /home/jason/src/gen/generated.h. The two are not the
same to jam. Jam may know that it is updating
/home/jason/src/gen/generated.h, but it doesn't then automatically know
that generated.h will be changing too. You have to tell it with:
Depends generated.h : /home/jason/src/gen/generated.h ;
Or in a generic way, assuming the fully qualified header name is in
the "header" variable:
if $(header:BS) != $(header) { Depends $(header:BS) : $(header) ; }
To see if this is what is happening, touch your source .txt files and
run "jam -d+3 -n" and look at what files the generated.cpp and
static.cpp depend on. Pay special attention to the name jam uses for
the generated.h files. If they are fully qualified some places and
just the basename others, there ya go.
[As a general rule, I'd avoid fully qualified pathnames in Jamfiles.
Jam is designed to work best by using just the basenames as target
names. The SEARCH and LOCATE vars can be used on targets to tell jam
where to find or put the files. Grist can be used to differentiate
files of the same name in two different directories.]
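Matt's closing advice can be sketched like this (hypothetical names; suppose generated.h and generated.cc live in $(TOP)/gen but are referred to by basename everywhere):

```
# Tell jam where the basename targets live instead of spelling
# out full paths in the target names themselves.
MakeLocate generated.h generated.cc : $(TOP)/gen ;
SEARCH on source.txt = $(TOP)/gen ;
Depends generated.h generated.cc : source.txt ;
```

With this shape, the "generated.h" that the header scanner discovers and the "generated.h" that the generation rule updates are the same target, so no extra Depends bridge is needed.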
From: Karl Klashinsky <klash@cisco.com>
Subject: Re: String deconstruction.
Date: Fri, 10 Mar 2000 12:30:43 -0500
I've attached an alternative approach that we use here. It has the
additional feature of being able to recognize $(TOP) and omit it from
the gristing. So:
foo/bar/foo.c becomes <foo!bar>foo.c
foo/bar becomes <foo>bar (no file vs dir check)
/abs/path/foo/bar/foo.c becomes <foo!bar>foo.c
(assuming $(TOP) = /abs/path)
From: Karl Klashinsky <klash@cisco.com>
Subject: Re: String deconstruction.
Date: Mon, 13 Mar 2000 13:51:10 -0800
Oops, forgot to include the rules I mentioned. See bottom of this one...
# Assign to the first arg the grist of the dir part of the second
# argument. Typical usage:
# IosGristDir dir_name : $(file) ;
# You get back the !-separated dir components, with $(TOP) stripped
# off.
# Examples:
# foo/bar/foo.c becomes foo!bar!foo.c (no file vs dir check)
# foo/bar becomes foo!bar
# /abs/path/foo/bar becomes foo!bar (assuming $(TOP) = /abs/path)
#
rule IosGristDir {
    if ! $(>:D) {                     # If dir part is empty
        $(<) = $(>) ;
    } else if $(>:D) = $(TOP) {       # If dir part is $(TOP)
        $(<) = $(>:BS) ;
    } else {
        IosGristDir $(<) : $(>:D) ;   # Build grist w/ dir and append
        $(<) = $($(<))!$(>:BS) ;      # !tail
    }
}
# Assign to the first arg the gristed name of the file path given by
# the second argument. Typical usage:
# IosGristPath file_name : $(file) ;
# Uses IosGristDir and adds <>foo.c around it to complete the job.
# Examples:
# foo/bar/foo.c becomes <foo!bar>foo.c
# foo/bar becomes <foo>bar (no file vs dir check)
# /abs/path/foo/bar/f becomes <foo!bar>f (assuming $(TOP) = /abs/path)
#
rule IosGristPath {
    local exec_d ;
    IosGristDir exec_d : $(>:D) ;
    $(<) = <$(exec_d)>$(>:BS) ;
}
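For instance (hypothetical file name; assumes $(TOP) is not an ancestor of the relative path), the two rules compose like this at parse time:

```
IosGristPath g : foo/bar/baz.c ;
ECHO $(g) ;   # should print: <foo!bar>baz.c
```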
From: Jason Koloseike <koloseij@home.com>
Date: Thu, 9 Mar 2000 23:38:56 -0500
Subject: Dependency Error with Header files?
Finally settled on using absolute paths to minimize difficulty in dependency
checking, but ....
Everything seemed to work well. But either I'm missing something or there
is a bug in Jam. The situation is as follows:
- Jamfile #0 Includes Jamfile #1 and #2
- Jamfile #1 generates a header and a C++ file whenever one or more
*.str files are modified in a subdirectory. In the rule that
generates the header and C++ file, I've associated these two targets
with the 'files' pseudo target.
- Jamfile #2 sees that the C++ file has been modified (in Jamfile #1) and
builds/rebuilds a library based on it and some static (ungenerated) C++
files.
Now some of the static C++ files (in Jamfile #2) also indirectly include
the generated header file (in Jamfile #1), but for some reason they are
not recompiled.
If I run Jam a second time, it finally realizes that the generated header
file has changed and triggers the C++ files to be recompiled.
Why do I have to run Jam twice? This sort of defeats the purpose of SubInclude.
I've tried adding the generated C++ and header files to the 'first' pseudo
target, but that doesn't help.
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Subject: RE: Dependency Error with Header files?
Date: Tue, 14 Mar 2000 13:17:20 -0500
Sorry, I don't know why this is popping up now, but
Matt Armstrong already posted an answer to a similar question of mine.
I guess I need to have a talk with my ISP.
Date: Tue, 14 Mar 2000 16:04:43 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Problems with TEMPORARY
I've run into a situation where jam isn't generating a temporary file
when it should. I have attached a jamfile that exhibits the problem.
I build a dependency tree like this:
child
father
grandfather
mother
It works fine until I mark father as TEMPORARY. Even then it seems to
work fine until I touch the mother file. When I do that, jam
tries to build child without father being present.
======================================================================
make -- all
time -- all: missing
make -- child
time -- child: Tue Mar 14 15:46:32 2000
make -- father
time -- father: parents
make -- grandfather
time -- grandfather: Tue Mar 14 15:45:00 2000
made stable grandfather
made stable father
make -- mother
time -- mother: Tue Mar 14 15:54:26 2000
made* newer mother
made+ old child
made+ update all
...found 5 target(s)...
...updating 1 target(s)...
Cat child
cat father mother > child
cat: father: No such file or directory
...failed Cat child ...
...failed updating 1 target(s)...
======================================================================
Jam knows that 'child' depends on 'father' yet it isn't rebuilding it.
Is there some way I can deal with this problem?
The Jamfile I'm using to test is here...
rule Cp { Depends $(<) : $(>) ; }
actions Cp { cp $(>) $(<) }
rule Cat { Depends $(<) : $(>) ; }
actions together Cat { cat $(>) > $(<) }
rule RmTemps { TEMPORARY $(>) ; }
actions quietly updated piecemeal together RmTemps { rm $(>) }
rule MakeChild {
    Cp father : grandfather ;
    Cat $(<) : mother father ;
    RmTemps $(<) : father ;
}
MakeChild child ;
Depends all : child ;
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Tue, 21 Mar 2000 11:26:27 -0800
Subject: Writing a new rule
I'm a new jam user, trying to get my head around this thing. I've been
using Opus Make for a long time and I'm pretty comfortable with that. I've
immersed myself in all the jam info I could find, but it's still a bit
alien to me.
I'm trying to write a new rule that runs a program and touches
"myprogram.run" to signify success. The idea is that we build a test
program, then we run it as part of our build process.
In make, it would look like this:
myprogram.run : myprogram.exe
	myprogram
	touch myprogram.run
(This could also be written as an inference rule.)
In jam, I tried adapting the Objects/Object rules and ended up with this:
# Add a new "run" target to the "all" target
Depends all : run ;
NOTFILE run ;
# based on the Objects rule
rule Run {
    local i s ;
    # add grist to filenames
    makeGristedName s : $(<) ;
    for i in $(s) {
        Running $(i:S=.run) : $(i) ;
        Depends run : $(i:S=.run) ;
    }
}
# based on Object rule
rule Running {
    Clean clean $(<) ;
    MakeLocate $(<) : $(LOCATE_TARGET) ;
    SEARCH on $(>) = $(SEARCH_SOURCE) ;
}
actions Running {
    $(>) ;
    touch $(<) ;
}
Run myprogram ;
This actually seemed to work, but with a couple of glitches:
"jam clean" doesn't erase "myprogram.run".
If I put "badcommand" instead of $(>) in actions Running (to force a failed
return code), it still touches the .run file. This defeats the point of
the .run file.
Overall, I'm not sure I really understand how the rules and actions work
together, why some rules have no action and vice versa.
Could somebody explain what I've done wrong and what I've done right?
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Tue, 21 Mar 2000 15:37:00 -0800
Subject: Re: Writing a new rule
Right, it's similar to the "keep going" option in some make programs.
Maybe I can use the "updated" attribute to only run tests on new .exes that
have been built, but I still think I need some way of knowing that tests
from previous runs have been successful.
Subject: Re: Writing a new rule
Also, Jam does not quit if a target fails. It just does not try to make
anything that depends upon that target.
Well, you can think of it differently. More directly: you need to run the
test whenever the .exe is rebuilt, so you don't need a .run file to figure
out if the .exe is going to be updated. That's the theory (TM), and I'm
pretty sure that's right.
Now, if running the test is dependent upon if the test failed last time
and/or the .exe, then you'd need to express that in the dependencies, and
a success file would be good.
Date: 22 Mar 2000 10:29:10 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: Re: Writing a new rule
How did you get the clean to work?
Replying to problems like this in public helps others see what to do in the future.
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Wed, 22 Mar 2000 06:40:33 -0800
Subject: Re: Writing a new rule
Sorry, I forgot to "reply all". I thought the later quoting included the context.
I was missing the colon after the second "clean".
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Mon, 27 Mar 2000 08:00:04 -0800
Subject: Order of execution for rules and actions
In writing my own rules, I'm running into trouble because I don't have a
solid feeling for what order things happen in. And I'm mostly falling into
make-style thinking which is getting me into trouble. What I need is some
grounding in Jam theory.
For example, in Jambase, the Link rule calls the Chmod action, and there is
also a Link action (but no Chmod rule). Somehow, this causes the Chmod
action to happen after the Link action, but I'm not sure why.
Is there a document somewhere that explains more of this sort of thing?
(I've been through all the stuff on the jam web page, but I may have missed
it.) It would be especially interesting to see a side-by-side chart that
contrasts make and jam and points out the conceptual differences. Thanks.
Date: Mon, 27 Mar 2000 09:56:27 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: Order of execution for rules and actions
I've found that many of the finer points like this are not clearly
documented. I frequently find myself writing little test Jamfiles to
test out theories about how Jam really works. Sometimes I look
through the source.
This is the best overall reference, though admittedly somewhat out of
date: http://www.perforce.com/jam/doc/jam.paper.html.
I usually stay out of trouble when I realize that jam runs in a few
distinct passes:
1) Parsing and rule execution. This is when jam parses all the
jamfiles, executing the rules as it goes. The key to remember here is
that all the Jamfiles are basically concatenated together into a big
global namespace as they are parsed. The product of this phase is a
dependency tree and maybe some output if you use the ECHO rule anywhere.
2) Binding. This is when jam binds target names to actual files,
checks modification times and decides what actions to run. If a target
has HDRRULE set on it, that rule is executed and the dependency tree
is possibly updated by that rule. Because the HDRRULE
3) Actions are run.
I'm fuzzy on what really happens during phases 2 and 3, which not
surprisingly plays into why your example above is confusing.
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Wed, 29 Mar 2000 08:07:51 -0800
Subject: Re: Order of execution for rules and actions
I found an answer about the Link/Chmod order in
http://public.perforce.com/public/jam/src/Jamlang.html:
"When a rule is invoked, its action definition, if any, is automatically
the first updating action to be associated with targets. Any other actions
invoked from a rule's procedure definition statements will be executing
during updating in the order in which they were invoked."
Meanwhile, I've found something that seems to work for my "run" rule. In
Opus Make, it would look like this:
%.run : %.exe
	$(.SOURCE)
	touch $(.TARGET)
If the exe failed, the target wouldn't be touched and Opus would notice the failure.
As I learned earlier, Jam runs all the actions at once and only sees the
last return code. So I headed down the path of trying to have two separate
rules/actions, one to set a pseudotarget that depends on the run, and the
other to create the .run flag based on the pseudotarget. I kept getting
tangled trying to line up all the dependencies.
So I went back to the idea that I was still really only trying to create
one target (the .run file), but it took two actual commands to do it, first
the .exe itself, then a touch command that required the .exe to be
successful. So I put both commands in the same actions definition, with
some logic to check the return codes, and a method of sending back a bad
return code if either of the commands failed (see below).
Of course I'll need a different action implementation for unix. Does this
look basically correct and proper? Did I violate any fundamentals of good
jam design? Is there a cleaner way to do it? Thanks for your help.
##################################
# Custom rules
# add a new pseudotarget for running the test programs
Depends all : run ;
NOTFILE run ;
# Our version of main that builds and tests the program
rule Ourmain {
    Main $(<) : $(>) ;
    Testmain $(<) ;
}
rule Testmain {
    # define the .run file and set the location for it.
    local runfile ;
    runfile = $(<:S=.run) ;
    MakeLocate $(runfile) : $(LOCATE_TARGET) ;
    # set up the dependencies
    Depends $(runfile) : $(<) ;
    Depends run : $(runfile) ;
    # set up the clean rule
    Clean clean : $(runfile) ;
    # set up the action
    Runmain $(runfile) : $(<) ;
}
# check return code after each command
# we have to make sure the last thing we run is a "bad command" to
# indicate failure
actions Runmain {
    $(>) ;
    if errorlevel 1 goto complain ;
    touch $(<) ;
    if errorlevel 1 goto complain ;
    goto end ;
    :complain ;
    bad.returncode.code ;
    :end ;
}
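On Unix the errorlevel/goto dance collapses to the shell's && operator: touch the stamp only if the program exits zero. A sketch of the mechanism (the stamp path and the true/false stand-ins are illustrative, not from the post):

```shell
# Demonstrate the Unix equivalent of the NT action body: the stamp
# file is touched only when the test program exits 0, so a failed
# test leaves no .run file behind.
stamp="${TMPDIR:-/tmp}/demo.run.$$"

rm -f "$stamp"
true && touch "$stamp"                    # stand-in for a passing test
[ -e "$stamp" ] && echo "pass: stamp created"

rm -f "$stamp"
if false && touch "$stamp"; then :; fi    # stand-in for a failing test
[ ! -e "$stamp" ] && echo "fail: no stamp"

rm -f "$stamp"
```

In a Jam action this would just read `$(>) && touch $(<)`, since jam hands the whole action body to the shell.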
#################################
# What to build
# directory to put targets in
LOCATE_TARGET = bin ;
# we don't need Jam's libs
LINKLIBS = ;
# all our flags
C++FLAGS = /J /MLd /W3 /GX /Z7 /Od /DWIN32 /D_DEBUG /D_CONSOLE /YX ;
Library fund :
root.cpp hashcsr.cpp can.cpp array.cpp track.cpp buffer.cpp
bbuffer.cpp
hash.cpp hashfifo.cpp sarray.cpp dstring.cpp qstring.cpp queue.cpp
table.cpp outofmem.cpp bitarray.cpp fmttbl.cpp date.cpp
timeobj.cpp allocate.cpp xmlite.cpp ;
Ourmain ta : ta.cpp fundmain.cpp ;
LinkLibraries ta : fund ;
#################################
Date: Wed, 29 Mar 2000 09:23:18 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: Order of execution for rules and actions
That isn't the case. Jam stops trying to build a target as soon as
one action fails. Try this Jamfile, it'll never get to the Print action.
rule MakeIt {
    InduceError $(<) ;
    Print $(<) ;
}
actions InduceError { exit 1 }
actions Print { echo Got to print rule: $(<) }
MakeIt a ;
Depends all : a ;
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Wed, 29 Mar 2000 09:06:02 -0800
Subject: Re: Order of execution for rules and actions
What I meant was if you have more than one line in an actions definition,
jam will run all the lines even if one of them fails. For example:
rule MakeIt { InduceError $(<) ; }
actions InduceError {
    badcommand
    echo Got past the command: $(<)
}
MakeIt a ;
Depends all : a ;
This is different from make's shell lines that will stop on the first
nonzero return code.
Meanwhile, I think I've finally figured out a more elegant way to take
advantage of the action sequence without having to resort to shell logic.
The Testmain-Runmain-Touch setup is very similar to the
MainFromObjects-Link-Chmod setup:
##################################
# Custom rules
Depends all : run ;
NOTFILE run ;
rule Ourmain {
    Main $(<) : $(>) ;
    Testmain $(<) ;
}
rule Testmain {
    local runfile ;
    runfile = $(<:S=.run) ;
    MakeLocate $(runfile) : $(LOCATE_TARGET) ;
    Depends $(runfile) : $(<) ;
    Depends run : $(runfile) ;
    Clean clean : $(runfile) ;
    Runmain $(runfile) : $(<) ;
}
rule Runmain { Touch $(<) ; }
actions Runmain { $(>) ; }
actions Touch { touch $(<) ; }
#################################
# What to build
# [...]
Ourmain ta : ta.cpp fundmain.cpp ;
#################################
Subject: Re: Order of execution for rules and actions
From: "Mark D. Baushke" <mark.baushke@solipsa.com>
Date: Mon, 27 Mar 2000 09:22:11 -0800
All rules are processed and then all actions are processed that are
required to satisfy building the target specified by the jam command.
(There are a few odd cases like the header parsing rules which can do
some action-like work during rules processing, but no files are
getting written during the rules processing phase at all.)
A way to think of how jam works is that the rules processing phase
just builds the dependency tree and associates the actions needed to
build the various targets. After it has determined how to build
everything, it will then go through to build the target you specified
for it to build and any required actions to build the other targets
that are considered part of the final target.
If you type 'jam files' then jam goes through all of the jam rules and
finds any targets associated with the 'files' target; when it has
completed the rules processing, it goes to the leaves and starts building
the prerequisite targets, finally finishing with the 'files' target.
One of the more difficult things for jam to do is to have one action
generate multiple targets. It much prefers to have a single action
have a single result. It also does not like to have a circular
dependency. If you code one, then you will likely always get jam doing
some work if you rerun it after it has completed once.
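The two phases are easy to see with a toy Jamfile (a sketch; the target name "demo" is arbitrary):

```
# Top-level statements run during the rules-processing phase, so this
# ECHO fires before any action, no matter where it sits in the file.
ECHO "phase 1: building the dependency tree" ;
rule Demo {
    NOTFILE $(<) ;
    ALWAYS $(<) ;
    Depends all : $(<) ;
}
# Action bodies run later, during updating.
actions Demo {
    echo "phase 2: updating $(<)"
}
Demo demo ;
```

Running jam on this should always print the phase-1 line first, then the "...found/updating..." summary, then the phase-2 line.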
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Order of execution for rules and actions
Date: Wed, 29 Mar 2000 10:42:43 -0800
Would somebody like to try
actions InduceError { badcommand && echo Got past the command: $(<) }
It seems to work on NT; not sure about Win98/95. Definitely works in ksh.
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Thu, 30 Mar 2000 13:55:19 -0800
Subject: Compiling more than once
We have several test drivers that all link to the same obj. When I build
them with jam, the obj gets recompiled each time. Why is that?
Here's the Jamfile:
SubDir TOP ;
Library fund :
root.cpp hashcsr.cpp can.cpp array.cpp track.cpp buffer.cpp
bbuffer.cpp
hash.cpp hashfifo.cpp sarray.cpp dstring.cpp qstring.cpp queue.cpp
table.cpp outofmem.cpp bitarray.cpp fmttbl.cpp date.cpp
timeobj.cpp allocate.cpp xmlite.cpp ;
Ourmain ta : ta.cpp fundmain.cpp ;
Main easyparm : easyparm.cpp fundmain.cpp ;
Ourmain tqueue : tqueue.cpp fundmain.cpp ;
Ourmain thash : thash.cpp fundmain.cpp ;
Ourmain tbuff : tbuff.cpp fundmain.cpp ;
Ourmain talloc : talloc.cpp fundmain.cpp ;
Ourmain ttable : ttable.cpp fundmain.cpp ;
Ourmain tsarray : tsarray.cpp fundmain.cpp ;
LinkLibraries ta easyparm tqueue thash tbuff talloc ttable tsarray : fund ;
Here's the output after touching fundmain.cpp:
C:\disposable\fund>jam exe
...found 133 target(s)...
...updating 9 target(s)...
C++ bin\fundmain.obj
fundmain.cpp
C++ bin\fundmain.obj
fundmain.cpp
C++ bin\fundmain.obj
fundmain.cpp
C++ bin\fundmain.obj
fundmain.cpp
C++ bin\fundmain.obj
fundmain.cpp
C++ bin\fundmain.obj
fundmain.cpp
C++ bin\fundmain.obj
fundmain.cpp
C++ bin\fundmain.obj
fundmain.cpp
Link bin\ta.exe
Chmod bin\ta.exe
Link bin\easyparm.exe
Chmod bin\easyparm.exe
Link bin\tqueue.exe
Chmod bin\tqueue.exe
Link bin\thash.exe
Chmod bin\thash.exe
Link bin\tbuff.exe
Chmod bin\tbuff.exe
Link bin\talloc.exe
Chmod bin\talloc.exe
Link bin\ttable.exe
Chmod bin\ttable.exe
Link bin\tsarray.exe
Chmod bin\tsarray.exe
...updated 9 target(s)...
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Thu, 30 Mar 2000 13:57:02 -0800
Subject: How to put a dependency on the jamrules and jamfiles
If I change a jamfile, I would like everything referenced in that jamfile to be rebuilt.
If I change jamrules, I would like everything rebuilt.
Is there a straightforward way to do that?
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Thu, 30 Mar 2000 14:16:25 -0800
Subject: Re: Compiling more than once
Another twist: When I use -d2, I see that the C++FLAGS are specified 8
times on each compile. I suspect the two problems are related.
If I put grist on each fundmain.cpp, then the multiple compile flags go
away, and the order of the compile/link is changed. It now does C++, Link,
Chmod for each, instead of doing 8 C++ first. That's slightly better, but
still gives me 8 compiles instead of one. Is there a proper way to compile
an obj once and link it several times?
Here's the gristed version:
SubDir TOP ;
Library fund :
root.cpp hashcsr.cpp can.cpp array.cpp track.cpp buffer.cpp
bbuffer.cpp
hash.cpp hashfifo.cpp sarray.cpp dstring.cpp qstring.cpp queue.cpp
table.cpp outofmem.cpp bitarray.cpp fmttbl.cpp date.cpp
timeobj.cpp allocate.cpp xmlite.cpp ;
Ourmain ta : ta.cpp <ta>fundmain.cpp ;
Main easyparm : easyparm.cpp <easyparm>fundmain.cpp ;
Ourmain tqueue : tqueue.cpp <tqueue>fundmain.cpp ;
Ourmain thash : thash.cpp <thash>fundmain.cpp ;
Ourmain tbuff : tbuff.cpp <tbuff>fundmain.cpp ;
Ourmain talloc : talloc.cpp <talloc>fundmain.cpp ;
Ourmain ttable : ttable.cpp <ttable>fundmain.cpp ;
Ourmain tsarray : tsarray.cpp <tsarray>fundmain.cpp ;
LinkLibraries ta easyparm tqueue thash tbuff talloc ttable tsarray : fund ;
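One way to get a single compile (a sketch, not from the thread; Objects, MainFromObjects, and SUFOBJ are the stock Jambase names, and the exact object naming may vary by Jambase version) is to compile fundmain.cpp once and hand the object to each program explicitly:

```
# Compile the shared source exactly once.
Objects fundmain.cpp ;
# Link each driver from its own object plus the shared one.
rule TestDriver {
    Objects $(<:S=.cpp) ;
    MainFromObjects $(<) : $(<:S=$(SUFOBJ)) fundmain$(SUFOBJ) ;
}
TestDriver ta ;
TestDriver tqueue ;
```

This avoids calling Main on fundmain.cpp eight times, which is what appends eight compile actions to the same object target.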
Date: Thu, 30 Mar 2000 15:23:38 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: How to put a dependency on the jamrules and jamfiles
Not with the stock Jambase, but you might use "jam -a" to make sure
everything is getting rebuilt correctly.
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Thu, 30 Mar 2000 15:06:45 -0800
Subject: Re: How to put a dependency on the jamrules and jamfiles
Yes, I could always use jam -a, but what I really want is to have jam
figure out that the jamfile or jamrules are new and automatically do the -a
(or subset) for me.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: How to put a dependency on the jamrules and jamfiles
Date: Thu, 30 Mar 2000 16:23:44 -0800
You could simply add the dependency to your "Main" rule.
The Jamfile and/or Jamrules can be depended upon like any other target.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Compiling more than once
Date: Thu, 30 Mar 2000 16:59:30 -0800
The way Jam works: if you have a rule and action like
rule DoIt { }
actions DoIt { echo $(<) ; }
then each time a Jamfile calls the DoIt rule, Jam
adds a "pending action" to the target.
So calling
DoIt A ;
three times adds the DoIt action to the target "A" three times.
What you need to do is add protection to the rule:
rule DoIt {
    if ! $($(<)-done) {
        $(<)-done = 1 ;
        ReallyDoIt $(<) ;
    }
}
actions ReallyDoIt {
    echo $(<) ;
}
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Fri, 31 Mar 2000 14:44:57 -0800
Subject: foo.exe depends on itself
I ran into what seems to be a quirk:
If I try to build an executable called "foo" in a directory also called
"foo" I get:
warning: foo.exe depends on itself
Looking at a d3 trace, I see something like the first trace below.
I believe what's happening is that jam can't distinguish between the
directory "foo" and the pseudotarget "foo" that gets created by the
MainFromObjects rule.
If I use a different directory name (like bar), I get the second trace.
Is there a way to help jam distinguish a directory name from a pseudotarget?
foo.exe in foo directory:
make -- exe
time -- exe: unbound
make -- first
make -- foo.exe
bind -- foo.exe: foo\bin\foo.exe
time -- foo.exe: missing
make -- <foo>foo.obj
bind -- <foo>foo.obj: foo\bin\foo.obj
time -- <foo>foo.obj: Fri Mar 31 15:15:11 2000
make -- foo\bin
time -- foo\bin: Fri Mar 31 15:15:11 2000
make -- foo
time -- foo: unbound
make -- foo.exe
warning: foo.exe depends on itself
made stable foo
made stable foo\bin
make -- <foo>foo.cpp
bind -- <foo>foo.cpp: foo\foo.cpp
time -- <foo>foo.cpp: Fri Mar 31 15:05:28 2000
foo.exe in bar directory:
make -- exe
time -- exe: unbound
make -- first
make -- foo.exe
bind -- foo.exe: bar\bin\foo.exe
time -- foo.exe: missing
make -- <bar>foo.obj
bind -- <bar>foo.obj: bar\bin\foo.obj
time -- <bar>foo.obj: Fri Mar 31 15:23:01 2000
make -- bar\bin
time -- bar\bin: Fri Mar 31 15:23:01 2000
make -- bar
time -- bar: Fri Mar 31 15:22:50 2000
made stable bar
made stable bar\bin
make -- <bar>foo.cpp
bind -- <bar>foo.cpp: bar\foo.cpp
time -- <bar>foo.cpp: Fri Mar 31 15:22:28 2000
Date: Fri, 31 Mar 2000 17:56:35 -0600 (CST)
Subject: Re: foo.exe depends on itself
I considered setting grist on directories to add a special string like
_dir_, to make the target distinct from others. This seemed to fix
the problem, but then we hit another problem whose cause I never
determined; I got sidetracked.
Date: Fri, 31 Mar 2000 16:15:52 -0800 (PST)
Subject: Re: foo.exe depends on itself
I'm confused. How many things are named "foo"? ... your target (which in
the end becomes "foo.exe"), your directory, and some pseudo-target?
In any case, I've just tried several scenarios (including having a
pseudo-target named "foo"), and all of them worked just fine, so there's a
problem with your rule(s) somewhere.
Whenever you have dependency problems, though, you should run jam at a
high enough debug level to actually see the Depends.
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Fri, 31 Mar 2000 15:29:47 -0800
Subject: Re: foo.exe depends on itself
I have a foo directory, a foo.exe, and a foo.cpp to build it with. (I
don't think the cpp matters.)
Here's what I get if I filter out the Depends for "foo". (Nothing unusual there.)
I think the key is "Subdir TOP foo ;" and running jam from the directory
above. And TOP is not set.
Date: Fri, 31 Mar 2000 17:06:07 -0800 (PST)
Subject: Re: foo.exe depends on itself
If that's the case then where does this "foo" come from:
Somewhere along the line, you've got a "foo" depending on "foo.exe".
I've tried several different ways to get it to break for me, and so far --
even when with a Depends foo : foo.exe -- it still works fine. Since I'm
not able to reproduce the error you're getting, I can't say for sure where
it might be, but it's undoubtedly somewhere in your own rule(s).
No, if it was a TOP-not-set problem, you'd get:
Top level of source tree has not been set with TOP
(BTW: I'm assuming you meant SubDir and just typed it wrong here.)
Date: Fri, 31 Mar 2000 22:44:02 -0600 (CST)
Subject: Re: foo.exe depends on itself
Here's what I remember from what I think was a similar problem.
Our situation was quite similar: we created an executable, and at some
point we decided to name the directory the same as the executable.
I believe the main rule looks something like this:
Main foo : foo.cpp slag.cpp etc... ;
it creates a chain of dependencies with "foo" in it. At some point,
it does a MakeLocate on the executable to be produced. I noticed in
the example it was foo/bin. This does a MkDir on foo/bin, which
proceeds to make every dir from top on down to bin. One of these
is dir "foo". The rule then finds itself dealing with a target which
already has stuff going on, and things get a bit mixed up from there.
You will find that if you set the location for the foo executable to
be in a path without the foo directory as part of it, then the problem
will go away. (that's the LOCATE macro, isn't it?)
Date: 2 Apr 2000 16:53:06 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: Re: foo.exe depends on itself
SubDir TOP ;
SubInclude TOP foo ;
SubDir TOP foo ;
Main foo : foo.c ;
The above two Jamfiles will cause the same error you witnessed, right?
I ran into this exact problem and i solved it by changing the Main line to:
Main <foo>foo : foo.c ;
All seemed to work fine then, even when running jam from TOP directory.
Date: Mon, 3 Apr 2000 14:03:15 -0700 (PDT)
Subject: Re: foo.exe depends on itself
$ cat Jamfile
SubDir TOP ;
SubInclude TOP foo ;
$ cat foo/Jamfile
SubDir TOP foo ;
Main foo : foo.c ;
$ jam
...found 17 target(s)...
...updating 2 target(s)...
Cc /temp/foo/foo.o
Link /temp/foo/foo
Chmod /temp/foo/foo
...updated 2 target(s)...
I've tried everything I could think of, including pseudo-targets named
"foo", having explicit dependencies of "foo" to "foo.exe", having the
target "foo" build into TOP/foo/bin, etc., and I can't reproduce what you
guys (I think it's up to 3 now, right?) have seen happen. I can't remember
ever having seen Jam confuse directories and files, and since I can't get
it to do it even by trying to, I still have to strongly suspect it's
something in the rules you're using.
Date: Mon, 3 Apr 2000 16:17:43 -0500 (CDT)
Subject: Re: foo.exe depends on itself
check your rules for MkDir, Laura may have put this into
the source a while back: (from feb 99)
How about if you hack the MkDir rule to grist *its* targets? So
that instead of building "axe" it builds "<_dir>axe"? Try these
changes in the MkDir rule:
929c929
< s = $(<:P) ;
---
> s = $(<:PG=_dir) ;
933c933
< switch $(s)
---
> switch $(s:G=)
Let me know if it works okay. (You don't have a directory called
"_dir" do you?)
If this fix isn't a problem for you (or anyone else), I'll put it
in the public depot source.
Date: 3 Apr 2000 21:20:40 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: Re: foo.exe depends on itself
here's my transaction:
% ls -lR
.:
total 8
-rw-r--r-- 1 nirva users 34 Apr 3 16:08 Jamfile
drwxr-xr-x 2 nirva users 4096 Apr 3 16:09 foo/
foo:
total 8
-rw-r--r-- 1 nirva users 36 Apr 3 16:08 Jamfile
-rw-r--r-- 1 nirva users 11 Apr 3 16:08 foo.c
% cat Jamfile
SubDir TOP ;
SubInclude TOP foo ;
% cat foo/Jamfile
SubDir TOP foo ;
Main foo : foo.c ;
% cat foo/foo.c
main() { }
% jam
Jamrules: No such file or directory
warning: foo depends on itself
...found 10 target(s)...
...updating 2 target(s)...
Cc foo/foo.o
MkDir1 foo/foo
Link foo/foo
/usr/bin/ld: cannot open output file foo/foo: Is a directory
collect2: ld returned 1 exit status
cc -o foo/foo foo/foo.o
...failed Link foo/foo ...
...failed updating 1 target(s)...
...updated 1 target(s)...
% jam -v
Jam/MR Version 2.2.5. Copyright 1993, 1999 Christopher Seiwald.
the same exact setup as yours. The only difference is that I don't set the TOP
environment variable, and you probably do. If I set TOP to pwd, then it
works fine. I get around this issue by adding grist to the foo executable
target. Jam is able to determine which directory to look in for Jamrules
correctly without TOP being set, so I think this is a jam bug.
Date: Mon, 3 Apr 2000 16:58:52 -0700 (PDT)
Subject: Re: foo.exe depends on itself
Okay, here's the poop:
- If you have a "SubDir TOP ;" in your top-level Jamfile (which
I went ahead and included in mine last time, so I'd match what
you had, but which I wouldn't ordinarily have had, because you
don't actually need it, and in fact I'd consider it wrong for
it to be there, since TOP's not a subdirectory), and
- You don't have TOP set, and
- You run 'jam' from the top-level directory, then
- Instead of getting the "Top level of source tree has not been set
with TOP" error, you'll get the "warning: foo depends on itself" error.
So, the correct solution is to (choose one):
- Delete the "SubDir TOP ;" from your top-level Jamfile, or
- Make sure you have TOP set if you're using "Sub{Dir,Include}"
rules (I'm not sure why you haven't been having it set), or
- Put the same test for TOP-being-set into the SubDir rule that
the SubInclude rule uses:
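(The bail-out in question, paraphrased from memory of the 2.2.x Jambase
rather than quoted verbatim, would go at the top of the SubDir rule:)

    if ! $(TOP)
    {
        Exit Top level of source tree has not been set with TOP ;
    }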
Date: 4 Apr 2000 00:18:02 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: Re: foo.exe depends on itself
I'm not understanding this.. how can I have this work without having to set
the TOP environment variable?
All your solutions seem to require TOP to be set. They also seem to just
be ways of making sure that TOP is set in the environment.
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Mon, 3 Apr 2000 17:18:38 -0700
Subject: Re: foo.exe depends on itself
Why is TOP required to be set? According to
http://public.perforce.com/public/jam/src/Jamfile.html,
"When you have set a root variable, e.g., $(TOP), SubDir constructs path
names rooted with $(TOP), e.g., $(TOP)/src/util. Otherwise, SubDir
constructs relative pathnames to the root directory, computed from the
number of arguments to the first SubDir rule, e.g., ../../src/util. In
either case, the SubDir rule constructs the path names that locate source files."
Date: Mon, 3 Apr 2000 18:55:11 -0700 (PDT)
Subject: Re: foo.exe depends on itself
Fine -- SubDir doesn't require TOP be set. I stand corrected. But
SubInclude does -- there's a bail-out in it for exactly that.
So I should have said: If you're going to use the SubInclude rule, then
you need to have TOP set. If you don't want to have TOP set, then don't use
SubInclude -- go ahead and use SubDir in your top-level Jamfile so that TOP
gets set for you, then just use regular include(s).
Date: 4 Apr 2000 05:18:21 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: Re: being forced to set TOP outside jam
I don't get it.. if the first thing my toplevel Jamfile does is
SubDir TOP ;
then what is the difference between that and setting the TOP env var? The first
thing the SubDir rule does is set TOP if TOP isn't already set.
Setting the env TOP outside of jam is horrible if you have many many trees
you might work with. There has to be a way to get this to work.
I don't understand what using include instead of SubInclude buys you.
SubInclude just does the include after concatenating the dir names, right?
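(For reference, SubInclude in the 2.2.x Jambase looks roughly like this --
paraphrased from memory, not verbatim:)

    rule SubInclude
    {
        if ! $(TOP)
        {
            Exit Top level of source tree has not been set with TOP ;
        }
        SubDir $(<) ;
        include $(JAMFILE:D=$(SUBDIR)) ;
    }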
Date: Tue, 04 Apr 2000 09:57:19 +0200
From: Igor Boukanov <igor.boukanov@fi.uib.no>
Subject: Re: being forced to set TOP outside jam
Instead I suggest using a simple
TOP = $(DOT) ;
I had some problems with "SubDir TOP ;" once and did not have time to
find out where things go bad in SubDir, but when I changed it to "TOP
= $(DOT) ;" the problems went away.
Date: 4 Apr 2000 14:45:02 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: Re: being forced to set TOP outside jam
This can't possibly work if you don't jam from the TOP.
Date: 4 Apr 2000 16:13:02 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: Re: being forced to set TOP outside jam
Things are not fine! If you have a directory in your tree named foo, and
inside it you have a Jamfile with Main foo : foo.c ;, you will have errors
unless you set the TOP env var.
This discussion is about getting around the setting of TOP, and getting
rid of that error without adding a grist.
Date: Tue, 4 Apr 2000 11:09:48 -0500 (CDT)
Subject: Re: being forced to set TOP outside jam
I find this discussion very confusing. In our system, the top level Jamfile
does a "SubDir TOP ;" and we never explicitly set TOP; jam sets it
to a relative value. This works out much better than having to set it.
All the lower-level jamfiles do a SubInclude etc. and things are fine.
Date: Tue, 4 Apr 2000 11:54:48 -0500 (CDT)
Subject: Re: being forced to set TOP outside jam
I think you are better off doing the grist than setting TOP explicitly. Probably
better still is to find out exactly why it is happening; a -d5 run usually gives
all the info you need. I know that's easier said than done.
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Tue, 4 Apr 2000 19:23:21 +0100
Subject: Re: foo.exe depends on itself
Excuse me if I'm wrong, but I think I found something
about a "foo" pseudo-target.
The MainFromObject rule says:
makeSuffixed t $(SUFEXE) : $(<) ;
if $(t) != $(<) {
Depends $(<) : $(t) ;
NOTFILE $(<) ;
}
In your example, ( Main foo : foo.cpp ; )
$(<) is "foo"
$(t) is "foo.exe".
To remove the pseudo-target, I suggest calling the Main rule with the suffixed file:
Main foo$(SUFEXE) : foo.cpp ;
I guess this "foo" pseudo-target can be useful when calling "jam foo"
from the command line.
For my part, I never call Main with the unsuffixed name
because I almost always have a line such as
LINKFLAGS on foo.exe = ... ;
where the suffixed target name is required.
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Tue, 4 Apr 2000 16:35:54 -0700
Subject: Jam on Win9x?
Is the NT version of Jam meant to run on Windows 9x? I got a bunch of
"invalid switch" errors, and Jam didn't seem to realize when a command
failed. It also doesn't seem to exit cleanly but instead locks up the
command prompt.
Date: Wed, 05 Apr 2000 13:02:38 +0200
From: Igor Boukanov <igor.boukanov@fi.uib.no>
Subject: Re: being forced to set TOP outside jam
I use actually in top level Jamfile (not Jamrules!):
if !$(TOP) { TOP = $(DOT) ; }
I had to do it due to problems with "SubDir TOP ;" with a jam/jambase
port tailored for GCC on Windows.
Of course this does not work if you include your TOP-level Jamfile from
some subdirectory before the SubDir declaration, but then "SubDir TOP ;"
would not work either.
From: "Hoff, Todd" <Todd.Hoff@ciena.com>
Date: Tue, 11 Apr 2000 16:18:07 -0700
Subject: targets mysteriously not getting built
We're trying to recreate our build environment in a remote location.
For some reason it's not working even though everything is the "same".
Jam looks like it is going to build everything and then it just stops
after building a couple files.
It looks like jam is just stopping and it's not saying anything.
The -d7 debug didn't yield anything obvious.
Does anyone have any ideas what could be happening? I've included some
output from the build log.
jam_cmd changing directory to Z:\allbuilds\build_main_2000-04-11_txn.
System Command=->Z:\allbuilds\build_main_2000-04-11_txn\bin\Jam.exe -d1
vx-ppc-rel 2>&1<-...patience...
...patience......patience......patience......patience......patience...
...patience......found 7052 target(s)......updating 2359 target(s)...
MkDir1 Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc
if not exist
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\nul mkdir
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc
MkDir1 Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release
if not exist
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release\nul
mkdir Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release
C++_gnu
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release\Actor.o
Z:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\host\x86-win32\bin\ccppc
-c -mcpu=603e -mstrict-align
-BZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\host\x86-win32\lib\gcc-
lib\ -ansi -nostdinc -DRW_MULTI_THREAD -D_REENTRANT -fvolatile -fno-builtin
-fno-defer-pop -Wall -DCPU=PPC603 -DVX -DRWDEBUG -O
-IZ:\allbuilds\build_main_2000-04-11_txn\component
-IZ:\allbuilds\build_main_2000-04-11_txn\txn
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\component
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\cm
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\lm
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\tm
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\config\all
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\h
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\src\config
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\src\drv
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\h\arch\ppc
-IZ:\allbuilds\build_main_2000-04-11_txn\component\Actor -o
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release\Actor.
o Z:\allbuilds\build_main_2000-04-11_txn\component\Actor\Actor.cpp
Hamilton C shell(tm) Release 2.3.b
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release\OpDispatcher.o
Z:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\host\x86-win32\bin\ccppc
-c -mcpu=603e -mstrict-align
-BZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\host\x86-win32\lib\gcc-
lib\ -ansi -nostdinc -DRW_MULTI_THREAD -D_REENTRANT -fvolatile -fno-builtin
-fno-defer-pop -Wall -DCPU=PPC603 -DVX -DRWDEBUG -O
-IZ:\allbuilds\build_main_2000-04-11_txn\component
-IZ:\allbuilds\build_main_2000-04-11_txn\txn
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\component
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\cm
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\lm
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\tm
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\config\all
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\h
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\src\config
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\src\drv
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\h\arch\ppc
-IZ:\allbuilds\build_main_2000-04-11_txn\component\Actor -o
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release\OpDisp
atcher.o
Z:\allbuilds\build_main_2000-04-11_txn\component\Actor\OpDispatcher.cpp
Hamilton C shell(tm) Release 2.3.b
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release\OpRunner.o
Z:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\host\x86-win32\bin\ccppc
-c -mcpu=603e -mstrict-align
-BZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\host\x86-win32\lib\gcc-
lib\ -ansi -nostdinc -DRW_MULTI_THREAD -D_REENTRANT -fvolatile -fno-builtin
-fno-defer-pop -Wall -DCPU=PPC603 -DVX -DRWDEBUG -O
-IZ:\allbuilds\build_main_2000-04-11_txn\component
-IZ:\allbuilds\build_main_2000-04-11_txn\txn
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\component
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\cm
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\lm
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\tm
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\config\all
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\h
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\src\config
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\src\drv
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\h\arch\ppc
-IZ:\allbuilds\build_main_2000-04-11_txn\component\Actor -o
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release\OpRunn
er.o Z:\allbuilds\build_main_2000-04-11_txn\component\Actor\OpRunner.cpp
Hamilton C shell(tm) Release 2.3.b
Archive_gnu
Z:\allbuilds\build_main_2000-04-11_txn\build\lib\lib_Actor_vx-ppc_rel.a
Z:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\host\x86-win32\bin\arppc
-d Z:\allbuilds\build_main_2000-04-11_txn\build\lib\lib_Actor_vx-ppc_rel.a
Actor.o OpDispatcher.o OpRunner.o
Z:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\host\x86-win32\bin\arppc
-q Z:\allbuilds\build_main_2000-04-11_txn\build\lib\lib_Actor_vx-ppc_rel.a
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release\Actor.
o
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release\OpDisp
atcher.o
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release\OpRunn
er.o
Done target.Done BuildActor=
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Wed, 12 Apr 2000 09:31:14 +0100
Subject: Re: targets mysteriously not getting built
Yes, jam usually ends with messages like "...updated xxx targets...",
and a message is printed on every call to the exit() function (I checked this).
Did you look at the return code of the Jam process?
It may show a reason why the program stopped.
Date: Thu, 13 Apr 2000 18:08:38 -0500 (CDT)
Subject: INCLUDES
Well, maybe I'm brain-dead, but I don't understand what the
INCLUDES rule does/means.
it says
INCLUDES targets1 : targets2 ;
makes targets2 dependencies of anything of which
targets1 are dependencies.
so I think, if:
Depends A : B ;
Depends A : C ;
INCLUDES A : F ;
means that
Depends F : B ;
Depends F : C ;
Date: Fri, 14 Apr 2000 17:09:56 -0500 (CDT)
Subject: waif child found!
What does this mean:
...patience...
...found 3641 target(s)...
...updating 228 target(s)...
vC++ ./debug-solaris/config/client/menus.o
waif child found!
Compilation exited abnormally with code 1 at Fri Apr 14 16:59:40
Date: Sat, 29 Apr 2000 19:47:29 -0500 (CDT)
From: Nikolas Kauer <kauer@pheno.physics.wisc.edu>
Subject: interprocedure optimizations
I would like to facilitate more interprocedure optimizations by
compiling several source files in one compiler call. I've been
using jam for a while essentially compiling my program like this:
cxx -c -O src1.cpp
cxx -c -O src2.cpp
cxx -c -O src3.cpp
cxx -o executable src1.o src2.o src3.o
Say, I know code in file src1.cpp calls functions defined in file src2.cpp
and interprocedure optimizations would strongly increase program performance.
I would then manually compile in the following way:
cxx -c -O4 src2.cpp src1.cpp
cxx -c -O src3.cpp
cxx -o executable src1.o src2.o src3.o
or in cases with a small number of source files (in one programming language):
cxx -o executable -O4 src1.cpp src2.cpp src3.cpp
How would I write a Jamfile that results in these actions? Do I need rules
that are not defined in the default Jambase?
PS I've been using jam for almost two years now and like it quite a bit.
PPS I'm not on the jamming list, so please reply to this message.
Date: Mon, 1 May 2000 12:58:52 -0700
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: interprocedure optimizations
First I'd try to create a new .cpp that just #included the other two source
files.
If that won't work, you'll probably want to write your own rule/action pair to
do the compiling. The key is to use the "together" modifier on the compilation
action. The existing rules will always call the C++ rule for .cpp files, so
you'll have to somehow avoid that.
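(A sketch of such a pair -- the rule name CcTogether is hypothetical, this is
untested, and the wiring of the resulting .o files into the Link step is
omitted:)

    rule CcTogether
    {
        # $(<) is one pseudo-target standing for the whole group of sources
        NOTFILE $(<) ;
        Depends $(<) : $(>) ;
    }
    actions together CcTogether
    {
        cxx -c -O4 $(>)
    }

    # Invoked once per source; "together" coalesces the $(>) lists of all
    # invocations on the same target into a single cxx command:
    CcTogether fastgroup : src1.cpp ;
    CcTogether fastgroup : src2.cpp ;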
From: "Temesgen Habtemariam" <temesgen@aetherworks.com>
Date: Mon, 8 May 2000 13:07:57 -0500
Subject: Accessing target-specific variables values inside a rule.
Is there any way to look at the value of a target-specific variable
inside a rule? I know I can do something like VARNAME on TARGET = VALUE
to set the target-specific value for variable 'VARNAME'. Does anyone know if
there is a syntax for using target-specific values as the right-hand side
of an assignment? Something like VARNAME_ON_TARGET = $(VARNAME on TARGET)
Date: Mon, 08 May 2000 11:44:39 -0700
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: Accessing target-specific variables values inside a rule.
There isn't. A workaround somebody on this list suggested to me
was to set a different variable keyed by the target name. So
every time you set the variable you do:
FOO on $(target) = VALUE ;
FOO_on_$(target) = VALUE ;
Then when you want to read the value you use the FOO_on_$(target) copy.
Ugly but it works.
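(A minimal sketch of the workaround inside a rule; MyRule and the value are
purely illustrative:)

    rule MyRule
    {
        # Set both copies: the on-target one for actions to expand,
        # and the shadow copy that rule code can read back.
        FOO on $(<) = value ;
        FOO_on_$(<) = value ;
        local v = $(FOO_on_$(<)) ;  # reads "value"
    }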
From: "Brian Mosher" <bmosher@digimarc.com>
Date: Mon, 15 May 2000 20:42:43 -0700
Subject: Help compiling jam on the mac and/or need Mac Binaries...
After working all day to get the latest version of jam to build on my G4, I
am at wit's end. I'm using a G4 with OS9.04, the latest update of Code
Warrior Pro 5, MPW 3.5 and I am compiling using Universal Headers v3.3.1
I downloaded v1.83 of CWGUSI and fixed up all of the include and lib paths
in the MPW build file. What I soon discovered was that the version of GUSI I
was using was no longer compatible with my current MSL. It seems that some
of the file routines' internals have changed.
I was able to get around this problem with a total hack where I pulled out
the GUSI routines that jam needs and built with just those. It looks as
though jam uses GUSI only for the dirent routines opendir, readdir, &
closedir. After a bunch of munging I was able to get the whole business to
build.
Here is my problem: it doesn't work. As soon as I try to run it in mpw I
randomly get one of the following problems:
1. A Sioux window pops up with the message "Jamfile: No error". This Sioux
window opens in the context of mpw, so closing it kills mpw. There is no way
to get back to mpw and there are two menus on the menu bar, one for mpw and
2. I get the following error message in my mpw worksheet:
### MPW Shell - Unable to load code fragment "jam" of "jam".
# Fragment container format unknown (OS error -2806)
3. MPW disappears without any warning.
I am giving up for today. If anyone has any advice I'd love to hear it. I
would especially appreciate it if some kind soul could find it in their
heart to email me the binaries for the PPC(hqx'ed if possible, my email
system eats mac attachments.)
I am very new to Mac programming, being mostly a win/unix guy, but I don't
think I've made any obvious screw-ups so I am really unsure what to do next...
Date: Thu, 18 May 2000 10:12:00 -0700
From: Scott RoLanD <shr@chat.net>
Subject: Trying to justify Jam
I am a fan of Perforce and therefore have seen a couple of references
to jam as a better make. I've poked around a bit and tried to find a
concise comparison of jam vs. make but still don't understand the
difference.
Now I am looking at starting a big project at work with either Jam or
GNU make. I'd like to know if Jam is really that much better that I
should promote it, and if it is I need a quick reasoning to explain
why I chose it rather than make. The project is complex, but will only
be running on SUN Solaris systems and is actually a hardware
simulation (using Verilog, but with a bunch of perl and other scripts).
Date: Thu, 18 May 2000 17:38:37 -0500 (CDT)
Subject: Re: Trying to justify Jam
Jam is faster than make.
The rule system means that individual Jamfiles become very simple.
The dependency tracking is more sophisticated, and C and C++ files
are scanned for include dependencies automatically, so that info does
not need to be kept or updated by hand.
Jam deals with dependencies across the entire build, so dependencies
between directories are automatically dealt with. I got to deal
with that when I converted a small system from jam to make.
On the other hand, it is different and hard to understand in some ways,
and make can do a perfectly good job and people know how to use it.
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Thu, 18 May 2000 17:23:05 -0700
Subject: Re: Trying to justify Jam
One big thing is the way Jam handles subdirectories. It knows the whole
tree at once and can automatically deal with things like .../this/subdir
needs to include headers from .../that/subdir.
The syntax is more abstract and more flexible. It's not just targets,
dependencies, and shell lines. It's more like a programming language.
Dependency generation is built in and automatic.
While it doesn't apply to your case with one platform, the
platform-independent syntax of the jamfiles is handy.
Date: Fri, 19 May 2000 12:02:38 +0200 (METDST)
From: Igor Boukanov <Igor.Boukanov@fi.uib.no>
Subject: Re: Trying to justify Jam
If you need to update a big system after small changes, make's output will
in general be very cluttered, and it is sometimes hard to tell whether the
build was successful or not. Jam, in contrast, prints just the essential
information. This is very handy with big systems.
Date: Thu, 18 May 2000 12:10:28 -0700
From: Iain McClatchie <iain@10xinc.com>
Subject: Re: Trying to justify Jam
Scott> The project is complex, [...] and is actually a hardware
Scott> simulation (using Verilog, but with a bunch of perl and
Scott> other scripts).
Do you use your verilog files in lots of different ways? Are
some of them for gate-level simulation, others for formal
validation, others for behavioral-level representations of
blocks that have other verilog representation too? Make's
suffix-based rules are terrible for this kind of thing. Jam's
explicit procedural definition of dependencies and actions is
way more controllable. This is basically why I went with Jam.
I'm not unhappy with the result, but two years later I'm still
screwing around with the regression system (which is part of the
build system and implemented in Jam).
Most hardware design ends up doing iterations. Maybe Magma's got
the answer, but the rest of us iterate. Make, Jam, and all the
rest want to represent dependencies as a DAG. I have not seen a
well thought through solution to representing hardware design
iterations in a build system. We do it here: some tools essentially
use locally cached results from previous runs; the cache files do
not show up in the dependency graph, and each jam invocation runs
just one iteration.
The downside is that everyone who ever has to mess with the build
system (tools folks, logic designers, the guy who uses a perl
script to generate layout and RTL for some wacky full-custom ROM,
etc) has to learn a different way of doing things. They have to
learn about grist, which isn't complicated but it is different.
If you're going to have to hand this project off to other people
later, for instance, if you're building some piece of IP where the
build system is really part of the IP that gets transferred or
reused, then I'd either do it in make, build a make interface to
the Jam scripts right from the very start, or get more of a buy-in
for Jam from the rest of the folks in your organization.
If you're going to be making a change from make, you would do well
to evaluate everything out there. There is a perl-based make
replacement that might be good too: check out
http://www.dsmit.com/cons/
and other stuff on
http://software-carpentry.codesourcery.com/sc_build
Basically, though, you have a problem because build systems are
software-oriented and hardware is a different kind of thing.
The closest software gets to the kind of incremental updates that
end up happening in ECO flows are the incremental updates to .a
files from component .o files. Make has this particular rule
hardwired into the program as an exception. Ugh.
Date: Thu, 18 May 2000 12:15:56 -0600
From: Ray Caruso <rayman@powerplay.com>
Subject: Problems Defining my own rule
I am using Jam 2.2.5 on Solaris 2.6.
I am trying to define a new rule in my own Jamrules file.
The rule, named Moc, is used to build a new .cpp file from
a special .h file. To build the new .cpp file, I must run
a program called moc. The moc program takes three
arguments: moc file.h -o moc_file.cpp
So I set up a new rule called Moc in my jamrules file:
# These are the compilers we use.
CC = cc ;
C++ = CC ;
# This is the moc program.
MOC = moc ;
# The Moc rule states that the file on the left side of the : is dependent
# on the file on the right side of the :
# Namely, moc_file.cpp is dependent on file.h
rule Moc { Depends $(<) : $(>) ; }
# To build moc_file.cpp we must run moc like this...
actions Moc { $(MOC) $(>) -o $(<) }
This Jamrules file sits in /home/me/Devel.
The jam file sits in /home/me/Devel/project/src.
My Jamfile looks like this:
SubDir TOP project src ;
HDRS = $(QTDIR)/include ;
Main application :
main.cpp moc_file.cpp ;
Moc moc_file.cpp : file.h ;
I know Jam is reading the Jamrules file because it is using the correct
compiler (by default it would use gcc, not my CC).
When I run jam I get the following result:
$ jam
don't know how to make <as-ovm!src>moc_file.cpp
...found 112 target(s)...
...updating 8 target(s)...
...can't find 1 target(s)...
...can't make 2 target(s)...
C++ ../../as-ovm/src/main.o
And then it starts to build the .o files.
I don't get it. What am I missing here??
Date: Mon, 22 May 2000 11:41:20 -0500 (CDT)
Subject: Re: Problems Defining my own rule
The key here (at least part of it) is that it doesn't know how to make
<as-ovm!src>moc_file.cpp. Your rule tells it how to make moc_file.cpp,
which is a slightly different critter.
you should grist the args to the rule and send 'em to the action, which
probably means creating a slightly differently named action to call.
local hfile ;
makegristed name hfile : $(>) ;
makelocate $(<) : something ;
Depends $(<) : $(hfile) ;
Moc1 $(<) : $(hfile) ;
You need to look up the real names of those rules and commands and
verify that I got the $(<) and $(>) things in the right places!
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Mon, 22 May 2000 18:45:47 +0100
Subject: Re: Problems Defining my own rule
Your problem comes from the "grist" that Jam adds to
every target when using the SubDir rule.
In Jambase, look at the rule Objects: it contains a line
MakeGristedName s : $(<) ;
that adds this grist.
You should rewrite your rule:
rule Moc {
local s t ;
# Add grist to file names
MakeGristedName s : $(>) ;
MakeGristedName t : $(<) ;
Depends $(t) : $(s) ;
}
# Rule Moc states that the file on the left side of the : is dependent
# on the file on the right side of the :
# Namely, moc_file.cpp is dependent on file.h
rule Moc { Depends $(<) : $(>) ; }
From: "Michael O'Brien" <mobrien@pixar.com>
Subject: Re: Problems Defining my own rule
Date: Mon, 22 May 2000 09:53:38 -0700
Jam alters a file name with grist to create a unique file description. This
allows multiple files with the same name in different directories. The
default gristing rewrites dir1/dir2/file.cpp as <dir1!dir2>file.cpp.
rule MakeMoc {
local gristedLhs gristedRhs ;
MakeGristedName gristedLhs : $(<) ;
MakeGristedName gristedRhs : $(>) ;
Moc $(gristedLhs) : $(gristedRhs) ;
}
rule Moc {
Depends $(<) : $(>) ;
Clean $(<) ;
}
actions Moc {
# your action goes here...
}
The gristing for *.h files, by default, is nothing, so the gristedRhs in
the above example is actually the same as the *.h file. Headers are left
ungristed because they need to be located during the header search.
Anyway, I didn't really test the above snippets, so your mileage may vary. If
you have any questions, feel free to shoot me an e-mail.
Date: Tue, 23 May 2000 17:01:56 +0530
From: Amitava Bhattacharjee <amitav@cisco.com>
Subject: JAM for HPUX & AIX
- What does JAM stand for?
- Can you give any pointer for JAM for HPUX 11.0 & AIX 4.3.* ?
From: "Kolarik, Tony" <TKolarik@Verbind.com>
Date: Wed, 24 May 2000 13:37:04 -0400
Subject: FW: timestamps
So far I know about 'cons'; does anyone know of any other makers, commercial
or not, that are not based solely on timestamps? Thanks,
From: "Kolarik, Tony" <TKolarik@Verbind.com>
Date: Wed, 24 May 2000 13:28:13 -0400
Subject: timestamps
I'm trying to find a maker that will *always* rebuild things correctly -
whenever the contents of say a header file have changed since the last build
- regardless of timestamps. Getting an earlier version of a perforce
controlled file using the modtime keyword in the client spec is a common
example of this.
Looking at the Jam doc I get the impression that dependencies are based on
files' timestamps. Is that true? If so, is it the only method used?
I know 'cons' can handle it, anyone know of any other tools, commercial or not? Thanks,
Date: Thu, 25 May 2000 14:37:27 -0700 (PDT)
Subject: Re: JAM for HPUX & AIX
Just Another Make (actually, the full name is Jam/MR, because there was
already another product out in the world called JAM...I think the MR
stands for Make Replacement).
Just pick up the source for it and build it on the platforms. The README
mentions AIX, without a specific release, and HPUX at 9.0, but unless
there's something hugely different between 9.0 and 11.0, there's probably
no reason why it shouldn't work. Can't hurt to give it a try anyway.
Apache's Jakarta project includes a build tool called "ant" that lets you
say what criteria to use to determine whether a target should be rebuilt (e.g.,
different OS, different compiler, etc.), so you might be able to get it to
build based on file contents.
It's primarily geared towards doing Java things (e.g., javac'ing, jar'ing,
zip'ing, etc.), and comes with those types of "tasks" (as they refer to
them) defined, much the same way Jam comes with certain rules already
defined -- but you can add new "task definitions", assuming you know how
to write Java code (which is what Ant is written in; the build files are
in XML, so it probably wouldn't hurt to know something about that as well).
It's still a really new tool, so it's still going thru changes, but if you
want to check it out, go to jakarta.apache.org, and click on Ant under SubProjects.
From: "Allan Anderson" <a@be.com>
Date: Wed, 07 Jun 2000 11:44:39 -0700
Subject: templated targets
I'm trying to set up some default lists of link libraries and header
locations that something in my build tree can use. My goal is to be
able to just say something like 'TYPE=driver' in a Jamfile and have it
build stuff with the appropriate pre-defined link options and dependencies.
I suppose that I could have it depend on a particular dummy target that
sets up the link options and always gets called. I guess that could get
slow, tho. Maybe it could be done with a bunch of variables defined at
the top that get expanded when appropriate -- macros, I guess. Does my
make background show? :)
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: templated targets
Date: Wed, 7 Jun 2000 12:41:18 -0700
Do you mean something like this:
BuildType default = Normal ;
LINK_MODE default = Link ;
switch $(BuildType) {
case NoLink* :
    ECHO Linking is being disabled ;
    LINK = echo Skipping... ;
case FullDebug* :
    ECHO Generating full debug for all components ;
    OPTIM = $(DEBUG) ;
case Purify :
    ECHO Setting up for Purify ;
    OPTIM = $(DEBUG) ;
    LINK_MODE = Purify ;
    SUFEXE = .purified ;
case Quantify :
    ECHO Setting up for Quantify ;
    OPTIM = $(DEBUG) ;
    LINK_MODE = Quantify ;
    SUFEXE = .quantified ;
    DEBUG += -DNDEBUG ;
    OPTIM += -DNDEBUG ;
case Optimize :
    ECHO Optimizing all components ;
    ECHO "(no debugging symbols)" ;
    DEBUG = $(OPTIM) ;
case NoDebug* :
    ECHO Disabling All Debugging Information ;
    DEBUG = $(OPTIM) ;
    DEBUG += -DNDEBUG ;
    OPTIM += -DNDEBUG ;
case Normal :
    ECHO Starting normal build ;
case * :
    EXIT "Unknown option (" $(BuildType) ") for BuildType" ;
}
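A usage sketch, on my assumption that the switch above lives in Jamrules: a
build type can then be chosen per invocation with jam's -s flag, which sets
a variable before any Jamfile is read, so anything unrecognized falls
through to the case * EXIT:

jam -sBuildType=Purify
jam -sBuildType=NoDebug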
Subject: RE: templated targets
From: "Allan Anderson" <a@be.com>
Date: Wed, 07 Jun 2000 13:07:26 -0700
Sure. But how would I specify this in lots of different Jamfiles (not
on the command line) and have it automatically differ for each?
With make, I'd have each file do an include of this logic. I want to do
the switch for each Jamfile. Passing the info in from the jamfile just
above it is no good, because this needs to work regardless of where it
is in the tree.
Date: Wed, 7 Jun 2000 16:00:34 -0500 (CDT)
Subject: Re: templated targets
I think what you'd do is make up a rule with the logic in it and invoke
it with an argument to do the differentiation.
like:
SetFlagsFor driver ;
I probably don't fully understand what you want.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Date: Wed, 7 Jun 2000 16:01:22 -0700
Subject: Jam and AIX 4.3
AIX 4.3 changed the format of ar archives. They
refer to the new format as AR_BIG; it extends the
size of the file names which can be stored in the
main part of the archive header to 20 bytes (from 16).
All AIX 4.3 commands ONLY produce archives in the
big format (they can read the older "SMALL" format as well).
Otherwise, the archive header has not changed.
The problem is that the ar.h header file, by default,
selects the AR_SMALL format.
Here are the two patches to get Jam to work with
the big format archives. I've assumed that
Jam does not need to be able to handle the small
archive format, as Jam really only needs to
parse/scan/timestamp libraries which it itself
has created.
(There was a very small change in Jamfile for Jam itself, the line
if $(OS)$(OSVER) = AIX43 { CFLAGS += -D_AIX43 ; }
needs to be added.)
From: Randy Roesler <rroesler@mdsi.bc.ca>
Subject: diff jam.h
Date: Wed, 7 Jun 2000 15:47:45 -0700
*** ../../orginal/src/jam.h Thu Sep 16 21:06:13 1999
--- jam.h Wed Jun 7 13:21:23 2000
***************
*** 151,160 ****
# ifdef _AIX
# define unix
+ # ifdef _AIX43
+ # define OSSYMS "UNIX=true","OS=AIX","OSVER=43"
+ # else
# ifdef _AIX41
# define OSSYMS "UNIX=true","OS=AIX","OSVER=41"
# else
# define OSSYMS "UNIX=true","OS=AIX","OSVER=32"
+ # endif
# endif
From: Randy Roesler <rroesler@mdsi.bc.ca>
Subject: diff fileunix.c
Date: Wed, 7 Jun 2000 15:48:14 -0700
*** ../../orginal/src/fileunix.c Thu Sep 16 21:06:01 1999
--- fileunix.c Wed Jun 7 15:22:46 2000
***************
*** 44,49 ****
# else
# if !defined( __QNX__ ) && !defined( __BEOS__ )
+ # ifdef _AIX43
+ /* AIX 43 ar SUPPORTs only __AR_BIG__ */
+ # define __AR_BIG__
+ # endif
# include <ar.h>
# endif /* QNX */
# endif /* MVS */
***************
*** 274,282 ****
if( ( fd = open( archive, O_RDONLY, 0 ) ) < 0 ) return;
! if( read( fd, (char *)&fl_hdr, FL_HSZ ) != FL_HSZ ||
strncmp( AIAMAG, fl_hdr.fl_magic, SAIAMAG ) ) {
close( fd );
return;
}
if( ( fd = open( archive, O_RDONLY, 0 ) ) < 0 ) return;
! if( read( fd, (char *)&fl_hdr, FL_HSZ ) != FL_HSZ ||
! #ifdef _AIX43
! strncmp( AIAMAGBIG, fl_hdr.fl_magic, SAIAMAG ) )
! #else
strncmp( AIAMAG, fl_hdr.fl_magic, SAIAMAG ) )
+ #endif
{
+ printf( "magic number wrong on %s\n", archive );
close( fd );
return;
}
Date: Thu, 08 Jun 2000 08:51:25 -0700
From: Iain McClatchie <iainmcc@ix.netcom.com>
Subject: Include file dependencies
All build systems have quirky weaknesses when it comes to handling
#include files, and Jam is no different. When Jam runs the "make"
phase on a .c or .cc file, it scans that file for #include files,
and marks them as NOCARE and as dependencies, which is great.
Ordinarily, these .h files are source code. Jam gets a timestamp
for each, and figures if the .c or .cc file should be rebuilt.
I have a situation in which the .h file is not source code. It's
"built" by an installation rule which copies it from the source
location in a different directory tree. Jam copies direct
dependencies correctly.
But it appears that Jam does not run the HdrRule on these .h files
that it finds. As a result, when these .h files further include
other .h files, Jam does not copy those over, and compilation fails.
Right now, my "workaround" is that multiple invocations of Jam copy
successive levels of the #include hierarchy, until they're all copied
in and the build works.
I can dig into Jam's source code to attempt to fix this problem, but
first I could use a little guidance. I think the HdrRule doesn't
get run on built .h files because it gets run during the bind phase,
before any of the update actions are run. As a result, I suspect
a fix would involve a change to the basic phased operation of Jam,
and now I'm probably talking about a totally different build tool.
Do you have any ideas on the matter?
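For reference, the scanning Iain describes is driven by two variables that
Jambase sets on each source file; this is a sketch from a jam 2.x Jambase
(check your own copy for the exact form):

HDRSCAN on $(>) = $(HDRPATTERN) ;
HDRRULE on $(>) = HdrRule ;

rule HdrRule {
# $(<) is the scanned file, $(>) the #include names found in it
local s ;
s = $(>:G=$(HDRGRIST:E)) ;
INCLUDES $(<) : $(s) ;
SEARCH on $(s) = $(HDRSEARCH) ;
NOCARE $(s) ;
}

Since the scan runs while jam binds targets, before any updating actions
execute, a header that is itself produced by an action is never scanned in
the same pass, which matches the multi-invocation behaviour described above.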
From: David Moore <david.moore@dialogic.com>
Date: Sun, 11 Jun 2000 22:42:18 GMT
Subject: CORBA IDL rule
I'm trying to define a rule for CORBA IDL which will allow us to
build an executable by specifying the IDL files.
e.g. Main server : server.cc
pet.idl
petimpl.cc
owner.idl
ownerimpl.cc
;
My problem is the header file generated from one .idl file may depend
on that generated by another .idl file.
e.g. owner.idl #includes pet.idl, so when owner.h is generated it
has a #include "pet.h", but Jam tries to build owner.o from
owner.cc and owner.h before it has run the IDL rule for pet.h
and of course fails.
This is what I have...
rule UserObject {
switch $(>:S) {
case .idl : C++ $(<) : $(<:S=.cc) ;
Idl $(<:S=.cc) : $(>) ;
case * : EXIT "Unknown suffix on" $(>) "- see UserObject rule in
Jamfile(5)." ;
}
}
rule Idl {
# based on the Yacc rule
local h ;
h = $(<:BS=.h) ;
MakeLocate $(<) $(h) : $(LOCATE_SOURCE) ;
# Some places don't have an Idl.
if $(IDL) {
Depends $(<) $(h) : $(>) ;
Idl1 $(<) $(h) : $(>) ;
Clean clean : $(<) $(h) ;
}
INCLUDES $(<) : $(h) ;
}
actions Idl1 {
$(IDL) $(IDLFLAGS) $(>)
}
I have seen Jam used in two steps:
- first to generate all the IDL files into C++ source regardless of
if they have changed or are even used
- then, to compile those source files into executables as specified
in a Jamfile.
I think Jam should be able to do better than that, though; I want
it to manage the dependencies on the IDL files themselves.
Date: Mon, 12 Jun 2000 17:26:02 -0700 (PDT)
Subject: RE: CORBA IDL rule
You might try adding the "files" pseudo-target as a dependency on your Idl
targets, since "files" gets built before "lib" and "exe" do:
rule Idl {
Depends files : $(<) $(h) ;
Depends $(<) $(h) : $(>) ;
}
I don't have any IDL stuff, so I don't have any way of testing it to make
sure it'll do what you need, but it seems like it should.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: CORBA IDL rule
Date: Mon, 12 Jun 2000 19:17:32 -0700
My Idl (Web Logic) produces 4 output files from the single Idl
source file. The files produced for X.idl are X_c.h X_c.cpp (the
client stubs) and X_s.h and X_s.cpp (server skeletons).
So, here is my IDL rule.
IdlRm removes targets if they are links.
IdlMv moves targets to the correct directory.
My Idl compiler does not let you specify the output directory for the files!
I also build a default "interface control file" for
tuxedo and arrange for dependency checking ...
rule Idl {
local g s c n ;
if ! $($(<:G=)-idl) {
# Cheesy gate to prevent multiple invocations
$(<:G=)-idl = true ;
MakeGristedName g : $(<:G=) ;
n = $(<:G=) ;
s = $(n:S=_s.h) $(g:S=_s.cpp) ;
c = $(n:S=_c.h) $(g:S=_c.cpp) ;
IdlRm $(c) $(s) : $(g) ;
IdlDo $(c) $(s) : $(g) ;
IdlMv $(c) $(s) : $(g) ;
}
}
rule IdlDo {
local h i ;
# special case because of how idl.pl works
MakeLocate $(<[1]) $(<[3]) : $(LOCATE_COMPONENT) ;
MakeLocate $(<[2]) $(<[4]) : $(LOCATE_SOURCE) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
Depends $(IDLS) : $(<) ;
Clean clean : $(<) ;
# alias to non gristed form
for i in $(<) {
if $(i) != $(i:G=){
Depends $(i:G=) : $(i) ;
}
}
HDRS on $(<) = $(SEARCH_SOURCE) $(HDRS) $(SUBDIR_HDRS) ;
# Build a "default" ICF file
Depends $(<) : $(>:S=.xx) ;
Depends $(>:S=.xx) : $(>) $(ICFTMPLT) ;
MakeLocate $(>:S=.xx) : $(LOCATE_SOURCE) ;
RmIfLink $(>:S=.xx) ;
IdlIcf $(>:S=.xx) : $(>) $(ICFTMPLT) ;
ICFFILE on $(<) += $(>:S=.xx) ;
# If the source file is in a distant directory, look there.
# Else, look in "" (the current directory).
ScanFile $(>) ;
}
You will notice how the dependency is set up:
X_s.h X_s.cpp X_c.h X_c.cpp depend on X.xx
X.xx depends on X.idl
This is because X.xx is used by the Idl command, and
needs to exist before X_s.h X_s.cpp X_c.h X_c.cpp can be built.
My Object rule now looks like this. I use a phony extension,
.skel and .stub, to build only part of the Idl compiler's output.
I have another macro which expands something like
Main main : X.idl ; into something (like)
Main main : X_c.cpp X_s.cpp ;
But if I only want the skeleton or the stub, I would invoke
Main main : X.skel ; (or X.stub) instead.
rule Object {
local h ;
# locate object and search for source, if wanted
Clean clean : $(<) ;
MakeLocate $(<) : $(LOCATE_TARGET) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
# Save HDRS for -I$(HDRS) on compile.
# We shouldn't need -I$(SEARCH_SOURCE) as cc can find headers
# in the .c file's directory, but generated .c files (from
# yacc, lex, etc) are located in $(LOCATE_TARGET), possibly
# different from $(SEARCH_SOURCE).
HDRS on $(<) = $(SEARCH_SOURCE) $(HDRS) $(SUBDIR_HDRS) ;
# handle #includes for source: Jam scans for headers with
# the regexp pattern $(HDRSCAN) and then invokes $(HDRRULE)
# with the scanned file as the target and the found headers
# as the sources. HDRSEARCH is the value of SEARCH used for
# the found header files. Finally, if jam must deal with
# header files of the same name in different directories,
# they can be distinguished with HDRGRIST.
# $(h) is where cc first looks for #include "foo.h" files.
# If the source file is in a distant directory, look there.
# Else, look in "" (the current directory).
ScanFile $(>) ;
RmIfLink $(<) ;
switch $(>:S) {
case .asm : As $(<) : $(>) ;
case .c : Cc $(<) : $(>) ;
case .C : C++ $(<) : $(>) ;
case .cc : C++ $(<) : $(>) ;
case .cpp : C++ $(<) : $(>) ;
case .pc : Cc $(<) : $(>:S=.c) ;
ProC $(<:S=.c) : $(>) ;
case .f : Fortran $(<) : $(>) ;
case .idl :
switch $(<:S=) {
case *_c : C++ $(<) : $(>:S=_c.cpp) ; Idl $(>) ;
case *_s : C++ $(<) : $(>:S=_s.cpp) ; Idl $(>) ;
}
case .skel : C++ $(<) : $(>:S=_s.cpp) ; Idl $(>:S=.idl) ;
case .stub : C++ $(<) : $(>:S=_c.cpp) ; Idl $(>:S=.idl) ;
case .l : C++ $(<) : $(<:S=.cpp) ;
Lex $(<:S=.cpp) : $(>) ;
case .s : As $(<) : $(>) ;
case .y : C++ $(<) : $(<:S=.cpp) ;
Yacc $(<:S=.cpp) : $(>) ;
case * : UserObject $(<) : $(>) ;
}
}
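So, reading the dispatch above (names as in Randy's post; untested on my
part), a Jamfile could pull in just one half of the Idl compiler's output
like:

Main server : server.cpp X.skel ;   # compiles X_s.cpp, runs Idl on X.idl
Main client : client.cpp X.stub ;   # compiles X_c.cpp only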
From: David Moore <david.moore@dialogic.com>
Date: Tue, 13 Jun 2000 22:19:29 GMT
Subject: RE: CORBA IDL rule
Adding that extra Depends works magic!
From: john@nanaon-sha.co.jp (John Belmonte)
Date: Wed, 28 Jun 2000 14:12:32 +0900
Subject: Library rule quirks with LOCATE
Has anyone run into the following? A known bug? I'm running on linux with
recent binutils.
When LOCATE is set on a library target (or MakeLocate rule used), strange
things start happening:
* The target is always remade even when it's current. Setting NOARSCAN
option flag seems to fix this (at a performance cost).
* If the KEEPOBJS option flag is set, the library won't be created.
Date: Thu, 29 Jun 2000 10:06:44 -0700
From: Scott RoLanD <shr@chat.net>
Subject: Need help with Rules
I am trying to get Jam to help me create a set of files. These files
can either be generated by calling a program "my_script" or they can
be custom files prepared by the user.
Here is sort of a shell scripting mock up of what I want...
tribs="01 02 03 04"
for trib in $tribs
do
    if [ -f custom/gen_$trib.cmnd ]; then
        cp custom/gen_$trib.cmnd output/gen_$trib.cmnd
    else
        my_script gen.cfg $trib > output/gen_$trib.cmnd
    fi
done
I want to be able to change the value of tribs depending on the setup
for this test.
Also I only want the cp to run if custom/gen_$trib.cmnd is newer than
output/gen_$trib.cmnd.
Finally I only want my_script to run if gen.cfg is newer than
output/gen_$trib.cmnd.
I can't seem to write Jam rules that let me say something like this:
tribs = 01 02 03 04 ;
for trib in $(tribs) {
Forge $(trib) ;
}
Date: Thu, 29 Jun 2000 17:09:45 -0700 (PDT)
Subject: Re: Need help with Rules
Works for me:
$ cat Jamfile
tribs = 01 02 03 04 ;
for trib in $(tribs) {
Echo $(trib) ;
}
$ jam
01
02
03
04
...found 7 target(s)...
Maybe it's your Forge rule that's the problem?
From: john@nanaon-sha.co.jp (John Belmonte)
Date: Fri, 30 Jun 2000 17:00:52 +0900
Subject: Library rule quirks with LOCATE
Has anyone run into the following? A known bug? I'm running on linux with
recent binutils.
When LOCATE is set on a library target (or MakeLocate rule used), strange
things start happening:
* The target is always remade even when it's current. Setting NOARSCAN
option flag seems to fix this (at a performance cost)
* If the KEEPOBJS option flag is set, the library won't be created.
Date: Fri, 30 Jun 2000 07:54:47 -0700
From: Scott RoLanD <shr@chat.net>
Subject: Re: Need help with Rules
Right, I was saying that I couldn't write a Forge rule (or set of
rules) that would mimic the shell pseudo-code to do what I want:
tribs="01 02 03 04"
for trib in $tribs
do
    if [ -f custom/gen_$trib.cmnd ]; then
        cp custom/gen_$trib.cmnd output/gen_$trib.cmnd
    else
        my_script gen.cfg $trib > output/gen_$trib.cmnd
    fi
done
Date: Fri, 30 Jun 2000 08:52:08 -0700
From: Scott RoLanD <shr@chat.net>
Subject: Re: Need help with Rules
I played around with gmake and managed to get it to do what I want...
First the simple form:
tribs = 01 02 03 04
all : $(foreach trib,$(tribs),output/gen_$(trib).cmnd)
output/gen_%.cmnd: custom/gen_%.cmnd
cp -a $(<) $@
output/gen_%.cmnd: gen.cfg
my_script gen.cfg $* > $@
This works because gmake reads implicit rules from the top of the file
and once one matches it runs it and marks the target as updated.
I started with a more brute force form:
tribs = 01 02 03 04
all : $(foreach trib,$(tribs),output/gen_$(trib).cmnd)
output/gen_01.cmnd: $(shell if [ -f custom/gen_01.cmnd ] \; then \
echo custom/gen_01.cmnd \; else \
echo gen.cfg \; fi )
output/gen_02.cmnd: $(shell if [ -f custom/gen_02.cmnd ] \; then \
echo custom/gen_02.cmnd \; else \
echo gen.cfg \; fi )
output/gen_03.cmnd: $(shell if [ -f custom/gen_03.cmnd ] \; then \
echo custom/gen_03.cmnd \; else \
echo gen.cfg \; fi )
output/gen_04.cmnd: $(shell if [ -f custom/gen_04.cmnd ] \; then \
echo custom/gen_04.cmnd \; else \
echo gen.cfg \; fi )
output/gen_%.cmnd:
@if [ -f custom/gen_$*.cmnd ] ; then \
echo Copying custom $* ; \
cp custom/gen_$*.cmnd $@ ; \
else \
echo Creating $* ; \
my_script gen.cfg $* > $@ ;\
fi
This works because the dependencies are created on the fly. And then
when the rule is called it checks to see why it was run and does the
appropriate action.
So the question still remains.... Can I do this in Jam?
Date: Mon, 3 Jul 2000 13:15:13 -0500 (CDT)
Subject: Re: Library rule quirks with LOCATE
we have not encountered this on solaris or nt.
I would guess that this means the dependencies are always out of date, possibly
because the thing built is not the same thing that the dependency is expressed on.
Date: Mon, 3 Jul 2000 19:55:03 -0700 (PDT)
Subject: Re: Need help with Rules
As far as I know, Jam doesn't do wildcards in source-file names. For
example, you can't do something like:
Bulk doc : *.html ;
If my assumption of what you're trying to do is correct:
If files are in custom, and they're newer or not in outdir, copy them
Else generate them if they don't exist in outdir or gen.cfg is newer
then I can't think of any clean way (just clunky ones) in Jam to do what
you're looking to do, other than to have a pseudo-target that always runs,
which just hands off to some other tool, say a perl script, that does the
work for you. It'd be a pretty straightforward script, but it would mean
having a separate tool to maintain.
If Jam is doing what you need it to for the most part, and these files are
a relatively small part of your build, then working around it is probably
reasonable. But if this is a huge part of what you do, you might want to
consider a build-tool that does deal with wildcarding.
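The hand-off idea can be sketched in Jam like this; forge_tribs.pl is a
made-up script name, and the perl script would do the copy-or-generate
logic itself:

actions Forge {
perl forge_tribs.pl $(TRIBS)
}
NotFile forge ;
Always forge ;
TRIBS on forge = 01 02 03 04 ;
Forge forge ;
Depends all : forge ;

Since forge is marked Always, the script runs on every build and jam never
has to reason about the generated files' timestamps.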
Date: Mon, 3 Jul 2000 20:07:36 -0700 (PDT)
Subject: Re: Library rule quirks with LOCATE
I meant to reply to the guy who said this works for him, but I
accidentally deleted it. I'd like to see how you're doing it to make it work.
The behaviour I see is exactly as John described. If you set LOCATE on the
library target, it'll get put where you specify, but the object modules
for it will recompile every time, because they depend on a local library
that doesn't exist (e.g., libfoo.a(foo.o)). The same is true if you use
MakeLocate.
If a Library target is specified without a directory, the
LibraryFromObjects rule calls MakeLocate with LOCATE_TARGET as the
directory, which, if you're using SubDir, is set to the local directory,
and if you're not using SubDir, is unset. So in either case, you end up
with the target library having LOCATE set to the local directory, and the
object modules being dependent on that local library.
If you specify LOCATE (or LOCATE_TARGET) generically (as opposed to "on"
the library target), you'll get the object modules depending on the right
(full-path) library, but the modules themselves will be compiled into the
directory the library goes in rather than in the local source directory
(which probably isn't what you want, especially if you keep your objects,
since you could potentially end up with name conflicts).
If you set KEEPOBJS, then the object modules become dependencies of "obj"
and will get recompiled when necessary, but no dependency is set for the
library $(l) on "lib", so the library is never built. This looks to be an
actual bug. I'm not sure what the thinking was behind having the if on
KEEPOBJS set up different Depends, but commenting all of that out makes it
work, as does adding:
Depends lib : $(l) ;
inside the if $(KEEPOBJS). Either way.
The only way I know of to get building-a-library-directly-into-somewhere-
other-than-the-local-directory to work correctly is to get your library
target into a full-path before handing it off to the Library rule.
Historically, when Jam was first being put together, that's how things
were done -- we had a CONFIG file, part of which was a list of
library-name symbols (e.g., LIBFOO), all of which had as their values a
full-path name (e.g., $(BUILDDIR)/lib/libfoo.a), and library targets were
specified as:
Library LIBFOO : foo.c bar.c ;
If you don't want to go that route, you could instead have a wrapper rule
that does something like:
rule myLib {
local l ;
l = $(LIBDIR)$(SLASH)$(<:S=$(SUFLIB)) ;
Library $(l) : $(>) ;
}
This will get the object modules to be dependencies of the full-path
library (e.g., /work/lib/libfoo.a(foo.o)) -- the object modules will get
built into the local source directory, the library will get built into
$(LIBDIR), and the modules will only recompile as needed.
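A hypothetical usage of that wrapper, assuming Jamrules sets LIBDIR (both
names come from the sketch above, not standard Jambase):

LIBDIR = /work/lib ;
myLib foo : foo.c bar.c ;

The objects build in the source directory, and /work/lib/libfoo.a gets the
archive.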
Alternatively, you could just let the library get built into the local
directory then do an InstallLib on it. (Note: Since install isn't part of
the dependencies on "all", if you want to both build and install, you have
to run 'jam install').
Subject: RE: Library rule quirks with LOCATE
Date: Tue, 4 Jul 2000 14:18:33 -0700
I've just started to use jam (under Windows NT), and I'm very interested in
putting my libs in a different directory from my obj, which is different
from my source.
I saw the email earlier today about this, and I thought I would share my
solution and see what others might have to say about it.
The default implementation of the Library rule looks like:
rule Library {
LibraryFromObjects $(<) : $(>:S=$(SUFOBJ)) ;
Objects $(>) ;
}
I created an override for the Library rule in my Jamrules file:
rule Library {
    LOCATE_TARGET = $(OBJ_TARGET) ;
    if ! $(<:D) {
        LibraryFromObjects $(<:D=$(LIB_TARGET)) : $(>:S=$(SUFOBJ)) ;
    } else {
        LibraryFromObjects $(<) : $(>:S=$(SUFOBJ)) ;
    }
    Objects $(>) ;
}
and I use LIB_TARGET and OBJ_TARGET to control where the libraries go.
This seems to work for me, and it doesn't appear to rebuild anything unnecessarily.
Does anyone see any flaws with this?
From: Martine Habib <mhabib@microsoft.com>
Date: Thu, 6 Jul 2000 11:59:35 -0700
Subject: Using different flags for single file
I am trying to not just add cflags to a single file by using the
ObjectCcFlags rule, but substitute them entirely (as in the module is
compiled as debug, but the single file foo.c will have full optimization).
Is there any way I can do that ?
Date: Thu, 6 Jul 2000 14:22:51 -0700 (PDT)
Subject: Re: Using different flags for single file
You can just set CCFLAGS directly on the target (foo.o) itself. For example:
CCFLAGS on foo.o = -O ; #and whatever other flags you need
Note that you do need to do it on the object filename, not the source
filename, so you might want to be more platform-independently correct and
say instead:
CCFLAGS on foo$(SUFOBJ) = -O ;
If you're using SubDir, you'll need to specify it by the gristed name:
CCFLAGS on <$(SOURCE_GRIST)>foo$(SUFOBJ) = -O ;
Alternatively, if you do have a number of flags you use, you might want to
consider using OPTIM (which is passed on the compile line in the Cc
actions) to hold your debug (-g) flag, then set it to your optimize (-O)
flag on foo.o, since that would be pretty much maintenance free, whereas
if you use CCFLAGS and at some point you changed the flags you compile with,
you'd need to remember to change them for foo.o as well.
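A sketch of that OPTIM variant; the grist value is whatever your SubDir
setup produces, so <src!dir> here is invented:

OPTIM = -g ;                            # module default: debug everywhere
OPTIM on <src!dir>foo$(SUFOBJ) = -O ;   # only foo.c gets optimized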
From: Martine Habib <mhabib@microsoft.com>
Subject: RE: Using different flags for single file
Date: Thu, 6 Jul 2000 19:36:28 -0700
Can I do this with any variable?
I mean, for example, in the Jamfile:
BLA on <$(SOURCE_GRIST)>foo$(SUFOBJ) = true ;
And in the Jambase, in the Cc rule:
if $(BLA) {
echo "BLA is true !" ;
}
It does not seem to work, but maybe I am doing something wrong.
Date: Fri, 7 Jul 2000 12:03:19 +0200 (METDST)
From: Igor Boukanov <boukanov@fi.uib.no>
Subject: RE: Using different flags for single file
When you assign a variable on a target, its assigned value will be
visible only in the target update actions. There is no way to get
the value before that stage.
What you can do is store the value in an ordinary variable as well, e.g.:
BLA on <$(SOURCE_GRIST)>foo$(SUFOBJ) = true ;
XX_BLA_$(SOURCE_GRIST)_foo$(SUFOBJ) = true ;
# Now you can query
if $(XX_BLA_$(SOURCE_GRIST)_foo$(SUFOBJ)) {
Echo "BLA is true !" ;
}
Here are rules that can simplify life in your case:
rule targetVarSet {
$(1) on $(2) = $(3) ;
__tVar_$(1)_$(2) = $(3) ;
}
rule targetVarGet { $(3) = $(__tVar_$(1)_$(2)) ; }
rule targetVarEcho { Echo $(__tVar_$(1)_$(2)) ; }
Usage can be like:
targetVarSet BLA : <$(SOURCE_GRIST)>foo$(SUFOBJ) : true ;
targetVarEcho BLA : <$(SOURCE_GRIST)>foo$(SUFOBJ) ;
targetVarGet BLA : <$(SOURCE_GRIST)>foo$(SUFOBJ) : tmp ;
if $(tmp) { Echo "BLA is true !" ; }
From: Martine Habib <mhabib@microsoft.com>
Date: Thu, 6 Jul 2000 11:30:13 -0700
Subject: Using different flags for single file
I am trying to not just add cflags to a single file by using the
ObjectCcFlags rule, but substitute them entirely (as in the module is
compiled as debug, but the single file foo.c will have full optimization).
Date: Fri, 7 Jul 2000 15:39:43 -0700 (PDT)
Subject: RE: Using different flags for single file
[ Just a reminder: Nowadays Jambase is compiled into the jam executable,
so if you make a change to the Jambase file and it doesn't seem to be
working, you might have forgotten (I usually do :) that you need to
specify the Jambase file on the command line ('jam -f /path/to/Jambase').
You can also just make the change to jambase.c and rebuild jam (but you
probably want to do that after you've worked it out in Jambase first :) ]
As to what you're trying to do...as far as I know:
- If you set a variable-value "on" a target, you can't access
that value in a rule, but it will carry through to the actions.
- If you set the value generally, you can access that value in a
rule, but if you access it in the actions it will be whatever
it was last (as in finally, in the end) set to.
Example variable-value set "on" a target:
Jamfile:
Main foo : foo.c ;
BLAH on foo = true ;
Jambase:
actions Link bind NEEDLIBS {
[ $(BLAH) ] && echo "Blah is true"
$(LINK) $(LINKFLAGS) -o $(<) $(UNDEFS) $(>) $(NEEDLIBS) $(LINKLIBS)
}
$ jam -f /usr/local/lib/Jambase
....found 10 target(s)...
....updating 2 target(s)...
Cc foo.o
Link foo
Blah is true
Chmod foo
....updated 2 target(s)...
Example variable-value set generally:
Jambase:
rule Cc {
if $(BLAH) { Echo BLAH is $(BLAH) ; }
[...]
}
Jamfile:
rule BlahFalse {
BLAH = ;
Main $(<) : $(>) ;
}
rule BlahTrue {
BLAH = true ;
Main $(<) : $(>) ;
}
BlahTrue foo : foo.c ;
BlahFalse bar : bar.c ;
$ jam -f /usr/local/lib/Jambase -n
BLAH is true
....found 13 target(s)...
....updating 4 target(s)...
Cc foo.o
cc -c -O -o foo.o foo.c
Link foo
[ ] && echo "Blah is true"
cc -o foo foo.o
Chmod foo
[etc.etc...]
....updated 4 target(s)...
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Library rule quirks with LOCATE
Date: Fri, 7 Jul 2000 18:28:30 -0700
Basically, the way that Jam is implemented, there is a
requirement on the Jamrules that library members have the
same LOCATE value as the library itself. This stems from
the fact that the timestamp() function, when applied to
a library member, looks for the library using the LOCATE
value associated with the member.
The default LibraryFromObjects rule implements this
requirement. But you can break your build if you override
the LOCATE on the library without also resetting LOCATE
on all of the members.
Thus do not MakeLocate on the library.
Don't like this behavior? You can change your Jamrules
to place libraries in LOCATE_LIBRARIES instead of LOCATE_TARGET
by modifying the LibraryFromObjects rule. You would also
want to modify the SubDir rule.
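A hedged sketch of the idea (library names and the LOCATE_LIBRARIES convention here are illustrative, not from the original post): whatever location the library gets, the members' LOCATE must point at the same place so timestamp() can find the archive:

```jam
LOCATE_LIBRARIES = $(TOP)/lib ;               # assumed convention
Library libfoo.a : a.c b.c ;
MakeLocate libfoo.a : $(LOCATE_LIBRARIES) ;
# ...and reset LOCATE on every archive member to the same directory,
# e.g. inside a modified LibraryFromObjects, so that timestamp()
# looks for the archive in $(TOP)/lib rather than in LOCATE_TARGET.
```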
Another option would be to change jam itself so that the
timestamp function would first lookup/bind the library node
(and thus the library's location), and then use this to
scan the archive. Effectively ignoring the LOCATE associated
with the library member (if any).
Perhaps I'll post a patch for this when builds on NT become
an issue for me (symbolic links start failing).
Re the KEEPOBJ flag: in the default Jamrules, if it is set,
libraries are not built unless explicitly required by another
target. You can see that the objects are made dependents of the
obj pseudo-target rather than of the library being linked to the
lib target.
From: "Kimpton, Andrew" <awk@pulse3d.com>
Date: Tue, 11 Jul 2000 12:08:47 -0700
Subject: Multiple 'independent' targets
I have (what I think is) a pretty straightforward question but browsing the
docs couldn't answer it.
I need to build two 'independent' binaries from a single execution of Jam -
something that
all: foo bar
foo : foo.c
cc -o foo foo.c
bar : bar.c
cc -o bar bar.c
would do in Make. To add to the confusion, one of the things I'm building is
a shared library built from objects kept in a $(TOP)/obj.i386 hierarchy, so I
use the MainFromObjects rule.
Date: Tue, 11 Jul 2000 15:25:22 -0500 (CDT)
Subject: Re: Multiple 'independent' targets
Depends exe : foo bar ;
main foo : foo.c ;
main bar : bar.c ;
LinkLibrary foo : obj.i386 ;
LinkLibrary bar : obj.i386 ;
or something like that.
We actually have customized versions of the main and linklibrary rules, as
well as of others.
Date: Thu, 13 Jul 2000 08:38:56 -0400
From: Alex Nicolaou <alex@freedomintelligence.com>
Subject: adding an incremental mode
I'm trying to add an incremental mode to Jam 2.2.5, since building the
dependency tree is pretty expensive and is taking a long time. The
approach I've taken is to modify make() to include a loop, and
re-initialize the target's fate, hfate and progress to their initial
values. This doesn't seem to work properly, as nothing is ever built
after the initial build. I imagine that I'm just not initializing some
element of the target structure and so I am hitting a short-circuit
somewhere; any suggestions?
Date: Thu, 13 Jul 2000 07:23:44 -0700
Subject: Re: adding an incremental mode
From: Matt Armstrong <matt@corp.phone.com>
Try searching the mail archives for this list. A patch to do just
this was circulated about 1.5 years ago.
Date: Thu, 13 Jul 2000 11:41:26 -0700
Subject: Re: adding an incremental mode
From: Matt Armstrong <matt@corp.phone.com>
You're right. I probably mailed one of the people in that thread for
the patch. Unfortunately, I've since lost track of the patch after
changing jobs.
From: Karl Klashinsky <klash@cisco.com>
Subject: Re: adding an incremental mode
Date: Fri, 14 Jul 2000 11:41:23 -0700
I, too, am interested in this patch. Does it exist? The few postings
I read on the archive didn't contain any patch or pointer to a patch.
Date: Thu, 13 Jul 2000 10:55:41 -0700
From: Iain McClatchie <iain@10xinc.com>
Subject: Re: adding an incremental mode
I'm not sure what you mean by incremental mode.
One of the troubles I have with Jam is that it first builds the
dependency tree, and then it figures out what to execute and does
that. I would very much like to be able to modify the dependency
tree during execution -- essentially, I'd like an action to be able
to call a rule.
I thought this was too large a change to make to Jam without a
total rewrite. Is your incremental mode something like what I'm
talking about?
Date: Mon, 17 Jul 2000 14:28:25 -0400
From: Alex Nicolaou <alex@freedomintelligence.com>
Subject: Re: adding an incremental mode
No, it's almost opposite from what you want. What I want to do
is persist the dependency graph so that it doesn't need to be
recomputed each time - the time it takes to read all the headers
and determine what includes what is the lion's share of my build time.
From: "Hoff, Todd" <Todd.Hoff@ciena.com>
Subject: RE: adding an incremental mode
Date: Mon, 17 Jul 2000 11:30:58 -0700
Like the idea. But, for this to work well don't you need the OS to
tell you about file system changes so you can reevaluate which
dependencies need updating?
Date: Mon, 17 Jul 2000 11:39:43 -0700
Subject: Re: adding an incremental mode
From: Matt Armstrong <matt@corp.phone.com>
The OS tells you by updating the modification time on the file. Say
you cache dependency information for foo.h. Next time you run jam,
jam checks the modification time of foo.h and uses the cached
dependency information only if foo.h hasn't changed.
If I remember correctly, jam's internal data structures are amenable
to this kind of thing. The patch to do this was not large.
Date: Mon, 17 Jul 2000 12:09:42 -0700
Subject: Re: adding an incremental mode
From: Matt Armstrong <matt@corp.phone.com>
Jam has to check the timestamp on all the files in the dependency tree
anyway -- how else would it know what to build?
From: "Hoff, Todd" <Todd.Hoff@ciena.com>
Subject: RE: adding an incremental mode
Date: Mon, 17 Jul 2000 12:17:07 -0700
If you are a listener for OS file changes, then you can update your out-of-date
table from just those changes. You don't have to check every file on every
build; the table should always be up to date. This requires a "server"
component to act as the listener.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Date: Fri, 21 Jul 2000 15:34:06 -0700
Subject: variable default = value
It's not explicit in the jam documentation, but
variable = ;
is equivalent to unsetting the variable.
This is normally not a problem, except if you use
variable default = x ;
the semantics of which should be documented as:
"Set variable to x if variable is unset or has zero length (zero elements)."
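A small illustration of the difference (variable names are made up):

```jam
FOO = ;              # FOO is now empty - same effect as never setting it
FOO default = x ;    # takes effect: FOO becomes x
BAR = y ;
BAR default = z ;    # no effect: BAR already has a value, stays y
ECHO $(FOO) $(BAR) ; # prints: x y
```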
From: john@nanaon-sha.co.jp (John Belmonte)
Date: Sun, 23 Jul 2000 23:24:16 +0900
Subject: bug in HDRPATTERN
Here is a line pulled from a header in SGI's STL:
#include <time.h> /* XXX should use <ctime> */
Jam's default HDRPATTERN interprets the header name as "time.h> /* XXX
should use <ctime" because the .* in the regexp does greedy matching.
Here is a corrected HDRPATTERN. Be sure to replace each {t} with a real tab.
HDRPATTERN = "^[ {t}]*#[ {t}]*include[ {t}]*[<\"]([^\">]*)[\">].*$" ;
From: john@nanaon-sha.co.jp (John Belmonte)
Date: Mon, 24 Jul 2000 11:16:42 +0900
Subject: MakeLocate problem
I'm still gathering details, but here is what I have so far...
I ran into a problem of targets being remade for no reason, and on top of
that the set of remade targets seemed to change cyclically on every jam run.
This was just for a simple Main rule with a few sources. After pulling my
hair out for a long time I realized that the problem was timing dependent...
it seemed to be related to compile or link times.
After trudging through jam debug output I noticed that all my targets had a
dependency on the current directory. In unix a directory's date is updated
whenever its contents change, which in turn causes my targets to be tagged
as old. If say, object and exe targets are put in the same directory, and
the link takes more than 1 second, the writing of the exe will make the
directory date newer than the object files.
The cause of the dependency is LOCATE_TARGET (and the MakeLocate rule),
which is being set by the SubDir rule. I understand that the reason for the
dependency is to ensure that the target directory exists, but surely we
don't want to be remaking targets when a directory date changes...?
I'm also wondering if this is not related to the problem I posted previously
about using MakeLocate on library targets.
It seems hard to believe that no one else has run into this problem. I must
be making a mistake somewhere?
From: john@nanaon-sha.co.jp (John Belmonte)
Date: Mon, 24 Jul 2000 12:54:14 +0900
Subject: conditional bug?
The jam language docs state that ( cond ) can be used for precedence
grouping. However the following expression will evaluate to true:
TESTVAR = Hello ;
if $(TESTVAR) && ($(TESTVAR) != $(TESTVAR)){ ECHO "true" ; }
If the grouping ( ) is removed the expression evaluates correctly.
From: john@nanaon-sha.co.jp (John Belmonte)
Subject: Re: MakeLocate problem
Date: Mon, 24 Jul 2000 13:11:43 +0900
It turns out that this problem only occurs for targets in the TOP directory,
in other words when LOCATE is set to the current directory ("." or DOT).
Although MkDir correctly sets the NOUPDATE rule on the directory targets, it
does not do this when the directory is DOT. MakeLocate however still does a
Depends in this case, resulting in a dependency on DOT without a NOUPDATE.
Here is a proposed correction to MakeLocate (doing nothing when the
directory is DOT). I'm not that confident that it won't break something else.
rule MakeLocate {
if $(>) && $(>[1]) != $(DOT) {
LOCATE on $(<) = $(>) ;
Depends $(<) : $(>[1]) ;
MkDir $(>[1]) ;
}
}
From: "Greg Loucks" <gloucks@msmail.tti.bc.ca>
Subject: RE: conditional bug?
Date: 25 Jul 2000 09:33:00 -0700
I believe you need spaces around the grouping ()'s as in:
TESTVAR = Hello ;
if $(TESTVAR) && ( $(TESTVAR) != $(TESTVAR) ) { ECHO "true" ; }
Date: Tue, 25 Jul 2000 14:14:21 -0700 (PDT)
Subject: Converting to jam
After having used jam at my last company, I have
come to love and depend on it. However, at the new
company I am working at, they are still using smake.
I'm looking to convert their existing
makefiles to jam on my own and then maybe maintain
them in tandem for a while until the other developers
buy into it.
In any case, here's my question: does anyone have some
automated scripts/tools that will convert makefiles to jamfiles?
Date: 26 Jul 2000 17:47:48 +0200
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Subject: How to: Link libraries?
Hi, I have a bunch of Jamfiles in different sub-directories. All use
the 'Objects' rule and just list the *.cpp filenames. All obj files
are being generated. No problem. Now I want to link them all together
into one DLL, therefore I added a MainFromObjects rule. But well...
the files are missing. How can I refer to all the compiled/needed
object files with one variable?
TOP = f:/openamulet/source ;
MainFromObjects oa.dll $OBJS;
^^^^^^ ???
Subject: RE: How to: Link libraries?
Date: Wed, 26 Jul 2000 11:37:18 -0700
So I was able to get this to work under NT using the following:
SUFEXE = $(SUFLIB:S=.dll) ;
LINKLIBS += $(NAMES_OF_LIB_FILES) ;
LINKFLAGS += "/dll /def:$(NAME_OF_DEF_FILE) /implib:$(NAME_OF_IMPLIB)
/LIBPATH:$(PLACE_TO_LOOK_FOR_LIBS)" ;
Main MyProject :
SomeSource.cpp
;
If the import library goes in the same directory as the .dll file then you
can eliminate the /implib portion. If your libraries are fully qualified,
then you can eliminate the /LIBPATH portion.
Date: 27 Jul 2000 14:16:10 +0200
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Subject: RE: How to: Link libraries?
What are all these $(<) and $(>)?
Perhaps more detailed information helps:
1. I have a Jamrules file in the base directory of the source tree. This
file only contains the compiler/linker flags stuff and sets the TOP
variable.
2. In the base directory I have a Jamfile, which contains all the
'SubInclude' statements.
SubDir TOP opal ;
Objects opal.cpp opal_code.cpp opal_operations.cpp ...
That's it! I then tried to add 'MainFromObjects oa.dll ;' to my
Jamrules file. (Perhaps it's better to put this into the base-directory
Jamfile?) But of course the linker states: no files to link...
Ok, but then you have to state the libraries by hand inside the
Jamrules file, right? I thought Jam is smart enough to set an implicit
variable (like $objs) which I can refer to from a rule.
From: Walter Boggs <wboggs@cybercrop.com>
Date: Wed, 26 Jul 2000 11:43:55 -0600
Subject: Newbie question
I have built Jam 2.2 and want to use it for Java. However, when I run it, it
wants an environment variable pointing to my C compiler. I built Jam on a
Win2k box that had MSVC, then moved Jam to my current box that has no C
compiler. Do I need one to use Jam with Java?
Subject: RE: How to: Link libraries?
Date: Thu, 27 Jul 2000 13:43:41 -0700
When you specify stuff in Jam, you're specifying a rule with "targets" and
"dependants". Typically it looks something like:
Main foo : foo.c ;
Main is the name of the rule. $(<) refers to the list of stuff between Main
and the colon, and $(>) refers to the list of stuff between the colon and
the semi-colon.
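A toy rule (not from the original mail) makes the mapping visible:

```jam
rule Show {
ECHO targets: $(<) ;
ECHO sources: $(>) ;
}
Show foo bar : foo.c bar.c ;
# prints "targets: foo bar" then "sources: foo.c bar.c"
```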
You shouldn't have to set the TOP variable. It gets set automatically by the
first SubDir call.
Ahhh. Specifying "MainFromObjects oa.dll ;" basically says to build oa.dll
from no objects, because no objects are listed on the right-hand side of the
colon ("MainFromObjects oa.dll ;" is really the same as "MainFromObjects
oa.dll : ;").
Currently, jam builds up a list of dependants on an obj target, but I don't
think you can access this as a variable.
I created a little test to see if I could get this to work, and got it working.
I made my top most jamfile look like:
SubDir TOP ;
rule MyObjects {
Objects $(<) ;
local s ;
makeGristedName s : $(<) ;
MY_OBJS += $(s:S=$(SUFOBJ)) ;
}
MyObjects foo.c ; # Only needed if there is source in the topmost directory.
SubInclude TOP Foo1 ;
SubInclude TOP Foo2 ;
ECHO "MY_OBJS =" $(MY_OBJS) ;
SubDir TOP ; # re-execute SubDir to reset grist, locate, etc.
SUFEXE = $(SUFLIB:S=.dll) ;
LINKFLAGS += "/dll /def:foo.def" ;
MainFromObjects foo : $(MY_OBJS) ;
and the subdirectory jamfiles looked like:
SubDir TOP Foo1 ;
MyObjects foo1.c ;
I had to put in a second invocation of SubDir to reset the SOURCE_GRIST and
LOCATE_TARGET etc, so that the invocation of MainFromObjects would work
properly. This is because the SubInclude'd jamfiles execute SubDir and
MainFromObjects uses makeGristedSource, so it would get the grist from the
last SubInclude'd jamfile rather than the one we want.
So basically, it provides a MyObjects rule which builds up a variable
MY_OBJS which contains a list of object files, which is then used by the
MainFromObjects rule.
A better place to put the MyObjects rule would be in the Jamrules file
located in the topmost directory.
This example builds a .dll.
Subject: RE: Newbie question
Date: Thu, 27 Jul 2000 14:26:54 -0700
I think you should be able to define one of MSVC, MSVCNT, or BCCROOT and
just point them at some directory (existent or not). As long as you don't
try to compile any C/C++, I think that this will work (although I haven't
tested it).
If you feel really ambitious, you could modify the Jambase located in the
src directory to not generate an error if one of these is not defined
(assuming of course that pointing one to some arbitrary location works).
Date: Thu, 27 Jul 2000 15:03:09 -0700 (PDT)
Subject: Re: Newbie question
You don't need a compiler -- you just need to satisfy Jam's need to have
one of BCCROOT, MSVCNT, or MSVC set. If I'm on an NT, and not intending to
do any C compiles (which I assiduously try to avoid ever doing on an NT
:), I usually just set MSVC to foo.
On the other hand, if you know for sure you won't ever be doing any C/C++
compiling/linking/etc., you could just get rid of all that if'ing and
else'ing (and its associated variables-setting) in the Jambase file (do it
in jambase.c and recompile to have it be the default, so you don't have to
point to Jambase on the command-line).
Just as a side note: Jam isn't really particularly geared towards Java, so
you might want to consider using a build tool that is.
From: Morgan Fletcher <morgan.fletcher@luna.com>
Subject: RE: Newbie question
Date: Thu, 27 Jul 2000 15:07:06 -0700
I know about Ant. (http://jakarta.apache.org/ant/index.html) Are there others?
Date: 29 Jul 2000 11:31:34 +0200
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Subject: RE: How to: Link libraries?
Thanks for the detailed explanation. I'm going to try it out and let
you know but it looks like it's exactly what I need. Thanks again
Date: Mon, 31 Jul 2000 12:57:21 -0700 (PDT)
Subject: Java build tools (was: RE: Newbie question)
There were a number of them in development for a while, but I think most of
them died off once Ant came along. There are a couple, though, that got
out before it did, but it's hard to say what their current state (as in
fate) actually is. I haven't used any of them, so I can't make any
comments about how good (or not) they are.
There's one called JarMaker:
www.gjt.org/servlets/JCVSlet/show/gjt/org/gjt/jem/jarmaker/README/1.3
And there's 'jmk', which is more or less 'make' written in Java, but it
does appear to have a strong Java-use slant to it:
www.ccs.neu.edu/home/ramsdell/make/edu/neu/ccs/jmk/jmk.html
I also ran across a tool the other day called STIX, which isn't available
yet (it isn't strictly Java-centric, but would work for it, as well as for
C/C++, etc.) -- it sounds like it could be pretty interesting:
A couple of articles that might be of interest:
www.inf.cbs.dk/staff/nielsj/research/make/tool.html (a little outdated,
but with some reasonably good info)
www.geosoft.no/javamake.html (Make-oriented, but ditto above "but")
Date: 1 Aug 2000 20:19:06 +0200
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Subject: RE: How to: Link libraries?
Hi, I now tried your solution. It's getting me closer. However, now I
have the problem that the command line for the linker is too long :-|
Is there an easy way to let Jam split it up and call the linker
command several times, or to redirect the content to a file?
Date: Tue, 1 Aug 2000 14:44:26 -0400
From: Donald Sharp <sharpd@cisco.com>
Subject: Re: How to: Link libraries?
look at the actions piecemeal modifier.
http://public.perforce.com/public/jam/src/Jamlang.html
It's under Rules/Action Modifiers.
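For reference, the stock Jambase uses piecemeal on its Archive action: jam re-runs the action with successive slices of $(>) so each generated command stays under the line-length limit. A sketch of the shape (taken to be close to the Jambase original):

```jam
actions updated together piecemeal Archive {
$(AR) $(<) $(>)
}
```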
Subject: RE: How to: Link libraries?
Date: Tue, 1 Aug 2000 11:52:12 -0700
Unfortunately, the linker doesn't really lend itself to running multiple
times (i.e. to using the piecemeal option), and jam doesn't seem to support the
notion of "response" files (maybe I'm missing something). One approach would
be to merge several libraries into a bigger library and pass that on the
command line (thus reducing the size of the command line). If you're feeding
raw objects, creating libraries is probably the easiest way to go.
Another approach is to rewrite the Link rule/action to do something like:
rule Link {
MODE on $(<) = $(EXEMODE) ;
Chmod $(<) ;
local i ;
StartLink $(<) : $(>) ;
for i in $(>) { LinkItem $(<) : $(i) ; }
FinishLink $(<) : $(>) ;
}
actions quietly Link {}
actions quietly StartLink { $(RM) $(<:S=.rsp) }
actions quietly LinkItem { ECHO $(>) >> $(<:S=.rsp) }
actions FinishLink bind NEEDLIBS {
$(LINK) $(LINKFLAGS) /out:$(<) $(UNDEFS) @$(<:S=.rsp) $(NEEDLIBS)
$(LINKLIBS)
}
There doesn't appear to be any way to delete an action once it's been
created, so I've been using the form above for Link. The word "quietly"
means that jam won't print the "Link someoutput" message.
I tested this under NT and it seems to work for the simple test that I have.
You may wish to add the line
$(RM) $(<:S=.rsp)
to the FinishLink action to remove the .rsp file when you're done.
Date: 2 Aug 2000 09:38:04 +0200
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Subject: RE: How to: Link libraries?
Hi, I did but either I don't get it or it's not working as expected. I
variable. Then I added:
actions piecemeal OAObjects {}
In the hope that the rule would be used until a 'shell-buffer overrun'
was approaching, and that Jam would continue normal processing and later
return to where it left off to continue the build process. But this
failed :-(
I expect that I have to add the link command inside the 'piecemeal'
scope, right?
Date: Wed, 2 Aug 2000 08:29:41 -0400
From: Donald Sharp <sharpd@cisco.com>
Subject: Re: How to: Link libraries?
That sounds right. I would have put the piecemeal
modifier on the Link action.
From: "Greg Loucks" <gloucks@msmail.tti.bc.ca>
Date: 2 Aug 2000 10:14:00 -0700
Subject: multiple target platforms?
I would like to be able to use a single run of Jam to build multiple targets
for an embedded system (pSOS+). I have projects that have targets that can be
built for:
1. simulation environment (an addin to Visual Studio)
2. evaluation board (my target processor with distinct Board Support Package)
3. final hardware (my hardware and my BSP)
4. host OS, Windows NT (for source generation executables)
And, if that weren't enough, I also require the ability to build some tools
for the above target platforms _and_ Windows CE:
5. WinCE simulation environment
6. WinCE CEPC target platform (like an Eval Board)
7. WinCE on target device (different CPU, different BSP)
Has anyone come across this issue before?
I've fiddled a little bit with the code and came up with a hardware "grist"
that I can add to every file which specifies the target system, os, config etc.
Then I have a target-system-related action for every environment.
It seems pretty hokey to me.
E.g.
a Jamfile
=========
P1 = d.estp.pp ; # d for debug; estp is the eval board; pp is pRISM+ environ
P2 = d.wipc.ps ; # d for debug; wipc is wintel pc; ps is pSOSim
P3 = r.mybd.pp ; # r for release; mybd is my board; pp is pRISM+
S = exception.cpp object.cpp task.cpp queue.cpp mutex.cpp semaphore.cpp ;
LINKFLAGS$(P1:S) += -e _START ram-estp.dld ;
LINKFLAGS$(P3:S) += -e _START ram-mybd.dld ;
CCFLAGS$(P1:S) += -Xsmall-data=8 -ei1683 -g2 -Xno-optimized-debug ;
CCFLAGS$(P3:S) += -Xsmall-data=8 -ei1683 -g2 -Xno-optimized-debug ;
C++FLAGS$(P1:S) += -Xsmall-data=8 -ei1683 -g2 -Xno-optimized-debug ;
C++FLAGS$(P3:S) += -Xsmall-data=8 -ei1683 -g2 -Xno-optimized-debug ;
PLibrary $(P1) $(P2) $(P3) : tools-PSos : $(S) ;
PMain $(P1) : PSos-test-eval : test.cpp ; # generates an .elf file
PMain $(P2) : PSos-test-sim : test.cpp ; # generates an .exe file
PMain $(P3) : PSos-test : test.cpp ;
PLinkLibraries $(P1) $(P2) $(P3) : tools-PSos-test :
$(TOP)/system/pSOS/pSOS.a tools-PSos.a ;
The Jambase file contains rules like PLibrary which applies my hardware "grist"
to each target and source and then refers to the regular Library rule:
rule PLibrary # platform(s) : library : source(s)
{ local i ; for i in $(1) { Library $(2:H=$(i)) : $(3:H=$(i)) ; } }
Then I have to modify all the rules and actions all the way down to check the
hardware settings and call the appropriate actions:
rule Link {
local p ;
makePlatform p : $(<) ; # get the Hardware grist
if $(p) {
LINK on $(<) = $(LINK$(p:S)) ;
LINKFLAGS on $(<) = $(LINKFLAGS$(p:S)) ;
UNDEFS on $(<) = $(UNDEFS$(p:S)) ;
LINKLIBS on $(<) = $(LINKLIBS$(p:S)) ;
switch $(p:S) {
case .pp : Link.pp $(<) : $(>) ;
case .vs : Link.vs $(<) : $(>) ;
# etc...
}
}
MODE on $(<) = $(EXEMODE) ;
Chmod $(<) ;
}
Date: Wed, 2 Aug 2000 12:53:08 -0700 (PDT)
Subject: RE: How to: Link libraries?
Have you tried just cranking up the value for MAXLINE in jam.h? I have it
set to 32768 on an NT and have never had a problem (w.r.t. too-long lines,
anyway).
Date: Thu, 3 Aug 2000 16:07:29 +1000 (EST)
From: David Funk <d.funk@photonics.com.au>
Subject: Making non-unique target names fails
I'm using the latest jam sources from http://www.perforce.com/jam/jam.html
(version 2.2.1), compiled for FreeBSD 4.0.
In real life, I have a project that contains multiple libraries in
subdirectories, with each generating its own host test file to do unit
tests. I've simplified this in the following test setup:
/tmp
|
/\ SubInclude TOP sub1 ;
/ \ SubInclude TOP sub2 ;
/ \
/ \
| SubDir TOP sub2 ;
| Main hosttest : hosttest.c ;
|
SubDir TOP sub1 ;
Main hosttest : hosttest.c ;
TOP is set to /tmp/jamtest. I have an empty /tmp/jamtest/Jamrules.
Running jam gives me:
1 $ jam -d2
2 ...found 17 target(s)...
3 ...updating 3 target(s)...
4 Cc /tmp/jamtest/sub1/hosttest.o
5
6 cc -c -O -I/tmp/jamtest/sub1 -o /tmp/jamtest/sub1/hosttest.o \
7 /tmp/jamtest/sub1/hosttest.c
8
9 Cc /tmp/jamtest/sub2/hosttest.o
10
11 cc -c -O -I/tmp/jamtest/sub2 -o /tmp/jamtest/sub2/hosttest.o \
12 /tmp/jamtest/sub2/hosttest.c
13
14 Link /tmp/jamtest/sub2/hosttest
15
16 cc -o /tmp/jamtest/sub2/hosttest /tmp/jamtest/sub1/hosttest.o
17
18 Chmod /tmp/jamtest/sub2/hosttest
19
20 chmod 711 /tmp/jamtest/sub2/hosttest
21
22 Link /tmp/jamtest/sub2/hosttest
23
24 cc -o /tmp/jamtest/sub2/hosttest /tmp/jamtest/sub2/hosttest.o
25
26 Chmod /tmp/jamtest/sub2/hosttest
27
28 chmod 711 /tmp/jamtest/sub2/hosttest
29
30 ...updated 3 target(s)...
31 $
Lines 6 and 11 compile the two hosttest.c files as expected.
However, line 16 links sub1/hosttest.o into sub2/hosttest instead of
sub1/hosttest. Then line 24 links sub2/hosttest.o into sub2/hosttest,
overwriting the previous link. sub1/hosttest is never made.
Am I doing something wrong? Is jam designed to handle non-unique target names
in a project? Is this a bug?
Date: 4 Aug 2000 12:28:19 +0200
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Subject: RE: How to: Link libraries?
that the shell states: command line too long. How have you changed the
limit of the command line length in NT? Robert
Subject: RE: Making non-unique target names fails
Date: Sun, 6 Aug 2000 11:11:35 -0700
I recreated your little example, and then did the following:
jam -d6 | grep Depends
I got the following output (note that I did this under NT, which is why the
extensions are a little different than what your exact example would
produce):
In particular notice the two lines:
The second dependency will overwrite the first one, and this is why you're
only getting a single target built. Jam uses the notion of "grist" to aid in
distinguishing duplicate targets. You'll see that hosttest1.obj is prefixed
by <sub1>. <sub1> is referred to as the "grist".
I think of "targets" in jam as arbitrary text strings or labels. Each label
gets bound to a real file on disk.
When you add grist, you are creating labels which are more unique. When you
do this it is very important that the grist is added both when you identify
the target and when you use it. For example, in Jambase, the Objects rule
makes object targets which associate object files with source files. The
Objects rule applies grist to both the source and object names. When the
MainFromObjects rule wants to associate objects with a final output target,
it needs to make sure that it applies the exact same grist.
I modified your example by creating the following rule (and I would put this
rule in your Jamrules file):
rule MyMain {
local t ;
makeGristedName t : $(<) ;
Main $(t) : $(>) ;
}
and modified the lower level Jamfile to use MyMain rather than Main. If you
repeat the jam -d6 | grep Depends with these modifications you'll see:
and now the same corresponding two lines that I extracted earlier look like:
and you get two executable programs built rather than one.
Anybody who is dealing with multi-directory jam projects should take the
time to become familiar with, and understand, the concept of grist. It can
be a very powerful tool. I found grist to be a little mysterious when I
started to use jam, but now that I understand it, it gives me a great deal
of control over how things work.
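A minimal sketch of the point (directory names taken from the example above): gristed names keep the two hosttest targets distinct, and the :G modifier sets grist explicitly:

```jam
t = hosttest$(SUFOBJ) ;
ECHO $(t:G=sub1) ;   # -> <sub1>hosttest.o (on Unix) - a different target from...
ECHO $(t:G=sub2) ;   # -> <sub2>hosttest.o
```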
Date: Mon, 14 Aug 2000 13:44:58 +0100
From: Paul Haffenden <pjh@unisoft.com>
Subject: Regular expression variable editing
I've added some changes to expand.c to implement
regular expression substitutions on variables. It uses a :E modifier.
e.g.
instring = "Now let all good men lend me five pounds" ;
# Delete the first 'o'
first = $(instring:E=/o//) ;
# Delete all the 'o's. Uses the global flag g.
second = $(instring:E=/o//g) ;
# Use () to remember part of the pattern string, and use
# it to replace \1.
third = $(instring:E="/good (...)/good wo\\1/") ;
# Use the magic character '&', that replaces the whole matched
# string. Note use of a different delimiter from '/'
fourth = $(instring:E="'good'& &'") ;
ECHO $(instring) ;
ECHO $(first) ;
ECHO $(second) ;
ECHO $(third) ;
ECHO $(fourth) ;
produces:
Now let all good men lend me five pounds
Nw let all good men lend me five pounds
Nw let all gd men lend me five punds
Now let all good women lend me five pounds
Now let all good good men lend me five pounds
It's a bit quirky; you have to be careful with your
'\', ':', '[' and ']' characters.
If anyone would find this change useful, email me and
I'll send you the amended source.
Date: Mon, 14 Aug 2000 10:58:59 -0500
From: Eric Scouten <scouten@Adobe.COM>
Subject: Re: Regular expression variable editing
Yes, I would certainly find this useful. This sounds like something that
belongs in the public P4 depot at public.perforce.com. I'll be happy to
submit it there if that's okay with you.
Date: Tue, 15 Aug 2000 17:23:46 -0700
From: Jos Backus <josb@corp.webtv.net>
Subject: SEARCH_SOURCE question
I'm very new to Jam so I may be missing something very obvious here...
How do I go about building a library consisting of sources that are grouped in
separate directories?
hash.c hash_bigkey.c live in ../hash; bt_close.c bt_conv.c live in ../btree. I tried
SRC1 = hash.c hash_bigkey.c ;
SRC2 = bt_close.c bt_conv.c ;
SEARCH_SOURCE = ../hash ../btree ;
Library libdb.a : $(SRC1) $(SRC2) $(SRC3) $(SRC4) $(SRC5) $(MISC) ;
This works but is ugly because I know that hash.c lives in ../hash, not in
../btree. I thought it would be possible to say something like
SEARCH_SOURCE on $(SRC1) = ../hash ;
SEARCH_SOURCE on $(SRC2) = ../btree ;
but that doesn't work: jam says
don't know how to make hash.c
Subject: RE: SEARCH_SOURCE question
Date: Wed, 16 Aug 2000 11:33:52 -0700
I think you want to use
SEARCH on $(SRC1) = ../hash ;
SEARCH on $(SRC2) = ../btree ;
but I'm pretty sure that this won't work exactly. The Library rule calls the
Objects rule which in turn sets SEARCH on each source file, and you would
need to use gristed versions of $(SRC1), not $(SRC1) itself. So the complete
solution using this technique would need to look like:
Library libdb.a : $(SRC1) $(SRC2) $(SRC3) $(SRC4) $(SRC5) $(MISC) ;
local s ;
makeGristedName s : $(SRC1) ;
SEARCH on $(s) = ../hash ;
makeGristedName s : $(SRC2) ;
SEARCH on $(s) = ../btree ;
It's important that the "SEARCH on" appear after the Library rule, since
the Library rule sets SEARCH itself, which would wipe out any "SEARCH on"
settings made prior to the Library rule.
Another thing that you should be able to do is this:
SEARCH_SOURCE = ../hash ;
Library libdb.a : $(SRC1) ;
SEARCH_SOURCE = ../btree ;
Library libdb.a : $(SRC2) ;
I haven't tried either of these, so you'll need to try these and figure out
if they do what you want and which approach will satisfy your needs.
Subject: RE: SEARCH_SOURCE question
Date: Wed, 16 Aug 2000 21:29:49 -0700
This example won't work for 2 reasons:
1 - $(SRC1) is the raw source file names and doesn't include grist.
2 - The SEARCH on got wiped out by the SEARCH on that happens through the
Library rule.
If you call SubDir, then grist (which is derived from the directory
information passed to SubDir) is added to source files and object files. If
you don't call SubDir then the grist is empty and you'd get the same results
with or without grist being applied. Since many people use SubDir, I always
assume that it's being used and code accordingly. In the examples below,
I've assumed that SubDir (and hence grist) is being used.
When you say
Library libdb.a : $(SRC1) ;
this is the same as saying
Library libdb.a : hash.c hash_bigkey.c ;
The Library rule calls LibraryFromObjects with $(<) and $(>:S=$(SUFOBJ)),
which is effectively:
LibraryFromObjects libdb.a : hash.obj hash_bigkey.obj ;
The Library rule calls Objects with $(>) so this is effectively:
Objects hash.c hash_bigkey.c ;
Objects makes gristed names and calls Object for each one, so you'll get:
Object <Dir1!Dir2>hash.obj : <Dir1!Dir2>hash.c ;
Object <Dir1!Dir2>hash_bigkey.obj : <Dir1!Dir2>hash_bigkey.c ;
Object executes
SEARCH on $(>) = $(SEARCH_LOCATE) ;
so in effect it's doing:
SEARCH on <Dir1!Dir2>hash.c = ../hash ;
LibraryFromObjects adds grist to $(>), and for the simple case you wind up
with libdb.a depending on <Dir1!Dir2>hash.obj and
<Dir1!Dir2>hash_bigkey.obj.
<Dir1!Dir2>hash.obj depends on <Dir1!Dir2>hash.c
So the dependency tree is built up using the gristed names. You can see this
quite clearly if you run
jam -d6 | grep Depends
Saying
SEARCH on hash.c = blah ;
is quite different from saying
SEARCH on <Dir1!Dir2>hash.c = blah ;
hash.c and <Dir1!Dir2>hash.c are two completely different targets.
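Concretely, with SubDir in use the setting has to name the gristed target. A sketch (Dir1 and Dir2 stand in for whatever tokens were passed to SubDir; the second form assumes your Jambase sets SOURCE_GRIST in SubDir, which not every Jambase does):
SEARCH on <Dir1!Dir2>hash.c = ../hash ;
# or let jam attach the grist for you:
SEARCH on $(SRC1:G=$(SOURCE_GRIST)) = ../hash ;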
This will only work in the situation where grist is not used (i.e. you don't
call SubDir); if you do call SubDir, it won't work. When you don't call
SubDir, gristed names are the same as ungristed names.
This is crucial in jam, since unlike make, jam does everything when it's
executed, and nothing is deferred.
I'm glad I could help. Examples do help. I've also learned a lot by trying to
figure out other people's problems. I find the act of working through the
problem always gives me a new perspective, and allows me to learn more about
how the program works.
Date: Sun, 20 Aug 2000 20:35:51 -0500
From: Eric Scouten <eric@scouten.com>
Subject: Emacs syntax coloring for Jamfiles
For those of you who are in the habit of using Emacs to edit your Jamfiles,
you might want to take a look at jam-mode.el which I just submitted to
public.perforce.com (see //guest/eric_scouten/jam-mode/...).
It does a reasonably good job of syntax-coloring Jam files. It's my first
significant attempt at Emacs-lisp coding, so let me know if there are problems.
From: Jos Backus <josb@microsoft.com>
Subject: RE: SEARCH_SOURCE question
Date: Wed, 16 Aug 2000 14:44:07 -0700
...but this doesn't work. In fact, it was the first thing I tried. Then I
read about the Library rule setting
SEARCH to SEARCH_SOURCE and tried that. I still don't grasp why it doesn't
work. It would be so OO-ish :)
Forgive my ignorance, but why is this so?
This works. Interestingly, this also works:
Library libdb.a : $(SRC1) $(SRC2) ;
SEARCH on $(SRC1) = ../hash ;
SEARCH on $(SRC2) = ../btree ;
So what's important is _when_ the SEARCH (attribute) is bound to the files.
Yes, this one works as well.
IMO, it's examples like these that are really helpful in understanding and using Jam.
From: Jos Backus <josb@microsoft.com>
Subject: RE: SEARCH_SOURCE question
Date: Thu, 17 Aug 2000 10:52:14 -0700
OK, this bites you in the SubDir case because the names become gristed.
I still don't quite understand why
SEARCH_SOURCE on $(SRC1) = ../hash ;
SEARCH_SOURCE on $(SRC2) = ../btree ;
Library libdb.a : $(SRC1) $(SRC2) ;
doesn't work, as the Object rule (called from Library -> LibraryFromObjects
-> Objects)
does use SEARCH_SOURCE to set SEARCH on each target, right?
I understand that when using SubDir you would need to set SEARCH_SOURCE
on the gristed names, but in this case LibraryFromObjects wouldn't do
any gristing because there is none (because I'm not using SubDir).
Surely you mean SEARCH_SOURCE here.
Date: Sun, 20 Aug 2000 20:33:05 -0500
From: Eric Scouten <eric@scouten.com>
Subject: Re: Regular expression variable editing
Hello... I've just submitted Paul Haffenden's regexp mods to
public.perforce.com. Paul wrote that he had tested on Linux and Solaris
only; I was able to confirm that it works on Windows NT as well. Perhaps
others could comment on other platforms...
For now, you can read this version from //guest/eric_scouten/jam/...
From: "Thorsten Schiller" <tschiller@ifhl.com>
Date: Mon, 28 Aug 2000 16:21:31 -0400
Subject: v2.2.5 deleting source files
I'm new to JAM and will apologize in advance if this issue has been
discussed before. While I have visited the archive, I'm stuck on a very
slow link right now and couldn't search through all the old messages.
I grabbed the most current jam build (2.2.5) and set up a little test
directory just to ensure that I got the basics right. I didn't, of course.
I made a dumb typo and, while the mistake is entirely my own, jam's
behaviour of deleting targets (RELNOTES for 2.2.5) may need some ...
fine-tuning.
It turns out that if you fail to leave a space between the target name and
the colon, jam may end up deleting your source code. I gather this is
because jam sees the source files as additional targets that need to be
deleted when the build fails?
I understand that the parsing is kept simple on purpose to make jam as
flexible as possible, however, anything that can result in the accidental
destruction of source code likely warrants some form of warning mechanism
(even a flag that I need to manually enable that reports that one of my
targets contains a colon would be perfectly adequate). I'm running linux
with gcc 2.95.3. I've included steps to recreate my result:
$ jam -v
Jam/MR Version 2.2.5. Copyright 1993, 1999 Christopher Seiwald.
$ echo blah >test1.c
$ echo blah >test2.c
$ cat Jamfile
Main test: test1.c test2.c ;
$ jam
warning: no targets on rule Objects
...found 10 target(s)...
...updating 1 target(s)...
Link test: test1.c test2.c
test1.c:1: unterminated character constant
test2.c:2: unterminated character constant
cc -o test: test1.c test2.c
...failed Link test: test1.c test2.c ...
...removing test1.c
...removing test2.c
...failed updating 1 target(s)...
From: john@nanaon-sha.co.jp (John Belmonte)
Date: Thu, 31 Aug 2000 21:25:55 +0900
Subject: header search order & the current directory
If you have a header file in the current directory that has the same name as
a header in your include path (say because you are installing the header
into somewhere in your include path), then the jam header scanning and a
compiler such as gcc will not agree about which header is used. The jam
header scan will use the header in the include path, while gcc will use the
header in the current directory. Of course it can be argued that a project
should never be including a directory that it's installing headers into.
A workaround is to put $(DOT) at the start of your HDRS variable to force
gcc and jam to agree about where the current directory is in search order.
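In Jamfile terms, the workaround amounts to this (a sketch; myprog and main.c are made-up names):
SubDir TOP src ;
HDRS = $(DOT) $(HDRS) ;
Main myprog : main.c ;
With $(DOT) first in HDRS, jam's header scan considers the current directory before the rest of the include path, matching what gcc does.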
Date: Mon, 04 Sep 2000 17:53:40 +0200
From: Dominic WILLIAMS <d.williams@csee-transport.fr>
Subject: Parallel builds on NT ?
The home page (http://www.perforce.com/jam/jam.html) states among
features that: "On UNIX and NT, jam can do this with multiple, concurrent
processes. " On the other hand, the documentation on the jam command
(http://public.perforce.com/public/jam/src/Jam.html) says that the -j option
(number of concurrent jobs) is "UNIX only".
From: Alexander Nicolaou <anicolao@mud.cgl.uwaterloo.ca>
Subject: Re: Parallel builds on NT ?
Date: Mon, 4 Sep 2000 12:49:29 -0400 (EDT)
I assume -j is implemented for NT, but if you're using MSVC's compiler
you cannot use it. The issue is that to save space and time cl creates
a ".pdb" file which is a database of debugging information for your
program, and only one compile process at a time can update this file;
the second simultaneous compile will fail (just like the irritating
feature that you can't compile while also running the debugger).
With g++ you will not have this problem, but g++ is about twice as
slow as MSVC at compile time, so there is zero gain on a dual
processor machine. (Although g++ accepts a more complete subset of
ANSI C++ than MSVC, particularly in regard to templates.)
From: Martine Habib <mhabib@microsoft.com>
Subject: RE: Parallel builds on NT ?
Date: Mon, 4 Sep 2000 10:16:44 -0700
You can use the -j option, however, if you do not use "compiler pdb files"
(obtained by using the option /Zi), but "linker" pdb files (obtained by
using the /pdb: linker option). The price is that either you get an "all
release" build with limited debug info, or you have to build using /Z7 with
full debug info.
Date: Mon, 4 Sep 2000 16:33:32 -0700 (PDT)
Subject: RE: Parallel builds on NT ?
Just to clarify: you mean that if I want to use the -j option with jam and
MSVC, then I just use /Z7 with cl, right? What are the disadvantages of
using /Z7 vs. /Zi? I presume that you would lose the ability to do
"edit and debug", but I don't tend to use this feature anyway.
From: Martine Habib <mhabib@microsoft.com>
Subject: RE: Parallel builds on NT ?
Date: Mon, 4 Sep 2000 16:52:06 -0700
You also lose incremental linking and compilation. This can still be OK if
your modules are not too large, as you gain quite a lot of speed by building
in parallel. You do, of course, also lose "Edit and Continue".
Date: Tue, 05 Sep 2000 15:21:42 -0400 (EDT)
From: Lee Marzke <lmarzke@kns.com>
Subject: Setting environment variables for compiler
How would you set an environment variable
e.g. GCC_EXEC_PREFIX
in a Jamfile? (on Unix) We have different targets that use
different versions of GCC, and this changes often.
Date: Wed, 06 Sep 2000 13:48:30 +0200
From: Dominic WILLIAMS <d.williams@csee-transport.fr>
Subject: automagically listing source files
Thanks for your useful information on parallel builds on NT. One thing I
can say for Jam is that it has a friendly and competent user group!
With gnu make, I am used to listing source files automatically, e.g.
myprog: $(wildcard *.cpp)
This way, the makefile can be set up once and for all, and does not need
to be modified each time a new source file is added to the project. Is this
sort of thing possible using Jam?
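If your copy of jam provides the GLOB builtin (availability depends on the jam version), something close to the make idiom is possible. A sketch, with myprog a made-up name:
SRCS = [ GLOB . : *.cpp ] ;
Main myprog : $(SRCS) ;
GLOB expands to the matching file names at parse time, so a newly added .cpp file is picked up on the next jam run without touching the Jamfile.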
From: "Sawhney, Davinder" <DSawhney@ciena.com>
Date: Fri, 8 Sep 2000 10:41:58 -0400
Subject: Anyone looking for Jamming job - Maryland-Baltimore area
I am not sure if this is the correct use of this group, but anyone
interested in a consulting or permanent job in the Maryland area can
email me with their resume.
Date: Fri, 8 Sep 2000 12:36:39 -0400
From: "Lex Spoon" <lex@cc.gatech.edu>
Subject: Re: automagically listing source files
In my view, one of the nice things about Jam is that it gets away from
wildcard stuff. What's so hard about:
SourceFile foo.c ;
SourceFile bar.c ;
(etc)
If you can write the code for a file, then you can certainly add a line
to the Jamfile for it. And the really nice thing is, you can say what
*kind* of source file it is, even if the filename doesn't help:
ExtraOptimizedSourceFile zap.c ;
Furthermore, if you have any automatically generated *.c files, they
won't get picked up.
So anyway, you *can* do things like what you describe, but I actually
like having things listed out, better.
Date: Fri, 8 Sep 2000 12:33:43 -0400
From: "Lex Spoon" <lex@cc.gatech.edu>
Subject: Re: Setting environment variables for compiler
What does this have to do with environment variables, exactly? You can
do things like:
CC on foo.o = /bin/gcc ;
Then the regular Jam rules will use that version of CC just for foo.o.
Date: Fri, 08 Sep 2000 14:57:26 -0400 (EDT)
From: Lee Marzke <lmarzke@kns.com>
Subject: Re: Setting environment variables for compiler
According to the manual, variables are not exported to the shell.
See the excerpt below:
Jam/MR variables are not re-exported to the shell that executes the updating
actions, but the updating actions can reference
Jam/MR variables with $(variable).
Unfortunately I need to export the variable.
Date: Fri, 8 Sep 2000 13:10:26 -0700
Subject: Re: Setting environment variables for compiler
From: Matt Armstrong <matt@corp.phone.com>
Since this is unix only, maybe try
CC = env GCC_EXEC_PREFIX=/whatever /wherever/this/one/is/gcc ;
You'll end up with long command lines, but unix tends to handle that okay.
Date: Fri, 8 Sep 2000 17:41:52 -0400
From: "Lex Spoon" <lex@cc.gatech.edu>
Subject: Re: Setting environment variables for compiler
Surely you can export variables explicitly by changing commands like this:
mycommand myargs
to this:
FOO=$FOO mycommand myargs
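The same trick fits into a jam actions block, since on Unix the action text is handed to the shell. A sketch (the Cc action and the prefix value here are illustrative, not the stock Jambase definitions):
GCC_EXEC_PREFIX = /usr/local/gcc-2.95/lib/gcc-lib/ ;
actions Cc {
GCC_EXEC_PREFIX=$(GCC_EXEC_PREFIX) $(CC) -c $(CCFLAGS) -o $(<) $(>)
}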
Date: Thu, 14 Sep 2000 08:56:01 +0200
From: Dominic WILLIAMS <d.williams@csee-transport.fr>
Subject: Newbie: simplest system for release/debug variants ?
I have a simple jam setup for a project with a complex directory
structure, using SubDir and SubInclude. Everything (obj, lib, exe) gets
built in the source directories.
Now I need to add release/debug variants. I am happy to build these
separately (invoking jam with -sDEBUG=1, for example), but I don't want
to get the targets mixed up.
What is the simplest way to do this?
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Thu, 14 Sep 2000 14:46:09 +0100
Subject: RE: Newbie: simplest system for release/debug variants ?
You could let the release build in the source directories,
and have the debug files built in a subdirectory of each SubDir.
After the SubDir invocation, just write:
if $(DEBUG) { LOCATE_TARGET = $(SUBDIR)/debug ; }
Date: Thu, 14 Sep 2000 14:52:52 +0200
From: Dominic WILLIAMS <d.williams@csee-transport.fr>
Subject: RE: Newbie: simplest system for release/debug variants ? -REPONSE
Just out of interest, could this be put in some kind of rule, to avoid having
to write it in each Jamfile of the directory tree ?
Date: Thu, 14 Sep 2000 14:58:30 +0200
From: Dominic WILLIAMS <d.williams@csee-transport.fr>
Subject: MAXLINE problem
Jam is producing a command (a final link) that is too long for the
command shell (on NT 4.0).
1) I left MAXLINE in jam.h set to 996 for NT, but this does not seem to be
having any effect.
2) The README that comes with the jam distribution actually mentions
that NT 4.0 is no longer limited to 996. So how come I am having a
problem with this?
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Thu, 14 Sep 2000 15:16:49 +0100
Subject: RE: Newbie: simplest system for release/debug variants
You could create a new SubDir rule (in your Jamrules file):
rule MySubDir {
SubDir $(<) ;
if $(DEBUG) { LOCATE_TARGET = $(SUBDIR)/debug ; }
}
And use this one instead of the standard rule.
Date: Thu, 14 Sep 2000 08:41:45 -0500
From: Eric Scouten <scouten@Adobe.COM>
Subject: RE: Newbie: simplest system for release/debug variants ? -REPONSE
If you look at the definition of SubDir (in Jambase), the first Jamfile
that gets read relies on SubDir to read Jamrules. So if you start jam from
a directory whose Jamfile uses this rule, your build will fail with an
unknown command "MySubDir", because jam hasn't seen the definition in
Jamrules yet. :-(
For this to work, you'd have to add the rule to Jambase and rebuild Jam.
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Thu, 14 Sep 2000 16:43:58 +0100
Subject: Re: MAXLINE problem
The only way is to create a response file, containing the files to link,
and use it in the link command:
actions Link bind NEEDLIBS IMPLIB {
echo $(>) > $(<:S=.tmp)
echo $(LINKLIBS) >> $(<:S=.tmp)
echo $(NEEDLIBS) >> $(<:S=.tmp)
$(LINK) $(LINKFLAGS) /out:$(<) $(UNDEFS) @$(<:S=.tmp)
}
Following the old make(1) utility, I also hacked Jam to handle diversions:
in actions, all the text between <+ and +> is expanded, written into a
temporary file, and the file name is put on the command line:
actions Link bind NEEDLIBS {
$(LINK) $(LINKFLAGS) /out:$(<) $(UNDEFS) @<+$(>) $(NEEDLIBS) $(LINKLIBS)+>
}
I suppose it would be useful to put this in the Public Depot,
if there is a general agreement on the <+ +> syntax;
Unfortunately I made lots of other changes in Jam since the original 2.1.5,
and it would require some work to integrate the changes back to the trunk.
Date: Thu, 14 Sep 2000 10:16:23 -0500 (CDT)
Subject: Re: MAXLINE problem
Here is our solution to the line too long problem:
# Notice the solution for the line too long problem
# create a file for the items, and use this trick, courtesy
# of Laura Wingerd to output the items
# This works due to the mix 'n match composition of macros
# by jam, each item in the extraobjects or $(>) is concatenated
# with the rest of the line. The period after the echo is
# ignored, I guess, but serves to make the whole thing one
# string. The newline macro splits it into individual lines.
actions vLink bind NEEDLIBS EXTRAOBJECTS {
copy nul: linkobjs.txt
echo.$(>)>>linkobjs.txt$(NEWLINE)
}
This will work no matter how big the $(>) or $(LINKLIBS) macros get.
NEWLINE macro looks like:
NEWLINE = "
" ; # used to break up long lines for echo to a file
From: Karl Klashinsky <klash@cisco.com>
Subject: Re: MAXLINE problem
Date: Thu, 14 Sep 2000 09:35:35 -0700
Can you copy/paste the exact error message?
FYI, we had a similar snag here on our Solaris boxes. Turned out that
a faulty heuristic in the command-line "chunking" code would make a
guess at how many target/filenames could be tacked onto a single
command line. But that guess would cause the command buffer to overflow.
When Mark Baushke was still with us, he fixed this code to do a
"backtrack and try again" approach. His patch was submitted to the
official Jam depot @ perforce, but the patch has never been included
in an official jam release. You might want to retrieve that patch
from the jam depot and try it out.
[We've been using the patch in production for approx. a year now, with no problems.]
Date: Thu, 14 Sep 2000 11:37:37 -0700 (PDT)
Subject: Re: MAXLINE problem
Are you sure it's NT that the line's too long for? Maybe it's too long for
Jam because of what you have MAXLINE set to? I have mine (on NT) set to
32768, with no problem. (I don't remember now whether I chose that as just
a reasonably large number or because I thought that was NT's limit. You
might want to give it a try and see what happens -- maybe even crank it up
higher and see what it does.)
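For reference, the change being discussed is a one-line edit to jam.h followed by rebuilding jam (a sketch against the 2.x source layout; 32768 is just the reasonably large value mentioned above):
# define MAXLINE 32768 /* maximum length of a command line; was 996 */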
Date: Fri, 15 Sep 2000 10:19:16 +0200
From: Dominic WILLIAMS <d.williams@csee-transport.fr>
Subject: Re: MAXLINE problem
Well, I am fairly sure it's NT for two reasons:
1- the error message is in French (my NT is in French, but I don't think
Jam has French error messages ;-)
2- if I open an MS-DOS window and type an extra long command by
hand, it stops after about 2000 (2048?) characters. I just can't type any
more; the keyboard stops responding.
What makes people say that this limit no longer exists on NT 4.0? Can I
make it go away? Is it a problem with the French version of NT?
What is the difference between these? In Jambase I found an "actions
Link bind NEEDLIBS", without IMPLIB, and found no "actions vLink...".
Should I modify the "actions Link..." in Jambase, or add one of these?
Date: Fri, 15 Sep 2000 14:09:34 -0500 (CDT)
Subject: Re: MAXLINE problem
Oh, the vLink is our own link rule. The important difference is that
no matter how big the jam macro gets, it will be properly echoed into
the response file (unless jam cannot handle the length of it!).
so, if $(>) = a.o b.o c.o d.o e.o f.o etc. ;
then
copy nul: linkobjs.txt # this creates an empty linkobjs.txt file
echo.$(>)>>linkobjs.txt$(NEWLINE)
This line is expanded by jam to look like:
echo.a.o>>linkobjs.txt
echo.b.o>>linkobjs.txt
echo.c.o>>linkobjs.txt
echo.d.o>>linkobjs.txt
so that each item in the macro gets its own echo line. This is
important if the macro gets really big, with lots of items, otherwise
you can hit the dos line length limit.
From: "Paul Moore" <gustav@morpheus.demon.co.uk>
Subject: RE: MAXLINE problem
Date: Fri, 15 Sep 2000 20:20:02 +0100
The OS itself has no discernible limit (I've started commands with 3 Mbyte
command lines using the CreateProcess API - this was under Windows 95 as
well, so the limit doesn't even exist there). However, the command
interpreters have limits - COMMAND.COM is a pathetic 256 bytes, IIRC. I'm
not sure what CMD.EXE's limit is - it may not be documented. JP Software's
4NT.EXE has a limit of 1023 bytes.
It would be nice if Jam used the OS (CreateProcess) when it doesn't need
shell services (such as redirection). I don't know if it does, though...
Date: 16 Sep 2000 01:10:00 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: RE: Newbie: simplest system for
Why can't the if statement be added to Jamrules itself?
Date: Sat, 16 Sep 2000 08:38:12 -0700 (PDT)
Subject: RE: MAXLINE problem
Actually, even if jam does need redirection services,
they can be easily implemented without resorting to CMD.EXE.
Date: Sat, 16 Sep 2000 11:11:10 -0500
From: Eric Scouten <scouten@Adobe.COM>
Subject: RE: Newbie: simplest system for
Good question. Jamrules gets read once and only once, regardless of how
many subdirectories you have. $(SUBDIR) changes per-SubDir invocation.
There might be some sort of skanky hack involving overriding SubDir in
Jamrules... I haven't tried that approach (yet).
Date: Thu, 7 Sep 2000 18:56:58 -0700 (PDT)
Subject: Multiple targets with JAM/MR
I was wondering if somebody could point me
to an example Jamfile which can build multiple
targets. I was primarily interested in
having Jam produce different libraries into
different directories from the same C code
based on compiler options (eg -g for a debug
build, -O2 for a performance build etc).
PS: I would have looked at the email archives before asking, but they seem to be missing
a global search index.
From: john@nanaon-sha.co.jp (John Belmonte)
Subject: Re: Multiple targets with JAM/MR
Date: Tue, 19 Sep 2000 10:47:51 +0900
Jam is not too pretty about this kind of thing. I'd recommend the post
"multiple target platforms?" from Aug 3, 2000.
From: "Michael O'Brien" <mobrien@pixar.com>
Date: Tue, 19 Sep 2000 09:43:41 -0700
Subject: Re: Multiple targets with JAM/MR
I've been using the grist for the file to allow multiple targets. Essentially,
I just make two targets that bind to two different object files. All generated
files need to have this duality (for example, we have some lex stuff).
So far, it's worked great. For reference, check out the makeGristedName rule in Jambase.
From: Grant_Glouser@palm.com
Date: Tue, 19 Sep 2000 15:37:09 -0700
Subject: Newbie: simplest system for
My solution to this is to add a "hook" to the SubDir rule. This
requires changing the Jambase and rebuilding Jam - but only once!
After that you can change the behavior of SubDir without rebuilding
Jam every time.
At the end of SubDir in the Jambase, I add this:
# invoke SubDirHook for customization by Jamrules
SubDirHook $(<) ;
Then add an empty SubDirHook to the Jambase (this prevents Jam
from complaining if you decide not to use a SubDirHook in your Jamrules):
rule SubDirHook {
# Override this rule in the Jamrules
}
Notice that the SubDirHook is invoked *after* reading Jamrules.
So, the new SubDirHook rule in your Jamrules will always be
used, even in the first call to SubDir.
In the Jamrules, in this example, you could put a SubDirHook like this:
rule SubDirHook {
if $(DEBUG) { LOCATE_TARGET = $(SUBDIR)/debug ; }
}
This technique (hooks that can be overridden in Jamrules) could
be used in other places, but SubDir is where I've found it most useful.
From: Grant_Glouser@palm.com
Date: Tue, 19 Sep 2000 17:23:05 -0700
Subject: Re: Newbie: simplest system for
Someone else explained this in another message, but I will try to explain it
again. Short answer: SubDir is a special case when it comes to redefining
standard rules.
The problem with redefining SubDir in the Jamrules is that Jamrules is included
by SubDir. The first time SubDir is invoked, it will *always* use the SubDir in
the Jambase because you have not had the opportunity to redefine it. A new
SubDir in the Jamrules would take effect on subsequent invocations of SubDir,
but the first one will always be wrong (a real problem when you try to build
from a leaf directory in the hierarchy). You could redefine SubDir in every
Jamfile, but that is not a good solution.
SubDirHook gets around this by invoking another rule after you have had the
chance to override it. This way, it affects all uses of SubDir, including the first.
A better question might be: Why bother adding this hook mechanism when you can
just change the Jambase? I'm sure many sites have custom Jambases anyway, so
why not just change the Jambase and rebuild Jam when you need to? My only
answer is that it is a matter of choice and style - development style and usage
style. SubDirHook is another option, which I have found useful because I prefer
changing the Jamrules to changing the Jambase and rebuilding Jam.
Date: Tue, 19 Sep 2000 16:02:42 -0700
From: Iain McClatchie <iain@10xinc.com>
Subject: Re: Newbie: simplest system for
Grant> My solution to this is to add a "hook" to the SubDir rule.
Grant> This requires changing the Jambase and rebuilding Jam - but
Grant> only once!
Hmm. You can redefine SubDir, or any other rule, in any jamfile
(typically Jamrules). Why bother recompiling jam? That just adds
more dependencies, potentially circular.
From: "Ivetta Estrin" <ivetta@schema.com>
Date: Thu, 21 Sep 2000 15:27:57 +0200
Subject: PDB file
In my Jamrules file I want to write the following rule, which should create
a PDB file, named after the target, in the target directory. How can I tell
jam to use the target name for the PDB file name on every build?
I think it should be something like this:
C++FLAGS += -Fd$(ALL_LOCATE_TARGET)$(SLASH)<target_name>.pdb ;
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Thu, 21 Sep 2000 18:40:45 +0100
Subject: RE: PDB file
You could rewrite the C++ rule and add
C++FLAGS += -Fd$(<:R=$(LOCATE_TARGET):S=.pdb) ;
But it's much better to create your own rule:
rule MainWithPdb {
Main $(<) : $(>) ;
local i s ;
# Add grist to file names
makeGristedName s : $(>) ;
for i in $(s) {
C++FLAGS on $(i:S=$(SUFOBJ)) += -Fd$(i:R=$(LOCATE_TARGET):S=.pdb) ;
}
}
Then replace each call to the Main rule with MainWithPdb!
From: Alfred Landrum <alandrum@s8.com>
Date: Thu, 21 Sep 2000 11:54:32 -0700
Subject: How to set per-target link flags
I'm looking for a Link version of "ObjectC++Flags", so that
Main badprogram : badprogram.cpp ;
could be made to link with a compiled version of libefence.a.
I see the variable LINKFLAGS; is there a target-specific way of setting it?
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Fri, 22 Sep 2000 14:29:01 +0100
Subject: Re: How to set per-target link flags
You can use
LinkLibraries badprogram : libefence ;
This is roughly the same as setting the LINKFLAGS variable
for this target:
LINKFLAGS on badprogram += libefence.a ;
But the latter is not portable.
From: "Temesgen Habtemariam" <temesgen@nxnetworks.com>
Date: Wed, 11 Oct 2000 18:45:43 -0500
Subject: using c-shell on NT.
I am trying to get around the command line length limitation of cmd on NT.
I have been able to execute longer compile lines on the NT C shell I got
from the MKS Unix toolkit. I was thinking of making changes to jam so that it
executes all actions on NT using the C shell instead of cmd. Does this sound
like something that can be done? I was wondering if someone out there has
done something similar...
From: "Ivetta Estrin" <ivetta@schema.com>
Date: Wed, 18 Oct 2000 11:13:03 +0200
Subject: ResourceCompiler rule
I need some help.
I want to write a rule that compiles *.rc files into *.res files.
My Jamfile is:
Main ST : STMain.cpp ;
LibraryFromObjects ST : STMain.res ;
Object STMain.res : STMain.rc ;
My Jamrules looks like this:
rule UserObject {
switch $(>:S) {
case .rc : ResourceCompiler $(>:S=.res) : $(>) ;
case * : ECHO "unknown suffix on" $(>) ;
}
}
rule ResourceCompiler {
ECHO $(<) $(>) ;
Depends $(<) : $(>) ;
Clean clean : $(<) ;
}
actions ResourceCompiler {
ECHO "I am in action" ;
rc /D _AFXDLL /fo $(<) $(RCFLAGS) $(>)
}
But it doesn't work. I receive the following message:
"Don't know how to make STMain.res"
Where is my mistake? Has anybody managed to compile .rc files into .res files?
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Wed, 18 Oct 2000 13:45:49 +0100
Subject: RE: ResourceCompiler rule
You should not use the UserObject rule, which only builds *.obj files.
For resources, I use the following rules:
# Resource : builds a resource file
#
rule Resource {
SEARCH on $(>) = $(SEARCH_SOURCE) ;
MakeLocate $(<) : $(LOCATE_TARGET) ;
Depends $(<) : $(>) ;
Clean clean : $(<) ;
RCFLAGS on $(<) = $(RCFLAGS) /d$(RCDEFINES) ;
}
actions Resource {
RC $(RCFLAGS) /Fo$(<) $(>)
}
# LinkResource : Links the resource file into an executable
#
rule LinkResource {
local t r ;
if $(<:S) { t = $(<) ;
} else { t = $(<:S=$(SUFEXE)) ; }
r = $(>:S=.res) ;
Depends $(t) : $(r) ;
NEEDLIBS on $(t) += $(r) ;
}
Then I write something like:
Main program : program.c ;
Resource resource.res : resource.rc ;
LinkResource program : resource.res ;
Date: Thu, 26 Oct 2000 16:04:34 +0100
From: "Niklaus Giger" <n.giger@netstal.com>
Subject: Compile Problem on Windows NT
I would like to test whether jam is a better alternative to make, as our
make problems get tougher and tougher to solve. But I am having trouble
getting jam working on NT.
We are using NT as a cross development platform for embedded PPC.
Therefore I just have a working GNU-Cross-Compiler and a cygwin (1.0)
native compiler I rarely use.
Neither make nor build.bat works. Could somebody mail me a working
jam.exe directly, or provide me with a working makefile?
From: "Ivetta Estrin" <ivetta@schema.com>
Date: Mon, 30 Oct 2000 14:37:17 +0200
Subject: build library from two source with the same names
I need to build a library, for example guess.lib, from files that have the
same name but different locations,
for example:
guess\eval\hash\of\int2.cpp
guess\prd\vec\plt\int2.cpp
gess\sol\vec\int2.cpp
In my Jamrules I use the following:
ALL_LOCATE_TARGET = $(TOP)$(SLASH)lib.$(OS)$(SLASH)debug ;
if $(NT) {
OPTIM = -Zi ;
LINKFLAGS += /debug ;
C++FLAGS += -MDd ;
rule Library {
local name browse ;
LibraryFromObjects $(<) : $(>:S=$(SUFOBJ)) ;
for name in $(>) {
ObjectC++Flags $(name)
: -Fr$(ALL_LOCATE_TARGET)$(SLASH)$(name:S=.sbr) -Fd$(ALL_LOCATE_TARGET)$(SLASH)$(<:S=.pdb) -D_DEBUG -MDd ;
}
Objects $(>) ;
}
}
In this case all object files are saved in one directory, and every new int2.obj
overwrites the previous one.
How can I ask jam to mirror my source directory tree in the target directory,
and then build the library from the target object files?
From: Behrad Mehraie <Behrad_Mehraie@creoscitex.com>
Subject: RE: build library from two source with the same names
Date: Mon, 30 Oct 2000 08:27:32 -0800
To fix this problem, you have to use grists. I had the same issue.
I have attached a sample project that I was working on for you.
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Tue, 31 Oct 2000 10:51:47 +0100
Subject: RE: build library from two source with the same names
Using grists is not enough with libraries:
Jam cannot handle 2 files with the same name in the same library.
That is, when scanning the .lib file, all the targets will have the same name
guess.lib(int2.obj)
I think this comes from libraries in Unix, where libraries don't store the
directory names of their members.
As NT libraries do store the complete path of the .OBJ files, you may try
to hack Jam this way:
=======================
in filent.c:
replace:
- /* strip leading directory names, an NT specialty */
-
- if( c = strrchr( name, '/' ) )
- name = c + 1;
- if( c = strrchr( name, '\\' ) )
- name = c + 1;
-
- sprintf( buf, "%s(%.*s)", archive, endname - name, name );
- (*func)( buf, 1 /* time valid */, (time_t)lar_date );
by:
+ sprintf( buf, "%s(%.*s)", archive, endname - name, name );
+ for(c = buf; *c; c++)
+ if( *c == '\\' )
+ *c = '/';
+
+ (*func)( buf, 1 /* time valid */, (time_t)lar_date );
======================
in Jambase, rule LibraryFromObjects:
replace:
- local i l s s2 ;
-
- # Add grist to file names
-
- makeGristedName s : $(>) ;
-
by:
+ local i l s s2 ;
+
+ # Add grist to file names
+
+ makeGristedName s : $(>) ;
+
+ # NT use full path member names
+ if $(MSVCNT)
+ {
+ s2 = $(s:R=$(LOCATE_TARGET)) ;
+ }
+ else
+ {
+ s2 = $(>:BS) ;
+ }
+
replace:
- if ! $(l:D)
- {
- MakeLocate $(l) $(l)($(s:BS)) : $(LOCATE_TARGET) ;
- }
by:
+ if ! $(l:D)
+ {
+ MakeLocate $(l) $(l)($(s2:G=)) : $(LOCATE_TARGET) ;
+ }
replace:
- # If we can scan the library, we make the library depend
- # on its members and each member depend on the on-disk
- # object file.
-
- Depends $(l) : $(l)($(s:BS)) ;
-
- for i in $(s)
- {
- Depends $(l)($(i:BS)) : $(i) ;
- }
by:
+ # If we can scan the library, we make the library depend
+ # on its members and each member depend on the on-disk
+ # object file.
+ Depends $(l) : $(l)($(s2:G=)) ;
+ for i in $(s2)
+ {
+ Depends $(l)($(i:G=)) : $(i:GBS) ;
+ }
From: Alfred Landrum <alandrum@s8.com>
Date: Tue, 31 Oct 2000 18:10:38 -0800
Subject: Cross Including Jamfiles - Bad Idea?
[see dir structure below]
If I cd into /top, everything works as planned. But if I cd into
/top/exec, it can't find targets libfoo or libbar.
I want to be able to run jam from /top/exec, and have it find (and
potentially update) libfoo and libbar.
I see two solutions. One, make libfoo and libbar globally defined. Or, I
could try to include libfoo's and libbar's Jamfile from exec's Jamfile.
This will take some work, because I think we'll need to put "header guards"
in the library's Jamfiles.
I don't want to globally define them; it won't scale.
# Example directory structure
/top/exec
/top/libfoo
/top/libbar
# Jamfile in /top/exec
SubDir TOP exec ;
Main exec : main.c ;
LinkLibraries exec : libfoo libbar ;
# Jamfile in /top
SubDir TOP ;
SubInclude TOP exec ;
SubInclude TOP libfoo ;
SubInclude TOP libbar ;
From: Behrad Mehraie <Behrad_Mehraie@creoscitex.com>
Subject: RE: build library from two source with the same names
Date: Tue, 31 Oct 2000 08:56:47 -0800
Sorry, it seems the attachment did not go through
because of the antivirus installed on our server.
So please rename the attached file from .zi1 to .zip and
then unzip it.
From: Behrad Mehraie <Behrad_Mehraie@creoscitex.com>
Subject: RE: build library from two source with the same names
Date: Tue, 31 Oct 2000 08:48:43 -0800
If you open the attached .zip file, you'll see that everything works without
any problem, although all the source files have the same name.
The output files will also have the same names.
Again, I have attached the .zip file. Run jam from the directory that contains
the Jamrules and you'll see what happens.
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Thu, 2 Nov 2000 09:46:02 +0100
Subject: Re: Cross Including Jamfiles - Bad Idea?
You can avoid these "header guards" in the library's Jamfiles
with this trick:
In a sub-Jamfile, the $(s) variable is set to the arguments of
the current invocation of the SubInclude rule (see Jambase).
In your case, you could add this to the Jamfile in /top/exec:
# if we called Jam from this directory, build libraries
if ! $(s) {
SubInclude TOP libfoo ;
SubInclude TOP libbar ;
}
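Putting the trick together, the Jamfile in /top/exec could look something like this sketch:

```jam
SubDir TOP exec ;

Main exec : main.c ;
LinkLibraries exec : libfoo libbar ;

# $(s) is only set when this Jamfile was reached via SubInclude;
# if jam was started in this directory, pull in the library Jamfiles.
if ! $(s) {
    SubInclude TOP libfoo ;
    SubInclude TOP libbar ;
}
```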
Date: 6 Nov 2000 23:05:03 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: 1 file to two dependencies
I have an object file that needs to go into two libraries. How can I do
that if both libraries and the object file are in the same directory?
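One possible approach (a sketch with illustrative names, assuming the stock Jambase Objects and LibraryFromObjects rules) is to compile the object once and archive it into both libraries:

```jam
# Compile shared.c once into shared$(SUFOBJ) ...
Objects shared.c ;

# ... then put the same object file into two libraries.
LibraryFromObjects libone$(SUFLIB) : shared$(SUFOBJ) ;
LibraryFromObjects libtwo$(SUFLIB) : shared$(SUFOBJ) ;
```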
Date: Mon, 06 Nov 2000 17:35:05 +0100
From: "Niklaus Giger" <n.giger@netstal.com>
Subject: Command line limit on WindowsNT
It took me about a day to convert my vxWorks makefiles to jam, which is not too bad.
Using this for some of our real projects, I ran into the infamous command line length
limit of Windows NT.
I was really frustrated, as with the cygwin make I never had this problem.
After unsuccessfully fiddling around to circumvent the problem, I decided to go ahead
and merge in the solution the cygwin make utility uses to exec commands. This took
me another day, but now everything is more or less working (no big tests done yet).
I think there are a few problems involving redirection of IO which also plague
the cygwin make utility. Therefore I still want to polish the whole thing a little bit.
Is anybody else interested in a cygwin port?
Is it okay to send the patches and new files to this mailing list?
Do you think there are any problems incorporating these changes, as they come under
the GNU license?
Date: Tue, 7 Nov 2000 21:53:59 -0800 (PST)
Subject: Re: Command line limit on WindowsNT
I'd be interested in your changes, Niklaus. I never
really did understand what the precise problem with
the command line length is, though. This discussion has
been raised before on this mailing list and there
didn't seem to be a definitive answer. How did you
resolve the problem?
From: "Rukun Wei" <Rukun.Wei@sybase.com>
Date: Tue, 28 Nov 2000 11:13:53 -0800
Subject: How to check file status
We need to make sure that "runtime.zip" exists before compiling some
java files. Is there any way to tell jam to check this file exists
before even trying to build?
From: Amaury.FORGEOTDARC@atsm.fr
Subject: Re: How to check file status
Date: Thu, 30 Nov 2000 09:25:51 +0100
You could make it the first target of your build:
rule checkFirst {
Depends first : $(<) ;
ALWAYS runtime.zip ;
}
if $(NT) { actions checkFirst { if not exist $(<) exit 1 } }
if $(UNIX) { actions checkFirst { test -f $(<) } }
checkFirst runtime.zip ;
Then use the -q option of Jam to make it quit on the first failed action.
Date: Wed, 06 Dec 2000 15:20:48 +0100
From: "Niklaus Giger" <n.giger@netstal.com>
Subject: Re: jamming digest, Vol 1 #142 - 2 msgs
I achieved the same result by patching the SubInclude rule like this:
rule SubInclude {
local s ;
# That's
# SubInclude TOP d1 [ d2 [ d3 [ d4 ] ] ]
#
# to include a subdirectory's Jamfile.
if ! $($(<[1])) {
EXIT Top level of source tree has not been set with $(<[1]) ;
}
makeDirName s : $(<[2-]) ;
#ECHO "SubInclude $(s) -> $($(s)-SubIncluded) " ;
if ! $($(s)-SubIncluded) {
# Gated entry.
$(s[1])-SubIncluded = TRUE ;
# ECHO SubIncludexy $(JAMFILE:D=$(s):R=$($(<[1]))) ;
include $(JAMFILE:D=$(s):R=$($(<[1]))) ;
# ECHO "End of SubInclude $(s) -> $($(s)-SubIncluded) " ;
} else {
#ECHO "Skipping as already included $(s[1])-included " ;
}
}
Personally I prefer to modify the rule, as everybody will get the correct result.
Date: Wed, 06 Dec 2000 15:18:47 +0100
From: "Niklaus Giger" <n.giger@netstal.com>
Subject: Jam for WindowsNT (cygwin)
I successfully compiled Jam under cygwin (Version 1.1x (Beta)).
As far as I have tested it in my environment, there is no limit to the command
line length (> 15 kB), nor any other apparent bug.
I would appreciate it if anybody would cross-check this readme and tell me
whether they also managed to install it on their machine.
Date: Thu, 7 Dec 2000 13:08:18 +0200
Subject: DEPENDENCE
I have the following situation: two different libraries use the same header file,
which was changed.
I want the linking of one library to also trigger the linking of all libraries
that use the same changed header file.
From: "Ivetta Estrin" <ivetta@schema.com>
Date: Tue, 26 Dec 2000 16:22:02 +0200
Subject: recompilation
Every time I run a build, jam compiles all files again even though
nothing was changed.
I think there should be an option that causes it to compile only changed
files and not touch files that were not changed since the last build.
Date: Fri, 5 Jan 2001 12:07:27 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: ANNOUNCE: jam 2.3 available at www.perforce.com/jam/jam.html
After 3 years, jam 2.3 is out to replace jam 2.2.
This is not a major release, but includes a number of user-contributed
changes from the Perforce Public Depot, as well as bug fixing and
enhancements done at Perforce Software.
The complete release notes are at
http://public.perforce.com/public/jam/src/RELNOTES
The highlights are:
Jam code is now ANSI C, so it can be compiled with a C++
compiler, but no longer with a K&R compiler.
Experimental support for rules returning values.
Miscellaneous bug fixes.
Lots of porting changes.
This release is being made in anticipation of more aggressive development
of Jam in the next few months. I wanted to get what we had out so that
any user-contributed development can then be more easily merged.
The starting page for Jam information is:
http://www.perforce.com/jam/jam.html
From: Alfred Landrum <alandrum@s8.com>
Subject: RE: ANNOUNCE: jam 2.3 available at www.perforce.com/jam/jam.html
Date: Fri, 5 Jan 2001 13:39:09 -0800
We (Scale8) are shortly going to move to Jam.
I was wondering what the wish list is for jam? You mention aggressive
development; what new features are you planning?
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: ANNOUNCE: jam 2.3 available at www.perforce.com/jam
Date: Fri, 5 Jan 2001 14:10:21 -0800
I'm happy that development has restarted on Jam.
I have a few questions about the just announced release ...
I did not see in the release notes that my
patch to handle the new AIX 3.4 ar format was applied.
I also did not see any mention of the regular expression stuff.
The problem with LOCATE on libraries and library members
(they must be the same or Jam fails) was not mentioned.
Was the bug about grouping fixed:
TESTVAR = Hello ;
if $(TESTVAR) && ($(TESTVAR) != $(TESTVAR)) { }
Do we (the Jam users at large) need to resubmit
which patches/bugs/etc.?
Finally, what are these planned enhancements, and
do you need any help?
Date: Sun, 7 Jan 2001 21:30:57 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: RE: ANNOUNCE: jam 2.3 available at www.perforce.com/jam /jam.html
Several have written with comments and questions about the jam 2.3 release.
I should have been more emphatic: jam 2.3 includes only SOME of the
user-contributed changes. What got left behind? Anything done after September,
as that is when I last pulled in changes submitted to the public depot;
the variable substitution using regular expressions, as that went too
far in a direction ($(X:do-everything)) I'm trying to re-evaluate; and
some things no doubt I simply missed.
Development of Jam has never really stopped, though it slowed and all
of what was done was kept internal to Perforce. This release was to
push things out the door, so that user contributions can come from
similar code.
Jam does not have a full-time curator. As some of you may know, I have
other responsibilities at Perforce that take my time. We are looking
to hire an open source curator, for both Jam and other projects here.
I still plan to control its direction (which remains lean and mean, in
case you're wondering), but it's the machinations that take considerable
effort.
Still, the best way to get changes into Jam is to submit them to the
public depot. Eventually, I plan to incorporate all changes or send
email explaining why the change is not being incorporated.
As to what's being planned, here are the crib notes:
- a directory scan operation, like
files = [ glob $(SRCDIR) *.c ] ;
That would populate the $(files) variable with the .c
files in $(SRCDIR).
- Better manipulation of variables, moving away from
$(X:do-everything).
- (Big) Allowing UNIX-path target names on all platforms,
which get bound properly in non-unix envs (VMS, OS2,
NT, MAC), so that Jamfiles could be written entirely in
UNIX format.
- An overhaul of the way Jamrules works. Sharing Jamrules
is right now rather clunky.
- Fixing the scanner, so that =, :, and ; don't need
whitespace surrounding them. Whitespace was the sacred
character 8 years ago. Now everyone seems to have
whitespace in file names.
- Splitting defines out of CCFLAGS and C++FLAGS, so that
they can be expressed in a system independent fashion.
I thank you all for your interest and support, and ask your forgiveness for
Jam's somewhat absentee landlord.
From: "Ivetta Estrin" <ivetta@schema.com>
Date: Mon, 8 Jan 2001 16:39:24 +0200
Subject: multi-project build
I have following structure of my projects:
Proj1 -|
|-ProjWatcom
|-Jamfile
|-ProjUnix
|-Jamfile
Proj2- |
|-ProjMSVCNT
|-Jamfile
Tools-
|-Jambase
|-bin.linuxx86
|-jam
|-bin.ntx.86
|-jam
|-Jambase
|-Jamrules
My task is to create the jamfiles in such a way
that I will be able to compile for different environments (Linux, NT, Watcom)
using the same Jambase and Jamrules.
My problem is defining TOP. I know that TOP is the directory that has the Jamrules.
How can I create a pointer from my jamfiles to the TOP directory that
contains my Jamrules?
From: Miklos Fazekas <boga@mac.com>
Date: Mon, 8 Jan 2001 16:12:52 +0100
Subject: jam crash
The attached file causes a crash on Windows NT 2000 and MPW (unmapped memory exception).
(I've tested with 2.3, and that one contains the error too.)
Can anyone reproduce this? I used:
jam -fki.txt
Or is there something illegal with the file?
Date: Mon, 8 Jan 2001 11:03:09 -0600 (CST)
Subject: Re: jam crash
I imagine that it is just generating a macro that is too big. We had to
up the macro string size once...
ECHO "Begin" ;
LFLAGS = "$(LFLAGS) $(alma)$(alma)" ;
ECHO "End" ;
ECHO "Begin" ;
LFLAGS = "$(LFLAGS) $(alma)$(alma)" ;
ECHO "End" ;
EXIT ;
From: Miklos Fazekas <boga@mac.com>
Subject: Re: jam crash
Date: Mon, 8 Jan 2001 18:25:14 +0100
I was not aware of this limit. (Probably 1024, then.)
I see; so instead of generating one long LFLAGS like:
LFLAGS = "$(LFLAGS) $(newflag)" ;
I should build it up as a list:
LFLAGS += $(newflag) ;
From: "Tim Baker" <dbaker@direct.ca>
Subject: Re: ANNOUNCE: jam 2.3 available at www.perforce.com/jam /jam.html
Date: Mon, 8 Jan 2001 17:52:33 -0800
Here's a simple 'glob' command for jam 2.3. The syntax is
glob DIR PATTERN PATTERN ...
The pattern matching uses the builtin glob() function.
I added this to the end of compile.c.
#include "filesys.h"
static LIST *_glob_pat;
static LIST *_glob_list;
/* Callback to file_dirscan() */
static void glob_func(
char *file,
int status,
time_t t ) {
FILENAME f;
LIST *l;
file_parse( file, &f );
f.f_dir.len = 0;
file_build( &f, file, 0 );
    for( l = _glob_pat; l; l = list_next( l ) )
    {
        if( !glob( l->string, file ) ) {
            _glob_list = list_append( _glob_list,
                list_new( L0, newstr( file ) ) );
        }
    }
}
static LIST *
builtin_glob(
PARSE *parse,
LOL *args ) {
/* FIXME: check number of args */
LIST *l = lol_get( args, 0 );
char *dir = l->string;
_glob_pat = list_next( l );
_glob_list = L0;
file_dirscan(dir, glob_func);
return _glob_list;
}
Then add this to the end of compile_builtins():
bindrule( "glob" )->procedure =
parse_make( builtin_glob, P0, P0, P0, C0, C0, 0 );
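With those two additions compiled in, a Jamfile could use the new builtin like this (directory and target names are illustrative):

```jam
# Collect every .c file in the current source directory ...
SRCS = [ glob $(SEARCH_SOURCE) *.c ] ;

# ... and build a program from whatever was found.
Main myprog : $(SRCS) ;
```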
From: Behrad Mehraie <Behrad_Mehraie@creoscitex.com>
Subject: RE: multi-project build
Date: Mon, 8 Jan 2001 12:18:26 -0800
The solution is simple. Just put a Jamfile in the root of your projects, and
the Jamrules in the same place.
The Jamfile in the root must have this line as the first statement:
SubDir TOP ;
It also has to have the loop below to tell jam which folders are included in
the project.
Note that this loop must be at the end of the Jamfile in the project root.
CORE_PROJECTS = Proj1 Proj2 ;
for proj in $(CORE_PROJECTS) {
SubInclude TOP $(proj) ;
}
I have corrected your project tree. I hope it helps.
Jamrules
Jamfile ---> SubDir TOP ;
Proj1 -|
|-ProjWatcom
|-Jamfile ---> SubDir TOP Proj1 ProjWatcom ;
|-ProjUnix
|-Jamfile --> SubDir TOP Proj1 ProjUnix ;
Proj2- |
|-ProjMSVCNT
|-Jamfile --> SubDir TOP Proj2 ProjMSVCNT ;
Tools-
|-Jambase
|-bin.linuxx86
|-jam
|-bin.ntx.86
|-jam
From: Miklos Fazekas <boga@mac.com>
Date: Wed, 10 Jan 2001 17:56:57 +0100
Subject: Multiple Dependants and order of actions.
I have the following jam-file, and the files:
a,b,c,d,e
actions copy { echo "copy $(<) " > $(<) }
rule copy { Depends $(<) : $(>) }
actions copy2 {
echo "copy $(>) " > $(<[1])
echo "copy $(>) " > $(<[2])
}
rule copy2 { Depends $(<) : $(>) ; }
copy a : c ;
copy2 a b : d ;
copy b : e ;
Depends all : b a ;
NOTFILE all ;
If I edit the files 'c' and 'e', the order of actions will be:
copy2 a b : d ;
copy b : e ;
copy a : c ;
That is not the same order as I defined!
I'd expect the order:
copy a : c ;
copy2 a b : d ;
copy b : e ;
Is it a bug in Jam, or is there a missing dependency in my example?
(Adding "Depends b : c ;" won't help.)
From: Amaury.FORGEOTDARC@atsm.fr
Subject: Re: Multiple Dependants and order of actions.
Date: Wed, 10 Jan 2001 19:06:17 +0100
According to your rule "Depends all : b a ;"
you ask Jam to generate b before a,
and that's why the actions having b as target are executed first.
Date: Wed, 10 Jan 2001 12:25:12 -0600 (CST)
Subject: Re: Multiple Dependants and order of actions.
My experience is that outside of dependencies, there is no specific
order in which actions will occur. Logically, if there are no
dependencies, then it does not matter...
My experience with make vs. jam made me realize that lots of dependencies
are left unspecified in make, such that the procedural order of things
in make is important. It is much less important in jam.
Date: Wed, 10 Jan 2001 13:09:09 -0600 (CST)
Subject: Re: Multiple Dependants and order of actions.
well, in that case, just ask it to update that target instead of all of them.
Well, I suppose somebody could do a newest first sort of ordering on top
of equally choosable targets...
From: <boga@mac.com>
Subject: Re: Multiple Dependants and order of actions.
Date: Wed, 10 Jan 2001 21:00:39 -0000
No! I tell jam that target 'all' needs to be updated if either 'a' or 'b'
was updated.
That is the meaning of "Depends all : b a ; NOTFILE all ;", is it not?!
I have a simpler example:
actions copy { cp $(>) $(<) }
rule copy { Depends $(<) : $(>) ; }
actions append { cat $(>) >> $(<) }
rule append { Depends $(<) : $(>) ; }
rule mergefiles {
copy $(<) : $(>[1]) ;
append $(<) : $(>[2]) ;
}
mergefiles a : b c ;
Q1:
Is it true that:
a.) if either 'b' or 'c' is updated, both the 'copy' and the 'append' action
will be executed? (Both, and not just one of them.)
b.) if 'a' is updated, the action 'copy' will be executed first and then
'append'?
And if i change:
rule mergefiles {
copy $(<[1]) : $(>[1]) ;
append $(<) : $(>[2]) ;
}
mergefiles a d : b c ;
Q2:
Is it true that:
a.) if either 'b' or 'c' is updated, both copy and append will be executed?
(Both, and not just one of them.)
b.) if 'a' is updated, the action 'copy' will be executed first and then
'append'?
Date: Wed, 10 Jan 2001 20:04:40 +0100
From: Arnt Gulbrandsen <arnt@trolltech.com>
Subject: Re: Multiple Dependants and order of actions.
Well, logic aside, it matters. I want the file I was just editing to be
compiled first, because that's the file I'm thinking about right now.
Some file has to be the first to be compiled, and the one I'm currently
working on and saved just before I started jam is better than a random
choice. Often not _much_ better, but I personally really hate it when I
'p4 sync' and the next time I compile, I have to wait a minute or two to
see the errors for the file I'm working on.
Date: Wed, 10 Jan 2001 20:20:24 +0100
From: Arnt Gulbrandsen <arnt@trolltech.com>
Subject: Re: Multiple Dependants and order of actions.
Yes, that's exactly what I tried to implement some time ago, but then a
deadline at work came and took me. I'll see if I can do it for 2.3. It's so
good to see a new version of jam.
What I tried was this:
1. The "score" of a target is set to the number of seconds since it has
been modified, plus the number of targets with the same modification time.
2. A target's score is then reduced to the lowest of the scores on which it depends.
3. Targets are built lowest-score-first, to the degree the dependency graph permits.
The addition in step 1 is because things that modify many files aren't me.
I am human and almost always think about one file at a time. Perhaps two.
Things like "p4 sync" work on many.
Date: 11 Jan 2001 05:12:45 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: two results from one action
When compiling a .dll on win2k, the link /dll command will produce two
important files, the .dll and the .lib. The .lib needs to be understood as
"existing" by jam, so my LinkLibraries rule can find it (in the binding
phase) later.
What I need is to improve my dll build rules so that they somehow make the
side-effect files (.lib) available to the binding phase.
I'm not sure if any of that made sense. Any suggestions?
From: Amaury.FORGEOTDARC@atsm.fr
Subject: Re: Multiple Dependants and order of actions.
Date: Thu, 11 Jan 2001 09:38:28 +0100
Yes, it would be a nice feature.
But it's NOT a random choice:
it's the first file appearing in the dependency tree, which is built from the Jamfile.
In Miklos' original Jamfile:
The dependency tree is:
all _____ b _____ d
| \__ e
|
\___ a _____ c
\__ d
Suppose c and e are modified:
- b needs rebuild because e is newer
- a needs rebuild because c is newer
actions having b as target are executed first, so the order:
copy2 a b : d ;
copy b : e ;
copy a : c ;
From: Amaury.FORGEOTDARC@atsm.fr
Subject: Re: Multiple Dependants and order of actions.
Date: Thu, 11 Jan 2001 10:48:20 +0100
Yes, but since 'all' has no associated actions...
NOTFILE only means that the target doesn't exist as a file and has no timestamp.
If either 'b' or 'c' are updated, then 'a' is outdated, and yes, both actions
having 'a' as targets will be executed, in the order they appear in the Jamfile.
The Depends rules are as follows (run jam -d5 to see the rule invocations):
Depends a : b ;
Depends a d : c ;
Then everything depends on the order of the targets 'a' and 'd' in the dependency tree.
1) if 'a' appears before 'd' (add the rule "Depends all : a d ;" or invoke jam as
"jam a d"):
all _____ a _____ b
| \__ c
|
\__ d _____ c
if b or c is updated, then a will be rebuilt first,
and Jam executes all actions having 'a' as target, in the order they appear
in the jamfile.
(When jam comes to the d --> c dependency, the corresponding action is already executed,
so nothing more is done.)
2) if 'd' appears before 'a' (add the rule "Depends all : d a ;" or invoke
jam as
"jam d a"):
all _____ d _____ c
|
\__ a _____ b
\__ c
if 'b' is updated, then all actions having 'a' as target are fired,
in the original order.
BUT if 'c' is updated, the actions having 'd' as target are fired first,
the actions having 'a' as target come after.
so the 'append' command is run before the 'copy' command...
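Putting Miklos' earlier rule together with this ordering, the whole scenario can be sketched as:

```jam
# Reconstructed sketch of the case under discussion ('copy' and
# 'append' are the thread's hypothetical actions, not Jambase rules).
rule mergefiles
{
    copy $(<[1]) : $(>[1]) ;    # action whose target is 'a' only
    append $(<) : $(>[2]) ;     # action whose targets are 'a' and 'd'
}
mergefiles a d : b c ;
Depends all : d a ;  # visiting 'd' first makes 'append' fire before 'copy' when c changes
```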
From: Miklos Fazekas <boga@mac.com>
Subject: Re: Multiple Dependants and order of actions.
Date: Thu, 11 Jan 2001 12:07:33 +0100
OK, maybe I misunderstood Jam.
Then my question is: is
Depends a b : c ;
the same as
Depends a : c ;
Depends b : c ;
I think the first one is:
a
\
*--- c
/
b
The second one is:
a
\
c
/
b
The difference is that the first one implies an update on 'b' any time an
update on 'a' is done. For example, I use the first style of rule for linking:
from one or more sources I link:
- dll and importlib
However they are made at the same time! So if dll needs to be relinked, the
importlib should be updated too. And any targets depending on importlib
should be updated too.
From: Amaury.FORGEOTDARC@atsm.fr
Subject: Re: Multiple Dependants and order of actions.
Date: Thu, 11 Jan 2001 12:33:17 +0100
What is confusing you (I think) is that actions
are independent of dependencies.
is not the same as
because there is one *action* on 2 targets in the first case,
and 2 actions in the second case, which Jam runs separately.
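For example (a sketch reconstructed from the thread's hypothetical 'copy'/'copy2' actions):

```jam
# One action invocation covering two targets:
copy2 a b : c ;     # a single action whose targets are both 'a' and 'b'

# ...is not the same as two independent actions:
copy a : c ;
copy b : c ;        # Jam schedules and runs these separately
```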
From: Arnt Gulbrandsen <arnt@trolltech.com>
Subject: Re: Multiple Dependants and order of actions.
If there isn't a rationale for the action execution order, the order is arbitrary.
I can't think of any rationale for this and didn't see any in the
documentation; can any other jam users?
Date: Thu, 11 Jan 2001 10:14:07 -0600 (CST)
Subject: Re: two results from one action
OK, I know we had this problem too, so I looked around in our
Jamrules (I should know, I put the fix in, but...)
Here is what we did:
# vLink is identical to Link in Jambase, except that it explicitly uses
# $(<[1]) as the output file. This allows us to pass down extra targets that
# get ignored by the action, but makes jam think we're building them.
# This is used to let jam know that Link will build a .lib in addition
# to a .dll when building a .dll. See the vMainFromObjects rule.
From: leon glozman <leonid_g@schema.com>
Date: Mon, 15 Jan 2001 14:14:22 +0200
Subject: build library from two or more sources with the same name
I want to build a library from two or more sources with the same name in a
LINUX environment (we use g++ for compiling and linking), as follows:
For example, I have MyDir/utils/dir1/source.cc, MyDir/utils/dir2/source.cc
and MyDir/utils/dir3/source.cc.
I have Jamrules & Jambase in MyDir. I create the objects and targets (libs &
executables) in MyDir/lib.LINUX.
I want to create the objects as follows:
MyDir/lib.LINUX/utils/dirx/source.o (x is 1-3).
Generally, if my source path is MyDir/utils/srcdir/src_name.cc, it will be
compiled to MyDir/lib.LINUX/utils/srcdir/src_name.o.
When all objects have been created, I want to link the library utils.a from
the objects and put it under MyDir/lib.LINUX/.
From: "Ivetta Estrin" <ivetta@schema.com>
Date: Wed, 17 Jan 2001 10:00:13 +0200
Subject: compile all sources from current directory
I have the following Jamfile:
SubDir TOP dir subdir ;
Library BaseLib : foo1.cc foo2.cc foo3.cc foo4.cc foo5.cc ;
SubInclude TOP dir subdir subdir1 ;
SubInclude TOP dir subdir subdir2 ;
Instead of a list of sources (*.cc) I want to write an expression that will
take all *.cc files in the current directory as sources (something like
$(>:S=.cc)).
Does somebody know how to do it?
Date: Wed, 17 Jan 2001 12:30:00 +0100
From: Arnt Gulbrandsen <arnt@trolltech.com>
Subject: making #include scanning look in more directories
In a Jambase, I've set the C++ compilation flags to include an extra -I,
and all has been well for a very long time. But a few days ago I
discovered that the #include files in that directory aren't found. Hasn't
mattered up to now, because they practically never change.
It seems that I need to set SEARCH on all my source files to include that
directory, and I don't know how.
The only relevant rules I have are Main - three Main rules in all. Can I
do something to a Main invocation that'll magically propagate more SEARCH
directories to the source files it pulls in?
From: "Robert Cowham" <robert@vaccaperna.co.uk>
Subject: RE: compile all sources from current directory
Date: Wed, 17 Jan 2001 11:40:32 -0000
Christopher just announced that this sort of thing would be available in the
upcoming version of Jam.
Alternatively, there was a message about implementing it a week or two
back - check the archives.
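For what it's worth, jam 2.3's GLOB builtin can already express this. A sketch (untested; note that GLOB returns directory-prefixed names, hence the :BS modifier to strip the directory back off):

```jam
SubDir TOP dir subdir ;

# GLOB returns e.g. dir/subdir/foo1.cc; :BS keeps only basename + suffix.
local srcs = [ GLOB $(SUBDIR) : *.cc ] ;
Library BaseLib : $(srcs:BS) ;
```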
From: "Patrick Frants" <patrick@quintiq.nl>
Date: Fri, 19 Jan 2001 14:04:45 +0100
Subject: newbie: Include problem
I am playing around with jam and would like to solve the following problem:
My TOP is at GlobalProject. I have jamfiles in ProjectX and
SubProjectXX. The include files for ProjectX are located in the 'hdr'
subdirectory in ProjectX. The only way I get /I$(TOP)/Project1/hdr on
the command line is by specifying it with the SubDirHdrs rule in the Jamfile
on the SubProject level. It would be more appropriate however to specify
it in the jamfile on the Project level because it is the same for all
SubProjects of the Project. I tried to put the SubDirHdrs rule on that
level, but it is overridden in the jamfile of the SubProject... How can
I add $(TOP)/Project1/hdr to the include path in the jamfile in
$(TOP)/Project1?
Is there any variable (HDRS?) which I can set directly?
Also I use cygwin and gcc is the compiler instead of cc. Therefore I
created a jamrules in $(TOP) which contains one line: 'C++ = gcc'. Is
that the right way to do it?
GlobalProject
Project1
SubProject11 (contains .cpp files)
SubProject12
SubProject13
hdr
SubProject11 (contains .h files)
SubProject12
SubProject13
Project2
SubProject21
SubProject22
SubProject23
hdr
SubProject21
SubProject22
SubProject23
From: <boga@mac.com>
Date: Sun, 21 Jan 2001 10:37:56 -0000
Subject: Q: compiling multiple sources together
I'd like to compile multiple sources with one(!) compiler invocation.
My compiler supports compiling multiple sources at once in the form:
cc $(CFLAGS) a.c b.c c.c -o bin/
this is the same as:
cc $(CFLAGS) a.c -o bin/a.c.o
cc $(CFLAGS) b.c -o bin/b.c.o
cc $(CFLAGS) c.c -o bin/c.c.o
It's just that the first one is much faster. I'd like to implement such an
optimization with jam. Is it possible?
I'd like to have a rule like:
$(OBJECTS) = [ multiccompile $(SOURCES) : $(DESTDIR) ] ;
What has to work:
1. Should handle INCLUDE rules on SOURCES
2. If only some of the objects need to be updated, only those sources should
be passed to the compiler.
3. $(SOURCES) might contain sources from different directories, but not
sources with the same name.
The framework for multiccompile is something like:
rule multiccompile {
    local OBJECTS ;
    for i in $(1) {
        local OBJECT = $(i:d=$(2)).o ;
        OBJECTS += $(OBJECT) ;
    }
    return $(OBJECTS) ;
}
What I've tried:
- I've tried to use an action with the 'updated' modifier; it didn't work
because INCLUDES aren't handled.
- The compiler also supports taking options from a file (cc @filename), so
I've tried to generate that file by echoing to it. It did not work,
because it generated actions like:
Echo $(CFLAGS) > compile.cmd
Echo a.c >> compile.cmd
cc @compile.cmd
Echo b.c >> compile.cmd
I did this with something like:
rule multiccompile {
    local OBJECTS ;
    for i in $(1) {
        local OBJECT = $(i:d=$(2)).o ;
        OBJECTS += $(OBJECT) ;
    }
    _starccompile $(OBJECTS) ;
    for i in $(1) {
        local OBJECT = $(i:d=$(2)).o ;
        _compile $(OBJECT) : $(i) ;
    }
    _endcompile $(OBJECTS) ;
    return $(OBJECTS) ;
}
Date: Sun, 21 Jan 2001 14:59:28 +0100
From: Arnt Gulbrandsen <arnt@trolltech.com>
Subject: Re: Q: compiling multiple sources together
You'll need a wrapper around your compiler so that jam can call it the
same way as it calls e.g. rm, and then you'll need an Object-like rule
that uses together and/or piecemeal the way e.g. the rm rule does.
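A rough sketch of such an Object-like rule (untested; the rule name MultiCc and the assumption that the compiler accepts several sources per invocation with a directory for -o are mine):

```jam
rule MultiCc
{
    Depends $(<) : $(>) ;
}

# 'together' merges the sources of all MultiCc updates of the same
# target into one command line; 'piecemeal' re-splits the command if
# it would exceed the maximum command-line length.
actions together piecemeal MultiCc
{
    $(CC) $(CFLAGS) $(>) -o $(LOCATE_TARGET)/
}
```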
From: Stephen.Riehm@varetis.de
Date: Mon, 5 Feb 2001 18:16:37 +0100
Subject: Jam can't handle libraries on AIX?
we've been using jam on AIX 4.1 for some time now, and it's been working
great (which is why you haven't heard much from me :-)
However, now we're trying to use jam on AIX 4.3 - and it appears that jam
can't access the contents of library files any more - has anyone else seen
this problem before?
We tried using the binary from 4.1, a recompiled (jam 2.1.1) binary and a
brand new jam 2.3 binary, but they all show the same problems.
Any information would be greatly appreciated.
varetis COMMUNICATIONS GmbH, Munich, Germany
Date: Mon, 5 Feb 2001 12:22:45 -0800 (PST)
From: Matt Watson <mwatson@apple.com>
Subject: Re: Jam can't handle libraries on AIX?
Has AIX 4.3 changed the size of off_t, or the header hierarchy? We noticed
that the lack of a prototype for lseek() was causing archives to be read
incorrectly. I believe that this change was sent back upstream...
From: Stephen.Riehm@varetis.de
Date: Wed, 7 Feb 2001 15:08:26 +0100
Subject: updating from jam 2.1.1 to 2.3
we've been using a slightly modified version of jam for a few years
now, and upgrading to jam 2.3 isn't going all that well, as our
modifications are of course missing in the new version. Our
modifications were minimal, but I'd like to know if the same
functionality is now available in the current version.
We had the following extras:
A project path ($PRJPATH) was used to refer to multiple
development trees. (ie: <build_tree>:<private_tree>:<reference_tree>)
A routine was used to split a variable on any character
(similar to perl's split() function), this was used to split
PRJPATH into a list of paths (ie: a list of TOP directories).
(I believe such splitting is now standard for environment
variables whose name ends in PATH)
A second routine was added to jam to determine the directory
name relative to one of the TOP directories in $(PRJPATH).
Since jam could be started in a directory without a Jamfile
(i.e.: the Jamfile is only in the <reference_tree>) - the
Jambase set up a SEARCH for $(RELDIR)/Jamfile (SubDir thus
couldn't be used, because the Jamfile hasn't been found yet).
The effect was that a central source directory exists (the
<reference_tree>), in which the "official" code is placed. The
developers then work in a private directory (<private_tree>), again
only with source code. Finally, the developer has a third directory
(<build_tree>) where the build takes place. A complete "clean
re-build" simply requires this build directory to be deleted and a new call
to jam. The other side effect is that no-one can pollute the central
directory with objects etc., but everyone can build
using the same sources. No object or binary files are ever created in
the source directories - and no jam files need to exist in the build directories.
My Question:
How do these concepts fit in to the current version of jam? Is it
possible to determine the difference between the current directory and
a parent directory, and then use this difference for searching for
Jamfiles in other trees?
If this is not the case, would you [Christopher] mind if I sent you my
patch for the ability to split a path into TOP and RELATIVE parts, for
inclusion in the next version of jam?
(PS: there was a little discussion of this in 1996, but it kinda died,
and I haven't been working on the build environment for at least 3
years now - so sorry if I'm a little out of touch)
From: Leon Glozman <leonid_g@schema.com>
Date: Thu, 8 Feb 2001 15:02:13 +0200
Subject: Link of many objects in WATCOM
I work in an NT WATCOM environment. For linking I use the wlink command. I
have many objects to link, but wlink can't take that many objects on the
command line. For this situation, I can write the object names into a file
and wlink will read the names from that file, as follows:
wlink @[file_name]
How can I do this in my Jamrules file?
Date: Thu, 8 Feb 2001 10:29:28 -0600 (CST)
Subject: Re: Link of many objects in WATCOM
We do it like this:
# vLink is identical to Link in Jambase, except that it explicitly uses
# $(<[1]) as the output file. This allows us to pass down extra targets that
# get ignored by the action, but makes jam think we're building them.
# This is used to let jam know that Link will build a .lib in addition
# to a .dll when building a .dll. See the vMainFromObjects rule.
#
# Notice the solution for the line-too-long problem:
# create a file for the items, and use this trick, courtesy
# of Laura Wingerd, to output the items.
# This works due to the mix 'n match composition of macros
# by jam, each item in the extraobjects or $(>) is concatenated
# with the rest of the line. The period after the echo is
# ignored, I guess, but serves to make the whole thing one
# string. The newline macro splits it into individual lines.
actions vLink bind NEEDLIBS EXTRAOBJECTS {
copy nul: linkobjs.txt
echo.$(>)>>linkobjs.txt$(NEWLINE)
echo.$(EXTRAOBJECTS)>>linkobjs.txt$(NEWLINE)
set LIB=$(LIBPATH)$(EXTRALIBPATH)
$(LINK) $(LINKFLAGS) /out:$(<[1]) /PDB:$(<[1]:S=.pdb) $(COMLIBRESOURCE)
$(DEFEXPORT) $(UNDEFS) @linkobjs.txt $(NEEDLIBS) $(LINKLIBS)
$(RM) linkobjs.txt
}
newline is defined like this:
NEWLINE = "
" ; # used to break up long lines for echo to a file
Date: Fri, 9 Feb 2001 14:08:26 -0800 (PST)
From: Mark Lakata <lakata@mips.com>
Subject: jam as cad glue
I'm trying to use jam as CAD glue for chip design. That means I don't use
any of the built-in rules in Jambase; I've got my own set of rules and
actions.
One thing that is sorely missing is access to the shell, like the GNU
makefile $(shell cmd) feature. What I would like is something like this:
files = `cat filelist`
or
files = [ shell cat filelist ]
Is this feature there already? Has anyone hacked it in?
Subject: Re: jam as cad glue
From: Matt Armstrong <matt.armstrong@openwave.com>
Date: 09 Feb 2001 15:37:49 -0800
It is not there as far as I know, though jam 2.3 added support for
filename globbing.
But if the first line of your filelist file is:
files
and the last line is
;
then
include filelist ;
will work. ;-) Jam takes input from the environment and the jamfiles,
that's it.
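In other words, filelist itself would look something like this sketch (Jam also needs an '=' after the variable name, and file names here are illustrative):

```jam
files =
    one.c
    two.c
    three.c
;
```

and `include filelist ;` in the Jamfile then leaves the list in $(files).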
Date: Tue, 13 Feb 2001 11:32:01 -0800 (PST)
From: Mark Lakata <lakata@mips.com>
Subject: 'system' built-in rule
I hacked together a built-in rule called 'system' which is like the
backtick operator in csh/perl, and the $(shell ...) command in GNU make.
The system command you run must output one list item per output line, each
line terminated with a newline character. The command is run under the
Bourne shell (/bin/sh).
This compiles on Solaris 2.6, perhaps on other unix machines too.
syntax:
variable = [ system "cmd arg arg ..." ] ;
example:
listOfFiles = [ system "find . -name '*.c' -mtime +1" ] ;
ECHO $(listOfFiles) ;
The purists will argue that this is a bad idea, since the output of the
system command is not reproducible, and therefore builds are time
dependent. Well, my answer is that I am not building an executable, I'm
gluing together 3rd party tools in a design verification flow.
# Make this addition to Jamfile
LINKLIBS += -lgen ;
Make these modifications to compile.c:
/* add this declaration after the line that declares builtin_flags */
static LIST *builtin_system( PARSE *parse, LOL *args );
/* add this to the compile_builtins() routine */
bindrule( "system" )->procedure =
parse_make( builtin_system, P0, P0, P0, C0, C0, 0 );
/* add this definition at the end */
static LIST * builtin_system( PARSE *parse, LOL *args ) {
/* FIXME: check number of args */
LIST *cmd_list = L0;
LIST *l = lol_get( args, 0 );
char *cmd = l->string;
FILE* fp[2];
char buf[BUFSIZ];
if (p2open("/bin/sh", fp) == 0) {
write(fileno(fp[0]),cmd,strlen(cmd));
write(fileno(fp[0]),"\n",1);
fclose(fp[0]);
while (fgets(buf, BUFSIZ, fp[1]) != NULL) {
char* lastchr = &buf[strlen(buf)-1];
if (*lastchr == '\n') {
*lastchr = '\0';
} else {
if (strlen(buf) >= BUFSIZ-1) {
fprintf(stderr,"FATAL: command \"%s\" generated line too long\n",cmd);
} else {
fprintf(stderr,"FATAL: command \"%s\" generated line with no newline termination\n",cmd);
}
exit( EXITBAD );
}
cmd_list = list_append( cmd_list,
list_new( L0, newstr( buf ) ) );
}
p2close(fp);
} else {
fprintf(stderr,"FATAL: failed to spawn /bin/sh\n");
exit( EXITBAD );
}
return cmd_list;
}
From: "Czura, Wojtek" <Wojtek.Czura@cognos.com>
Date: Tue, 13 Feb 2001 15:55:54 -0500
Subject: Build tree on VMS...
I started to install Jam 2.3 build environment on Alpha OpenVMS 7.1 box.
The build is large and deep and covers almost all existing platforms. We
want our Jamrules file to be in DEV:[dir.name.top], while builds span
between sources in top/our_group/lev1/lev2/.../levN and targets in
top/platform/public/lev1/.../levM. I have a problem setting the root
directory TOP which I use in local Jamfiles as: TOP = TOP: ;
If I use: define TOP dev:[dir.name.top.], I have access to all directories
in the build, except TOP itself. As a result, Jam will not start because
TOP:Jamrules. is not found.
If I use: define TOP dev:[dir.name.top], I have access to TOP:Jamrules. but
not to any of the branches.
If I don't use TOP at all, Jam tries to use relative paths
[-.-.-.-.vms.public.lev1..] and at some point the number of levels is
greater than VMS can handle and targets are not created or copied.
Included is a VMS listing which illustrates the problem:
$ pwd
PATH$SRVC:[SRVC.ME.SB.LEV1.LEV2.COMMON.COMMON]
$ sho log top*
"TOP" = "$1$DIA34:[SERVICES_AXP.SRVC.ME.SB.]"
"TOPSB" = "$1$DIA34:[SERVICES_AXP.SRVC.ME.SB]"
$ dir top
%DIRECT-E-OPENIN, error opening
$1$DIA34:[SERVICES_AXP.SRVC.ME.SB.][SRVC.ME.SB.LEV1.LEV2.COMMON.COMMON]*.*;*
as input
-RMS-E-DNF, directory not found
-SYSTEM-W-NOSUCHFILE, no such file
$ dir topsb
Directory $1$DIA34:[SERVICES_AXP.SRVC.ME.SB]
JAMRULES.;8 31 13-FEB-2001 14:03:31.09 (R,R,R,)
VMS.DIR;1 1 12-FEB-2001 13:04:13.54 (RWE,RWE,RWE,)
Total of 12 files, 322 blocks.
$ dir top:[vms]
Directory $1$DIA34:[SERVICES_AXP.SRVC.ME.SB.][VMS]
Total of 1 file, 2 blocks.
$ dir topsb:[vms]
%DIRECT-E-OPENIN, error opening TOPSB:[VMS] as input
-RMS-F-DIR, error in directory name
From: "Ducharme, Gregory" <Gregory.Ducharme@Cognos.COM>
Date: Thu, 15 Feb 2001 08:34:30 -0500
Subject: How do I do this in Jam?
I am stuck on what should be a trivial problem in Jam (it certainly is in
make): How do I create a simple rule to allow usage of an arbitrary code
generator and still maintain dependencies?
The usage would be similar to:
Library X : a.c b.c ;
Generator b.c : b.txt : "command generating b.c from b.txt" ;
It's easy to make this work if LOCATE_TARGET and LOCATE_SOURCE are left as
the defaults (i.e. cwd). However, as soon as they are changed the build
fails as follows:
jam
cannot make yadda!yadda!b.c
Generator locate!target!b.c (depends on files so
will run anyway)
skipping locate!target!b.o due to missing yadda!yadda!b.c
skipping locate!target!X due to missing locate!target!b.o
I've tried things like:
SEARCH on b.c = $(LOCATE_SOURCE) ;
but to no avail.
Has anyone solved this problem, and is willing to publish a solution in this newsgroup?
Date: Tue, 20 Feb 2001 16:52:24 -0800
From: Donald Blackfield <dtb@cisco.com>
Subject: Missing jam 2.2 jambase rules in jambase for jam 2.3
I am somewhat new to jam and have been given the task of getting jam 2.3
"up and running". We have been using jam 2.2 . I understand that jam 2.3 is
supposed to be backwards compatible with jam 2.2. However, I have found that
there are at least 4 rules that were defined in the Jambase for jam 2.2
which appear to be missing from the Jambase for jam 2.3.
Below is an excerpt from an e-mail detailing some of the differences that we
have found:
In jam 2.3, there are some rules defined to replace corresponding ones
from jam 2.2, since rules can return values in jam 2.3.
To make it compatible with jam 2.2, the old rules are redefined in jam 2.3
as a wrapper of their new peers.
The problem is that some of them are not redefined in the new Jambase.
Here's the list of all such rules.
Jam2.2 Jam2.3
======================================================================
addDirName FDirName
makeDirName FDirName
makeGristedName FGristSourceFiles
makeRelPath FRelPath
makeSuffixed FAppendSuffix
makeCommon _makeCommon !! not redefined !!
makeGrist FGrist !! not redefined !!
makeString FConcat !! not redefined !!
makeSubDir FSubDir !! not redefined !!
======================================================================
A quick fix is to put these 4 lines in the new Jambase
rule makeCommon { _makeCommon $(<) : $(>) ; }
rule makeGrist { $(<) = [ FGrist $(>) ] ; }
rule makeString { $(<) = [ FConcat $(>) ] ; }
rule makeSubDir { $(<) = [ FDirName $(>) ] ; }
Besides performing this "quick" fix, I am wondering why these rules have not
been redefined. Have they been replaced by rules with a different name? Are we not
supposed to invoke these rules? If so, then why and what rules should we be
using instead? Have I "grabbed" the wrong version? I downloaded the jam-2.3.tar
file from your web site.
Any information which you can provide would be greatly appreciated.
From: Simon Cornish <simon.cornish@calix.com>
Date: Mon, 26 Feb 2001 13:41:54 -0800
Subject: Multiple recompiles of common object.
Jam 2.2.1 seems to have a bug determining dependencies of objects that are
required by a number of targets.
Consider the following Jamfile:
# Jamfile
TOP = . ;
Main A : a.c common.c ;
Main B : b.c common.c ;
# End Jamfile
Jam will compile common.c twice in order to build the targets A and B. This
seems wrong to me and running jam with maximum debugging yields no clues.
Even worse, only building one target (ie. running "jam A") still compiles
common.c twice!!
Any ideas how to avoid this? Building the common objects into a library is
not applicable for my target environment, but even if it was I think the
behaviour Jam exhibits here is incorrect.
Date: Mon, 26 Feb 2001 10:35:00 -0500
From: Beman Dawes <bdawes@acm.org>
Subject: Win 2K path in quotes?
Jambase doesn't seem to have quotes around paths based on MSVCNT, as
required by Windows for directory and file names with embedded spaces.
From: Grant_Glouser@palm.com
Date: Tue, 27 Feb 2001 12:13:22 -0800
Subject: Re: Multiple recompiles of common object.
I have encountered this many times. It's a common thing to try in a Jamfile.
The reason it happens is that common.c is passed to the Cc rule twice, and each
invocation of the Cc rule adds the Cc actions block to the target "common.o".
Main A : common.c ;
Objects common.c ;
Object common.o : common.c ;
Cc common.o : common.c ; # compile common.c (add the Cc
actions block to the actions for common.o)
Main B : common.c ;
Objects common.c ;
Object common.o : common.c ;
Cc common.o : common.c ; # compile common.c (add the Cc
actions block to the actions for common.o)
This happens during the evaluation of the Jamfiles, not during the processing of
the dependency graph. The target named "common.o" always has two Cc actions
blocks attached to it. So even if you just "jam A", common.o will be compiled twice.
This behavior is useful in some circumstances, which is why I hesitate to agree
that it's a bug. I think you want a library here, or else compile all the files
by hand, something like this:
Objects a.c b.c common.c ;
MainFromObjects A : a.o common.o ;
MainFromObjects B : b.o common.o ;
You could even modify the Jambase such that Main allows objects in $(>), which
makes the final Jamfile slightly cleaner, IMHO. We've done that to our
equivalent in-house rules, and it seems natural and obvious to me now to have a
Jamfile like this:
Objects common.c ;
Main A : a.c common.o ;
Main B : b.c common.o ;
Subject: Re: Multiple recompiles of common object.
From: Matt Armstrong <matt.armstrong@openwave.com>
Date: 27 Feb 2001 07:43:53 -0800
Some comments:
Don't set TOP like this, instead use SubDir like this:
SubDir TOP ;
That sets up a whole slew of other needed variables.
Instead of that, do this:
local common_src = common.c ;
local common_obj = $(common_src:S=$(SUFOBJ)) ;
Objects $(common_obj) ;
Main A : a.c $(common_obj) ;
Main B : b.c $(common_obj) ;
Date: Wed, 28 Feb 2001 16:28:29 +0100
From: David Turner <david.turner@freetype.org>
Subject: Jam with Windows 95/98
I've been a new jam user for a few weeks now, and it's
really a fascinating tool. Thanks a lot to Christopher Seiwald
and all other people involved in the development of Jam.
I have managed to patch jam to run under Windows 95/98
(mostly) correctly, but I don't know if my approach is the
correct one. I'd appreciate input about this, and I'd be
glad to contribute my changes to the main sources if
you find them adequate.
The current jam (2.3) binaries do not run correctly
under Windows 95/98, and there are several reasons for this:
- first of all, there is no shell named "cmd.exe" that
comes with this version of Windows. Instead, we have
the incredibly clumsy "command.com"
- if you patch "execunix.c" to use "command.com /c" instead
of "cmd.exe /c/q" when a Windows95/98 system is detected,
the following problems appear:
- the trailing newline ('\n') at the end of the
"string" variable used to hold the command line
is interpreted as an additional argument by command.com
that is passed to the action being launched.
Most programs (compilers/linkers) are unable to deal
with it, so a quick patch is also applied to strip the
newline before calling "command.com" (this is minor,
but was really tough to track !!)
- "command.com /c" seems to always return 0, even if the
command that was launched through it failed. Jam will
think that every action is successful, leading to really
messy builds.
- the "del" command doesn't accept multiple arguments, i.e.
something like "del a.o b.o" will not work, making
"jam clean" completely useless with this shell.
To overcome these problems, I have written a new source file
named "execnt.c" to be used under Windows 95/98 and NT. It
acts exactly like the normal "execunix.c" on NT, while having
a very special behaviour under Windows 95/98:
- it detects hard-coded commands of "command.com" like
"del", "copy", "rename", etc.. and specifically
executes them through the ANSI "system" call
(which really invokes command.com)
- with the exception of "del" and "erase", which are
specially filtered in order to ignore any toggles/flags
and accept multiple targets/arguments..
- all other commands are called directly through a
synchronous "spawnvp" (which returns the program's
exit code). Given that W95/98 doesn't support multiple
processors, I don't think it's really a problem..
It seems to work very well here, given that I didn't need to
change anything in the Jambase, though I'm not exceptionally
proud of what I've done.
I have not tested this under a different shell, e.g. the Cygwin
bash one (though I wonder if it is supported ??)
I have also looked at the way actions and processes are handled
in GNU Make under Win32, but the method they chose involves
complex process setup/loading/invocations that would complicate
things; even if their approach seems more generic and "clean", it's
a _lot_ more code to write.
In all cases, I'd be glad to send my modifications to anyone
who would like them.
Date: Wed, 28 Feb 2001 19:16:56 +0100
From: David Turner <david.turner@freetype.org>
Subject: Re: Jam with Windows 95/98
I'm the main author of a _very_ portable library (see www.freetype.org),
and we need to support a large set of build platforms. I have managed
to build a rather strange build system with GNU Make and a set of
specially crafted sub-Makefiles, but the end result is _really_
hard to understand and maintain, even if it supports a wide variety
of compilers and platforms.
Jam is able to do the same thing in a dramatically simpler way, and
it's also capable of automatically compiling executables and running
them. This is really useful for automated test suites :-)
I'm now trying to scrap our old build system in favour of Jam, but I need
to ensure that it supports all the platforms we currently do.
This means that I also intend to work on the Jambase to support
the following toolsets (on Windows):
- Intel C/C++
- Watcom C/C++
- GCC (MingW)
- Win32-LCC
The only thing I've done about it for now is in the "del"/"erase"
filter, which effectively supports double quotes in filenames.
Otherwise, I believe that more work is required in jam itself, but
I've not taken the pain to solve this (relatively minor) problem.
I just followed the naming convention used by "filent.c" and "pathnt.c" :-)
Of course, changing this wouldn't be too difficult but the rest of Jam
uses "NT" quite extensively (in macros, in the Jambase, etc..)
And even if MS is phasing out the NT name, I believe we're more
interested in working tools than marketing fluff..
PS: On a related note, has anyone considered the ability to automatically
generate Makefiles from Jam ? I know, I know, it's really a strange
proposal :-)
Date: Wed, 28 Feb 2001 22:25:45 +0100
From: Arnt Gulbrandsen <arnt@trolltech.com>
Subject: Re: Jam with Windows 95/98
Fairly easily done, if all you want is the thing I anticipate wanting.
Say, a thirty-line perl hack.
1. Run jam clean, taking note of the commands executed.
2. touch every file.
3. Run jam -v, taking note of all commands executed.
4. Write a Makefile with two targets: 'clean' that does what jam did
during 'jam clean' and removes all files created during the 'jam -v',
and 'all' that depends on every file whose atime changed during the
'jam -v', and whose commands are all the commands jam executed during
'jam -v'.
Evil? Yes. But it'll produce a working makefile, enough that people who
don't have jam can compile the thing.
Date: Thu, 1 Mar 2001 12:57:25 -0800
From: "Neil Okamoto" <neil_o@my-deja.com>
Subject: question about SubDir* rules
I'm trying to understand something about jam's SubDir* rules.
As long as I build from the top level directory everything is fine.
However, if I try to build from a subdirectory in the project,
any dependencies that are not beneath the current directory in the
hierarchy are not found. I'd like to be able to build from any
arbitrary directory in the project.
Date: Fri, 02 Mar 2001 20:32:30 +0100
From: David Turner <david.turner@freetype.org>
Subject: Re: Jam with Windows 95/98
Well, I was thinking about generating the Makefiles from Jam itself.
Given that it knows all about dependencies, and that it has pretty good
string manipulation routines, it shouldn't be that hard..
(Also, relying on Perl or Python on the host isn't something I really
enjoy for such a simple task).
My biggest problem is that I want to preserve the use of macros
like $(CC), $(RM), $(CP), etc.. in the Makefile rules, as well
as their definitions (which depend on the toolset).. The solution
you're suggesting wouldn't allow this..
Date: Fri, 02 Mar 2001 20:41:54 +0100
From: David Turner <david.turner@freetype.org>
Subject: Jam binaries for Windows 95/98 and OS/2
I've finally publicly posted my changes. Have a look at the following addresses:
ftp://ftp.freetype.org/pub/contrib/jam/jam-win.zip
contains a pre-compiled Jam binary for Windows NT
and Windows 95/98, that supports the following
compilers: Visual C++, Intel C++, Borland C++,
Watcom C++, Mingw (gcc) as well as LCC-Win32
ftp://ftp.freetype.org/pub/contrib/jam/jam-os2.zip
contains a pre-compiled Jam binary for OS/2, that
supports the EMX and Watcom compilers/toolsets
(Jam 2.3 only supports Watcom). VisualAge is in
the works..
ftp://ftp.freetype.org/pub/contrib/jam/jam-src.zip
is my version of the Jam sources, based on version
2.3. they were used to build the two binaries above
I'm releasing these files because several people have
already asked me for the W95/98 binaries, and because
I want to use them as soon as possible in order to get
rid of the ugly build system in FreeType 2.
Note that it's just a "quick hack", that many things may
be re-written later, and that I really hope that these
changes will be accepted by the Jam community.
I'm sorry for not using Perforce yet, not sending patches,
etc.. If these changes do not reflect the current developments
in Jam, I'd be glad to adapt them in any way consistent with
what Christopher might think best..
I'm attaching the README for the new sources, as it explains the
rather important changes that were adopted here..
This is a special version of the Jam/MR tool.
For more information about Jam, see the file README.ORG, as well as:
http://www.perforce.com/jam/jam.html
Note that this code is based on Jam release 2.3
However, it includes the following enhancements:
- it runs under Windows 95/98 (mostly) correctly
(jam 2.3 only runs under Windows NT, as well as
UNIX, OS/2, BeOS, MacOS, VMS, etc..)
this required the writing of a new source called
"execnt.c" ("execunix.c" is still used on OS/2 though)
- it runs under OS/2 with either EMX (gcc) or Watcom C/C++
- it contains a new builtin rule named HDRMACRO
this rule is used to indicate that a source file
contains macro definitions that are used in
#include statements, like the following:
public.h:
#define MY_FILE_H <myfile.h>
#define OTHER_FILE_H "otherfile.h"
such files are parsed when a line like:
HDRMACRO public.h ;
is found in a Jamfile, in order to detect and record
the macro definitions in a global dictionary.
when other source files are parsed for #include statements,
lines like:
#include MY_FILE_H
are resolved through the global macro dictionary.
this new rule is required to compile FreeType 2 with
Jam, as well as a few other interesting projects..
(it is implemented by "hdrmacro.c", some changes were also
necessary in "compile.c" and "headers.c")
- it supports the following toolsets on Windows 95/98 and NT:
- MS Visual C/C++
- Intel C/C++
- Watcom C/C++
- Borland C/C++
- LCC-Win32
- MingW (GCC for Windows, but _not_ Cygwin)
even though it is compatible with the old jam 2.3 windows
support (i.e. you can define MSVC, MSVCNT or BCCROOT), the
toolset is now selected through the following scheme:
- define one of the following environment variables, with the
appropriate value according to this list:
Variable Toolset Description
BORLANDC Borland C++ BC++ install path
VISUALC Microsoft Visual C++ VC++ install path
VISUALC16 Microsoft Visual C++ 16 bit VC++ 16 bit install
INTELC Intel C/C++ IC++ install path
WATCOM Watcom C/C++ Watcom install path
MINGW MinGW (gcc) anything..
LCC Win32-LCC LCC-Win32 install path
- define the JAM_TOOLSET environment variable with the name
of the toolset variable you want to use.
as an example, you can use the Microsoft Visual C++ compiler with
something like:
set VISUALC=C:\Visual6 (really the path to the VC++ installation)
set JAM_TOOLSET=VISUALC
jam
a similar scheme is used under OS/2 to select between EMX and WATCOM
note that Watcom support is not fully tested, especially with shared
libraries..
I plan to add support for IBM Visual Age C/C++ to both the Windows
and OS/2 ports.
- I added a new variable expansion modifier, named "T", that is
used to toggle "\" and "/" in strings. This is required to correctly
support GCC on Windows. As an example:
set VAR = "c:\mydir\myfile" ;
echo $(VAR:T) ;
will print:
c:/mydir/myfile
(this was a quick hack in "expand.c")
Hoping that these changes will be integrated into the official version
of Jam, or at least that they'll drive the inclusion of similar features
Don't forget that all of this is a quick hack over the Jam 2.3 sources,
and that some cleanup should certainly occur in the source code (HDRMACRO
might as well be renamed to something different, by the way).
Of course, everything is released under the Jam license..
Date: Fri, 2 Mar 2001 22:03:30 +0100
From: Arnt Gulbrandsen <arnt@trolltech.com>
Subject: Re: Jam with Windows 95/98
Probably about as hard as its current job, I'd guess.
Might I ask why you'd want to do this?
Date: Sun, 04 Mar 2001 09:57:59 -0500
From: Beman Dawes <bdawes@acm.org>
Subject: Re: Win 2K path in quotes?
>Jambase doesn't seem to have quotes around paths based on MSVCNT, as
>required by Windows for directory and file names with embedded spaces.
>What am I missing?
What I was missing was the obvious workaround of using the old 8.3 names
that Windows generates for each real directory and file name. Ugly, but it
does appear to work.
It might be a good idea to add something to the docs. Perhaps add a final
paragraph to jam.html - LANGUAGE - Lexical Features:
Directory and file names with embedded whitespace characters will not work
correctly, because Jam treats the whitespace as a token separator. The
workaround for MS Windows is to use the 8.3 form of names. For example,
"c:\Program Files\Microsoft Visual Studio\vc98" might become
"c:\progra~1\micros~2\vc98". The exact translation to 8.3 names is system dependent.
Date: Mon, 05 Mar 2001 07:53:01 -0500
From: Beman Dawes <bdawes@acm.org>
Subject: Build multiple flavors of a library?
It is common to need to build multiple flavors of a library: for example,
release and debug versions, each with single-threaded, multi-threaded, and
dynamic linking. 2 * 3 = 6 libraries in total. Each should go in a different directory.
How would I go about doing this with a single invocation of Jam?
From: Leon Glozman <leon.glozman@schema.com>
Date: Mon, 5 Mar 2001 14:56:32 +0200
Subject: How can I count object files for any exe or lib in Jam language?
How can I count object files for any exe or lib in Jam language?
Date: Tue, 6 Mar 2001 06:14:51 -0800
From: "Chris Antos" <chrisant@Exchange.Microsoft.com>
Subject: Precompiled headers
I can't quite get the dependencies right with Jam for precompiled
headers (MSVCNT). Anyone already have rules with proper dependencies
for the pch?
The closest I've gotten was that it worked fine as long as I let it
build "all", but if I tried to build a specific "foo.exe" then it tried
to build the Main foo : foo.cpp ; object prior to the PCH (that's the
only dep problem remaining, but it's a real bugger trying to eradicate
it).
Moving foo.cpp into a Library FOOLIB : foo.cpp otherfiles.cpp ; kind of
works except that the linker complains because I'm only linking from
libs so it doesn't know for sure the right target platform (ok,
whatever). I can work around that a few different ways, but the bottom
line is I should just be able to get the dependencies right and have the
whole problem go away.
In an attempt to get the deps totally correct, I've tried making
surgical changes to jambase to introduce support for a SubDirPrecompHdr
rule that sets up some per-subdir variables used by the C++ rule (to
indicate the PCH name, etc) and a new Pch rule (to create the PCH).
(for whatever reason MSVC is choosing to often ignore the PCH if I use
the /YX flag for automatic pch). This approach is not working very
well, because although it has the deps for the pch file right, the deps
are busted for everything else such that it recompiles everything (minus
the pchs) each time.
From: Leon Glozman <leon.glozman@schema.com>
Date: Wed, 7 Mar 2001 15:06:10 +0200
Subject: Different compilation flags for dll & exe creation
I work on a Watcom project. I want to create some DLLs & executables. The
problem is that the compilation flags for DLLs & executables are different
(the DLL compilation flags include -bd, the exe compilation flags don't),
but "rule C++" in Jam "doesn't know" whether the objects being created are
for a DLL or for an executable.
How can I solve the problem?
Date: Fri, 9 Mar 2001 21:26:33 -0800
From: "Chris Antos" <chrisant@Exchange.Microsoft.com>
Subject: Multiple target files from one action?
I finally seem to have gotten the dependencies right for MSVC
precompiled headers (.pch) and interface files (.idl). It was no small
task, so I feel compelled to share: if anyone wants the rules/actions,
let me know. In particular, they work for both "jam -j" and "jam
target". Very minor tweaks to the C++, Library, and SubDir rules.
From: "Jan Mikkelsen" <janm@transactionware.com>
Subject: Re: Multiple target files from one action?
Date: Sat, 10 Mar 2001 17:07:13 +1100
I'm certainly interested in seeing the rules.
I'm new to Jam, and I'm creating rules for Antlr (a compiler compiler)
which also generates multiple targets from a single action. I haven't
really started to think about it yet, so looking at a working approach
would certainly be useful.
From: john@nanaon-sha.co.jp (John Belmonte)
Date: Sun, 11 Mar 2001 19:18:50 +0900
Subject: flag ordering problem / target-specific variables
The current flag ordering for the As, Cc, and C++ rules is:
<target-specific flags> <global flags> <subdir flags>
With this ordering it's not possible for target-specific flags to override
global/subdir flags. (I'm assuming that most compilers are like gcc in that
later flags have precedence.) For example I have my global C++FLAGS set to
"-fno-rtti" but on a certain target I would like to override this with
"-frtti". Hence I would like the ordering to be:
<global flags> <subdir flags> <target-specific flags>
but Jam does not support this neatly because there is no way to access
target-specific variables outside an action other than indirectly with the
+= operator.
I know the workaround would be to put the global/subdir flags under a
different variable and have the action take care of the ordering, but I'm wondering...
Wouldn't it be useful if Jam provided a way to read target-specific
variables? The current situation where append is possible (via +=), but
read is not, seems a bit strange.
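A minimal sketch of the workaround mentioned above (variable names are
invented for illustration, modeled on Jambase's C++ action):

```jam
# Keep global/subdir flags in their own variables so that the
# target-specific C++FLAGS lands last on the command line and
# therefore wins with gcc-style compilers.
GLOBAL_C++FLAGS = -fno-rtti ;

actions C++ {
    $(C++) -c $(GLOBAL_C++FLAGS) $(SUBDIR_C++FLAGS) $(C++FLAGS) -o $(<) $(>)
}

# Per-target override, set as a target-specific variable:
C++FLAGS on special.o = -frtti ;
```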
Subject: RE: Multiple target files from one action?
Date: Mon, 12 Mar 2001 15:26:54 -0800
From: "Chris Antos" <chrisant@Exchange.Microsoft.com>
As far as I can tell, Jam seems to have a bug in how it treats a
rule/action that updates multiple targets from one source. If a
rule/action updates say 4 targets (for example, the Midl compiler for
interface definitions produces one .h file and three .c files), "jam
-j2" will run the Midl action, but will simultaneously start running the
Cc action to update the object files from the 3 generated .c files. But
of course the .c files do not yet exist.
I worked around this by inserting dependencies to fake Jam into
believing that only the .h file is generated by the Midl rule, and that
the three .c files are dependent on the .h file. I also use RmTemps on
the .c files, which ought to generally avoid the problem of not knowing
how to generate them independently if the .h file already exists.
However, now I've run into a third instance of this bug. (The first was
with .pch/.obj, the second was with .h/.c/.idl). Now I'm trying to hook
up .sbr/.bsc Extended Browse Symbols to be automatically built, but
unfortunately for me the compiler generates the .sbr file prior to
generating the .obj file. In order to apply my earlier trick, I would
need to make .obj depend on .sbr depend on .cpp, but that will cause
havoc all around. I guess I'll have to make the Cc/Cpp actions touch
the .sbr file after it's updated, to fake the dependencies in such a way
that Jam can understand them.
In any case, the source for my rules for the .pch/.idl files will be
mostly not useful to you. The only part I can see that would be useful
is just the trick about telling Jam something like this:
Depends parenttarget : fourgeneratedtargets ;
Depends threeofthetargets : theoneparticulartarget ;
UpdateRuleFor theoneparticulartarget : thesourcetarget ;
As indicated above, this has the side effect that if
theoneparticulartarget exists but one or more of threeofthetargets is
missing, Jam doesn't know how to build threeofthetargets. In my case
this is acceptable, and even unlikely since threeofthetargets are really
just temporary files anyway, and I enforce this with "RmTemps
parenttarget : threeofthetargets ;".
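Applied to the MIDL example mentioned earlier, the trick might read like
this (the file names and the Midl rule are hypothetical):

```jam
Depends foo.h : foo.idl ;                    # pretend only the .h is built
Depends foo_i.c foo_p.c dlldata.c : foo.h ;  # generated .c files chase it
Midl foo.h : foo.idl ;                       # user-written rule/action
RmTemps foo.h : foo_i.c foo_p.c dlldata.c ;  # avoid a stale .h without .c
```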
Longer term, I hope to track down exactly why Jam doesn't deal well with
multiple targets generated by a single action. Hopefully it will be
something relatively trivial, in which case I'll see what I can do to fix it.
Date: Mon, 12 Mar 2001 18:02:09 -0600 (CST)
Subject: Re: Multiple target files from one action?
I don't think it's a bug - jam just executes actions in the order
that is necessary, and since it waits for an action to complete
before doing the next, things are fine. -jn is a hack which just
makes it issue n actions at one time. They are issued in the
correct order, but it doesn't have the logic to wait for important
stuff to finish. I think it alludes to this in the description for -j
One approach is to make stuff dependent upon various phases, so that
a jam -j8 obj is fine because all source is already generated.
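One way to sketch such a phase in a Jamfile (target names invented for
illustration):

```jam
# Group all generated sources under one pseudo-target.
NotFile gensrc ;
Depends gensrc : foo.h foo_i.c foo_p.c ;

# Build the phase first, then parallelize the rest safely:
#   jam gensrc && jam -j8 obj
```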
Subject: RE: Multiple target files from one action?
Date: Mon, 12 Mar 2001 16:56:31 -0800
From: "Chris Antos" <chrisant@Exchange.Microsoft.com>
You're right that -jn starts actions in the correct order. But both
code inspection and empirical testing show that the way in which it
determines the correct order is via the dependencies. Pretty much any
build that successfully completes will at one or more points starve the
-jn pipeline. For example, "jam -j8" waits until all objects are
updated before it starts the Link action. If there are some actions
that don't depend on the target of the Link action, then it will start
actions to work on those while the Link is in progress, but if there are
no more actions, it starves (AFAICT) temporarily.
This works successfully and consistently at several points during our
build, so Jam seems to basically have it right, except when the
dependency graph says that multiple targets are updated by a single
action (i.e., the dependency is M:1 or M:M). In that case it seems to
not wait at all, or perhaps it is only waiting for the first target to
have been updated; I haven't investigated that detail yet.
For example:
rule CreateFiles {
local cfiles = x.c y.c z.c ;
Depends $(<) : $(cfiles) ;
Depends $(cfiles) : $(>) ;
LibraryFromObjects xyzlib : $(cfiles) ;
}
actions CreateFiles {
$(CREATEFILES) /input=$(>) /out1=$(<[1]) /out2=$(<[2]) /out3=$(<[3])
}
CreateFiles myprogram : createfiles.src ;
Since all 3 $(cfiles) depend on $(>), success is dubious if it starts
the CreateFiles action and simultaneously starts the LibraryFromObject
rule (or more precisely the Cc actions). It should wait to build the
dependents (i.e. $(cfiles)) until the action that was invoked on $(>)
(i.e. createfiles.src) is completed. Again, because it does wait
correctly when an action updates only one target, I interpret this as a
limitation in the wait logic (I'll try to be P.C. and avoid calling it a
bug ;-). Since this limitation makes things quite difficult sometimes,
I plan to investigate it and hopefully enhance the wait logic to
successfully support actions with M:M dependency, rather than only 1:M
as currently.
I've recently run into a situation where at best I will need to Touch
files to help massage the dependency graph into a shape that Jam can
understand with its current 1:M wait logic.
Disclaimer: I'm expressing this in empirical terms based on what I'm
seeing. Perhaps in the end this will turn out to be as simple as a
flipped boolean somewhere. In the code, this may not strictly be a
limitation in the wait logic, but the manifest symptom is that the wait
logic needs improvement.
From: Jean-Daniel.Aussel@bull.net
Date: Tue, 13 Mar 2001 11:38:30 +0100
Subject: substitution built-in command for jam
For those interested, I added a substitution built-in command for jam,
supporting regular expressions. The following changes have been tested only
on Windows NT.
The subst built-in syntax is [ subst <sourcestring> <pattern>
<substitutionstring> ].
A sample use of the subst built in is
#--- JamFile --- start ---
# simple replacement
#
SOURCESTRING = x:\\private\\@WORKSPACE@\\src\\test ;
TARGETSTRING = [ subst $(SOURCESTRING) @WORKSPACE@ dummy ] ;
ECHO $(TARGETSTRING) ;
# regular expression matching
#
SOURCEPATH = x:\\private\\dummy\\src\\test ;
PATHWITHOUTDRIVE = [ subst $(SOURCEPATH) ([A-Za-z]:)(.*) "$2" ] ;
ECHO $(PATHWITHOUTDRIVE) ;
DRIVEWITHOUTPATH = [ subst $(SOURCEPATH) ([A-Za-z]:)(.*) "$1" ] ;
ECHO $(DRIVEWITHOUTPATH) ;
#--- Jamfile --- end ---
Which results in the output:
x:\private\dummy\src\test
\private\dummy\src\test
x:
Building none
...found 11 target(s)...
To implement the subst built-in:
1. Add the following file to your jam sources
/*
* Permission is granted to anyone to use this software for any
* purpose on any computer system, and to redistribute it freely,
* without restrictions.
* ALL WARRANTIES ARE HEREBY DISCLAIMED.
*/
/*
* _substcmd.c implements subst built-in command
*/
#include "lists.h"
#include "newstr.h"
#include "parse.h"
static LIST *_subst_list;
LIST *
builtin_subst(
PARSE *parse,
LOL *args )
{
/* FIXME: check number of args */
LIST* l;
char* pszIn;
char* pszToReplace;
char* pszReplacement;
char szOut[4096];
int iOnce;
void replace( const char* szIn, const char* szOld, const char*
szNew, char* szOut );
_subst_list = L0;
for( iOnce=1; iOnce; iOnce-- ) {
l = lol_get( args, 0 );
if(!l) { break;}
pszIn = l->string;
l = list_next( l );
if(!l) { break; }
pszToReplace = l->string;
l = list_next( l );
if(!l) { break; }
pszReplacement = l->string;
replace( pszIn, pszToReplace, pszReplacement, szOut );
_subst_list = list_append( _subst_list,
list_new( L0, newstr( szOut ) ) );
}
return _subst_list;
}
2. Add this code to the end of compile_builtins() in compile.c:
{
extern LIST* builtin_subst( PARSE *parse, LOL *args );
bindrule( "subst" )->procedure =
bindrule( "SUBST" )->procedure =
parse_make( builtin_subst, P0, P0, P0, C0, C0, 0 );
}
3. Add the following four files to your jam sources: _replace.cpp,
_perlclass.cpp, _perlclass.h, _regexp.h. The last three files were
written by Jim Morris, are public domain, and are available as part of
the toogl tool on the SGI web site.
/*
* Permission is granted to anyone to use this software for any
* purpose on any computer system, and to redistribute it freely,
* without restrictions.
* ALL WARRANTIES ARE HEREBY DISCLAIMED.
*/
/*
* _replace.cpp string replacement function supporting regular expressions;
* uses the PerlString class written by Jim Morris of sgi as
* part of the toogl tools. See copyright in perlclassp.cpp
*/
#include "_perlclass.h"
extern "C" void replace(
const char* pszIn,
const char* pszOld,
const char* pszNew,
char* pszOut )
{
int bHasChanged;
PerlString str( pszIn );
PerlString strOld( pszOld );
PerlString strNew( pszNew );
bHasChanged=str.s( strOld, strNew );
strcpy( pszOut, str );
}
/*
* Version 1.6
* Kudos to Larry Wall for inventing Perl
* Copyrights only exist on the regex stuff, and all
* have been left intact.
* The only thing I ask is that you let me know of any nifty fixes or
* additions.
* Credits:
* I'd like to thank Michael Golan <mg@Princeton.EDU> for his critiques
* and clever suggestions. Some of which have actually been implemented
*/
#include <iostream.h>
#include <string.h>
#include <malloc.h>
#include <stdio.h>
#ifdef __TURBOC__
#pragma hdrstop
#endif
#include "_perlclass.h"
// VarString Implementation
VarString& VarString::operator=(const char *s) {
int nl= strlen(s);
if(nl+1 >= allocated) grow((nl-allocated)+allocinc);
assert(allocated > nl+1);
strcpy(a, s);
len= nl;
return *this;
}
VarString& VarString::operator=(const VarString& n) {
if(this != &n){
if(n.len+1 >= allocated){ // if it is not big enough
# ifdef DEBUG
fprintf(stderr, "~operator=(VarString&) a= %p\n", a);
# endif
delete [] a; // get rid of old one
allocated= n.allocated;
allocinc= n.allocinc;
a= new char[allocated];
# ifdef DEBUG
fprintf(stderr, "operator=(VarString&) a= %p, source= %p\n", a,n.a);
# endif
}
len= n.len;
strcpy(a, n.a);
}
return *this;
}
void VarString::grow(int n) {
if(n == 0) n= allocinc;
allocated += n;
char *tmp= new char[allocated];
strcpy(tmp, a);
#ifdef DEBUG
fprintf(stderr, "VarString::grow() a= %p, old= %p, allocinc= %d\n", tmp, a, allocinc);
fprintf(stderr, "~VarString::grow() a= %p\n", a);
#endif
delete [] a;
a= tmp;
}
void VarString::add(char c) {
if(len+1 >= allocated) grow();
assert(allocated > len+1);
a[len++]= c; a[len]= '\0';
}
void VarString::add(const char *s) {
int nl= strlen(s);
if(len+nl >= allocated) grow(((len+nl)-allocated)+allocinc);
assert(allocated > len+nl);
strcat(a, s);
len+=nl;
}
void VarString::add(int ip, const char *s) {
int nl= strlen(s);
if(len+nl >= allocated) grow(((len+nl)-allocated)+allocinc);
assert(allocated > len+nl);
memmove(&a[ip+nl], &a[ip], (len-ip)+1); // shuffle up
memcpy(&a[ip], s, nl);
len+=nl;
assert(a[len] == '\0');
}
void VarString::remove(int ip, int n) {
assert(ip+n <= len);
memmove(&a[ip], &a[ip+n], len-ip); // shuffle down
len-=n;
assert(a[len] == '\0');
}
//
// PerlString stuff
//
// assignments
PerlString& PerlString::operator=(const PerlString& n) {
if(this == &n) return *this;
pstr= n.pstr;
return *this;
}
PerlString& PerlString::operator=(const substring& sb) {
VarString tmp(sb.pt, sb.len);
pstr= tmp;
return *this;
}
// concatenations
PerlString PerlString::operator+(const PerlString& s) const {
PerlString ts(*this);
ts.pstr.add(s);
return ts;
}
PerlString PerlString::operator+(const char *s) const {
PerlString ts(*this);
ts.pstr.add(s);
return ts;
}
PerlString PerlString::operator+(char c) const {
PerlString ts(*this);
ts.pstr.add(c);
return ts;
}
PerlString operator+(const char *s1, const PerlString& s2) {
PerlString ts(s1);
ts = ts + s2;
// cout << "s2[0] = " << s2[0] << endl; // gives incorrect error
return ts;
}
// other stuff
char PerlString::chop(void) {
int n= length();
if(n <= 0) return '\0'; // empty
char tmp= pstr[n-1];
pstr.remove(n-1);
return tmp;
}
int PerlString::index(const PerlString& s, int offset) {
for(int i=offset;i<length();i++){
if(strncmp(&pstr[i], s, s.length()) == 0) return i;
}
return -1;
}
int PerlString::rindex(const PerlString& s, int offset) {
if(offset == -1) offset= length()-s.length();
else offset= offset-s.length()+1;
if(offset > length()-s.length()) offset= length()-s.length();
for(int i=offset;i>=0;i--){
if(strncmp(&pstr[i], s, s.length()) == 0) return i;
}
return -1;
}
PerlString::substring PerlString::substr(int offset, int len) {
if(len == -1) len= length() - offset; // default use rest of string
if(offset < 0){
offset= length() + offset; // count from end of string
if(offset < 0) offset= 0; // went too far, adjust to start
}
return substring(*this, offset, len);
}
// this is private
// it shrinks or expands string as required
void PerlString::insert(int pos, int len, const char *s, int nlen) {
if(pos < length()){ // nothing to delete if not true
if((len+pos) > length()) len= length() - pos;
pstr.remove(pos, len); // first remove subrange
}else pos= length();
VarString tmp(s, nlen);
pstr.add(pos, tmp); // then insert new substring
}
int PerlString::m(Regexp& r) {
return r.match(*this);
}
int PerlString::m(const char *pat, const char *opts) {
int iflg= strchr(opts, 'i') != NULL;
Regexp r(pat, iflg?Regexp::nocase:0);
return m(r);
}
int PerlString::m(Regexp& r, PerlStringList& psl) {
if(!r.match(*this)) return 0;
psl.reset(); // clear it first
Range rng;
for(int i=0; i<r.groups(); i++){
rng= r.getgroup(i);
psl.push(substr(rng.start(), rng.length()));
}
return r.groups();
}
int PerlString::m(const char *pat, PerlStringList& psl, const char *opts) {
int iflg= strchr(opts, 'i') != NULL;
Regexp r(pat, iflg?Regexp::nocase:0);
return m(r, psl);
}
//
// I know! This is not fast, but it works!!
//
int PerlString::tr(const char *sl, const char *rl, const char *opts) {
if(length() == 0 || strlen(sl) == 0) return 0;
int cflg= strchr(opts, 'c') != NULL; // thanks Michael
int dflg= strchr(opts, 'd') != NULL;
int sflg= strchr(opts, 's') != NULL;
int cnt= 0, flen= 0;
PerlString t;
unsigned char lstc= '\0', fr[256];
// build search array, which is a 256 byte array that stores the index+1
// in the search string for each character found, == 0 if not in search
memset(fr, 0, 256);
for(int i=0;i<strlen(sl);i++){
if(i && sl[i] == '-'){ // got a range
assert(i+1 < strlen(sl) && lstc <= sl[i+1]); // sanity check
for(unsigned char c=lstc+1;c<=sl[i+1];c++){
fr[c]= ++flen;
}
i++; lstc= '\0';
}else{
lstc= sl[i];
fr[sl[i]]= ++flen;
}
}
int rlen;
// build replacement list
if((rlen=strlen(rl)) != 0){
for(i=0;i<rlen;i++){
if(i && rl[i] == '-'){ // got a range
assert(i+1 < rlen && t[t.length()-1] <= rl[i+1]); // sanity check
for(char c=t[i-1]+1;c<=rl[i+1];c++) t += c;
i++;
}else t += rl[i];
}
}
// replacement string that is shorter uses last character for rest of string
// unless the delete option is in effect or it is empty
while(!dflg && rlen && flen > t.length()){
t += t[t.length()-1]; // duplicate last character
}
rlen= t.length(); // length of translation string
// do translation, and deletion if dflg (actually falls out of length of t)
// also squeeze translated characters if sflg
PerlString tmp; // need this in case dflg, and string changes size
for(i=0;i<length();i++){
int off;
if(cflg){ // complement, ie if NOT in f
char rc= !dflg ? t[t.length()-1] : '\0'; // always use last character for replacement
if((off=fr[(*this)[i]]) == 0){ // not in map
cnt++;
if(!dflg && (!sflg || tmp.length() == 0 || tmp[tmp.length()-1] != rc))
tmp += rc;
}else tmp += (*this)[i]; // just stays the same
}else{ // in fr so substitute with t, if no equiv in t then delete
if((off=fr[(*this)[i]]) > 0){
off--; cnt++;
if(rlen==0 && !dflg && (!sflg || tmp.length() == 0 ||
tmp[tmp.length()-1] != (*this)[i])) tmp += (*this)[i]; // stays the same
else if(off < rlen && (!sflg || tmp.length() == 0 || tmp[tmp.length()-1] != t[off]))
tmp += t[off]; // substitute
}else tmp += (*this)[i]; // just stays the same
}
}
*this= tmp;
return cnt;
}
int PerlString::s(const char *exp, const char *repl, const char *opts) {
int gflg= strchr(opts, 'g') != NULL;
int iflg= strchr(opts, 'i') != NULL;
int cnt= 0;
Regexp re(exp, iflg?Regexp::nocase:0);
Range rg;
if(re.match(*this)){
// OK I know, this is a horrible hack, but it seems to work
if(gflg){ // recursively call s() until applied to whole string
rg= re.getgroup(0);
if(rg.end()+1 < length()){
PerlString st(substr(rg.end()+1));
// cout << "Substring: " << st << endl;
cnt += st.s(exp, repl, opts);
substr(rg.end()+1)= st;
// cout << "NewString: " << *this << endl;
}
}
if(!strchr(repl, '$')){ // straight, simple substitution
rg= re.getgroup(0);
substr(rg.start(), rg.length())= repl;
cnt++;
}else{ // need to do subexpression substitution
char c;
const char *src= repl;
PerlString dst;
int no;
while ((c = *src++) != '\0') {
if(c == '$' && *src == '&'){
no = 0; src++;
}else if(c == '$' && '0' <= *src && *src <= '9')
no = *src++ - '0';
else no = -1;
if(no < 0){ /* Ordinary character. */
if(c == '\\' && (*src == '\\' || *src == '$'))
c = *src++;
dst += c;
}else{
rg= re.getgroup(no);
dst += substr(rg.start(), rg.length());
}
}
rg= re.getgroup(0);
substr(rg.start(), rg.length())= dst;
cnt++;
}
}
return cnt;
}
PerlStringList PerlString::split(const char *pat, int limit){
PerlStringList l;
l.split(*this, pat, limit);
return l;
}
//
// PerlStringList stuff
//
int PerlStringList::split(const char *str, const char *pat, int limit){
Regexp re(pat);
Range rng;
PerlString s(str);
int cnt= 1;
if(*pat == '\0'){ // special empty string case splits entire thing
while(*str){
s= *str++;
push(s);
}
return count();
}
if(strcmp(pat, "' '") == 0){ // special awk case
char *p, *ws= " \t\n";
TempString t(str); // can't hack users data
p= strtok(t, ws);
while(p){
push(p);
p= strtok(NULL, ws);
}
return count();
}
while(re.match(s) && (limit < 0 || cnt < limit)){ // find separator
rng= re.getgroup(0); // full matched string (entire separator)
push(s.substr(0, rng.start()));
for(int i=1;i<re.groups();i++){
push(s.substr(re.getgroup(i))); // add subexpression matches
}
s= s.substr(rng.end()+1);
cnt++;
}
if(s.length()) push(s);
if(limit < 0){ // strip trailing null entries
int off= count()-1;
while(off >= 0 && (*this)[off].length() == 0){ off-- ;}
splice(off+1);
}
return count();
}
PerlString PerlStringList::join(const char *pat) {
PerlString ts;
for(int i=0;i<count();i++){
ts += (*this)[i];
if(i<count()-1) ts += pat;
}
return ts;
}
PerlStringList::PerlStringList(const PerlStringList& n) {
for(int i=0;i<n.count();i++){
push(n[i]);
}
}
PerlStringList& PerlStringList::operator=(const PerlList<PerlString>& n) {
if(this == &n) return *this;
// erase old one
reset();
for(int i=0;i<n.count();i++){
push(n[i]);
}
return *this;
}
int PerlStringList::m(const char *rege, const char *targ, const char *opts)
{
int iflg= strchr(opts, 'i') != NULL;
Regexp r(rege, iflg?Regexp::nocase:0);
if(!r.match(targ)) return 0;
Range rng;
for(int i=0;i<r.groups();i++){ // push $&, $1, $2, ... onto the list
rng= r.getgroup(i);
push(PerlString(targ).substr(rng.start(), rng.length()));
}
return r.groups();
}
PerlStringList PerlStringList::grep(const char *rege, const char *opts) {
PerlStringList rt;
int iflg= strchr(opts, 'i') != NULL;
Regexp rexp(rege, iflg?Regexp::nocase:0); // compile once
for(int i=0;i<count();i++){
if(rexp.match((*this)[i])){
rt.push((*this)[i]);
}
}
return rt;
}
// streams stuff
istream& operator>>(istream& ifs, PerlString& s) {
char c;
#if 0
char buf[40];
#else
char buf[132];
#endif
s= ""; // empty string
ifs.get(buf, sizeof buf);
// This is tricky because a line terminated by end of file that is not terminated
// with a '\n' first is considered an OK line, but ifs.good() will fail.
// This will correctly return the last line if it is terminated by eof with the
// stream still in a non-fail condition, but at eof, so the next call will fail as
// expected.
if(ifs){ // previous operation was ok
s += buf; // append buffer to string
// cout << "<" << buf << ">" << endl;
// if it's a long line, continue appending to the string
while(ifs.good() && (c=ifs.get()) != '\n'){
// cout << "eof1= " << ifs.eof() << endl;
ifs.putback(c);
// cout << "eof2= " << ifs.eof() << endl;
if(ifs.get(buf, sizeof buf)) s += buf; // append to line
}
} // end if(ifs)
return ifs;
}
istream& operator>>(istream& ifs, PerlStringList& sl) {
PerlString s;
// Should I reset sl first?
sl.reset(); // I think so, to be consistent
while(ifs >> s){
sl.push(s);
// cout << "<" << s << ">" << endl;
};
return ifs;
}
ostream& operator<<(ostream& os, const PerlString& arr) {
#ifdef TEST
os << "(" << arr.length() << ")" << "\"";
os << (const char *)arr;
os << "\"";
#else
os << (const char *)arr;
#endif
return os;
}
ostream& operator<<(ostream& os, const PerlStringList& arr) {
for(int i=0;i<arr.count();i++)
#ifdef TEST
os << "[" << i << "]" << arr[i] << endl;
#else
os << arr[i] << endl;
#endif
return os;
}
/*
* Version 1.6
* Kudos to Larry Wall for inventing Perl
* Copyrights only exist on the regex stuff, and all have been left intact.
* The only thing I ask is that you let me know of any nifty fixes or
* additions.
*
* Credits:
* I'd like to thank Michael Golan <mg@Princeton.EDU> for his critiques
* and clever suggestions. Some of which have actually been implemented
*
* 01/08/01 (jda) - fixed PerlListBase<T> operator= prototype for VC++ error
*                  renamed regexp.h to _regexp.h for collision name with
*                  jam regexp.h
*/
#ifndef _PERL_H
#define _PERL_H
#include <string.h>
//#include "regexp.h"
// replaced as follows (jda)
#include "_regexp.h"
#if DEBUG
#include <stdio.h>
#endif
#define INLINE inline
// This is the base class for PerlList, it handles the underlying
// dynamic array mechanism
template<class T>
class PerlListBase {
private:
enum{ALLOCINC=20};
T *a;
int cnt;
int first;
int allocated;
int allocinc;
void grow(int amnt= 0, int newcnt= -1);
protected:
void compact(const int i);
public:
#ifdef USLCOMPILER
// USL 3.0 bug with enums losing the value
PerlListBase(int n= 20)
#else
PerlListBase(int n= ALLOCINC)
#endif
{
a= new T[n];
cnt= 0;
first= n>>1;
allocated= n;
allocinc= n;
# ifdef DEBUG
fprintf(stderr, "PerlListBase(int %d) a= %p\n", allocinc, a);
# endif
}
PerlListBase(const PerlListBase<T>& n);
//PerlListBase<T>& PerlListBase<T>::operator=(const PerlListBase<T>&n);
// replaced as follows (jda)
PerlListBase<T>& operator=(const PerlListBase<T>& n);
virtual ~PerlListBase(){
# ifdef DEBUG
fprintf(stderr, "~PerlListBase() a= %p, allocinc= %d\n", a, allocinc);
# endif
delete [] a;
}
INLINE T& operator[](const int i);
INLINE const T& operator[](const int i) const;
int count(void) const{ return cnt; }
void add(const T& n);
void add(const int i, const T& n);
void erase(void){ cnt= 0; first= (allocated>>1);}
};
// PerlList
class PerlStringList;
template <class T>
class PerlList: private PerlListBase<T> {
public:
PerlList(int sz= 10): PerlListBase<T>(sz){}
// stuff I want public to see from PerlListBase
T& operator[](const int i){return PerlListBase<T>::operator[](i);}
const T& operator[](const int i) const{return PerlListBase<T>::operator [](i);}
PerlListBase<T>::count;
// add perl-like synonym
void reset(void){ erase(); }
int scalar(void) const { return count(); }
operator void*() { return count()?this:0; } // so it can be used in tests
int isempty(void) const{ return !count(); } // for those that don't like the above (hi michael)
T pop(void) {
T tmp;
int n= count()-1;
if(n >= 0){
tmp= (*this)[n];
compact(n);
}
return tmp;
}
void push(const T& a){ add(a);}
void push(const PerlList<T>& l);
T shift(void) {
T tmp= (*this)[0];
compact(0);
return tmp;
}
int unshift(const T& a) {
add(0, a);
return count();
}
int unshift(const PerlList<T>& l);
PerlList<T> reverse(void);
PerlList<T> sort();
PerlList<T> splice(int offset, int len, const PerlList<T>& l);
PerlList<T> splice(int offset, int len);
PerlList<T> splice(int offset);
};
// just a mechanism for self-deleting strings which can be hacked
class TempString {
private:
char *str;
public:
TempString(const char *s) {
str= new char[strlen(s) + 1];
strcpy(str, s);
}
TempString(const char *s, int len) {
str= new char[len + 1];
if(len) strncpy(str, s, len);
str[len]= '\0';
}
~TempString(){ delete [] str; }
operator char*() const { return str; }
};
/*
* This class takes care of the mechanism behind variable length strings
*/
class VarString {
private:
enum{ALLOCINC=32};
char *a;
int len;
int allocated;
int allocinc;
INLINE void grow(int n= 0);
public:
#ifdef USLCOMPILER
// USL 3.0 bug with enums losing the value
INLINE VarString(int n= 32);
#else
INLINE VarString(int n= ALLOCINC);
#endif
INLINE VarString(const VarString& n);
INLINE VarString(const char *);
INLINE VarString(const char* s, int n);
INLINE VarString(char);
~VarString(){
# ifdef DEBUG
fprintf(stderr, "~VarString() a= %p, allocinc= %d\n", a, allocinc);
# endif
delete [] a;
}
VarString& operator=(const VarString& n);
VarString& operator=(const char *);
INLINE const char operator[](const int i) const;
INLINE char& operator[](const int i);
operator const char *() const{ return a; }
int length(void) const{ return len; }
void add(char);
void add(const char *);
void add(int, const char *);
void remove(int, int= 1);
void erase(void){ len= 0; }
};
class PerlStringList;
//
// Implements the perl specific string functionality
//
class PerlString {
private:
VarString pstr; // variable length string mechanism
public:
class substring;
PerlString():pstr(){}
PerlString(const PerlString& n) : pstr(n.pstr){}
PerlString(const char *s) : pstr(s){}
PerlString(const char c) : pstr(c){}
PerlString(const substring& sb) : pstr(sb.pt, sb.len){}
PerlString& operator=(const char *s){pstr= s; return *this;}
PerlString& operator=(const PerlString& n);
PerlString& operator=(const substring& sb);
operator const char*() const{return pstr;}
const char operator[](int n) const{ return pstr[n]; }
int length(void) const{ return pstr.length(); }
char chop(void);
int index(const PerlString& s, int offset= 0);
int rindex(const PerlString& s, int offset= -1);
substring substr(int offset, int len= -1);
substring substr(const Range& r){ return substr(r.start(), r.length ());}
int m(const char *, const char *opts=""); // the regexp match m/.../ equiv
int m(Regexp&);
int m(const char *, PerlStringList&, const char *opts="");
int m(Regexp&, PerlStringList&);
int tr(const char *, const char *, const char *opts="");
int s(const char *, const char *, const char *opts="");
PerlStringList split(const char *pat= "[ \t\n]+", int limit= -1);
int operator<(const PerlString& s) const { return (strcmp(pstr, s) < 0); }
int operator>(const PerlString& s) const { return (strcmp(pstr, s) > 0); }
int operator<=(const PerlString& s) const { return (strcmp(pstr, s) <= 0); }
int operator>=(const PerlString& s) const { return (strcmp(pstr, s) >= 0); }
int operator==(const PerlString& s) const { return (strcmp(pstr, s) == 0); }
int operator!=(const PerlString& s) const { return (strcmp(pstr, s) != 0); }
PerlString operator+(const PerlString& s) const;
PerlString operator+(const char *s) const;
PerlString operator+(char c) const;
friend PerlString operator+(const char *s1, const PerlString& s2);
PerlString& operator+=(const PerlString& s){pstr.add(s); return *this;}
PerlString& operator+=(const char *s){pstr.add(s); return *this;}
PerlString& operator+=(char c){pstr.add(c); return *this;}
friend substring;
private:
void insert(int pos, int len, const char *pt, int nlen);
// This idea lifted from NIH class library -
// to handle substring LHS assignment
// Note if subclasses can't be used then take external and make
// the constructors private, and specify friend PerlString
class substring
{
public:
int pos, len;
PerlString& str;
char *pt;
public:
substring(PerlString& os, int p, int l) : str(os)
{
if(p > os.length()) p= os.length();
if((p+l) > os.length()) l= os.length() - p;
pos= p; len= l;
if(p == os.length()) pt= 0; // append to end of string
else pt= &os.pstr[p];
}
void operator=(const PerlString& s)
{
if(&str == &s){ // potentially overlapping
VarString tmp(s);
str.insert(pos, len, tmp, strlen(tmp));
}else str.insert(pos, len, s, s.length());
}
void operator=(const substring& s) {
if(&str == &s.str){ // potentially overlapping
VarString tmp(s.pt, s.len);
str.insert(pos, len, tmp, strlen(tmp));
}else str.insert(pos, len, s.pt, s.len);
}
void operator=(const char *s) {
str.insert(pos, len, s, strlen(s));
}
};
};
class PerlStringList: public PerlList<PerlString> {
public:
PerlStringList(int sz= 6):PerlList<PerlString>(sz){}
// copy lists, need to duplicate all internal strings
PerlStringList(const PerlStringList& n);
PerlStringList& operator=(const PerlList<PerlString>& n);
int split(const char *str, const char *pat= "[ \t\n]+", int limit= -1);
PerlString join(const char *pat= " ");
int m(const char *rege, const char *targ, const char *opts=""); // makes list of sub exp matches
PerlStringList grep(const char *rege, const char *opts=""); // tries rege against elements in list
};
// This doesn't belong in any class
inline PerlStringList m(const char *pat, const char *str, const char *opts= "")
{
PerlStringList l;
l.m(pat, str, opts);
l.shift(); // remove the first element which would be $&
return l;
}
// Streams operators
template <class T>
istream& operator>>(istream& ifs, PerlList<T>& arr) {
T a;
// Should I reset arr first?
arr.reset(); // I think so, to be consistent
while(ifs >> a){
arr.push(a);
// cout << "<" << a << ">" << endl;
};
return ifs;
}
template <class T>
ostream& operator<<(ostream& os, const PerlList<T>& arr)
{
for(int i=0;i<arr.count();i++){
#ifdef TEST
os << "[" << i << "]" << arr[i] << " ";
}
os << endl;
#else
os << arr[i] << endl;
}
#endif
return os;
}
istream& operator>>(istream& ifs, PerlString& s);
istream& operator>>(istream& ifs, PerlStringList& sl);
ostream& operator<<(ostream& os, const PerlString& arr);
ostream& operator<<(ostream& os, const PerlStringList& arr);
// Implementation of template functions for perllistbase
template <class T>
INLINE T& PerlListBase<T>::operator[](const int i)
{
assert((i >= 0) && (first >= 0) && ((first+cnt) <= allocated));
int indx= first+i;
if(indx >= allocated){ // need to grow it
grow((indx-allocated)+allocinc, i+1); // index as yet unused element
indx= first+i; // first will have changed in grow()
}
assert(indx >= 0 && indx < allocated);
if(i >= cnt) cnt= i+1; // it grew
return a[indx];
}
template <class T>
INLINE const T& PerlListBase<T>::operator[](const int i) const {
assert((i >= 0) && (i < cnt));
return a[first+i];
}
template <class T>
PerlListBase<T>::PerlListBase(const PerlListBase<T>& n) {
allocated= n.allocated;
allocinc= n.allocinc;
cnt= n.cnt;
first= n.first;
a= new T[allocated];
for(int i=0;i<cnt;i++) a[first+i]= n.a[first+i];
#ifdef DEBUG
fprintf(stderr, "PerlListBase(PerlListBase&) a= %p, source= %p\n", a,
n.a);
#endif
}
template <class T>
PerlListBase<T>& PerlListBase<T>::operator=(const PerlListBase<T>& n){
// cout << "PerlListBase<T>::operator=()" << endl;
if(this == &n) return *this;
#ifdef DEBUG
fprintf(stderr, "~operator=(PerlListBase&) a= %p\n", a);
#endif
delete [] a; // get rid of old one
allocated= n.allocated;
allocinc= n.allocinc;
cnt= n.cnt;
first= n.first;
a= new T[allocated];
for(int i=0;i<cnt;i++) a[first+i]= n.a[first+i];
#ifdef DEBUG
fprintf(stderr, "operator=(PerlListBase&) a= %p, source= %p\n", a,
n.a);
#endif
return *this;
}
template <class T>
void PerlListBase<T>::grow(int amnt, int newcnt){
if(amnt <= 0) amnt= allocinc; // default value
if(newcnt < 0) newcnt= cnt; // default
allocated += amnt;
T *tmp= new T[allocated];
int newfirst= (allocated>>1) - (newcnt>>1);
for(int i=0;i<cnt;i++) tmp[newfirst+i]= a[first+i];
#ifdef DEBUG
fprintf(stderr, "PerlListBase::grow() a= %p, old= %p, allocinc= %d\n",
tmp, a, allocinc);
fprintf(stderr, "~PerlListBase::grow() a= %p\n", a);
#endif
delete [] a;
a= tmp;
first= newfirst;
}
template <class T>
void PerlListBase<T>::add(const T& n){
if(cnt+first >= allocated) grow();
a[first+cnt]= n;
cnt++;
}
template <class T>
void PerlListBase<T>::add(const int ip, const T& n){
assert(ip >= 0);
if(first == 0 || (first+cnt) >= allocated) grow();
assert((first > 0) && ((first+cnt) < allocated));
if(ip == 0){ // just stick it on the bottom
first--;
a[first]= n;
}else{
for(int i=cnt;i>ip;i--) // shuffle up
a[first+i]= a[(first+i)-1];
a[first+ip]= n;
}
cnt++;
}
template <class T>
void PerlListBase<T>::compact(const int n){ // shuffle down starting at n
int i;
assert((n >= 0) && (n < cnt));
if(n == 0) first++;
else for(i=n;i<cnt-1;i++){
a[first+i]= a[(first+i)+1];
}
cnt--;
}
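The trick behind add() and compact() above is that elements live in the middle of the allocation (`first` starts at `allocated>>1`), so prepending just decrements `first` and removing the head just increments it, with no element shuffling. An int-only, fixed-size sketch of that idea (CenterBuf is invented here, not part of the library):

```cpp
// Int-only sketch of the centered-buffer idea: elements sit in the
// middle of the array, so both ends can grow and shrink in O(1).
struct CenterBuf {
    int a[16];
    int first, cnt;
    CenterBuf() : first(8), cnt(0) {}                     // start centered
    void push(int v)    { a[first + cnt] = v; cnt++; }    // append at top
    void unshift(int v) { first--; a[first] = v; cnt++; } // prepend, no copy
    int  shift() {                                        // like compact(0)
        int v = a[first]; first++; cnt--; return v;
    }
    int  at(int i) const { return a[first + i]; }
};
```

The real template adds grow() to re-center into a bigger buffer when either end hits the edge; this sketch omits that.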
// implementation of template functions for perllist
template <class T>
void PerlList<T>::push(const PerlList<T>& l) {
for(int i=0;i<l.count();i++)
add(l[i]);
}
template <class T>
int PerlList<T>::unshift(const PerlList<T>& l) {
for(int i=l.count()-1;i>=0;i--)
unshift(l[i]);
return count();
}
template <class T>
PerlList<T> PerlList<T>::reverse(void) {
PerlList<T> tmp;
for(int i=count()-1;i>=0;i--)
tmp.add((*this)[i]);
return tmp;
}
template <class T>
PerlList<T> PerlList<T>::sort(void) {
PerlList<T> tmp(*this);
int n= tmp.scalar();
for(int i=0;i<n-1;i++)
for(int j=n-1;i<j;j--)
if(tmp[j] < tmp[j-1]){
T temp = tmp[j];
tmp[j] = tmp[j-1];
tmp[j-1]= temp;
}
return tmp;
}
template <class T>
PerlList<T> PerlList<T>::splice(int offset, int len, const PerlList<T>& l) {
PerlList<T> r= splice(offset, len);
if(offset > count()) offset= count();
for(int i=0;i<l.count();i++){
add(offset+i, l[i]); // insert into list
}
return r;
}
template <class T>
PerlList<T> PerlList<T>::splice(int offset, int len) {
PerlList<T> r;
if(offset >= count()) return r;
for(int i=offset;i<offset+len;i++){
r.add((*this)[i]);
}
for(int i=offset;i<offset+len;i++)
compact(offset);
return r;
}
template <class T>
PerlList<T> PerlList<T>::splice(int offset) {
PerlList<T> r;
if(offset >= count()) return r;
for(int i=offset;i<count();i++){
r.add((*this)[i]);
}
int n= count(); // count() will change so remember what it is
for(int i=offset;i<n;i++)
compact(offset);
return r;
}
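The three splice overloads mirror Perl's splice: remove `len` elements at `offset` (or everything from `offset` on) and return what was removed, optionally inserting a replacement list. A rough std::vector analogue of the two-argument form, for illustration only (no bounds clamping, unlike the template above):

```cpp
#include <vector>

// Rough std::vector analogue of splice(offset, len): remove len
// elements starting at offset and return them.
std::vector<int> vec_splice(std::vector<int>& v, int offset, int len) {
    std::vector<int> removed(v.begin() + offset, v.begin() + offset + len);
    v.erase(v.begin() + offset, v.begin() + offset + len);
    return removed;
}
```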
// VarString Implementation
INLINE VarString::VarString(int n) {
a= new char[n];
*a= '\0';
len= 0;
allocated= n;
allocinc= n;
# ifdef DEBUG
fprintf(stderr, "VarString(int %d) a= %p\n", allocinc, a);
# endif
}
INLINE VarString::VarString(const char* s) {
int n= strlen(s) + 1;
a= new char[n];
strcpy(a, s);
len= n-1;
allocated= n;
allocinc= ALLOCINC;
# ifdef DEBUG
fprintf(stderr, "VarString(const char *(%d)) a= %p\n", allocinc, a);
# endif
}
INLINE VarString::VarString(const char* s, int n) {
a= new char[n+1];
if(n) strncpy(a, s, n);
a[n]= '\0';
len= n;
allocated= n+1;
allocinc= ALLOCINC;
# ifdef DEBUG
fprintf(stderr, "VarString(const char *, int(%d)) a= %p\n", allocinc, a);
# endif
}
INLINE VarString::VarString(char c) {
int n= 2;
a= new char[n];
a[0]= c; a[1]= '\0';
len= 1;
allocated= n;
allocinc= ALLOCINC;
# ifdef DEBUG
fprintf(stderr, "VarString(char (%d)) a= %p\n", allocinc, a);
# endif
}
INLINE ostream& operator<<(ostream& os, const VarString& arr) {
#ifdef TEST
os << "(" << arr.length() << ")" << (const char *)arr;
#else
os << (const char *)arr;
#endif
return os;
}
INLINE const char VarString::operator[](const int i) const {
assert((i >= 0) && (i < len) && (a[len] == '\0'));
return a[i];
}
INLINE char& VarString::operator[](const int i) {
assert((i >= 0) && (i < len) && (a[len] == '\0'));
return a[i];
}
INLINE VarString::VarString(const VarString& n) {
allocated= n.allocated;
allocinc= n.allocinc;
len= n.len;
a= new char[allocated];
strcpy(a, n.a);
#ifdef DEBUG
fprintf(stderr, "VarString(VarString&) a= %p, source= %p\n", a, n.a);
#endif
}
#endif
/*
* version 1.6
* Regexp is a class that encapsulates the Regular expression
* stuff. Hopefully this means I can plug in different regexp
* libraries without the rest of my code needing to be changed.
*
* 01/08/01 (jda) - renamed regexp.h to _regexp.h for collision name with
*                  jam regexp.h
*
*/
#ifndef _REGEXP_H
#define _REGEXP_H
#include <iostream.h>
#include <stdlib.h>
#include <malloc.h>
#include <string.h>
#include <assert.h>
#include <ctype.h>
//#include "regex.h"
// replaced as follows (jda)
extern "C" {
#include "regexp.h"
}
/*
* Note this is an inclusive range where it goes
* from start() to, and including, end()
*/
class Range {
private:
int st, en;
public:
Range() { st=0; en= -1; }
Range(int s, int e) { st= s; en= e; }
int start(void) const { return st;}
int end(void) const { return en;}
int length(void) const { return (en-st)+1;}
};
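Because both ends are inclusive, a one-character match at offset 5 is Range(5, 5) with length 1, and the default Range(0, -1) has length 0. Restated standalone (InclusiveRange is a name invented for this sketch):

```cpp
// The inclusive-range convention above: length = (end - start) + 1,
// and the default (0, -1) represents an empty range.
struct InclusiveRange {
    int st, en;
    InclusiveRange(int s, int e) : st(s), en(e) {}
    int length() const { return (en - st) + 1; }
};
```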
class Regexp {
public:
enum options {def=0, nocase=1};
private:
regexp *repat;
const char *target; // only used as a base address to get an offset
int res;
int iflg;
#ifndef __TURBOC__
void strlwr(char *s) {
while(*s){
*s= tolower(*s);
s++;
}
}
#endif
public:
Regexp(const char *rege, int ifl= 0) {
iflg= ifl;
if(iflg == nocase){ // lowercase fold
char *r= new char[strlen(rege)+1];
strcpy(r, rege);
strlwr(r);
if((repat=regcomp(r)) == NULL){
cerr << "regcomp() error" << endl;
exit(1);
}
delete [] r;
}else{
if((repat=regcomp (rege)) == NULL){
cerr << "regcomp() error" << endl;
exit(1);
}
}
}
~Regexp() { free(repat); }
int match(const char *targ) {
int res;
if(iflg == nocase){ // fold lowercase
char *r= new char[strlen(targ)+1];
strcpy(r, targ);
strlwr(r);
res= regexec(repat, r);
target= r; // looks bad but is really ok, really
delete [] r;
}else{
res= regexec(repat, targ);
target= targ;
}
return ((res == 0) ? 0 : 1);
}
int groups(void) const {
int res= 0;
for(int i=0;i<NSUBEXP;i++){ // count non-empty groups
if(repat->startp[i] == NULL) break;
res++;
}
return res;
}
Range getgroup(int n) const {
assert(n < NSUBEXP);
return Range((int)(repat->startp[n] - (char *)target),
(int)(repat->endp[n] - (char *)target) - 1);
}
};
#endif
From: "Fabio Parodi" <fabio.parodi@libero.it>
Date: Tue, 13 Mar 2001 11:43:07 +0100
Subject: jam for Windows 98 - spawn: No such file or directory
I am trying to use jam on windows 98. The fourth phase of operation fails:
spawn: No such file or directory
The problem is in function execmd, file execunix.c. I guess this depends on
the differences in the fork/exec model, between 98 and NT. Has anyone a
workaround ready?
Date: Tue, 13 Mar 2001 13:20:26 +0100
From: Rainer Wiesenfarth <Rainer.Wiesenfarth@inpho.de>
Subject: Using jam with Trolltech's Qt
We have a product based on Trolltech's Qt library. This product uses
a Rule and Action for the "moc" tool that work quite well. However, Qt
now includes a GUI builder that also uses another tool ("uic"). I
tried to define Rule(s) and Action(s) for this as well, but as I am a
"jam-rookie", I did not succeed.
Has anyone an idea how to handle this?
For those that do not know about the tools, I try to describe it:
mygui.ui =(uic)=> mygui.h
=(uic)=> mygui.cc
mygui.h =(moc)=> mygui.moc
mygui.c <-dep--- mygui.h (#include'd)
myguiimpl.h <-dep--- mygui.h (#include'd)
=(moc)=> myguiimpl.moc
myguiimpl.cc <-dep--- mygui.h (#include'd)
<-dep--- mygui.moc (#include'd)
<-dep--- myguiimpl.moc (#include'd)
mytarget <-dep--- myguiimpl.cc
<-dep--- mygui.ui
where "=(tool)=>" means "generates using tool" and "<-dep---" means
"depends on".
Sorry, this may be explained badly, but I find it hard to
describe. The files known to be present are mygui.ui, myguiimpl.h,
and myguiimpl.cc; the other files are generated.
Date: Tue, 13 Mar 2001 15:54:28 +0100
From: Arnt Gulbrandsen <arnt@trolltech.com>
Subject: Re: Using jam with Trolltech's Qt
I can suggest one way, yes. I don't use uic myself, but I do use Jam and
Qt, and based on my Moc rule I can write a Uic rule that should work. You
may have to tinker a bit.
First of all, here's the Moc stuff I use. From Jamrules:
rule Moc {
TEMPORARY $(<) ;
NOCARE Moc ;
NOTFILE Moc ;
Clean $(<) ;
RmTemps $(<:S=.o) : $(<) ;
# are both of the following necessary?
Depends $(<) : $(>) ;
Depends $(<:S=.o) : $(<) ;
}
actions Moc { $(RM) $(<) }
And in Jamfile, I do something like this
Main myapp : ... .moc.cpp ;
LINKLIBS on myapp += -L$QTDIR/lib -lqt ;
Moc .moc.cpp : header1.h header2.h ... headern.h ;
This causes jam to keep a single .o file around, .moc.o, and says that
.moc.o depends on the temporary file .moc.cpp, which in turn depends on
all the header files named by the Moc rule. Jam will figure out whether it
needs to create .moc.cpp and update .moc.o, and if it does it will delete
.moc.cpp again after compiling it.
I hope this is understandable. I'll use it to build an Uic rule that
generates one .h and one .cpp file from one .ui file. Since Jam doesn't
really like multi-target rules, I hack.
My goal is to have a rule that I can use like this:
Uic mumble.h : mumble.ui ;
And to achieve it, I'll crudely hack that rule so that it'll make a
mumble.cpp file as well.
rule Uic {
TEMPORARY $(<:S=.cpp) ; # the .cpp file is temporary
Clean $(<) ; # jam clean deletes the .h
Clean $(<:S=.cpp) ; # jam clean deletes the .cpp
RmTemps $(<:S=.o) : $(<:S=.cpp) ; # jam deletes .cpp after compiling .o
Depends $(<) : $(>) ; # the .h file depends on the .ui file
Depends $(<:S=.cpp) : $(>) ; # the .cpp file depends on the .ui file
Depends $(<:S=.o) : $(<:S=.cpp) ; # the .o file depends on the .cpp file
}
I don't have time to dig into uic right now. The action for Uic has to be
something like this:
actions Uic { uic $(>) -o $(<) }
I can probably look into it more thoroughly tomorrow, but I'm pressed for time today.
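Putting Arnt's pieces together, a Jamfile fragment using the Moc and Uic rules above might look like this (all file names here are hypothetical, and this sketch is untested):

```
Main myapp : main.cpp myguiimpl.cpp mygui.cpp .moc.cpp ;
LINKLIBS on myapp += -L$QTDIR/lib -lqt ;
Uic mygui.h : mygui.ui ;               # the hacked rule also produces mygui.cpp
Moc .moc.cpp : mygui.h myguiimpl.h ;
```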
From: "Fabio Parodi" <fabio.parodi@libero.it>
Date: Fri, 16 Mar 2001 14:43:25 +0100
Subject: jam for windows 9x
I needed a version of Jam for windows 9x. I made some small changes to
2.3.1 and now it works fine. The actions are serialized, one at a time.
It needs a working rm.
It autodetects the os, so the same Jam.exe works well on Windows NT and 9x.
I had some problems with a firewall and could not put the sources in the
perforce public depot. Please find the sources attached. I'd like to see this in
the next official release.
From: Matt Bruce <mbruce@instipro.com>
Date: Mon, 19 Mar 2001 11:44:45 -0500
Subject: jamming apache
Has anyone had any luck getting apache to build with Jam?
I'm trying to build it on Solaris and was hoping someone had
a nice Jamfile to speed things up.
Date: Wed, 28 Mar 2001 00:39:02 -0800 (PST)
Subject: JAM documentation
I am in need of proper documentation on JAM.
I am unable to configure JAM and run a small program that compiles a .c file.
I think a good starter is
http://public.perforce.com/public/jam/src/Jamfile.html
which is an easy reading on creating a Jamfile and using jam.
Date: Tue, 27 Mar 2001 23:00:08 -0800 (PST)
Subject: Documentation on JAM
I am in urgent need of detailed documentation of JAM
with lots of examples about how to run programs with
it. I am finding it very difficult without documentation.
Can anyone kindly tell me where I can get it?
From: Leon Glozman <leon.glozman@schema.com>
Date: Thu, 29 Mar 2001 17:47:29 +0200
Subject: Link of library with other library
I compile my project in WATCOM. I have libraries that are linked with other libraries.
The JAM supports executable linkage with libraries (rule LinkLibraries), but
not library with library.
From: Amaury.FORGEOTDARC@ubitrade.com
Subject: Re: Link of library with other library
Date: Fri, 30 Mar 2001 09:54:45 +0100
(I didn't know it was possible to link a library with another library.)
This is not part of the standard Jambase,
but it seems easy to write in Jamfile:
Library main.lib : file1.c file2.c ;
Archive main.lib : second.lib ;
Depends main.lib : second.lib ;
and main.lib is built with file1.obj, file2.obj and second.lib.
This is done in a single invocation of wlib.
This works at least with MSVCNT, where .obj and .lib arguments can be mixed.
After a quick look at the wlib doc, it seems to work also with WATCOM.
From: "David Abrahams" <abrahams@mediaone.net>
Date: Sun, 1 Apr 2001 17:07:32 -0400
Subject: negative success
I have some unit tests which are expected to fail compilation and/or linking
if everything is working correctly. Is it possible to keep Jam from halting
in these cases? The only approach I can imagine involves building a negation
tool which passes its arguments to system() and negates the result. I'd
rather not have to do that.
From: matt@corp.phone.com
Subject: Re: negative success
Date: 02 Apr 2001 09:09:28 -0700
That is in fact what you have to do.
Unless you don't care whether the commands succeed or fail. Then you
can use "actions ignore Foo {}"
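The "negation tool" David describes is small. A sketch of its core, assuming a POSIX-ish system() (run_negated is a made-up name; a real tool would rebuild the command line from argv in main and return this value as its exit status):

```c
#include <stdlib.h>

/* Core of the negation tool: run a command via system() and invert
 * the result, so the wrapper succeeds only when the command fails. */
int run_negated(const char *cmd) {
    int rc = system(cmd);
    return rc == 0 ? 1 : 0;   /* 0 = wrapper success = command failed */
}
```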
Date: Mon, 2 Apr 2001 19:59:08 -0700
From: "Chris Antos" <chrisant@Exchange.Microsoft.com>
Subject: DST and Jam
Is anyone else finding that with MSVCNT (VC6), Jam's .obj archiving
doesn't work during daylight savings time? Jam expects the timestamp in
the .lib file to be in UTC, but it's off by one hour. This throws off
the dependencies quite badly. Anyone have a workaround or fix?
Date: Mon, 2 Apr 2001 22:33:13 -0500 (CDT)
Subject: Re: DST and Jam
funny you should mention this. we are having this problem with
gmake where it thinks everything is an hour in the future, which
upsets it mightily. I switched the time back to regular and moved
the time 1 hr ahead and it started working again.
I haven't noticed the problem with jam yet, but it may be just a little
more hidden than gmake's squawking.
I think that the time offset is not corrected for daylight savings
time, and this screws things up.
I'd appreciate any *real* fix for this problem.
ps no problem on solaris, just nt 4 sp6
Subject: RE: DST and Jam
Date: Mon, 2 Apr 2001 21:19:16 -0700
From: "Chris Antos" <chrisant@Exchange.Microsoft.com>
I found an MSDN article that says there was a bug in the C runtime time
conversion logic that will manifest itself for one week during the week
of April 1, 2001, and is self-correcting after that period. I'm not
surprised: my Palm handheld also freaked out this year about DST (the
3rd party DST auto-switcher thought "last Sunday in March" was the
rule). The DST transition rules are more complicated than that, and
this year seems to hit a boundary case in the rules.
Anyway, the bug is supposedly fixed by VC6 SP3, and is not related to
Y2K at all. I had VC6 SP4, but to be safe I just now upgraded to VS6
SP5 (couldn't find VC6 SP5) and rebuilt Jam, and the problem seems to
have disappeared. YMMV.
Date: Tue, 03 Apr 2001 14:55:36 +1000
From: Graeme Gill <graeme@colorbus.com.au>
Subject: Re: DST and Jam
I haven't noticed any problems, but then my version of Jam
was modified some time ago (June '97) to use Win32 GetFileTime(),
so that it would compile with the IBM WinNT compiler (I posted
my changes to this list at the time.)
Note that Win98 has a problem in that its FILETIME is
always local, not GMT.
If you're interested I can mail you my filent.c, but you
may need to port it to the latest Jam revision.
From: "RobertWoodcock" <rmw@fractalgraphics.com.au>
Date: Tue, 3 Apr 2001 13:04:39 +0800
Subject: Header dependency caching
Reading through the archives I noted a few places where people had mentioned
modifications to jam that support caching of the file header dependency
checks. I couldn't find any postings of the source modifications though. If
anyone is willing to put these up or give some pointers on what needs to be
done, it would be appreciated. Since moving to jam our build times have
dropped dramatically, but when making small changes the dependency checking
takes far too long for our patience.
Also, a special thank you to all of you who have posted solutions to the
mailing list over the past year or so. I read through the archives when
setting up our new system, and suffice to say your input made the job much,
much easier.
Date: Tue, 3 Apr 2001 00:10:15 -0500 (CDT)
Subject: Re: DST and Jam
check out http://news.cnet.com/news/0-1007-200-5424581.html?tag=nbs
From: mihir@eparle.com
Date: Wed, 28 Mar 2001 12:28:36 +0100
Subject: JAM documentation
I am in urgent need of detailed documentation of JAM with lots of
examples of how to run programs with it. I am finding it very
difficult without documentation.
Is it available on a website?
Can anyone kindly tell me where I can get it?
Date: Wed, 04 Apr 2001 16:34:14 +0200 (CEST)
From: Werner LEMBERG <wl@gnu.org>
Subject: omissions in the manual
I couldn't find documentation about the macro delimiting tokens `['
and `]'. Similarly, the `return' keyword is undocumented.
That `Depends' and `DEPENDS' are the same isn't documented
either... Is there any reason why both forms are used in Jambase? I
consider this quite irritating.
Date: Wed, 04 Apr 2001 16:30:43 +0200 (CEST)
From: Werner LEMBERG <wl@gnu.org>
Subject: targets and variables
I have a problem with setting variables for a specific target.
In our library (FreeType) we have two compile `modes'
. compile master1.c which itself includes file11.c, file12.c, ...
compile master2.c which itself includes file21.c, file22.c, ...
...
build library from master1.o, master2.o, ...
This is the default.
. compile file11.c, file12.c, ..., file21.c, file22.c, ...
...
build library from file11.o, file12.o, ...
This should be target `multi'.
The solution with GNU make is to scan MAKECMDGOALS, which contains all
command line targets. If `multi' has been found, a variable `MULTI'
is set which can then be used during the parse phase to select
the proper set of files to compile. It seems to me that this feature
is not available with jam. Is this intentional?
I tried some hours for a workaround but wasn't successful (I know that
`jam -sMULTI=true' would work).
A MAKECMDGOALS variable within jam would be useful for other things
also. For example, another target in FreeType is `devel' which
bypasses the configure script and sets a bunch of special compilation flags.
From: "David Abrahams" <abrahams@altrabroadband.com>
Date: Tue, 10 Apr 2001 20:47:54 -0400
Subject: Bug Fix
The built-in FConcat rule doesn't work. Here is a fix. This version takes an
optional second parameter which acts as a separator:
rule FConcat {
# Puts the variables together, removing spaces.
local _t _r ;
_r = $(<[1]) ;
local sep = $(>) ;
if ! $(sep) { sep = "" ; }
for _t in $(<[2-]) {
_r = $(_r)$(sep)$(_t) ;
}
return $(_r) ;
}
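A quick usage sketch of FConcat with the optional separator argument (typed in, not verified against a particular jam build):

```
parts = one two three ;
joined = [ FConcat $(parts) : "-" ] ;
ECHO $(joined) ;   # should print: one-two-three
```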
From: "Malloy, Michael" <MMalloy@TRADEC.com>
Date: Mon, 9 Apr 2001 18:20:22 -0700
Subject: Using JAM with VB and VC++
The project I am working on is heavily dependent on Visual Basic. Has
anyone used Jam with VB and other Visual Studio products? In particular, I
need rules to create .ocx files based on .bas and .frm files. Then, .cab
files are needed to be created from the resulting executable files. If
anyone has already done this, please let me know!
From: "David Abrahams" <abrahams@altrabroadband.com>
Date: Wed, 11 Apr 2001 08:45:39 -0400
Subject: Bug Report
I don't know if anybody is maintaining Jam at the moment, but...
The following rule produces surprising (at least!) results when the marked
lines are interchanged:
rule split-path {
local parent = $(<:P) ;
if ! $(parent) {
ECHO "split-path =" $(<) ; ######
return $(<) ; ######
} else {
local p ;
p = [ split-path $(parent) ] ;
local b = $(<:B) ;
p += $(b) ;
ECHO "split-path =" $(p) ;
return $(p) ;
}
}
ECHO [ split-path a/b ] ;
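A plausible explanation (hedged; not verified against the jam source): in classic Jam, `return` sets a rule's result but does not stop execution, so a statement placed after it can overwrite that result. Reduced to a toy case:

```
rule demo {
    return ok ;
    ECHO "still runs" ;   # executes anyway; being the last statement,
                          # it can leave the rule's result empty
}
ECHO [ demo ] ;
```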
From: "David Abrahams" <abrahams@mediaone.net>
Date: Tue, 24 Apr 2001 16:12:58 -0400
Subject: Bug (?) report
clearing the grist on a variable seems to "normalize" the path slashes, at
least under Win32:
a = <foo>bar/baz ;
ECHO $(a:G=) ;
prints:
bar\baz
This behavior is at the very least surprising!
From: "David Abrahams" <abrahams@mediaone.net>
Date: Tue, 24 Apr 2001 17:17:11 -0400
Subject: bug (?) report
The result of using :D or :P on a path with no $(SLASH) separators seems to
be not an empty list, but an empty string. This makes code like the
following fail to work as expected:
$(x:D?=$(DOT))
or,
if $(x:D) { ... }
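Pending a fix in jam itself, one workaround is to test against the empty string explicitly rather than relying on an empty list (a sketch, assuming classic jam's `=` comparison in conditions):

```
local d = $(x:D) ;
if $(d) = "" {
    d = $(DOT) ;   # substitute "." by hand, since $(x:D?=...) won't fire
}
```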
From: "Dowdy, Mark" <mark@ciena.com>
Date: Tue, 8 May 2001 10:26:59 -0700
Subject: What Happened to Jamlang.html?
I apologize if I missed this discussion earlier, but
what happened to jamlang.html? Sadly, Christopher
deleted this file with the 2.3 submission. Personally,
I find this to be the most useful reference document
for Jam. I know I can always use an old version of the
file but I would think new users of Jam would find the
information in this file useful.
Date: Wed, 9 May 2001 11:21:36 +0200
Subject: Bad "warning using independent target" messages
Sometimes I get the following warning:
"warning using independent target xy."
But target 'xy' is not independent, or at least I couldn't find any reason
for this message...
Has anyone experienced such problems with Jam 2.3?!
By debugging into the Jam executable i found the following:
This message comes from:
make1bind() [make1.c line 624]
which is called by
make1list() [make1.c line 549]:
>/* Sources to 'actions existing' are never in the dependency */
>/* graph (if they were, they'd get built and 'existing' would */
>/* be superfluous, so throttle warning message about independent */
>/* targets. */
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Is '!(flag & RULE_EXISTING)' correct here?! From the comment I'd rather
write
'(flag & RULE_EXISTING)'.
From: <boga@mac.com>
Date: Thu, 10 May 2001 15:41:21 +0200
Subject: Recovering from errors
If there's a syntax error in the jam makefile, jam continues execution.
(WinNT)
Is this useful for someone?
Isn't it dangerous? Some variables point to source directories and
others are going to be cleaned, so a bad Jamfile might clean your sources.
Date: Mon, 14 May 2001 15:56:39 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: Jam binaries for Windows 95/98 and OS/2
Thanks for the fix, I'll include it shortly. For those that may be
interested in them, I've put my version of Jam in the Perforce public
depot, under the path //guests/david_turner/jam/src.
A summary of the changes applied is readable at:
http://www.freetype.org/jam/index.html
From: "Fabio Parodi" <fabio.parodi@fredreggiane.com>
Date: Mon, 14 May 2001 15:16:02 +0200
Subject: Re: Jam binaries for Windows 95/98 and OS/2
I've only found one bug. It happens when the action specified in Jambase is
large:
the program tries to execute it as a batch file, but the name of the
temporary file is NULL.
It was easy to correct. Now it works fine on my windows 98 box.
Please find the attachment.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Tue, 22 May 2001 17:59:27 +0100
Subject: Generated Header files
I'm just starting out using Jam, so I apologise if this is old hat. I'm
investigating using it to replace make(1) on a medium-sized body of code, by
attempting to build some of the libraries.
I've got a directory containing a bunch of .cpp files. These are to be
built into a library. Some of these include a file called errors.h, which
is generated from a corresponding errors.mes file (it's a list of #define
error codes, and the corresponding text).
In short, I'd like to do the following:
Library something : a.cpp b.cpp c.cpp errors.mes ;
and have it invoke the relevant command to build errors.h from errors.mes,
then build something.lib from the .cpp files.
At present, of course, it attempts to build a .obj file from the .mes file,
and then UserObject complains.
My questions:
a) How do I specify that the .cpp files are dependent on errors.h? Do I need to?
b) How do I specify that errors.h is dependent on errors.mes? How do I do
this in a nice parameterised way? I've got .mes files in some of the other
library directories.
c) How do I tell Jam what commands to execute? It's got to run a Python
script to generate the header.
Date: Tue, 22 May 2001 19:24:39 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Generated Header files
In your Jamfile:
MyCustomRule errors.h : errors.mes ;
Writing MyCustomRule is left as an exercise for the reader.
The actions for MyCustomRule run your script.
rule MyCustomRule {
Depends $(<) : $(>) ;
Clean $(<) ; # so that jam clean will kill errors.h
}
Assuming that the script has -i input -o output:
actions MyCustomRule {
myScript -o $(<) -i $(<)
}
Warning: this is all typed straight in. Hopefully it'll make sense, though.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: Generated Header files
Date: Tue, 22 May 2001 19:02:30 +0100
Hmmm, that didn't work. Maybe I'm being stupid. Probably am.
I've pared the problem down to bare essentials. My Jamfile looks like this:
# Jamfile
if $(NT) { C++FLAGS += -D "WIN32" -D "WINDOWS" -D "_WIN32" -D "_WINDOWS"
; }
Main bing : bing.cpp ;
bing.cpp looks like this:
// bing.cpp
#include "bing_errors.h"
int main(void) { return 0; }
This fails because the compiler can't find 'bing_errors.h'
So I add (before the Main clause), the following:
rule ErrBuild { #1
Depends $(<) : $(>) ;
Clean $(<) ;
}
actions ErrBuild { #2 copy $(>) $(<) }
ErrBuild bing_errors.h : bing_errors.mes ; #3
Same error.
Interestingly, if I rename bing_errors.mes to something else, I get
"...skipped bing_errors.h for lack of bing_errors.mes..."
...which suggests that jam knows that it wants a .mes file for the .h file.
I guess that this is done by the #1 and #3 stuff, yes?
However, it doesn't seem to know anything about the #2 stuff. What have I
got wrong? It's probably something simple.
Do I need to add bing_errors.mes to the Main line? If I do, I get the
UserObject error.
Subject: Re: [ Arnt Gulbrandsen ] Generated Header files
Date: Tue, 22 May 2001 13:52:28 -0500
From: "Gregg G. Wonderly" <gregg@skymaster.c2-tech.com>
One of the above '$(<)' should be '$(>)' should it not?
Date: Tue, 22 May 2001 22:37:22 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: [ Arnt Gulbrandsen ] Generated Header files
You're right, of course.
myScript -o $(>) -i $(<)
Date: Tue, 22 May 2001 22:59:14 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Generated Header files
The "...skipped" message means that jam didn't even try to run the #2
stuff, because bing_errors.mes didn't exist.
I suspect that if you touch bing_errors.h into existence, all will be
well until the next time you delete it.
Here's my guessw^Wreasoning. Jam doesn't attempt to build .h files that it
sees included. It will use such a dependency to decide whether or not to
compile the .c, but if the .h does not exist, jam assumes it's a case of
#if defined(UNIX)
#include <mumble.h>
#else
#include <gargle.h>
#endif
where exactly one of mumble.h or gargle.h exists on any given system, but
compilation works nevertheless.
So, how to fix it properly? Adding a hard dependency should do it. Perhaps
something like this, but I'm hoping one of the perforce people will Know:
rule Depends { Depends $(<) : $(>) ; }
Depends bing.cpp : bing_errors.h ;
I've never tried this :)
Date: Tue, 22 May 2001 16:01:27 -0700 (PDT)
Subject: Re: Generated Header files
Yep -- there's nothing about Jam's checking for included headers that does
anything about building generated header files.
It'd probably be better to keep things more general. What you want to do
is say "generated header files need to get built before any object files".
So create a rule (say, GenHdr) that depends on the "files" pseudo-target.
For example:
rule GenHdr {
Depends files : $(<) ;
Depends $(<) : $(>) ;
Clean clean : $(<) ;
}
actions GenHdr {
copy $(>) $(<)
}
Then, in your Jamfile, you'd have:
Main bing : bing.cpp ;
GenHdr bing_errors.h : bing_errors.mes ;
(If your generated headers are always <something>_errors.mes -> .h, you
could beef up the rule to handle the filename/suffix stuff so you don't
need to include all that in your Jamfile and could instead just have
"GenHdr bing ;" -- I was just too lazy to do it myself :)
Now when you run 'jam', it'll do:
[BINKY:dianeh]: jam -n
...found 12 target(s)...
...updating 3 target(s)...
GenHdr bing_errors.h
cp bing_errors.mes bing_errors.h
C++ bing.o
cc -c -D__cygwin__ -O -o bing.o bing.cpp
Link bing.exe
gcc -D__cygwin__ -o bing.exe bing.o
Chmod1 bing.exe
chmod 711 bing.exe
...updated 3 target(s)...
Date: Tue, 22 May 2001 19:32:14 -0400 (EDT)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Generated Header files
We ran into this problem earlier and took a slightly different approach.
There is a problem with 'files' depending on the generated headers, as
there is not a direct dependency between the object file which requires the
header, and files. When doing a build with multiple jobs, there is nothing
in the dependency graph to stop Jam from building the objects before the
files (and failing). Running Jam with a single job will probably give what
you expect, as it just so happens that the order of the graph has 'files'
before 'obj'.
A slightly more complex alternative, which works with multiple jobs is
to create a new node - derived-files.
Depends all : derived-files ;
NOTFILE derived-files ;
NOUPDATE derived-files ; # (requires the NOUPDATE fix posted a while ago.)
and then do not create any object before all the derived-files are built.
In the 'rule Object' code, add:
Depends $(<) : derived-files ;
...or something like that.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Generated Header files
Date: Tue, 22 May 2001 16:52:49 -0700
I believe that Jam would build included the header files except that
they are marked as NOCARE in HdrRule.
In fact, I had the same problem and worked around it as
Diane suggests -- but only because in my case I had
include files driving included files (CORBA, don't you know).
What I was tempted to do (and did attempt, but it did not solve
my problem) was to have NOCARE be a NOOP if the target had an
action attached to it.
This works because NOCARE was designed to allow Jam to ignore
platform specific header files which might be hidden behind
#ifdefs.
If in your local code, you never use ifdefs to include/hide
LOCAL header files, then you could modify the HdrRule to
only NOCARE header files which are not part of your project.
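One hedged sketch of that idea, guarding the NOCARE in HdrRule (abridged: the stock rule also sets HDRSCAN, HDRRULE, etc. on the headers; PROJECT_HDRS is a hypothetical list of your own header basenames):

```
rule HdrRule {
    local _h = $(>:G=$(HDRGRIST:E)) ;
    local _external _f ;
    INCLUDES $(<) : $(_h) ;
    SEARCH on $(_h) = $(HDRSEARCH) ;
    # only NOCARE headers that are not part of this project,
    # so generated project headers still get built
    for _f in $(_h) {
        if ! $(_f:BS) in $(PROJECT_HDRS) { _external += $(_f) ; }
    }
    NOCARE $(_external) ;
}
```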
Date: Tue, 22 May 2001 21:28:44 -0400 (EDT)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Generated Header files
I think my previous post warning about multiple jobs was misleading,
although I believe that solution works. There are other things going
on in our Jambase which required that approach - grist I believe.
If A.c has '#include <der.h>'
and <der.h> is created from der.tmpl, the graph should be something like:
Depends A.o : A.c ;
INCLUDES A.c : der.h ;
NOCARE der.h ;
Depends der.h : der.tmpl ;
I don't think the NOCARE rule interferes with this - since the
header has the same dependency graph node name, everything works out. The
trick to making that work is either to avoid grist on derived files or to
somehow know what grist to apply in the HdrRule. I think avoiding grist
demands that header files have unique names across the system you're
building. The GenFile rule avoids grist on header files, unless SOURCE_GRIST
is set.
The HdrRule is going to be called as:
HdrRule A.c : der.h ... ;
If you know which <der.h> is intended, you can INCLUDE it and the
dependencies will work out. If there is more than one <der.h> that may be
intended, depending on the SEARCH path at bind time, then you're in
trouble. In that case my prior post gives an approach to take - build all
derived files before any objects.
Date: Wed, 23 May 2001 08:26:46 +0200
Subject: Re: jamming digest, Vol 1 #196 - 3 msgs
I think the problem is caused by the
"NOCARE bing_errors.h ;"
call in HdrRule.
So actually you have to explicitly write that bing.cpp includes
bing_errors.h !
Try adding:
HDRGRIST = "hdr" ;
INCLUDES bing.cpp : bing_errors.h ;
Or:
INCLUDES bing.cpp : <generated>bing_errors.h ;
ErrBuild <generated>bing_errors.h : bing_errors.mes ; # Replaces 3
From: Tony Smith <tony@perforce.com>
Subject: Re: Generated Header files
Date: Wed, 23 May 2001 11:28:22 +0100
The simplest way I've found is to use GenFile which does that for you.
Again, use GenFile. Here's a small example:
rule GenHdr {
Depends first : $(<) ;
GenFile $(<) : $(>) ;
}
GenHdr errors.h : genfile.py errors.mes ;
Library mylib : file.c ;
Note the additional bit of trickery in that the rule GenHdr makes the target
of the rule dependent on the pseudotarget "first". That'll make jam build the
header file very early on so you can be sure it's there by the time the
library gets built. That should avoid your UserObject errors.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: Generated Header files
Date: Wed, 23 May 2001 15:10:15 +0100
Thanks to everyone for their help with the original question. Once I'd
upgraded to jam-2.3.2, it worked properly.
Except when trying to build recursively. I've expanded my original example
to the following:
marmalade/
lib/
peel/
src/
I think I've got the SubInclude and SubDir rules right. Without the .mes ->
.h rules (in TOP/Jamrules), it builds correctly. With the .mes -> .h rules,
I can only get it to build correctly if I start in lib/peel.
What am I missing?
You can grab a tarball of what I'm playing with at
http://differentpla.net/~roger/jam/marmalade.tar.gz if you want to look at
the files I'm using (or attempting to use <grin>).
In fact, I might write an article about my experiences and put it up there
as well. If I can get this working, it might serve as a useful tutorial.
From: Grant Glouser <Grant.Glouser@corp.palm.com>
Subject: RE: Generated Header files
Date: Tue, 22 May 2001 14:29:01 -0700
Are you using Jam 2.3? If you are, you could be running into a bug that was
fixed in a recent patch (2.3.2). I haven't encountered this one myself, but
it sounds like Jam was failing to run any actions that build header files.
This would cause the symptoms you show in your bing example.
Try 2.3.2. You might have to get it from the public perforce depot, because
the Jam homepage may not have the latest tarballs.
http://public.perforce.com/public/jam/index.html
From the 2.3.2 release notes:
"0. Bugs fixed since 2.3.1
PATCHLEVEL 2 - 3/12/2001
NOCARE changed back: it once again does not applies to targets
with sources and/or actions. In 2.3 it was changed to apply to
such targets, but that broke header file builds: files that are
#included get marked with NOCARE, but if they have source or
actions, they still should get built."
From: Tony Smith <tony@perforce.com>
Subject: Re: Generated Header files
Date: Wed, 23 May 2001 17:34:39 +0100
A couple of things I think, but the main one is that "lib" is also the name
of a built-in target in Jambase and that conflicts with the name of the lib
subdirectory.
Changing the directory name is the easiest option, but see "Using Jamfiles
and Jambase" for the alternatives.
The other thing that's missing is that your ErrBuild rule doesn't locate the
created header file in the same directory as the source, so if you build from
the top, it will create it at the top. Add a line like:
MakeLocate $(<) : $(LOCATE_SOURCE) ;
to sort that out. Here's a better version of your Jamrules which uses the
File rule to do the actual copy.
rule ErrBuild {
Depends $(<) : $(>) ;
Depends first : $(<) ;
MakeLocate $(<) : $(LOCATE_SOURCE) ;
File $(<) : $(>) ;
Clean clean : $(<) ;
}
Note no HDRS line, I'd just use "SubDirHdrs" in the Jamfile in question. i.e.
in src/Jamfile:
SubDirHdrs $(TOP) mylib peel ;
where "mylib" is the renamed "lib" directory. Also, in your top Jamfile, you can use:
SubDir TOP ;
to avoid having to set the TOP environment variable in your shell. Just saves
a bit more setup time.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: Generated Header files
Date: Wed, 23 May 2001 18:42:00 +0100
OK. That could be a problem with the project I'm actually trying to use
this on. The directory's already called 'lib', and it lives in CVS, which
could make things a bit messy. I'll look at the alternatives.
Can I implicitly make jam look somewhere else for Jambase, or are my only
options using -f or rebuilding it?
I also found that I needed to add a line like:
MakeLocate $(>) : $(LOCATE_SOURCE) ;
in order to tell it that the .mes file lived there, also. Don't know why.
It wouldn't work without both.
Yeah, I know about the File rule. I'm not (in the actual code) just doing a
copy. Maybe I should have changed the example to run grep over the file or
something, to make that more obvious.
Excellent. I wondered how to do that.
Well, that answers all of today's questions. I'm off to a beer festival
now, so I'll probably be back tomorrow with some more questions -- and a hangover :-)
From: Grant Glouser <Grant.Glouser@corp.palm.com>
Subject: RE: Generated Header files
Date: Wed, 23 May 2001 11:24:53 -0700
I think you also need to set SEARCH because otherwise Jam doesn't know where
to look for the .mes source file. The default IIRC is to look only in the
current directory (the one you run jam in). That's why it works in the peel
directory but not in the toplevel. Add this to your ErrBuild rule:
SEARCH on $(>) += $(SEARCH_SOURCE) ;
SEARCH_SOURCE is set automatically by the SubDir rule. (In Tony's example,
the File rule does this for you.)
This shows the advantage of using the GenFile rule (which many other people
have been suggesting), because GenFile does these things for you!
PS Works fine for me *without* "Depends first : $(<) ;".
rule ErrBuild {
SEARCH on $(>) += $(SEARCH_SOURCE) ;
MakeLocate $(<) : $(LOCATE_TARGET) ;
Depends $(<) : $(>) ;
Clean $(<) ;
}
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Thu, 24 May 2001 12:47:59 +0100
Subject: Response files (Microsoft LINK command line length)
I've got a directory with 94 C++ files in it. These all get built (by a
Main rule) into a single executable. Unfortunately, the resulting command
line is too long for Microsoft LINK (about 4500 characters, well over the
rather arbitrary 996 character limit).
Question: How do I get jam to output response files (that I can then feed to
link using the @ operator)? I can then edit the relevant section in
Jambase.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: Response files (Microsoft LINK command line length)
Date: Thu, 24 May 2001 13:55:43 +0100
OK, on closer inspection, it seems that this limit is imposed by jam itself,
at line 506 of make1.c. My rule needs to have piecemeal defined, or MAXLINE
needs to be longer. It's set to 996 if NT is defined. And it is.
Date: Fri, 25 May 2001 11:55:14 +0200
Subject: Re: Response files
Jam contains no built-in response file generation. The limit of the
NT (4.0/2000) shell is higher than 996, but not infinite, so if you have many
object files, increasing MAXLINE won't really help.
You can use piecemeal to generate response files. I use something like this:
# Old link, replaced by the response version below
actions Link bind NEEDLIBS { $(LINK) $(LINKFLAGS) -o $(<) $(UNDEFS) $(>) $(NEEDLIBS) $(LINKLIBS) }
# Link with response files
actions dolink { $(LINK) $(LINKFLAGS) -o $(<) $(UNDEFS) @$(CMDFILE) $(NEEDLIBS) $(LINKLIBS) }
actions Link {}
rule Link {
CMDFILE on $(<) = $(<[1]).cmd ;
initcmdfile $(<) ;
echofilestocmdfile $(>) ;
dolink $(<) : $(>) ;
closecmdfile $(<) ;
# Insert old Link body here...
}
# initcmdfile, echofilestocmdfile, closecmdfile
actions quietly initcmdfile { copy nul: "$(CMDFILE)" > nul }
actions piecemeal quietly echoparamstocmdfile { echo $(>) >> "$(CMDFILE)" }
rule echoparamstocmdfile { NOTFILE $(>) ; }
actions piecemeal quietly echofilestocmdfile { echo "$(>)" >> "$(CMDFILE)" }
actions quietly closecmdfile { DEL "$(CMDFILE)" }
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Fri, 25 May 2001 16:26:49 +0100
Subject: Building DLLs... (and precompiled headers)
I've got an executable, which depends on a couple of DLLs, which also need
to be built. So, I cloned the 'Main' rule in Jambase, like this:
rule SharedLibrary {
SharedLibraryFromObjects $(<) : $(>:S=$(SUFOBJ)) ;
Objects $(>) ;
}
rule SharedLibraryFromObjects {
local _s _t ;
_s = [ FGristFiles $(>) ] ;
_t = [ FAppendSuffix $(<) : $(SUFSHR) ] ;
LINKFLAGS on $(_t) += /dll ;
Link $(_t) : $(_s) ;
}
Then I can use this rule to build the DLL. And it works.
My question is: how do I modify the rule to tell jam that this also emits
the import library? What I mean is: I've got:
SharedLibrary shared_lib : shr1.cpp shr2.cpp ;
Main my_exe : exe1.cpp exe2.cpp ;
LinkLibraries my_exe : lib1 lib2 shared_lib ;
Jam doesn't know that the SharedLibrary step builds shared_lib.lib. How do
I tell it that it does?
Also, I managed to implement a rule to use Visual C++ precompiled headers,
and I was wondering if anyone could offer me a critique of it:
rule PrecompileHeader {
Depends $(<) : $(>) ;
Depends first : $(<) ;
SubDirC++Flags /Fp$(LOCATE_TARGET)/$(<:S=.pch) /Yu"$(>:S=.h)" ;
MakeLocate $(>) : $(LOCATE_SOURCE) ;
# If you uncomment the following line, and I don't think that you ought to,
# remove the $(LOCATE_TARGET) from the /Fp, above.
# MakeLocate $(<) : $(LOCATE_TARGET) ;
Clean $(<) ;
}
# This one's been wrapped for clarity
actions PrecompileHeader {
$(C++) /c $(C++FLAGS) $(OPTIM)
/Fp$(LOCATE_TARGET)/$(<:S=.pch)
/Fd$(LOCATE_TARGET)/
/Fo$(LOCATE_TARGET)/
/I$(HDRS)
/I$(STDHDRS)
/Tp$(>)
/Yc$(<:S=.h)
}
There are a couple of problems with it:
a) Setting the dependency of the other C++ files on the generated .pch file.
I'd like to remove the 'Depends first', but I'm not convinced that it'll
build in the correct order.
b) It gets invoked as PrecompileHeader foo.pch, where everything else is
invoked as C++ a\b\c.cpp (i.e. with the path). Is this something I should worry about?
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: Building DLLs... (and precompiled headers)
Date: Fri, 25 May 2001 16:46:10 +0100
Ah, no problem. Found the fix:
MakeLocate $(_t:S=$(SUFLIB)) : $(LOCATE_TARGET) ;
...in my SharedLibraryFromObjects rule seems to have fixed it.
Subject: RE: Building DLLs... (and precompiled headers)
Date: Fri, 25 May 2001 22:13:37 -0700
From: "Chris Antos" <chrisant@windows.microsoft.com>
It took me a long time to get my PCH rules and dependencies working no
matter what directory I built from, etc. The changes were extensive,
because I had to hack around the fact that Jam doesn't properly handle
the multiple target case. I still mean to crack open the dependency
code inside jam.exe and fix that, but I haven't found the time yet.
I've attached my Jambase file for your perusal. As you can see, my
attempt at a solution is a bit of a hack, but it does seem to work. Uh,
my Jambase is tailored in other ways as well, FYI. It also includes
rules for .sbr/.bsc browse information files, and .idl files.
The key changes regarding PCH files involve these rules (and also their
actions, for many of the rules):
- rule C++
- rule HdrRule (very nasty, gristing is tricky, it won't necessarily
figure out it needs to rebuild the PCH file when headers it included are updated!)
- rule Library
- rule Object
- rule Pch
- rule Res
- rule SubDir
- rule SubDirPrecompHdr
- rule SubDirPrecompHdrEnd
Maybe also these, but since I think Main is not involved, these probably
aren't either:
- rule DllLinkFlags
- rule DllMain
A sample Jamfile using the PCH stuff:
SubDir TOP foo ;
SubDirHdrs $(TOP) bar ;
SubDirPrecompHdr ;
Main foo : main.cpp ;
LinkLibraries main : user32.lib ;
The SubDirPrecompHdr rule usage is:
SubDirPrecompHdr [cppfile [: hdrfile]] ;
where brackets indicate optionalness. If cppfile is omitted, it
defaults to precomp.cpp. If hdrfile is omitted, it defaults to precomp.h.
I've even tried making several directories share a single PCH file, and
that does seem to work, except that if you build from somewhere whose
scope doesn't include the directory that is primarily responsible for
the PCH file, then if I remember correctly it doesn't know how to
rebuild the PCH file in that case.
Subject: RE: Building DLLs... (and precompiled headers)
Date: Sun, 27 May 2001 10:43:20 -0700
From: "Chris Antos" <chrisant@windows.microsoft.com>
I should clarify my comments about HdrRule -- I meant that without my
changes to the gristing, it failed to get the dependencies right and was
not rebuilding the PCH file when headers it included changed. The
HdrRule in my Jambase should have the gristing right so that it rebuilds
the PCH file when appropriate.
I think that gristing could be improved to automatically use the actual
path where the header file was found, rather than being given an
arbitrary prefix. For example, in my project, that would consolidate
the headers uniquely, and drop the number of dependencies from ~6000 to
~1000. I may address this eventually but for now it's not a big issue
for me, it's just a little annoying that Jam takes longer than it needs
to, when figuring out dependencies.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Thu, 31 May 2001 15:58:18 +0100
Subject: C++ rule and current directory...
When invoking 'jam -d2' from the root of the source tree, I see things like
this:
C++ lib\httpd\core\ByteRanges.obj
cl [...] /I ./lib /IP:\MSSDK\Include /Folib\httpd\core\ByteRanges.obj
/IP:\VStudio\VC98\include /Tplib\httpd\core\ByteRanges.cpp
(I've snipped and wrapped it for readability)
It seems that this occurs with most other rules, too. The current directory
is left as the root of the source tree. This causes problems with #include
"", since the path to the actual source files must be named in a /I
statement (added with SubDirC++Flags).
This strikes me as counter-intuitive -- if I was building something with the
Visual C++ IDE, or with a recursive make (i.e. make -C), I wouldn't need to
worry about this.
So:
a) Am I misunderstanding it? It is in the correct directory, and something
else is wrong?
b) Do I have to furtle with the $(HDRS) stuff? If so, how? What have I forgotten?
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Thu, 31 May 2001 16:00:05 +0100
Subject: LinkLibraries and system libraries
I've got a Jamfile like this:
Main my_server : my_file.cpp ;
LinkLibraries httpd_server :
my_library
ws2_32.lib ;
ws2_32.lib is a system library.
Jam says that it can't build the project, since it doesn't know how to build
ws2_32.lib. How do I tell jam that the file exists?
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Thu, 31 May 2001 16:00:52 +0100
Subject: External Makefiles
Simple question: What's the correct way to get jam to spawn make to build
something that came with its own Makefile (in this case, id3lib).
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Thu, 31 May 2001 16:35:15 +0100
Subject: Re: C++ rule and current directory...
Doh! Further reading of the supplied Jambase reveals the -I $(HDRS) in the
C++ actions, and the HDRS on $(<) in the Object rule. Seems my header file
was in the wrong place. Plus the fact that the compiler looks in the same
directory as the .cpp file, rather than the working directory anyway.
Date: Thu, 31 May 2001 19:43:35 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: LinkLibraries and system libraries
Don't add system libraries to your dependency graph.
Use LINKLIBS or NEEDLIBS instead..
From: "David Abrahams" <abrahams@altrabroadband.com>
Date: Thu, 31 May 2001 14:05:27 -0400
Subject: emacs editing mode for Jam?
One thing has been driving me crazy recently: I can't find a decent emacs
mode for editing Jam files. sh-mode, perl-mode, and python-mode all come
close in various ways, but fail in others. Hacking modes in emacs is such a
PITA that I can't easily figure out what to do. Have you come across/heard
of anything?
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: LinkLibraries and system libraries
Date: Thu, 31 May 2001 19:26:28 +0100
Ah, of course. At the moment, I've gone with something like:
rule SystemLibraries {
local _t = [ FAppendSuffix $(<) : $(SUFEXE) ] ;
LINKLIBS on $(_t) += $(>:S=$(SUFLIB)) ;
}
Main foo : foo.cpp ;
SystemLibraries foo : ws2_32.lib ;
...which works (once I thought to steal the FAppendSuffix stuff from
LinkLibraries).
However, this becomes a problem when I try to use it with my SharedLibrary
rule (which builds a DLL) instead of Main, because it assumes the $(SUFEXE) suffix.
The same problem applies to the supplied LinkLibraries rule.
Any suggested fixes? Would it be a good idea to generate some kind of
LINKLIBS-foo variable, which I could then apply in the
{Main|SharedLibrary}FromObjects rule, once I've figured out the correct suffix?
Can I change the LinkLibraries/SystemLibraries rules to know what the
Main/SharedLibrary is going to build? Should I just put up with it and
create LinkLibrariesShared/SystemLibrariesShared rules that use $(SUFSHR) instead?
From: "Brett Calcott" <brett.calcott@paradise.net.nz>
Date: Fri, 1 Jun 2001 11:38:27 +1200
Subject: layered dependencies and shared libs
The project I am working on consists of several independently usable layers,
which are separated into subdirs and are independent cvs modules. Because
each of the layers is independent, they are not arranged in a hierarchy.
I check all of the modules out to a single directory and define this subdir as TOP :
utilities\
layer1\
layer2\
app1\
app2\
where utilities and layer1 & 2 build shared libs (so or dll) and app1 & 2
build executables.
dependencies are as follows:
layer1 : utilities
layer2 : layer1 utilities
app1 : layer1 utilities
app2 : layer1 layer2 utilities
I have 2 questions:
1. How do I get a shared library to depend correctly on another shared
library. When I use LinkLibraries it attempts to create a static library as
the dependency.
2. The way I am doing things does not fit the standard hierarchical method -
and I would like the Jamrules to be in cvs. With Make I can put a
Makefile.include in another subdir & "include
$TOP/setup_dir/Makefile.include" in the other Makefiles. The Subdir rules in
Jam don't seem amenable to this. Is creating a link the easiest way?
Date: Fri, 01 Jun 2001 10:53:11 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: layered dependencies and shared libs
I don't think that this is possible with Jam currently, however, there are
ways to circumvent this (more on this below).
Well, you could just use "include ...." in your Jamfile. The SubDir rule
is specially designed to deal with sub-directories of a single project.
I believe that the solution to your problem is to treat each one of
your independent layers as independent projects, that is:
- define a "Jamfile" in each of "layer1", "layer2" and "utilities",
and optionally add a Jamrules if you need one.
- you're not forced to use a variable named "TOP" in your SubDir rules,
so use something project-specific instead, like "LAYER1_TOP",
"LAYER2_TOP", "UTILITIES_TOP", etc..
in one of my projects:
# This Jamfile is used to compile the ZLib source code.
#
# We need to invoke a SubDir rule if the ZLib source directory top
# is not the current directory. This allows us to build the ZLib
# as part of another project easily.
#
ZLIB_TOP ?= $(DOT) ;
if $(ZLIB_TOP) != $(DOT)
{
SubDir ZLIB_TOP ; # this includes Jamrules if any..
}
#only use the source files that are required by LibPNG !!
# (we don't compile gzio.c, compress.c and uncompr.c)
#
zlib_sources = crc32.c deflate.c inflate.c zutil.c adler32.c infblock.c
inftrees.c infcodes.c inffast.c infutil.c trees.c ;
ZLIB_INCLUDE = $(ZLIB_TOP) ;
ZLIB_NEEDLIBS = $(LIBPREFIX)zlib$(SUFLIB) ;
Library $(LIBPREFIX)zlib : $(zlib_sources) ;
"""
Notice that this example only builds a static library, but
you could easily change it for a DLL. The important points are:
- the "SubDir" rule is only called if ZLIB_TOP is already
defined (to something that isn't the current directory)
This lets you compile the ZLib independently in its
directory, or as part of a larger project, using the
same Jamfile..
- the Jamfile defines ZLIB_INCLUDE and ZLIB_NEEDLIBS that
are used by other packages that depend on the ZLib
(in my example, LibPNG) when they use this version
of the library.
My main application Jamfile/Jamrules are organized as follows:
- there is a default Jamrules file, used on all systems except
Unix, where the same file is generated from a "Jamrules.in"
template through "configure"
- the Jamrules file contains a configuration variable named
"USE_SYSTEM_ZLIB". When it is "true", the Jamrules must also
define "ZLIB_INCLUDE" and "ZLIB_NEEDLIBS" (which are typically
filled automatically by "configure" on Unix).
- the Jamfile tests the value of "USE_SYSTEM_ZLIB". If it is not
true, then it defines ZLIB_TOP relative to the current path,
then calls "SubInclude ZLIB_TOP"
And of course, I use ZLIB_NEEDLIBS to link any application or DLL
that needs to link to the ZLib (even indirectly), while ZLIB_INCLUDE
should be used in "SubHdrs" rules for source code that needs to
#include the ZLib public headers..
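Put together, the application-side conditional described above might look like
this (a sketch only: USE_SYSTEM_ZLIB comes from the Jamrules, and the
sub-directory name for the bundled ZLib is hypothetical):

```jam
# In the application's Jamfile (sketch).
if $(USE_SYSTEM_ZLIB) != true
{
    # Build the bundled copy; "zlib" here is a made-up path.
    ZLIB_TOP = [ FDirName $(SUBDIR) zlib ] ;
    SubInclude ZLIB_TOP ;
}

# In either case the ZLib Jamfile or the Jamrules have defined
# ZLIB_INCLUDE and ZLIB_NEEDLIBS for dependent code to use.
SubDirHdrs $(ZLIB_INCLUDE) ;
```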
This works flawlessly here, hope this helps.. :-)
From: Amaury.FORGEOTDARC@ubitrade.com
Subject: Re: emacs editing mode for Jam?
Date: Fri, 1 Jun 2001 11:57:15 +0200
There is the jam-mode.el from Eric Scouten
in the Perforce Public Depot:
Date: Fri, 1 Jun 2001 09:47:43 -0500 (CDT)
Subject: Re: layered dependencies and shared libs
Just hack the LinkLibraries rule to generate a SharedLinkLibraries rule:
rule SharedLinkLibraries {
local t ;
# make library dependencies of target
# set NEEDLIBS variable used by 'actions Main'
if $(<:S) { t = $(<) ;
} else { t = $(<:S=$(SUFEXE)) ; }
s = $(>:S=$(SUFSHR)) ;
Depends $(t) : $(s) ;
NEEDLIBS on $(t) += $(s) ;
SEARCH on $(s) += $(BUILT_LIBS) $(SHADOW_BUILT_LIBS) ;
}
There's a bit of extra stuff in there specific to our rules, so I
suspect it might be easier to hack the original LinkLibraries rule.
From: "Kimpton, Andrew" <awk@pulse3d.com>
Subject: RE: layered dependencies and shared libs
Date: Fri, 1 Jun 2001 09:27:54 -0700
We use a similar 'conditional include' mechanism here to deal with 'layers'
of libraries etc. such as you described.
One problem however we have found is that the link rule when used with the
Microsoft Library manager (we mostly build for Windows) supplies a 'TOP
relative' path. This path seems to change unfortunately depending on the
'depth' relative to TOP when you invoked Jam. This in turn means that the
same object (a.obj) can be passed to link as ../../build/a/a.obj and
../../../build/a/a.obj. The microsoft lib tool treats these as different
members of the archive since the paths are different although in reality
they are the same - which is not good.
Perhaps this is more related to our use of a separate build tree which is
actually a peer to TOP not a child. Any thoughts or suggestions on fixing
this (right now we just have to be careful about not changing the apparent
'depth' of our trees to avoid this) would be gratefully received.
Date: Fri, 01 Jun 2001 19:59:18 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: layered dependencies and shared libs
The "Library" and "LibraryFromObjects" rules use the "grist" path
that is re-computed by each SubDir invocation. Indeed, this is done
by the following line of the Jambase:
rule SubDir {
...
SOURCE_GRIST = [ FGrist $(<[2-]) ] ;
...
}
As you can see, the grist is composed only from the second through last
parameters to the SubDir rule. This means that the two following commands:
SubDir PROJECT1_TOP src ;
SubDir PROJECT2_TOP src ;
will produce the same grist, even if "PROJECT1_TOP" and "PROJECT2_TOP"
correspond to completely different directories
Maybe this helps you solve your problem..
By the way, I don't understand well why you'd want to link the same
object file to two different libraries or programs that are placed
in different directories ? Could you enlighten us with a practical example ??
From: "Kimpton, Andrew" <awk@pulse3d.com>
Subject: RE: layered dependencies and shared libs
Date: Fri, 1 Jun 2001 11:53:59 -0700
Yeah - re-reading my description I realized it wasn't as clear as it could
have been 8-) Here's what we do :
In each Jamfile for each library/executable we have something like
OBJDIR = obj.$(OSFULL[1]:L).debug ;
BUILD_OUTPUT_PATH = $(TOP)\\..\\build\\$(OBJDIR)\\dynamic_crt ;
LOCATE_TARGET = $(BUILD_OUTPUT_PATH)\\ia ;
(Actually OBJDIR and BUILD_OUTPUT_PATH can have a couple of different values
depending on release or debug builds, and linking against a static or
dynamic version of the Microsoft C runtime library)
The problem is that since $(TOP) is set based on the location of the Jamfile
that is the 'starting point' $(TOP) may be '.' or '..' or '../..' depending
on whether the build was 'launched' from $(TOP) or a subdirectory 1 or 2
levels beneath it.
When the dependency actions run the Microsoft library manager to extract
the build date information from the library archive in order to determine
what may need to be rebuilt the contents of $(BUILD_OUTPUT_PATH) is used as
part of the arguments. So depending on the value of $(TOP) we get different
'results' for the build dates.
I could solve this by using something other than $(TOP) in the build output
path but our engineers place the source trees in different directories on
their machines and in fact some (such as myself) have multiple copies of the
source tree (we use perforce - so I have multiple perforce client
definitions with a different client root for each tree). Using a relative
path based on $(TOP) makes things fairly neat.
I had hoped that setting $(NOARSCAN) and/or $(KEEPOBJS) might avoid the
confusion but that doesn't seem to be the case (unless my brief testing
suffered from some other problem too).
From: "Brett Calcott" <brett.calcott@paradise.net.nz>
Date: Sat, 2 Jun 2001 21:48:39 +1200
Subject: PCCTS - parser generator rules
firstly, thanks for all the help on my previous question. I have got a
preliminary setup doing pretty much what I want.
Except for this:
Jam has support for yacc - but I use a Parser generator called PCCTS.
You run 2 programs:
antlr $(ANTLR_OPTIONS) -o $(OUTPUT_DIR) $(INPUT_GRAMMAR)
this generates the following:
$(OUTPUT_DIR)/tokens.h
$(OUTPUT_DIR)/OipParser.h
$(OUTPUT_DIR)/OipParser.cpp
$(OUTPUT_DIR)/OipGrammar.cpp
then you run this:
dlg $(DLG_OPTIONS) -o $(OUTPUT_DIR) $(OUTPUT_DIR)/parser.dlg
to generate these:
$(OUTPUT_DIR)/DLGLexer.cpp
$(OUTPUT_DIR)/DLGLexer.h
Can I use GenFile?
Any hints would be welcome...
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Thu, 7 Jun 2001 11:13:10 +0100
Subject: Bogus LOCATE_TARGET
I've got a directory tree something like this:
TOP
lib
core
app
tests
Which I've strung together with SubInclude and SubDir rules, like this:
# TOP/Jamfile
SubDir TOP ;
SubInclude TOP lib ;
SubInclude TOP app ;
SubInclude TOP tests ;
# TOP/lib/Jamfile
SubDir TOP lib ;
SubInclude TOP lib core ;
...etc....
To the default MSVCNT C++ rule, which reads like this:
$(C++) /c $(C++FLAGS) $(OPTIM) /Fo$(<) /I$(HDRS) /I$(STDHDRS) /Tp$(>)
...I've added /Fd, so it reads like this:
$(C++) /c $(C++FLAGS) $(OPTIM) /Fd$(LOCATE_TARGET)/ /Fo$(<) /I$(HDRS)
/I$(STDHDRS) /Tp$(>)
The /Fd switch tells Visual C++ where to put the .pdb file, containing debug
information, etc. However, it always passes the last directory listed in
the tree, i.e. in this case, it'll always pass /Fdtests/
I was expecting it to pass the name of the subdirectory in which the C++
rule was invoked. Now, I suspect that what is happening is that when the
actions are invoked, the value of LOCATE_TARGET is different from what I was
expecting.
The SUBDIRC++FLAGS, which ought to suffer from the same problem, work because
of this line:
C++FLAGS on $(<) += $(C++FLAGS) $(SUBDIRC++FLAGS) ;
in 'rule C++'.
My question: Do I need to do something like this to fix the problem? Should
I just add the /Fd switch to this part of the rule?
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Thu, 7 Jun 2001 11:14:50 +0100
Subject: Compiling multiple C++ files at once
Microsoft's C++ compiler has a natty feature where you're allowed to pass
multiple filenames to it at once. This reduces the compile time, since the
compiler only has to be spun up once for each batch.
Obviously, you have to ensure that the switches are the same for all of the files.
My question: How to do this in jam? Using 'together' on the rule doesn't
appear to do anything.
Date: Thu, 07 Jun 2001 13:18:22 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: Bogus LOCATE_TARGET
Action rules are invoked after the complete build of the dependency graph,
i.e. after parsing all other rules. By default, the action command expansion
uses the "current" (i.e. last) values for each variable, which is why your
LOCATE_TARGET is "tests/" here.
You can however alter this by using the "bind VARNAME" modifier in the
action rule definition. This causes the expansion of VARNAME to use the
value that the variable had when the corresponding C++ rule was invoked..
(as an example, this is also what is used for the NEEDLIBS variable, in the CC rule)
This is another example of target-specific variable expansion. Note
that this should be read as:
$(C++FLAGS) when used in actions generating $(<) should
be expanded as the _current_ value of
"$(C++FLAGS) $(SUBDIRC++FLAGS)"
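Applying the same idiom to the /Fd problem from the original question, one
could record the per-directory value on each object target inside the C++
rule and expand that recorded value in the actions (a sketch; PDBDIR is an
invented variable name, not part of the stock Jambase):

```jam
rule C++
{
    # ... the rest of the stock rule body ...
    # Remember this directory's LOCATE_TARGET on the object itself.
    PDBDIR on $(<) = $(LOCATE_TARGET) ;
}

actions C++
{
    $(C++) /c $(C++FLAGS) $(OPTIM) /Fd$(PDBDIR)/ /Fo$(<) /I$(HDRS) /I$(STDHDRS) /Tp$(>)
}
```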
Date: Thu, 07 Jun 2001 13:26:53 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: Compiling multiple C++ files at once
That's because "together" concats the source targets that apply to a single
destination target. Calling the Visual C++ (or even GNU C) compiler with
multiple source files, as in:
cl file1.c file2.c
really creates two distinct (object files) targets, and there is no way
to indicate this to Jam currently. You could probably hack some custom
rules using pseudo/temporary targets, but I wouldn't recommend it.
Another way to achieve what you need is to use a "wrapper" source code
that simply #includes other sources, as in:
#include <file1.c>
#include <file2.c>
and compile it in one pass, into a single object. Of course, this supposes
that this multiple inclusion will not create conflicts (mainly in static
data and function names), but it works pretty well with FreeType 2 :-)
From: <boga@mac.com>
Date: Fri, 8 Jun 2001 09:23:58 +0200
Subject: Compiling multiple C++ files at once
We'd like to use the same feature too. I'm not sure if it's possible with
Jam (using custom rules).
This is the feature I'd like to use with JAM. Having two object files means
that if only one of the objects needs to be updated, only one of them has to be
recompiled. Has anyone implemented such a custom rule?! Has anyone got any ideas?
I've tried the following:
I'd like to implement a rule like this:
local SOURCES = file1.c file2.c file3.c file4.c file5.c file6.c ;
local OBJECTS = [ multicppcompile $(TARGETDIR) : $(SOURCES) : $(CFLAGS) ] ;
Where multicppcompile is something like:
rule multicppcompile {
    local i ;
    local OBJECTS ;
    for i in $(2) {
        Depends $(i:D=$(1):S=.obj) : $(i) ;
        OBJECTS += $(i:D=$(1):S=.obj) ;
    }
    # ????
    return $(OBJECTS) ;
}
What i'd need is:
- if only file1.c need to be recompiled: command should be: >cl -o
$(TARGETDIR) file1.c
- if file1.c and file2.c have to be recompile: command should be: >cl -o
$(TARGETDIR) file1.c file2.c
What i have tried:
1. 'updated' action modifier: then the objects files would have to be the
$(>) of the action, but then i couldn't get the corresponding source files
for the objects.
2. 'together' action modifier: $(<) should be the same, so it's useless.
3. response files:
rule multicppcompile {
    local i ;
    local OBJECTS ;
    for i in $(2) {
        Depends $(i:D=$(1):S=.obj) : $(i) ;
        OBJECTS += $(i:D=$(1):S=.obj) ;
    }
    initcmdfile $(OBJECTS) : $(OBJECTS[1]).cmd ;
    for i in $(2) {
        addfiletocmdfile $(i:D=$(1):S=.obj) : $(i) ; # !!!!
    }
    execcl $(OBJECTS) ;
    closecmdfile $(OBJECTS) ;
    return $(2:D=$(1):S=.obj) ;
}
But this won't work, as the line marked with #!!! should be
addfiletocmdfile $(OBJECTS) : $(i) ;
otherwise execcl might be executed before the second addfiletocmdfile!
And if it's OBJECTS, all object files will be updated.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Compiling multiple C++ files at once
Date: Fri, 8 Jun 2001 10:55:50 -0700
I have a suggestion .. I have not tried this, but could
the compile step be modeled after the archive step in the build ?
In the standard rules, libX depends on libX(A.o), libX(B.o), etc.
Then libX(A.o) depends on A.o, libX(B.o) depends on B.o, etc.
Finally, A.o depends on A.cpp, B.o depends on B.cpp.
Could the rules be changed so that ...
libX depends on libX(A.o), libX(B.o)
and libX(A.o) depends on A.cpp, libX(B.o) depends on B.cpp,
And then, instead of the Archive rule and Objects rule, use a ArcCpp rule
That rule might be written as
actions updated together piecemeal ArcCpp {
$(CPP) -c $(>) etc ...
$(AR) $(<) $(>:S=.o)
$(RM) $(>:S=.o)
}
(in its most basic form)
Yes, a lot of the Jamrules would have to be duplicated to make this
work (if it can work).
From: <boga@mac.com>
Date: Sun, 10 Jun 2001 17:55:07 +0100
Subject: [Bug] jam and action with more than one target + fix.
If an action has more than one target, before executing the action, jam
should update all dependents of the targets.
However, jam will update only the dependents of the first target!
Here's a very simple jam file to demonstrate the bug:
(Only tested with Jam 2.3.1 but this bug should be in 2.3.2 too.)
Test.jam:
=======
# _ a <--- a_src
# alll <-/
# \_ b <-- b_tmp <-- b_src
#
# 'upd a b' : a_src b_tmp ; is executed first and not 'upd b_tmp : b_src' ;
#
actions upd { ECHO Updating $(<) : $(>)}
upd a b : a_src b_tmp ;
upd b_tmp : b_src ;
# BUG!: Jam won't update dependent of 'b' before executing this action!
# Jam will update only dependents of 'a'
Depends a : a_src ;
Depends b : b_tmp ;
Depends b_tmp : b_src ;
Depends all : a b ;
NOTFILE all ;
NOTFILE a_src ;
NOTFILE b_src ;
The output of the rules is:
That is, the 'upd b_tmp : b_src' action is executed after(!) 'upd a b : a_src b_tmp',
but b depends on b_tmp!
A possible fix to this problem is to insert the following code into make1.c
into make1a() function:
{
    ACTIONS *actions;

    for( actions = t->actions; actions; actions = actions->next )
    {
        TARGETS *targets;

        for( targets = actions->action->targets; targets;
             targets = targets->next )
        {
            if( targets->target != t )
                make1a( targets->target, t );
        }
    }
}
t->progress = T_MAKE_ACTIVE;
/* Now that all dependents have bumped asynccnt, we can now */
/* decrement our reference to asynccnt. */
make1b( t );
}
Date: Sun, 10 Jun 2001 18:22:05 +0100
Subject: Re: Compiling multiple C++ files at once
Here's a solution for compiling multiple source at once. Jam+multiple
targets in action have to be fixed in order to work:
[This version works with Microsoft CL, using response files.]
Thanks to Randy for the idea.
# OBJECTS = MultiCppCompile $(TARGETDIR) :
# $(SOURCES) : $(CFLAGS) ;
#
# This rule will compile $(SOURCES) to the
# targetdir, and will return the result objects.
#
rule MultiCppCompile {
    local destdir = $(1) ;
    local cflags = $(3) ;
    local objects srcrefs i ;
    # Set up dependencies and create reference files for each source.
    # This way reference files will be updated.
    for i in $(2) {
        local srcref = $(i:D=$(destdir)).file ;
        local object = $(i:D=$(destdir):S=.obj) ;
        mksrcref $(srcref) : $(i) ;
        objects += $(object) ;
        srcrefs += $(srcref) ;
        Depends $(object) : $(srcref) ;
    }
    cflags += "/Fo$(destdir:G=)\\" ;
    initcmdfile $(objects) : $(objects[1]).cmd ;
    for i in $(srcrefs) {
        appendupdatedfiletocmdfile $(objects) : $(i) ;
    }
    addparamstocmdfile $(objects) : $(cflags) ;
    domulticppcompile $(objects) : $(srcrefs) ;
    closecmdfile $(objects) ;
    return $(objects) ;
}
# actions domulticppcompile:
actions quietly updated domulticppcompile { cl /nologo @$(CMDFILE) }
actions quietly mksrcref { ECHO "$(>)" > "$(<)" }
rule mksrcref { Depends $(<) : $(>) ; }
# Response file utility rules:
actions quietly initcmdfile { copy nul: $(CMDFILE) > nul: }
rule initcmdfile { CMDFILE on $(<) = $(>) ; NOTFILE $(>) ; }
actions quietly closecmdfile { DEL $(CMDFILE) }
actions updated quietly appendupdatedfiletocmdfile { type $(>[1]) >> $(CMDFILE) }
actions quietly together piecemeal addparamstocmdfile { ECHO $(>) >> $(CMDFILE) }
rule addparamstocmdfile { NOTFILE $(>) ; }
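A hypothetical Jamfile invocation of the rule above might then read (the
source file names and the library target are made up):

```jam
local SOURCES = file1.cpp file2.cpp file3.cpp ;
local OBJECTS = [ MultiCppCompile $(LOCATE_TARGET) : $(SOURCES) : $(C++FLAGS) ] ;
LibraryFromObjects libfoo$(SUFLIB) : $(OBJECTS) ;
```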
From: "David Abrahams" <abrahams@altrabroadband.com>
Date: Tue, 12 Jun 2001 13:10:49 -0400
Subject: INCLUDES documentation bug?
The documentation for the INCLUDES rule reads:
INCLUDES targets1 : targets2 ;
Builds a sibling dependency: makes each of targets2 depend on
anything upon which each of targets1 depends.
But the example given doesn't seem to agree with the documentation:
Depends foo.o : foo.c ;
INCLUDES foo.c : foo.h ;
"foo.h" depends on "foo.c" and "foo.h" in this example.
According to the documentation, this would:
1. Make foo.o depend on foo.c
2. Make foo.h depend on anything on which foo.c depends
But in the example, there is no reason to think that foo.c depends on
anything. So what causes foo.h to depend on foo.c and foo.h? Is there some
information missing here? Also, isn't the circular self-dependency of foo.h
a problem?
From: Amaury.FORGEOTDARC@ubitrade.com
Subject: Re: INCLUDES documentation bug?
Date: Tue, 12 Jun 2001 20:51:49 +0200
You're right, the documentation seems incorrect.
The sentence should be something like:
Builds a sibling dependency: makes each of targets2 a
dependency of anything depending on targets1.
From: "David Abrahams" <abrahams@altrabroadband.com>
Subject: Re: INCLUDES documentation bug?
Date: Tue, 12 Jun 2001 14:39:36 -0400
I think I am having a problem getting across a language barrier here. Just
to clarify a bit, let me rephrase what you wrote. Please tell me if I got
your intention correctly:
Builds a sibling dependency: makes each of targets2
depend on every target that depends on a member of targets1.
Hmm, it could also mean:
Builds a sibling dependency: makes each of targets2
depend on every target that depends on all members of targets1.
^^^^^^^^^^^
Regardless, that would make foo.h depend on foo.o. So perhaps you didn't
mean either of these.
Maybe you meant:
Builds a sibling dependency: for each target X that depends
on a member of targets1, makes X depend on each of targets2
This is closer to the meaning of "sibling dependency", and it makes sense in
the context of the example (it would make foo.o depend on foo.h) but it
still doesn't produce the result described in the example (that foo.h
depends on foo.c and on itself).
One would hope. But what /is/ "the right thing"? ;-)
From: "David Abrahams" <abrahams@altrabroadband.com>
Date: Tue, 12 Jun 2001 15:33:49 -0400
Subject: building multiple products from a single action
Many build actions produce more than one output file. For example, when a
DLL is built on Windows, it generates an import library (.lib) and the
dynamic library(.dll). The .lib file gets statically linked into programs
that use the dynamic library, and causes the dynamic library to be loaded
when needed.
How can this be captured in Jam?
In particular, one would like:
a. A dependency on the .lib will not cause it to be rebuilt if it is
present but the .dll is not
b. a dependency on both the .lib and the .dll not cause them to be built
twice if neither is present
c. a dependency on the .dll will cause it to be rebuilt if it is not present.
The best I have been able to do so far uses INCLUDES to make the .lib a
sibling of the .dll, but this violates condition (a) above:
rule dll {
Depends $(<) : $(<).lib $(<).dll ;
Depends $(<).lib $(<).dll : $(>) ;
INCLUDES $(<).lib : $(<).dll ;
mkdll $(<).lib $(<).dll : $(>) ;
}
actions mkdll { ECHO "sources: $(>)" }
rule main { Depends $(<) : $(>) ; }
actions main { ECHO "sources: $(>)" }
dll a : foo.cpp ; # neither a.lib nor a.dll exists
dll b : foo.cpp ; # b.lib exists; b.dll doesn't
dll c : foo.cpp ; # c.lib and c.dll exist
main x : a.lib ;
main y : b.lib ;
main z : c.lib ;
Depends all : a b c x y z ;
Without the invocation of INCLUDES, I get the warning about a.dll being an
"independent target". The documentation for this warning doesn't seem to
agree with observed facts, since in this case a.dll appears in both $(<) and
$(>) of a Depends rule (right at the top of rule dll). Does anyone have a
better explanation for the warning?
Finally, it's quite unclear from the documentation exactly what it means
when a rule with build actions has multiple elements in $(<). Does that
capture the notion somehow that the elements of $(<) are built together?
Subject: RE: building multiple products from a single action
Date: Tue, 12 Jun 2001 13:00:36 -0700
From: "Chris Antos" <chrisant@windows.microsoft.com>
Check the jamfile I posted on 5/27/2001.
In addition to the Pch, Idl, and other rules* it includes a DllMain rule
which builds the DLL. It does not currently note that it produces a
.lib file, but you can add that relatively easily, see the Idl rules for
an example -- they do the same kind of thing, just for .idl files that
produce .h and three .c files. In this analogy, the .h file would be
the .dll and the .c file would be the .lib (or vice versa, whatever). I
can't remember for sure if it knows how to rebuild the .c files if the
.h file exists. It's easy to get it to avoid unnecessary rebuilding,
getting it to do necessary building can be trickier. ;-)
* some bugs (mainly with Bsc file) have been fixed since 5/27/2001, plus
new rule UsePrecompHdr lets multiple directories share a pch file
generated by another directory via SubDirPrecompHdr. If people want me
to post an updated copy, let me know.
From: David Abrahams [mailto:abrahams@altrabroadband.com]
Sent: Tuesday, June 12, 2001 12:34 PM
Subject: building multiple products from a single action
Date: Tue, 12 Jun 2001 17:20:05 -0700 (PDT)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: INCLUDES documentation bug?
| Depends foo.o : foo.c ;
| INCLUDES foo.c : foo.h ;
| "foo.h" depends on "foo.c" and "foo.h" in this example.
That should read:
"foo.o" depends on "foo.c" and "foo.h" in this example.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: building multiple products from a single action
Date: Wed, 13 Jun 2001 09:51:22 +0100
A line like the following inside the DllMain rule does the trick for me.
I've not checked to ensure that it fulfills all of the requirements, however.
# Tell jam where it can find the import library
MakeLocate $(_t:S=$(SUFLIB)) : $(LOCATE_TARGET) ;
From: "Chris Antos" <chrisant@windows.microsoft.com>
Sent: Tuesday, June 12, 2001 9:00 PM
Subject: RE: building multiple products from a single action
I'd be interested in seeing another copy.
Date: Wed, 13 Jun 2001 13:53:25 -0400
From: Steve Leblanc <steven.leblanc@mayahtt.com>
Subject: Problem with Clean in SubDirs
A few C++ compilers that I use create a directory where they
put files which used to instantiate templates, so that
my directory tree looks like this after a build:
dir_a
 |
 +--- dir_b
 |      |
 |      +--- ii_files
 |
 +--- dir_c
        |
        +--- ii_files
The Jamfiles in dir_b and dir_c contain the line
Clean clean : ii_files ;
If I do a 'jam clean' in dir_b or dir_c, all the created files
are removed, as well as the ii_files directory (I've set $(RM)
to 'rm -rf'). However, if I execute the same command in dir_a,
whose Jamfile does a SubInclude of those in dir_b and dir_c,
everything gets removed, except for the ii_files directories.
I tried adding the SOURCE_GRIST to ii_files in the Clean
command, but it didn't help. Any ideas?
Maya Heat Transfer Technologies Ltd.
Date: 14 Jun 2001 02:12:38 IST
From: Parth venkat <rvp_dac223@usa.net>
Subject: Build of Jam from sources fails:
I am trying to make a binary of jam from source on Linux Redhat
7.1 2.96-81 installation with
gcc version 2.96
make 3.79.1 gnu for i 386 redhat-Linux-gnu
I downloaded the sources, and what I could gather from the README was to just
issue a make command in the source directory.
Here is the error I get
$ make
jam0
make: jam0: Command not found
make: *** [all] Error 127
1) I am sorry if this issue was already addressed before.
2) I am not subscribed to the list as yet, so please keep my email id in the
reply.
3) I could not get any help with search on the Perforce site.
4) If I am missing any system config, I will be most happy to furnish further details.
Thank you very much in advance.
Date: Thu, 14 Jun 2001 17:16:23 +1000
From: Milton Taylor <mctaylor@ingennia.com.au>
Subject: JAM Questions: Building for debug
We're looking at using jam to standardise our builds. I have read the
Laura Wingerd paper on how it was implemented at Sybase. Our own systems
are not nearly on the same scale as that, but we do have multiple
platforms and compilers to support, not to mention a reasonably complex
layering of C++ libraries.
I have not even tried jam yet. Before I do, I would be interested in any
insight on these issues:
Questions:
1. The paper describes some in-house customisations done to Jam for
Sybase's purposes. The relative path naming one interested me. Did these
customisations make it back into the version of Jam that exists today? (Ver 2.3)
2. I am not sure how to handle the issue of debug building, with respect
to the source file paths that are embedded into the exe or debug
databases. (e.g. on MSVC 6 there is a .pdb file that contains all this
stuff.) The problem is exacerbated when a project links in with debug
versions of shared libraries. Basically, I want to avoid having to enter
paths to source modules in the debugger. I cannot always guarantee that
the developers 'root' workspace will be the same, so I am probably in
trouble whenever absolute path names get embedded in the debug info.
Matters are complicated by the fact that the shared library may not sit
in the same workspace location when its being used to build an
application, as when it was itself built. In this case, there are
problems with both absolute and relative pathing approaches.
What do others do in respect of this?
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: Build of Jam from sources fails:
Date: Thu, 14 Jun 2001 15:30:37 +0100
You don't have . (dot, the current directory) in your path. Just run ./jam0
at the prompt when this fails.
From: "Parth venkat" <rvp_dac223@usa.net>
Sent: Thursday, June 14, 2001 3:12 AM
Subject: Build of Jam from sources fails:
I am working on a build system constructed on top of Jam which is designed
to address the multiple-compiler multiple-build-variant problem. I hope to
have a version ready for public inspection in the next few days, and would
welcome participation from members of this list. I have attached a
preliminary (incomplete) version of the documentation.
The system is being designed for use in my professional work and by boost
(www.boost.org), an organization developing open-source C++ libraries. It
will be hosted at boost as the Boost Build system.
The boost build system handles this. So far, the only customization has been
to the Jambase rules; we haven't had to change Jam's C++ source code. There
are a few changes we anticipate wanting to make, though, mostly for
compatibility with a wider range of platforms.
I'm not intimately familiar with the issues of where PDBs need to be located
and exactly how they work. Each toolset supported by the boost build system
has an associated toolset definition file, written in simple Jam code; I'm
hoping that experts on particular toolsets will be able to help me fill in
these details.
Hmm, sounds thorny. If you can figure out how you want the MSVC tools to be
invoked so that everything works for you, we can surely get the build system
to invoke them that way... but someone other than me will have to figure out
the toolset-specific stuff.
From: "Prabhune, Abhijeet" <APrabhun@ciena.com>
Date: Fri, 15 Jun 2001 11:32:02 -0400
Subject: Invoking batch files from within jamfile?
I read on the website that on windows NT, batch files can be invoked (from
within jamfile I presume). does anybody know how this can be done. I have a
bunch of batch files to build dlls and then I want to use these dlls to
build a object file.
From: "Brett Calcott" <brettc@paradise.net.nz>
Subject: Re: Build of Jam from sources fails:
Date: Fri, 15 Jun 2001 09:26:16 +1200
./jam0
This fooled me for a while too - it should probably get flagged as a bug and
fixed in the next release. All it needs is the ./ to be put in the Makefile.
From: "EXT-Goodson, Stephen" <Stephen.Goodson@PSS.Boeing.com>
Date: Fri, 15 Jun 2001 19:40:23 -0700
Subject: Need help with SubDir
I'm just starting to use Jam and I've run into a problem using SubDir. I've constructed
a simple example that shows my problem. Here's the Jamfile:
# Jamfile in $(TOPDIR)/Sub/
SubDir TOPDIR Sub ;
rule CatRule {
Depends $(<) : $(>) ;
Clean clean : $(<) ;
}
actions CatRule { cat $(>) > $(<) }
CatRule foo.c : somefile ;
Main foo : foo.c ;
#end of Jamfile
There is an empty Jamrules file in $(TOPDIR), and Sub/somefile exists. Originally
I had CatRule in the Jamrules file, but moved it while trying to figure this out.
When I invoke jam, it complains that it doesn't know how to make <Sub>foo.c, but
if I type "jam foo.c" it makes it just fine. Also, if I remove the SubDir rule it works fine.
What am I doing wrong? Thanks for your help.
Date: Sat, 16 Jun 2001 00:58:22 -0500 (CDT)
Subject: Re: Need help with SubDir
The reason would be that the main rule establishes a dependency
between <Sub>foo and <Sub>foo.o and <Sub>foo.c, but the
CatFile rule just makes a dependency between somefile and foo.c
Since <Sub>foo.c is a different target than foo.c, the CatFile
action is not invoked.
This is just a guess. If you run jam with debug turned on, say -d5
you will see all the info...
The <Sub>foo.c notation is called grist, and it is used to make targets
unique across directories, even if multiple directories have files of
the same name in them.
From: "David Abrahams" <abrahams@altrabroadband.com>
Date: Mon, 18 Jun 2001 18:20:22 -0400
Subject: Boost Build System prerelease
As I announced last week, the proposed Boost Build System is now available
for inspection.
It is available via:
Now supports building DLLs and (I think) shared libraries with GCC under unix.
Major code cleanup and commenting; the Jam code should be relatively
understandable now.
I'm still shoring up the documentation, but even that has been improved
quite a lot, including a gentle introduction at one end, and a guide to Jam
internals at the other.
The time has come for others to contribute. I have implemented 4 toolset
descriptions, for GCC, Metrowerks, Borland, and MSVC, and I have tested them
under Windows 2000. I need experts in various compilers and platforms
(including these) to step forward with their own toolsets and tweaks for the
4 I've got. Various other jobs that someone needs to take on are listed in
the TODO.txt file in boost/build.
http://users.rcn.com/abrahams/build_system_2001_6_18.zip
From: "David Abrahams" <abrahams@altrabroadband.com>
Date: Mon, 18 Jun 2001 18:29:05 -0400
Subject: Boost Build System prerelease (2nd try!)
That last email went out before I had edited it. It was mostly a copy of an
announcement I made to the boost.org group. Sorry!
As I announced last week, the proposed Boost Build System is now available for inspection.
It is available via:
1. http://users.rcn.com/abrahams/build_system_2001_6_18.zip (I do not intend
to keep this link current; I think the Perforce public depot might be a
better choice eventually, but I haven't got up to speed with that yet).
files repository, for boost members)
3. Anonymous CVS to the "build-development" branch of boost's "build" module at SourceForge.
Now supports building DLLs and (I think) shared libraries with GCC under unix.
Major code cleanup and commenting; the Jam code should be relatively
understandable now.
I'm still shoring up the documentation, but even that has been improved
quite a lot, including a gentle introduction at one end, and a guide to Jam
internals at the other.
As I mentioned, I would welcome contributions from members of this list. If
you are interested in the planned direction of this project, please see the
TODO.txt file enclosed in the project archive.
From: "EXT-Goodson, Stephen" <Stephen.Goodson@PSS.Boeing.com>
Subject: RE: Need help with SubDir
Date: Tue, 19 Jun 2001 17:20:22 -0700
Thanks. That helps, but I'm still not quite getting it.
Jam doesn't complain about not knowing how to make <Sub>foo.c any more,
but it won't make it either, even though it appears to know how.
If I run jam, it tries to Cc Sub/foo.o, but fails because foo.c doesn't exist.
If I type 'jam foo.c', it creates foo.c but issues the "warning: using independent
target foo.txt". After that, jam will build foo.o and foo just fine.
Interestingly, if I then 'touch foo.txt', that will cause jam to remake foo.o
(from the out of date foo.c). It won't ever make foo.c if it already exists
(even with an explicit 'jam foo.c').
My updated Jam file, and some output from jam -d is below.
As before, it works fine without the SubDir command.
# Jamfile in $(TOPDIR)/Sub/
SubDir TOPDIR Sub ;
rule CatRule {
SEARCH on $(>) = $(SEARCH_SOURCE) ;
MakeLocate $(<) : $(LOCATE_SOURCE) ;
Depends [ FGristFiles $(<) ] : $(>) ;
Clean clean : $(<) ;
}
actions CatRule { cat $(>) > $(<) }
CatRule foo.c : foo.txt ;
Main foo : foo.c ;
made stable /home/xgoodson/Top/Sub
make -- <Sub>foo.c
bind -- <Sub>foo.c: /home/xgoodson/Top/Sub/foo.c
time -- <Sub>foo.c: Tue Jun 19 16:50:35 2001
make -- foo.txt
bind -- foo.txt: /home/xgoodson/Top/Sub/foo.txt
time -- foo.txt: Tue Jun 19 16:51:14 2001
made* newer foo.txt
made+ old <Sub>foo.c
made+ update <Sub>foo.o
From: "David Abrahams" <abrahams@altrabroadband.com>
Subject: Re: Boost Build System prerelease (2nd try!)
Date: Wed, 20 Jun 2001 07:49:36 -0400
<<I took a quick look at your build system. It sounds very interesting.>>
<<I would really appreciate if your clarifications in the section
"Internals" about a few missing pieces in the Jam documentation.>>
I'm sorry, something is getting lost in the translation. Are you asking for
clarification? If so, what would you like clarified?
<<It took me quite a while to figure out, what you describe very clearly.
(And I never had the time to formulate it for my colleagues).>>
Well, I hope it helps. I felt I had to write it down carefully just to
understand it myself. Also, the more I wrote, the more questions I
uncovered. I would do an experiment or two to answer the questions, and
write down the answers.
<<I will discuss it with my co-workers and I think we might try a rewrite of
our adaptions using your boost system as a base.
We are using WindowsNT and cross-compile (GCC) for our PPC403-target
embedded vxWorks-system). But this will take some time (1-2 weeks).>>
This is an open-source project. If you are interested in contributing or
collaboration, it will be appreciated.
From: "Brett Calcott" <brett.calcott@paradise.net.nz>
Subject: Re: Boost Build System prerelease
Date: Thu, 21 Jun 2001 09:29:56 +1200
I am new to Jam and was going to use it for a project of mine in C++ that
I want to use on both Win32 & Linux (which uses boost, by the way).
Firstly, well done on an amazing rework of the jam system to allow complex
multiple builds.
The approach that you have taken is quite a change on top of the basic Jam
system. Judging by some of the questions that occur on the list (multiple
builds - dlls etc) it would be good if the whole system could be
incorporated into jam. What does perforce think of this?
The current version of Jam only works with NT, not the other versions of
Win32. (I am betting that quite a few boost users have Win98/95 installed).
David Turner at www.freetype.org has made the necessary changes for it to
work on the other Win32 platforms but has added some extra definitions as
well. There seems to be some overlap here.
Lastly, why isn't allyourbase.jam just called Jamrules? This is the
automatic toplevel that is read in without using the -f option. Or have I
missed something here?
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Boost Build System prerelease
Date: Wed, 20 Jun 2001 19:09:23 -0400
I have corresponded with Christopher Seiwald, the Jam maintainer, about
making changes to the underlying Jam (C/C++) source code. He seemed
generally receptive to a collaboration on a few modifications. I have not
asked him about folding new Jamrules into the Jambase. I am guessing it
would be a hard sell, however. He (and others I suppose) have an investment
in projects built around the existing Jambase. My stuff adds a lot of code
to that, with capabilities and complexity that these existing projects
apparently don't need. I have made some effort to keep the functionality of
existing Jambase rules available in my work, but there's no guarantee that
everything works exactly as it used to.
Yes, in fact, I am beginning to find that to get certain things right the
freetype-specific "subst" rule
(http://freetype.sourceforge.net/jam/index.html#diff) may be necessary.
Jam's built-in string/path manipulation facilities are pretty weak, and can
only get you so far, unfortunately.
Two things:
1. allyourbase.jam is a modified Jambase; I needed to replace some of the
Jambase rules, including SubDir, which gets called before Jamrules is read.
2. I still think it is useful for a project to have a Jamrules file which
starts as a blank slate. I didn't want project users to have to muck about
in allyourbase.jam just to add a few rules or variable definitions of their own.
Of course, the system is under development. Having all those files floating
around is obviously a bit inconvenient. When things get more stable, I think
it would make sense to compile allyourbase.jam and boost-base.jam into Jam
as a new Jambase. For now, I thought it would be useful if people using an
out-of-the-box Jam could try out the build system.
From: "EXT-Goodson, Stephen" <Stephen.Goodson@PSS.Boeing.com>
Subject: RE: Need help with SubDir
Date: Wed, 20 Jun 2001 17:08:50 -0700
I'm still a little baffled by this, but I have figured one more
thing out. Adding grist within the rule doesn't work, but if the
caller adds grist manually it works fine:
Depends [ FGristFiles $(<) ] : $(>) ; #doesn't work,
Depends $(<) : $(>) ;
along with
CatRule <Sub>foo.c : foo.txt ; #works
Is there anyone out there who could explain this to me? Obviously I'd
prefer not to have to add grist manually each time. How do I go about
creating a rule that works as expected with the SubDir command?
From: "Brett Calcott" <brett.calcott@paradise.net.nz>
Subject: Re: Need help with SubDir
Date: Thu, 21 Jun 2001 13:00:58 +1200
I'm not sure why this does not work. The only difference I can think of
(from some of the things that I am doing) is that all of my rules and actions
are in the top Jamrules file, rather than the Jamfile.
From: john@nanaon-sha.co.jp (John Belmonte)
Subject: Re: Boost Build System prerelease
Date: Thu, 21 Jun 2001 14:57:05 +0900
One point about your build system is that it doesn't address dependencies
between different targets. (I understand this is not a requirement for
Boost.) For example my embedded targets often have dependencies on native
targets, such as code generators or data converters. It may not be good to
add a higher level system such as yours to Jam unless it is sufficiently general.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Boost Build System prerelease
Date: Thu, 21 Jun 2001 08:26:44 -0400
Yes, it does (though perhaps not at quite the level you're speaking of).
Right now, there is the <lib>path construct to generate a dependency between
different targets. You can always use Depends directly if necessary. But I
get the sense you're talking about something else.
Yes, it is!
Could you explain a bit more about what you mean? If you have a straw-man
proposal for syntax and semantics of a construct that would allow it in
Boost.Build, that would be very useful as well.
From: john@nanaon-sha.co.jp (John Belmonte)
Subject: Re: Boost Build System prerelease
Date: Thu, 21 Jun 2001 21:44:31 +0900
I'm sorry, I wasn't following the terminology of your design document. By
"target" I meant your build variant, or in other words target platform.
I'd like to do dependencies between variants. How hard would it be to support that?
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Boost Build System prerelease
Date: Thu, 21 Jun 2001 08:51:17 -0400
Oh, I see. If I understand correctly, you're not actually talking about
linking the targets of different variants with one another, but you will
produce an executable with one variant that must be used to build targets in
a different variant. I don't think that would be too hard to do. In fact,
this sounds a bit like something we need for the boost test framework. See
the last sections of TODO.txt for details. I would be very happy to
collaborate with you on this.
From: "David Abrahams" <abrahams@altrabroadband.com>
Date: Thu, 21 Jun 2001 09:23:23 -0400
Subject: Possible error in Jambase MkDir rule?
I copied the following fragment from MkDir for a rule of mine and was
surprised that it didn't work as expected:
if $(NT) {
switch $(s) {
case *: : s = ;
case *:\\ : s = ;
}
}
Is the intention of the 2nd line to match a string ending in
colon-backslash? If so, I think it won't match unless you use a quadruple
(yes!) backslash. At least, that's what my experiments tell me.
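Based on those experiments, a sketch of the pattern that would actually match (the escaping counts come from the experiments described above, not from the documentation):

```jam
if $(NT) {
    switch $(s) {
        case *: : s = ;        # matches a string ending in ":"
        case *:\\\\ : s = ;    # matches a string ending in ":\" --
                               # four backslashes: one escaping level
                               # is apparently consumed by the scanner
                               # and another by the pattern matcher
    }
}
```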
Date: Wed, 20 Jun 2001 11:30:34 +0200
From: "Niklaus Giger" <n.giger@netstal.com>
Subject: Re: Boost Build System prerelease (2nd try!)
I took a quick look at your build system. It sounds very interesting.
I would really appreciate if your clarifications in the section "Internals"
about a few missing pieces in the Jam documentation.
It took me quite a while to figure out, what you describe very clearly.
(And I never had the time to formulate it for my colleagues).
I will discuss it with my co-workers and I think we might try a rewrite of
our adaptions using your boost system as a base.
We are using WindowsNT and cross-compile (GCC) for our PPC403-target
(embedded vxWorks-system). But this will take some time (1-2 weeks).
From: Hamish Macdonald <hamish@tropicnetworks.com>
Date: 21 Jun 2001 15:18:50 -0400
Subject: Interested in "incremental" builds....
I'm interested in enabling our developers to build incrementally upon
a daily loadbuild. The idea is that only files that have changed
since the daily loadbuild need to be rebuilt; only libraries whose
component object files needed to be built need to be re-archived
(archiving in other objects from the daily loadbuild). Only
executables whose component objects or libraries need to be rebuilt
would be re-linked.
I've got a mechanism working that uses GNU make and the VPATH/vpath
mechanism, but I'd really like to use Jam to do my builds.
Unfortunately, Jam doesn't seem to have a mechanism that works enough
like GNU make's vpath to be useful. With VPATH, GNU make will use the
target found via VPATH if it is up-to-date. If it is *not*
up-to-date, it throws away the VPATH-based path for the target and
uses the (relative) path specified in the makefile. I can thus point
VPATH/vpath at my daily loadbuild result, but have anything that is rebuilt
be placed in the local build output directory. Any subsequent
builds will then use the local target for dependency checking as well.
I thought that I could use:
LOCATE on $(target) = ...
and
SEARCH on $(target) = ... ...
to do this with Jam, but if $(LOCATE) is set for a target, it appears
to ignore the $(SEARCH) variable. Ideally Jam would use $(SEARCH) to
find an already existing file for a target, and use that to decide if
it needs to build it; if it needs to build it, it would then build it
in the $(LOCATE) location.
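A sketch of the attempted setup (target and variable names hypothetical), illustrating the interaction described above:

```jam
# Build output should go to the local tree; prebuilt copies live in
# the daily loadbuild.  As observed above, once LOCATE is set, jam
# binds the target to that location and never consults SEARCH:
LOCATE on mylib.a = $(LOCAL_OUT) ;
SEARCH on mylib.a = $(LOCAL_OUT) $(LOADBUILD_OUT) ;
```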
Has anyone else out there done anything like this with Jam, or have any
suggestions? (I'd like to avoid copying the daily loadbuild results to each
user's build tree, since there are 180M of object/library/debug files.)
From: "David Abrahams" <abrahams@altrabroadband.com>
Date: Fri, 22 Jun 2001 14:17:18 -0400
Subject: Jam bug?
The jam docs advertise:
Start-up
Upon start-up, jam imports environment variable settings into jam
variables. Environment variables are split at blanks with each word becoming
an element in the variable's list of values. Environment variables whose
names end in PATH are split at $(SPLITPATH) characters (e.g., ":" for Unix).
On my platform, however (Win2K), $(SPLITPATH) appears to be undefined.
From: <boga@mac.com>
Date: Mon, 25 Jun 2001 18:33:12 +0200
Subject: Multiple Jam processes [WinNT]
Is there any known problem with running several Jam builds at the same time
on Windows2000? (Not the -j option).
While building one program, I'd like to build a different one.
If I do this, both builds will terminate with very strange problems:
'itConnection' is not recognized as an internal or external command,
operable program or batch file.
Where 'itConnection' is different from build to build.
Has anyone else seen this behaviour?
I have the same problem with MSVC, and CodeWarrior compilers, so either the
build-system or jam uses some file/... globally.
From: "EXT-Goodson, Stephen" <Stephen.Goodson@PSS.Boeing.com>
Subject: Solved: Need help with SubDir
Date: Mon, 25 Jun 2001 14:49:55 -0700
I should have been using the gristed name everywhere, including in the
actions. Apparently the way to do that is to create a second rule
that only has actions associated with it and then call that rule
with the gristed name.
So, I think the following is the minimum needed for a rule to work
with the SubDir command:
rule CatRule {
local t = [ FGristFiles $(<) ] ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
MakeLocate $(t) : $(LOCATE_SOURCE) ;
Depends $(t) : $(>) ;
Clean clean : $(t) ;
CatRuleDo $(t) : $(>) ;
}
actions CatRuleDo {
cat $(>) > $(<)
}
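With that rule in place, the Jamfile invocation from the earlier messages stays the same:

```jam
# Jamfile in $(TOPDIR)/Sub/
SubDir TOPDIR Sub ;
CatRule foo.c : foo.txt ;
Main foo : foo.c ;
```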
It would be nice if the documentation contained an example like this,
along with an explanation of how/why it works.
On a related note, I'd like to hear from people who have experience
converting from an existing Imake-based build system. Our current system is quite
complicated and I am finding it extremely difficult to understand
and modify, so I am looking for something simpler. I am hoping that
jam will meet our needs, but after the difficulty I've had constructing
the above "hello world" type example, I'm having my doubts.
To get the above rule to work was not exactly straightforward. Am I
likely to continue to encounter similar problems as I use jam, or having
figured this out, am I over the hump? I'm worried that I'll be creating
a build system that is just as complicated and mysterious to the
next person who comes along as our current system is to me.
I imagine that most jam users started in a similar situation, so I'd
be interested in any experiences or advice that anyone has to share
related to switching a moderately large project over to jam (I have
read the SCM7 paper, and noted that they chose not to use the SubDir
rule at all). I'm especially interested in comments related
to how maintainable and understandable the resulting system is.
Date: Mon, 25 Jun 2001 18:09:12 -0500 (CDT)
Subject: Re: Solved: Need help with SubDir
I have several comments.
I think that grist causes many problems, and if it were applied automatically,
that would simplify jam a great deal.
I think that executing all commands from the dir where jam was invoked
also causes problems when doing some sorts of things, and it runs against
"tradition" and the way people expect compilers to be invoked.
The debug output is very good, compared to make. You can figure everything
out, if you can wade through the zillions of lines of output.
The rule-based approach makes it easy for the end-users, people who just
want to get their stuff compiled etc. and this creates a much more
uniform environment build-wise. I think this is the strongest aspect. Many
machine-dependent details can be hidden in the rules, and the resultant
Jamfiles look quite simple.
When users want to alter or make a new rule, they are mightily frustrated.
My personal experience is that when I get into writing the rules, it starts
going pretty well. My other experience is that it is hard to get back into
writing them. I think extensive comments in rules would help this a lot.
Jam's dependency processing is greatly superior to make, and the fragmented
specification of dependencies that usually goes along with make. I once
converted a small system from jam to make, and I had to discover a lot of
top-level dependencies that jam handled automatically.
We are currently using ant for java compiles, and I am beginning to wish
we had just stayed with jam. I thought that ant actually took care of
figuring out java dependencies! duh.
We use the SubDir rule, and I don't see any problems with using it. Of course,
I have never *not* used it, so my perspective may be poor.
Date: Mon, 25 Jun 2001 17:49:24 -0700 (PDT)
Subject: Re: Solved: Need help with SubDir
Jam has a number of things going for it. For one thing, it's awfully fast.
And, as Dave mentioned, the Jamfiles themselves can be very simple, so
people who deal with it at the level of just adding or deleting a file to
a list from time-to-time don't have to wade through the "guts" just to do
that. Having the rules for most common operations already set up in
Jambase makes it fairly turn-key, so getting a build-process in place can
be pretty quick -- for the first make -> jam conversion I did, I just
wrote a script that generated about 90% of the Jamfiles from the makefiles
('course, I probably couldn't have done that if I hadn't already gone
through and cleaned up all the make stuff several months earlier :)
As with any new language/tool, it does require that you do some reading
and experimenting to get the hang of how it works, but once you do,
writing your own rules for any special needs you have is usually pretty
straightforward as well. (Using the rules in Jambase as examples of how it
works is also a Good Thing, as is running in debug mode so you can see
exactly what all it's doing.)
As for making grist automatic, I wouldn't want it to be something that always
happened, since there are times when you don't want it used.
BTW: I wouldn't recommend Jam for java-based builds -- I converted one
that took 40 minutes using Jam into one that took 4 minutes using Ant.
That's not a knock against Jam -- it's just that handing the Java compiler
the .java files one at a time was slooooow. Dave, does Ant's <depend> task
not do what you need it to?
Date: Mon, 25 Jun 2001 21:10:31 -0500 (CDT)
Subject: Re: Solved: Need help with SubDir
oh, I had the rule set up so that it would compile all the out-of-date
java files in one whack. You are right about the slowness if you do them
one-by-one. I think this is why for java, people just give up and compile
everything each time. The other thing is that really figuring out the
dependencies is not as simple as c++ include scanning, or at least so
I've been told. ant's depends task only orders the sequence of stuff
to do. my understanding is that if you ask a target to be built, it just
does everything in the dependency tree for that target. some of our
xml files are over 1,000 lines long now, and of course if you make
a change to a standard target, you must edit each of the xml build files...
assuming there *are* any standard targets.
Date: Tue, 26 Jun 2001 10:23:50 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: Solved: Need help with SubDir
My work is to write portable software for embedded systems.
I routinely build with different compilers, on different platforms.
When I started, each project had several Makefiles, one per
toolset. Since this led to serious maintenance headaches
(and a clear waste of my time), I finally designed a brand
new "build system" on top of GNU Make.
This was a _horrible_ hack, made of several Makefiles and
sub-Makefiles, that was capable of detecting the current
platform and supporting multiple toolsets easily. The price
for this flexibility, using GNU Make's stupid syntax, came in several forms:
- the rules needed to compile even the simplest
project sub-directory were complex and hard
to understand if you didn't know precisely how
the build system works
- the rest of the build system was really hard to
understand properly, and not the easiest thing
to maintain or extend
Sure, I could compile with several compilers and a single
set of Makefiles, but clearly, that wasn't ideal (and it was slow as well).
So I began looking at alternatives, and found Jam, and
never regretted it. Simply put:
- I'm still able to use several compilers from
the same set of control files. Adding support
for a new toolset is also trivial
- Typical Jamfiles are a lot simpler than anything
I could find, and I do find writing new rules
easy. Of course, that's because I've studied the
"Jambase" in detail in order to support new
toolsets and pretty much know how it works.
- the 'Jambase' file is pretty understandable once
you take the time to read it entirely (and slowly :-).
I think that Jam's biggest problem is the documentation,
which isn't clear enough about its inner workings.
It clearly isn't perfect, but once you master Jam, it's possible
to do very interesting things. Just have a look at the recently
released Boost build system for a very remarkable example.
You need to have a good understanding of the Jam inner workings in
order to write new rules. Unless you want really advanced features,
you should not encounter great difficulties to extend Jam.
Consider also that you won't need to write new rules for each
and every new file in your projects.
Compared to IMakefiles, I'm pretty certain that Jamfiles are
an order of magnitude more maintainable.
Date: Tue, 26 Jun 2001 10:24:30 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: Boost Build System prerelease
I just finished studying the Boost control files, and I'd like first
to congratulate you on your work. What you've done is truly amazing!
I'd very much appreciate to be able to collaborate with you on the
Boost build system. The Jamfiles you submitted brought several questions to mind:
- your modifications are rather extensive, since you've replaced most
of the original Jam build rules with different ones. Do you expect
this work to be incorporated back into Jam itself, or do you
intend to create a fork, in order to create an independent build tool?
- if you intend to fork the tools, would you be interested in
extensions of the Jam C source files in order to support:
- built-in implementations of "sort", "unique", "intersection", etc..
- additional language logic (e.g. variable rule call, like in
[ $(RULE) $(PARAM1) : $(PARAM2) ], substitution, globbing, etc.. )
- other kinds of enhancements that could simplify the Boost
control files tremendously.
- would you be interested in creating new features ? The one I'm
interested in would be "<ansi>on|off" to toggle ANSI compliant
compilation of C source files.
If you're interested, I'm ready to create a source archive that would
compile the Boost build system into a single executable file for easier
distribution. Simply let me know if you're interested..
I'll try to contribute toolset files in the next few days. I should
be able to provide control files for Intel C, Watcom, LCC and a few other things..
From: "David Abrahams" <abrahams@altrabroadband.com>
Subject: Re: Boost Build System prerelease
Date: Tue, 26 Jun 2001 09:01:43 -0400
Well, I planned to gauge reactions and make a determination about how to
approach things. I would love to compile those rules into Jam directly, but
I haven't got any reason to expect that Mr. Seiwald would incorporate my
work back into Jam, so I figured it was safest to assume I was going to be
stuck with -fallyourbase.jam for the foreseeable future. The boost community
has rejected the idea of a completely independent fork (and I agree), so I
am trying to stay compatible with a stock Jam executable (though some of our
users certainly need your Win95/98 work). I think something that was likely
to be merged back into the Jam release (i.e. in the right part of the public
depot, and somewhat blessed by the Jam world) would be acceptable, though.
You may have more insight into the best approach here than I do, however,
since you've been part of the Jam community longer and have made valuable
source code modifications. Suggestions?
I think these are low-priority. Informal tests show that Jam spends much
more time checking file dates than it does in any of these processing rules.
The worst thing they do is to obscure debugging (-d+5) output. My tests
could be wrong, of course.
Yes, the first one would be a huge help. There's one place in particular
that needs it. I have spoken to Mr. Seiwald about that and he seemed open to
the idea. What is globbing?
Simplicity is good. Ultimately I would love to have a Python interpreter
under the covers, but the boost community is not ready to accept Python as a
requirement for builds.
Absolutely. The supported feature set is just a proof-of-concept.
I am very interested. A little guidance with the Perforce public depot would
also go a long way. There are so many things to do, and so little time to
learn new things...
Date: Tue, 26 Jun 2001 17:08:15 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: Boost Build System prerelease
First, I'd like to remark that you can use my "improved" version of Jam today
to run Boost on Windows 9x, since it is completely backwards compatible with
Jam 2.3.1 :-) The changes were submitted a long time ago to the Perforce
team, and I'd love to see them rolled back into the main Jam source tree.
On the other hand, it seems to me that Jam and Boost differ enormously in
their designs, even if they share a common command language (and interpreter);
users of both systems need to think in vastly different ways to build their
projects with either one of them, since even the location of object files
is different between them.
Jam is a marvelous piece of code, and many companies have already made a
rather important investment in it. For example, did you know that the
Apple MacOSX App.Builder IDE uses Jam as its internal build tool ?
This investment is also why I'd be really surprised to see the changes
required for Boost integrated into the Jam source tree anytime soon, since
this would risk breaking too many existing installations/builds.
On one hand, I don't think that creating a fork is going to hurt any
Jam users. On the other hand, it will certainly simplify the use of
Boost, as well as its development.
Agreed, though they could be used by other project-specific Jam/Boostfiles
I'll have a look and see if I can't implement the first one easily. I'm
pretty confident in the Jam sources, and I'm pretty certain that this
thing should be trivial to do..
Well, the name isn't correct. It's basically, the equivalent of the
"wildcard" function in GNU Make, which is capable of returning a list
of files or directories in a variable. It's generally something useful
to scan sub-directories automatically, instead of having to maintain a
fixed list in a control file. Here are examples:
include [ wildcard $(BOOST_INSTALL_PATH)/*-tools.jam ] ;
include [ wildcard src/*/Jamfile ] ;
Well, that's probably what the guys at the Software Carpentry project
want to do. Unfortunately, it seems they're taking a long time to
develop it (fortunately, the end result should be brilliant when completed !!) :-)
In the meantime, we'll need to design our own tools..
Agreed, but I consider that the time spent improving software tools is
nothing compared to the savings this allows in the future !!
Date: Tue, 26 Jun 2001 17:40:41 +0200
From: David Turner <david.turner@freetype.org>
Subject: new release of "FT Jam"
I'd like to inform you that I've just released a new version
of "ftjam", the improved version of Jam that runs under
Windows 9x (originally written for FreeType, but fully
backwards compatible with Jam 2.3.1).
This new release fixes a few annoying things in the original
Makefile/Jamfile (they assumed that the current directory was
in the path, which rarely happens on secure Unix systems).
This new release also supports indirect rule invocation, as in
[ $(RULE) params ]
which calls the rule whose name is given by the expansion
of the RULE variable. This should be of great value to the
Boost community and to other Jam hackers..
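A minimal sketch of what indirect invocation enables (the TOOLSET variable and the Compile-* rule names below are made up for illustration; only the [ $(RULE) ... ] form is the new FT Jam syntax):

```jam
# Dispatch to a toolset-specific rule chosen at runtime.
rule Compile-gcc  { ECHO "compiling $(1) with gcc" ; }
rule Compile-msvc { ECHO "compiling $(1) with msvc" ; }

TOOLSET ?= gcc ;
COMPILE = Compile-$(TOOLSET) ;

# the rule name is computed from a variable, then invoked indirectly
_ignore = [ $(COMPILE) hello.c ] ;
```

This kind of dispatch table is what makes per-toolset rule sets practical without a chain of if/else tests on $(TOOLSET).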
You can download it directly (in source form, as well as a
Win32 binary), at the following address:
http://sourceforge.net/project/showfiles.php?group_id=3157&release_id=41130
Alternatively, you can grab the sources from the Perforce Public
depot. Have a look in //guests/david_turner/jam/src
Date: Tue, 26 Jun 2001 17:56:57 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: new release of "FT Jam"
Mmmm, it seems that SourceForge takes some time before updating the file
list for a new release. For the impatient, try going to:
ftp://ftp.freetype.org/pub/tools/jam
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: new release of "FT Jam"
Date: Tue, 26 Jun 2001 12:41:44 -0400
Fabulous! Does it also work without the square brackets if you don't need a
return value?
$(RULE) params ;
Another moderately-high priority for me, and one I've just discovered, is to
open up the MAXLINE constant for Windows NT. I am not interested in
supporting NT3.5 (sorry!) and it is fairly easy to come up with a link
command line that exceeds the 996 characters allocated by the default Jam on
NT. If you don't feel comfortable about folding that change into FTJam, I
would be "forced" to make the modification myself, resulting in (IMO) a very
silly code fork.
From: "David Abrahams" <abrahams@altrabroadband.com>
Subject: Re: Boost Build System prerelease
Date: Tue, 26 Jun 2001 13:13:04 -0400
Yes, I've been referring Win9x users to your version, thanks.
Well, to be fair, the Boost design is based almost entirely on the Jam
design. The things I learned by studying the Jambase code were essential to
getting the boost stuff working.
Really? Maybe I'm just naive. Could you elaborate?
I don't think the file locations should be that much of a leap; almost
anyone who's built debug and release versions of a project or built with
multiple compilers has had to use a scheme something like this.
The subvariant path branching structure is a little unorthodox: I've had
requests from boosters for a flat build directory scheme (e.g. directories
like pc-linux-gnu-release), but I don't know how to handle platforms with
filename length restrictions (MacOS) or how to come up with a simple
translation between properties and directory name components. I think the
best solution would be to provide a user- (Jamrules-) overridable hook
function for translating paths.
I think I was making something like this argument in
All the same, I have never worked on a major project that could afford to
use a build system structured the way the Jambase rules are, with no simple
way to change compilers, build a different variant of the project, or ensure
link compatibility between separately-compiled objects. It is hard for me to
understand how a cross-platform project, e.g. Perforce, can use Jam
effectively without these facilities. Not that I consider any of this to be
a major failing of Jam, mind you: it provides nearly all of the necessary
infrastructure to do the job, with most pieces incredibly well-thought-out.
Perhaps that makes the most sense.
Sure; but they can be used that way now. What am I missing?
BTW, one of the hardest things to get right was the split-path rule. It
generates reams of debugging output and might be better as an intrinsic
rule. Also, it's unable to remove the trailing slash from the top path
component, so:
[ split-path a:\b\c\d ] = a:\b c d
You might also notice that the boost build system hijacks unix-style
pathnames to do various things like specify multiple default build
subvariants and library dependencies:
<lib>../foo/bar/my_lib # build and link in this library
<runtime-link>static/dynamic # build both versions
In order to work properly cross-platform, this will require extending the
intrinsic path parsing code for platforms other than Windows and Unix. A
change to uniform unix-style path parsing would be fine so long as it works for /all/
supported platforms. Do you have any idea how VMS pathnames work <0.2wink>?
Oh, of course: you meant /globbing/ ;-). I'm not sure I have a use for it,
actually. Did you see a way it could help?
I've corresponded with Steven Knight about this. It's hard for me to tell
whether he actually has a solid idea of the necessary foundation for such a
system. I think it might be fairly easy to slip calls to Python into the Jam
source and come up with an excellent system, but as I've said, that wouldn't
serve my users' needs.
I think my point was just that having a "partner in crime" makes it easier
to get over some of the little bumps in the road that are less about
development than technology rasslin'.
Date: Tue, 26 Jun 2001 14:40:38 -0400
From: Beman Dawes <bdawes@acm.org>
Subject: Re: Boost Build System prerelease
>From: "David Turner" <david.turner@freetype.org>
>I think my point was just that having a "partner in crime" makes it easier
>to get over some of the little bumps in the road that are less about development
>than technology rasslin'.
I'm one of the Boost people who hasn't wanted to see a fork in Jam.
I was assuming that such a fork would be of interest to Boost members only,
and so we would have to maintain a piece of software only tangentially
related to our C++ library goals.
But what seems to be happening in real life is that Dave Abrahams'
innovative work on the Boost build system is of interest way beyond
www.boost.org. Developers unrelated to Boost like David Turner seem
interested enough that they might be willing to climb on board a fork.
So while I still hope that the Perforce folks can find a way to be
responsive, if the two Daves decide to fork, then they will have my
support, and I expect will get the support of a lot of other Boost members.
Date: Tue, 26 Jun 2001 20:36:17 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: new release of "FT Jam"
No, because this would involve non-trivial changes to the Jam
parser. For now, you'll need to assign a dummy variable with the
call as in:
_ignore = [ $(RULE) params ] ;
hideous, but it works..
Well, that should be trivial to change too. However, I'd appreciate if
you could make a small wish list for the next changes. I'd like to
avoid making several releases a day ;-)
Date: Tue, 26 Jun 2001 23:44:46 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: new release of "FT Jam"
Actually, a better solution would probably be to define
a new rule like the following:
##########################################################
#
# invoke VARIABLE : params1 : params2 : params3 : ....
#
# a special rule used to invoke rules indirectly.
# $(<) must be a variable name and will be expanded
# to determine which rule to call
#
rule invoke  # VARIABLE : params1 : params2 : ....
{
local _ignore ;
_ignore = [ $($(1)) $(2) : $(3) : $(4) : $(5) : $(6) : $(7) : $(8) : $(9) ] ;
}
and a simple example would be:
RULE = "ECHO" ;
invoke RULE : "hello world" ;
Subject: RE: new release of "FT Jam"
Date: Tue, 26 Jun 2001 17:18:50 -0700
From: "Chris Antos" <chrisant@windows.microsoft.com>
| Another moderately-high priority for me, and one
| I've just discovered, is to open up the MAXLINE
| constant for Windows NT. I am not interested in
| supporting NT3.5 (sorry!) and it is fairly easy
| to come up with a link command line that exceeds
| the 996 characters allocated by the default Jam
| on NT. If you don't feel comfortable about folding
| that change into FTJam, I would be "forced" to make
| the modification myself, resulting in (IMO) a very
| silly code fork.
I had two problems with the 996 limit -- (1) the limit was frequently
hit when a rule has a bug, but I couldn't see what the bug was, because
it only told me "too big"; (2) some actions are multiple lines long, and
get run as batch files, therefore it may be that the actions are say 4
lines where line 1 is 700 chars long, line 2 is 20, line 3 is 300, line
4 is 20. But the batch file would have run fine. In particular, this
causes problems for my C++/Sbr and Pch/Sbr rules, which have to call
"touch" on the long target filenames after they're produced, to force
them to have consistent timestamps.
However, Jam does need some concept of a maximum line length, so it can
optimize the PIECEMEAL actions. Btw, according to comments in the Jam
source, NT isn't able to execute command lines longer than 996
characters (I don't know if that's really true, I've never tried). So
just increasing MAXLINE may not solve your problem. Perhaps in your
case it would be better to rewrite various actions to create/update an
input file, so the Link actions can simply refer to the input file.
Anyway, here's what I did for a quick improvement to address my two
issues described above:
In jam.h, added this after the #ifndef MAXLINE around line 421:
# ifndef MAXCMD
# define MAXCMD 10240 /* longest command string */
# endif
In command.h, tweaked the "struct _cmd" so:
char buf[ MAXCMD ]; /* actual commands */
In command.h and command.c, tweaked prototype for "cmd_new" so:
LIST *shell, /* $(SHELL) (freed) */
int outsize ); /* max number of chars */
In command.c, tweaked the code for "cmd_new" so:
if( var_string( rule->actions, cmd->buf, outsize, &cmd->args ) < 0 )
In make1.c, inserted this line immediately at the top of the "do..while"
loop in "make1cmds":
int outsize = ( rule->flags & RULE_PIECEMEAL ) ? MAXLINE : MAXCMD;
In make1.c, added an extra parameter to the end of the "cmd_new" call:
list_copy( L0, shell ),
outsize );
In make1.c, parameterized the "actions too long (max ###)" message:
printf( "%s actions too long (max %d)!\n",
rule->name, outsize );
End result -- Jam is still able to tweak PIECEMEAL actions for each OS,
non-PIECEMEAL actions can be up to MAXCMD characters, and the error
message now tells you which rule's actions are too long. Hope this helps!
Date: Tue, 26 Jun 2001 22:26:16 -0500 (CDT)
Subject: Re: new release of "FT Jam"
Our experience is that NT has a rather limited command line length, so
I don't think changing maxline is going to get you very far.
We use response files for long lists of .objs and similar things.
if you concat the macro containing the files with a macro containing
a newline, then you can get it to produce a very long sequence of
echo stmts to create the response file. I can dig up the specifics
if you are interested.
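A hedged sketch of the response-file idea in Jam. This variant uses Jam's "piecemeal" action modifier rather than the newline-join trick described above; the WriteResponse rule name, the $(LINKER) variable and the .rsp suffix are made-up for illustration:

```jam
# Write the object list into a response file a few names at a time;
# "piecemeal" makes jam split the expansion of $(>) across several
# invocations so each generated command stays under MAXLINE,
# and "together" merges all sources targeted at the same file.
actions together piecemeal WriteResponse
{
    echo $(>) >> $(<)
}

# The link command line then stays short: it names only the
# response file, which the linker reads via the @file syntax.
actions Link
{
    $(LINKER) /out:$(<) @$(<:S=.rsp)
}
```

The same pattern appears in stock Jambase for librarian commands, which is why it tends to port well across toolsets that accept @file arguments.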
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: new release of "FT Jam"
Date: Wed, 27 Jun 2001 07:57:20 -0400
The Jam documentation claims NT 4.0 and up have a maximum length of
something near 10K characters. Are you saying that's incorrect, or does 10K
correspond to your idea of "rather limited" length?
Well, at this point, I don't know. Since we do not generate long lines anymore
(except on unix) I don't have any current knowledge. The error msg seemed to indicate
that dos didn't like the long lines, but perhaps it was just an error msg from
jam. Try it and see. I have had to increase the macro string size though.
Maybe we were using a slightly different dos...
I don't think that 10k is that small.
From: "Prabhune, Abhijeet" <APrabhun@ciena.com>
Date: Wed, 27 Jun 2001 13:03:45 -0400
Subject: Queries regarding jam usage.
Questions;
1) What's the utility of JAMSHELL? Can it be used to start another command
shell and call another executable from that shell, e.g. a batch file invoked
from within the new shell, or will it just start a new shell and execute jam
in its context?
2) The second question is also related to batch files: is it possible to invoke
batch files from within jamfile? If yes how?
From: Paul Moore <gustav@morpheus.demon.co.uk>
Date: Wed, 27 Jun 2001 23:15:03 +0100
Subject: Re: new release of "FT Jam"
I tested once. Win32's raw CreateProcess API managed to handle a command line of
3M (yes, Megabytes) or so, IIRC. But if you go through the shell, you are
limited by that. The Win 9x shells (COMMAND.COM) tend to have silly limits like
128 bytes. I'm not sure about CMD.EXE, but experiment puts it at 2046 bytes. 4NT
(a CMD.EXE replacement) takes 2047, but crashed on anything over 2045 for me.
So the long & short of it seems to be, on NT you should limit lines to something
on the order of 2000 characters if you are going via a shell, but if you invoke
CreateProcess directly, you can have arbitrarily long lines.
[I'd be tempted to suggest that the normal behaviour should be limit to 2000
characters and use the shell, but have a special command form which goes direct
via CreateProcess (and so doesn't support things like redirection, shell
internal commands, etc) and allows an unlimited line length, for specialist
cases.]
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Re: new release of "FT Jam"
Date: Wed, 27 Jun 2001 20:31:50 -0400
So, Jam writes a .bat file which it invokes. How does that square with what
you're suggesting here?
From: Paul Moore <gustav@morpheus.demon.co.uk>
Subject: Re: Re: new release of "FT Jam"
Date: Thu, 28 Jun 2001 20:46:30 +0100
The limits for .bat files are those of the shell, so you'd have to limit
individual lines to 2000 characters (obviously, the whole file can be as long as you
like) on NT. Dunno for 9x...
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Re: new release of "FT Jam"
Date: Thu, 28 Jun 2001 17:54:50 -0400
Can you escape lines with backslashes, or does that amount to "cheating"?
From: "Paul Moore" <gustav@morpheus.demon.co.uk>
The limits for .bat files are those of the shell, so you'd have to limit
individual lines to 2000 lines (obviously, the whole file can be as long as
you like) on NT. Dunno for 9x...
Date: Fri, 29 Jun 2001 10:48:52 +0200
From: David Turner <david.turner@freetype.org>
Subject: RFC: On the future of Jam, "FT Jam" and Boost
I'd like to collect opinion from a large pool of Jam users
regarding potential and upcoming enhancements to Jam. I'm
sorry if the following is a bit long, but I've tried to
make it as clear and accurate as possible.
To summarize the following, I'd say that I propose the following:
- to rename "FT Jam" to something a bit more pleasant
- to create a SourceForge project for it, and use it for:
* distribution of source and binary packages
* providing a mailing list related to the
developments / improvements made to the
new "FT Jam" ( using the current list for
normal Jam / Jamfile usage questions )
* providing a CVS repository for the improved
sources. This seems necessary for a lot of
people who would like to contribute but do
not master Perforce, nor want to take the
time to install and learn it on their systems.
- the project would contain two "modules":
* the first one being the enhanced Jam sources themselves
* the second one being the "boost" build
system. It will depend on the first one.
It's important to understand that all improvements integrated
into "FT Jam" should be *completely* backwards compatible with
the official Jam sources, in order to avoid breaking existing
Jamfiles. As was said previously, some companies have made
a tremendous investment in Jam.
On the other hand, the boost build system will use some of
the C sources provided by the enhanced Jam module, but
also provide its own set of control files (i.e. the equivalent
of "Jambase") as well as other C source files of its own.
This should allow the creation of a single-executable "boost.exe".
The goal of all of this is to be able to experiment nearly
freely with new "features" in the "boost" module, while
slowly moving the tested/validated ones into the "enhanced
jam" one. The Boost control files will never migrate to
this module though..
I welcome any comments. More importantly, I welcome any
suggestions for the new "enhanced jam" module. Please, please,
do not suggest cryptic acronyms :-)
I. Jam and FT Jam:
I have made critical improvements to the base Jam sources that are
distributed under the name "FT Jam". You'll be able to
find more information about it at the following address:
http://www.freetype.org/jam/index.html
I'd like to insist on the fact that the improvements
present in this version of Jam are *completely* backwards
compatible with the Jam 2.3.2 sources, and have been
submitted to the Jam maintainer. They can be classified
into two classes of improvements:
- modifications to the C Jam sources, in order to
run correctly on Win 9x systems, as well as implement
new built-in rules (namely HDRMACRO and SUBST), etc..
- modifications to the Jambase itself, in order to
support more toolsets on Windows and OS/2 systems.
In all cases, it's important to note that _existing_ Jamfiles
should work without a single change with "FT Jam", and I'm
committed to always meeting this requirement in further improvements
that could be made in the future. If you find something in
FT Jam that seems to "break" one of your builds that otherwise
work perfectly with the official release of Jam, please
contact me to get this fixed !!
The name "FT Jam" isn't great, but I couldn't
find a better one for now. I welcome any suggestion for
something more appealing (possibly avoiding strange
acronyms, _please_, I like being able to spell my tools'
names in my basic English :-)
II. Boost:
Recently, David Abrahams announced on this list the release
of a new build system named 'Boost.Build', which I'll call
'boost' in the rest of this document.
Boost is a set of control files that override the original
Jambase and provide a different set of rules to developers
when they write "Boostfiles" (instead of Jamfiles).
Boost manages advanced concepts that are completely alien
to the original Jam/Jambase, like build variants,
features/properties, requirements, etc.. and makes a
professional developer's life a lot easier.
The differences are so great that, from a user's point of
view, Boost and Jam can even be thought of as radically
different designs.
III. Jam limitations (wrt Boost):
Using Boost is currently a bit awkward for at least two
reasons:
- to use it, you need to copy several boost control
files to your project's top directory. Since boost
is still in rather heavy development, you need to
continuously update these files if you use them
- you need to invoke Jam (or FT Jam) with the "-f"
flag in order to not use the default Jambase
Meanwhile, Boost is currently limited by some drawbacks
of the original Jam design, and would benefit greatly from
a few improvements made to the Jam C sources themselves.
I have myself released a new version of FT-Jam recently
that addressed one of these issues (while still maintaining
compatibility, I insist !! :-)
Because of these inconveniences, a recent proposal was
made to fork the Jam sources in order to enhance Boost
capabilities, while being able to build a single "boost"
executable that would be, indeed, much easier to use than
the current scheme.
The problem with this approach is that improvements to
one branch (e.g. Jam/FT Jam) would not benefit the
other one (respectively, the Boost version of the Jam
sources).
IV. Forking isn't necessary:
After some thought, it seems however that we do not
need to make a decision as drastic as forking the
Jam source tree entirely.
Since Boost is really a set of control "Jamfiles", the
original Jam (or FT Jam) sources can be used _directly_,
to build a single "boost.exe" that would incorporate
all "Boostfiles". To explain this, I'll detail the way
the Jam sources are currently organized:
- a first set of C source files is used to create
a library, called "libjam", that provides the
base Jam functionality (i.e. control file parsing
and execution).
- a single control file, named "Jambase", contain default
rule definitions for Jam, including "Cc", "Link",
"Library", as well as various compiler-specific
variable definitions and actions
- the "mkjambase.c" program, used to convert a text
file into an embeddable C source. It is currently
used to convert "Jambase" into "jambase.c"
- a front-end program named "jam.c" which is statically
linked with "jambase.c" and "libjam", used to generate
the single executable known as "jam.exe" or "jam"
This scheme allows us to design Boost as follows:
- a set of control files, like "all-your-base.jam",
"features.jam", etc.. that can be processed through
"mkjambase" in order to convert them to C source files
- a front-end program, named "boost.c", which is
statically linked with "all-your-base.c", etc.. and
"libjam". It would be used to generate a single
executable named "boost.exe" or "boost".
- optionally, some other C source files used to augment
"libjam" with new features (e.g. new rules).
These two designs are not incompatible, which allows boost to
benefit from all improvements made to the Jam sources.
V. Source Code Location :
The current Jam sources are available through the
public Perforce Depot (see //guests/david_turner/jam/src).
Though I submitted my changes more than a month ago,
the Perforce Jam team didn't find the time to review them
to incorporate them into the official Jam sources (or simply
reject them).
I do not blame them for this, since they most probably have
different priorities to deal with. However, as time passes,
two things are happening:
- I'm likely to add more and more enhancements
to FT Jam, which only widens the gap between it and the
official Jam sources (NB: while still preserving
backwards compatibility). This will make the review
and integration/rejection of FT Jam enhancements
simply harder for the Jam team when they start
doing it.
- other people are starting to contribute changes, or
willing to do so. Many of them are only familiar with
CVS, and do not want to install or take the time to
learn a new tool. I admit that I'm not really comfortable
with Perforce myself, though I've tried rather hard :-(
I thus propose to create a new CVS repository on a SourceForge
project page to handle both the "FT Jam" and "Boost" sources.
Using SourceForge has several benefits:
- an easier management of access rights for different
writers on the CVS repository than what can be done
with the guests branch of the public Perforce depot
(it seems).
- the ability for _many_ developers to easily download
the latest sources or stable releases through CVS,
submit fixes, view revision history, etc..
- the ability to browse the CVS sources from the web
- a dedicated web page/address, plus download locations
and information through HTTP/FTP.
The current public depot sources will be kept as is, and
updated periodically in order to submit only widely tested
and stable enhancements, just in case the Jam team finds
the time to review them..
I know that Perforce is a lot better than CVS in a lot of
ways, especially for complex projects with lots of
developers. However, I believe that using CVS for something
as simple as the Jam+Boost sources should not hurt us.
From: "Robert Cowham" <robert@vaccaperna.co.uk>
Subject: RE: RFC: On the future of Jam, "FT Jam" and Boost
Date: Fri, 29 Jun 2001 11:57:58 +0100
Regarding the CVS thing - it should be possible to maintain a CVS and
This is available from CPAN (currently in beta), and the mailing list is
revml at http://maillist.perforce.com/mailman/listinfo
Date: Fri, 29 Jun 2001 13:08:01 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Well, I'd like to seriously counter that !!
I know quite a few Unix people who would _really_ love to get
away from the _atrocities_ of the damned GNU build tools when a
sufficiently mature alternative is available.
I also know some Windows developers who would like to develop
for Unix, but are less than thrilled at the idea of coping with
the "gang of four" (i.e. AutoMake+AutoConf+LibTool+Make) [1]
I believe that Jam has a _big_ potential, but is still rather
limited currently (e.g. its inability to build DLLs or programs
that use them correctly). Fortunately, it's sufficiently flexible
to allow the addition of custom rules to overcome some of its
shortcomings, and for most developers, it's a real godsend.
It has, at least, drastically simplified the build and testing
process of a couple of cross-platform projects.
Boost is a drastic improvement over the original Jam design
and promises to bring industrial-strength builds with a very
simple system.
In short, the benefits of using Jam and Boost are tremendous,
even if they still require some learning.
(Yes, I'm passionate about this.. but I used to work in a
large company that used its own complex build system based
on make and a bunch of Perl scripts, and believe me, that
was really tough..)
By contrast, I believe that the benefits of switching from
CVS to Perforce, while being real and proven, are of lesser
importance to the casual developer.
That's why I think that once Jam and/or Boost mature enough,
you'll see developers from all over the planet literally switch
to Jam in droves, and ditch the "Make" of the worlds :-)
Well, it's just opinion anyway ;-)
[1] And yes, I realise that AutoConf and LibTool will still be
needed with Jam/Boost on Unix systems..
Date: Fri, 29 Jun 2001 14:29:22 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Exactly, but this kind of branching is no problem for CVS either.
What concerns me is to relocate files in one branch, and not the
other, while still being able to merge them correctly..
(i.e. FTJAM/libjam/file.c would be "merged" with JAM/file.c when needed)
Does Perforce support such a thing ? If it doesn't, we'll need
another depot or repository (independent of the tool used).
Another solution is to make all file relocations before the
branch is created in the official Jam sources, and I'm
pretty certain it's not going to happen too soon, and for
very good reasons :-)
Actually, I don't think we need to keep track of revision numbers
and comments between the Jam and FTJam depot/repositories which is,
I believe, the most complex part of this process..
I had the intent of doing something around this when a stable
release of FT Jam is ready:
- copy files from libjam/ and jam/ to my client
view of //guests/david_turner/jam/src
- integrate them, using no comments (or minimal ones)
Now, the Perforce team would be free to integrate these
changes back to the official Jam sources. I intend to
keep a ChangeLog in order to ease this task..
What do you think about it ?
Date: Fri, 29 Jun 2001 15:42:32 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
If you do that, Perforce falls down to CVS standards, which makes
Perforce users complain a lot.
That's called integration in Perforce-speak.
Most people branch "all files in .../mumble/..." into another directory. Much
easier on the brain. But Perforce can branch any file into any other file,
and keeps track of which parts of which changes are integrated.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Date: Fri, 29 Jun 2001 09:49:49 -0400
If the boost build system finds wide use, it might make sense to use the
list you mention for that as well.
I think it might be unwise to start new projects at SourceForge, given the following:
http://iwsun4.infoworld.com/articles/hn/xml/01/06/27/010627hnvalinux.xml?062
We at www.boost.org are currently investigating alternatives.
Agreed.
FWIW, I like "ftjam" and wouldn't waste time finding another name. I tried
to do the same with my python/c++ binding library for boost, but just ended
up with "the boost python library" (Boost.Python).
This seems to happen every time something of interest outside the hardcore
C++ community appears at boost. People often refer to the boost python
library as simply "boost". I don't mind you calling Boost.Build just "boost"
as a kind of shorthand, as long as it's clear that boost has a very
different identity: www.boost.org is an open-source peer-reviewed C++
library development group.
Except they're still called "Jamfile".
Giving credit where it's due, the design of Boost.Build draws HEAVILY on the
design of Jam.
Just to clarify, these files don't have to live in your project's top
directory. For example, the boost Jamrules file currently contains:
BOOST_BUILD_INSTALLATION ?= $(TOP)$(SLASH)build ;
BOOST_BASE_DIR ?= $(BOOST_BUILD_INSTALLATION) ;
Which places these files in a subdirectory called "build". You could also
specify absolute paths which get them from some other location (e.g. a server).
That's quite inconvenient, and something I'd like to address.
Hmm; that's not what I envisioned.
1. allyourbase.jam is really a modified Jambase. For a while I tried to
ensure that it would be strictly compatible with the original Jambase, but
eventually gave up. Still, as long as users' Jamfiles stay away from using
variables with certain naming conventions (I'm thinking of names like
"gALL_UPPER_CASE") I think it should be possible to roll the changes back
into the Jambase from FTJam without breaking any code. There are two issues:
a. The original Jambase would cause an error if you didn't set variables
describing your single toolset. That behavior is inappropriate for
potentially multi-compiler builds.
b. The original Jambase rules are underspecified and there are no unit
tests for them, so it's hard to ensure that you've preserved the intended
behavior. We could deal with this by writing improved specification and unit
tests for the Jambase rules we modify.
2. I /like/ the fact that features.jam and the toolset definitions are not
compiled into the Jam executable. It keeps the build system configurable and
customizable. It should be possible to add features, toolsets, and variants
without recompiling the executable.
Back to the naming issue: it's a small thing, but I wouldn't like to
distribute an executable called "boost" unless the www.boost.org
participants agreed that it was appropriate.
All that said, I like the general direction you're going in.
<snip>
I'd like to suggest that you consider hosting FT Jam where boost ends up
being hosted, since we are currently exploring another CVS host with better
long-term prospects. The other advantage to this is that we anticipate
having the ability to perform server-side maintenance jobs, such as moving
files in the repository, etc., for which you currently have to petition (the
almost certainly understaffed) SourceForge.
Date: Fri, 29 Jun 2001 09:30:45 -0700
From: Jos Backus <josb@cncdsl.com>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Hear, hear. Maybe when Jam was designed speed was an issue, but it seems to be
a much cleaner design to keep the interpreter (jam) and the script
(Jam{base,file}, etc.) separate. This has the added advantage of avoiding
binary rebuilds when the script changes, which can be a pain when you have to
maintain versions of jam for multiple platforms.
Date: Fri, 29 Jun 2001 13:29:35 -0400
From: Beman Dawes <bdawes@acm.org>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
That will cause endless trouble. Boost is a repository of free C++
libraries, not a build system. Please use Boost.Build or some other name
so you don't dilute all the work Boost developers have done to make the
Boost name synonymous with high quality C++ libraries.
I'd also like to second the comments of Dave Abrahams and Jos Backus to the
effect that the Boost.Build rules shouldn't be compiled into the Jam binary.
For a site like www.boost.org, which updates its libraries regularly, we
don't want to require users to download a Jam executable every time they
download the Boost libraries. The Jam binary should stay very stable, IMO.
Date: Fri, 29 Jun 2001 20:08:21 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Humm.. there seems to be some misunderstanding. The reason for
embedding control files within the executable is to simplify
distribution by including the _defaults_ in a single file. This
doesn't mean that using other scripts should be restricted.
I have myself done quite a bit of hacking on the Jambase in order
to support a few more toolsets, and I value the "-f" flag in
Jam very dearly :-)
I'll try to give more info on this next week (it's week-end
time here in France :o)
PS: And I agree that a different name is required for the
build system. I feel that "Boost.Build" is likely to
be abbreviated by most users as simply 'boost' so I'd
rather favor a drastic name change.
"Marmalade" has been suggested, it seems nice :-)
Date: Fri, 29 Jun 2001 11:23:40 -0700
From: Jos Backus <josb@cncdsl.com>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Yes, I am aware of this distinction. I guess my position is that jam should
not have any built-in default behaviour; it should merely be an interpreter
for the Jam language. Just like, say, BSD make, which doesn't have any rules
compiled into the binary; they live in /usr/share/mk/*.mk.
How about "Jelly" or "GLU" :)
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Date: Fri, 29 Jun 2001 15:45:33 -0400
I think people would like some simple way to configure the executable to use
a different "base" file without the need for "-f".
I suppose it's easy enough to fake that with the appropriate shell script,
but it might make sense to give people a compiled-in mechanism, like the use
of a .jamrc file.
It's cute (and above all, very French), but I have these concerns:
1. It's a long name to type. Anything longer than "make" will deter adoption.
2. I think I'd like to keep the boost identity attached to the architecture
somehow.
From: Paul Moore <gustav@morpheus.demon.co.uk>
Subject: Re: Re: new release of "FT Jam"
Date: Sat, 30 Jun 2001 18:00:02 +0100
Yes. Not with backslashes - you use ^ instead. So
dir ^
/?
works - both on CMD.EXE and 4NT.EXE. This is *definitely* not Win 9x compatible, though.
Date: Sun, 01 Jul 2001 08:36:57 +0200 (CEST)
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
From: Werner LEMBERG <wl@gnu.org>
`jelly' is even better, as someone else suggested. So I withdraw my
suggestion in favour of this.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Date: Sun, 1 Jul 2001 08:54:20 -0400
The low-level build system is Jam -> FTJam -> whatever you want to call it.
The high-level part written in the (FT)Jam language is what I've been
calling the Boost Build System. I would like to maintain that association
with boost.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Date: Mon, 2 Jul 2001 11:15:40 +0100
Except that it's got nothing to do with jam outside North America. :-)
Personally, I quite liked marmalade. As someone mentioned, it's a lot to
type. How about 'curd', as in Lemon?
Date: Mon, 02 Jul 2001 15:41:01 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Yes, but that scheme is not going to translate well on Windows, OS/2,
and a few other platforms supported by Jam, where hard-coded paths
aren't exactly the default..
I think that both approaches have their merits. After all, we can
easily choose to use the "/usr/share/jam/..." scheme on Unix-style
systems, and the single-executable one on others.
As long as the tool is easily extensible by users, whether through
environment variables, .jamrc files, command-line flags, etc.,
it really doesn't make much of a difference where the defaults are stored :-)
Date: Mon, 02 Jul 2001 15:55:20 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Yep. Either a ".jamrc" file or an environment variable should do
the trick. I think it's wiser to implement _both_ schemes, since
users have different expectations depending on the system they're
working on, typically:
- Unix users all have a HOME variable defined and can use
a ".jamrc" file easily.
- Windows users typically don't; for them an environment variable pointing
to a configuration file is simpler than defining a HOME variable
and then a file named ".jamrc" :-)
And of course, the command-line flags should still be there for other
users too (shell scripts invoking dynamically-generated Jamfiles
is just an example :-)
It's not French actually :-)
jam <=> confiture
marmalade <=> marmelade
jelly <=> gelée
I believe that the difference between these three words
is the sugar/fruit ratio, though I'm unsure..
OK. Moreover, it's more than 8 letters long; think about how ugly
"marmal~1" is going to be on Win9x systems ;-)
Humm.. maybe we should choose two names:
A - one for the "enhanced jam" thing (currently FT Jam)
B - one for the "Boost.Build" thing
For "A", I think that any name would fit, because the final
executable should ideally still be called "jam", since it
will be 100% backwards compatible with the official Jam.
For "B", I propose "booster".
- it's short
- it has the "boost" identity
- space aeronautics are cool :-)
From: <boga@mac.com>
Date: Mon, 2 Jul 2001 16:56:58 +0200
Subject: RE: FT Jam
I agree 100% with this. Having no binary package was very frustrating when I
was getting started with jam.
I think that a jam user has two options:
1. Use your own Jambase (-f option, or compile it into jam.exe). This is what
Apple's ProjectBuilder, and many others, do (including us).
2. Try to live with the built in Jambase without any modification to it.
Generally I cannot imagine #2 for a complex build system. The current
Jambase is simply too limited for it.
I think Jambase should be to Jam what the standard C library is to ANSI C. It's
not that now. For example, since the current Jambase doesn't follow any naming
convention, just adding a new variable name to Jambase can break compatibility
with existing makefiles!
I vote for a brand new Jambase!!!
I think that boost's Jambase should be kept in the boost project.
What we did to avoid the problem of compiling the Jambase into jam.exe:
1. We store a set of jamfiles in a folder named Jam inside the project root (ROOT).
2. We compiled a thin Jambase into jam.exe that contains only the
implementation of a 'SubDir'-like rule that does the following:
- calculates the project root (ROOT)
- includes ROOT/Jam/Base.jam
- calls _SubDir (which can be defined in ROOT/Jam/Base.jam)
With this design we can edit our Jambase without recompiling the jam.exe,
and without using the -f option.
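A minimal sketch of such a compiled-in bootstrap (FindRoot, _SubDir, and the Jam/Base.jam layout are hypothetical names from the description above, not real Jambase rules):

```jam
# Hypothetical thin built-in Jambase: locate the project root, load the
# real rules from ROOT/Jam/Base.jam, then hand control to _SubDir.
rule SubDir
{
    if ! $(ROOT)
    {
        ROOT = [ FindRoot ] ;   # FindRoot: hypothetical root-locating helper
        include $(ROOT)$(SLASH)Jam$(SLASH)Base.jam ;
    }
    _SubDir $(<) ;              # _SubDir is defined in Base.jam
}
```

Everything else then lives in editable .jam files under ROOT/Jam, so changing the rules never requires recompiling jam.exe.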
Date: Mon, 2 Jul 2001 11:07:48 -0700
From: Jos Backus <josb@cncdsl.com>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
True. One could use registry settings or OS2.INI entries in those environments.
From: Paul Moore <gustav@morpheus.demon.co.uk>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Date: Mon, 02 Jul 2001 20:37:43 +0100
On Windows, it is trivial (ie, a single API call) to get the name of the jam
executable. Locating the default jambase alongside this (ie, jambase.jam in the
same directory as jam.exe) makes sense, and is quite a common model.
From: "Jerry Nettleton" <nett@mail.com>
Date: Fri, 6 Jul 2001 00:53:18 -0500
Subject: message & resource compilers
I am new to jam and have been experimenting with building a Windows
program. I've read about using UserObject and supporting new file
extensions. But after several days of changes, I still can't get jam to
recreate the desired flow of compilation.
questions
1. Since file.rc and file.h are generated from file.mc, how can I make
sure the message compiler (mc) is always run before the resource compiler
(rc) creates file.res or before compiling file.c (dependent on file.h)?
2. With this setup, jam is looking for a way to compile file.res into
file.obj. How can I change this behavior so that file.res is linked with
Main (prog)?
simulated desired flow
cc lib1.c
cc lib2.c
rem generate file.rc, file.h
mc file.mc
rem generate file.res
rc /r file.rc
copy file.res ..\lib
copy file.h ..\include
cc file.c (dependent on file.h)
lib /out:libutil.lib lib1.obj lib2.obj
cc prog1.c
cc prog2.c
link /out:prog prog1.obj prog2.obj libutil.lib file.res
jamfile
LIBSRC = lib1.c lib2.c file.c ;
PROGSRC = prog1.c prog2.c ;
RCFLAGS = /r ;
Main prog : $(PROGSRC) ;
MainFromObjects prog : file.res ;
LinkLibraries prog : libutil ;
Library libutil : $(LIBSRC) ;
InstallLib ../lib : libutil ;
#GenFile file.h : mc file.mc ;
#GenFile file.rc : mc file.mc ;
#GenFile file.res : rc /r file.rc ;
UserObject file.h : file.mc ;
UserObject file.rc : file.mc ;
UserObject file.res : file.rc ;
InstallFile ../lib : file.res ;
InstallFile ../include : file.h ;
jamrules
rule UserObject {
switch $(>) {
case *.mc : MessageCompiler $(<) : $(>) ;
case *.rc : ResourceCompiler $(<) : $(>) ;
case * : ECHO "unknown suffix on" $(>) ;
}
}
rule ResourceCompiler {
Depends $(<) : $(>) ;
Clean clean : $(<) ;
}
actions ResourceCompiler { rc $(RCFLAGS) $(>) }
rule MessageCompiler { Depends $(<) : $(>) ; Clean clean : $(<) ; }
actions MessageCompiler { mc $(MCFLAGS) $(>) }
From: "Yannick Cornet" <yannick.cornet@openwave.com>
Date: Fri, 6 Jul 2001 16:33:58 +0200
Subject: Multiple Jamfile
I would like to compile a component multiple times, using different flag
options defined in separate jamfiles, all in one pass (one jam invocation).
However I could not find a way to tell jam to run through more than
one Jamfile in the same subdirectory. I suppose this must have been asked
before; can anyone help me answer this?
Date: Fri, 6 Jul 2001 17:03:09 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Multiple Jamfile
In Jam, I'd say you want to create several targets from the same
source(s), and set the CPPFLAGS, CFLAGS or somesuch on each target.
OPTIM on fastTarget = -O2 ;
OPTIM on debugTarget = -g ;
In Jamfile: "include otherfile ;", methinks. I only ever use that to pull
in Jamrules, but it ought to work for you too.
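For per-file flags, the stock Jambase also has helper rules along these lines (the file names and flag values here are made up for illustration):

```jam
# Compile util.c with extra flags while leaving the rest of prog alone.
# ObjectCcFlags is a stock Jambase rule; it adds to CCFLAGS on util.o.
Main prog : main.c util.c ;
ObjectCcFlags util.c : -DVERBOSE -O2 ;
```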
From: "Yannick Cornet" <yannick.cornet@openwave.com>
Subject: Re: message & resource compilers
Date: Fri, 6 Jul 2001 17:17:39 +0200
I am not sure if this will help; I am also quite new to Jam, but this works for us:
JAMRULES:
RSC = rc ;
MSC = mc ;
RSC_FLAGS = "/l 0x409" ;
rule UserObject {
switch $(>:S) {
case .idl : ObjectFromIdl $(<) : $(>) ;
case .mc : MessageCompiler $(<) : $(>) ;
case .rc : ResourceCompiler $(<) : $(>) ;
case * : EXIT "Unknown suffix on" $(>) "- see UserObject rule in Jamfile(5)." ;
}
}
rule ResourceCompiler { Depends $(<) : $(>) ; Clean clean : $(<) ; }
rule MC { Depends $(<) : $(>) ; Clean clean : $(<) ; }
rule MessageCompiler {
local _h = $(>:S=.h) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
MC $(<) $(_h) : $(>) ;
}
actions ResourceCompiler { $(RSC) /fo "$(<)" $(RSC_FLAGS) $(>) }
actions MC { $(MSC) $(MSC_FLAGS) $(>) }
JAMFILE:
MessageCompiler MSG00409.bin : UIStrings.mc ;
INCLUDES res$(SLASH)Basic.rc2 : MSG00409.bin ;
LinkLibraries Basic.dll : <libs> ;
Dynlib Basic.dll :
Basic.rc
StdAfx.cpp
Basic.cpp
;
where Dynlib simply calls Main but changes some link flags and also
builds the lib target, and LinkLibraries creates the library dependencies
of targets. I use the standard Jambase.
* * *
Date: Fri, 6 Jul 2001 13:02:59 -0500 (CDT)
Subject: Re: Multiple Jamfile
We have done this by creating new sibling directories and putting Jamfiles
in them that use the source from the original dir. This seems to work quite
well. You just set one of the search macros to point to the source dir.
Otherwise all rules stay the same; you just set some of the subdir flags differently.
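A sketch of that sibling-directory approach (the directory and file names are hypothetical; SubDir, SEARCH_SOURCE, and FDirName are stock Jambase names):

```jam
# Jamfile in TOP/debug, reusing the sources kept in TOP/src:
SubDir TOP debug ;
SEARCH_SOURCE = [ FDirName $(TOP) src ] ;   # look for sources in ../src
OPTIM = -g ;                                # variant-specific flags
Main prog : prog.c util.c ;
```

Objects and binaries land in TOP/debug while the sources stay in TOP/src; a sibling TOP/release Jamfile would differ only in its flags.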
From: Alain KOCELNIAK <alain@corys.fr>
Subject: Installation on NT fails
Date: Mon, 9 Jul 2001 08:59:57 +0200
I'm a new jam user.
I encountered a problem during jam installation on NT:
- I followed the README file step by step:
* uncomment in Makefile the lines for the NT platform
* set MSVCNT="C:\Program Files\Microsoft Visual Studio V6.0\Vc98"
* run nmake
- The first step of compilation (nmake) works well:
G:\Jam\Jam.2.3\nmake
Microsoft (R) Program Maintenance Utility Version 6.00.8168.0
cl /nologo /Fejam0 -I "C:\Program Files\Microsoft Visual Studio
V6.0\Vc98"/include -DNT command.c compile.c execunix.c ...
command.c
compile.c
execunix.c
...
Generating Code...
Compiling...
parse.c
pathunix.c
pathvms.c
...
Generating Code...
- The second step ( jam0 ) fails :
jam0
Compiler is Microsoft Visual C++
...found 131 target(s)...
...updating 29 target(s)...
Cc bin.ntx86\command.obj
command.c
jam.h(73) : fatal error C1083: Cannot open include file: 'fcntl.h': No
such file or directory
cl /nologo /c /DNT /Fobin.ntx86\command.obj /I"C:\Program\include
/IFiles\Microsoft\include /IVisual\include /IStudio\include
/IV6.0\Vc98"\include command.c
...failed Cc bin.ntx86\command.obj ...
Cc bin.ntx86\compile.obj
compile.c
jam.h(73) : fatal error C1083: Cannot open include file: 'fcntl.h': No such file or directory
cl /nologo /c /DNT /Fobin.ntx86\compile.obj /I"C:\Program\include
/IFiles\Microsoft\include /IVisual\include /IStudio\include
/IV6.0\Vc98"\include compile.c
...
It seems that the MSVCNT path is not used correctly: a /I directive is
added after each space character in it...
I also tried with MSVCNT defined without double-quotes, but then the first
step (nmake) failed.
What is wrong with the procedure I followed?
From: Alain KOCELNIAK <alain@corys.fr>
Subject: RE: Installation on NT fails
Date: Mon, 9 Jul 2001 11:01:41 +0200
I put your hack in the Jambase file but it changes nothing.
Should I put it directly in the Jambase file or in the Jambase.c file?
Date: Monday, 9 July 2001 09:55
Nothing, but the default Jambase doesn't like the spaces in the
MSVCNT variable. I made the following change to Jambase
to fix this:
#if $(OS) = SUNOS && $(TZ)
#{
# Echo Warning: you are running the SunOS jam on Solaris. ;
#}
# Rule to unsplit a variable imported from the environment
# MSVCNT for us. We need it in a single variable with
# spaces and all. Jam splits it on space e.g.
# MSVCNT=D:/Program Files/DevStudio/VC
# turns into:
# $(MSVCNT[1]) = "MSVCNT=D:/Program"
# $(MSVCNT[2]) = "Files/DevStudio/VC"
# This rule puts the spaces back.
# Call it with the name of the variable
rule respace {
local _re ;
local _name ;
# Get the contents of the variable ;
_name = $($(1)) ;
if $(_name[2-]) {
local i ;
local space = " " ;
_re = $(_name[1]) ;
for i in $(_name[2-]) {
_re = $(_re)$(space)$(i) ;
}
$(1) = $(_re) ;
}
}
if $(UNIX) {
if $(OS) = QNX {
AR default = wlib ;
CC default = cc ;
...
#$(MSVC)\\lib\\kernel32.lib
;
LINKLIBS default = ;
NOARSCAN default = true ;
OPTIM default = ;
STDHDRS default = $(MSVC)\\include ;
UNDEFFLAG default = "/u _" ;
} else if $(MSVCNT) {
ECHO "Compiler is Microsoft Visual C++" ;
respace MSVCNT ;
AR default = lib ;
AS default = masm386 ;
CC default = cl /nologo ;
CCFLAGS default = ;
C++ default = $(CC) ;
C++FLAGS default = $(CCFLAGS) ;
LINK default = link ;
Date: Mon, 25 Jun 2001 19:50:39 -0700 (PDT)
Subject: Re: Solved: Need help with SubDir
[A bit off-topic for the Jam list -- if you're not curious about using
Ant, you can skip this.]
The build I was dealing with couldn't do that since it had build-order and
different-classpath issues, and trying to deal with getting that all
correct in Jam would've been a lot less straightforward than just putting
it all into Ant (plus it has all the other Java-oriented built-in stuff).
I don't think so -- what it does is associate a dependency between files
that are depended on and the files that depend on them. E.g., if Foo.java
is depended on by Bar.java and Blat.java, and Foo.java is newer than its
classfile, using the <depend> task before the <javac> task will cause not
only Foo.java to be recompiled, but Bar.java and Blat.java as well.
Very few tasks will run regardless of whether the files involved in the
target are up-to-date (<echo> is the only one that comes to mind). The
<javac> task checks source-files against class-files and only hands off to
the compiler those source-files that are newer than their class-files or
that don't have a corresponding class-file.
If all your .java files can be compiled at once, I can't think of any
reason why you'd need to duplicate a compile target in every build-file.
If you do need to for some reason, you should probably be able to use the
XML include mechanism (for now -- a better include should be coming with
Ant2) and still just have a "standard" in one place, which gets read in --
or you could have the subproject build-files use an <ant> task that runs a
target in some "standard targets" build-file.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Fri, 13 Jul 2001 17:23:59 +0100
Subject: jam -d0 returns failure?
I'm using the external makefile feature of Visual C++ to build some code
using Jam. If I invoke jam with -d1 or -d2, it works fine. If I invoke jam
with -d0, jam returns an error, but builds the project correctly anyway.
From: andreas.held@gretagimaging.ch
Date: Mon, 16 Jul 2001 11:52:50 +0200
Subject: Dependencies and SubInclude
I am quite new to Jam, so please excuse me if I am trying to do something stupid.
Anyway, I am trying to compile a DLL using several subprojects that are located
in subfolders. What I am doing now is to place a Jamfile into each subfolder,
specifying the compilation rules for the files contained in those folders. Using
LOCATE_TARGET I copy all object files to a central location. In the main Jamfile
I then add the targets for building my library. However, how can I make sure
that the SubInclude statements are being processed before the MainFromObjects
rule? Actually, what happens is that the MainFromObjects is processed first,
and only then are the different SubIncludes carried out. Is there a way to
enforce the right order?
On a more general note, what I am actually trying to do is to build a DLL by
including a configurable part of all object files. Is there a general way for
doing this?
Here is my top Jamfile:
advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib
;
ImgProcPhoenix\\Debug\\ImgProcPhoenixD.lib IntelPLSuite2.5\\lib\\msvc\\ipl.lib ;
From: "Bill Clark" <bill@occamdev.com>
Date: Fri, 13 Jul 2001 14:31:53 -0700
Subject: jam vs. cons?
We're looking at make replacements. Does anyone have thoughts
on the relative merits of Jam and Cons (http://www.dsmit.com/cons/)?
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Tue, 17 Jul 2001 15:28:44 +0100
Subject: Changing behaviour of default SubDir rule, w.r.t. ALL_LOCATE_TARGET -- a good idea?
I've been attempting to get my output files placed in "Debug" or "Release"
directories[1]. The LOCATE_TARGET variable does exactly what I want, except
that it's reset every time the SubDir rule is invoked.
The ALL_LOCATE_TARGET doesn't do what I want -- it causes all of the output
from every subdirectory to appear in the same place. In fact, I'm not sure
what the SubDir rule was attempting to do:
LOCATE_SOURCE = $(ALL_LOCATE_TARGET) $(SUBDIR) ;
LOCATE_TARGET = $(ALL_LOCATE_TARGET) $(SUBDIR) ;
This appears (to me at least) to do the following:
1. If $(ALL_LOCATE_TARGET) is not set, then LOCATE_SOURCE and LOCATE_TARGET
become $(SUBDIR). This makes sense - the output files are dumped into the
correct subdirectory.
2. If $(ALL_LOCATE_TARGET) is set to "Debug", for example, then they're set
to Debug $(SUBDIR).
In case (2), above, only the first token of $(LOCATE_TARGET) is ever used.
So why doesn't it just do it with an 'if'?
Anyway, I've changed the SubDir rule in my local Jambase so that the
relevant lines look like this:
if $(ALL_LOCATE_TARGET) {
LOCATE_SOURCE = [ FDirName $(SUBDIR) $(ALL_LOCATE_TARGET) ] ;
LOCATE_TARGET = [ FDirName $(SUBDIR) $(ALL_LOCATE_TARGET) ] ;
} else {
LOCATE_SOURCE = $(SUBDIR) ;
LOCATE_TARGET = $(SUBDIR) ;
}
Which does exactly what I want. My question: is this a good idea? Will it
have bad side-effects that I've not foreseen?
[1] It's a little more complicated than that, but this serves for the example.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Tue, 17 Jul 2001 18:46:04 +0100
Subject: Invoking external build processes
What's the best way to invoke external build processes from jam?
Three scenarios:
1. External Makefile; e.g. Linux kernel needs to be built as part of the build process.
2. MS Developer Studio .dsp file; e.g. id3lib builds with a .dsp file.
3. Jamfile; e.g. freetype needs to be built and linked with, but we don't
want to touch the included Jamfiles.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: Invoking external build processes
Date: Wed, 18 Jul 2001 16:54:20 +0100
Nice try, Arnt, but I'm being a little slow today. How do I persuade jam to
always build something?
I've got to deal with the following:
I've got a copy of id3lib in my TOP/lib/id3lib directory. It's to be built
on Win32, so the simplest thing to do is to invoke developer studio as follows:
msdev $(TOP)\lib\id3lib\prj\id3lib.dsp /MAKE "id3lib - Win32 Debug"
or
msdev $(TOP)\lib\id3lib\prj\id3lib.dsp /MAKE "id3lib - Win32 Release"
I'd like to invoke the rule something like this:
MSDev id3lib : TOP lib id3lib prj ;
This would make a target id3lib, which would require the same-named .dsp
file in the directory named as the second argument to be built.
I'd then want to invoke another rule, like this:
UseMSDev mainapp : id3lib ;
Which would cause mainapp to depend on id3lib.
1. Does this make sense?
2. Does it seem like a good way to do it?
3. How do I communicate to the linker when building mainapp that it needs to
drag in the stuff generated in the MSDev rule? I considered adding a third argument:
MSDev id3lib : TOP lib id3lib prj : id3lib.lib ;
This would (implicitly) attempt to build
$(TOP)/lib/id3lib/prj/$(DEBUG_OR_RELEASE)/id3lib.lib by invoking $(2).
But, how do I communicate $(3) to the UseMSDev rule?
Date: Wed, 18 Jul 2001 18:31:53 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Invoking external build processes
ALWAYS target ;
I can't really say. It kind of seems to make sense, but the sense eludes
me. A sure sign that I'm too tired. I'll try and reread it tomorrow morning.
Wouldn't a simple Depends mainapp : thestuffgenerated ; do that?
Date: Fri, 20 Jul 2001 11:44:28 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Invoking external build processes
The UseMSDev rule could say either
Depends $(<) : $(>).lib ;
or
LINKLIBS on $(<) += $(>).lib ;
or both. That ought to be enough. Of course, the hardcoding of .lib is
rather a hack. Jam's a bit weak on library handling, I think.
For .lib, substitute .dll, or whatever.
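Putting the pieces of this thread together, the two rules might be sketched like this (all names, including the $(CONFIG) variable, are hypothetical; ALWAYS, Depends, FDirName, and LINKLIBS are the real Jam/Jambase names):

```jam
# MSDev target : dir components ;  -- rebuild target via Developer Studio.
rule MSDev
{
    DSPDIR on $(<) = [ FDirName $(>) ] ;
    ALWAYS $(<) ;        # always run msdev; let it decide what is out of date
}
actions MSDev
{
    msdev $(DSPDIR)\$(<).dsp /MAKE "$(<) - Win32 $(CONFIG)"
}

# UseMSDev app : extlib ;  -- app depends on, and links against, extlib.
rule UseMSDev
{
    Depends $(<) : $(>) ;
    LINKLIBS on $(<) += $(>).lib ;   # hardcoded .lib, as Arnt notes
}
```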
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Sun, 22 Jul 2001 19:28:33 -0400
Subject: Jam's dangerous syntax
I think this must be the 2nd time in the past 3 weeks I've found a bug
lurking in my Jam code because I forgot a semicolon:
my-rule $(args) : $(more-args) # oops
another-rule an-arg ;
The 2nd line is silently interpreted as part of the first rule invocation!
I'm not sure what the answer might be other than to surround all calls with [].
At least I'd find out about the problem by the end of the enclosing block.
Date: Mon, 23 Jul 2001 09:28:16 +0100
From: Julian Gardner <joolz@rsd.tv>
Subject: New User
I have just spent the weekend playing with Jam and found that once I had
made the separate Jamfiles I had most of my project building.
I need some help here, please.
I have a file called TEXT.C which includes numerous .LNG files; some
of the .LNG files are taken from raw Unicode and converted using an
in-house converter. How do I go about setting up the dependencies and
the conversion?
e.g. in my makefile
arabic.lng: arabic.unicode
convertArabic arabic.unicode arabic.lng
text.c: arabic.lng english.lng
$(CC) text.c
Date: Mon, 30 Jul 2001 12:45:51 +0200
From: David Turner <david.turner@freetype.org>
Subject: FTJam 2.3.5 release
I'd like to inform you that "FT Jam" release "2.3.5" is now
out. The purpose of this release is mainly to:
- clean up the build system (Makefile, Jamfile, etc.) to
perform proper compilation on Unix and Windows systems
- add a new directory named "builds" containing specific
Makefiles to build the program with Visual C++,
Borland C++ and Mingw (gcc) on Windows
- update the documentation. A new file named "INSTALL"
was added, detailing how to compile and install the
program on your system
- implement a new builtin, FAIL_EXPECTED, for the Boost.Build system...
Source packages, as well as Win32 and Linux binaries are
available. Please have a look at:
http://www.freetype.org/jam/index.html
(or one of the FreeType mirror sites).
Note that the web pages have been slightly updated. The
complete description of changes between FT Jam and Jam is now at:
http://www.freetype.org/jam/changes.html
Note that I'll try to make a second RFC on this list in
order to explain my intentions regarding Jam/FTJam for
the near future..
PS: Since the Jam documentation itself is so "rough", I have
also started new documentation for the Jam command
language. A draft can be read at:
http://www.freetype.org/jam/syntax.html
Date: Mon, 30 Jul 2001 13:56:53 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Jam's dangerous syntax
Compilers usually "solve" such problems with warnings. In this case, if
jam sees an argument on the beginning of a line, and that word is also the
name of a rule, then warn.
Of course, compilers generally don't have the option of changing the
language to remove the problem.
Date: Mon, 30 Jul 2001 13:35:11 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: Invoking external build processes
I won't comment on 1. and 2. here, but the FreeType 2 Jamfiles are already
designed in a way that lets you use them directly in other projects
(i.e. without touching them).
To do that, put something like this in your own Jamfile:
FT2_TOP = [ FSubDir path to freetype2 ] ;
SubInclude FT2_TOP ;
I use it for a custom project; it seems to work very well..
Date: Mon, 30 Jul 2001 13:54:09 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: New User
You need a custom rule for each type of action, and you invoke that rule
for each "pair" of files. I say "pair" because it really is one target to
be built and zero or more that form the basis for building.
Maybe something like this:
rule FromUnicode {
Clean $(<) ;
RmTemps $(<:S=.o) : $(<) ;
Depends $(<) : $(>) ;
INCLUDES text.c : $(<) ; # a bit of a hack, really
}
actions FromUnicode { convertArabic $(>) $(<) }
FromUnicode arabic.lng : arabic.unicode ;
Main yourfinalname : text.c other.c third.c ;
Jam will understand that text.c includes arabic.lng, and thus that text.o
depends on arabic.lng. It also knows that arabic.lng depends on
arabic.unicode, and all the right things will get executed when you 'jam
yourfinalname'.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Jam's dangerous syntax
Date: Mon, 30 Jul 2001 09:19:51 -0400
I think I'd want to be able to turn that warning off for general
consumption, but leave it on during development. Aside from that small
refinement, I think you have hit on a beautiful solution.
From: Vladimir Prus <ghost@cs.msu.su>
Date: Tue, 24 Jul 2001 19:40:46 +0400
Subject: Unnecessary recompilation
Consider a simple Jamfile
Main a : a.cpp helper.cpp ;
Main b : b.cpp helper.cpp;
The problem with it is that helper.cpp gets compiled twice. Is that a bug
or a feature? I don't understand. How can it be changed? I understand that
I can simply grab the C++ rule from Jambase and add the "together"
modifier, and then do the same with the C rule. Is there a better way?
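One workaround that uses only standard Jambase rules (a sketch; it assumes helper.cpp needs no per-executable compile flags) is to put the shared source into a Library, so it is compiled once and linked into both programs:

```
# helper.cpp is compiled once into the library, then linked twice
Library helper : helper.cpp ;

Main a : a.cpp ;
LinkLibraries a : helper ;

Main b : b.cpp ;
LinkLibraries b : helper ;
```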
Date: Thu, 26 Jul 2001 18:39:15 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: Anyone else using Jam for BIG projects?
I work for Cisco Systems where JAM has been used for about 3 years as the
primary build tool on an embedded systems project I have been involved with
for 1 year. We have been working in somewhat of a vacuum with respect to
JAM (though we have communicated directly with Christopher Seiwald, we
haven't really participated on this mailing list much since Karl Klashinsky
left our team). As a first step to changing that isolation, I have read
all recent traffic on the archive for this list (since June 1, 2001 and
selected bits prior) and I have peeked at both Boost.Build and ntjam -- way
cool stuff by the way -- which makes me regret being so insular over here.
I have three questions for you. Any answers will be appreciated...
Our project currently consists of about 15,000 source files (several
million LoC) including 18 different source suffix types, two different
target CPUs, many different code variants for different
platforms/products/build types. The code is all located in a large,
complex directory structure managed by local Perl code over ClearCase (to
give us a copy-out model, and multi-site capabilities) with several hundred
engineers working at many sites on four continents rapidly evolving and
growing the source base. We have about 10,000 lines of Jam source to
define rules and actions for our many source types and another 65,000 lines
of code that consists mostly of rule invocations to build things. On a
typical 20 CPU Sun Enterprise E6500 build server, we are looking at around
30-60 minutes to build the main targets in one of our workspaces depending
on server load. My guess at this point is that we may be the biggest kid
on the JAM block. So my first question is:
Is anyone else out there using JAM for anything as complex as this?
Unfortunately we find that JAM has some problems in our large and complex
environment. Most particularly, dynamic header scanning is becoming
prohibitively time consuming for us. We believe it is the most significant
redundant operation performed at each jam (most of our source and header
files are read end-to-end for the regexp pattern matching at each jam --
maybe 25,000 files). We do have a few optimizations coded into our huge
home-grown header scanning rules that we use to avoid big chunks of this
pain, and we also use a redundant second set of jam infrastructure (one
automatically generated from the other to mangle the dependency tree for
speed based on hints from the engineers workspace). So my second question is:
Has anyone been working on easing the pain of header scanning large
numbers of source files?
Today my group is at the point where we need to make some big changes for
the stability and extensibility of our tooling, and we feel that JAM in
its current state makes this very difficult. I am looking at either:
- altering the JAM executable significantly to fix some things we see
as design weaknesses and to add some major new features we need (beyond
those I have read about here), or
- really biting the bullet and switching to some other build tools
(e.g., I am looking at GNU Cons, GNU Make 3.79.1, and others).
By the way, GNU Cons <http://www.dsmit.com/cons> looks interesting --
similar dynamic header scanning to JAM, but nice extensible OO Perl and
infinitely more flexible than regexp scanning -- maybe slow though, I
haven't tried it. GNU Make 3.79.1 is used widely here at Cisco for other
even larger more complex code bases so we have some local expertise with it
and its predecessors. That version has some JAM-like parallelism features
and several of our users would love us to switch to it, but I am guessing
it would still likely require some recursive make-invoking-make nonsense,
which would likely be costly in our environment due to our richness of
source types. The make team here at Cisco has some nifty tooling for
handling dynamic header dependency calculations without scanning though,
but I think we can migrate that to the JAM environment. So anyway, my
third question is:
Does anyone have any thoughts on other build tools that might be worth
exploring to handle a project of this complexity (like GNU Cons, GNU make
3.79.1, or others)?
I would be interested in comparing notes with any of you who use JAM for a
large complex project. Maybe we can exchange some ideas about this
stuff. And I promise, if we do decide to go down the road of modifying
JAM, I will be back here on this mailing list to discuss what we are
thinking about doing and to seek feedback and possibly collaboration
opportunities.
Date: Tue, 31 Jul 2001 11:24:01 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Unnecessary recompilation
(I suppose the missing space on the second line is just a typo.)
It sounds like a pure bug in jam - jam doesn't understand that the two
helper.o targets are the same.
From: Vladimir Prus <ghost@cs.msu.su>
Date: Tue, 31 Jul 2001 16:04:20 +0400
Subject: Problems with: custom rule; cross-directory dependencies
The practical task motivating my post is the following. I need to make an
executable from two source files: main.cpp and parser/asm.wd.
The transformations that need to be applied to parser/asm.wd are:
- parser/asm.wd -> parser/asm_parser.whl, parser/asm_lexer.dlp
- parser/asm_parser.whl -> parser/asm_parser.cpp, parser/asm_parser.h
- parser/asm_lexer.dlp -> parser/asm_lexer.cpp, parser/asm_lexer.h
There are three utilities which can perform these transformations. The
resulting *.cpp files should be compiled and linked into the executable.
The problems are:
1. Jam assumes each source given to the Main rule should end up as an
object file, which is not the case for parser/asm.wd. One can instead write
Main main : main.cpp parser/asm_parser.cpp parser/asm_lexer.cpp ;
but this is hardly convenient.
2. Writing rules for each transformation is boring. I've written a rule
called 'UserAction' and used it like this:
UserAction asm_parser.cpp asm_lexer.h : asm_parser.whl : whale ;
It worked when the wd file was in the same directory as main.cpp, but
when it was put in the parser dir, a problem appeared:
Main main : main.cpp parser/asm_parser.cpp parser/asm_lexer.cpp ;
Here, the Main rule adds grist to parser/asm_parser.cpp.
UserAction asm_parser.cpp asm_lexer.h : asm_parser.whl : whale ;
Here, UserAction adds grist to asm_parser.cpp, and we end up with two targets:
<somegrist>parser/asm_parser.cpp and
<somegrist!parser>asm_parser.cpp, which are not considered equivalent. Note
that the Main rule erases existing grist on sources, so saying anything like
<parser>asm_parser.cpp in the Main rule is to no purpose.
I wonder if anybody has ready solutions for these problems, or plans to do
something about them, or simply ideas about which changes are required.
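One direction for keeping the two spellings bound to a single target, sketched here with heavy assumptions (it ignores UserAction's third, tool-naming argument and assumes the rule is invoked where SOURCE_GRIST matches the Main invocation), is to grist inside the rule exactly as the Main/Objects chain does, using Jambase's FGristFiles:

```
rule UserAction
{
    # apply the same SOURCE_GRIST that Main applies to its sources, so
    # both rules end up naming the same gristed target
    local t = [ FGristFiles $(<) ] ;
    local s = [ FGristFiles $(>) ] ;
    Depends $(t) : $(s) ;
    SEARCH on $(s) = $(SEARCH_SOURCE) ;
}
```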
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Anyone else using Jam for BIG projects?
Date: Tue, 31 Jul 2001 15:21:52 -0400
I've been developing a build system for boost (www.boost.org) based on Jam
that is designed to handle problems like the ones you're describing, but
have not thrown any really huge build jobs at it yet.
I think my build system may have some similar problems eventually. When you
are building a collection of targets on multiple compilers and build
variants, what do you really need to do to identify header files properly?
Well, if each header target gets scanned only once for #includes, then you
need to grist header files with:
1. The header name
2. The header search algorithm used by the compiler
3. The complete list of user (#include "...") and system (#include <...>)
paths being used to compile the source file
4. Possibly even the directory of the #including file, depending on the
search algorithm.
That means the same header file must be gristed differently when used in
different builds in order to get the correct dependencies registered. If
each distinct target (as opposed to file) is scanned separately, the same
header will be scanned many, many times.
I've been wondering if it would be reasonably easy to use Jam to generate
the equivalent of gmake's .d files, which could help us to avoid some of the
scanning. I don't have any idea about the feasibility of that idea, though.
It's probably worth noting that cons' Python derivative, scons, is now under
development at sourceforge. I think that will be a more suitable tool for
large projects, mainly because I know that a large project build system is a
codebase all its own and I think Python is better suited to large projects
than Perl. Also, a bunch of experienced and energetic people are working on it.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Anyone else using Jam for BIG projects?
Date: Wed, 1 Aug 2001 12:38:16 -0700
Our situation is not as large as yours, but it's comparable, I think.
Nt) with 6 variants (various levels of debug, purify, quantify)
We have some 6000 source files with about 20 file types.
We have approximately 500 Jamfiles with 12000 lines of rule
invocations. (Rules are never defined in Jamfiles, only in Jamrules.)
Our Jambase is so minimal, it contains just 37 lines. All it really
does is call $TOP/Jamrules and ./Jamfile. We have not recompiled
or relinked the jam executable in the last 2 years. We last recompiled
to fix a bug in jam's AIX archive support (the bug still exists
in the official jam release).
Our Jam system has some 30,000 targets.
It takes about 8 hours to build the system on our fastest
machines, 1 hour if we use -j10. On Solaris it takes
3 days! (4 hours if we use multiple machines, -j 6, and
JamShell to rsh the commands across the network.)
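The rsh trick uses jam's JAMSHELL variable, the command jam wraps around each action; % is replaced by the action's command text, and ! by the -j slot number. A minimal sketch with a made-up host name:

```
# hand every action's command text (%) to a remote shell instead of the
# local /bin/sh; "buildhost1" is a placeholder
JAMSHELL = rsh buildhost1 /bin/sh -c % ;
```

In practice one would pick the host per -j slot, e.g. by pointing JAMSHELL at a small wrapper script that maps the ! slot number to a machine.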
We probably have 1.5 million lines of source code.
No, and I don't think it is really required. And something
like the Boost Build System is the wrong way to go, in my
opinion.
On our system we have implemented some very straightforward
solutions to shorten build times. We also have a system which
is as flexible as the Boost system (I believe) but much easier
to work with and understand.
Here is what we do.
We have a Bld component which contains all build and build support
code. This component includes some little gems such as idl.pl
and link.pl which augment these portions of the build. For
example, we need to pre- and post-process IDL files and the IDL
output. Our jam rules know about the build dependencies but nothing
about the complexities associated with idl compilation. Link.pl
has specialized knowledge about how to handle the various linker
types that we use on each platform. It preprocesses command lines
to make jam's job easier. It also handles modifying the link
for Purify and Quantify. Jam communicates with link.pl via
environment variables.
Periodically, our build team produces reference builds on each
platform. Developers then "shadow" the reference. The shadow
process produces a copy of the development tree, but with only links
to the files (on Unix anyway). Shadowing takes only a few minutes.
An unmodified shadow will build nothing, and normally takes less
than 1 minute.
Our build process uses a build/export/release model. So "jam"
will build local targets, "jam exports" builds and exports (to
the rest of the development tree) those targets that need exporting, and
"jam release" builds a release (installation tree) ready for packaging.
Calling jam in any directory ONLY builds that directory's targets,
sub-directory targets, and dependencies (wherever they may be).
So if you modify a header file, say, and then change to the Vman/src
directory and "jam exports", only Vman is built. None of the other
200-some-odd executables are built. Header file scanning only occurs
for the files directly or indirectly required for Vman.
Calling jam from the top of the development tree builds all targets.
Even when building the whole tree, header file scanning takes less than
5 minutes, and since that is a very small fraction of the total build
time, there is no need to optimize it further.
Note, each file is scanned only once during a build, regardless
of how many times it is referenced. This is because exported
header files are NOT gristed. Of course, we include header files as
#include <Cmp/File.h>
so the component (Cmp) becomes part of the jam node name and effectively
grists the header file.
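A minimal sketch of that layout (the directory names are invented): all exported headers sit under one include root, so the scanned include text itself, component prefix and all, becomes the node name:

```
# every exported header lives under one root, e.g. $(TOP)/export/Cmp/File.h
# (the "export" directory name is a placeholder)
SubDirHdrs $(TOP) export ;
# a scanned "#include <Cmp/File.h>" now yields the single ungristed node
# Cmp/File.h, shared by every source file that includes it
```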
jam is so much easier on the end users. Gmake would not handle
our whole build tree, at least the version we had 3 years ago
would not. Remember, our jam tree is equivalent to a makefile
with 20000 lines (or more) and 30000 targets. Further, we wanted
a build process which was local (current working directory)
sensitive, which is hard to achieve in make unless you digress
into a non-declarative build system.
I would not switch from jam; there is nothing to equal it at present.
The built-in distribution rule set is a fine example of jam rules,
but it will always need to be rewritten for any serious project.
Just like with make, I have always ended up completely redefining
the built-in set of rules.
The new syntax helps some, but not enough to cause me to upgrade.
Jam has its problems, no doubt -- but I'll keep them to myself until I
have time to do something about them :)
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Unnecessary recompilation
Date: Wed, 1 Aug 2001 12:49:17 -0700
Main a : a.cpp helper.cpp ;
calls Objects a.cpp helper.cpp ;
which calls Object helper.o : helper.cpp ;
which calls C++ helper.o : helper.cpp ;
Main b : ...
also makes a similar call sequence,
including C++ helper.o : helper.cpp ;
Since the C++ rule was invoked on
helper.o twice, Jam feels the need to
invoke the C++ action twice.
That's just the way that Jam was written.
You can see lots of evidence of this
behavior in the distributed jam rules.
Look for lines like
if ! $($(<)-included) { $(<)-included = true ; }
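The same guard can be applied to the recompilation problem; a sketch (the rule name is invented) that wraps Object so a second invocation on the same target becomes a no-op:

```
rule ObjectOnce
{
    # only the first invocation on a given target does anything
    if ! $($(<)-built)
    {
        $(<)-built = true ;
        Object $(<) : $(>) ;
    }
}
```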
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Problems with: custom rule; cross-directory depende
Date: Wed, 1 Aug 2001 13:09:44 -0700
I would add extension .wd to your Objects rule
(or UserObjects rule),
something like:
case .wd :
    switch $(<:S=)
    {
    case *_parser : C++ $(<) : $(>:S=_parser.cpp) ; Whale $(>) ;
    case *_lexer  : C++ $(<) : $(>:S=_lexer.cpp) ; Whale $(>) ;
    }
Modify your Objects rule like this (don't take me too literally):
rule Objects
{
    local i j s x ;
    makeGristedName s : $(<) ;
    for i in $(s)
    {
        objectFiles x : $(i) ;
        for j in $(x)
        {
            Object $(j) : $(i) ;
        }
    }
}
where objectFiles converts .cpp to .o, and .wd to _parser.o and _lexer.o.
And a Whale rule:
rule Whale
{
    if ! $($(<)-whaled)
    {
        $(<)-whaled = true ;
        WhaleDo $(<:S=_parser.cpp) $(<:S=_parser.h)
                $(<:S=_lexer.cpp) $(<:S=_lexer.h) : $(<) ;
    }
}
rule WhaleDo { Depends $(<) : $(>) ; }
actions WhaleDo { Whatever }
Variations on this are possible; for example, you could divide Whale into
two rules, Lexer and Parser, say.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Anyone else using Jam for BIG projects?
Date: Wed, 1 Aug 2001 18:05:50 -0400
It seems a little unfair for you to toss that remark off without an
explanation. I respect your experience in building large projects. What
about Boost.Build do you think is inappropriate?
Subject: RE: Anyone else using Jam for BIG projects?
Date: Wed, 1 Aug 2001 16:55:42 -0700
Yes, sorry, I guess it's a bit unfair. A few weeks ago (months ago :)
I intended to send you a more formal (and private) criticism
of the Boost system, but one thing led to another, ...
By the time I sat down to write a reply, I had convinced myself
that you were far enough along that major criticism could add no value.
(From my frequent job as code reviewer, I know that most often,
going back five steps is not an option people will entertain.)
If I was wrong, then I'll spend a Saturday and give you some feedback
that you can use.
What do I think is inappropriate? In a nutshell, I think that
the tack of adding "requirements" in the way that you have makes
the user's job of specifying jam files more difficult. What
would I propose as a solution -- well, that's what I would need
a day to formulate.
Date: Wed, 01 Aug 2001 17:39:54 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: Re: Anyone else using Jam for BIG projects?
Thanks very much for your prompt and thought-provoking reply! Comments below....
I downloaded "build_system_2001_6_18.zip" and have read a little bit about
your system. I will read more about it when I get a chance. It looks very
interesting. With the large amount of JAM source we have in our workspaces
and with lots of vocal engineers, I am a bit leery of adopting anything
that will require major changes to our users' Jamfile code, but I am not
dismissing Boost.Build out of hand. I will familiarize myself with it
before making any big decisions about it. So you can probably count on
some naive questions from me on this list when that happens. :-) At the
very least, I expect to learn some new things while digging into
Boost.Build. Thanks for your efforts on this project!
We don't grist them with this information but we do store all of this
information (associated with the header name as it will be included) by
using indirectly named variables which we can retrieve later. For example:
# early on, in the Parsing Phase-executed code in our header exporting
# code (we require explicit exports for our public header files so we
# can control and track API access) we do this
x = <header name and directory as it will be included> ; # e.g., x = foo/bar.h ;
$(x)_blah = <information we need later in the build> ;
...
# later on, in the Binding Phase, when scanning files and executing our
# local header rules
if $($(header_name_as_included)_blah)
{
    # This header has the "blah" information set for it, so go ahead
    # and use it...
    ... $($(1)_blah) ...
}
We also attach some info to an indirect variable keyed on just the header
name base/suffix, since some headers get included this way and the search
paths take care of picking up the right ones. I.e., as above, but:
x = <header name base and suffix only> ; # e.g., x = bar.h ;
$(x)_twiddle = <information we need later in the build> ;
Yes, but we have decided not to worry about being perfect about this. We
don't want re-scanning so we accept a few spurious dependencies. In the
spirit of JAM's built-in promiscuity with respect to adding header
dependencies based on regexp pattern matching, without regard to
conditional compilation, we simply accept that a few spurious dependencies
will arise from our technique when two different public headers in
different directories have the same base/suffix (which occurs surprisingly often).
Excellent! I had wondered if anyone else out there had been thinking along
these lines. I am very excited that someone else is interested in this,
David. The GNU make 3.79.1 community elsewhere in Cisco has set up their
build to use these .d files very effectively. They essentially get 100%
accurate header dependency information (re-scanned in every context where
every header is included, with appropriate preprocessor activity) and they
get it at near zero processing cost since it gets generated by the compiler
(C, C++, and we could easily make our other dozen compilers do the same) as
a byproduct of building the files. At first glance this looks like a bit
of a chicken-and-egg problem, in that you get the dependency information
off by one: you have to build your code to get its dependency info in
order to know whether to build your code in the first place. But since the
only way the dependency profile can change is by the alteration of one of
these dependent files, you can be assured of knowing when this dependency
info is inaccurate, and you can drive the build accordingly. I am very
interested in incorporating this strategy into JAM. Not exactly making jam
generate ".d"-equivalent files in Jam syntax, but since the compilers
create these things in "make" syntax, just make it possible for JAM to use
this information (maybe using new internal code, or an external mechanism
like a plug-in module of some kind; I think there are a few ways we could
do this...). Anyway, I hope something like this is feasible. There are
certainly some issues with respect to generated code that will make this
challenging in our (Cisco's) environment, but I have some thoughts about
that. I would love to talk more about this with any interested folks.
Cool. I'll take a peek. Thanks for the tip.
I am very glad to hear that. I will be socializing on this mailing list
some ideas our team has about significant changes to JAM. I want to see,
if we go this far into changing JAM, will we be going there on our own or
will the existing JAM community rally around this stuff and cheer. And I
am hoping to hear from folks using JAM in other contexts about whether
these ideas make sense in their contexts. We are very open to being
convinced that these changes are not needed. Anyway, we have an
embarrassingly long list of changes (currently there are about 29 or 30
fairly big things) on our wish list. There are maybe 6 or 7 themes among
them that I will bring to the list separately for discussion (one of which
is the .d stuff mentioned above). Thanks for your interest!
Date: Wed, 01 Aug 2001 18:22:32 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: RE: Anyone else using Jam for BIG projects?
Thanks for your response! Your environment looks very similar to ours in
complexity (and I think also in the architecture of your JAM code).
My comments are below...
It sounds to me like you are in the same ballpark with us in terms of build complexity.
Well, from my viewpoint, the jury is still out on this question. I need to
read more about Boost.Build.
Verrrry similar to our build environment. We have a "build" component (and
many other build support components under "buildx/*", "util", "tools/*").
Again, we have very similar tooling. To create our duplicate JAM
infrastructure (which we call "BOB" -- Binary-Oriented Build) takes us just
under a minute. The resulting infrastructure knows only where binaries are
located, not what their dependencies are (i.e., in the duplicate
infrastructure, jam does *not* know how to build these entities at
all). Build times for us run around 3 minutes in a BOBbed workspace.
We also use this extensively. Building from a component directory in our
environment takes on average 39 seconds (the last time we checked). This
technique essentially clips off the entire dependency tree outside of the
component except for those headers included and static libraries linked
from outside. Most of our engineers work in this way, but whole-tree
builds are required for some (such as infrastructure components), who
use BOB unless their APIs change, in which case they have to do a
BOB-less whole-tree rebuild. So most of our folks have build times under a minute,
the rest get to build in 3 minutes or so most of the time but occasionally
they take the big hit on API changes (typically about 15-20 minutes on a
tree that has been fully built previously and has minimal changes). About
50-60% of that time is spent executing JAM source and header scanning.
Our environments differ there.
We do the same.
We do the same, and headers are only ever scanned once in any build. We
take great care to ensure all build changes preserve this, since header
scanning is such a big part of our build.
The Cisco make gurus use make recursively to achieve this. The cost is
that build rules (how to build this suffix from that suffix) must be read
at every recursive make invocation. Since their rule files are small (few
source types) this is not an issue for them. Ours are big and so I expect
we would want a single make from the workspace root as well as makes from the
component directories. I think this can be accomplished as JAM does it, by
using context-setting code, and by "including" component makefiles inside
the root makefile.
Also, I am not so sure that JAM is easier on the end users than all of the
above tools. Cons looks pretty user-friendly. Also, our end-users (i.e.,
the engineers inside Cisco who are my customers essentially) are pretty
vocal about their dislike of Jamfile coding and most come from a make
background of course.
Any specific thoughts about weaknesses of GNU Cons?
I feel the same about that.
Same here. I assigned one of my engineers to test out JAM 2.3 shortly
after its release. We found too many bugs in it to feel safe migrating to
it. Off the top of my head I remember a couple of nasty ones:
- 4 or 5 rules in the Jambase were removed accidentally!
- new semantics introduced for the NOCARE rule that broke generation of header files!
Though some of the new features look attractive, we have decided we will
not move to a new JAM version without working up a pretty hefty regression
suite first. The risk of having several hundred engineers on several
continents sitting on their hands due to a JAM bug is just too high for us.
Well, that's where we are right now. Fixing the remaining problems in our
build infrastructure will be challenging using the existing JAM (currently
a 2.2 derivative for us). We think changing JAM will be easier than living
with its limitations.
I plan to bring forth some ideas that my team has about JAM changes over
the next few weeks, and I am interested in hearing feedback from the list
on these topics.
P.S. From your e-mail address I am guessing you work for MDSI in
Richmond? I used to have a BC.CA address until recently myself. I'm from
Victoria. Cheers...
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Anyone else using Jam for BIG projects?
Date: Wed, 1 Aug 2001 21:44:00 -0700
I just timed the header file scan portion of a jam invocation on
a local file system on HPUX: less than 1 minute.
2 minutes, 10 seconds, when everything is over NFS.
So the question would be, why would your scan times be
so much higher as to be a problem for you?
Remember, people who need to build the whole tree will be
waiting hours anyway.
To speed up dependency analysis without changing jam itself,
you could use makedepend and process the output to create
a jam.deps file in each component, which you then include into
your jam files. You could use jam itself to rebuild the
dependencies, but that would definitely be a pre-invocation of
jam. This would also be error prone unless jam was invoked
from within a wrapper script.
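The Jamfile side of that is small; a sketch (the jam.deps name is made up, and the file is assumed to hold pregenerated Jam statements such as "Includes foo.o : foo.h ;"):

```
# pull in this directory's pregenerated dependency file, if it exists
local depsfile = [ GLOB $(SUBDIR) : jam.deps ] ;
if $(depsfile)
{
    include $(depsfile) ;
}
```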
(This was one of the reasons we went to jam in the first place.
People happily live with systems like this using make, but I've
been burnt too many times by builds that did not refresh their
dependencies.)
If you want the makedepend-like step to run automatically
within the same invocation of jam, then you would be looking at
more extensive changes to jam.
In make0, before headers is called, call a new function, say
dependencies. In dependencies you would have to:
a) determine if the makedepend step needs to be run. This
is the worst part, but could be done easily with a perl
script.
b) invoke makedepend with all the include "stuff" required.
c) load the dependencies into jam (easy)
d) CLEAR the HDRRULE on all the files for which dependencies
were found (this will stop jam's traditional scanning)
I never did any of this because I'm not willing to force the
jamfiles to impose that all files in a directory get built with
the same C++/C/yacc/lex (whatever) options and using the same
include path. This might be the way that 99% of the sources
are built, but it's always that 1% that kills you.
As to users not liking writing Jamfiles -- except for the semi-colon
thing, the jamfiles are so easy to build that most of ours are
generated automatically from our Rose model. A rule file
is rarely more complicated than:
SubDir TOP x y z ;
Component y ;
ExportHdrs header.h header.h header.h ;
ExportIDL file.idl ;
Library libY : file.idl file.c file.cpp ;
Library localY : file.c file.cpp ;
ExportLib libY ;
ExportBin YApp ;
Server YApp : YMain.cpp ;
CommonLibraries YApp ;
LinkLocal YApp : localY ;
LinkLibraries YApp : libA libB libC libD ;
(just more lines and more files :)
If your users need to do anything more than declare what goes
into the library or executable, and what is local and what is exported,
then your jamrules need to be reworked.
(I have a component called "Fake" which fakes out all of our project's
build tools. This allows me to simulate a build in order to debug
a new set of jamfiles without having to actually build the system.
I also have a jam.exe which I have doctored to tell me why a target
is getting rebuilt. It produces too much output to be generally
usable, but it has helped track down bad dependency statements.)
Yes, I'm from/in/born/raised (and, unless somebody rescues me, will
probably die) in BC. I work in Richmond and live in Coquitlam. The
commute is the worst part.
I've always liked Victoria -- it seems much more idyllic than Vancouver
and its suburbs.
Some days (many days :)) I wish that somebody would offer me a job
someplace warmer and DRIER. Do you know of somebody who needs a
good system/software architect, knows all the buzz-words, has bad spelling,
can out-code even 16-year-old hackers, dreams in UML, perl, java, and c++,
designs embedded software or enterprise systems, usually while stuck in traffic
(and builds visual basic code to turn the rose models into highly structured
code templates for developers to code against)?
Date: Wed, 01 Aug 2001 22:23:23 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: Whitespace As Delimiter -- Yuk!
You can kill that "missing-space-before-semicolon" bug once and for all by
tweaking your JAM source as follows:
In scan.c, in yylex(), around line 300 it has:
if( ( c = yychar() ) == EOF || !inquote && ( isspace( c ) ) )
break;
}
/* Check obvious errors. */
You can tweak that to:
if( ( c = yychar() ) == EOF || !inquote && ( isspace( c ) || c == ';' ) )
break;
}
/* Check obvious errors. */
Unfortunately that's a bit like trying to soak up a flood with a single
bread crumb and you *cannot* kill all other such token merging problems
this easily. Just try adding in a similar check for ':' and you'll see
what I mean. That addition would break the code that uses the $(foo:BS)
syntax, for example. JAM (unlike most programming languages) considers
that entire expression to be a single token (i.e., one lexical
atom). More typically the scanner/lexical analyzer would stream out 6
tokens to the parser for this expression.
I think the white space delimitation of tokens in JAM's language is an
unfortunate weakness. It causes us at Cisco many headaches in the form of
difficult-to-track-down core dumps (bus errors or segmentation faults) most
of the time, and just plain wrong behavior without any error messages at
other times. This is unacceptable behavior for a production tool under any
circumstances, but for a tiny little error of omitting a space character it
is doubly so.
I advocate completely gutting out the home-grown scanner from JAM, and
replacing it with a nice normal extensible lex-built scanner in which
whitespace is irrelevant to token separation, as it is in most programming
languages. This change implies some significant language usage changes
though. For example, code like this:
SubDirCcFlags -DDEBUG=1 ;
would have to be changed to something like this:
SubDirCcFlags "-DDEBUG=1";
to avoid separating this single SubDirCcFlags parameter into a bunch of
tokens (which would later be parsed by jam into an expression that jam
itself would try to evaluate -- not really what is wanted here). There are
many other things that would require change too, since expressions like this:
$(foo)bar
would be indistinguishable from expressions like this:
$foo) bar
when delivered as a token stream by a typical scanner. Of course these
mean very different things to jam. So to code the former you would need to use:
"$(foo)bar"
So changing the scanner the way I am advocating would require people to use
quotes in a lot of places where they are not needed today. Would that be a
difficult change for you and your jam users to accept?
Note that a scanner change like this also implies a major rewrite to the
parser since the parser would be receiving a completely different kind of
input stream.
I would be very interested in hearing people's thoughts on this.
P.S. I don't accept the argument made in the JAM Language document:
Jam/MR requires whitespace (blanks, tabs, or newlines) to surround
all tokens, including the colon (:) and semicolon (;) tokens. This is
because jam runs on many platforms and no characters, save
whitespace, are uncommon in the file names on all of those platforms.
In my experience, UNIX, the Mac, Windows (all flavors), and even very old
PC-DOS and MS-DOS filenames (to choose a few of JAM's many platforms) could
*all* contain whitespace and frequently they did. Let's delimit names with
quotes not white space. Personally, I am used to using quotes even when
not required in some contexts in some languages, just to be explicit. And
of course, if one wants to embed a quote in one of these strings (e.g., to
have a filename contain one of these characters) one can use the \" escape mechanism.
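To make the tradeoff concrete, here is a hypothetical Jamfile fragment as it might look under the quote-delimited scheme being proposed (the file names are invented for illustration):

```jam
# Hypothetical syntax under a quote-delimited scanner: names containing
# spaces become unremarkable, and \" embeds a literal quote in a name.
Main "my app" : "main module.c" "util module.c" ;
SubDirCcFlags "-DDEBUG=1";
```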
Date: Thu, 2 Aug 2001 12:17:37 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Unnecessary recompilation
I understand the logic that makes it happen, but fail to see any argument
for this behaviour.
That sounds like the author realized what was happening... but I still
don't see any reason for it. Shouldn't the action execution code think
"Oh, I've done this exact action, no need to do it again"?
Date: Thu, 2 Aug 2001 12:34:51 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Anyone else using Jam for BIG projects?
What irritates me is when a build that involves ten seconds of compiler
time takes much more than ten seconds on the wall clock. Yes, I'm serious.
I also use a hacked jam that tries to compile the "right" file first, to
deliver any error messages quickly :)
Before the make, jam should look for and perhaps read a cache file. Don't
read it if it's older than a day or two, maybe.
headers() should, for each file, either take information from that file or
do the scanning, depending on whether the header file is newer than the
cache file or not.
At the end, jam should write a new cache file.
What I describe should not cause any such change... or am I wrong?
Date: Thu, 2 Aug 2001 12:41:11 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Anyone else using Jam for BIG projects?
I am wrong. Changing some variable in a Jamfile might not cause the
include information to be updated.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: Unnecessary recompilation
Date: Thu, 2 Aug 2001 12:01:57 +0100
Surely it's a little more complicated than that -- you could have another
rule that added a "C++FLAGS on" to one of the targets, and then invoked
MainFromObjects. This makes it difficult to tell whether the C++ rule is
exactly the same.
I admit, however, that:
a) It's not impossible to deal with this case, and...
b) You'd have to be totally insane to do this without specifying a different
output file/directory, to avoid confusing the two resulting .o/.obj files.
Date: Thu, 2 Aug 2001 13:09:43 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Whitespace As Delimiter -- Yuk!
Jam is at risk of forking. This is the sort of change that makes the fork
certain. Unless David, David and the new perforce hire all agree... IIRC
perforce hired someone who has jam as a large part of his job description
as of August 1.
Date: Thu, 2 Aug 2001 13:23:25 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Unnecessary recompilation
Rules are hard. Actions should be easier. By the time jam is about to run
an action, it can tell _exactly_ what the action is and omit it if the
same action has been run in the exact same circumstances.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Anyone else using Jam for BIG projects?
Date: Thu, 2 Aug 2001 10:24:17 -0400
A more recent version is available via anonymous CVS from sourceforge under
the boost/tools/build module.
I wouldn't suggest it, at least not at this point. You have a large and
stable system. Boost.Build is just getting up to speed.
If you think it holds real promise for your work, your feedback (and
especially code contributions) would be much appreciated.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Anyone else using Jam for BIG projects?
Date: Thu, 2 Aug 2001 10:30:03 -0400
Boost doesn't work that way. We thrive on peer review feedback.
Actually, yes, I would appreciate it.
Hmm, if anything it's the default-BUILD section of a target specification
that has been superfluous for me in practice. The requirements section has
been really useful and easy (for me, of course!), though I'd love to hear a
simpler interface proposal.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Re: Whitespace As Delimiter -- Yuk!
Date: Thu, 2 Aug 2001 10:37:58 -0400
My view, in brief, is that whitespace delimiting is great for top-level
Jamfiles but lousy for people writing Jam rules. The underlying Jam language
is rather weak for the sort of large build-system construction jobs that I'm
trying to throw at it. [It probably would also help everyone to have ";" be
a delimiter].
Date: Thu, 02 Aug 2001 07:45:20 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: Re: Anyone else using Jam for BIG projects?
Exactly what I was thinking. I still plan to absorb as much as possible
from Boost.Build though. It looks very innovative.
Me too. I think my contributions to this list will probably have to occur
before 8 or after 5 (Pacific time) since daytime is usually pretty
intense. But I wouldn't want it any other way. :-)
Date: Thu, 02 Aug 2001 09:14:38 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: Re: Anyone else using Jam for BIG projects?
Apparently there is (in Solaris anyway). Some of our developers here have
suggested a similar strategy.
Date: Thu, 02 Aug 2001 09:52:51 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: RE: Anyone else using Jam for BIG projects?
Hmmm... I'm not sure. I just counted and we have 8,995 ".h" files
(comprising 1,863,096 lines) and 9,414 ".c" files (comprising 4,663,950
lines), plus many other file types are scanned in our system (we have
several other source types, with various include syntaxes). We also have
around 2000 lines of code in our HeaderRules.jam that manages this
(i.e., the code that runs during the binding phase; there is additional
header code that runs in the parsing phase to set up for it). Maybe
the combination of our heavy JAM language processing, lots of
interconnection between these files and large file volume is causing the
time difference? Do you have these metrics for your code? I did the
following to get the LoC numbers:
cd <source tree root>
find . -name '*.h' | xargs wc -l | grep -v ' total$' | awk '{ s += $1 } END { print s }'
find . -name '*.c' | xargs wc -l | grep -v ' total$' | awk '{ s += $1 } END { print s }'
In our environment, a whole tree build from scratch takes around 45
minutes, never hours.
And I think you also take a big time hit for that unless you use the
"off-by-one" technique I mentioned previously and handle it very carefully.
I think there is a better way.
For first build, build everything (or everything required for this
developer's context). Generate .d files as a byproduct of the build almost
for free.
In subsequent builds, determine if any .d files are out-of-date with
respect to their corresponding source files -- or *any* of the included
files as found in the ".d" file; if so, build all those sources, in addition
to anything that depends on them, as required of course. Because the only
way to alter the header include file dependencies is to touch either the
source file or one of the things it includes, you are guaranteed to catch
and update whenever necessary. I am very interested in making jam able to
use this technique.
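A minimal sketch of the byproduct idea in Jam, assuming a gcc-style compiler where -MMD -MF emits the dependency file during the normal compile (the action and variable names here are illustrative, not the stock Jambase Cc action):

```jam
# Emit a .d file as a nearly free byproduct of compilation; a later
# pass (not shown) would compare each .d file's timestamp against the
# source and every header it lists, and rebuild on any mismatch.
actions Cc
{
    $(CC) -c $(CCFLAGS) -I$(HDRS) -MMD -MF $(<:S=.d) -o $(<) $(>)
}
```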
We independently set compiler options, compile flags, include file search
paths, etc. for each source file. We would need to continue this practice.
Our users typically code even less complexity than that. We limit them to
using about 8 or 10 rules to build router executables (e.g., server
processes, client processes), static libraries, DLLs, and a few other
goodies. They are all simply coded like the above -- in their src/Jamfile
files. Our Jamrules are another thing. Much more complex.
I don't quite get the purpose of this. We use some test shells to unit
test/debug parts of our jam infrastructure, which includes a number of Perl
scripts and C-programs invoked during the build. There is also the "-n"
option to jam of course.
I have hacked something similar from an opposite perspective but for the
same purpose. Given a fully gristed target, it will display the complete
Depends (or optionally both Depends and INCLUDES) graph for that
target. This is very useful for understanding what's going on "inside" jam
when something goes wrong. Each target is only displayed once so the tree
doesn't get too gory (multiple references to the same directory, for
example, were ugly before I changed that).
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Anyone else using Jam for BIG projects?
Date: Thu, 2 Aug 2001 13:22:33 -0400
I have some elisp which lets me "step" through the -d+5 Jam debugging
output, go to the beginning or end of a rule invocation, etc. I just
redirect Jam's output into a file:
jam -n -d+5 > c.jerr
open the file in emacs, and start walking through the results. If anybody
wants the elisp stuff, let me know and I'll post it.
Date: Thu, 02 Aug 2001 15:29:06 +0200
From: Patrick Frants <patrick@quintiq.com>
Subject: Re: Anyone else using Jam for BIG projects?
The worst problem with all make-like tools is that they forget
dependencies between separate
invocations of the tool. At least under Win32 it would not be too difficult to
write a process that maintains a dependency
graph and keeps updating it with the help of the FindFirstChangeNotification call.
I don't know unix too well, but surely there
must exist a similar call?
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Anyone else using Jam for BIG projects?
Date: Thu, 2 Aug 2001 21:53:00 -0700
You seem to have a very big tree of code but very
small results, if you can build that much stuff in 45 minutes.
We have links which take more than 30 minutes and produce executables
which are larger than 30 Megs.
You would create a lot of .d files, which is probably OK.
You would end up timestamping a lot of .cpp and .h files
(or .d files), which, over NFS, is relatively expensive.
Of course you would not. A single C++ file can take 10 minutes to
compile on Solaris (if it's 2,000 lines long) and an executable can take
two hours to link (on AIX).
My problem is always -- why did jam build X? If X depends on A, B, and C,
I want to know which caused X to recompile. If X includes, directly and
indirectly, 400 header files, then simply printing the dependency tree
just gives you too much information.
I've always tried for something like:
Building X
because A is newer
because B is newer
includes newer C
includes newer D
because C does not exist
so now I can see that D (at the very least) was touched.
This has helped me find stuff where a node depended on itself.
Also consider this
Depends a b : x ;
Depends x : y ;
TEMPORARY x ;
Depends c : a ;
Depends c : b ;
A bug in the handling of temporaries means that during dependency
processing, only the first dependency analyzed (a,b to x) will
propagate the timestamp of y to (the missing) x; the other target
(a/b) will always be considered out of date.
Date: Thu, 02 Aug 2001 22:53:04 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: RE: Re: Whitespace As Delimiter -- Yuk!
I think that parsing C++ with lex is easier than parsing jam with
lex. Whitespace is irrelevant to token separation in C++.
Also, consider the following additional example which makes normal scanning
of jam awkward:
rule foo {
...
}
actions foo {
...
}
To a scanner, these character sequences look essentially the same, but they
must be tokenized very differently. The {} after "rule" are just two
tokens delimiting a block. Everything inside the block has jam syntax and
must be scanned/parsed. The {} after "actions" mean something very
different to the scanner. They essentially delimit a character string,
which cannot be parsed by jam (in fact the grammar it uses in there is not
even known at this time, it could be Bourne Shell syntax or C-Shell syntax
or anything). This requires the scanner to be modal -- doable, but
yuk. There is nothing like this nastiness in C++.
They are essentially the same issue for us. They cause spurious parameters
to get attached onto the end of the rule whose termination has been
compromised. We manage this kind of stuff by a lot of parameter checking
code. E.g., if $(4) { EXIT "too many parameters to rule foo." ; }
But when the last parameter is a list of arbitrary length as it often is,
this doesn't help.
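As a sketch, the kind of parameter-count guard described above might look like this in a wrapper rule (the rule name is invented; note that it only catches a stray extra parameter, which is exactly why a trailing list of arbitrary length defeats it):

```jam
rule MyLibrary
{
    # Expect at most two parameters; a dropped ";" in the caller
    # often shows up here as an unexpected third parameter.
    if $(3) { EXIT "too many parameters to rule MyLibrary:" $(3) ; }
    Library $(1) : $(2) ;
}
```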
Yup we do this now.
All of the above would be nice. (b) and (c) are in ftjam already though,
aren't they?
This one is ABSOLUTELY ESSENTIAL!!!! It is so painful not having this.
Excellent. C-like file scope would be sufficient for my purposes.
Okay, would be nice.
I don't understand the above.
I don't think we have encountered this problem. Maybe because our use of
static libraries is minimal.
I have been following the current thread on this, but it is not an issue
for us. We force developers to either:
- spin out a library for the duplicate code, or
- use a CopyFile rule to copy the source, to separate the builds.
We also try to discourage the use of either technique and when this comes
up we look for alternatives.
I don't understand this one. Can't you just use this:
if ! $(x:G) { x = $(x:G=foo) ; }
to set grist if not set.
I guess you could say we use grist "everywhere". That is all targets, and
all headers use grist to allow us to unambiguously match them up when necessary.
Yup. This would be necessary.
I used to really want this. Now I look at this as another "would be nice"
feature. I was thinking of something like a shell or Perl backtick
thing. E.g.,
x = `ls *.c` ;
which essentially gives you your globbing too.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Anyone else using Jam for BIG projects?
Date: Thu, 2 Aug 2001 14:27:44 -0400
voila. Note that the latest release of FTJam has a nice feature that keeps
Jam from "wrapping" its call nesting indicator when the nesting levels get
deep (the wrapping ends up confusing my elisp).
;; Jam
(defun my-jam-debug-nesting ()
(let ((where (point)) (result nil))
(beginning-of-line)
(let ((start (point)))
(search-forward " ")
(setq result (length (buffer-substring start (- (point) 1)))))
(goto-char where)
result))
(defun my-jam-debug-move (nesting line-function)
;; Abusing mark/point here for highlighting purposes.  No time to figure
;; out how to do it "right"
(deactivate-mark)
(let ((line-form (list line-function 1)))
(eval line-form)
(while (> (my-jam-debug-nesting) nesting)
(eval line-form)))
(end-of-line)
(set-mark (point))
(beginning-of-line)
(my-activate-mark)
)
(defun my-jam-debug-out (line-function)
(my-jam-debug-move (- (my-jam-debug-nesting) 2) line-function))
(defun my-jam-debug-next ()
"go to next line in evaluation of current rule, or calling rule if no such line exists"
(interactive)
(my-jam-debug-move (my-jam-debug-nesting) 'next-line))
(defun my-jam-debug-prev ()
"go to prev line in evaluation of current rule, or calling rule if no such line exists"
(interactive)
(my-jam-debug-move (my-jam-debug-nesting) 'previous-line))
(defun my-jam-debug-finish ()
"go to next line in caller of current rule, or its caller and so on if no such line exists"
(interactive)
(my-jam-debug-out 'next-line))
(defun my-jam-debug-caller ()
"go to previous line in caller of current rule, or its caller and so on if no such line exists"
(interactive)
(my-jam-debug-out 'previous-line))
(defun my-jam-debug-mode ()
(interactive)
(local-set-key [\C-f10] 'my-jam-debug-prev)
(local-set-key [f10] 'my-jam-debug-next)
(local-set-key [\S-f11] 'my-jam-debug-finish)
(local-set-key [\C-\S-f11] 'my-jam-debug-caller)
(local-set-key [f11] 'next-line))
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Date: Thu, 2 Aug 2001 20:45:59 -0700
Subject: RE: Whitespace As Delimiter -- Yuk!
This is the best thing that ever happened to jam (since it was
originally written). Not to criticize Perforce or those of us
who contribute once in a while, but Jam has really suffered from
having no real "architect" to move it (and the user community)
forward. I have seen MANY great ideas either lost or simply
die because nobody has been taking a long term view of Jam.
Those who might have had the time to take on this role have
probably shied away somewhat because it's never been clear
whether the original authors wanted to pass the torch or not.
Now it seems clear!
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Re: Whitespace As Delimiter -- Yuk!
Date: Thu, 2 Aug 2001 21:14:51 -0700
I would have to agree with the comments that Jam's lexer
and parser could be better. We had to increase our
Yacc stack to some 50000 nodes to parse our jam rules.
That's because the current grammar is right-recursive
instead of left-recursive. Another "project" that I never got to :)
I don't agree that whitespace as a delimiter is a bad
thing. Careful design can build a lexer and parser that
can handle the Jam language and give reasonable feedback
about errors in parsing. Look at Doc++, which basically
parses C++ in lex. If it can do that, we can surely
handle little things like $(whatever) stuff.
Making ";" into a special character, at least in certain
contexts would be good. But missing ";" are more of an
issue than forgetting to place a whitespace before the ";".
To make parsing (or execution) more fool-proof, it would be
nice if rules/actions could specify the number of arguments
they expect (the number of ":").
You might do something yourself with ...
if $(3) { EXIT ; }
as a type of assertion that the rule was called with 1 or 2
arguments.
Things that would have helped me are ...
a) globbing
b) functions (the current [] notation)
c) substitution (like ksh's $(var#) $(var~) or Perl's =~ )
d) ability to read variables attached to objects
e) hiding rules (so they could not be called outside
of a lexical context)
f) "simple" whats out of date compared to X (makes -d, I believe)
(ie, why an I building X)
g) rule dependencies (like Sun's make, it remembers what
command(s) it ran last time for a target, and if the list
changes, it assumes that target is out of date)
h) knowledge that objects that are in an archive are
in the archive (right now, you can have a different
TARGET on a library and a library member, and if so,
the behaviour is weird)
i) Collapse multiple, identical "actions" on the same target.
(same targets, same sources) (but I can think of several
problems with this, not the least of which is that jam might
end up spinning during rule invocation if rule
writers were not careful)
j) Simple way to grist a value only if it did not already
have grist. (This would allow gristing to be used
everywhere, for example in the C++ rule and the Objects
rule. Whichever rule added gristing first would win
(unless the grist was reset with $(x:G=y))
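Item (j) can be approximated today with a small helper rule, at the cost of a rule call where a single modifier would do (the rule name is invented; this assumes a jam variant with the [] return-value notation mentioned in item (b)):

```jam
# Add grist to each element of a list only where none is present.
rule GristIfBare
{
    local result t ;
    for t in $(1)
    {
        if $(t:G) { result += $(t) ; }
        else      { result += $(t:G=$(2)) ; }
    }
    return $(result) ;
}

# Usage: objs = [ GristIfBare a.o b.o <lib>c.o : mylib ] ;
```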
I would not object to jam saving dependency information
someplace; (g) above would basically force this. But jam
would need to timestamp the dependency information so that
it could invalidate it if a file was modified.
I used to wish for access to the shell during rule invocation,
and that might still be useful. The fact that somebody could
invoke side effects during the rule phase is something that
could be lived with.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Unnecessary recompilation
Date: Thu, 2 Aug 2001 21:31:23 -0700
Let's say that we made this change ... perhaps we
break something ...
So now I
x = <funny_grist>X ;
y = <other_grist>X ;
LOCATE on $x = $(cwd)/objs ;
LOCATE on $y = $(cwd)/objs ;
Build $x ;
Build $y ;
Several places in Jam I use the fact that many jam nodes
(targets) can refer to the same file at binding time.
For example
Depends exports : <export>x.h ;
LOCATE on <export>x.h = $(TOP)/include/cmp ;
Depends cmp/x.h : <export>x.h ;
Depends <export>x.h : x.h ;
SEARCH on x.h = $(SUBDIR) ;
Ln <export>x.h : x.h ;
Now, when header file scanning finds #include <cmp/x.h> jam will know
that it's really the same file as <export>x.h.
also
Depends x.o : <grist>x.o ;
(now jam x.o builds ALL x.o's, but rarely is there more than
one x in the build tree).
Date: Thu, 02 Aug 2001 23:35:01 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: RE: Anyone else using Jam for BIG projects?
I guess we have some fast build servers. :-) But we think that it's still
pretty darn slow. Jam still spends a lot of time goofing around before it
gets down to business and fires off the first shell commands. Also we
don't have the kind of linking pains that many similar sized projects have.
Our code is designed for modular delivery and linking is primarily dynamic
in our environment. We package large multimegabyte entities (though I do
not think any have hit 30M yet, they are growing in that direction rapidly)
but they are not all cross-linked so they build very fast. That is, our
packages bundle up DLLs and small executables into complete images or
smaller modules destined for Cisco routers.
We normally build locally on our build servers (we have build servers
locally wherever we have engineers). Tools are mounted over nfs though (on
a small handful of sites), but there are only small time penalties for
grabbing the tools once each.
We haven't noticed that kind of thing (in compilation of our C++ code), and
as noted above I think our linking requirements are simpler.
We usually resort to the -d+3 (for non-header dependencies) or -d+6 (for
headers) output for this, or we use our locally hacked jam (which I alluded
to above in the previous e-mail) to dump the dependency tree and pore over it.
Our hacked jam shows the dependency tree just like the output above, but
currently it does not annotate stat info. It could be extended to do that
easily I think. Currently when we need that info we use the -d+6 output
(i.e., the make, time, made* newer, made+ old, and made+ update annotations give that info).
I didn't know about this. Thanks.
Date: Fri, 3 Aug 2001 09:57:48 +0100
From: Paul Haffenden <pjh@unisoft.com>
Subject: Re: Re: Whitespace As Delimiter -- Yuk!
Someone posted a fix for this a long time ago, to make it left recursive,
and it seems to work for jam2.3.
I have done this using a :E modifier in the variable handling,
and the code was sent to Perforce. I've since changed the
syntax, anyone is welcome to the code (expand.c).
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Re: Whitespace As Delimiter -- Yuk!
Date: Fri, 3 Aug 2001 09:22:32 -0400
$(x:G?=grist)
There is quite a lively discussion going on on the scons development list at
sourceforge about this very issue. Cons (and scons) uses an innovative
scheme for cacheing dependency information that I find quite compelling. I
am beginning to think we should just build this mechanism into the
underlying Jam engine. I need to look at it a bit more carefully, but it
seems like it could solve lots of problems.
Date: Fri, 3 Aug 2001 15:50:06 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Re: Whitespace As Delimiter -- Yuk!
While reading mail makes for a nice change from forcing a huge chunk of
broken source code to compile on a similarly broken beta-quality and
overhyped CPU (<- letting off steam), I don't particularly want to
subscribe to another mailing list.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Re: Whitespace As Delimiter -- Yuk!
Date: Fri, 3 Aug 2001 10:04:03 -0400
Here is some information.
http://www.dsmit.com/cons/dev/cons.html#signatures
Yes, I had read the section, but it wasn't enough to bring
understanding. The crucial piece of information I was missing was how Cons
computes signatures for non-derived files. Let me just restate the
algorithm to make sure I got it:
1) derived file - its signature is computed from all the signatures of
its source files and the command line used to build it. This signature
is stored in a .consign file next to the derived file.
2) non-derived file in the source hierarchy - its signature is computed
from the contents of the file and nothing else. This signature is stored
in a .consign file next to the file.
3) non-derived file not in the source hierarchy - its signature is
computed from the file's name (the absolute path right?) and
timestamp. This signature is not stored explicitly, but is stored
implicitly in the signatures of derived files that depend on it.
I noticed that the timestamp is also stored in the .consign file. I assume
this is stored so that the non-derived file's signature will not be
recalculated unless its timestamp differs. This is only an optimization
right? Is the timestamp for derived files in the .consign file used for anything?
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Re: Whitespace As Delimiter -- Yuk!
Date: Fri, 3 Aug 2001 11:35:10 -0700
You should try to read the Doc++ lex grammar ... it's a full
state machine in itself. It does not always ignore white space.
It counts quotes, "{", "(", switches state as it steps into
and out of structs, classes, functions, etc.
Sorry ... A Rule a b c : d e f : g h i ;
has 3 parameters ... any parameter can have any number of elements.
Do you have rules like
rule X {
local i ; {
if $($(i)) { }
}
}
Sun's make did two things I really like.
1) It is "integrated" with the compilers using a normally
undocumented (or it used to be undocumented) command line argument.
Each time a file was compiled, a file .make.dependencies, I believe,
is updated by the compiler (or by make looking at special compiler output).
The file lists, for each target (.o file), which files were read during
the compilation.
2) make had a file called .make.state (I believe) which was also maintained.
Each time a target was built, the make state would be updated. The next
time you called make, make compared the exact command lines from the previous
run, and if they differed, would assume the target needs to be rebuilt.
For example, a line like:
a.o: CPP -o a.o a.cpp
would be stored in the make state.
Now you want to turn debugging on. export CXXFLAGS=-g; make a.o
Make says "last time I did "CPP -o a.o a.cpp", but this time the macro
expands to "CPP -g -o a.o a.cpp", that's different, therefore, a.o needs
to be rebuilt.
Notice, the user does not need to touch or remove anything to get this to happen.
This is also not an issue for us. But you must admit that something like
Main A : a.cpp c.cpp ;
Main B : b.cpp c.cpp ;
would be nice if c.cpp was compiled to c.o only once. The above style
is more deductive than forcing the user to build a library simply to
avoid a second compile. In this case, .o files would work just as well.
In fact, there are cases when you need to force an object file to be linked
into an application. Now I have to do things like ..
Main A : a.cpp ;
Object c.cpp ;
ExtraObjects A : c.o ;
but with gristing, it's a bit more tricky.
You're correct, a rule could be written to grist stuff that's not already
gristed. But usage of the $(X:G=g) notation is so nice and easy that
you often don't build a temporary variable to get gristing sorted out.
A form like $(X:X=g) (where it adds grist only if no grist is present)
would be convenient.
I'm not sure if everything should be/needs to be gristed. But gristing
should be consistent.
For example, in the rules distributed with Jam, the C++ rule does not
grist its inputs. The Object rule does not grist its inputs; the Objects rule does.
So, mixing C++ rules and Objects rules in the same Jamfile is risky.
C++ a.o : a.cpp ;
MainFromObjects a : a.o ;
may (very possibly) refer to different a.o files.
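One way to sidestep the risk in the meantime is to grist the intermediate target explicitly on both sides, so the C++ and MainFromObjects invocations are guaranteed to bind the same node (the <app> grist here is invented for illustration):

```jam
# Explicit grist keeps both rules talking about the same a.o node.
C++ <app>a.o : a.cpp ;
MainFromObjects a : <app>a.o ;
```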
From: "Jerry Nettleton" <nett@mail.com>
Date: Fri, 03 Aug 2001 12:47:31 -0600
Subject: Re: Anyone else using Jam for BIG projects?
Since I'm new to Jam, I can't really offer much but you raise some interesting
Jam issues and limitations for large projects (which leads to even more questions).
I recently started looking for a better solution to improve our build process.
We have around 30,000 C/Java source files and 33,000 generated files with
about 15 million lines of code on 6 platforms (UNIX and Windows NT/2000).
I thought Jam could help manage all of the dependencies but since it doesn't
support Java I'll probably have to improvise a little.
I like what Randy's message described with lots of Jamfiles so that individual
directories can be compiled separately. You might be able to improve build time
but it would probably increase developer compile times.
I would be interested in looking at other possibilities, especially ones that
take Java component dependencies into consideration.
Do you have some good advice for rookie jammers with large projects?
What are the most important things to learn first to fully understand Jamfiles?
Do you have any good examples or tutorials that you could share?
How should I organize Jamfiles to manage the build process (one or many Jamfiles)?
What kind of issues do I need to consider to support multiple platforms?
Do you think enhancing Jam to parse Java dependencies is feasible?
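There are no Java rules in the stock Jambase, but a minimal compile rule is easy to sketch. This one is purely illustrative (JAVAC and CLASSDIR are assumed variables, not standard Jambase ones) and does not track class-to-class dependencies, which is the hard part being asked about:

```jam
# Naive sketch: compile one .java source into the class directory.
# Real Java support would also need jar packaging and scanning of
# import statements for inter-class dependencies.
rule JavaClass
{
    Depends $(<) : $(>) ;
    Clean clean : $(<) ;
}
actions JavaClass
{
    $(JAVAC) -d $(CLASSDIR) $(>)
}
```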
Date: Fri, 3 Aug 2001 14:42:28 -0700 (PDT)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: The Jam Annoyance FAQ
1. Why is whitespace the only delimiter in jam?
It was a _mistake_. At the time ('93) whitespace was the only
character uncommon in file names on every platform.
You know, $sys$device:[user.jam]parse.c;3 on VMS.
I still think sparse quoting makes Jamfiles easy to read, but
requiring whitespace around the ; is very error prone.
If there is a compatible way of changing this without a "mode
bit," I'm all ears.
2. Why not cache header file scans to improve speed?
a. Because it is not that much speed (YMMV).
b. Because caching is more complex and less reliable.
c. Because I hated build tool turds.
Those were the reasons when Jam was written, and they still
stand fairly well now. Caching is more important for Make,
which must re-evaluate header dependencies for every subdirectory,
rather than just once for the whole tree with Jam.
3. Why no shell invocations during parsing? I want "x = `ls` ; "
Jam was born in a dirty environment, where "mkmf" scripts
scrounged up all sorts of garbage and built, well, whatever came
out. In reaction to this squalor, Jam insisted that how to
build the source is part of the source and not part of the build.
The goal was to ensure that a second "jam" invocation would do
a deterministic _nothing_.
This asceticism could probably be relaxed now, but I doubt
portability madmen like us would use it: native OS tools vary
widely ("think different"), and if you have to build a tool to
drive Jam you've lost a little sanity. What builds the build tools?
mailing list?
We are trying. Jam has been a bit of a stepchild here at
Perforce, but (as has been mentioned) we have hired an open
source engineer to act as Jam curator.
That doesn't mean it will take on all changes. One of Jam's
virtues is leanness, both in code and in concepts, and we will
continue to guard that. There are, however, egregious gaps in
functionality (like regexp handling and access to target-specific
variables) and porting coverage that are certain to be filled.
From: "Christopher Seiwald" <seiwald@perforce.com>
Sent: Friday, August 03, 2001 5:42 PM
Subject: The Jam Annoyance FAQ
I don't have any suggestions yet, I'm afraid, but I think the problem goes
beyond the whitespace requirement. It is also very easy to leave out a
semicolon and get silent acceptance of an unintended Jam program. I like the
suggestion to warn when a list of tokens is split across lines and lines >=
2 start with a rule name.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Re: Whitespace As Delimiter -- Yuk!
Date: Fri, 3 Aug 2001 19:24:42 -0400
True, but it's a special case. It is so common in my experience that A and B
actually have different requirements (e.g. one is actually a DLL and so
uses -DBUILDING_B_DLL which changes the actual content of the corresponding
.o files), that I chose in Boost.Build to have the user explicitly make a
library to get the commonality you desire.
Date: Mon, 6 Aug 2001 17:15:37 -0700 (PDT)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Anyone else using Jam for BIG projects?
We've been using a minor variant of jam 2.2 on a project roughly this large,
perhaps a bit larger, depending on which variant you build. Millions of
lines of code, >20,000 source files, >4 different platforms, many variants
of our builds, etc. Build times vary by platform; the newest hardware
can build it in about 3 hours using 2 jobs. Products of the build are all
placed into a build tree separate from the src tree; it's about 1.5 Gbytes
after a full build.
Our Jambase is about 5,000 lines, our Jamrules file is about the same.
Almost all of our platform differences are expressed in the Jamrules file.
The Jambase creates rules which allow us to express the build in terms of
applications, shared objects, libraries, compilation units. Dependencies
between compilation units determine include rules. Depending on platform,
linking a library to a shared object may generate an intermediate archive
or link with the .o files directly, automatically. Jam's ability to let us
create these abstractions has really saved us over the years, although
there was an up-front cost in creating the jambase/jamrules files.
Jam has really worked out well for us. You mention you use ClearCase. We
had that once - our experience moving to Perforce has been extremely
positive...but that's a different discussion.
Yes. One of the mods to Jam I've done is to implement a header cache.
I've just recently received permission to return our changes back to the
public domain - I'll need to wait for a rainy weekend now to do it, as I
need to work outside of our corporate firewall...
One good test of a build system is the length of time it takes to do a
nothing-to-do build. If we do a full clean build, and all the targets
are built, initially a nothing-to-do build was taking about 3 minutes. This
was considered too long. After implementing the header cache, this time
was reduced to 1 minute. On newer build platforms it is less than 20 seconds.
As a side note, our experience with Make based systems compares very poorly
in this metric. Do-nothing-builds often end up doing quite a lot when you use make.
The approach in the header cache is to create a file which records the
results of the previous jam header scan. For every file jam scans for headers,
the file contains:
@filename@ timestamp @header1@ @header2@ @header3@ ...
the filename is the name jam opened the file with - either absolute or
relative, depending on how you use Jam. Since the cache is written into the
same directory Jam is invoked in, the filenames remain consistent between
invocations. The timestamp is the last modified time of the file when it
was last scanned. The list of headers is the list returned by your regex
for finding headers.
When you invoke Jam the next time, this cache is read in, put into a hash
table (which Jam has good internal support for) and whenever the routine
is called to scan a file for headers, the filename is looked up in the cache
and if the timestamps agree the cache results are returned. Whenever jam
exits, the cache is written.
If you alter the header regex you need to delete the cache file, as the
pattern isn't recorded in it anywhere and Jam will use the old results
which would be incorrect. Oh well. Other than that, the cache is
always valid - when a file is modified the cache line for it becomes invalid
and a new one is generated.
The cache always grows - if you delete files, they remain in the cache. Oh well.
The modification ended up being quite nice, I think - although it does violate
one of Christopher's goals for Jam, which is to not have Jam generate turds.
For large builds, I think the modification is worth it. YMMV.
I'll try to get the mods back into my branch in the public depot. As
we have a number of mods, I'm not sure how to return them all - but it
will take some time, sorry.
I don't see any design weaknesses of Jam itself, there are some stylistic
things you may like or dislike (whitespace, etc.) I like it the way it is.
We've modified it to handle things like serialized output from multiple
jobs, adding a '@' token to the start of each action line output. These
two combine to allow us to implement an automated test facility, so each of
our returns is test compiled/linked on all our platforms automatically. If
any errors are found, the logfile can be reliably parsed to generate
meaningful mail for the developers. Without the speed of Jam/Perforce,
this wouldn't have worked... There are some other minor mods we've made,
nothing you would call fixing a design flaw though.
The default Jambase is a good example, but for large projects you'll end up
customizing it heavily. It's too bad we can't share our Jambase
customizations more easily, but that's life.
We looked around. Jam seemed the most promising, and with customization
that has proven to be true I think. I don't see anything else that would
be able to replace what we have. We have developers who can build tiny
portions of the product with <1 second startup time. Change what you
want built slightly and your startup time can go up to 1 minute, at which
point the entire product suite is being dependency checked. It's really
remarkable, I think.
Date: Mon, 06 Aug 2001 19:01:47 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: RE: Re: Whitespace As Delimiter -- Yuk!
I'm not knowledgeable about Doc++, but C++ does not require whitespace to
separate tokens. I believe all lex scanners are state machines though.
Also it is not all that difficult to make a scanner handle jam's
weirdness. It just requires more trickiness than lex was designed to
handle. The relevant lex code can be written like this:
\{  {
        if ( gScanMode == kScanModeActions ) {
            yylval = CollectStringUpTo( "}" );
            gScanMode = kScanModeNormal;
            return( T_Actions );  /* an "actions" token, a character string */
        } else {
            return( T_OpenC );    /* a simple { token */
        }
    }
I.e., if you are scanning normally and see a { just return that
token. Otherwise, if you are in the kScanModeActions scanning mode, then
gobble up the entire character string delimited by the next close curly
bracket (export that complexity to the routine called here). And later in
the lex code when it detects the keyword "actions" token, toggle the
gScanMode switch to kScanModeActions.
Yeah, but we enforce limits on some parameters. For example, some rules
expect a single element in the nth parameter, so we check that the second
element in that parameter is empty (a weak check, but better than none at
all). An example of this is below (look for where it checks $(_includer[2])).
Kind of. To be really specific, we code all of our rules as follows:
# standard documentation header goes here
rule foo {
# first we set this variable
local _rulename = foo ; # used in Error routine and elsewhere to identify which rule is generating the error
# then we gather parameters... define local vars to give names to all parameters, e.g.:
local _includer = $(1) ;
local _inclusions = $(2) ;
# ...
# then we do parameter verification... check numbers and types of parameters, e.g.:
# Make sure that scanned file is indeed a Bag generated file.
switch $(_includer) {
case *.[ch] :
# Expected file pattern -- nothing more to do.
case * :
# Unexpected file pattern.
Error "Scanned file '" $(_includer) "' is not a .c or .h generated file." ;
} # end switch $(_includer)
if $(_includer[2]) {
Error "More than one includer '" $(_includer) "'." ;
} # end if $(_includer[2])
# Make sure that there is at least one inclusion
if $(_inclusions) = "" {
Error "No included files for" $(_includer) ;
} # end if $(_inclusions) = ""
# Make sure that there are no extra parameters
if $(3) {
Error "Extra parameter '" $(3) "' for scanned file '" $(_includer) "'." ;
} # end if $(3)
# ...
# then we process the rule (do whatever rule is supposed to do)
# ...
}
We use gcc and it can create ".d" files containing this info when passed
"-MD" or "-MMD". I am hoping to restructure things to use this info in jam.
Interesting. We don't track any kind of dependency for this now. We have
considered making all targets being built from a user Jamfile (i.e.,
component owner Jamfile, which can contain SubDirCcFlags, and other
build-altering settings) depend upon that Jamfile to partially handle
this. We wouldn't catch environment variable changes or command line "-s"
parameters to jam this way though.
We don't permit the above. Instead we give them two options (CopyFile or
make a library). But we push the library option (more precisely, we
usually push the DLL option).
One syntax change I would like to suggest for jam's future is something to
alter the way the ":" operator behaves in these cases. Targets can have
the following attributes using this mechanism: B,S,M,D,P,G, and various
other things like U,L,R can be applied to provide conversions, and you can
also do the X=x stuff too. I would like to expand this mechanism to
support arbitrary attributes, and arbitrary attribute name lengths (not
just single characters). I am thinking something like this might work:
$(foo:Suffix)
$(foo:Directory,Base,Suffix,Uppercase)
$(foo:Grist=$(bar))
$(foo:MyAttribute)
$(foo:MyAttribute="whatever")
And I suppose for "almost" backward compatibility we could have "B" being a
synonym for "Base" and so on. That is, this would be backwardly compatible
with single character items but things like ":BS" would have to be re-coded
as ":B,S", or better yet as ":Base,Suffix"). Of course a translator could
be provided to parse existing files and rewrite them to help people convert
over legacy code.
Date: Mon, 06 Aug 2001 17:15:28 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: Re: Anyone else using Jam for BIG projects?
I am a bad one to answer your "Java in Jam" questions because I have never
tried that, but I have some answers to some of your issues below since
nobody seems to be chiming in on this.
Have you looked at Ant (part of the Apache Jakarta project)?
http://jakarta.apache.org/
I saw it mentioned a few times in the archived discussions from
I don't follow you there. Build time versus compile time? Using local
Jamfiles works well and gives fast local builds at virtually no additional
cost to the global builds (unlike recursive make, where make is re-invoked
in each subdirectory, a single jam is invoked from the root only and
reads/executes the subdirectory Jamfiles).
We have an elaborate dependency structure with explicitly exported public
APIs and API version number tracking (API versions used are compiled into
all objects so they can later be checked at load time for compatibility
with API versions being exported on a running box). By policy Java is
explicitly prohibited in our code base so we haven't looked at that at
all. Handling Java has been discussed on this list a few times and some
code examples are in there so you could pull the archive and search for
Java in the text.
Play with small Jamfiles first. Understand the basic tooling before
getting started on anything big. Make predictions about how to do
something, and test the predictions with small code fragments in a little
Jamfile. Code with lots of ECHOs so you can see the flow. When your
predictions all start being correct then you know you understand. :-)
I suggest this: understand the phases (parsing, binding, update) -- i.e.,
what happens in each phase, and how the Depends and INCLUDES built-in rules
work, understand the debugging tools (especially "-d+3", and what every
possible output line in that output means).
Unfortunately no. We produced about 8 hours of Video on Demand tutorials
and lots of different slide sets, and internal docs for our own use, but
they contain proprietary stuff about our code base, and mostly they tell
people how to interact with our locally written jam rules. So
unfortunately it wouldn't be much help even if I could share it.
I suggest one at the top that reads one from each subdirectory, and a
Jamrules file at the top containing any customized rule and action
definitions. If you want the build to behave differently when at the top
or in a sub dir, you can test the value of the TOP variable to detect
this. We have a few different Jamfiles at the top, some with component
info, packaging info, and a few in the components to handle API exports,
shared build contexts, and a normal Jamfile (with code to build
executables, DLLs, static libraries for that component).
Different compilers, linkers, etc, and/or different command line parameters
to them. We have a file per target host that defines variables and the
infrastructure picks up the appropriate files when necessary to cause the
right values to be set for the platform you are building for. In some
cases you may want some component Jamfiles executed more than once in
different contexts (which is easy to do with a loop around the include statement).
I have only read the last few months of this list but it looks like people
are still struggling with some issues on this. I have not seen any
comprehensive posting of how to handle Java in jam.
Date: Tue, 7 Aug 2001 12:13:09 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Re: Whitespace As Delimiter -- Yuk!
Excellent idea. The (few) times I've dealt with that I've always forgotten
what the letters mean.
Your action item for today is to write code that makes jam accept :B,S and
emit a warning (with file name and line number) if it sees :BS.
I don't think anyone's going to write a converter, and a quick hack to
provide "," now and synonyms next year makes sense to me at least.
From: "Ian Mellor" <mellorian@hotmail.com>
Date: Tue, 07 Aug 2001 11:56:02 -0700
Regarding 2.a, one particularly bad situation is when multiple header
files have the same name. For example, MSVC supports precompiled header
files, and common convention is for each directory to contain a
"precomp.h" file, which specifies what will be included in the
precompiled header file. Every .cpp file must then include the precomp.h file.
If HDRGRIST is blank, then the header scan goes quickly, but Jam can't
tell the difference between the various precomp.h files, and is quite
confused during incremental builds. If HDRGRIST is $(SUBDIR) then
header scans are glacial because Jam marks all headers it finds with the
HDRGRIST. Therefore, if there are N precomp.h files, Jam scans all the
base headers N times, and there are N different gristed names for each
of the base headers.
One way to solve this problem without introducing build tool turds (I
hate 'em too), would be to grist header files by the full pathname where
they were found. But this could complicate specifying dependencies for
generated header files.
I wrote a workaround that introduces PRECOMPHDRS and PRECOMPGRIST
(similar to HDRGRIST). PRECOMPHDRS lists which header files should be
gristed with PRECOMPGRIST; all other header files are gristed with
HDRGRIST. This allows the precompiled headers to be gristed, while at
the same time avoiding gristing system headers or common headers. This
brings the header scan back down to a reasonable time, but there are
some holes in this workaround and it's still not as good as it could be.
(These 2 new variables could be more aptly named, since their
application can be more general than precompiled headers).
From: Christopher Seiwald [mailto:seiwald@perforce.com]
Sent: Friday, August 03, 2001 2:42 PM
Subject: The Jam Annoyance FAQ
1. Why white space is the only delimiter in jam?
It was a _mistake_. At the time ('93) whitespace in file names
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: The Jam Annoyance FAQ
Date: Fri, 10 Aug 2001 20:03:47 -0400
The problem is that the files that can be #included from a single header
file can depend on lots of things. At the very least, they depend on the
#include path(s) - if you distinguish <> from "" as many compilers do - which
can vary from source file to source file. Then there is the compiler's
search algorithm... for example, MSVC will search the directories of the
chain of files that resulted in including the given header before it moves
on to looking at the #include path. Altogether nightmarish if you want to
get it precisely right.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: The Jam Annoyance FAQ
Date: Sat, 11 Aug 2001 09:44:51 -0400
Are you certain that's the case? If so, a big improvement could be made if
Jam would only scan each unique header file once, but that it would run
HdrRule with the results of the scan on each target. In other words, given:
<foo!bar>x.h the scan proceeds once, producing the list of #included files,
and is cached. When it comes time to scan <baz>x.h which is found to be
bound to the same file as <foo!bar>x.h, the cached scan results are used to
invoke the HdrRule. Of course, this only works if HDRSCAN is set the same on
both targets.
FWIW, Boost.Build is gristing each header with the directory of the
#including file and the entire include path ($(HDRS)) concatenated with '#'
characters -- which still isn't perfect but stands a better chance of being
correct than what you've proposed. It doesn't seem to be causing serious
slowness on the moderate builds I've been doing. It /does/ make the build
system more complicated and hard-to-understand than I'd like.
What's the difference between a build tool turd (e.g. header cache) and an
object file, other than the pejorative name? They serve similar purposes:
they represent an intermediate state of the total build process which is
used to reduce the amount of work required at subsequent build invocations.
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Sat, 11 Aug 2001 15:47:17 -0400
Subject: Minimal Jambase
I started to try to do something like that myself, but it seems there's a
lot I can't live without. For example, the SubDir rule at least needs the
definitions of $(DOTDOT), FSubDir, FDirName, FGrist, and (indirectly)
$(DOT). What's your secret?
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Sat, 11 Aug 2001 16:07:07 -0400
Subject: FSubDir documentation fix
The comments on FSubDir appear to be completely wrong:
# If $(>) is the path to the current directory, compute the
# path (using ../../ etc) back to that root directory.
# Sets result in $(<)
I suggest the following replacement:
# Given $(<), the tokens comprising a relative path from D1 to
# a subdirectory D2, return the relative path from D2 to D1,
# using ../../ etc.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Date: Mon, 13 Aug 2001 10:29:15 -0700
Subject: RE: Minimal Jambase
# Bare Minimum Jambase for Advantax R7 Development
if $(UNIX) {
DOT default = . ;
DOTDOT default = .. ;
SLASH default = / ;
} else if $(NT) {
DOT default = . ;
DOTDOT default = .. ;
SLASH default = \\ ;
}
JAMFILE default = Jamfile ;
JAMRULES default = Jamrules ;
# Include TOP/Jamrules.
include $(TOP)$(SLASH)$(JAMRULES) ;
# Include Local Jamfile
ruleUp dummy ;
include $(JAMFILE) ;
dummyUp dummy ;
# Include top level Jamfile
# (and all sub-Jamfile)
SubInclude TOP ;
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Mon, 13 Aug 2001 14:04:16 -0400
Subject: Re: recursive "" includes not tracked?
Good question! I don't see any, either. That's a surprising oversight
AFAICT.
It looks like a simple Jam modification to set the bound path of each
scanned header into a target-specific variable (e.g. BINDING) might do the
trick.
Sounds like you don't have FTJam. SUBST is a built-in rule in FTJam.
http://freetype.sourceforge.net/jam/index.html#where
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Mon, 13 Aug 2001 15:38:24 -0400
Subject: Re: Minimal Jambase
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Date: Mon, 13 Aug 2001 14:21:48 -0700
Subject: RE: Minimal Jambase
The developer has to set TOP. This was a requirement
of the Jambase that I started with (for multiple directory
projects).
Subject: RE: The Jam Annoyance FAQ
Date: Mon, 13 Aug 2001 15:49:00 -0700
From: "Chris Antos" <chrisant@windows.microsoft.com>
I spent hours tracking down the problem, so unless you've done likewise
and reached a different conclusion, then yes I'm quite certain. :-)
That's another way of phrasing the solution I proposed. The part you've
quoted above is merely my temporary hack around the problem, without
changing any Jam code.
That's "more correct" yes, but it causes exactly the problem I described
as needing to be solved (but it's even more aggressive about uniquely
gristing headers, so it may exacerbate the problem). If you're happy
with the speed, more power to you. I'm not happy with the speed, and
saw the header scan time go from <1 second to >10 seconds simply from
gristing all headers. My hack described above brought it back down to
<1 second.
My intuitive definition of "build tool turds" is any file produced by
the build tool, versus by the compiler/linker/etc. These files often go
in special locations, and generally are not removed by the build tool's
"clean" (or equivalent) command.
Date: Wed, 15 Aug 2001 17:07:29 -0400
Subject: calling an executable prior to building the dependencies??
I would need some help to figure out whether this problem can be
solved with Jam.
A tool takes xml documents as input and generates a binary file.
To ensure the compilation process will not generate any error,
I want the target to depend on all the files that are referred
in the xml document. First, the relevant information can spread
across multiple lines; is the Jam regexp engine capable of parsing
across more than one line?
The same tool used to compile the xml document can be invoked to
output, as a text file, a list of dependencies.
Is there any way to make jam call this tool first, retrieve
those dependencies, and later use them with the DEPEND rule?
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: JAM
Date: Tue, 21 Aug 2001 14:05:47 -0400
Download prebuilt binaries from:
http://prdownloads.sourceforge.net/freetype/ftjam-2.3.5-win32.zip
Sources available, also:
https://sourceforge.net/project/showfiles.php?group_id=3157&release_id=45917
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Sun, 26 Aug 2001 13:09:03 -0400
Subject: Possible bug in execnt.c?
[I admit I'm actually looking at FTJam sources here, but I doubt it has been changed]
In execnt.c I spy the following:
/* Trim leading, ending white space */
while( isspace( *string ) )
++string;
p = strchr( string, '\n' );
while( p && isspace( *p ) )
++p;
It doesn't look like it's trimming the trailing whitespace as claimed by the comment.
From: Alain KOCELNIAK <alain@corys.fr>
Subject: Debugging level and return status
Date: Thu, 30 Aug 2001 10:32:22 +0200
When I run jam with options -a -d+2 (to force the build and to show action text),
actions are printed and executed,
but jam exits with EXITBAD status even though all action executions are OK.
Without the -a -d+2 options jam exits with EXITOK status.
( -a is used to force the build; the problem comes from -d+2 )
Is this normal behavior ( debugging level => EXITBAD ) ???
I try to search in the source; for the moment I'm in make1.c :
- This function returns : counts->total != counts->made
- With -d+2 option : counts->total is always 0 ( counts->made is not zero )
- Without -d+2 option : counts->total is equal to counts->made
Date: Fri, 7 Sep 2001 21:31:21 -0700 (PDT)
Subject: /usr/bin/ld: cannot find -lqt
I'm getting the subject line error while trying to compile
a program under RedHat7.0. The Jamfile refers to a jamdefs
file which has the following OS switch in it:
case QT :
if $(UNIX) {
SubDirHdrs /usr/local/qt/include/ ;
extras += -lqt ;
...etc
My ld.so.conf file includes the path /usr/local/qt/lib in
it. I'm a novice with Jam so am struggling a bit with
this. What library is the ld looking for here? I'd
appreciate any advice on how to get around this linking
issue, if that's what it is.
Date: Sun, 9 Sep 2001 20:57:10 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: /usr/bin/ld: cannot find -lqt
It's referring to Qt, http://www.trolltech.com/qt/.
May I ask what program that is?
From: "Jeremy Furtek" <jeremyf@believe.com>
Date: Mon, 10 Sep 2001 10:30:12 -0700
Subject: Question on jam dependencies and the Clean rule
The Jambase file defines actions for the Clean rule, yet no procedure. Many
default Jambase rules use the Clean rule:
Clean clean : somefile ;
When invoking jam with the clean target:
jam clean
'somefile' gets removed.
My question is: How does this happen?
My first guess was that jam would have some default interpretation of the
Clean rule to establish a dependency between the clean target and
'somefile'. In other words, all targets before the colon (:) are made to be
dependent on all files after the colon.
Yet, in looking at the default rules in Jambase, there are a number that
have the following statement:
Depends $(<) : $(>) ;
indicating that the dependency must be manually specified. My own (limited)
experience in writing rules seems to confirm that there isn't an automatic
dependency.
So how exactly does "jam clean" work? Any hints or corrections to my
assumptions would be greatly appreciated.
Date: Mon, 10 Sep 2001 11:32:29 -0700 (PDT)
Subject: Re: Question on jam dependencies and the Clean rule
"clean" is specified as a NOTFILE, so its (nonexistent) timestamp is never
checked, so its dependencies are always "newer". So:
Clean clean : somefile ;
says: Do "Clean somefile" if "somefile" is newer than "clean", which it
will always be.
From: "Jeremy Furtek" <jeremyf@believe.com>
Subject: RE: Question on jam dependencies and the Clean rule
Date: Mon, 10 Sep 2001 11:56:34 -0700
I understand the NOTFILE modifier and the basic dependency mechanism (I
think ;-)). My question is:
Why is 'somefile' considered a dependency of 'clean'?
There is no procedure in Jambase for the 'Clean' rule that establishes the
dependency - only an action (at least in Perforce Jam 2.3.1). I can think of
two possible sources for the dependency.
1.) The statement "Clean clean : somefile" forces 'somefile' to be a
dependency of 'clean.' In the more general case, all of $(>) become
dependencies of $(<). If this is true, then why do other procedures in
Jambase explicitly do this:
Depends $(<) : $(>) ;
2.) The lack of a procedure for 'Clean' indicates some other default case
that establishes the dependency.
Since neither one of these is entirely satisfactory, I am hoping that there
is something that I am missing.
Date: Mon, 10 Sep 2001 12:52:09 -0700 (PDT)
Subject: RE: Question on jam dependencies and the Clean rule
I think maybe you're thinking of dependencies a little wrong. As the doc
puts it:
Jamfiles contain rule invocations, which usually look like:
RuleName targets : targets ;
The target(s) to the left of the colon usually indicate what gets
built, and the target(s) to the right of the colon usually indicate
what it is built from.
If there were a rule for Clean that associated a *dependency* between the
target-to-build (ie., the pseudo-target "clean") and the targets to build
it from (ie., the files to be removed), then those files would need to
exist before (pseudo-)target "clean" could get "built", since you would
have told Jam "clean depends on somefile (existing and being up-to-date),
so go check that and, if need be, build it first, then build clean" (which
would be rather pointless, building something just so you could remove it).
From: "Jeremy Furtek" <jeremyf@believe.com>
Subject: RE: Question on jam dependencies and the Clean rule
Date: Mon, 10 Sep 2001 13:29:38 -0700
My thinking was that the dependency of 'clean' on 'somefile' could be one in
which the file would not be created if it did not already exist, and the
update action would be one that removes the file. (In this case I mean
"dependency" in a very general way. I'm not sure if that sort of
"dependency" exists in jam.)
In any case, I was definitely not interpreting things correctly. I'll have
to go back and reread all of my Jamfiles again. Thanks for clearing this up.
From: "Jeremy Furtek" <jeremyf@believe.com>
Date: Wed, 12 Sep 2001 16:14:45 -0700
Subject: Setting a variable "on" a target
I have the following minimal test case:
# Jamfile
LINKFLAGS = /DEBUG ;
# CASE 1
LINKFLAGS on test.exe += /NONSENSE ;
# CASE 2
#LINKFLAGS on test.exe = $(LINKFLAGS) /NONSENSE ;
Main test : main.cpp ;
(I commented out case 2 when running case 1 and vice versa)
The output of Case 1 is:
link /nologo /NONSENSE /out:test.exe main.obj advapi32.lib libc.lib
oldnames.lib kernel32.lib
The output of Case 2 is:
link /nologo /DEBUG /NONSENSE /out:test.exe main.obj advapi32.lib
libc.lib oldnames.lib kernel32.lib
I would expect the two to be identical. The "+=" operator, in conjunction
with the "on" restriction, seems to be overwriting the value.
I am using Jam 2.3.1 on NT, downloaded from the Perforce FTP site. I found a
thread on this in the mailing list archives from October of 1999 that went unanswered.
Is this a bug in Jam or a bug in my thought process? ;-)
From: "Jeremy Furtek" <jeremyf@believe.com>
Sent: Wednesday, September 12, 2001 7:14 PM
Subject: Setting a variable "on" a target
The first line sets the /global/ LINKFLAGS variable.
The second line appends /NONSENSE to LINKFLAGS on test.exe, which is
currently empty.
The third line sets LINKFLAGS on test.exe to the global LINKFLAGS followed by /NONSENSE.
Date: Thu, 13 Sep 2001 12:17:16 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Setting a variable "on" a target
The third line sets LINKFLAGS on test.exe; should _reading_ LINKFLAGS in
that context read the global variable, or the old 'on test.exe' variable?
If the latter, then I guess the old local value is a copy of the global
one, so the example jamfile doesn't illustrate the difference.
Changing case 2 to
# CASE 2
LINKFLAGS on test.exe = /TEST ;
LINKFLAGS on test.exe = $(LINKFLAGS) /NONSENSE ;
makes the jamfile show the difference.
Put another way, is the right-hand side of the expression evaluated in the
context "on test.exe" or in the global context?
It's a tricky question. I don't know what jam does now, and I also don't
know which is the more desirable behaviour. Opinions?
Subject: RE: Setting a variable "on" a target
Date: Thu, 13 Sep 2001 12:40:38 -0700
From: "Chris Antos" <chrisant@windows.microsoft.com>
There's a bug, such that "x on y += z;" does exactly the same thing as
if you used "=" instead of "+=". Same problem with "x on y ?= z;". I
fixed that in my copy of Jam, but can't find anything in my Sent Items
suggesting that I ever posted the fix. The fix requires code changes to
Jam, and also changes to Jambase, and likely any Jamrules or Jamfile
that uses the "on .. +=" or "on .. ?=" syntax.
The code changes need to happen in rules.c, addsettings(). Depending on
how you fix it, you'll need to update the callers. For example, I also
updated the debug output code in compile.c, compile_settings().
The main thing I did was just change the "int append" parameter to "int
flag" to use the VAR_SET/VAR_DEFAULT/VAR_APPEND flags, rather than being a boolean.
Sorry I don't have time to package up a nice set of diffs for you to
apply. The changes aren't hard, though.
From: David Abrahams [mailto:david.abrahams@rcn.com]
Sent: Wednesday, September 12, 2001 6:10 PM
Subject: Re: Setting a variable "on" a target
The first line sets the /global/ LINKFLAGS variable.
The second line appends /NONSENSE to LINKFLAGS on test.exe, which is
currently empty.
The third line sets LINKFLAGS on test.exe to the global LINKFLAGS
followed by /NONSENSE.
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Mon, 17 Sep 2001 22:52:04 -0400
Subject: Useful (?) Jam modifications
I have recently completed a series of modifications to the (already
modified) FTJam source code. Please let me know if they are of any use to
you and I will post them.
Modifications:
1. A fix for the Windows NT line-length limitation. This fix works under the
following limited circumstances:
a. The build action is a single line.
b. JAMSHELL on the target is set to "%"
Though not completely general, it should be enough to handle link commands
for those compilers which can't be made to use command files (e.g. GCC).
2. A hook which allows you to find out what path a target is bound to. If
you set BINDRULE on the target, the rule named by $(BINDRULE) will be called
with the target name and the path to which it was bound. This is needed for
accurate binding of header files, since the header search algorithm for many
compilers depends on the directory of the #including file (and sometimes the
file which #included that, etc.)
3. Argument list support. I find that Jam is simply too error-prone for
building systems of substantial size without it. You can now write:
rule foo ( x y : z * ) { }
This will check that foo is invoked with 1-2 arguments, the first of which
has 2 elements. The "*" is a modifier which indicates that the second
argument, if present, can be any length. Within the body of foo, x, y, and z
will be bound as local variables to $(1[1]) $(1[2]) and $(2), respectively.
Other allowed modifiers:
"+", which is like "*" except that at least one element is required.
"?", which indicates that the argument is optional.
If the arguments don't match the argument list, Jam will exit with an
appropriate error. You can still leave out the argument list, in which case
Jam operates in the usual permissive way.
Date: Tue, 18 Sep 2001 08:07:41 -0700 (PDT)
Subject: Re: Useful (?) Jam modifications
Is this like the change I offered at:
If it is, did you do it in some other way? (I'd be interested in seeing
how you did it.)
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Useful (?) Jam modifications
Date: Tue, 18 Sep 2001 14:28:29 -0400
What I did is a lot like that, but my approach is a bit more flexible
because it allows a rule to be invoked, which can do anything you want
(including setting that variable, or any other variable you choose).
Since the patch is almost as big as search.c, which is small, I'll just give
you my search.c:
/*
* Copyright 2001 David Abrahams
*
* This file is part of Jam - see jam.c for Copyright information.
*/
# include "jam.h"
# include "lists.h"
# include "search.h"
# include "timestamp.h"
# include "filesys.h"
# include "variable.h"
# include "newstr.h"
static void call_bind_rule(
char* target_,
char* boundname_ ) {
LIST* bindrule = var_get( "BINDRULE" );
if( bindrule ) {
/* No guarantee that target is an allocated string, so be on the safe side */
char* target = copystr( target_ );
/* Likewise, don't rely on implementation details of newstr.c: allocate
* a copy of boundname */
char* boundname = copystr( boundname_ );
if( boundname && target ) {
/* Prepare the argument list */
LOL args;
lol_init( &args );
/* First argument is the target name */
lol_add( &args, list_new( L0, target ) );
lol_add( &args, list_new( L0, boundname ) );
if( lol_get( &args, 1 ) )
evaluate_rule( bindrule->string, &args );
/* Clean up */
lol_free( &args );
} else {
if( boundname ) freestr( boundname );
if( target ) freestr( target );
}
}
}
/*
* search.c - find a target along $(SEARCH) or $(LOCATE)
*/
char *
search(
char *target,
time_t *time ) {
FILENAME f[1];
LIST *varlist;
char buf[ MAXJPATH ];
int found = 0;
char *boundname = 0;
/* Parse the filename */
file_parse( target, f );
f->f_grist.ptr = 0;
f->f_grist.len = 0;
if( varlist = var_get( "LOCATE" ) ) {
f->f_root.ptr = varlist->string;
f->f_root.len = strlen( varlist->string );
file_build( f, buf, 1 );
if( DEBUG_SEARCH ) printf( "locate %s: %s\n", target, buf );
timestamp( buf, time );
} else if( varlist = var_get( "SEARCH" ) ) {
while( varlist ) {
f->f_root.ptr = varlist->string;
f->f_root.len = strlen( varlist->string );
file_build( f, buf, 1 );
if( DEBUG_SEARCH ) printf( "search %s: %s\n", target, buf );
timestamp( buf, time );
if( *time ) {
found = 1;
break;
}
varlist = list_next( varlist );
}
}
if (!found) {
/* Look for the obvious */
/* This is a questionable move. Should we look in the */
/* obvious place if SEARCH is set? */
f->f_root.ptr = 0;
f->f_root.len = 0;
file_build( f, buf, 1 );
if( DEBUG_SEARCH ) printf( "search %s: %s\n", target, buf );
timestamp( buf, time );
}
boundname = newstr( buf );
/* prepare a call to BINDRULE if the variable is set */
call_bind_rule( target, boundname );
return boundname;
}
From: Michael Linehan <mlinehan@baltimore.com>
Date: Fri, 5 Oct 2001 13:46:20 +0100
Subject: Can I specify a full library name in a Jamfile
I am trying to build an executable which links a library libXXX.so.1.2.3.4.
However, when I put this in the external libraries section of the jamfile, I
get the error "Unable to find library libXXX.so.1.2.3".
The trailing .4 has been removed. The relevant section in my jamfile looks like:
ExecutableLinksWithExternal MyTest :
libXXX.so.1.2.3.4
;
From: "Jeremy Furtek" <jeremyf@believe.com>
Date: Mon, 8 Oct 2001 17:16:45 -0700
Subject: Header file peculiarity in default rules?
Upon testing my Jam build system on multiple platforms, I came across the
following difference in Jambase default rules:
The Object rule sets the HDRS variable on a target object file as follows:
HDRS on $(<) = $(SEARCH_SOURCE) $(HDRS) $(SUBDIRHDRS) ;
The HDRSEARCH variable, set a few lines below the above statement, adds the
$(STDHDRS) value to the list that is searched for header file dependencies:
HDRSEARCH on $(>) = $(HDRS) $(SUBDIRHDRS) $(h) $(STDHDRS) ;
The default actions for Cc/C++ look like this:
$(C++) -c $(C++FLAGS) $(OPTIM) -I$(HDRS) -o $(<) $(>)
For Windows NT platforms, however, there is an override that adds the
STDHDRS path to the include path list:
$(C++) /c $(C++FLAGS) $(OPTIM) /Fo$(<) /I$(HDRS) /I$(STDHDRS) /Tp$(>)
The net result of this is that the STDHDRS variable is included on the
command line for NT, yet not on other platforms.
Is there a reason for this? I would consider this to be nitpicking, and I
can certainly come up with a workaround. If it is intended, I figured that
the reason might save someone else the time that I spent tracking it down.
(using Perforce Jam 2.3)
From: Vladimir Prus <ghost@cs.msu.su>
Date: Tue, 9 Oct 2001 11:57:51 +0400
Subject: RmTemps problem with independent targets
Suppose I want Jam to convert tex file to pdf using pdflatex. pdflatex
creates a number of auxiliary files that I want to clean. Here is what I do:
rule pdflatex {
local file = $(1) ;
Depends all : $(file).pdf ;
Depends $(file).pdf : $(file).tex ;
RmTemps $(file).pdf : $(file).log $(file).aux ;
pdflatex-actions $(file).pdf : $(file).tex ;
}
actions pdflatex-actions { pdflatex $(>) }
pdflatex file ;
Jam does not clean the *.aux and *.log files. The problem is that $(file).pdf has
two actions associated with it: the first creates the *.aux and *.log files, the
second removes them. But target binding for both actions occurs at the same time,
before any actions are executed. So when binding targets for the RmTemps action,
which is defined as:
actions ..... existing RmTemps { $(RM) $(>) }
the aux and log files are not found, and due to the "existing" modifier they are
not passed to the rm command.
Workarounds exist (e.g. Depends $(file).pdf : $(file).aux $(file).log ;) but they
are very non-intuitive. It took me considerable time, and a look through the
source code, to understand what's going on. I would be very interested in
suggestions for a real fix.
Date: Wed, 17 Oct 2001 15:18:31 -0400
From: Michael Gentry <mgentry@sharemedia.com>
Subject: Working IDL Rule?
Does anyone have a working IDL rule?
I actually have a working IDL rule, but only for "simple" structures.
I'm building a static library (no nested source directories) in a
directory which contains both C/C++ files and some IDL files. This
works just fine. However, I also have a "shared" subdirectory for
creating the shared objects/library from the same C/C++ and IDL files
(only the .o's and .so's go in it). This works just fine, too, but it
regenerates the C++ files from the IDL files in the TOP directory (it
doesn't need to do this), which forces unneeded re-compiles when
running Jam again (as the static library is now out of date).
I'm convinced it is either a gristing or dependency problem, but I'm
pretty stumped as to where at the moment (I've spent a few days trying
to get it all working). If anyone has a working IDL rule for this type
of setup they could share, I'd appreciate it.
/dev/mrg
PS. I could post a lot more details if needed, but thought I'd try a
more sparse approach first.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Working IDL Rule?
Date: Wed, 17 Oct 2001 13:21:29 -0700
Different IDL compilers work differently. We use BEA WebLogic Enterprise,
and here are the parts from our Jamrules which drive IDL ...
idl.pl is a wrapper around the idl compiler which "fixes" its output.
It also does a cd so that the idl compiler is invoked in the correct
directory. BEA's idl compiler always produces output in the cwd, which
is not what we want (there is no way to specify the output file names).
Any existing outputs have to be removed first in case they are read-only.
We also build a control file which controls transaction defaults on the
IDL interface.
The Object rule has been modified so that it understands the IDL outputs.
Object file_c.o : file.idl ;
Object file_s.o : file.idl ;
and this explains the gate on rule Idl.
Also
Object any.o : file.skel ;
Object any.o : file.stub ;
Allows the control of what portion of the idl output goes into a library.
For example
Library client : source1.cpp source2.cpp file.stub ;
Library server : source3.cpp file.skel ;
Notice the objectFiles rule, which knows how to map
.idl, .skel, and .stub to .o files.
# Idl file.idl ;
# Builds 4 output files file_c.h file_s.h file_c.cpp file_s.cpp
# _c files are for the client ONLY, _c, _s files are for the server
rule Idl {
local g s c n ;
if ! $($(<:G=)-idl) {
# Cheesy gate to prevent multiple invocations
$(<:G=)-idl = true ;
makeGristedName g : $(<:G=) ;
n = $(<:G=) ;
s = $(n:S=_s.h) $(g:S=_s.cpp) ;
c = $(n:S=_c.h) $(g:S=_c.cpp) ;
IdlRm $(c) $(s) : $(g) ;
IdlDo $(c) $(s) : $(g) ;
IdlMv $(c) $(s) : $(g) ;
}
}
rule IdlDo {
local h i ;
# special case because of how idl.pl works
MakeLocate $(<[1]) $(<[3]) : $(LOCATE_COMPONENT) ;
MakeLocate $(<[2]) $(<[4]) : $(LOCATE_SOURCE) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
Depends $(IDLS) : $(<) ;
Clean clean : $(<) ;
# headers often need to be pre-generated
# for dependencies analysis to work
Depends $(HEADERS) : $(<[1]) $(<[3]) ;
# alias to non gristed form
for i in $(<) {
if $(i) != $(i:G=) { Depends $(i:G=) : $(i) ; }
}
HDRS on $(<) = $(SEARCH_SOURCE) $(HDRS) $(SUBDIR_HDRS) ;
# Build a "default" ICF file
Depends $(<) : $(>:S=.xx) ;
Depends $(>:S=.xx) : $(>) $(ICFTMPLT) ;
MakeLocate $(>:S=.xx) : $(LOCATE_SOURCE) ;
RmIfLink $(>:S=.xx) ;
IdlIcf $(>:S=.xx) : $(>) $(ICFTMPLT) ;
ICFFILE on $(<) += $(>:S=.xx) ;
# tell jam that _c.cpp file includes DbugDebug.h
# (added by IdlDo action (idl.pl))
INCLUDES $(<[2]) : Dbug/DbugDebug.h ;
# in case Idl rule not called from Object rule ...
# (or $(>) in Object rule is not the .idl file, but rather
# one of the aliases.)
ScanFile $(>) ;
}
rule Library {
local o ;
objectFiles o : $(>) ;
LibraryFromObjects $(<) : $(o) ;
Objects $(>) ;
}
rule Object {
local h ;
# locate object and search for source, if wanted
Clean clean : $(<) ;
MakeLocate $(<) : $(LOCATE_TARGET) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
# Save HDRS for -I$(HDRS) on compile.
# We shouldn't need -I$(SEARCH_SOURCE) as cc can find headers
# in the .c file's directory, but generated .c files (from
# yacc, lex, etc) are located in $(LOCATE_TARGET), possibly
# different from $(SEARCH_SOURCE).
HDRS on $(<) = $(SEARCH_SOURCE) $(HDRS) $(SUBDIR_HDRS) ;
# handle #includes for source: Jam scans for headers with
# the regexp pattern $(HDRSCAN) and then invokes $(HDRRULE)
# with the scanned file as the target and the found headers
# as the sources. HDRSEARCH is the value of SEARCH used for
# the found header files. Finally, if jam must deal with
# header files of the same name in different directories,
# they can be distinguished with HDRGRIST.
# $(h) is where cc first looks for #include "foo.h" files.
# If the source file is in a distant directory, look there.
# Else, look in "" (the current directory).
ScanFile $(>) ;
RmIfLink $(<) ;
switch $(>:S) {
case .asm : As $(<) : $(>) ;
case .c : Cc $(<) : $(>) ;
case .C : C++ $(<) : $(>) ;
case .cc : C++ $(<) : $(>) ;
case .cpp : C++ $(<) : $(>) ;
case .pc : Cc $(<) : $(>:S=.c) ;
ProC $(<:S=.c) : $(>) ;
case .f : Fortran $(<) : $(>) ;
case .idl :
switch $(<:S=) {
case *_c : C++ $(<) : $(>:S=_c.cpp) ; Idl $(>) ;
case *_s : C++ $(<) : $(>:S=_s.cpp) ; Idl $(>) ;
}
case .skel : C++ $(<) : $(>:S=_s.cpp) ; Idl $(>:S=.idl) ;
case .stub : C++ $(<) : $(>:S=_c.cpp) ; Idl $(>:S=.idl) ;
case .l : C++ $(<) : $(<:S=.cpp) ;
Lex $(<:S=.cpp) : $(>) ;
case .s : As $(<) : $(>) ;
case .y : C++ $(<) : $(<:S=.cpp) ;
Yacc $(<:S=.cpp) : $(>) ;
case * : UserObject $(<) : $(>) ;
}
}
rule Objects {
local i j s x ;
makeGristedName s : $(<) ;
for i in $(s) {
objectFiles x : $(i) ;
for j in $(x) {
Object $(j) : $(i) ;
Depends $(OBJ) : $(j) ;
# Alias gristed name as ungristed
Depends $(j:G=) : $(j) ;
NOTFILE $(j:G=) ;
}
}
}
rule objectFiles {
local _i ;
$(<) = ;
for _i in $(>) {
switch $(_i:S) {
case .idl : $(<) += $(_i:S=_c$(SUFOBJ)) $(_i:S=_s$(SUFOBJ)) ;
case .skel : $(<) += $(_i:S=_s$(SUFOBJ)) ;
case .stub : $(<) += $(_i:S=_c$(SUFOBJ)) ;
case * : $(<) += $(_i:S=$(SUFOBJ)) ;
}
}
}
actions IdlDo bind ICFFILE { $(IDL) $(IDLFLAGS) -I$(HDRS) $(>) $(ICFFILE[1]) }
actions quietly IdlMv {
test $(<[1]:D=$(>:D)) = $(<[1]) || $(MV) $(<[1]:D=$(>:D)) $(<[1])
test $(<[2]:D=$(>:D)) = $(<[2]) || $(MV) $(<[2]:D=$(>:D)) $(<[2])
test $(<[3]:D=$(>:D)) = $(<[3]) || $(MV) $(<[3]:D=$(>:D)) $(<[3])
test $(<[4]:D=$(>:D)) = $(<[4]) || $(MV) $(<[4]:D=$(>:D)) $(<[4])
}
actions ignore quietly IdlRm {
$(ISLINK) $(<[1]) && $(RM) $(<[1])
$(ISLINK) $(<[2]) && $(RM) $(<[2])
$(ISLINK) $(<[3]) && $(RM) $(<[3])
$(ISLINK) $(<[4]) && $(RM) $(<[4])
}
actions quietly IdlIcf {
$(TOP)/$(OS)/tools/icf.pl $(>[2]) < $(>[1]) > $(<)
}
Date: Fri, 19 Oct 2001 15:09:03 -0400
From: Michael Gentry <mgentry@sharemedia.com>
Subject: Re: Working IDL Rule?
Thanks to the examples from Randy Roesler, I was finally able to get my
IDL rules working for omniORB on Linux while building shared libraries.
The gating in the IDL rule really helped. I had tried that before, but
I guess I didn't get it quite right. It now only runs the IDL compiler
once (generating the C++ header/source) even though I build a shared
library in a subdirectory off the same source.
Here are the rules I'm using in case they'll help anyone else:
rule UserObject {
switch $(>:S) {
case .idl : Idl $(<) : $(>) ;
case * : EXIT "Unknown suffix on" $(>) "- see UserObject rule in Jamfile(5)." ;
}
}
rule Idl {
local i = $(>:G=) ;
local h = $(i:S=.hh) ;
local s = $(i:S=.cc) ;
Depends $(<) : $(i) $(h) $(s) ;
C++ $(<) : $(s) ;
if ! $($(<:G=)-idl) {
local g ;
local n ;
$(<:G=)-idl = true ;
makeGristedName g : $(<:G=) ;
n = $(<:G=) ;
Idl1 $(h) $(s) : $(>) ;
Clean clean : $(s) $(h) ;
}
}
actions Idl1 {
$(IDL) $(IDLFLAGS) $(>)
$(MV) $(>:B)SK.cc $(>:S=.cc)
}
Then in TOP/Jamfile to create the static library:
Library libum.a : $(LIBUM_IDLS) $(LIBUM_SRCS) ;
And in TOP/shared/Jamfile to create the shared library:
SharedLibrary libum.so : $(LIBUM_IDLS) $(LIBUM_SHARED_SRCS) ;
(The SharedLibrary rule is very similar to the standard Library rule.)
A note about the Idl1 action: omniORB's idl compiler creates both the
skeleton and the stub in a single file (foo.idl => fooSK.cc, foo.hh), so
after it creates the file, I move it back to the original filename minus
the SK suffix on the basename.
/dev/mrg
Date: Wed, 24 Oct 2001 09:26:37 +0200
From: Toon Knapen <toon@si-lab.org>
Subject: rule targets & directory tree
I'm a newbie to Jam programming but want to define my own rules for
generating PDF files from latex files.
If I'm in the subdirectory containing the latex file, `jam doc`
generates the PDF. If I'm in one of its parents, `jam doc` does not
generate them.
Can anyone point me in the right direction? (I'm sure it has something to
do with grist and being able to locate the target and sources, but ... sigh.)
So the Jamfile in my TOP directory reads :
# Jamfile in TOP
SubInclude TOP main doc ;
The Jamrules in my TOP directory read :
#Jamrules in TOP
Depends doc : femtown_$(MODULE)_module.pdf ;
rule latex2pdf-doc-gen {
local _s ;
_s = [ FGristFiles $(>) ] ;
LOCATE on $(<) = $(LOCATE_TARGET) ;
Depends $(<) : $(_s) ;
Clean clean : $(<) $(<:B).aux $(<:B).toc $(<:B).log ;
}
actions latex2pdf-doc-gen {
echo --- action --- $(>)
pdfelatex $(>)
}
and finally the Jamfile in TOP/main/doc reads :
# Jamfile in TOP/main/doc
SubDir TOP main doc ;
LATEXSOURCE = femtown_main_module.tex ;
latex2pdf-doc-gen femtown_main_module.pdf : $(LATEXSOURCE) ;
From: "eschner" <eschner@sic-consult.de>
Date: Thu, 25 Oct 2001 14:00:43 +0200
Subject: [newbie] generated files
As said in the subject, I am new to jam. Unfortunately, neither the
documentation nor the mailing list archive could help me with the following problem:
During our build process a couple of C sources are generated from some text
file. The number and names of the generated files vary unpredictably.
The corresponding objects should enter and leave the library depending on the
presence of the source.
Up to now I found myself unable to put the names of all those files into any
rule because there seems to be no globbing and no variable reading from files.
What did I miss? What can I do?
Date: Thu, 25 Oct 2001 14:07:05 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: [newbie] generated files
I'm not sure whether it'll work, but I think a few extra calls to Depends might help.
Somewhere, there's a jam rule that decides that mumble.c will have to be
generated. Right? That rule could also say "Depends gargle : mumble.c ;"
and then anyone building executable gargle should compile mumble.c and
link mumble.o into gargle.
From: "Arnaldur Gylfason" <arnaldur@decode.is>
Date: Thu, 25 Oct 2001 13:47:34 +0000
Subject: Re: rule targets & directory tree
I assume you define MODULE correctly somewhere.
Don't you need SubDir TOP ;
at the head of the TOP Jamfile?
Apart from this my guess is you need gristing on the doc source :
femtown_$(MODULE)_module.pdf
Try putting the Depends clause within the rule latex2pdf-doc-gen after
gristing:
rule latex2pdf-doc-gen {
local _s ;
_s = [ FGristFiles $(>) ] ;
LOCATE on $(<) = $(LOCATE_TARGET) ;
Depends doc : $(_s) ;
Depends $(<) : $(_s) ;
Clean clean : $(<) $(<:B).aux $(<:B).toc $(<:B).log ;
}
You could also specify
NOTFILE doc ;
ALWAYS doc ;
in Jamrules.
From: "Achim Domma" <achim.domma@syynx.de>
Date: Thu, 25 Oct 2001 22:11:17 +0200
Subject: [newbie] setup sql database with jam
I have about 50 SQL scripts which I use to set up a database, and I want to
do this with jam. I thought about putting different groups of files under
different targets, so that it should be possible to rebuild different parts of
the database (for example, rebuild only the stored procedures, but leave the
tables unchanged). I tried to write a UserObject rule and an Sql rule, but jam does nothing.
Could somebody give me a hint? I have no experience with make and can't
find my starting point.
Date: Mon, 5 Nov 2001 15:51:51 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: A few extensions to Jam
Some of the development groups at Alias|Wavefront have been using a minor
variant of Jam for a couple years with great success. We are using Jam
to build large products on multiple platforms. Like most users of Jam,
we have customized a local Jambase quite heavily. Along with the Jambase
customizations, we have extended Jam itself in a variety of ways.
The extensions to Jam may be useful for other groups working with large
projects; this mail announces their availability.
I have a branch on the Perforce public depot at:
//guest/craig_mcpheeters/jam/src/...
That branch now contains all of the changes we have made to Jam, in a form
that is hopefully usable by a variety of people. There are 12 independent
extensions, and a few simple fixes.
There is a file in the branch called Jamfile.config which lists the extensions
in some detail. Briefly, they are:
* a header cache. Jam normally scans all source files for headers at each
run. This can be time consuming on large source trees. The header cache
saves the results of the current header scan, and it is re-used the next
time jam is run. This can save several minutes of startup time on large projects
* the output from a run of Jam using several jobs is now optionally serialized
* enable command buffers to grow dynamically. Some platforms are able to
accept multi-megabyte command buffers. With this extension, jam can generate them
* a new command line option (-p) to disable dependency checking on headers.
This is useful if you modify a header which would trigger a large build
which you want to defer
* a new debug level (-d+10) which outputs the dependency graph in a slightly
more readable format than the other debug levels offer
* a few extensions to tune Jam's output for working with large builds
* a 'NOCARE file' that works on intermediate files
* slightly improved jam errors and warnings. The Jam file and line number
are appended to the error message. These may not always be completely
accurate, but they are better than nothing when trying to find an error
somewhere in several hundred files
* there are two new modifiers for variable expansion: $(file:/) and $(file:\\)
which make all slashes uniformly forward or backward
A few of these are really useful for large projects. In particular, the
header cache, the serialization of Jam's output from multiple jobs,
enabling large command buffers, and being able to disable header
dependencies are all essential for our work with large projects. By
'large' I mean several tens of thousands of source files. Some of the
other extensions are useful but not as critical.
Any of the extensions can be built by specifying on the command line when
you build Jam that you want it. For example, to build in the header
cache, do:
jam -sHeaderCache=1
to build in all of the extensions:
jam -sAllOptions=1
See the file Jamfile.config in my branch for more details.
I don't consider this to be a new version of Jam. It is based entirely on
the Jam mainline, and contains extensions which are compatible with Jam.
If you are working on smallish projects, you may not find much in here that
is attractive. If you are working on medium to large projects, some of
this may be worth checking out.
Some of these extensions may end up in the Jam mainline. Depending on how
badly you want the extensions you are free to grab the files from my branch
directly or to wait and let the dust settle over any mainline merging that
goes on.
If you have any suggestions for improvements, please forward them to me.
Note however that I work on this project as a background task...at home.
So I may not reply as quickly as I would like. All feedback is welcome.
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Mon, 5 Nov 2001 19:28:57 -0500
Subject: Re: A few extensions to Jam
I don't think much is happening with the Jam mainline.
I, too have been making a bunch of Jam core language changes, and have fixed
a number of bugs as well. The core language changes are described here:
http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/~checkout~/boost/boost/tools/
build/build_system.htm#core_extensions
It would be good if those of us working on Jam enhancements would get
together, instead of working in isolation.
From: "Craig McPheeters" <cmcpheeters@aw.sgi.com>
Sent: Monday, November 05, 2001 6:51 PM
Subject: A few extensions to Jam
Are these changes based on Jam or FTJam? I'm quite interested in trying out Jam, but
having gone through the mail aliases I'm a little worried about the apparently
stalled mainline, and the various different versions that seem to be popping up. I
have a large project to work with (*.h = 10k files / 3m LOC, *.c = 23k files, 24.5m
LOC, plus other goop on top), so I think that in my case the header caching would be
vital. Some of the stuff done for Boost also looks useful, but it doesn't
appear that I can have both at once.
p.s. How about 'Compote' as a posher alternative to 'Marmalade'?
[From Old French composte, mixture, from Latin composita, feminine past participle of
componere, to put together]
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Tue, 6 Nov 2001 11:30:46 -0500
Subject: Re: Your Jam changes
I'm interested in most of Craig's enhancements, so if you want to merge them
into our source base I'd be happy to look at your patches.
The latest stuff is in the boost cvs repository at sourceforge in the
boost/tools/build module.
Date: Tue, 06 Nov 2001 16:37:30 +0000
From: Alan Burlison <Alan.Burlison@sun.com>
Subject: Re: Your Jam changes
I wouldn't mind doing this if I was fairly confident that the result would actually
work for us, and ascertaining that that was the case will itself take some
considerable time. I'm not desperate for this, as I can easily test using the
existing versions, I'm more concerned by the apparent (?) lack of coordination
between the 3 (?) versions of Jam that seem to be floating around.
From: "Achim Domma" <achim.domma@syynx.de>
Date: Tue, 6 Nov 2001 17:43:06 +0100
Subject: introduction for Jam
I want to learn how to use Jam, but cannot find much documentation. The
short documentation I found is aimed rather at people switching from 'make', but I don't
know much about 'make'. Could somebody point me to a tutorial or send me
some simple (but not trivial ;-) ) examples?
Date: Tue, 6 Nov 2001 10:15:55 -0800 (PST)
From: Laura Wingerd <laura@perforce.com>
Subject: Re: introduction for Jam
I conducted a Jam tutorial session at this year's conference, and you may
find the slides useful. See the links at:
http://www.perforce.com/perforce/conf2001/index.html#jam
Date: Tue, 6 Nov 2001 11:26:17 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: A few extensions to Jam
I've taken a look at your core extensions, thanks for providing the URL.
It seems that you are taking a more aggressive approach to changes than I
am - which is ok, we just have different design goals. I've been trying to
restrict myself to changes which are under-the-covers, without changing the
language itself. A lot of the stuff you've got is interesting: while
loops, modules, etc. In Linux terms, perhaps I'm working in a stable
series and you're working in the next, more aggressive/experimental series,
i.e., 2.4 vs. 2.5 (if that makes sense to you...).
At the moment, the time I have to work on this is constrained in a variety
of ways. What I have is working for us.
I would prefer to not find myself maintaining a branch of jam years down
the road. If there are changes made in the Jam mainline, I should be able
to easily incorporate them, and will do so. In an ideal world, many of the
changes I have would be accepted into the mainline and the delta between
the two would decrease. In reality, some of the changes I have may be
considered controversial, and unnecessary for smaller projects.
Along this line, one of the nice things (among many) that I liked about jam
was its design purity. It's a tiny program which does a complex thing very
elegantly and simply. I would like to see this original quality maintained,
and one of the difficult problems for us is to decide when it's 'done'.
I believe that for the internal Perforce development community, its probably
already considered done. Their products are known to be small and efficient,
I'm not aware of them working on multi-million line, 100Mbyte products. What
they have works for smallish projects. Note that the Perforce server is
'only' a few Mbytes large. Also note that in this world, small capable
programs are a good thing :-)
Are modules necessary? No, probably not. Would some people find them useful?
Yeah, I think so. Where do you draw the line?
Which is a long-winded way of saying, I prefer to minimize the amount of work
I'm doing in my branch, and would like to keep the delta between it and the
mainline as small as possible, in order to encourage its incorporation into
the mainline.
You are of course welcome to all of the changes I have made. This may make
sense in the context of a stable and experimental/more aggressive series of branches.
In a subsequent message you add some of the smaller changes you've made
in your branch:
Some of this sounds good, and it would belong in the stable series (I may be
pushing this analogy too far...). Earlier I mentioned my time is constrained;
that's true, and while I would like to incorporate some of your changes, in
reality this work will likely be pushed many months down the road. Sorry
about that.
Date: Tue, 6 Nov 2001 11:36:01 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Your Jam changes
My changes are based on Jam, I did most of this work almost 2 years ago.
My branch is a direct branch from the jam mainline, with integration
history to make it easy for the jam folks to see the differences.
Yeah, your project sounds large enough to benefit from some of my changes.
It also sounds like my branch and Boost may be compatible - although there
would be some merging required. I also mentioned earlier that I don't want
to spend a great deal of time on this - sorry about that. You can probably
have both at once, but not for free, there is some work required.
Date: Tue, 6 Nov 2001 11:57:03 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Your Jam changes
I wouldn't be overly concerned about the different versions. It's a good thing.
The different versions each seem to have different policies for accepting
changes. The policy in my branch is that I would accept changes (from myself)
which were as minimal as possible and enabled the use of Jam on large
internal projects. My policy was to avoid changes to the language where
possible, with a goal of minimizing the differences in order to ease future
integrations from the mainline, and the reverse.
The policy in the FTjam branch seems more open to changes in the language,
and other non-critical-but-useful changes. I like some of the extensions
David has created, although they don't match the policy I have established
for my branch. Different design goals; it's all perfectly natural.
I'm not sure what the policy is in the mainline yet. Without an established
policy it's hard to know what types of changes to propose. Give it time
though, I am. :-)
This is one of the benefits of open source, although I agree there is an
associated complexity.
Date: Tue, 06 Nov 2001 23:11:05 +0000
From: Alan Burlison <Alan.Burlison@sun.com>
Subject: Re: Your Jam changes
I've no objection to doing the work, but I don't want to end up with it being yet
another version of Jam. However, I've looked through the mail archives, and I can't
recollect seeing any activity on the baseline version for the last 6 months, which
makes me kinda nervous.
$ head -1 /dev/bollocks
subjectively pursue shrink-wrapped best-of-breed quality vectors, going forwards
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Tue, 6 Nov 2001 19:10:17 -0500
Subject: Re: A few extensions to Jam
Actually, these are just some of the bug fixes. #2 is actually a fairly
large change.
It doesn't matter too much to me; I think software lives where it is
maintained and improved. From that point of view Jam seems to be dying at
Boost. Accordingly, I hope to steal most of your work ;-)
Date: Wed, 7 Nov 2001 13:38:58 -0800
From: Richard Geiger <opensource@perforce.com>
Subject: Jam at Perforce
I can understand how somebody could draw that conclusion. By most
appearances, work on Jam development at Perforce has been slow over
the past few years.
Nonetheless, Perforce does care about Jam and the future of Jam, as
evidenced in part by hiring me, as an Open Source Engineer.
In this role, I'll be concerned with all things Open Source at Perforce
and the projects therein, including Jam.
Over the next month or two, I'll be reviewing the state of Jam in
http://public.perforce.com/public/jam/index.html, along with changes
submitted to //guest/.../jam/... branches, reviewing and integrating
such changes (and possibly changes from internal-use versions at Perforce).
Of course Perforce (and, ultimately, Christopher) reserves the right
to guide what gets into the jam "mainline" at Perforce, which will
probably not be all things to all people. There's certainly nothing
wrong with having other variants in the world; after all, that's how
Open Source is supposed to work.
I've already been in touch with some of the contributors to
//guest/../jam/... branches in the Public depot, and will be
contacting others in the coming weeks to coordinate the integration of
new functionality and bug fixes into a new //public/jam release.
If you are working on Jam development outside of the Perforce Public
Depot, and are interested in having your changes make it into the new
release, please contact me at "opensource@perforce.com", with the word
"Jam" in the Subject line.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Jam at Perforce
Date: Wed, 7 Nov 2001 18:42:12 -0500
I'm happy to hear from you! I had heard that you were being hired, but since
there has been no noise since, you can see how I drew that conclusion.
In some ways, I am glad that it has turned out this way: if I'd thought that
there was a good chance of conservative changes being rolled back into Jam,
I don't think I would have implemented many of the improvements I've come up
with. I'm convinced that these improvements will be instrumental in
implementing a robust, capable build system on the scale of Boost.Build.
Not being a P4 expert, I had a tough time understanding what might be the
"right" way to branch the Jam source. It seemed to me as though most people
had simply checked in a copy of the Jam source without creating a true
branch. Guidance would be appreciated; I'd like to try to keep a copy of
Boost Jam where you can get at it.
That's true, but a community has a better chance to thrive if momentum and
development effort is concentrated behind a single version of the code. I'd
love to convince you that all of my changes are important, judicious, and
appropriate, but I'm ... realistic about it.
Date: Thu, 8 Nov 2001 12:18:06 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Jam at Perforce
Jam's badly enough documented as it is. Multiple mostly-compatible
versions mean that parts of the documentation will be wrong, in my
experience.
When some parts are wrong, no part is really trustworthy.
Date: Thu, 08 Nov 2001 12:41:26 +0000
From: Alan Burlison <Alan.Burlison@sun.com>
Subject: Some Jam questions and observations
I've now had a chance to look through the documentation for the various
versions of Jam. I'd love to hear the thoughts of people who have used Jam
for real and on large projects - some of my questions/observations below
are I'm sure based on my ignorance of how a certain thing can be done in
Jam - please make allowances!
First, a bit of context. I'm at the very, very early stages of looking for
a replacement for make, and Jam seemed to be one of the best candidates.
The system I'm looking at is very big - I've read the descriptions of other
large users of Jam, and this is bigger than any I've seen so far. I'm
therefore concerned primarily by scalability and manageability issues.
The weakest part of Jam seems to be the header file scanning. Scanning
many millions of lines of code every time it is run, and scanning some of it
many, many times on each run, is just not feasible on such a large source
tree - simply running a 'find . -type f > /dev/null' on my source base
takes 2 minutes, and a 'find . -type f | xargs cat > /dev/null' takes 9:30
minutes, and that's on a filesystem striped & mirrored across 24 disks.
I'm not exactly clear on how the header file scanning works in detail - if a.h
includes b.h, and then x.c and y.c both include a.h, does just a.h get
scanned, or do a.h + b.h get scanned? In either case, do the header
files get scanned twice, or is Jam clever enough to realise that when it
processes y.c it has already scanned them when processing x.c, so it can
just reuse the scan results?
Why doesn't Jam make use of the ability of many compilers to list the
header files as they are read? I appreciate that on the 'first pass' this
information won't be there, but in that case you are going to have to
recompile everything anyway. If this information was collected and used to
fill the header cache stuff added by Craig, couldn't the header scanning be
avoided? I'm thinking along the lines of a rule that turns this behaviour
on in the C compiler, and that can then parse the resulting file and squirt
it into the dependency graph. With our compilers, if you set the
environment variable SUNPRO_DEPENDENCIES to a filename the compiler will
write the dependency info in the familiar <target> : <dependencies> form.
Where no compiler support is available for this, the existing regexp scan
mechanism could be used instead.
I also can't find any way of setting variables based on the output of a
shell command - is this not possible? In our case, the variables
automatically set by Jam from the machine environment (OS, OSPLAT etc.)
aren't sufficient, and I can see no way of doing this. The sort of things
that we need are the user's uid, the date/time etc. Is this a deliberate
omission?
Another feature that seems to be missing is command dependency checking.
With the make that we currently use, the command-line used to build each
target is recorded along with the file dependencies of the target, and if
the command-line changes (e.g. because some external environment variable
has been modified), the target is rebuilt. This seems like it would be a
useful addition, and would not be difficult to slot into Craig's existing
caching mechanism.
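The command-signature scheme described above can be sketched independently of any make or jam engine (the signature directory and hashing are illustrative choices, not from the original post): record a digest of each target's command line, and treat the target as out of date whenever the digest differs from the recorded one.

```python
import hashlib
import os

def command_changed(target, command, sig_dir=".cmdsigs"):
    """Return True when `command` differs from the command line recorded
    for `target` on a previous run; records the new signature either way."""
    os.makedirs(sig_dir, exist_ok=True)
    # One signature file per target, named by a hash of the target name.
    sig_file = os.path.join(sig_dir, hashlib.md5(target.encode()).hexdigest())
    new_sig = hashlib.md5(command.encode()).hexdigest()
    old_sig = None
    if os.path.exists(sig_file):
        with open(sig_file) as f:
            old_sig = f.read()
    with open(sig_file, "w") as f:
        f.write(new_sig)
    return old_sig != new_sig
```

A build wrapper would consult this before deciding a target is up to date, forcing a rebuild when, say, an external environment variable changed the compile flags.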
I'm also concerned about migration - the existing Makefiles consist of 5k
files and a total of 260k LOC. Obviously migrating that in one fell swoop
is not practical. Has anyone had any experiences with this, for example
running a mixed make/jam environment during the transition?
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Jam at Perforce
Date: Thu, 8 Nov 2001 08:35:24 -0500
I agree. I'm trying to correct some of that by supplying supplemental
documentation, but it would be much better to have everything in one place.
Date: Thu, 8 Nov 2001 10:53:03 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Some Jam questions and observations
I believe that the original Jam scans each target. If a header is represented
in the dependency graph more than once via grist, it's scanned more than once.
In your example, if there is nothing fancy going on with grist, the header is
scanned only once. In a system the size you are mentioning, you'll probably
need grist to make target names unique, in which case each combination of
grist+file would be scanned.
With the cache in place, each physical file is only scanned once. The cache
uses the boundname of the header.
I can't speak for why Christopher made this decision, but I'm glad he did!
I would mention portability and simplicity as the small, obvious reasons.
The real reason would be that the results of the scan are needed before the
dependency graph can be run - they're needed before any of the compiles start
up. The header dependencies determine the order of the graph, which is
critical to the way jam works.
I don't know of a way to set a Jam variable to the result of a command
you run. We needed a similar thing though. What we do is combine some
perl/sh in our build, and create a series of files with the info we
need. On compile lines, if you need that info, either use something like:
CC -o foo -set_version `cat .version`
or create a header and include it. There are usually ways around it.
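The "create a file with the info and use it from the compile line" workaround can be sketched as a small generator script (the macro names, file name, and the Unix-only `id -un` command are illustrative assumptions): it rewrites the header only when the content actually changes, so targets depending on it aren't rebuilt needlessly.

```python
import os
import subprocess

def write_info_header(path, values):
    """Regenerate a header of #defines from a dict, rewriting the file
    only when its content changes. Returns True when the file was written."""
    body = "".join('#define %s "%s"\n' % (k, v) for k, v in sorted(values.items()))
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == body:
                return False  # unchanged: don't touch the timestamp
    with open(path, "w") as f:
        f.write(body)
    return True

# Gather values jam itself can't compute, e.g. the building user's name
# (Unix-specific; swap in the equivalent command on other platforms).
user = subprocess.run(["id", "-un"], capture_output=True, text=True).stdout.strip()
write_info_header("buildinfo.h", {"BUILD_USER": user})
```

The Jamfile would run this script early in the build and let sources simply `#include "buildinfo.h"`.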
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Some Jam questions and observations
Date: Thu, 8 Nov 2001 14:15:01 -0500
Right. And in a system that is really keeping track of #include paths the
way it should, that could result in a huge explosion of scanning. Why? It's ugly:
Take the Microsoft compiler. The rules for finding a file included as
#include "name"
are that you first look in the directory of the file in which the #include
appears, then you look in the directory of the file which included that
file, and so forth until you get to the source file being compiled. THEN you
go on to look at everything in the #include path given with -I.
The upshot is that SEARCH must potentially be set differently on a header
file for each combination of -I #include paths and each directory chain of
files which include it. The only way to accomplish that is by gristing the
header differently for each of these situations, resulting in different
logical targets.
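The quoted-include search order described above can be modelled with a short sketch (a simplification; the real compiler has more wrinkles, and the function name here is illustrative): try the directory of each file on the inclusion chain, innermost first, then the -I directories.

```python
import os

def resolve_quoted_include(name, includer_chain, include_path):
    """Resolve #include "name" by searching the directory of each file on
    the inclusion chain (innermost includer first), then the -I directories.
    Returns the first existing candidate path, or None."""
    dirs = [os.path.dirname(f) for f in includer_chain] + list(include_path)
    for d in dirs:
        candidate = os.path.join(d, name)
        if os.path.exists(candidate):
            return candidate
    return None
```

Because the result depends on the whole chain, two source files including the same header through different chains can legitimately bind it to different paths - which is exactly why SEARCH (and grist) would have to vary per chain.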
So, technically, with the cache it would be scanned twice if it was bound in
two different ways:
foo/baz.h vs. foo/bar/../baz.h
I'm sure it's still an enormous improvement.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Some Jam questions and observations
Date: Thu, 8 Nov 2001 11:56:54 -0800
May I make a suggestion for those who wish to use compiler-enabled
dependency analysis.
Have each Jamfile include a Jam.deps file.
Wrap the compiler command in a ksh or perl script; that script invokes the
compiler and then transforms the dependency output from the compiler into
DEPEND directives for Jam, updating the Jam.deps file.
Now, in your Jamrules, disable Jam's header file scanning.
The old dependencies (in Jam.deps) will always be
acceptable because either
a) the source and all headers have not changed, so the dependencies are accurate
b) the source has changed, so the dependencies don't matter
c) one or more headers have changed, but the dependencies from the source to
at least one of these headers will be accurate, because those files have
not changed.
(This is why Sun's make integration with the compiler actually works.)
The biggest problem with this approach is that the
DEPEND directives will cause Jam to try to build
a dependency even if the dependency is no longer relevant.
For example, you remove a header file and modify the
source to not require that file. The
DEPEND object : header will cause jam to complain that
it cannot find the header. A NOCARE directive will fix that,
but this will (might) break any generated header files.
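The transformation step might look roughly like this sketch (the output format is an assumption about how a generated Jam.deps would be written; jam's builtin is spelled DEPENDS, and NOCARE covers headers that later disappear): fold the compiler's backslash-continued `<target> : <deps>` lines, then emit one DEPENDS plus a NOCARE statement per target.

```python
import re

def deps_to_jam(makedeps_text):
    """Convert compiler-emitted '<target> : <deps>' dependency output
    (the form SUNPRO_DEPENDENCIES or cc -MD produce) into jam statements
    suitable for a generated Jam.deps file."""
    # Fold backslash-continued lines into one logical line each.
    text = re.sub(r"\\\n", " ", makedeps_text)
    statements = []
    for line in text.splitlines():
        if ":" not in line:
            continue
        target, _, deps = line.partition(":")
        headers = deps.split()
        if not headers:
            continue
        statements.append("DEPENDS %s : %s ;" % (target.strip(), " ".join(headers)))
        # NOCARE keeps jam from complaining when a recorded header is removed.
        statements.append("NOCARE %s ;" % " ".join(headers))
    return "\n".join(statements)
```

A wrapper script would run the compiler, feed its dependency file through this, and rewrite Jam.deps, which the Jamfile then includes.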
Date: Thu, 08 Nov 2001 23:10:42 +0000
From: Alan Burlison <Alan.Burlison@sun.com>
Subject: Re: Some Jam questions and observations
If you rebuilt the Jam.deps file every time you built the file, wouldn't
that remove the problem? I accept that the first time you rebuilt you
would have an unnecessary dependency, but the current header scanning is
prone to this anyway, as the documentation points out.
Date: Thu, 08 Nov 2001 23:24:45 +0000
From: Alan Burlison <Alan.Burlison@sun.com>
Subject: Re: Your Jam changes
Any chance of integrating it back into your perforce depot?
Date: Thu, 08 Nov 2001 23:40:39 +0000
From: Alan Burlison <Alan.Burlison@sun.com>
Subject: Re: Re: Your Jam changes
http://freetype.sourceforge.net/jam/index.html#where-ftjam:
o Through the Perforce public depot
The FreeType Jam sources are located in the directory named
//guest/david_turner/jam/src. They've been submitted to the Jam development
team, but will stay there until the changes are integrated back into the
main Jam sources (hopefully).
Date: Fri, 09 Nov 2001 01:20:43 +0000
From: Alan Burlison <Alan.Burlison@sun.com>
Subject: Re: Your Jam changes
As David set the challenge above I thought I'd rise to it ;-)
I've thrown together a merged version of:
1. FTJam 2.3.5 from //guest/david_turner/jam/src/...
2. Craig's mods from //guest/craig_mcpheeters/jam/src/...
and put the resulting hairball in
//guest/alan_burlison/jam/src
in the perforce repository at http://www.perforce.com. I do mean thrown -
I've made no attempt to pick and choose what goes in (mainly because I'm
not qualified to do so), and by-and-large I have just let Perforce merge as
it sees fit, so for example there is now a :T modifier and a :\ modifier
which both probably do the same thing.
However, the resulting wad does compile and appears to run, although I have
no way of really testing it properly, so I'm sure there will be a whole
crop of juicy bugs in it. I'm hoping that if nothing else it might hasten
the discussion of what the *real* merged version might include, and
demonstrate that it is not too daunting a task.
If nothing else I've learned a bit more about Perforce.
BTW, has anyone ever run Purify on Jam?
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Re: Your Jam changes
Date: Thu, 8 Nov 2001 21:01:24 -0500
I am a CVS clown, but a P4 idiot.
If it's a limitation for you that I'm not integrated at the depot, I can
figure out how to do it, but it would take a little investment.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Re: Your Jam changes
Date: Thu, 8 Nov 2001 21:08:04 -0500
Those aren't my changes. Boost Jam includes MANY enhancements beyond FTJam.
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Thu, 8 Nov 2001 22:16:41 -0500
Subject: Re: Your Jam changes
Rats! I have obviously not been clear enough. Here is how Richard Geiger
summarized the situation:
Boost.Build has heretofore worked with FTJam, but is evolving
and will, going forward, require the use of a still newer
Jam variant, "Boost Jam", which is based on FTJam.
The new features and bug fixes I've discussed are all only in the Boost Jam source base.
Thanks for taking the time to do this. Let me know if you'd like to try
again with my sources ;-)
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Fri, 9 Nov 2001 00:26:44 -0500
Subject: Re: Your Jam changes
Okay, to reduce confusion I've submitted Boost Jam back into the Perforce
Public Depot at //guest/david_abrahams/jam/src.
Documentation for new features is still not available at the Depot, but can be viewed at:
From: "Alan Burlison" <Alan.Burlison@sun.com>
Sent: Thursday, November 08, 2001 6:24 PM
Subject: Re: Your Jam changes
Is that the version you have just put back into the depot?
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Fri, 9 Nov 2001 07:35:53 -0500
Subject: Re: Your Jam changes
Somebody found a bug this morning; I've checked in a fix and regression
tests, both in Boost's SourceForge CVS and in the public depot in
//guest/david_abrahams/jam/src.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Some Jam questions and observations
Date: Tue, 13 Nov 2001 11:20:16 -0800
Because Jam currently scans (from scratch) on invocation,
it will never see an "old" include dependency.
Yes, rebuilding Jam.deps on each invocation would be required,
but this should be cheap enough, as only directories with
actual recompilations need to have their Jam.deps updated.
I have not been able to think of a way to do Sun Make's
command "dependency" stuff without actual support from the jam engine.
I would suppose that a Jam.deps type of solution might work
for Jam and Java as well!
Date: Tue, 13 Nov 2001 22:40:35 +0000
From: Alan Burlison <Alan.Burlison@sun.com>
Subject: Re: Some Jam questions and observations
Nothing is 'cheap' when it involves scanning several million LOC.
Date: Tue, 13 Nov 2001 21:46:13 -0500
From: Alex Nicolaou <alex@freedomintelligence.com>
Subject: Re: Some Jam questions and observations
For Java the compiler is supposed to do dependencies, but they
unfortunately discarded this feature around the time of 1.2. I recommend
jikes ++ $(find . -name '*.java')
or some variation, do you have a project where this approach doesn't
work well? Because this should produce instantaneous compiles for almost
any project, after the first which may take several seconds. Jam should
only be needed by whomever builds the release product, and even then we
have our jam rule invoke jikes for 100 java files at a time.
Date: Fri, 16 Nov 2001 11:17:34 -0500
From: "Thatcher Ulrich" <tulrich@oddworld.com>
Subject: workaround for embedded space problem on Windows
So I've been evaluating Jam as a build tool for my company, and
basically I like it. However, our project is Windows-based, and I ran
into the problem of jam not being able to spawn the compiler & tools
if the path name contains an embedded space character. Unfortunately
this is the default for MSVC, and I noticed from the archives of this
mailing list that I'm not the first to butt heads with this problem.
Anyway, the consensus workarounds from the list archives were:
1. don't use spaces in the paths of tools.
2. use the 8.3 aliases of the paths (e.g. "c:/Progra~1/" instead of
"c:/Program Files/").
Neither workaround is acceptable to me. The first for legitimate
political reasons; this is a Windows shop and I only have so much
leeway to inconvenience people, and the second because the 8.3 alias
is not reliable -- e.g. on my workstation, "c:/Progra~1/Micros~1"
aliases to "c:/Program Files/microsoft frontpage", which is not going
to find the compiler! It's a marginally OK, but cheesy, workaround
for a single workstation.
Anyway, the point of all this is that after perusing the Jam source
code, I discovered a very simple workaround that seems to actually
work, on my Win2K box at least. Put the following line in your Jamrules:
JAMSHELL = cmd /C % ;
and make sure your tools path has quotes around it when Jam spawns the
command line, for example these are the definitions from my sample
Xbox project (which uses a special version of the MSVC tools):
JAMSHELL = cmd /C % ;
XDK = "c:/Program Files/Microsoft Xbox SDK" ;
VISUALC = "$(XDK)/xbox/bin/vc7" ;
C++ = \"$(VISUALC)/cl.exe\" ;
LINK = \"$(XDK)/xbox/bin/vc7/link.exe\" ;
AR = \"$(XDK)/xbox/bin/vc7/lib.exe\" ;
Note the \" around the tool .exe's. I also found I needed to put \"
's around various include and lib paths in the command-line switches.
Hope this helps; I didn't come across this workaround in the list
archives. Apologies if it's in the FAQ or something. If it's not, it should be!
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: workaround for embedded space problem on Windows
Date: Fri, 16 Nov 2001 12:00:02 -0500
Ooh, very sneaky! You're getting Jam to delay expansion of the path until
it's already past the phase of reading environment and command-line
variables.
The version of Jam I'm working with for Boost.Build contains an extension
which interprets these variables differently if they are surrounded by
double quotes. The problem and solution are described here:
http://www.boost.org/tools/build/build_system.htm#variable_splitting
and here:
http://www.boost.org/tools/build/build_system.htm#variable_quoting
Date: Fri, 16 Nov 2001 12:36:04 -0500
From: "Thatcher Ulrich" <tulrich@oddworld.com>
Subject: Re: workaround for embedded space problem on Windows
I'm not actually clever enough to have done that on purpose... I was
just trying to exercise the other code path to see if it worked better.
Date: Sat, 17 Nov 2001 16:19:55 -0800 (PST)
From: cmcpheeters@aw.sgi.com (Craig McPheeters)
Subject: Fix applied to Jam in my guest area
There is an update to my guest area on the Perforce server. I found a
defect in my branch of Jam which I've now fixed.
The problem was in the dynamic command size extension. It wasn't properly
dealing with targets identified as piecemeal; now it does.
I've also incorporated a change which reverts the execcmd() function in
execunix.c to its 2.2 behaviour on NT. The 2.3 change was to create .bat
files for all targets. The problem I was running into is that there seem to
be limitations on how long lines can be in .bat files, and we were exceeding
that limit when invoking rc.exe via a .bat file. The 2.2 logic would create fewer
.bat files and works for us. This change may or may not make it into the
mainline, as the original change must have been made for a reason, and I've
reverted to the prior logic.
The list of extensions is again:
* header cache
* serialized output
* dynamic command size
* slightly improved warnings
* slash modifiers $(foo:/) $(foo:\\)
* optional headers dependencies (jam -p)
* dependency graph debug dump (jam -d+10)
* no care internal nodes
* NT batch file fix when running more than one jam on a machine
For details, see:
//guest/craig_mcpheeters/jam/...
From: "Axelsson, Andreas" <Andreas.Axelsson@dice.se>
Date: Thu, 22 Nov 2001 15:01:51 +0100
Subject: Wildcards?
As a beginner in Jam, I'd like to know if there are any ways of parsing
wildcard file specifiers and setting up the kind of more generic
pattern rules that one can do in GNU make or similar? I need to perform
various tasks on an unknown amount of files, and adding them to an explicit
target list just isn't a good option for me right now. I need something
like the setup below:
####################
rule BuildTile { Depends $(1) : $(FOLDER)/*.tile ; }
rule ConvertTile { Depends $(1) : $(FOLDER)/*.tga ; ConvertTile $(1) ; }
actions ConvertTile { MyConverter $(1) $(1:S=tile) }
ConvertTile $(FOLDER)/*.tile ;
BuildTile Tiles ;
From: "Khidkikar, Sanket" <skhidkikar@atsautomation.com>
Date: Mon, 26 Nov 2001 09:26:47 -0500
Subject: bootstrapping woes
I have problems bootstrapping jam on my QNX 6 system (with no yacc
installed).
Here is the story:
- I run make.
- I get to the point where jamgram.y is to be parsed to create
jamgram.c and jamgram.h (I already have these when I untar the sources).
- make complains that yacc is not found, and removes the jamgram.c and jamgram.h files.
- make complains that there is no jamgram.c (and .h) to create jamgram.o.
....skipping
- The archive cannot be made since jamgram.o is missing.
- The bootstrap is unsuccessful.
I tried to bypass this problem by tinkering with make1.c.
That allowed me to make the jam binary. But then, when I try to use it to
build the boost libraries I get "memory fault (core dumped)".
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: bootstrapping woes
Date: Mon, 26 Nov 2001 09:58:08 -0500
If you are building Boost.Jam, you can keep it from trying to run yacc by
setting YACC to "" in your environment. This might be tricky (i.e., you
might need to escape the quotes). On my Win32 system:
make YACC=\"\"
I'm not sure why jam might be dumping core, though.
You might be able to give me enough information to help me help you, though:
why not try running jam with -d+5 and sending me the output of that?
If you build jam with CCFLAGS and LINKFLAGS set appropriately for debug
symbols you could send me a stack backtrace, which would be /really/ helpful.
P.S. We are starting to post pre-built Jam executables at boost. Once we
figure out how to build Jam for your system, would you consider contributing an executable?
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: bootstrapping woes
Date: Mon, 26 Nov 2001 11:11:08 -0500
Note that if it already tried to run yacc and even marginally succeeded, you
might need to restore your source tree to a pristine state first, since yacc
will overwrite jamgram.* for most values of * ;-)
From: Mike Chen <Mike.Chen@corp.palm.com>
Date: Wed, 28 Nov 2001 13:23:07 -0800
Subject: colorizing output
I'm new to the list and I have a question that I hope one of you
experts can answer.
I'd like to colorize the console output in Win2000 so that errors and warnings
are more easily distinguishable from other status output. Ideally, I'd like
different colors for errors, warnings, and status (with maybe other types
in the future).
I was able to get part of the way there by changing execunix.c to
use SetConsoleTextAttribute() before spawning the command and then
restoring the old color after the command runs. However, this makes
all output from the executed program the same color. In order to
differentiate between errors and warnings, I guess I might have to
parse the output before it goes on screen so I can choose the right
color, though I'm not sure how I'd do that when the output is going to stderr.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: colorizing output
Date: Wed, 28 Nov 2001 18:18:06 -0500
I use emacs on Win2K which handles all of that for me. Also, the Jam
extensions I've implemented at Boost include file and line number output
from -d+5, so that you can use emacs as a kind of post-mortem debugger, to
step through Jam execution.
From: Ramon Lim <Ramon_Lim@Allegis.com>
Date: Tue, 27 Nov 2001 17:12:45 -0800
Subject: Does perforce stop execution chain if it encounters an error??
I have two targets that are dependent on one another. If the action behind
one of the rules fails, I want execution to stop. I noticed that in the
code below, although I have an error in the actions schemaValidation, I
still proceed to rulesValidation. Is there a way around this?
I run the command >> jam -t schemaValidation
NOTFILE schemaValidation ;
NOTFILE rulesValidation ;
NOTFILE build ;
# schema Validation
# validate schema and generate output file
actions schemaValidation {
//Error occurs here
testwork.js ;
}
rule schemaValidation {
Depends schemaValidation : $(1) ;
Depends $(1) : $(2) ;
}
# rulesValidation
# Compile Rules and generate output file
actions rulesValidation { }
rule rulesValidation {
Depends rulesValidation : $(1) ;
Depends $(1) : $(2) ;
Depends $(2) : $(3) ;
}
# executer by jam -t tester
rule build {
Depends all : schemaValidation ;
Depends all : rulesValidation ;
Depends $(rulesValidation) : $(schemaValidation) ;
schemaValidation $(1) : $(2) ;
rulesValidation $(3) : $(4) ;
}
build $(appPath)\\output\\schemaEntity.xml : $(appPath)\\schemaEntity.xml :
$(appPath)\\output\\rules.txt : $(appPath)\\rules.txt ;
From: "Wang, Jason N (USRL)" <Jason.N.Wang@am.sony.com>
Date: Tue, 27 Nov 2001 17:36:51 -0800
Subject: Looking for Jam to Makefile conversion utility
I guess I am repeating Kevin's 4-year-old question. Does there exist a
utility that can convert a Jamfile into a Makefile? What should I do if I
need to port a jam project to a platform that cannot host a jam environment?
From: Ramon Lim <Ramon_Lim@Allegis.com>
Date: Wed, 28 Nov 2001 12:02:39 -0800
Subject: Question about stopping dependency tree flow if an error occurs
NOTFILE schemaValidation ;
NOTFILE schemaCheck ;
NOTFILE rulesValidation ;
NOTFILE build ;
Depends rulesValidation : schemaCheck ;
Depends schemaValidation : rulesValidation ;
Depends all : schemaValidation ;
Depends all : rulesValidation ;
Depends all : styleSheetBuild ;
# SCHEMA VALIDATION
actions schemaCheck {
# An error occurs here
testwork.js ;
}
rule schemaCheck {
# Only run if the output version of file is less current than real file
Depends schemaCheck : $(1) ;
Depends $(1) : $(2) ;
}
actions schemaValidation { ECHO "ACTION: In the schema validation" ; }
rule schemaValidation { ECHO "RULE: Schema Validation " ; }
# RULES VALIDATION
actions rulesValidation { }
rule rulesValidation {
# Only run if the output version of file is less current than real file
Depends rulesValidation : $(1) ;
Depends $(1) : $(2) ;
}
rule build {
# make rules validation dependent on schemaValidation
schemaCheck $(1) : $(2) ;
rulesValidation $(3) : $(4) ;
}
build output\\schemaEntity.xml : schemaEntity.xml : output\\rules.txt : rules.txt ;
I am fairly new to jam and had a question. In the code above, I have the
rulesValidation target dependent on the schemaValidation target (each of
these targets only runs a certain action if a certain file is more current
than its output version). So basically, the correct flow should be
schemaValidation -> rulesValidation.
ISSUE 1: I am finding that if an error occurs in the action
schemaValidation, jam still continues to the target rulesValidation. Is there a
way to stop the dependency flow in this case?
Subject: RE: colorizing output
Date: Wed, 28 Nov 2001 18:16:16 -0800
From: "Chris Antos" <chrisant@windows.microsoft.com>
I did this a while ago. You're right -- it's more complicated than just
using SetConsoleTextAttribute.
I'll try to package up the change soon and post it here.
From: Mike Chen [mailto:Mike.Chen@corp.palm.com]
Sent: Wednesday, November 28, 2001 1:23 PM
Subject: colorizing output
I'm new to the list and I have a question that I hope one of you
experts can answer.
Date: Thu, 29 Nov 2001 23:45:50 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Does perforce stop execution chain if it encounters an error??
You can't have that. The dependency graph is an acyclic graph, and trying
to go against the basic design has very bad karma.
What you can do is make one of the things dependent on the other. That'll
achieve your goal, I guess... but it's a hack.
Depends rulesValidation : schemaValidation ;
Date: Thu, 29 Nov 2001 23:48:32 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Looking for Jam to Makefile conversion utility
You cannot port a jamfile to a makefile, in general.
What you can do is write a simple tool that runs "jam clean", runs
"jam whatever" and captures command execution, and finally writes a makefile:
whatever: <all files that weren't deleted by jam clean go here>
<the captured commands go here>
It's not much, but I guess it's sufficient for bootstrapping.
From: Bryan Branstetter <BBranstetter@s8.com>
Date: Thu, 29 Nov 2001 17:36:30 -0800
Subject: jam clean
I've got a question regarding 'jam clean': it looks to be defined
'piecemeal', as my Jambase shows:
actions piecemeal together existing Clean { $(RM) $(>) }
However, when I compile my entire tree and run 'jam clean' on my Win2k box, I get:
C:\src>jam clean
Compiler is Intel C/C++
...found 1 target...
...updating 1 target...
Clean clean
The following character string is too long:
-- cut --
I have searched through all of our Jam extensions, and we don't redefine the
actions on Clean anywhere, so I'm stumped. I thought the idea of piecemeal
was to break the commands into chunks which the OS could handle, correct?
Any guidance would be greatly appreciated.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: jam clean
Date: Thu, 29 Nov 2001 21:20:41 -0500
Probably MAXLINE is not set right for your machine. My experiments show
that Win2K has a correct MAXLINE of 2047.
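If the compiled-in default can't be trusted on a given platform, classic jam
also accepts a maxline modifier on an action to cap its generated command
length explicitly (the modifier name here is from my memory of jam's grammar;
check jam.y in your version before relying on it):

actions piecemeal together existing maxline 1023 Clean
{
    $(RM) $(>)
}

With a cap like that, piecemeal splits the $(>) list into chunks that keep
each expanded command under 1023 characters, regardless of MAXLINE.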
From: Ramon Lim <Ramon_Lim@Allegis.com>
Subject: RE: Does perforce stop execution chain if it encounters
Date: Fri, 30 Nov 2001 15:40:05 -0800
I am running the following code with the command : >> jam build
This should run schema validation and then rules validation. If schema
validation fails and/or outschema.txt is more up to date than
schema.txt, I don't want the rules validation to run. I only want the rules
validation to run if the schema validation executed successfully AND if
rule.txt is more up to date than outrule.txt. I am running into trouble
adding these two conditions for the rules validation to run.
Any help would be very useful.
# SCHEMA VALIDATION
actions schemaValidation {
# AN ERROR OCCURS HERE
}
rule schemaValidation {
Depends schemaValidation : $(<) ;
Depends $(<) : $(>[1]) ;
Depends $(>[1]) : $(>[2]) ;
}
# RULES VALIDATION
actions rulesValidation {
}
rule rulesValidation {
ECHO "RULE: In the rule validation" ;
# If I execute with the code statement below, rule v. indeed does not run
# if schema v. fails but it does not check the timestamp for outrule.txt and rule.txt
Depends rulesValidation : $(<) ;
# If I execute the code below instead, timestamp for outrule.txt and rule.txt is checked properly and rule v. called
# but the status of the success of schema v. is ignored.
Depends rulesValidation : $(<[2]) ;
Depends $(<[2]) : $(>) ;
#IS THERE ANY WAY FOR ME TO MERGE THE TWO????
}
local schemaValidation ruleValidation ;
schemaValidation = "schemaValidation" ;
ruleValidation = "ruleValidation" ;
NOTFILE schemaValidation ;
NOTFILE ruleValidation ;
Depends ruleValidation : schemaValidation ;
schemaValidation $(schemaValidation) : outschema.txt schema.txt ;
rulesValidation $(ruleValidation) outrule.txt : rule.txt ;
Depends build : schemaValidation rulesValidation ;
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Does perforce stop execution chain if it encounters
Date: Mon, 3 Dec 2001 12:49:09 -0800
Try something like ...
You forgot to make the rule validation depend on
the schema validation. Instead, you made the set
of all rule validations depend on the set of
all schema validations. The difference is that
the individual rule validations are independent of
the schema validations.
Jam will build all targets that can be built, pruning
a branch from the tree only when a dependency fails.
This means that jam can (or thinks it can) validate
all the rules. The fact that "validateRules" will
fail because "validateSchema" failed has no bearing on the
chances of any particular validate-rule invocation failing.
It did not help that you were not using $(<) as your
output or result file, and that you were giving one
of your NOTFILE infrastructure targets the same name
as one of your specific targets.
rule validateSchema {
local schema = $(<[1]) ;
local source = $(>) ;
Depends validateSchema : $(schema) ;
Depends $(schema) : $(source) ;
}
rule validateRule {
local rule = $(<[1]) ;
local source = $(>) ;
local schema = $(3) ;
Depends validateRules : $(rule) ;
Depends $(rule) : $(schema) ;
Depends $(rule) : $(source) ;
}
actions validateSchema {
# use command false to force a failure
echo "validated" > $(<[1])
# false
}
actions validateRule { echo "validated" > $(<[1]) }
Depends all : validateSchema validateRules ;
NOTFILE validateRules validateSchema ;
validateSchema SchemaA.val : SchemaA.src ;
validateRule RuleB.val : RuleB.src : SchemaA.val ;
validateRule RuleC.val : RuleC.src : SchemaA.val ;
From: "Goral, Jack" <Jack_Goral@NAI.com>
Date: Mon, 10 Dec 2001 12:22:41 -0600
Subject: If you had to start from scratch...
If you had to start your big(?) project from scratch, would you go with
'jam' again instead of makefiles?
Subject: Re: If you had to start from scratch...
From: Jose Vasconcellos <jvasco@bellatlantic.net>
Date: 11 Dec 2001 11:30:07 -0500
Yes, jam provides a simple and concise way to describe the dependencies
of your project. It has a clear and clean mechanism for separating the
dependency description from the construction rules and actions. And it's
cross platform and fast. What more do you want?
Date: Tue, 11 Dec 2001 11:59:03 -0500
From: "Thatcher Ulrich" <tulrich@oddworld.com>
Subject: Re: If you had to start from scratch...
Well, those are the advantages. It has drawbacks too. The question
is, do the advantages outweigh the drawbacks, vs other solutions? (I
won't volunteer an answer because I haven't used it enough yet.)
From: "Kimpton, Andrew" <awk@pulse3d.com>
Date: Tue, 11 Dec 2001 12:58:12 -0800
Subject: Building different objects from different sources with the same name
We use Jam to build our product(s) from a single hierarchical source tree.
We place the Binary output files into a separate hierarchy using LOCATE_TARGET.
Many of our 'components' have dependencies on each other so to minimise
build times we use SubDir and SubInclude - for example :
Component A (an executable) uses Component B (a shared library) which in
turn is built from some static libraries (C & D) and other sources.
Our Jamfiles have
A/Jamfile :
SubDir TOP A ;
if ! $(s) {
SubInclude TOP B ;
SubInclude TOP C ;
SubInclude TOP D ;
}
<Blah-blah>
LinkLibraries A$(SUFEXE) : B$(SUFLIB) ;
---
B/Jamfile :
SubDir TOP B ;
if ! $(s) {
SubInclude TOP C ;
SubInclude TOP D ;
}
<blah-blah>
LinkLibraries B$(SUFSHR) : C$(SUFLIB) D$(SUFLIB) ;
---
We use the ! $(s) test so that individual pieces can be built independently.
A problem has arisen, however, since components A and B both have a source file
named xyz.c: one in A/xyz.c, the other in B/xyz.c.
If only B is built, things work fine. However, if we build A and rely on
dependencies to trigger the build of B, building B$(SUFDLL) uses A/xyz.c, NOT
B/xyz.c. I've tried working with SEARCH_SOURCE, LOCATE_* et al., all to no
avail. Things are perhaps made even more complicated by the fact that 90% of
the time the builds are running on Windows (the other 10% is Unix - and I
don't want to break those builds 8-)
Subject: Re: Building different objects from different sources with the same name
Date: Tue, 11 Dec 2001 15:40:34 -0700
Usually grist takes care of things like this. A/xyz.c will be known
as <A>xyz.c and B/xyz.c will be known as <B>xyz.c.
For the runs that do not work, I'd just run jam with a high debugging
level and redirect its output to a file. jam -d7 spits out more than
enough. Then just search for xyz.c within jam's output to see what is
happening with it. The two xyz.c files should be differentiated with
grist, as I describe above.
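For reference, here's roughly what the grist machinery should be producing in
a setup like this one (a sketch assuming the stock Jambase SubDir and Objects
rules):

# In A/Jamfile -- SubDir sets SOURCE_GRIST to "A", so the object
# built from A/xyz.c gets the distinct target name <A>xyz$(SUFOBJ).
SubDir TOP A ;
Objects xyz.c ;

# In B/Jamfile -- SOURCE_GRIST becomes "B", so B/xyz.c builds the
# separate target <B>xyz$(SUFOBJ), even though the basenames match.
SubDir TOP B ;
Objects xyz.c ;

If the -d7 output shows both objects carrying the same grist, the SubDir
calls (or a hand-set SOURCE_GRIST) would be the place to look.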
Subject: Re: If you had to start from scratch...
Date: Tue, 11 Dec 2001 15:18:22 -0700
From: "Matt Armstrong" <matt+dated+1008541109.ba82d0@lickey.com>
The only criterion for the project you gave was that it was "big", so
it is a little hard to make the decision based solely on that.
Examples of where jam may be weak are:
- compiling Java source -- I have no direct knowledge here, but I
don't think Java dependencies can be expressed easily in Jam.
- bigger projects (>5000 files). Jam scans all source files every
time you run it looking for header files, and automatically adds
them to the dependency tree. This is nice for smaller projects,
but the scan time is prohibitive for larger ones. There are
patches that allow jam to cache this dependency information, and
I really wish one would be incorporated into the upstream jam distribution.
- projects where you distribute source to a lot of folks -- they
may not want to deal with an obscure build tool.
Examples of where I think jam is strong are:
- non-huge projects -- it is very fast
- projects requiring support for many different compilers on
different platforms -- jam's ability to separate describing the
dependency tree (rules) from how to build targets (actions)
really shines here.
- projects that need to be built on multiple platforms -- jam's
platform independence shines here.
But in general, jam is my first choice, even for larger projects.
Date: Wed, 12 Dec 2001 12:25:09 -0800 (PST)
Subject: Re: If you had to start from scratch...
Jam and Java aren't a great match, but it is doable. The main problem is
if you feed the Java source files through one at a time, because then
you're loading the silly JVM every time (read: sloooow).
The first build-process Jam was ever used for (literally -- it was the
build-process that Jam was used in conjunction with during its
development) was >12000 files, and it was still faster than the Make-based
process it was replacing.
If I have a choice, I always choose Jam over Make.
Subject: Re: If you had to start from scratch...
Date: Wed, 12 Dec 2001 14:53:36 -0600
From: "Gregg G. Wonderly" <gregg@skymaster.c2-tech.com>
The ultimate issue is that with Java there are no header files. The class
file replaces those, so if you don't have the class file, you have to provide
the source file to the compiler. Thus, from a clean slate, you compile
everything at once. From a partially dirty slate, you have to compile the
things that matter. Determining what matters is the issue. Change an
'interface' and you need to recompile all implementers, and their subclasses
(how do you know who the subclasses are, without foreknowledge?)!
From my perspective, you should always recompile all files for your production
build of anything that goes into a single jar, and then you need to recompile
anything that uses that jar next. Thus, there is a dependency tree based on
jar files, but never based on .class files...
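That jar-granularity scheme translates into Jam fairly directly. A sketch
(JAVAC, JARC, and the classes output directory are assumptions of mine, not
stock Jambase variables):

rule JavaJar
{
    # JavaJar foo.jar : Foo.java Bar.java ... ;
    # The jar depends on every source, so touching any .java
    # rebuilds the whole jar in one javac invocation.
    Depends $(<) : $(>) ;
    Depends jars : $(<) ;
}

actions JavaJar
{
    $(JAVAC) -d classes $(>)
    $(JARC) cf $(<) -C classes .
}

Downstream jars would then just get a Depends on the jars they use, keeping
the dependency tree at jar granularity as described above.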
From: Jose Vasconcellos <jvasco@bellatlantic.net>
Date: 12 Dec 2001 16:54:46 -0500
Subject: Jam with Java (Was: If you had to start from scratch...)
Java users interested in using jam may want to check out the
wonka project at http://wonka.acunia.com/ They have modified
jam to support java. Here's a link to the source:
http://wonka.acunia.com/acunia-jam.tar.gz
Subject: Re: If you had to start from scratch...
Date: Wed, 12 Dec 2001 16:38:22 -0700
Faster than make isn't necessarily fast enough, especially
with such an obvious improvement as header caching sitting right
there screaming "implement me! implement me!"
I just patched Jam for cached header dependencies on a project (using
Craig McPheeters' patches, with improvements that I'll post here eventually).
Our project is small by comparison to jam's original Sybase project --
the cached headers only number 3600. But people are reporting
speedups of the "jam finds nothing to compile" case on the order of
6-10x. Depending on hardware, the speedup ranges from a 100 to 11
second improvement down to a 32 to 6 second improvement. Considering
that some people are saving over a minute a compile, and they might
compile 10-20 times a day, this adds up to real $$$ over the course of
a project. And it saves a few impatient nerves as well.
Date: Thu, 13 Dec 2001 10:25:24 +0100
From: Chris Gray <chris.gray@acunia.com>
Subject: Re: If you had to start from scratch...
This is somewhat mitigated if you use Jikes. (BTW your OS shouldn't be
physically loading the JVM each time - but it still needs to re-initialise it.)
I'm not sure you need to go that far, but better safe than sorry. Certainly some
very puzzling things can happen if you don't recompile everything you should:
most notoriously, compile-time constants imported from some interface are
compiled into the byte code, so if you change them you need to recompile
every class which implements that interface.
As was already mentioned, Acunia has modified Jam for use with Java class
libraries. I'm not saying that our version already solves all these problems,
but it's probably a good place to start.
Date: Thu, 13 Dec 2001 11:17:42 +0100
From: Chris Gray <chris.gray@acunia.com>
Subject: Re: If you had to start from scratch...
Actually C doesn't have header files either. All C has is a
mechanism (#include) for incorporating arbitrary text from
another file. *By convention* we group related declarations
etc. into files which *by convention* have the extension .h,
and *by convention* we #include all the ones we need near
the start of each .c file. Many say that .h files should not
#include other .h files, and/or that .h files should only contain
declarations not definitions, but even these are not universally
agreed upon. Languages such as Java, Modula, Ada, these
days probably even COBOL, have much more explicit ways
to describe dependencies than C.
From: "Smith, Stephen" <stsmith@hrblock.com>
Date: Fri, 14 Dec 2001 14:56:16 -0600
Subject: Jam on OpenVMS
I just learned about Jam through Boost and have been
experimenting. I ran into a few issues using it on OpenVMS:
1) Objects compiled from C++ source must be linked with
CXXLINK. I hacked some code into the Main rule's
procedure to get around this:
if $(VMS) {
for suffix in $(>:S) {
switch $(suffix) {
case .cpp : LINK on $(<)$(SUFEXE) = cxxlink ;
case .cxx : LINK on $(<)$(SUFEXE) = cxxlink ;
}
}
}
I think it would make more sense for the default Jambase
to use variables C++LINK and C++LINKFLAGS, but I wasn't
that ambitious given that development of Jam seems to have stalled.
2) .cxx is the extension the OpenVMS C++ compiler (CXX) assumes
if passed a file with no extension. I imagine this has led
some OpenVMS users to use the .cxx extension on all their C++
files. It also seems like a common enough extension that Jam
should support it. I added another case statement to the
switch statement in the Object rule's procedure:
case .cxx : C++ $(<) : $(>) ;
3) GenFile doesn't work very well. On OpenVMS the actions
associated with GenFile are:
actions GenFile1 { mcr $(>[1]) $(<) $(>[2-]) }
Unfortunately, this only works if $(>[1]) contains a directory
specification. Otherwise mcr assumes a default file specification
of SYS$SYSTEM:.EXE.
I solved this issue by replacing the above actions with the
following:
actions GenFile1 {
image = "$" + f$parse( "$(>[1])" )
image $(<) $(>[2-])
}
4) I don't like the fact that GenFile passes the target name as the
first parameter. Doing that buys nothing significant and makes
the rule less useful in general. Is there another rule similar
to GenFile that I am not aware of?
5) Finally, I have a comment specific to the Boost version of Jam.
Boost Jam uses alloca(), which is not portable.
Date: Fri, 14 Dec 2001 16:35:58 -0500 (EST)
From: Janos Murvai <murvai@ncbi.nlm.nih.gov>
Subject: question about installation..
I tried to install the jam program. I used make.
It started to compile, but then:
ld: fatal: Symbol referencing errors. No output written to jam0
make: *** [jam0] Error 1
Exit 2
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Jam on OpenVMS
Date: Fri, 14 Dec 2001 17:04:48 -0500
The alloca call comes from bison, which is the only parser generator I have
on my machine. The best I can suggest is to regenerate the parser yourself
using yacc. The build process does this, but only after it has bootstrapped Jam0.
Hmm, I wonder if I can ship the perforce version of the parser files for
bootstrapping purposes...
I don't always pay attention to this list. Please post messages regarding
Date: Fri, 14 Dec 2001 17:21:04 -0500 (EST)
From: Janos Murvai <murvai@ncbi.nlm.nih.gov>
Subject: Re: Jam on OpenVMS
I have never used yacc. How can I do that?
From: "Richard Goodwin" <richardg@imageworks.com>
Date: Fri, 14 Dec 2001 15:01:03 -0800
Subject: Building variants of the same target
I am working on a system where I need to build variants of the same target.
EX: If I am building a libgraphics.a I need to build both debug and
optimized versions of this lib. Is there an easy way I can modify/extend
Jambase or Jamrules to do this without too much work.
Note: I would like to stay with stock jam (not use boost or another jam variant).
Subject: Re: Building variants of the same target
Date: Mon, 17 Dec 2001 09:07:41 -0700
If you don't mind running jam once to build the debug version and once
to build the release version it is pretty simple. See the Jamfile
that builds jam itself for examples of how it sets the LOCATE_TARGET
variable based on what os is being built. You'd do that and similar
things with CCFLAGS, etc.
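A sketch of that approach in Jamrules (DEBUG here is an assumed variable,
passed on the command line as jam -sDEBUG=1; ALL_LOCATE_TARGET is honored by
the SubDir rule in newer Jambases -- if yours predates it, set LOCATE_TARGET
in each Jamfile instead):

# Invoke as: jam -sDEBUG=1  (debug) or plain jam (optimized)
if $(DEBUG)
{
    OPTIM = -g ;
    ALL_LOCATE_TARGET = [ FDirName $(TOP) Debug ] ;
}
else
{
    OPTIM = -O ;
    ALL_LOCATE_TARGET = [ FDirName $(TOP) Release ] ;
}

Run jam twice -- once with -sDEBUG=1 and once without -- and the two variants
land in separate output trees without colliding.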
Date: Tue, 18 Dec 2001 12:47:49 -0800 (PST)
Subject: Re: Building variants of the same target
Perusing the mailing-list archive is a good way to look for answers to
this sort of question. For example:
Date: Tue, 18 Dec 2001 21:35:42 -0700
Subject: Jam fix for "on target" variables during header scan
I just submitted a bug fix for jam to the perforce public depot. See
the change provided below if you'd like to grab the fix before the
next release of jam.
In addition to what is outlined in the change description, I'll
describe how I came across this bug and why it has real world
implications.
In implementing Craig McPheeters' header caching in our copy of Jam, I
noticed a large number of object files, directories, and other "non
source" files in the header cache. Investigating this, I found that
somehow the global values of HDRSCAN and HDRRULE had been set.
It turns out that a few of the source files in our tree #include
themselves. When this happens, the target is present in both $(<) and
$(>), so when HdrRule propagates HDRSCAN and HDRRULE from
$(<) to $(>), it sets these "on target" values on $(<).
During the execution of HdrRule, the "on target" settings on $(<)
actually serve as a temporary storage area for the *global* values of
the settings (the "on target" and global values are swapped in
pushsettings() before the HdrRule is called). So any change on $(<)'s
"on target" settings will actually be swapped out into the global
values after the HdrRule finishes.
This is how HDRSCAN and HDRRULE got set globally, and every subsequent
target that jam considered was scanned for C-style #include lines, be
they object files, directories, or other binary files.
Fixing this bug reduced the number of files in the header cache from
1500 to 1100. This means that roughly 400 object files were not
header scanned (as is appropriate), so things go a little faster.
Change 1179 by matt_armstrong@... on 2001/12/18 20:07:50
Description of the bug this change fixes:
If a HDRRULE does this:
FOO on $(<) = a b c ;
Then after the HDRRULE finishes, the *global* variable FOO
will be set, and the "on target" variable FOO on $(<) won't be
changed at all.
This can occur with the default Jambase's HdrRule if a file
#includes itself.
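A comment-sketch of the failing sequence, as I understand it (the HdrRule
lines are paraphrased from the stock Jambase):

# foo.c contains the line:   #include "foo.c"
# so HdrRule gets called with foo.c as both $(<) and (via $(>))
# a member of the header list $(s). The stock HdrRule then does,
# roughly:
HDRSCAN on $(s) = $(HDRSCAN) ;
HDRRULE on $(s) = $(HDRRULE) ;
# Since $(<) is among $(s), those writes land on $(<)'s "on target"
# slots -- which, during the rule, hold the swapped-out *global*
# values -- so the globals come back polluted when popsettings() runs.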
Affected files ...
... //guest/matt_armstrong/jam/hdrscan_on_target_fix/make.c#2 edit
... //guest/matt_armstrong/jam/hdrscan_on_target_fix/rules.c#2 edit
... //guest/matt_armstrong/jam/hdrscan_on_target_fix/rules.h#2 edit
Differences ...
==== //guest/matt_armstrong/jam/hdrscan_on_target_fix/make.c#2 (text) ====
158a159
189c190,194
< pushsettings( t->settings );
---
214c219,220
< popsettings( t->settings );
---
==== //guest/matt_armstrong/jam/hdrscan_on_target_fix/rules.c#2 (text) ====
27d26
< * usesettings() - set all target specific variables
29a29
239a240,254
==== //guest/matt_armstrong/jam/hdrscan_on_target_fix/rules.h#2 (text) ====
170a171
Date: Tue, 18 Dec 2001 23:16:11 -0700
Subject: Improved Header Scanning for Jam
Inspired by David Abrahams' BINDRULE extension in Boost Jam and Diane
Holt's SCANFILE extension in
I came up with a synthesis of the two that I think is the most
"jamlike" (whatever that means).
I have to give 90% of the credit to Diane's 1999 implementation.
The only change this has over it is the communication of the header's
bound name in a new 3rd argument to HDRRULE (instead of through a
global variable).
As Diane's tests suggested, this finds a few stray headers that
the stock jam's header search algorithm misses. I wonder if this
could make it into the stock distribution? It cannot produce
incorrect results, as it simply finds *more* headers than before.
The change is in two parts:
The HDRRULE is called with a new 3rd argument -- the bound name of $(<):
==== headers.c ====
if( lol_get( &lol, 1 ) )
+ {
+ /* The third argument to HDRRULE is the bound name of
+ * $(<) */
+ lol_add( &lol, list_new( L0, t->boundname ) );
evaluate_rule( hdrrule->string, &lol );
+ }
The default HdrRule is changed to add the directory the header was
found in to HDRSEARCH if it is not already there:
==== Jambase ====
rule HdrRule
{
- # HdrRule source : headers ;
+ # HdrRule source : headers : bound name of $(<) ;
# N.B. This rule is called during binding, potentially after
# the fate of many targets has been determined, and must be
...
INCLUDES $(<) : $(s) ;
+
+ # If the directory holding this header isn't in HDRSEARCH,
+ # add it.
+ if ! $(3:D) in $(HDRSEARCH)
+ {
+ HDRSEARCH += $(3:D) ;
+ }
+
SEARCH on $(s) = $(HDRSEARCH) ;
NOCARE $(s) ;
Sent: Wednesday, December 19, 2001 1:16 AM
Subject: Improved Header Scanning for Jam
FWIW, this approach is somewhat more limited in its flexibility. For one
thing, I use BINDRULE to detect the location of Jam files brought in with
"include". Since included files use SEARCH, I can, for example, keep a stock
Jamrules file at the end of the SEARCH path for includes, so there will be
no error if a user doesn't supply Jamrules... and I can detect whether it
was their Jamrules file or mine that was found.
Date: Thu, 20 Dec 2001 13:30:36 -0700
Subject: Replacement command shell for Win32?
WinNT and Win2k can pass command lines up to 10240 bytes big, but
there are cases where cmd.exe barfs with a buffer that long. So
bumping the MAXLINE setting in jam.h can lead to problems.
E.g. cmd.exe's own del and echo commands can't handle anything longer
than about 1k.
Does anybody know of a nice little command shell for Win32 that could
do the job? Preferably it'd be free and come with source. I'm
thinking of a native port of one of the free Unix Korn or Bourne
shells, or maybe something more Windows-centric.
From: "Jerjiss, Allen" <Allen_Jerjiss@intuit.com>
Subject: RE: Replacement command shell for Win32?
Date: Thu, 20 Dec 2001 12:46:14 -0800
Give Cygwin a try at www.cygwin.com.
Build & Release Engineer, Quicken.com
Subject: RE: Replacement command shell for Win32?
Date: Thu, 20 Dec 2001 13:51:24 -0700
From: "Mike Steed" <msteed@altiris.com>
I use the Win32 port of zsh:
ftp://ftp.blarg.net/users/amol/zsh
Amol no longer works on zsh, but he does maintain the Win32 port of tcsh:
ftp://ftp.blarg.net/users/amol/tcsh
According to the docs, these shells can handle command lines up to 64
KBytes, but I haven't verified this.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Replacement command shell for Win32?
Date: Fri, 21 Dec 2001 19:48:55 -0500
Note also that in many cases (e.g. when your command is multiple lines), the
command goes through a .bat file, which on Win2K has a maximum line length
of 2047 characters.
Date: Sat, 22 Dec 2001 17:19:44 +1100
From: Darrin Smart <darrin@suresoftware.com>
Subject: Multiple Jambase files
I've just started using Jam in an attempt to replace a very large
set of recursive makefiles.
Our project contains lots of deeply nested directories, basically
in two branches. One branch is called "tools" and is a bunch of
development tools built on the local machine. The other is called
"src" which is cross-compiled using some of the programs in tools
and some system-installed programs as well.
project/tools/...
project/src/...
The tools build with gcc and that is all working just fine with the
default Jambase file.
I am trying to get the src tree to build with a cross compiler
called xcc. xcc has a completely different set of flags from gcc.
gcc -c -o project/tools/eg/eg.o -O3 -I/some/hdr/dir
project/tools/eg/eg.c
but project/src/eg2/eg2.c needs:
xcc project/src/eg2/eg2.c -eas -v=/another/hdr/dir
=fd=project/src/eg2/eg2.r
I think what I want to do is have a new Jambase file that defines
rules and actions that only apply within the src subtree. However I
still want the build system to be rooted at the top-level
("project") so a single jam command can build the tools and then
the sources.
Is it possible to do this? I tried making project/tools/Jambase and
project/src/Jambase and using SubDir to set it up but it seems that
the last one parsed is used globally.
On a related note, it would be really cool if more example
Jambase/Jamfile setups were published as part of the documentation.
Date: Sat, 22 Dec 2001 17:53:11 +1100
Subject: Re: Multiple Jambase files
From: Darrin Smart <darrin@suresoftware.com>
Sorry, substitute Jamrules for Jambase in the text below.
From: "Andreas Haberstroh" <andreas@ibusy.com>
Date: Sun, 23 Dec 2001 16:07:41 -0800
Subject: MSVC Libraries
I stumbled upon JAM by accident and thought, this is just what I
needed. But after a few days of working with it, I've discovered a
little annoyance with Microsoft's LIB.EXE program.
It does not have a response file format.
Now, the trick is, how do you feed individual files to it via JAM?
Has anyone attempted this?
I'm currently trying to create little libraries and LIB them together
as a temporary workaround, but that is proving difficult as well,
since I haven't mastered the JAM syntax yet.
Subject: Re: Multiple Jambase files
Date: Mon, 24 Dec 2001 21:10:40 -0700
You can't do that with the default Jambase's Cc action, since it hard
codes some of the command line flags.
Yes, but it's non-trivial.
Yes, for any rule named X there is one and only one "actions X" that
will work. In this case, you're probably calling the rule Cc for both
the gcc and xcc trees. Call them different names and you can use them
both in the same Jam run. Then the problem becomes getting the built-in
Objects rule to call xcc_Cc when appropriate. I don't have any
direct experience getting Jam to build with two different compilers in
the same run, so I won't be much help.
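To make the renaming idea concrete, the rule/action pair for the cross
compiler might look like this sketch (the xcc flags are copied from Darrin's
example; XCC_HDRS is an assumed variable standing in for the header
directory):

rule xcc_Cc
{
    Depends $(<) : $(>) ;
}

actions xcc_Cc
{
    xcc $(>) -eas -v=$(XCC_HDRS) =fd=$(<)
}

The remaining (harder) step is teaching the Objects/Object machinery to
dispatch to xcc_Cc for files under the src tree and to the normal Cc
elsewhere.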
Subject: Re: MSVC Libraries
Date: Mon, 24 Dec 2001 21:16:06 -0700
What exactly is the problem you are having? I.e. why do you want to
feed it files one at a time?
Anyway, you can try sticking this in your Jamrules file:
if $(NT) && $(MSVCNT) {
    actions updated Archive {
        if exist $(<) set _$(<:B)_=$(<)
        $(AR) /out:$(<) %_$(<:B)_% $(>)
    }
}
This removes the "together piecemeal" portion of the Archive actions
found in the default Jambase.
Date: Thu, 27 Dec 2001 17:17:34 -0800 (PST)
From: cmcpheeters@aw.sgi.com (Craig McPheeters)
Subject: Changes in my Jam guest branch
I've made several changes to the branch of jam in my guest branch. If you have
a copy, you may want to update it. There are no changes to the header
cache code. I've added a few new extensions and modified some of the earlier
ones slightly.
An earlier change in my branch was to revert a 2.3 change in execunix() back
to its 2.2 behaviour. The change was on NT to always invoke an action which has
a special shell through a temporary file. I had reverted this change, but have
now un-reverted it, and in fact now the change applies to unix as well as NT.
One of the other changes in the branch is to allow action blocks of arbitrary
size to be generated on Unix or NT. Previously, action blocks larger than
execvp() could handle would fail on some flavours of Unix. With this new change,
since all actions which have special shells are invoked through an intermediate
file, such blocks work in the same manner as on NT.
Additional changes are:
* support for % complete as jam executes
* a new debug mode, -d+11, which outputs information on a node's fate changes.
There were cases when a node was being rebuilt, but the reason wasn't
obvious. With this debug level, you can almost always figure out why
nodes are being updated
* created some new jam variables to describe job (-jn) information. The
variables are:
JAM_NUM_JOBS - the integer given in -jn
JAM_MULTI_JOBS - unset if JAM_NUM_JOBS < 2, set otherwise
JAM_JOB_LIST - a list of the job slot values (i.e., for -j4, it's: 0 1 2 3)
These can be used in a variety of ways in supporting multi-job builds
* Added job slot expansion to the action blocks, the sequence !! in an action
block is replaced by the job slot the action is running in. This can be
used, for example, in the generation of unique temporary file or
directory names. Actions such as yacc could use this as they may have
fixed temporary file names (which could go into a job-slot unique directory)
* Improved the -d+10 debug level, the dependency graph. It now shows the
flags on the node as well (NOUPDATE, NOTFILE, etc.)
* Only issue the '...skipped' messages for nodes which would have been built
This fixes a problem where the percent done may go beyond 100% if there
are many targets skipped
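As a sketch of how the job-slot expansion might be used (this relies on the !! extension described above, which exists only in this branch; the file names and the MungeFile rule are invented):

```jam
# Sketch only: !! is the job-slot expansion from this branch, not
# stock Jam.  Each job slot gets its own temporary file name, so
# parallel (-jN) runs cannot clobber each other's scratch files.
actions MungeFile {
    sed -e s/foo/bar/ $(>) > /tmp/munge.!!
    mv /tmp/munge.!! $(<)
}
```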
Date: Sat, 29 Dec 2001 15:35:20 -0800 (PST)
From: cmcpheeters@aw.sgi.com (Craig McPheeters)
Subject: Re: Replacement command shell for Win32?
Sorry to revive an old thread.
Bat files on NT can grow to be really large without problem, as long as
each of the lines in it are not longer than the line length limitation.
2047 characters or so sounds about right.
I hate to give the wrong attribution to it... I think Diane explained it
first? Look through the archives for the original posting.
The trick is to create two jam variables:
SPACE = " " ;
NEWLINE = "
" ;
Because of the way Jam does its variable expansion, you can use the
expansion of these variables in creative ways. If you need an action to
record all of the $(>) files into another file, the easy way is:
echo $(>) >> $(<)
but that can violate the line length limitation on NT. The trick is to
do this instead:
echo$(SPACE)$(>)$(SPACE)>>$(SPACE)$(<)$(NEWLINE)
It took me a little while to understand its beauty, but now whenever I
use that trick, I get a little smile on my face. Thanks to the original
poster for showing it to me.
There is an extension in my guest branch which removes the limitation on
the size of an action block. With that extension, and the trick above, it's
possible to safely create really large action blocks on NT (or Unix) and
have them always work. Often that trick is enough to get around the
limitations of cmd.exe.
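Put together, an action using the trick from the post might look like this (a sketch; the ResponseFile rule name is invented):

```jam
SPACE = " " ;
NEWLINE = "
" ;

# Because Jam expands a token as the product of its variable parts,
# the single line below becomes one short "echo file >> target"
# command per element of $(>) -- each line well under cmd.exe's limit.
actions together ResponseFile {
    echo$(SPACE)$(>)$(SPACE)>>$(SPACE)$(<)$(NEWLINE)
}
```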
Date: Sat, 29 Dec 2001 18:47:46 -0500
Subject: Re: Replacement command shell for Win32?
From: David Abrahams <david.abrahams@rcn.com>
I don't think you need to resort to tricks. I do the same thing with a
piecemeal action that builds the response file. Actually, there are two
actions: the first erases the response file if it already exists, and
the second one appends all the filenames. The relevant code is below. It
makes use of two of my Jam extensions (named arguments and indirect rule
invocation), but they are irrelevant to the basic technique. I know the
code looks bigger than what Craig posted, but keep in mind that he's
only summarizing.
# build TARGETS from SOURCES using a command-file, where RULE-NAME is
# used to generate the build instructions from the command-file to
# TARGETS
rule with-command-file ( rule-name targets * : sources * ) {
    # create a command-file target and place it where the first target
    # will be built
    local command-file = $(<[2]:S=.CMD) ;
    LOCATE on $(command-file) = $(gLOCATE($(targets[1]))) ;
    build-command-file $(command-file) : $(sources) ;

    # Build the targets from the command-file instead of the sources
    Depends $(targets) : $(command-file) ;
    local result = [ $(rule-name) $(targets) : $(command-file) ] ;

    # clean up afterwards
    remove-command-file $(targets) : $(command-file) ;
    return $(result) ;
}

# Used to build command files from a list of sources.
rule build-command-file ( command : sources * ) {
    Depends $(command) : $(sources) ;

    # First empty the file
    command-file-clear $(command) : $(sources) ;

    # Then fill it up piecemeal
    command-file-dump $(command) : $(sources) ;
}

# command-file-clear: silently remove the target if it exists
if $(NT) {
    # NT needs special handling because DEL always barks if the file isn't found
    actions quietly command-file-clear {
        IF EXIST "$(<)" $(RM) "$(<)"
    }
} else {
    actions quietly command-file-clear { $(RM) "$(<)" }
}

# command-file-dump: dump the source paths into the target
actions quietly piecemeal command-file-dump { echo "$(>)" >> "$(<)" }

# Clean up the temporary COMMAND-FILE used to build TARGETS.
rule remove-command-file ( targets + : command-file ) {
    TEMPORARY $(command-file) ;
    Clean clean : $(command-file) ;  # Mark the file for removal via clean
}
actions quietly piecemeal together remove-command-file { $(RM) $(>) }
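With those rules in place, usage might look like the following (hypothetical: it assumes a Link rule/actions that accepts a command file in place of the sources, plus the named-argument and indirect-invocation extensions mentioned above; the target names are invented):

```jam
# Hypothetical usage; Link, app.exe, and the .obj names are invented.
# The .obj paths end up in app.CMD, which Link then consumes.
with-command-file Link app.exe : a.obj b.obj c.obj ;
```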
Date: Sun, 30 Dec 2001 14:54:11 +1100
Subject: Re: Multiple Jambase files
From: Darrin Smart <darrin@suresoftware.com>
It seems like a bit of a shortcoming in Jam, particularly for very
large and complex projects.
One solution might be to make the file name matching be based on
regular expressions instead of file suffixes. Then I could override
the Object rule to select CC or xcc_Cc as you said.
Another example of the same problem I encountered is that some of
our .y files only work with yacc and some only work with bison (I
know, that's not good, but the point of the exercise is to replace
make, not fix up all the code!)
Date: Mon, 31 Dec 2001 09:25:57 +0100
From: Toon Knapen <toon.knapen@si-lab.org>
Subject: Re: Multiple Jambase files
Boost.Jam is able to use different compilers at the same time.
Actually, the 'trick' is identical to what one would do in 'make' :
create a small jamfile for every specific compiler, then include one of
these in a jam run according to some (global) variable which identifies
the compiler that should be used. (Correct me if I'm wrong David)
Actually, Boost.Jam is able to use multiple compilers at once, so the
trick is a little more subtle.
Take a look at the Boost.Jam code as it is used in the
Boost (www.boost.org) project:
cvs -d:pserver:anonymous@cvs.boost.sourceforge.net:/cvsroot/boost login
cvs -z3 -d:pserver:anonymous@cvs.boost.sourceforge.net:/cvsroot/boost
checkout boost
cvs -d:pserver:anonymous@cvs.boost.sourceforge.net:/cvsroot/boost logout
more specifically, look in the tools/build subdir to see all compiler
specific jamfiles, look in tools/build/jam_src for the source code of boost.jam.
Date: Tue, 1 Jan 2002 13:33:05 -0800 (PST)
Subject: Re: Multiple Jambase files
Is it just the compiling that needs to be done using 'xcc', or do you need
to use that for the link as well? If the former, it's pretty
straightforward to do what you want -- just add a switch in the case for
.c in the Object rule (in Jambase) to use the Cc rule/actions for .c files
under tools and an Xcc rule/actions (defined in your Jamrules) for all others.
If you also need to use 'xcc' to do links, then you'll need to add more
stuff (obviously :) -- but it should still be doable.
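A minimal sketch of that switch change (the Xcc rule and the tools-directory test are assumptions to be adapted to your tree):

```jam
# In Jambase's Object rule, replace the .c case with a dispatch on
# location.  The "^(tools)" test against $(SUBDIR) is illustrative;
# Xcc is a rule/actions pair you would define in Jamrules.
case .c :
    if [ MATCH ^(tools) : $(SUBDIR) ] {
        Cc $(<) : $(>) ;
    } else {
        Xcc $(<) : $(>) ;
    }
```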
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Changes in my Jam guest branch
Date: Wed, 2 Jan 2002 11:48:18 -0500
Looking at this nefarious technique again, I see that it has certain
advantages over what I've been doing. For one thing, in my scheme, if the
link fails the response file is never removed. If I've forgotten a library,
the response file is wrong and needs to be rebuilt, but according to the
dates, it appears to be up-to-date. The downside of your scheme is that it's
difficult to factor out the common logic for creating the response files
from my many toolset definitions, but I don't think that's serious enough to
avoid using it. In fact, I expect that it won't be an issue in an upcoming
revision of the build system.
A question: why the explicit $(TOUCH)? Won't the redirected echo update the
modification date? And why does the mod. date matter, anyway? The response
file never appears in the dependency graph.
Date: Thu, 3 Jan 2002 13:46:22 +0100
From: "BROSSIER Florent" <F.BROSSIER@csee-transport.fr>
Subject: Dependency with files with the same name in different directory
Let's suppose I have the following files and directories.
- At the root:
    Jamrules:
        HDRS = $(TOPDIR)$(SLASH)Other ;
    Jamfile:
        SubDir TOPDIR ;
        SubInclude TOPDIR Test ;
- In the directory Other:
    foo.hpp:
        #error
- In the directory Test:
    Jamfile:
        SubDir TOPDIR Test ;
        Main Test.exe : main.cpp ;
    main.cpp:
        #include "foo.hpp"
        int main(int argc, char** argv) { return 0; }
    foo.hpp:
        // Empty
Now the problem:
When I start Jam, the compilation is OK (Test/foo.hpp was included!).
If I modify Test/foo.hpp and start Jam again, nothing is done.
If I modify Other/foo.hpp and start Jam again, the executable is rebuilt.
Is this a bug in Jam?
What can I do to solve this problem?
It seems that the problem comes from the HDRS variable, which is used by
Jam to scan for include files.
The current path is not included in the HDRS variable, but it is
automatically added to the list of include paths for the compilation,
before any others.
PS: I use Jam version 2.3 with no modifications.
Subject: Re: Dependency with files with the same name in different directory
Date: Thu, 03 Jan 2002 09:38:07 -0700
Yes, I think there is a bug in Jambase here.
The Object rule in Jambase sets HDRS on targets to:
$(SEARCH_SOURCE) $(HDRS) $(SUBDIRHDRS)
But it sets HDRSEARCH to:
$(HDRS) $(SUBDIRHDRS) $(SEARCH_SOURCE) $(STDHDRS)
I think the bug is that the two do not specify the same order. So the
compiler will search with one order, and Jam another.
HDRSEARCH should probably be:
$(SEARCH_SOURCE) $(HDRS) $(SUBDIRHDRS) $(STDHDRS)
If you change the Object rule in Jambase to set HDRSEARCH the same way
it sets HDRS, Jam finds the correct foo.hpp file. (put the Object
rule at the end of this message in your Jambase).
But please take notice: having header files with the same name in
multiple directories is also dangerous. In this case, you should use
something called "header grist". The safest way to do this is to put
this after every call to SubDir:
HDRGRIST = $(SOURCE_GRIST) ;
This way, each sub directory can find a different foo.hpp. Without
this change, Jam will find one foo.hpp and stop there. You will also
want to put this rule in your Jamrules:
rule FGristSourceFiles { return [ FGristFiles $(<) ] ; }
rule Object {
    local h ;

    # locate object and search for source, if wanted
    Clean clean : $(<) ;
    MakeLocate $(<) : $(LOCATE_TARGET) ;
    SEARCH on $(>) = $(SEARCH_SOURCE) ;

    # Save HDRS for -I$(HDRS) on compile.
    # We shouldn't need -I$(SEARCH_SOURCE) as cc can find headers
    # in the .c file's directory, but generated .c files (from
    # yacc, lex, etc) are located in $(LOCATE_TARGET), possibly
    # different from $(SEARCH_SOURCE).
    HDRS on $(<) = $(SEARCH_SOURCE) $(HDRS) $(SUBDIRHDRS) ;

    # handle #includes for source: Jam scans for headers with
    # the regexp pattern $(HDRSCAN) and then invokes $(HDRRULE)
    # with the scanned file as the target and the found headers
    # as the sources. HDRSEARCH is the value of SEARCH used for
    # the found header files. Finally, if jam must deal with
    # header files of the same name in different directories,
    # they can be distinguished with HDRGRIST.

    # $(h) is where cc first looks for #include "foo.h" files.
    # If the source file is in a distant directory, look there.
    # Else, look in "" (the current directory).
    if $(SEARCH_SOURCE) { h = $(SEARCH_SOURCE) ; }
    else { h = "" ; }

    HDRRULE on $(>) = HdrRule ;
    HDRSCAN on $(>) = $(HDRPATTERN) ;
    #HDRSEARCH on $(>) = $(HDRS) $(SUBDIRHDRS) $(h) $(STDHDRS) ;
    HDRSEARCH on $(>) = $(h) $(HDRS) $(SUBDIRHDRS) $(STDHDRS) ;
    HDRGRIST on $(>) = $(HDRGRIST) ;

    # if source is not .c, generate .c with specific rule
    switch $(>:S) {
        case .asm : As $(<) : $(>) ;
        case .c :   Cc $(<) : $(>) ;
        case .C :   C++ $(<) : $(>) ;
        case .cc :  C++ $(<) : $(>) ;
        case .cpp : C++ $(<) : $(>) ;
        case .f :   Fortran $(<) : $(>) ;
        case .l :   Cc $(<) : $(<:S=.c) ;
                    Lex $(<:S=.c) : $(>) ;
        case .s :   As $(<) : $(>) ;
        case .y :   Cc $(<) : $(<:S=.c) ;
                    Yacc $(<:S=.c) : $(>) ;
        case * :    UserObject $(<) : $(>) ;
    }
}
Date: Thu, 03 Jan 2002 12:01:55 -0700
Subject: Improved Header Scan Cache for Jam
I just submitted code to //guest/matt_armstrong/jam/hdrscan_cache that
implements a header scan cache for Jam.
This code is an incremental improvement over Craig McPheeters'
original version in //guest/craig_mcpheeters/jam/src/. I've talked
with Craig and he plans to roll most or all of my changes into his version.
I have even higher hopes -- I'd like to see it make it into stock Jam.
Rationale:
- A header scan cache can improve things when HDRGRIST is in use.
For example, with stock Jam, if you always set HDRGRIST to
$(SOURCE_GRIST), standard headers such as /usr/include/stdio.h
will get scanned once for each SubDir. With the header scan
cache, common headers will be scanned only once.
This makes it practical to always use HDRGRIST. This means that
the stock Jambase can support multiple header files of the same
name. I think this rectifies a frequently encountered weakness
in Jam.
It is important to point out that you get this benefit
regardless of whether the cache is saved to disk.
- The header scan cache is persistent across runs of Jam only if
the user wants it (controlled via the HCACHEFILE variable). So
by default Jam will not sprinkle cache files all over the source
tree, and it is possible to use LOCATE to put the persistent
copy of the cache in, e.g., a build output directory.
Storing the header cache on disk can bring real benefits. On
the medium-sized project I use jam for, it seems to speed jam
startup (time to first build action) by a factor of 6. People
are happy to wait 15 seconds instead of 90.
It is important to point out that about half of this speedup
occurs even if the cache is not persistent, since our project
makes heavy use of HDRGRIST to correctly find all the header
files in the project.
- The cache is implemented in such a way that it can never change
the semantics of what Jam does. The call to a target's HDRRULE
will be identical with or without the cache code.
Here is the text of the README.header_scan_cache that is part of the submit.
This change implements a header scan cache in a form that
(cross fingers) can be incorporated into the stock version of Jam.
This code is taken from //guest/craig_mcpheeters/jam/src/ on
the Perforce public depot. Many thanks to Craig McPheeters. The
changes are wrapped in the OPT_HEADER_CACHE_EXT #define within the code.
Jam has a facility to scan source files for other files they
might include. This code implements a cache of these scans,
so the entire source tree need not be scanned each time jam is
run. This brings the following benefits:
- If a file would otherwise be scanned multiple times in a
single jam run (because the same file is represented by
multiple targets, perhaps each with a different grist),
it will now be scanned only once. In this way, things
are faster even if the cache file is not present when
Jam is run.
- If a cache entry is present in the cache file when Jam
starts, and the file has not changed since the last time
it was scanned, Jam will not bother to re-scan it. This
markedly improves Jam startup time for large projects.
This code has improvements over Craig McPheeters' original
version. I've described all of these changes to Craig and he
intends to incorporate them back into his version. The
changes are:
- The actual name of the cache file is controlled by the
HCACHEFILE Jam variable. If HCACHEFILE is left unset
(the default), reading and writing of a cache file is
not performed. The cache is always used internally
regardless of HCACHEFILE, which helps when HDRGRIST
causes the same file to be scanned multiple times.
Setting LOCATE and SEARCH on the HCACHEFILE works as
well, so you can place it anywhere on disk you like, or even
search for it in several directories. You may also set
it in your environment to share it amongst all your projects.
- The .jamdeps file is in a new format that allows binary
data to be in any of the fields, in particular the file
names. The original code would break if a file name
contained the '@' or '\n' characters. The format is
also versioned, allowing upgrades to automatically
ignore old .jamdeps files. The format remains human
readable. In addition, care has been taken to not add
the entry into the header cache until the entire record
has been successfully read from the file.
- The cache stores the value of HDRPATTERN with each cache
entry, and it is compared along with the file's date to
determine if there is a cache hit. If the HDRPATTERN
does not match, it is treated as a cache miss. This
allows HDRPATTERN to change without worrying about stale
cache entries. It also allows the same file to be
scanned multiple times with different HDRPATTERN values.
- Each cache entry is given an "age" which is the maximum
number of times a given header cache entry can go unused
before it is purged from the cache. This helps clean up
old entries in the .jamdeps file when files move around
or are removed from your project.
You control the maximum age with the HCACHEMAXAGE
variable. If set to 0, no cache aging is performed.
Otherwise it is the number of times jam must be run
before an unused cache entry is purged. The default for
HCACHEMAXAGE if left unset is 100.
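For example, enabling the persistent cache might look like this in Jamrules (a sketch: HCACHEFILE and HCACHEMAXAGE come from this patch, not stock Jam, and the file name, directory variable, and age value are invented):

```jam
# Keep the cache file in the build output directory rather than the
# source tree, and purge entries unused for 50 consecutive runs.
HCACHEFILE = .jamdeps ;
LOCATE on $(HCACHEFILE) = $(ALL_LOCATE_TARGET) ;
HCACHEMAXAGE = 50 ;
```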
- Jambase itself is changed.
SubDir now always sets HDRGRIST to $(SOURCE_GRIST) so
header scanning can deal with multiple header files of
the same name in different directories. With the header
cache, this no longer incurs a performance penalty
-- a given file will still only be scanned once.
The FGristSourceFiles rule is now just an alias for
FGristFiles. Header files do not necessarily have
global visibility, and the header cache eliminates any
performance penalty this might otherwise incur.
Because of all these improvements, the following claims can be
made about this header cache implementation that cannot be
made about Craig McPheeters' original version.
- The semantics of a Jam run will never be different
because of the header cache (the HDRPATTERN check ensures this).
- It will never be necessary to delete .jamdeps to fix
obscure jam problems or purge old entries.
Date: Thu, 03 Jan 2002 11:57:40 -0800
From: rmg@perforce.com
Subject: Jam release plan
I thought that this might be a good opening for me to let people know where
we're at with work on a new release of Jam. (Even though I'll defer
talking about header scanning just now).
I hope to soon have (I'm aiming at next week, really!) an update to
//public/jam/... comprising the changes to Jam made internally at
Perforce, integrated into //public/jam from a branch in //guest/richard_geiger/
where the individual changes from the Perforce internal version will
be imported. These changes will *not* be packaged into a new release
at this point... But you'll then be able to do integrations of
these changes from //public/jam/... into your //guest/.../jam/...
branches as desired, per the plan I outline here:
Christopher and I have done a triage to consider most other Jam
changes I'm aware of for inclusion in the new release (presumably this
will be Jam 2.4). Christopher reserves the right to be the final
arbiter on these decisions. In some cases, we'll take contributed
changes "as is"; in others, Christopher likes the intent of the
change, but wants to consider alternate implementations; in others, we
may decline to pick up the change altogether, at least for now.
I'll be contacting individual contributors of the changes we've
decided to take "wholesale". I'm hoping that, in most cases, these
individuals will be able to help by integrating the changes from
//public/jam/ into their own branches, which should make it easiest
for me to then integrate the individual features we want for Jam 2.4
into the //public/jam mainline.
Of course, anybody with a //guest Jam branch will be welcome and
encouraged to also integrate these changes.
At some point - hopefully before too many more weeks pass by - we'll
have a //public/jam/ that is ready to be packaged as Jam 2.4.
Beyond that, we can start a planning process for Jam 2.5, in hopes
that with a notch more of planning and coordination on my part, we can
do the best job of improving Jam, both for the "stock" and "custom" versions.
Date: Thu, 3 Jan 2002 13:07:11 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Changes in my Jam guest branch
It's not really my scheme, it's something I picked up from the jam archives.
I sure like it though.
Also note that it's a general technique. It can be used to create response
files as well as many other types of action blocks. Once you start using
the newline expansion trick, you'll find other uses for it.
The touch may not be required. I know with some Unix shells, if you do:
echo hi >> foo
and 'foo' doesn't exist, the '>>' fails as there is no file to append to.
Adding a touch there ensures the file exists, and that was its only intent.
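So a defensive append action might look like this (a sketch; the AppendNames rule name is invented):

```jam
# touch first, so '>>' has a file to append to even under shells
# (e.g. csh with noclobber set) that refuse to create one.
actions together AppendNames {
    touch $(<)
    echo $(>) >> $(<)
}
```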
Date: Thu, 03 Jan 2002 17:37:49 -0700
Subject: Expressing "lazy always update" intermediate files
Let's say file c depends on b which depends on a
a -> b -> c
File b is an intermediate file that, for various reasons, is best not
marked TEMPORARY (to make it concrete, it is a script that some jam
actions create, and sometimes people like to re-run b to create c
without running jam).
When I run jam, I want b to always be re-built, but only in the cases
where c is being built.
The dependency graph for b is complex (it depends on some of the
jamfiles in the project, the contents of various environment
variables, etc.) so it is simpler for it to always be rebuilt.
However, if I mark it with ALWAYS, then c is always rebuilt as well.
Date: Thu, 3 Jan 2002 18:50:00 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Expressing "lazy always update" intermediate files
(A depends on B = A -> B. I use dependency arrows, not data flow.)
One approach to try is to introduce a new node or two. Can you create a
new node, which has all the dependencies that 'c' normally would have, but
isn't actually a file?
Something like:
<real>c -> b -> c -> a
with:
NOTFILE c ;
or perhaps make the earlier 'c' a gristed NOTFILE node, and the eventual
'c' ungristed. It's hard to know without knowing the details, and I don't
really want to know the details :-)
A different approach is to leave the original order, but assign two actions
with the b and c nodes. The actions are processed in the order they are
called. This is something like how a yacc action can produce two files, a
.c and a .h. The graph for that is:
yacc.c -> yacc.h -> <stuff>
If the .c doesn't depend on the .h, a multi-job build may try to use one
of the files if a different job is processing the other. Jam can't
scan the .c for headers until after it's created. (This is why the yacc
rule has the .c file include the .h explicitly.)
Assuming you want the script to be created by a separate action, I'll call
two rules. If one action can create the script and the c file, only call
myCreateFile and have the action do both.
Something like:
rule myCreateScript { Depends $(<[2]) : $(<[1]) ; }
actions myCreateScript {
    rm -f $(<[1])
    echo echo a new script > $(<[1])
    chmod 0755 $(<[1])
}

rule myCreateFile {
    Depends $(<[2]) : $(<[1]) ;
    Depends $(<) : $(>) ;
}
actions myCreateFile {
    rm -f $(<[2])
    $(<[1]) > $(<[2])
}

myCreateScript b c ;
myCreateFile b c : a ;
Depends all : c ;
Depends c : d ;
I think that works. Of course the syntax can be improved, depending on
which of the node names can be automatically generated. Call a higher
level rule which creates names and calls lower level rules, etc.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Expressing "lazy always update" intermediate files
Date: Thu, 3 Jan 2002 19:04:47 -0800
How about ...
rule Newest {
    # do nothing .. just introduce a node to Jam
}
actions Newest {
    # do nothing .. just tell Jam how it can build a Newest
}
Newest N ;
# And then your a -> b -> c becomes
DEPEND b : a ;
DEPEND c : b ;
DEPEND b : N ;
I have not tried this, but it is a "trick" used in make all
of the time.
Since N never exists, it always forces the build of anything
that depends on it.
But Jam only builds those things related to the top level target
(normally 'all'), so if c is not required by this target, it never
gets built.
(An ALWAYS applied to N couldn't hurt!)
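Spelled out as one Jamfile fragment (untested, as the poster says; node names follow the post):

```jam
# N's do-nothing action never creates a file, so N always looks out
# of date, forcing b to rebuild -- but only when some requested
# target actually pulls in b through c.
rule Newest { }
actions Newest { }

Newest N ;
Depends b : a ;
Depends c : b ;
Depends b : N ;
ALWAYS N ;    # as the poster notes, this can't hurt
```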
Date: Fri, 4 Jan 2002 13:39:57 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Expressing "lazy always update" intermediate files
That's the same as saying you want b to be built as part of the action for
c. Go ahead.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Expressing "lazy always update" intermediate files
Date: Fri, 4 Jan 2002 17:37:11 -0500
I've tried to solve this problem, too, without success. Did any of the other
replies you received shed light on it? So far, my conclusion is that the
only way to do this is to hide construction of b by incorporating it as part
of c's build actions, much the same as you have suggested doing for response files:
a -> (b->c)
Sent: Thursday, January 03, 2002 7:37 PM
Subject: Expressing "lazy always update" intermediate files
Subject: Re: Expressing "lazy always update" intermediate files
Date: Fri, 04 Jan 2002 15:27:43 -0700
Doing it in a single action is impractical due to the limitations of
my shell (Win NT cmd.exe).
I tried this idea with Jamfile:
File <real>c : b ;
File b : a ;
Depends b : <fake>c ;
Depends <fake>c : a ;
NOTFILE <fake>c ;
Then this:
touch a
jam "<real>c"
rm c
jam "<real>c"
And b is not re-created on the second run of jam. So it doesn't do
exactly what I want.
Yes, this works. I'm not particularly eager to implement it in my
situation though -- there are 3-4 of these temporary files (scripts,
linker definition files, various response files, etc.) and passing all
of them along as the targets to the various rules that create each one
could be a maintenance problem.
It'd be nicest to be able to write rules "normally" but have the right
thing happen. Hmm...
Date: Fri, 4 Jan 2002 14:59:58 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Expressing "lazy always update" intermediate files
If you can automatically determine the script/file combination from a
basename of some sort, you can avoid the maintenance problem, i.e.,
something like:
rule myGenerateFile {
    local script file scriptDir fileDir ;
    script = $(<:S=.script) ;
    file = $(<) ;
    ...
    scriptDir = ? ;
    ...
    MakeLocate $(script) : $(scriptDir) ;
    MakeLocate $(file) : $(fileDir) ;
    myCreateScript $(script) $(file) ;
    myCreateFile $(script) $(file) : $(>) ;
}
called something like:
myGenerateFile foo.cpp ;
Of course, add extra arguments to it as necessary to fully specify the
script and file.
myGenerateFile baseName : extra args for script : extra args for file ;
or other variations as needed.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Expressing "lazy always update" intermediate files
Date: Fri, 4 Jan 2002 18:08:39 -0500
I think that with the use of the rule indirection feature which allows rules
to be invoked through variable expansions,
$(rule-name) x y : z ;
you can wrap up all the logic in one place and avoid the maintenance
problems. I did a similar thing for response-file support in Boost.Build.
The code to implement the feature in Boost.Jam is pretty straightforward.
Let me know if you want to see it.
Date: Sat, 05 Jan 2002 00:48:38 -0700
Subject: New LAZY builtin
Yes, in a nutshell. Or Craig's idea of using more than one target in
$(1) (his post basically works out of the box). I also think using
TEMPORARY and removing the files after they're used is an alternative.
I don't find any of them very satisfactory.
I've implemented a new LAZY rule that behaves a bit like a mixture
between ALWAYS and TEMPORARY. (I was originally calling it
LAZY_ALWAYS). After the fate of a target is determined, new code
loops through its immediate dependents. If they are marked "lazy" and
their fate is not to be updated, their fate is changed to "touch" and
their dependents are similarly checked recursively. So it is "lazy"
in the sense that the targets are always rebuilt, but only when their
direct dependents are.
Here is my documentation for the feature. It explains why I don't
like many of the alternatives, then describes the feature, then an
artifact of the implementation that could be called a bug. The patch
follows. The code will show up in //guest/matt_armstrong/jam
eventually, but I won't make another announcement here.
Because the Windows NT shell (cmd.exe) sucks, it is often best to
break up complex operations into many actions. Examples include
creating various response files and linker definition files for
the link step of a compile.
The problem with this is that these files may not always be
rebuilt when necessary. It is difficult to construct a
straightforward chain of actions that guarantees that all the
response files get rebuilt whenever the final link
makes use of them.
Stock Jam provides two main ways to accomplish this:
- Mark the response files TEMPORARY and remove them with
RmTemps after the link. This is problematic since removing
them just adds mystery to the final link process for the
typical engineer. People often want to look at the files to
see exactly how the link occurs.
- Perform the final link with several actions that take a list
of the final image and all the response files in $(<). Each
action would build one of the elements in $(<). This is an
obtuse hack that is difficult to explain and maintain.
The solution presented here is a new built in rule LAZY. When
called like this:
LAZY target ;
"target" is marked "lazy".
When Jam decides that a given target is to be built, it now checks
all direct dependents to see if they are marked lazy. If they
are, the lazy dependents are also marked for rebuilding, and their
direct dependents are similarly considered, and so on.
This affords the benefits of marking targets TEMPORARY (that they
will be rebuilt whenever the targets they depend on are rebuilt)
without the negatives (that they get deleted after the build).
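As a hypothetical usage sketch (target names invented), the response files feeding a link step would simply be marked:

```
# app.rsp is regenerated whenever app.exe is relinked, but unlike a
# TEMPORARY target it is left on disk for inspection afterwards.
Depends app.exe : app.rsp ;
LAZY app.rsp ;
```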
BUGS:
There is a bug in this implementation that I do not believe will
lead to practical problems. Consider the following set of dependencies.
d -> b* (d depends on b*)
c -> b*
b* -> a
Consider b* to be marked "lazy". The current implementation will
correctly rebuild b whenever either d or c is rebuilt. However,
it does not guarantee that BOTH d and c get built whenever b* is
updated. If b* is updated only because it is lazy, some of its
dependents may not be updated. For example, if c is updated and
b* is marked for update because it is lazy, then d may not be
updated. If d is marked for update and b* is marked for update
because it is lazy, c may not be marked for update. I call this a
bug since it shouldn't be necessary to run Jam twice to satisfy
all dependencies.
A simple way to work around this is to mark b* with NOTFILE. This
will cause b*'s time stamp to no longer be considered. This is
arguably a reasonable thing to do, since these files are rarely
edited by hand and whenever they are used they are rebuilt.
Another workaround is to mark the final linked image with LEAF,
which usually has a similar effect of removing b*'s time
stamp from consideration. Another workaround is to avoid having a
LAZY file with more than one dependent target (this is usually the
case anyway, which is the major reason I don't consider this
problem serious).
This patch is against my local copy of jam, which has been patched a
bit from stock jam. You can spot some features I took from Craig
McPheeters in there, and some of my own. But the diffs give plenty of
context and the changes are fairly minor, so hand patching should go smoothly.
--- rules.h Wed Dec 19 23:06:47 2001
+++ c:/ext1/sc/jam/main/rules.h Fri Jan 4 23:54:58 2002
@@ -100,20 +100,23 @@
# define T_FLAG_TEMP 0x01 /* TEMPORARY applied */
# define T_FLAG_NOCARE 0x02 /* NOCARE applied */
# define T_FLAG_NOTFILE 0x04 /* NOTFILE applied */
# define T_FLAG_TOUCHED 0x08 /* ALWAYS applied or -t target */
# define T_FLAG_LEAVES 0x10 /* LEAVES applied */
# define T_FLAG_NOUPDATE 0x20 /* NOUPDATE applied */
#ifdef OPT_GRAPH_DEBUG_EXT
# define T_FLAG_VISITED 0x40 /* Used in dependency graph output */
#endif
+#ifdef OPT_BUILTIN_LAZY_EXT
+# define T_FLAG_LAZY 0x80 /* LAZY applied */
+#endif
char binding; /* how target relates to real file */
# define T_BIND_UNBOUND 0 /* a disembodied name */
# define T_BIND_MISSING 1 /* couldn't find real file */
# define T_BIND_PARENTS 2 /* using parent's timestamp */
# define T_BIND_EXISTS 3 /* real file, timestamp valid */
TARGETS *deps[2]; /* dependencies */
--- compile.c Wed Dec 26 12:04:28 2001
+++ c:/ext1/sc/jam/main/compile.c Fri Jan 4 23:50:23 2002
@@ -158,20 +158,25 @@
parse_make( builtin_flags, P0, P0, P0, C0, C0, T_FLAG_NOTFILE );
bindrule( "NoUpdate" )->procedure =
bindrule( "NOUPDATE" )->procedure =
parse_make( builtin_flags, P0, P0, P0, C0, C0, T_FLAG_NOUPDATE );
bindrule( "Temporary" )->procedure =
bindrule( "TEMPORARY" )->procedure =
parse_make( builtin_flags, P0, P0, P0, C0, C0, T_FLAG_TEMP );
+#ifdef OPT_BUILTIN_LAZY_EXT
+ bindrule( "LAZY" )->procedure =
+ parse_make( builtin_flags, P0, P0, P0, C0, C0, T_FLAG_LAZY );
+#endif
+
#ifdef OPT_BUILTIN_MATCH_EXT
bindrule( "MATCH" )->procedure =
parse_make( builtin_match, P0, P0, P0, C0, C0, 0 );
#endif
#ifdef NT
#ifdef OPT_BUILTIN_W32_GETREG_EXT
bindrule( "W32_GETREG" )->procedure =
parse_make( builtin_w32_getreg, P0, P0, P0, C0, C0, 0 );
#endif
--- make.c Fri Jan 4 10:42:48 2002
+++ c:/ext1/sc/jam/main/make.c Fri Jan 4 23:54:07 2002
@@ -59,20 +59,25 @@
int updating;
int cantfind;
int cantmake;
int targets;
int made;
} COUNTS ;
static void make0( TARGET *t, int pbinding, time_t ptime,
int depth, COUNTS *counts, int anyhow );
+#ifdef OPT_BUILTIN_LAZY_EXT
+static void makelazy0( TARGET *t, int depth );
+static void makelazy( TARGET *t, int depth );
+#endif
+
#ifdef OPT_GRAPH_DEBUG_EXT
static void dependGraphOutput( TARGET *t, int depth );
#endif
static char *target_fate[] = {
"init", /* T_FATE_INIT */
"making", /* T_FATE_MAKING */
"stable", /* T_FATE_STABLE */
"newer", /* T_FATE_NEWER */
@@ -152,20 +157,100 @@
#endif
status = counts->cantfind || counts->cantmake;
for( i = 0; i < n_targets; i++ )
status |= make1( bindtarget( targets[i] ) );
return status;
}
+#ifdef OPT_BUILTIN_LAZY_EXT
+/*
+ * makelazy0() - checks if this target is not being built but marked
+ * lazy and if so, touches the target so it does
+ * get built.
+ */
+
+static void
+makelazy0( TARGET *t, int depth )
+{
+ TARGETS *c;
+ int i;
+
+ /*
+ * Step 1: don't bother if we are already being processed
+ */
+ if (t->fate <= T_FATE_MAKING)
+ return;
+
+ /*
+ * Step 2: don't bother if we're already being built
+ */
+ if (t->fate >= T_FATE_SPOIL)
+ return;
+
+ /*
+ * Step 3: don't bother if we're not lazy
+ */
+ if ( !(t->flags & T_FLAG_LAZY) )
+ return;
+
+ /*
+ * Step 4: change our fate to "touched"
+ */
+#ifdef OPT_GRAPH_DEBUG_EXT
+ if (DEBUG_FATE)
+ printf("fate change %s from %s to %s by lazy touch\n",
+ t->name,
+ target_fate[t->fate], target_fate[T_FATE_TOUCHED]);
+#endif
+ t->fate = T_FATE_TOUCHED;
+
+ /*
+ * Step 5: check our dependents for laziness.
+ */
+ for (c = t->deps[0]; c; c = c->next)
+ makelazy0(c->target, depth + 1);
+}
+
+/*
+ * makelazy() - make the dependents of this target lazily
+ *
+ * makelazy() checks if this target is being built and if so
+ * changes the fate of any lazy dependents so that they
+ * are built as well.
+ */
+
+static void
+makelazy( TARGET *t, int depth )
+{
+ TARGETS *c;
+ int i;
+
+ /*
+ * Step 1: don't bother if we're not being rebuilt
+ */
+ if (t->fate < T_FATE_SPOIL)
+ return;
+
+ /*
+ * Step 2: check our dependents for laziness.
+ */
+ for (c = t->deps[0]; c; c = c->next)
+ makelazy0(c->target, depth);
+}
+#endif
+
/*
* make0() - bind and scan everything to make a TARGET
*
* Make0() recursively binds a target, searches for #included headers,
* calls itself on those headers, and calls itself on any dependents.
*/
static void
make0(
TARGET *t,
@@ -507,20 +592,26 @@
c->target->name);
#endif
hfate = max( hfate, c->target->hfate );
}
/* Step 4b: propagate dependents' time & fate. */
t->htime = hlast;
t->hleaf = hleaf ? hleaf : t->htime;
t->hfate = hfate;
+
+#ifdef OPT_BUILTIN_LAZY_EXT
+ /* Step 4c: if we're being rebuilt, rebuild any of our lazy
+ dependents. */
+ makelazy( t, depth );
+#endif
/*
* Step 5: a little harmless tabulating for tracing purposes
*/
#ifdef OPT_IMPROVED_PATIENCE_EXT
++counts->targets;
#else
if( !( ++counts->targets % 1000 ) && DEBUG_MAKE )
printf( "...patience...\n" );
Subject: Re: Expressing "lazy always update" intermediate files
Date: Sat, 05 Jan 2002 01:21:13 -0700
By "maintenance problem" I meant the maintenance and comprehension of
the rule itself, not the users of the rule (which I agree can be
adequately shielded from the implementation).
For some background, this basic problem has existed in our jam rules
for over four years with lots of smart engineers not thinking of the
above solution. From that I conclude that it is not obvious why the
above solution actually works, and I like to avoid techniques that
appear in any way "magical" (Jam is magical enough to the
uninitiated). Sure, I have dabbled in jam for years and steeped my
brain in it for about a month and I now understand how this technique
works. I'm not at all confident it'll be clear to me after an
extended period away from the code.
In my case, there are 3 auxiliary files, and the thought of passing
four separate targets in $(<) with various rules and actions
referencing them by $(<[2]) and the like makes my head spin. There is
a reason the original engineers wrote the rules and actions the
straightforward way.
And so the LAZY rule, which I just posted, is born. ;-) I figure if
Jam has built in functionality to deal with intermediate files that
are deleted (TEMPORARY) it is reasonable for Jam to have built in
functionality to deal with intermediate files that aren't (LAZY).
I am hopefully stopping short of being over zealous here. :-) I'm not
suggesting that "LAZY" be part of stock jam.
Date: Mon, 7 Jan 2002 11:33:46 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Expressing "lazy always update" intermediate files
If you don't mind a bit of topic drift, I'm curious about that. Could you
describe what would happen?
Subject: Re: Expressing "lazy always update" intermediate files
Date: Mon, 07 Jan 2002 10:17:19 -0700
Actually I don't think it is NT specific -- these "response files" are
huge, so they must be built with piecemeal actions. So, it isn't
possible to build all the response files with a single actions block.
Craig showed me how to build multiple targets by putting all of them
in $(<), but I think that gets too complex.
multiple-targets targetA targetB targetC : ... ;
Date: Mon, 7 Jan 2002 13:29:17 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: New LAZY builtin
That looks like it could be useful, nice one.
Just a minor point - the name lazy doesn't suggest to me what the new
feature is doing.
Here are some alternative names for the new keyword:
COUPLE a : b ;
RELATE a : b ;
UPDATE a : b ; # jam has NOUPDATE, this is kinda the opposite of that one
IFUPDATE a : b ;
Date: Mon, 7 Jan 2002 13:57:42 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Expressing "lazy always update" intermediate files
The limitations on the size of an action block can be removed with one
of the extensions in my branch. I find with unlimited action sizes,
and the newline expansion trick, the standard NT shell can be used
for most of my actions. Where more complex logic is needed, I use perl.
The way to make this more useful is to set a bunch of variables on some
target, each of which can grow to large lists.
For example, if you're building a shared object, you may want the list
of object files and the list of archives in separate lists. I do this
by setting different variables on the target. ie, something like:
OBJS on libfoo.dll += foo.o ;
OBJS on libfoo.dll += bar.o ;
ARCHIVES on libfoo.dll += liba.lib ;
ARCHIVES on libfoo.dll += libb.lib ;
ARCHIVES on libfoo.dll += libc.lib ;
where the above is done internally by my other rules. Then in the
action block for building the shared object:
actions myBuildShared bind OBJS ARCHIVES {
if exist $(<).objs $(RM) $(<).objs
$(TOUCH) $(<).objs
echo$(SPACE)$(OBJS)$(SPACE)>>$(<).objs$(NEWLINE)
if exist $(<).archives $(RM) $(<).archives
$(TOUCH) $(<).archives
echo$(SPACE)$(ARCHIVES)$(SPACE)>>$(<).archives$(NEWLINE)
$(LINK) ... @$(<).objs @$(<).archives ...
}
it is sometimes better to pre-process files via separate actions, but the
number of times I do that is much less now than it was when action blocks
had a fixed size limit.
Date: Wed, 9 Jan 2002 12:01:29 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Improved Header Scan Cache for Jam
fyi, this is done now. The integration in my guest branch is a
lightly modified version of Matt's modifications to my original.
Subject: Re: Re: Improved Header Scan Cache for Jam
Date: Thu, 10 Jan 2002 09:18:57 -0700
And, for the record, please consider Craig's version to be the
canonical one. I'll be rolling Craig's stuff into what I use and
deleting //guest/matt_armstrong/jam/hdrscan_cache.
Date: Thu, 10 Jan 2002 11:02:35 -0700
Subject: "My" version of jam now available
I've published yet another custom version of Jam in
//guest/matt_armstrong/jam/patched_version/...
This represents the sum total of changes to stock jam that we've made
at our company. Some of these have been in use for 3-4 years, but
most of the code has been re-worked by me in the past month (e.g. some
of the rules were implemented in C++, I converted them to C, etc.)
//guest/matt_armstrong/jam/patched_version/LOCAL_DIFFERENCES.txt contains
a detailed description of each change. Each change should be totally
independent of the other, each selected by its own #define, so it
should be easy to pick and choose stuff you might like (thanks to
Craig McPheeters for this idea).
Here is the contents of the LOCAL_DIFFERENCES.txt.
This file details the differences between Geoworks' copy of Jam and
the stock upstream version as distributed by Perforce.
* Conventions Used for Jam Patches
All changes we've made to Jam C source are surrounded by an
#ifdef. The #ifdefs are constructed such that it is possible to
remove each feature independent of the other. This greatly eases
maintenance costs, since the next time an upstream version of jam
is merged in it'll be very easy to see why each change was made.
It also makes it easy to assess how big a tweak to jam each change
actually is.
The naming convention for the #defines is:
OPT_BUILTIN_..._EXT -- a new builtin rule
OPT_IMPROVE_..._EXT -- a new general improvement, but no real
change in functionality
OPT_FIX_..._EXT -- a bug fix to Jam (suitable for including in
the upstream Jam).
OPT_..._EXT -- anything that doesn't fall neatly in the above.
All changes made to Jamfiles or Jamrules are surrounded by obvious
comments of the form:
### LOCAL CHANGE
#
stuff
#
### LOCAL CHANGE
* The builtin Jambase is slightly changed.
The builtin Jambase has a few tweaks to make it nicer under NT.
It sets the MSVCNT var from MSVCDIR if MSVCDIR is set. MSVCDIR is
the variable Visual C++ 6.0's version of vcvars32.bat sets, while
MSVCNT seems to be a Visual C++ 5.0 thing. This change has been
sent upstream.
If MSVCNT is still unset, it uses W32_GETREG and W32_SHORTNAME to
grab the installation location of Visual C++, and sets MSVCNT
appropriately. This is merely a matter of convenience for people.
It doesn't complain if it can't find a compiler under NT.
It doesn't announce that it is using Visual C++.
In all other ways (variables set, rules and actions defined) the
built in Jambase is identical to stock Jam.
* New Builtin Rules
** PWD
A new rule PWD returns the current working directory. Used like so:
pwd = [ PWD ] ;
This, together with some Jam logic, can be used to generate a
fully qualified path name. Currently it is only used to fully
qualify the tools/bin directory before changing the PATH.
This option is controlled by the OPT_BUILTIN_PWD_EXT #define.
** MATCH
A new rule MATCH does regexp matching on a string, returning the
result as a list of matches. Used like so:
matches = [ MATCH string : pattern ] ;
matches[1] is the portion of 'string' that 'pattern' matched.
matches[2], matches[3], etc. hold the portions of 'string' matched
by each parenthesized subexpression in 'pattern'.
The syntax of the pattern regexp is identical to that of the
HDRSCAN variable, since this rule uses Jam's internal regexp
engine.
The initial purpose of this rule is to allow the implementation of
a Split rule within Jam, so things like path names can be easily
decomposed.
This option is controlled by the OPT_BUILTIN_MATCH_EXT #define.
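For instance, a Split-style decomposition might look like this (pattern and variable names are illustrative):

```
# Use MATCH to split a path into its directory and file parts.
parts = [ MATCH subdir/source.c : (.*)/(.*) ] ;
# $(parts[1]) is the full matched portion; $(parts[2]) and $(parts[3])
# are the parenthesized pieces, "subdir" and "source.c".
```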
** W32_GETREG
Available only under WinNT (Win2k as well). Gets a value from the
registry, like so:
value = [ W32_GETREG list ] ;
This is primarily so Jam can find the location of the Visual C++
installation from the registry, which makes it a bit easier to get
a build environment up and running. Otherwise, they would have to
set the MSVCDIR environment variable, either at Visual C++ install
time or by running the vcvars32.bat file that comes with Visual
C++.
This option is controlled by the OPT_BUILTIN_W32_GETREG_EXT
#define.
** W32_SHORTNAME
Available only under WinNT (Win2k as well). Takes a string
holding a file name and returns its short name. E.g. "Program
Files" -> "PROGRA~1" etc. Used like so:
short = [ W32_SHORTNAME longname ] ;
This is primarily useful for shortening the long path name
supplied by W32_GETREG, which often contains things like "Program
Files" in it, etc., which confuses Jam later on.
This option is controlled by the OPT_BUILTIN_W32_SHORTNAME_EXT
#define.
* New Features
** Header Caching
This code is taken from //guest/craig_mcpheeters/jam/src/ on the
Perforce public depot. Many thanks to Craig McPheeters for making his
code available. It is delimited by the OPT_HEADER_CACHE_EXT #define
within the code.
Jam has a facility to scan source files for other files they might
include. This code implements a cache of these scans, so the entire
source tree need not be scanned each time jam is run. This brings the
following benefits:
- If a file would otherwise be scanned multiple times in a
single jam run (because the same file is represented by
multiple targets, perhaps each with a different grist), it
will now be scanned only once. In this way, things are
faster even if the cache file is not present when Jam is run.
- If a cache entry is present in the cache file when Jam
starts, and the file has not changed since the last time it
was scanned, Jam will not bother to re-scan it. This
markedly improves Jam startup times for large projects.
This code has improvements over Craig McPheeters' original
version. I've described all of these changes to Craig and he
intends to incorporate them back into his version. The changes are:
- The actual name of the cache file is controlled by the
HCACHEFILE Jam variable. If HCACHEFILE is left unset (the
default), reading and writing of a cache file is not
performed. The cache is always used internally regardless
of HCACHEFILE, which helps when HDRGRIST causes the same
file to be scanned multiple times.
Setting LOCATE and SEARCH on the HCACHEFILE works as
well, so you can place it anywhere on disk you like or even
search for it in several directories. You may also set it
in your environment to share it amongst all your projects.
- The .jamdeps file is in a new format that allows binary data
to be in any of the fields, in particular the file names.
The original code would break if a file name contained the
'@' or '\n' characters. The format is also versioned,
allowing upgrades to automatically ignore old .jamdeps
files. The format remains human readable. In addition,
care has been taken to not add the entry into the header
cache until the entire record has been successfully read from
the file.
- The cache stores the value of HDRPATTERN with each cache
entry, and it is compared along with the file's date to
determine if there is a cache hit. If the HDRPATTERN does
not match, it is treated as a cache miss. This allows
HDRPATTERN to change without worrying about stale cache
entries. It also allows the same file to be scanned
multiple times with different HDRPATTERN values.
- Each cache entry is given an "age" which is the maximum
number of times a given header cache entry can go unused
before it is purged from the cache. This helps clean up old
entries in the .jamdeps file when files move around or are
removed from your project.
You control the maximum age with the HCACHEMAXAGE variable.
If set to 0, no cache aging is performed. Otherwise it is
the number of times a jam must be run before an unused cache
entry is purged. The default for HCACHEMAXAGE if left unset is 100.
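A Jamrules fragment enabling the on-disk cache might look like this (the file name and values are illustrative):

```
# Read/write the header-scan cache in $(TOP), aging out entries that
# go unused for 50 consecutive jam runs.
HCACHEFILE = .jamdeps ;
LOCATE on $(HCACHEFILE) = $(TOP) ;
HCACHEMAXAGE = 50 ;
```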
- Jambase itself is changed.
SubDir now always sets HDRGRIST to $(SOURCE_GRIST) so header
scanning can deal with multiple header files of the same
name in different directories. With the header cache, this
no longer incurs a performance penalty -- a given file
will still only be scanned once.
The FGristSourceFiles rule is now just an alias for
FGristFiles. Header files do not necessarily have global
visibility, and the header cache eliminates any performance
penalty this might otherwise incur.
Because of all these improvements, the following claims can be
made about this header cache implementation that can not be made
about Craig McPheeters' original version.
- The semantics of a Jam run will never be different because of
the header cache (the HDRPATTERN check ensures this).
- It will never be necessary to delete .jamdeps to fix obscure
jam problems or purge old entries.
** Exporting Jam variables to the environment using ENVEXPORT.
This change causes the global value of the ENVEXPORT variable to
take on special meaning. It becomes a list of Jam variables that
are to be exported into the environment.
E.g., with ENVEXPORT = FOO BAR BAZ ; set in a Jamfile,
the environment variables FOO, BAR, and BAZ will be set to
whatever values the Jam global variables of the same name were set to.
If a Jam global variable holds a list, the entire list is exported
to the environment. When the variable's name ends with "PATH",
"Path" or "path", then the list elements are concatenated together
with the SPLITPATH character separating elements (SPLITPATH is ';'
under Windows and ':' under Unix), otherwise the list elements are
concatenated with a single space.
No environment variables are exported by default.
This option is controlled by the OPT_ENVIRONMENT_EXPORT_EXT
#define.
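A hypothetical Jamfile fragment using the feature (variable names invented for illustration):

```
# Export C51INC and PATH into the environment of every action.
# PATH's list elements are joined with SPLITPATH because the name
# ends in "PATH"; C51INC's are joined with spaces.
ENVEXPORT = C51INC PATH ;
C51INC = $(TOP)/include ;
PATH = $(TOP)/tools/bin $(PATH) ;
```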
** The :X variable expansion
Expanding a variable with :X will change all \ chars in the
variable to / chars.
E.g.
foo = "a\\b\\c" ;
bar = $(foo:X) ;
# bar is now "a/b/c"
This is useful when dealing with cygwin tools that expect path
elements to be unix style, I guess.
FIXME: is this truly necessary? Or can it be solved in Jam?
E.g. we might be able to use the Split rule to get around this.
This feature is enabled with the OPT_EXPAND_UNIXPATH_EXT #define.
** Human Readable Dependency Output
This code is taken from //guest/craig_mcpheeters/jam/src/ on the
Perforce public depot. Many thanks to Craig McPheeters for making his
code available. It is delimited by the OPT_GRAPH_DEBUG_EXT #define
within the code.
With this option, debug level 10 will print out the entire dependency
tree in a form that is more easily understood than jam's debug level 6.
** Target Fate Change Debugging
This code is taken from //guest/craig_mcpheeters/jam/src/ on the
Perforce public depot. Many thanks to Craig McPheeters for making his
code available. It is delimited by the OPT_GRAPH_DEBUG_EXT #define
within the code.
With this option, debug level 11 prints out target fate changes as
they occur (and why they occur). This helps debug mysterious "why
is THAT file getting rebuilt" problems.
** Improved ...patience...
This changes the ...patience... lines to be printed out after the
first 100 and every subsequent 1000 files have been header scanned.
Previously, ...patience... was printed out for every 1000 targets.
This change both reduces the number of ...patience... lines printed,
and makes them more accurately reflect the work being done.
This change is enabled with the OPT_IMPROVED_PATIENCE_EXT #define.
** Improved debug level help
This change is delimited by the OPT_IMPROVE_DEBUG_LEVEL_HELP_EXT
#define within the code.
The -h option to jam now prints out what each of the debug levels do.
** Print Total Time
This change is delimited by the OPT_PRINT_TOTAL_TIME_EXT #define
within the code.
If the total time jam runs is greater than 10 seconds, the time is
printed when jam exits. This helps people back up claims that the
build is too slow and they need a faster machine. ;-)
** Improved HdrRule treatment
A new 3rd argument to HdrRule is the bound name of the 1st
argument to HdrRule. This allows HdrRule to extend the search
path for headers to include all directories headers have been
found in so far.
E.g. if a source file does "#include <foo/bar/baz.h>" and the
baz.h header is found in $(TOP)/include, this change allows
HdrRule to add $(TOP)/include/foo/bar to the HDRSEARCH path. This
way, if baz.h does #include "goo.h", any goo.h in
$(TOP)/include/foo/bar will be found.
The default Jambase makes use of this new argument to extend
HDRSEARCH on the header files.
This feature is enabled with the OPT_HDRRULE_BOUNDNAME_ARG_EXT
#define.
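A sketch of how a Jambase HdrRule might exploit the new argument (simplified; the real Jambase rule does more bookkeeping):

```
# $(3) is the bound name of the scanned file $(<); add its directory
# to the header search path of every header it includes.
rule HdrRule
{
    HDRSEARCH on $(>) += $(3:D) $(HDRSEARCH) ;
    Includes $(<) : $(>) ;
}
```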
** Improved "compile" debug output.
With level 5 jam debugging, a jam rule execution trace is
printed. This extends that debugging output to include:
- when a new rule is defined (with a special note when the new
rule re-defines a pre-existing rule).
- when a new actions is defined (with a special note when the
new actions re-defines a pre-existing actions).
- when an included Jamfile ends.
This makes it possible to write scripts that process Jam debugging
output that look for potential errors, such as re-defining a rule
or action that is part of Jambase.
This feature is enabled with OPT_IMPROVE_DEBUG_COMPILE_EXT.
** "IFUSED" targets
Because the Windows NT shell (cmd.exe) sucks, it is often best to
break up complex operations into many actions. Examples include
creating various response files and linker definition files for
the link step of a compile.
The problem with this is that these files may not always be
rebuilt when necessary. It is difficult to construct a
straightforward chain of actions that guarantees that all the
response files get rebuilt whenever the final link
makes use of them.
Stock Jam provides two main ways to accomplish this:
- Mark the response files TEMPORARY and remove them with
RmTemps after the link. This is problematic since removing
them just adds mystery to the final link process for the
typical engineer. People often want to look at the files to
see exactly how the link occurs.
- Perform the final link with several actions that take a list
of the final image and all the response files in $(<). Each
action would build one of the elements in $(<). This is an
obtuse hack that is difficult to explain and maintain.
The solution presented here is a new built in rule IFUSED. When
called like this:
IFUSED target ;
"target" is marked "ifused".
When Jam decides that a given target is to be built, it now checks
all direct dependents to see if they are marked ifused. If they
are, the ifused dependents are also marked for rebuilding, and
their direct dependents are similarly considered, and so on.
This affords the benefits of marking targets TEMPORARY (that they
will be rebuilt whenever the targets they depend on are rebuilt)
without the negatives (that they get deleted after the build).
BUGS:
There is a bug in this implementation that I do not believe will
lead to practical problems. Consider the following set of dependencies.
d -> b* (d depends on b*)
c -> b*
b* -> a
Consider b* to be marked "ifused". The current implementation
will correctly rebuild b whenever either d or c is rebuilt.
However, it does not guarantee that BOTH d and c get built
whenever b* is updated. If b* is updated only because it is
ifused, some of its dependents may not be updated. For example,
if c is updated and b* is marked for update because it is
ifused, then d may not be updated. If d is marked for update
and b* is marked for update because it is ifused, c may not be
marked for update. I call this a bug since it shouldn't be
necessary to run Jam twice to satisfy all dependencies.
A simple way to work around this is to mark b* with NOTFILE. This
will cause b*'s time stamp to no longer be considered. This is
arguably a reasonable thing to do, since these files are rarely
edited by hand and whenever they are used they are rebuilt.
Another workaround is to mark the final linked image with LEAF,
which usually has a similar effect of removing b*'s time
stamp from consideration. Another workaround is to avoid having an
IFUSED file with more than one dependent target (this is usually
the case anyway, which is the major reason I don't consider this
problem serious).
* Operational Changes
** Versioning
We add a PATCHED_VERSION variable that indicates the local version
of custom jam is in use.
The variable is a list. PATCHED_VERSION[1] is the major version,
PATCHED_VERSION[2] is the minor version.
As you might expect, major version increments indicate
non-backwards compatible changes (elimination of builtin rules or
other features, changing features in an incompatible way, etc.).
Minor version increments indicate the addition of backwards
compatible features and bug fixes.
It is expected that a project's Jamrules will check the
PATCHED_VERSION variable and check for a major version mismatch,
and ensure the minor version is not too low.
This option is enabled with the OPT_PATCHED_VERSION_VAR_EXT
#define.
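A Jamrules check might read like this (the required version numbers are invented for illustration):

```
# Refuse to build with a stock or incompatible patched jam.
if ! $(PATCHED_VERSION)
{
    Exit This tree requires the patched jam ;
}
if $(PATCHED_VERSION[1]) != 1
{
    Exit Incompatible patched jam major version: $(PATCHED_VERSION[1]) ;
}
```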
** Maximum Command Length for NT
Jam ships with a maximum command line length of 996 for Windows
NT. Windows NT 4.0 and greater can handle command line lengths of
at least 10240 characters (perhaps longer; no tests have been
done).
This change increases the maximum command line length to 10240 for
Windows NT 4.0 and greater.
Caveat: the default Windows NT 4.0 command shell can only handle
commands up to 1-2k bytes long for many of its own internal
commands, such as del and echo. So this feature has spurred the
implementation of jamshell.c, a simple shell that lives in
tools/jamshell.
This option is enabled with the OPT_FIX_NT_BATCH_EXT #define.
* Bug Fixes
** Windows NT Batch File Naming Bug
This code is taken from //guest/craig_mcpheeters/jam/src/ on
the Perforce public depot. Many thanks to Craig McPheeters for
making his code available. It is delimited by the
OPT_FIX_NT_BATCH_EXT #define within the code.
Running jam multiple times on the same machine could break because
jam's temp batch file names were of the form jamtmpXX.bat, where
XX begins at 00 and increases numerically.
This fix adds the jam process's own PID to the temp batch file
name, allowing multiple copies of jam.exe to run simultaneously
without interfering with each other.
** Improper handling of "on target" values during header scanning
Setting any "on target" variables for $(<) within a HdrRule will
actually set the global values for those variables and the "on
target" values will remain unchanged.
Why? Jam implements "on target" variables by swapping the current
global values with the target specific values (see pushsettings()
in rules.c) and then unswapping them when the target is no longer
in scope (see popsettings() in rules.c).
This works fine if the target variables are not changed between
calls to pushsettings() and popsettings(). But, when scanning for
header file dependencies, the HDRRULE is run, and so the "on
target" variables of $(<) can be set.
Doing so will actually cause the global value of the variable to
be set. Why? Because the target's value will be swapped with the
global value in the popsettings() call after the HdrRule is
called. The value set on $(<) will either not change (if the same
variable was previously set on the target), or be taken from the
global setting (if the variable had never been set on the target before).
This problem occurs with the default Jambase's HdrRule when any
file includes itself. In this case, $(<) will also be present in $(>).
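A minimal sketch, paraphrasing the stock Jambase HdrRule (grist handling omitted for brevity), of where the aliasing bites:

```
rule HdrRule
{
    # $(<) is the file being scanned; $(>) are the files it #includes.
    Includes $(<) : $(>) ;
    SEARCH on $(>) = $(HDRSEARCH) ;
}
```

If foo.c includes "foo.c", then $(<) appears in $(>): the "SEARCH on" assignment runs while $(<)'s settings are swapped in, and the later popsettings() leaves the intended value in the global SEARCH instead of on the target.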
This fix makes a copy of the target's "on target" variables and
uses the copy with pushsettings() and popsettings() in make.c's
make0() function. An alternate fix would be to freeze the "on
target" variables of $(<) within a HdrRule, disallowing any
modifications.
This code is enabled with the OPT_FIX_TARGET_VARIABLES_EXT #define.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: New LAZY builtin
Date: Fri, 11 Jan 2002 18:19:21 -0500
Why not just make this the meaning of ALWAYS + TEMPORARY? That was the first
thing I tried when I wanted this behavior.
That avoids introducing a new rule, too.
Subject: Re: New LAZY builtin
Date: Fri, 11 Jan 2002 17:16:17 -0700
Yeah, that's certainly a possibility (I tried that combination too).
Do the current semantics of ALWAYS + TEMPORARY have any useful
purpose? Let's see:
ALWAYS - mark the target for update regardless of its age on disk
TEMPORARY - if the target isn't on disk, take its age from its
oldest dependency, otherwise treat it as normal.
So it seems like right now, with a target marked ALWAYS and TEMPORARY,
ALWAYS "wins", so any current use of the combination is probably
accidental. I don't think giving new meaning to ALWAYS + TEMPORARY
would be so bad.
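For reference, the two flags as they would be applied in a Jamfile (target names are only illustrative):

```
ALWAYS install ;     # update "install" regardless of its age on disk
TEMPORARY foo.o ;    # if foo.o is missing, take its age from its
                     # oldest dependency instead of forcing a rebuild
```

Under the proposal, marking a target with both would take on the new LAZY meaning instead of letting ALWAYS win.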
From: "Matt Armstrong" <matt+dated+200201161718.912441@lickey.com>
Sent: Friday, January 11, 2002 7:16 PM
Subject: Re: New LAZY builtin
BTW, TEMPORARY is broken in stock Jam - it doesn't work right if the target
has multiple parents. My patch is:
***************
*** 175,180 ****
printf( "warning: %s depends on itself\n", t->name );
return;
+ /* Deal with TEMPORARY targets with multiple parents. When a missing
+ * TEMPORARY target is determined to be stable, it inherits the
+ * timestamp of the parent being checked, and is given a binding of
+ * T_BIND_PARENTS. To avoid outdating parents with earlier modification
+ * times, we set the target's time to the minimum time of all parents.
+ */
+ case T_FATE_STABLE:
+ if ( t->binding == T_BIND_PARENTS && t->time > ptime &&
+      t->flags & T_FLAG_TEMP )
+ t->time = ptime;
+ return;
+
default:
return;
Date: Fri, 11 Jan 2002 17:04:00 -0800
From: rmg@perforce.com
Subject: Perforce internal jam changes to //public/jam/...
FYI: (to all who are working on your own Jam changes):
I have just submitted the following change to the Jam sources in
the Public Depot (//public/jam/...):
| Change 1319 by rmg@rmg:pdjam:chinacat on 2002/01/11 16:38:34
|
| This change is a drop of the Perforce internal Jam changes
| since the 2.3 public release. The individual changes
| represented herein are preserved in the
| //guest/richard_geiger/intjam/ branch.
|
| The intent of this drop is to provide a base into which other
| contributors' Jam branches may be integrated. It is not
| intended to become a packaged release in this state. We will
| be integrating changes from other users prior to creating the
| next packaged release.
|
| Please refer to the src/RELNOTES file for an overview of the
| changes present in this integration.
|
My next step (toward the next Jam release) will be to contact
individual contributors about changes we have decided to take "as-is"
for the new release, to coordinate those integrations back into the
mainline. That will start happening next week, but please do NOT draw
any conclusions based on not having heard from me in that time frame.
I'll try to contact *everybody* who has Jam changes since 2.3.1 in the
Public Depot (whether or not we currently plan to grab any of your
changes), before the release is finalized.
In the event that we decide not to pick up a change that's important
to you in this round, take heart: there will be other releases down the line.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Perforce internal jam changes to //public/jam/...
Date: Fri, 11 Jan 2002 20:22:09 -0500
?? This looks like the exact same RELNOTES that's been there since 2.3.1
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Perforce internal jam changes to //public/jam/...
Date: Fri, 11 Jan 2002 20:26:55 -0500
Uh, Sorry. Better learn to sync before I speak ;-)
Subject: Re: Perforce internal jam changes to //public/jam/...
Date: Fri, 11 Jan 2002 22:08:30 -0800
From: rmg@perforce.com
Ah. Had me scared for a minute there!
Glad to know it's at least minimally discernible from the previous rev! :-)
Date: Sat, 12 Jan 2002 11:48:54 -0400
Subject: Re: New LAZY builtin
From: "Lex Spoon" <lex@cc.gatech.edu>
This would be confusing to people who don't know about it. A separate
keyword, on the other hand, provides a clear indication to go look in
the documentation if it's unfamiliar.
Date: Mon, 14 Jan 2002 11:03:15 +0100
From: "Niklaus Giger" <n.giger@netstal.com>
Subject: Re: Perforce internal jam changes to //public/jam/...
Thank you very much for your work. We are happy users of Jam in our
company and a new version of Jam will be put to use. E.g. the Glob
function is very welcome.
I compiled the sources under the latest version of Cygwin and noticed
that Cygwin needs an additional line in the Makefile (diff 46a47).
Furthermore, it took me some time to find in the documentation how to
use the new Glob function. Could you consider changing, in RELNOTES,
New 'Glob' builtin that returns a list of files in a list of
directories, given a list of patterns.
To
New 'Glob' builtin that returns a list of files given
two parameters (a list of directories, a list of filename patterns).
From: <boga@mac.com>
Date: Tue, 15 Jan 2002 09:14:52 +0100
Subject: TOGETHER targets not removed on failure
If an action fails, its targets are deleted by jam.
If the action is marked with TOGETHER, its targets are not deleted. Why?
I'd like them to be deleted.
Code from make1.c:
Is !( cmd->rule->flags & RULE_TOGETHER ) necessary here?!
Date: Fri, 18 Jan 2002 10:13:39 -0800
From: Raju Subbanna X4832 <hemantharaju.t.subbanna@nsc.com>
Subject: Auto jamfile creation
How can we auto-create a jamfile on client creation in Perforce?
Date: Fri, 18 Jan 2002 14:33:33 -0500
From: "Wolpe, Paul" <Paul.Wolpe@blackrock.com>
Subject: Using Jam to invoke a remote Jam
I am redoing my company's make system and plan on using Jam, however,
there is one issue I am not sure how to approach.
The current build system is set up such that if a user types:
%make sol
This will rlogin to a Solaris machine dedicated to compiling and execute
the equivalent of "make all."
Alternatively, if the user were to enter:
%make lin
It will do the same process on a Linux build machine, regardless of the
platform the user is currently using.
Is there a way that in my rules I can specify a shell command to rlogin
to the appropriate machine, and then call Jam on that machine with the
appropriate arguments?
Date: Sat, 19 Jan 2002 01:24:11 -0500
Subject: external dependency scanner
I am working with a language whose dependencies cannot be analyzed by a regular
expression search (Objective Caml), but it comes
with a utility for doing such scans. Being a new Jam user, I have had to try
to bootstrap my head into the Jam mindset and try to
figure out how to do something new at the same time. But alas, I seem to have
just reached the conclusion that my problem cannot be
solved without adding a primitive to the language.
The normal "make" thing to do is to use an include that depends on the
dependencies scanned from the files:
SRC := $(wildcard *.c)
OBJ := $(SRC:.c=.o)
test: $(OBJ)
$(CC) -o $@ $(OBJ)
include $(OBJ:.o=.d)
%.d: %.c
depend.sh $(CFLAGS) $< > $@
where depend.sh is a two line script which runs gcc -M and regexps the .d file
to depend on the same as the .o file. A more
thorough explanation is in Section 5.4 of:
http://www.canb.auug.org.au/~millerp/rmch/recu-make-cons-harm.html
So now of course the only reason this works is because make promises to reboot
itself if it finds any includes of out of date
files. I'm not sure if it does this after updating only one .d file or a bunch,
but it reboots itself. Apparently jam does not:
Now this would seem fair, since it would also be okay to just read the .d in and
DEPEND it. But I can't seem to find any file
reading functions. No way to bind a variable to the contents of a file.
One thing I thought of would be to use a sh backquote and
then get the file back, but again:
read command or something?
I'd like to say other than that the Jam paradigm looks really nice.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: external dependency scanner
Date: Sat, 19 Jan 2002 07:54:33 -0500
in and DEPEND it. But I can't seem to find any file
I'm sure you already realize this, but the included file has to be a Jam
language input file, so its contents are treated the same way as what you
write in a Jamfile.
of would be to use a sh backquote and
there a known hack of the source code that gives a file
It isn't perfectly clear what your problem is, but if I understand you correctly...
Within the current Jam core language I can think of only one basic way: you
need to extend the action which builds your .d file with some postprocessing
so that either:
a. It is Jam language source code and can be handled by include
or
b. It can be scanned with the regular expression scanner to get what
you want out of it using a customized HDRRULE and HDRSCAN.
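Option (a) can be sketched with a post-processing step like the following (a deliberately simplified one-liner; a real ocamldep wrapper would also need to handle continuation lines and multiple targets):

```shell
# Turn a Makefile-style dependency line such as
#   main.cmo: util.cmo parser.cmo
# into a Jam-language statement that `include` can read:
#   Includes main.cmo : util.cmo parser.cmo ;
echo 'main.cmo: util.cmo parser.cmo' \
  | sed -e 's/^\(.*\):\(.*\)$/Includes \1 :\2 ;/'
```

Running ocamldep through such a filter and writing the result to the .d file makes the .d file valid Jam source, so it can be pulled in with include.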
If you are willing to use one of the many extended versions of Jam out there
which has exposed a regular expression substitution facility (ours calls the
rule SUBST - binaries at www.boost.org/tools/build; documentation at
www.boost.org/tools/build/build_system.htm), you can set HDRSCAN on the
OCaml file to .* (or some other appropriately general regexp) and HDRRULE to
your own dependency-managing rule, and process all the lines yourself until
you get what you need out of it. I'm certain the next official release
(coming soon) will have this feature or something like it, since all the
code is basically already there and everybody who hacks jam adds it themselves.
Date: Sat, 19 Jan 2002 14:09:45 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Auto jamfile creation
I don't know whether it's possible, but I suspect that if you succeed, your
success will be a failure in drag.
Two newly-created clients will have the same jamfile, but two months later
one or both may have been edited a little bit. And then, one engineer's
going to write some code, test it in his client, submit it, and then it
doesn't work for another engineer, whose jamfile is different, and a day
or so is wasted trying to find the problem.
The conventional approach, keeping the jamfile/makefile in perforce along
with the source code, has its merits :)
Date: Sun, 20 Jan 2002 11:12:59 +0100
From: "Erling D. Andersen" <e.d.andersen@mosek.com>
Subject: Newbee Jam ? and comments
I am moving my make system to Jam. It looks quite promising.
I have a question regarding the -I setting. I do something like
SubInclude TOP path1 ;
SubInclude TOP path2 ;
in my $(TOP)/Jamfile. I hoped when building the objects in the Jamfiles of the
two directories that they would use
-I $(TOP)/path1 -I $(TOP)/path2
so *.h files from both directories were visible. That seems not to be the case.
Is there an easy way of doing this?
I have had a couple of frustrations with Jam which are:
1. When I do
TOP = c:\\mosekdev ;
Then it seems to be essential to include the c: there, I think.
It took me an hour to figure that out.
2. If I do
VAR1 = $(VAR2) $(VR3);
and VR3 is a misspelling of VAR3, then the assignment seems to fail without any warning.
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Mon, 21 Jan 2002 11:50:10 -0500
Subject: Boost.Build news
I am pleased to make the following announcements:
1. Most of the really difficult design problems of the Boost.Build
rewrite have now been solved, and we're ready to divide up the work and move
forward with implementation! I will post notice of a design document later
today for your reading enjoyment.
2. Rene Rivera has been pushing the current Boost.Build codebase
forward with the following features:
1. Fixed a problem with the Jamfiles, Jamrules, and other files
getting included multiple times. Sometimes as many as 15 times
in my project :-(
2. Support for <libflags> and <sysinclude> in all the current
toolsets.
3. "stage" rule that I mentioned some time in the past to collect
files from the various subdirectories into a single one, with
file renaming depending on the subvariant spec.
4. Ability to specify <dll> as a source dependency. This has the
effect that the DLL is linked in by the use of LIBPATH and
FINDLIBS instead of directly with NEEDLIBS, which is the preferred
way for shared libraries.
3. Rene has generously agreed to take over maintenance of the current
Boost.Build codebase so that I can devote more energy to the new one. Rene
will also be participating in that project. Rene has proven his
understanding of the current system and I'm confident in his judgement.
Rene will coordinate enhancements like the testing work being done by Joerg
Walter and Brad King. Needless to say, I really appreciate Rene's help.
Date: Mon, 21 Jan 2002 17:53:37 -0700
From: Ray Caruso <Ray.Caruso@Netvion.com>
Subject: Problems Setting OPTIM on a target
I have one or two .cpp files in my project for which I would like to turn
off optimization.
I have set the global version of OPTIM and then am trying to set OPTIM on
the specific .cpp file.
OPTIM = -O3 ; # Using full opt on g++
OPTIM on foo = -O0 ; # turn off opt for foo.cpp
I've tried all kinds of versions of this such as
OPTIM on foo.cpp = -O0 ;
or
OPTIM on foo$(SUFOBJ) = -O0 ;
regardless, running jam -d 2 shows it compiling everything, including
foo.cpp with -O3.
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Mon, 21 Jan 2002 22:54:58 -0500
Subject: Boost.Build design document
A document describing key parts of the new Boost.Build architecture (to be
implemented) is now at tools/build/architecture.html in the Boost CVS tree,
and can also be viewed at:
http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/boost/boost/tools/build/architecture.html
Rene, I'm awaiting a proposed merging of the Preamble and Initialization
sections from you.
Date: Thu, 24 Jan 2002 11:06:46 -0700
From: Ray Caruso <Ray.Caruso@Netvion.com>
Subject: Re: Problems Setting OPTIM on a target
Just wondering if anyone has any input on this. I am kinda in a jam (pun
not intended)
on this. I have used the "on" form on variable assignment before without
problem. Please, any clues?
Date: Thu, 24 Jan 2002 13:09:27 -0700
From: Ray Caruso <Ray.Caruso@Netvion.com>
Subject: Re: Problems Setting OPTIM on a target
Thanks to Diane and everyone who replied to my plea for help.
From: Ian Godin <ian@sgrail.com>
Date: Fri, 25 Jan 2002 08:32:21 -0800
Subject: Compiling too many times
I haven't been using jam for long, and I ran into this little
problem for which I haven't been able to find a solution:
Jamfile:
Main a : a.c ;
Main b : a.c ;
I'm just playing around, so a.c is:
int main( void ) {
return 0;
}
When I run jam on this, it compiles a.c twice.
I'm not quite sure why:
...found 11 target(s)...
...updating 3 target(s)...
Cc a.o
Link a
Chmod1 a
Link b
Chmod1 b
...updated 3 target(s)...
This happens between libraries as well. I'm converting
a large project from make to jam, and compiling every
file twice is a huge waste of time :) We basically build
shared and static libraries from the same C files.
Date: Fri, 25 Jan 2002 19:41:40 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Compiling too many times
It's a weakness. I don't know how to fix it properly, but a workaround
should be easyish: Use the LibraryFromObjects rule twice and the Objects
rule once instead of using Library twice.
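The workaround might be sketched like this (library and file names are only illustrative); compile once with Objects, then archive the same object into each library with LibraryFromObjects:

```
Objects a.c ;
# a.o is built once, then archived into both libraries:
LibraryFromObjects liba$(SUFLIB) : a$(SUFOBJ) ;
LibraryFromObjects libb$(SUFLIB) : a$(SUFOBJ) ;
```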
Btw, do you use the same compiler options for shared and static libraries?
From: Ian Godin <ian@sgrail.com>
Subject: Re: Compiling too many times
Date: Fri, 25 Jan 2002 10:48:28 -0800
Yes I do use the same flags. Thanks, that was indeed the solution.
Basically jam seems to accumulate all actions for a given target together
(went hunting through the source code). I can see a use for this behavior,
but it's a little weird. Perhaps adding a "once" modifier to actions... but
that requires a little more thought...
Anyways, that fixes my problem and it's pretty easy to work around.
So far I'm very impressed with jam... very much better than make.
Date: Fri, 25 Jan 2002 20:18:12 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Compiling too many times
I thought about it some more now, and I think jam's in the wrong. I've
heard an explanation for it, but now I think it's wrong.
If we make the basic assumption that the build process can fail (due to
compile errors, lack of disk space or other reasons) at any point and a
rebuild from that point should work, then I don't think jam has a
legitimate reason to build a target twice.
If jam builds a target twice in the exact same way, then one of the action
invocations is obviously superfluous. If jam builds a target twice in two
different ways, then it cannot know which one is left on disk in case a
build is interrupted, and that breaks the assumption above.
Date: Fri, 25 Jan 2002 14:26:30 -0500
From: Michael Gentry <mgentry@sharemedia.com>
Subject: Why does "jam install" build differently than "jam"
Does anyone know why Jam would build things in a different order if
"install" is added to the command line?
I have several custom rules that generate files which then get used in
the build process (such as .idl -> .hh and .cc). If I run the following
commands, it works fine:
jam clean
jam
jam install
But this sequence will not work:
jam clean
jam install
For some reason, a regular "jam" will build the .hh/.cc files before
compiling the things that depend upon them, but "jam install" doesn't
build them until later in the process.
Date: Fri, 25 Jan 2002 20:00:50 -0800
From: Ian Godin <ian@sgrail.com>
Subject: Re: Compiling too many times
Sorry, I'm sending this from home and I don't have your original message.
But here are my thoughts after playing with this for a while. BTW I got
everything working to my satisfaction (by calling Objects separately).
Jam does not suffer from the problem you mentioned, because it seems
to delete files that didn't complete successfully. And it always runs all
commands, so it should rebuild the same way every time.
Going back to my example:
Main a : a.c ;
Main b : a.c ;
I think this behavior is wrong because when I write Jamfiles, I think about
dependencies. In the above example, I'm saying "a" depends on "a.c", and "b" depends
on "a.c". I am NOT saying build a.o twice because two things depend on a.c. The details
of how to "best" build those should be handled by jam, Jambase, and Jamrules. Now
granted, sometimes best might mean build a.o twice, but I don't think
that is the common case. So I believe the default behavior is incorrect.
But anyways, that's my thinking/philosophy on the subject. I also noticed the "together"
modifier on actions. From the docs, it seems to do pretty much exactly what is
needed (multiple identical actions on a target are treated as a single action). I haven't
done any experiments yet with it.
Thanks to everyone who responded and helped. Seems I've hit a common
problem for newbies of jam :)
Date: Sat, 26 Jan 2002 17:54:24 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Why does "jam install" build differently than "jam"
The build order is in principle determined only by the dependencies jam
can see. On a single-CPU system, jam will use the same build order every
time, and there's a fairly simple rule that mostly tells you the order.
However, if you use -j (usual on SMP or netbuild systems), the build order
may be different. FWIW, I also have a hacked jam that (much handwaving
here) makes it build a lot faster, and it changes the build order even
more than -j does.
The basic rule has to be: Make your dependencies explicit.
By accident :)
Make sure there's a Depends invocation that tells jam about those
dependencies.
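In the .idl case described above, that means spelling out the dependency on the generated header (file names here are hypothetical):

```
# server.cc #includes the generated foo.hh, and foo.hh comes from
# foo.idl, so tell jam explicitly:
Depends server$(SUFOBJ) : foo.hh ;
```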
Date: Thu, 31 Jan 2002 02:36:16 -0800 (PST)
Subject: Problems in connecting CEPC to host through Ethernet
I am not able to connect CEPC to a host through an ethernet card.
I am using 3com EtherLink 10/100 PCI (3c905C-TC) card.
I have given the IRQ and base address by editing Autoexec.bat.
The IRQ I gave was 11. That was what I found in the BIOS setup.
Date: Thu, 31 Jan 2002 09:26:24 -0800
From: rmg@perforce.com
Subject: Jam 2.4 release status
[Everything below assumes you are familiar with the Perforce Public
Depot, and its role in the ongoing evolution of Jam. If you care about
that, please see
http://public.perforce.com/public/index.html
and
http://public.perforce.com/public/jam/index.html ]
I would dearly love to finalize Jam 2.4 by the end of February.
Presently:
- I've integrated all of the Perforce-internal changes into
//public/jam/... in the Public Depot.
- I've integrated the handful of the "slam-dunk" changes from various
//guest/.../jam/... branches.
- I've also done a bunch of "null-change" integrations, aimed solely
at updating the revision history in Perforce, to ease the process
of looking for new contributed changes in the future.
- There's been at least one field-contributed bugfix in the last week
that's now in the mainline.
- I think I've individually contacted everybody who made changes to
//guest/.../jam/... branches, to let you know about the status of
your changes WRT 2.4. If anybody out there is thinking "What? why
wasn't *I* contacted?", please send me mail at
"opensource@perforce.com"
- There are several outstanding changes from contributors that we
know we want to integrate, though we may want to implement them
differently, and I'm not sure whether they will make it into 2.4.
I'll probably be looking closest at things labeled as "bugfixes" at
this point.
- There are some contributed changes yet to be considered; some of
these are "major features" (in terms of functionality, at least, if
not perhaps code impact): for example the header scan caching
stuff. I'm guessing that these won't make it into the 2.4 release,
but are still open for consideration in the next release.
(By the way I'd hope that the next release after 2.4 will be able
to happen much quicker than 2.4 has followed 2.3).
At this point, I'd like to encourage everybody to grab the current
head revisions of //public/jam/..., and give it a go. My own resources
for testing across diverse platforms are thin, so I'm hoping that we
can shake out major portability problems (or other bugs) by way of
your efforts. I.e., I'll do what I can, but if there's a platform you
really care about, you can help by trying the current "release
candidate" there.
I will try to at least make a statement of the platforms I -expect- it
to work on, by next week. I'll also shift to a mode of posting news and
status at
http://public.perforce.com/public/jam/index.html
instead of long messages (like this one) to this list.
In short: the mainline has some significant improvements, which I am
aiming to make into the 2.4 release within the month. It's a great
time for you to try the head of main and report bugs (or bug me about
fixes you've already submitted, that I've missed integrating). The
process of getting 2.4 out will have also served as a learning curve
for me, which should allow work on 2.5 (3.0?) to progress that much faster.
Finally, thanks to all Jam users and developers out there. I've
enjoyed meeting you, and starting to work with you. Your attitude rocks.
Date: Mon, 04 Feb 2002 16:03:43 +0100
From: "Niklaus Giger" <n.giger@netstal.com>
Subject: Re: Jam 2.4 release status
I never include "." in my path. Therefore I suggest that you
change in Makefile
all: jam0
jam0
to
all: jam0
./jam0
I had no problem compiling it under Cygwin. It seems to work as
expected with my old Jamfiles.
When I aborted it with Ctrl-C I got the following output:
STATUS_ACCESS_VIOLATION
jam.exe.stackdump
This is different from Jam 2.3, which sometimes needed more than
one Ctrl-C to stop, but never printed more output than "interrupted".
From: <boga@mac.com>
Date: Tue, 5 Feb 2002 08:39:01 +0100
Subject: Jam 2.4 vs. 2.3
The syntax of 'in' was changed in jam 2.4:
I used to write something like:
Now I had to replace it with:
I don't mind this change, however it should be documented.
The change seems to be caused by the rewrite of jamgram.yy
< | arg `in` list
to
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Tue, 5 Feb 2002 15:49:25 -0500
Subject: Re: jamming digest, Vol 1 #312 - 1 msg
For the record, I mind it.
From: "Andrew Reynolds" <andrew@syndeocorp.com>
Date: Wed, 6 Feb 2002 10:09:27 -0800
Subject: building jam for the first time - newbee help please
I am trying to build jam 2.3 itself from the source files I downloaded from
the perforce web site. I am building on Solaris 2.6. The make file
generates jam0 but I keep getting this error when running jam0:
Archive bin.unix/libjam.a
ld: fatal: file bin.unix/libjam.a: cannot open file: No such file or directory
ld: fatal: File processing errors. No output written to a.out
I am frustrated with the documentation on this - it is very vague and there
are few references on how to build jam the first time or what environment
needs to be set and I just can't seem to figure out what needs to be set to
fix this. Or maybe I have just spent too much time writing perl scripts and
have forgotten all my c.
So could someone take pity on me and point me the right way on this? (either
which doc covers this or give me some instructions here).
Date: Wed, 06 Feb 2002 14:18:04 -0800
From: rmg@perforce.com
Subject: Re: jamming digest, Vol 1 #313 - 2 msgs
Ah, finally, controversy! Well, closest thing we've seen to it :-)
No final ruling yet, but it does sound like the change breaks existing
Jamfiles, and unless there's a strong reason to do so, I suspect we'll
want to put it back the way it was.
BTW, I'll likely be distracted (for the most part) from jam work
until the last week of the month, at which point I hope to be able to
complete my checklist of reviews needed before I can finalize a
packaged 2.4 release.
One item on the checklist will be to review all of the traffic on this
list since, say Jan 1, to make sure I haven't missed picking up any
important bug reports or fixes. So, please do keep them coming.
Date: Wed, 06 Feb 2002 16:16:16 -0700
From: Ray Caruso <Ray.Caruso@Netvion.com>
Subject: "in" syntax change in 2.4
Oh man, don't modify syntax, please. Add new stuff, but don't break
existing syntax.
That would be one very big reason to not move from 2.3 to 2.4.
From: Markus Scherschanski <MScherschanski@dspace.de>
Date: Thu, 7 Feb 2002 13:33:09 +0100
Subject: Single Pseudo-Target
I got a bit of a problem: I'm trying to create a target that is only called
if the command line says so, I mean something like:
jam -f myjamfile cclean
It should be a substitution for the normal clean.
I tried several ways; here's what I mean:
rule CClean {
local _i ;
if $(UNLOCK_ONLY) != TRUE { ECHO removing object files and lib ; }
for _i in $(>) {
UnlockIt $(_i) ;
if $(UNLOCK_ONLY) != TRUE { CleanIt $(_i) ; }
}
}
}
actions existing CleanIt {
echo deleting $(<)
del $(<)
}
actions existing UnlockIt {
echo unlocking $(<)
attrib -r $(<)
}
CClean cclean : $(OBJ_FILES) $(LIB) ;
I also tried:
NOTFILE cclean ;
ALWAYS cclean ;
Depends ...
The phenomenon is that it calls the rule every time but never executes the
actions commands!
What can I do? What the rule does should be clear; if not, I'll explain.
Many thanks. Hopefully,
Date: Thu, 7 Feb 2002 12:40:37 -0800 (PST)
Subject: Re: Single Pseudo-Target
You were really really close... :)
Try:
NOTFILE cclean ;
ALWAYS cclean ;
rule CClean {
local _i ;
if ! $(UNLOCK_ONLY) {
CCleanMessage cclean ;
}
for _i in $(>) { UnlockIt cclean : $(_i) ;
if ! $(UNLOCK_ONLY) {
CleanIt cclean : $(_i) ;
}
}
}
actions CCleanMessage {
echo Removing object files and lib...
}
actions existing CleanIt {
echo deleting $(>)
del $(>)
}
actions existing UnlockIt {
echo unlocking $(>)
attrib -r $(>)
}
CClean cclean : $(OBJ_FILES) $(LIB) ;
$ jam -d0 cclean
Removing object files and lib...
unlocking src/lib/a/a.o
deleting src/lib/a/a.o
unlocking src/lib/a/liba.a
deleting src/lib/a/liba.a
$ UNLOCK_ONLY=true jam -d0 cclean
unlocking src/lib/a/a.o
unlocking src/lib/a/liba.a
From: Jack_Goral@NAI.com
Subject: RE: Single Pseudo-Target
Date: Thu, 7 Feb 2002 07:10:27 -0600
I think you can take the idea from my code below:
rule Msdev # projectname {
local _t = $(1:S=.dsp) ; # make it : projectname.dsp
local _wt = \"$(1) - Win32 $(BUILD)\" ;
local _clean = [ FGristFiles clean ] ;
#
# find the .dsp file
#
SEARCH on $(_t) = $(SEARCH_SOURCE) ;
#LOCATE on $(_t) = $(SEARCH_SOURCE) ;
#
# make all depend on the target (.dsp)
#
Depends build : $(_t) ;
Depends all : build ;
Depends clean : $(_clean) ;
#
# always remake the target so 'msdev' decides what to rebuild
#
ALWAYS $(_t) ;
#
# msdev build target, for example: "NGExpertSvr - Win32 Debug"
#
if $(SUB_BUILD) {
_wt = \"$(1) - Win32 $(SUB_BUILD) $(BUILD)\" ;
}
MSDEV_BUILD_TARGET on $(_t) = $(_wt) ;
MSDEV_BUILD_TARGET on $(_clean) = $(_wt) ;
#
# run the 'msdev' project build
#
RunMsdev $(_t) ;
#
# use 'msdev' when target is 'clean'
#
RunMsdevClean $(_clean) : $(_t) ;
}
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Sun, 10 Feb 2002 19:53:18 -0500
Subject: Nasty Jam behavior
My copy of Jam has the following behavior on Windows:
C:\>jam -f-
x = foo/bar ;
ECHO $(x:G=) ;
^Z
foo\bar
I guess it's mostly harmless when slashes and grist have their usual meaning
(though backslashes can cause trouble for some cygwin tools), but if you're
trying to do anything else, this behavior is at least surprising, and
potentially problematic. Slashes seem to automatically get reversed during
binding anyway, so is there any reason to keep this quirk in Jam?
Subject: Re: Nasty Jam behavior
From: Matt Armstrong <matt@lickey.com>
Date: Sun, 10 Feb 2002 19:42:04 -0700
The upcoming 2.4 release of stock jam's RELNOTES mentions something
related to this. It says that using any of GDBSM will result in the
variable being parsed and rebuilt as a filename, while all other
modifiers do not have this behavior (previously, they all did). I
wonder why :G is considered a filename operation though -- strictly
speaking it really has nothing to do with files.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Nasty Jam behavior
Date: Sun, 10 Feb 2002 21:54:10 -0500
Hmm. I can't see any reason for it to happen with any of the modifiers
regardless of their application as filename modifiers, since, as I said, it
happens anyway at binding time.
Subject: Re: Nasty Jam behavior
From: Matt Armstrong <matt@lickey.com>
Date: Sun, 10 Feb 2002 20:00:19 -0700
It makes some sense for things like :P and :D -- the filename
operations are OS dependent. Perhaps the win32 path functions are
smart enough to recognize '/' as a path separator, but they use '\'
when rebuilding the filename.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Nasty Jam behavior
Date: Sun, 10 Feb 2002 22:14:17 -0500
Of course I understand that. But who is this behavior helping? Surely any
system that depends on it is a fragile one. What's the scenario? To justify
the current behavior, it seems to me that all three of these conditions must
be true:
* The system supports user-specified paths with forward slashes
* The system happens to use these modifiers on all such paths
* The system depends somehow on the lack of forward-slashes in
paths that have been modified.
Subject: Re: Nasty Jam behavior
From: amaury.forgeotdarc@ubitrade.com
Date: Mon, 11 Feb 2002 09:49:57 +0100
Because of this, we chose to slightly modify Jam,
to make it more consistent:
Jam only builds filenames with '/' separators,
except when executing actions, where
$(<), $(>) and "bind" variables are rewritten with '\'.
With this change, we always write our Jamfiles with '/'
in path, even for NT-only rules. We even made our
developers think that backslashes are forbidden in
Jamfiles (which is not true. Jam still recognizes both).
The advantages are consistency and platform independence.
The drawback is that every variable containing a file name
must be declared as "bind"; otherwise it will appear with '/'.
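The convention described above can be sketched as follows (the rule, action, and variable names are hypothetical, not from the stock Jambase): any variable holding a file name is set on the target and listed in the action's bind list, so that it is bound as a file name (and, under the modification described, rewritten with '\') when the action executes:

rule CopyFile {
# SRCFILE holds a path written with '/' in the Jamfile; because it
# appears in the bind list below, it is bound as a file name when
# the action runs (and rewritten with '\' under this modified Jam).
SRCFILE on $(<) = $(>) ;
Depends $(<) : $(>) ;
}
actions CopyFile bind SRCFILE {
copy "$(SRCFILE)" "$(<)"
}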
From: "Vianney Lecroart" <lecroart@nevrax.com>
Date: Tue, 12 Feb 2002 15:57:41 +0100
Subject: jambase & keepobjs
I'm a newbie with Jam and I'm trying to port my big project to Jam (on the
Windows platform to start).
I want to modify the default Jambase, so I put a Jambase file in the same
directory as the Jamfile and run "jam" in that directory, but it doesn't use
my Jambase file; it uses the default one. I have to run "jam -fJambase" to
make it pick up mine, which isn't great. Is there a way to have my Jambase
used automatically?
Other question: I build a (big) library. Strangely, it creates all the
objects, creates the lib, and then erases all the objects. But the default
Jambase has this:
if ! ( $(NOARSCAN) || $(KEEPOBJS) ) { RmTemps $(_l) : $(_s) ; }
and NOARSCAN is set to "true" on NT, so the if should not be entered and the
objects should not be erased, right?
Another strange thing is that when I set KEEPOBJS to true, the objects are
not deleted but the lib is not created. I think it's because:
Depends lib : $(_l) ;
is not called if KEEPOBJS is true, so Jam never checks whether the lib is
there or not (I'm not sure about that).
From: "Vianney Lecroart" <lecroart@nevrax.com>
Subject: Re: jambase & keepobjs
Date: Tue, 12 Feb 2002 17:08:15 +0100
Exactly; now I understand the behavior.
OK, but my project is a library, so I want the lib :) Is there a trick to
generate the lib even if KEEPOBJS is true?
Now, another question: how do I manage the directory separator in a portable
way? Putting / doesn't work when creating a directory with MkDir on Windows,
so I use $(SLASH) like this:
TOP = r:$(SLASH)code$(SLASH)nel ;
But it's not very readable, and the user could change the path, so it's not a
very user-friendly format. Is there another way to do this?
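One alternative is the stock Jambase's FDirName rule, which composes a list of path elements into a single path using the platform's separator, so the Jamfile never spells the separator out. A sketch based on the path above (assuming FDirName accepts the drive letter as a leading element):

# FDirName joins its arguments with the platform's path separator.
TOP = [ FDirName r: code nel ] ;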
From: Patrick Frants <patrick@quintiq.com>
Date: Wed, 13 Feb 2002 08:08:57 +0100
Subject: How do I *not* build a file
I have a file b.cpp which is included by a.cpp.
When using Glob, all .cpp files turn up, including b.cpp.
Adding b.o to the depot and specifying both NOTFILE and NOUPDATE for b.o
works, but I think it's very ugly. I would
rather have one of these solutions (in this order):
1. Somehow exclude b.cpp from the list of source files returned by GLOB.
I can't find any operator for excluding an item
from a list. Maybe I could write a rule to exclude an item from a list with a for loop.
2. Somehow tell jam that b.cpp is very special and should not be compiled.
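For solution 1, Jam has no built-in list-subtraction operator, but a small helper rule with a for loop works. A sketch (the rule name FRemove is hypothetical; the parentheses around the in test avoid operator-precedence surprises, and the excluded name may need adjusting to match the directory-prefixed form GLOB returns):

rule FRemove {
# Return the elements of $(>) that are not in $(<).
local _out _i ;
for _i in $(>) {
if ! ( $(_i) in $(<) ) { _out += $(_i) ; }
}
return $(_out) ;
}

SRCS = [ FRemove b.cpp : [ GLOB . : *.cpp ] ] ;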
Date: Wed, 13 Feb 2002 18:46:32 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: How do I *not* build a file
Name it b.inc instead of *.cpp.
Yes, you can do that.
Isn't that what you're doing with NOUPDATE?
From: "Achim Domma" <achim.domma@syynx.de>
Date: Thu, 14 Feb 2002 15:36:17 +0100
Subject: First tries with boost-jam
I'm just doing my first steps with jam, but have no success. I downloaded
jam binaries for windows from the boost page and extracted them to a folder.
Then I set the following environment variables :
BOOST_BUILD_INSTALLATION=J:\boost-build
INTELC="D:\Program Files\IntelC++\compiler50\ia32"
VISUALC="D:\Program Files\Microsoft Visual Studio\VC98"
JAM_TOOLSET=INTELC
then I write the following simple Jamfile :
project-root ;
exe MyTestExe : first.cpp
second.cpp
some_more.cpp
;
Executing Jam without parameters I get :
Compiler is Intel C/C++
warning: unknown rule project-root
warning: unknown rule exe
...found 7 targets...
As far as I understand this means that jam does not know what to do with
'project-root' and 'exe'. I think jam should find the required files via
BOOST_BUILD_INSTALLATION !?
Could somebody give me a hint in the right direction?
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: First tries with boost-jam
Date: Thu, 14 Feb 2002 10:27:22 -0500
Try first setting BOOST_ROOT to the root directory of your boost
installation. BOOST_BUILD_INSTALLATION is becoming obsolete and that part of
the documentation may have gotten out-of-synch.
From: Markus Scherschanski <MScherschanski@dspace.de>
Date: Thu, 14 Feb 2002 16:51:44 +0100
Subject: Release what?!
First of all, many thanks to those who helped me with my little problem of
building a single object (once!).
I was just testing Jam to see whether it would fit my firm's needs, so I'm
also interested in Jam's future.
Everybody is talking about a 2.4 release, but when will it arrive, and what
will its features be? Will it be Win95-compatible, and will it have all the
functions ftjam has?
How about regular expressions, and some support for e.g. testing the
existence of files, not just via actions and bloody shell programming?
My last wish would be BETTER DOCUMENTATION!
Nevertheless, I must say Jam is the best make tool I ever saw, and I'll try
to make my bosses like it too. ;)
Date: Thu, 14 Feb 2002 17:08:33 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Release what?!
I'm using that right now, and you can too. Just check it out from the
perforce public depot. 2.4 is at //public/jam/...
From: "Achim Domma" <achim.domma@syynx.de>
Subject: RE: First tries with boost-jam
Date: Thu, 14 Feb 2002 18:00:02 +0100
It works so far, thanks! Now I have the 'Command-line and Environment
Variable Quoting' problem. As mentioned in the documentation, I put double
quotes around the path, but it does not work. Here are the top rows of the output:
Jamrules: No such file or directory
...found 43 targets...
...updating 7 targets...
intel-win32-C++-action
bin\PythonISAPI\intel-win32\debug\runtime-link-dynamic\Re
quest.obj
'D:\Program' is not recognized as an internal or external command,
operable program or batch file.
"D:\Program Files\IntelC++\compiler50\ia32"\bin\icl
/Zm400 -nologo -GX -c
/Zi /Od /Ob0 /GX /GR
Dd -I"." -Fo"bin\PythonISAPI\intel-win32\debug\runti
me-link-dynamic\Request.obj" -Tp"Request.cpp"
...failed intel-win32-C++-action
bin\PythonISAPI\intel-win32\debug\runtime-link-
dynamic\Request.obj ...
intel-win32-C++-action
bin\PythonISAPI\intel-win32\debug\runtime-link-dynamic\Response.obj
From: "Achim Domma" <achim.domma@syynx.de>
Date: Mon, 18 Feb 2002 13:55:13 +0100
Subject: Building COM Objects with boost jam
I have successfully built my first DLLs with the boost version of jam. To
use jam as our build system I must be able to build COM objects, so I have
to implement rules and actions for compiling *.idl files. I read the
documentation of Boost.Build, but the included jamfiles are quite large.
Could somebody give me a starting point for where to hook IDL processing
into the Boost.Build process?
PS: Is it OK to post to this list even if it's Boost.Build specific? I
thought this list fits better than the boost list.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Building COM Objects with boost jam
Date: Mon, 18 Feb 2002 08:50:17 -0500
We have our own mailing list for Boost.Build-related discussions:
Date: Wed, 20 Feb 2002 18:06:58 +0000 (GMT)
From: Richard Smith <richard@ex-parrot.com>
Subject: Auto generated files
I'm trying to write a Jamfile for a project that has some automatically
generated .cc and .h files, and the automatically generated .cc files can
include the automatically generated .h files. I would like to use Jam's
ability to generate files on demand to generate the .h files; however, I've
been unable to get the dependencies quite right. I wonder whether someone can help.
I have the following test Jamfile
rule AutoSource {
Clean clean : $(1) ;
Depends $(1) : $(2) ;
}
rule AutoHeader { Clean clean : $(1) ; }
actions AutoHeader { touch $(<) }
actions AutoSource { cp $(>) $(<) }
AutoHeader foo.h ;
AutoSource foo.cc : foo.src ;
Main foo : foo.cc ;
# End file
... in a directory containing just a file, foo.src:
#include "foo.h"
int main() {}
When I run Jam the first time it builds foo.cc correctly but it doesn't
know that it needs to build foo.h before compiling it, and so fails. If I
re-run Jam, it is able to analyse the included files in foo.cc and it then
builds foo.h and compiles and links foo happily.
Is there a way of getting this to work? ( I do not want to just add a
"Depends $(1) : first ;" line to the AutoHeader rule. )
Subject: Re: Auto generated files
From: Matt Armstrong <matt@lickey.com>
Date: Wed, 20 Feb 2002 12:36:21 -0700
Once actions start running (and, say, build your foo.cc) there is no
way to modify the dependency information. For jam to build something
in one run, it needs the complete dependency tree before it starts
building anything. There is no way to scan foo.cc for headers after
Jam has just built it.
I wouldn't do that either.
Since foo.cc is generated, presumably the header files it will depend
upon are known ahead of time. You could add "Depends foo.cc : foo.h"
in there yourself. You might be able to wrap the messy details in the
rules that generate foo.cc. This is kind of yucky, since you're
inserting more knowledge about how foo.cc is generated into the Jam
rules than you might like, but doing stuff like this is the only way
to get it to work with Jam.
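Matt's suggestion, folded back into the AutoSource rule from the original post, might look like the sketch below. It hard-codes the knowledge that the generated source includes foo.h, and uses Jam's built-in Includes relation, so anything that depends on foo.cc also picks up a dependency on foo.h:

rule AutoSource {
Clean clean : $(1) ;
Depends $(1) : $(2) ;
# The generator is known to emit '#include "foo.h"'; record that
# up front instead of relying on header scanning of a file that
# does not exist until the build has started.
Includes $(1) : foo.h ;
}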
From: Badari Kakumani <badari@cisco.com>
Date: Wed, 20 Feb 2002 12:15:02 -0800
Subject: Re: emacs editing mode for Jam?
i am new to the list. i could browse the above changes using the
cgi script. but how do i download the actual source files?
do i need some perforce client running on my unix box to download these?
Date: Wed, 20 Feb 2002 17:21:26 -0800 (PST)
Subject: Re: emacs editing mode for Jam?
You could do that, or you could use my version of P4DB to get to the page
that has a "Download file" link (in the legend at the top):
http://www.tsoft.com/~dianeh/cgi-bin/p4db/fv.cgi?FSPC=//guest/eric%5fscouten/jam%2dmode/jam%2dmode.el&REV=1
Date: Wed, 20 Feb 2002 17:59:34 -0800 (PST)
Subject: Re: Auto generated files
If foo.h only needs to be generated when it doesn't exist (i.e., its
generation isn't dependent on whether it's newer than foo.src), just
change:
rule AutoHeader {
Depends files : $(1) ;
Clean clean : $(1) ;
}
But if foo.h should be re-gen'd when it's out-of-date with foo.src, then
you'd need to make foo.h depend on foo.src:
rule AutoHeader {
Depends $(1) : $(2) ;
Clean clean : $(1) ;
}
and change:
AutoHeader foo.h : foo.src ;
Date: Wed, 20 Feb 2002 19:46:03 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Another compatibility problem in Jam2.4
In early Feb, Miklos (boga@mac.com) reported that the syntax for 'in'
changed in the current development version of Jam 2.4. His reported
fix is to alter the jamgram.yy file:
A second problem is with expressions like the following:
list = 1 2 3 4 ;
element = 1 ;
if ! $(element) in $(list) {
Echo Wrong. ;
}
With Jam 2.3 the echo is not reached. With 2.4-dev, the echo is executed.
If you add brackets to the example, jam 2.4 will do the right thing. The
change breaks a lot of my code.
One solution is to revert that section of jamgram.yy back to an earlier
style, with the extensions that 2.4 needs. The new section which is working
for me is:
expr : arg
{ $$.parse = peval( EXPR_EXISTS, $1.parse, pnull() ); }
| arg `=` arg
{ $$.parse = peval( EXPR_EQUALS, $1.parse, $3.parse ); }
| arg `!=` arg
{ $$.parse = peval( EXPR_NOTEQ, $1.parse, $3.parse ); }
| arg `<` arg
{ $$.parse = peval( EXPR_LESS, $1.parse, $3.parse ); }
| arg `<=` arg
{ $$.parse = peval( EXPR_LESSEQ, $1.parse, $3.parse ); }
| arg `>` arg
{ $$.parse = peval( EXPR_MORE, $1.parse, $3.parse ); }
| arg `>=` arg
{ $$.parse = peval( EXPR_MOREEQ, $1.parse, $3.parse ); }
| expr `&` expr
{ $$.parse = peval( EXPR_AND, $1.parse, $3.parse ); }
| expr `&&` expr
{ $$.parse = peval( EXPR_AND, $1.parse, $3.parse ); }
| expr `|` expr
{ $$.parse = peval( EXPR_OR, $1.parse, $3.parse ); }
| expr `||` expr
{ $$.parse = peval( EXPR_OR, $1.parse, $3.parse ); }
| arg `in` list
{ $$.parse = peval( EXPR_IN, $1.parse, $3.parse ); }
| `!` expr
{ $$.parse = peval( EXPR_NOT, $2.parse, pnull() ); }
| `(` expr `)`
{ $$.parse = $2.parse; }
;
From: sam th <sam@uchicago.edu>
Date: 22 Feb 2002 16:11:44 -0600
Subject: Newbie question
I'm just learning Jam, but I like it quite a lot. However, there's one
thing that I just can't seem to figure out how to get jam to
understand. I have a project with a couple subdirectories, call them
proj/lib and proj/bin. I compile a library in proj/lib, and that works
fine. But then I want to compile a program in proj/bin, and I can't get
Jam to find the library. I've tried lots of things. What's the best
way to do this?
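With the stock Jambase, the usual layout is a top-level Jamfile that SubIncludes each subdirectory, a Library rule in proj/lib, and a LinkLibraries rule in proj/bin naming that library. A sketch with hypothetical file and target names (note that once SubDir grist is in play, the library name may need explicit grist to resolve, as discussed elsewhere in this archive):

# proj/Jamfile
SubDir TOP ;
SubInclude TOP lib ;
SubInclude TOP bin ;

# proj/lib/Jamfile
SubDir TOP lib ;
Library libproj : a.c b.c ;

# proj/bin/Jamfile
SubDir TOP bin ;
Main prog : main.c ;
LinkLibraries prog : libproj ;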
Date: Tue, 26 Feb 2002 10:10:29 -0800
From: rmg@perforce.com
Subject: jam jobs in the Public Depot
(I will at some point mark the "must-fix" ones severity 'A', but have
not done so yet - I'm just collecting the list so far).
This further(*) introduces the use of Perforce jobs as a feature of
the public depot, somewhat uncharted territory, but one that will
be useful as the Public Depot revs up. In particular, expect the
jobspec (think of it as the bug tracking database schema) to change as
we learn more about this. I've tried to keep it very, very simple for
now. There may also be some Perforce access control issues with jobs
in a public repository to work out, but that's just part of the game.
If you are a Jam *and* Perforce user who feels comfortable with jobs
and would like to submit Jam bug reports (or feature requests) using
jobs in the Public Depot - go for it!
Once I have what I think is the list of "must-fix"es for 2.4, I'll
post to this list, so you'll have a final shot at arguing for other ones.
* Well, actually, Sam Wise of Perforce was the real ground breaker,
having started tracking p4hl issues with "jobs". Thanks Sam!
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: jam jobs in the Public Depot
Date: Tue, 26 Feb 2002 13:36:45 -0500
I think you should seriously consider fixing the buffer overrun problems
also, especially those in expand.c.
We have done that in Boost.Jam by implementing a string "class" in 'C'.
We're currently rolling the Perforce 2.4 changes into our source (compile.c
and expand.c are causing some significant trouble), but should have that
done in the next coupla weeks.
Subject: Re: jam jobs in the Public Depot
Date: Tue, 26 Feb 2002 10:52:33 -0800
From: rmg@perforce.com
favor delaying the "finalization" of 2.4 for this?
My _intent_ of the moment is just to fix things that would inhibit a
happy "stock" 2.3.1 user from wanting to upgrade to 2.4; my _hope_ of
the moment is that this could happen very soon. (Also, we'll benefit
from my simply having gone through a first iteration at the release
packaging (etc) process).
Larger changes (including ones like the buffer overrun problems, for which
your fixes are, I presume, extensive) would be addressed in a later release.
From: "Stephen Smith" <khadrin@hotmail.com>
Date: Thu, 28 Feb 2002 11:44:58 -0500
Subject: Locating Includes in OpenVMS
This message regards the practice of including a relative path when
specifying include files:
// foo.cxx
#include "foo/bar.h"
int main() {}
The Compaq C++ compiler for VMS can get along with this fine. For
example, assuming a project laid out as follows
/root/foo/src:
foo.cxx
/root/foo:
bar.h
compile it like this
cxx/include=("/root") foo.cxx
and all is well.
What I cannot figure out is how to write the Jamfile so that the
compiler is happy and jam is able to correctly scan header dependencies.
For example, if
HDRS = \"/root\" ;
the compiler is happy, but jam cannot locate "foo/bar.h" to check its
timestamp.
We could also try specifying the directory VMS style...
HDRS = root:[000000] ;
but then both the compiler and jam are unhappy.
Date: Thu, 28 Feb 2002 11:23:30 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: re: TOGETHER targets not removed on failure
The answer is no, it is a mistake. The intention is to keep library
archives from being deleted if, for example, a single file can't be
added to the archive. But I'm thinking the check should be:
!( cmd->rule->flags & RULE_UPDATED )
Namely, if the target only sees 'updated' sources on its action list,
it presumably must maintain state of its own and therefore shouldn't be
deleted on failure to update.
The original confusion came about because the 'updated' and 'together' modifiers
are always used together in the stock Jambase, and so the incorrect test didn't hurt.
I just wanted to check with you: would switching the test to the 'updated'
instead of 'together' actions modifier address your problem?
From: <boga@mac.com>
Date: Fri, 1 Mar 2002 11:39:12 +0100
Subject: Re: TOGETHER targets not removed on failure
Yes, I think the output of actions marked with UPDATED must not be deleted.
(But this should be documented!)
From: Markus Scherschanski <MScherschanski@dspace.de>
Date: Fri, 1 Mar 2002 16:11:39 +0100
Subject: How many jams are there?
Does anyone have an overview of how many Jam versions there are and which
features they provide? How about a global merge?
E.g. Matt's version has a MATCH function, ftjam has SUBST; ftjam can do
Win9x, Matt's version can't. So what now?
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Tue, 5 Mar 2002 12:28:19 -0500
Subject: short-circuit evaluation
We recently discovered a difference in behavior between Jam 2.3.2 and
2.4: Aside from the fact that Jam now accepts '&' and '|' in addition to
'&&' and '||' as conditional operators, the behavior of the old
operators has been changed to match that of the new ones: "short-circuit
evaluation" has been disabled. To see this, throw the following at
'jam -f-':
if $(FALSE) && [ ECHO 'this is Jam 2.4' ] {}
It doesn't make much sense to me to have added '&' and '|' to the
language if they're not going to operate differently from '&&' and '||'.
The fix is pretty easy. This is the one we use for our merged version of
Jam. I think the only difference for stock Jam is that you need to
replace the use of "frame" in compile.c with "lol", but don't hold me to it:
===================================================================
RCS file: /cvsroot/boost/boost/tools/build/jam_src/compile.c,v
retrieving revision 1.8.4.3
diff -r1.8.4.3 compile.c
160c160
< LIST *lr = parse_evaluate( parse->right, frame );
---
175c175,179
< if( ll && lr ) status = 1;
---
arse_evaluate( parse->third, frame );
179c183,191
< if( ll || lr ) status = 1;
---
arse_evaluate( parse->third, frame );
Index: jamgram.yy
===================================================================
RCS file: /cvsroot/boost/boost/tools/build/jam_src/jamgram.yy,v
retrieving revision 1.5.4.3
diff -r1.5.4.3 jamgram.yy
69a70
arse_make( compile_eval,l,P0,r,S0,S0,c )
216c217
< { $$.parse = peval( EXPR_AND, $1.parse, $3.parse ); }
220c221
< { $$.parse = peval( EXPR_OR, $1.parse, $3.parse ); }
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Tue, 5 Mar 2002 15:21:57 -0500
Subject: Re: jamming digest, Vol 1 #327 - 1 msg
I /swear/ I didn't actually write "arse_make" or "arse_evaluate"!
There's a missing initial 'p' in these cases, in case it isn't obvious.
Date: Wed, 6 Mar 2002 18:20:54 +0100 (CET)
From: Jan Langer <jan@langernetz.de>
Subject: LOCATE_TARGET on
I'm new to jam and this list. I've read most of the docs I found, but some
simple things I just can't find.
I have three directories, 'bin', 'obj' and 'src'. The sources reside in
'src', the object files shall be compiled to 'obj', and the executable
shall go to 'bin'.
This is my Jamfile, but it says that 'c4p.o depends on itself', and the
executable goes to obj, not bin.
SEARCH_SOURCE = src ;
LOCATE_TARGET = obj ;
LOCATE_TARGET on c4p = bin ; # this seems to be ignored
Main c4p : c4p.cc ;
I left C++, C++FLAGS and HDRS out because I think they're not important
for this example.
What is wrong with my Jamfile?
jan langer ... jan@langernetz.de
"pi ist genau drei"
From: Craig Allsop <callsop@auran.com>
Date: Fri, 8 Mar 2002 10:16:43 +1000
Subject: compile errors?
If jamgram.c and jamgram.y are supplied with the jam source, shouldn't they
be up to date with jamgram.yy? The source at public.perforce.com/jam/src
compiles as so:
jamgram.c
jamgram.y(311) : error C2065: 'EXEC_UPDATED' : undeclared identifier
jamgram.y(313) : error C2065: 'EXEC_TOGETHER' : undeclared identifier
jamgram.y(315) : error C2065: 'EXEC_IGNORE' : undeclared identifier
jamgram.y(317) : error C2065: 'EXEC_QUIETLY' : undeclared identifier
jamgram.y(319) : error C2065: 'EXEC_PIECEMEAL' : undeclared identifier
jamgram.y(321) : error C2065: 'EXEC_EXISTING' : undeclared identifier
I've run yyacc over jamgram.yy and used bison to generate jamgram.c, however
jamgram.y includes rules.h which has the following typedef:
typedef struct _rule RULE;
This doesn't compile as RULE is a token used by the grammar.
Date: Sun, 10 Mar 2002 16:41:18 +0100 (CET)
From: Jan Langer <jan@langernetz.de>
Subject: SubDirHdrs
I just wondered why the rule SubDirHdrs in the builtin Jambase file is
rule SubDirHdrs { SUBDIRHDRS += $(<) ; }
and not
rule SubDirHdrs { SUBDIRHDRS += [ FDirName $(<) ] ; }
the documentation (Jambase Manpage) says:
"SubDirHdrs d1 ... dn ;
Adds the path d1/.../dn/ to the header search paths for source
files in SubDir's directory. d1 through dn are elements of a directory path."
i think this means that d1 to dn are composed together to one path.
what is wrong?
ps: i there a collection of user defined jam rules on the net. i have
written some rules to handle PCCTS (a well-known parser generator)
files. although i not sure if i did it correctly (i just works quite
well in my case) i would like to share it with others who need it.
Date: Sat, 9 Mar 2002 03:10:14 -0800
From: David Lindes <user-perforce-jam@daveltd.com>
Subject: SoftLink rule?
I recently discovered jam, and I've been playing with it a bit,
trying to learn my way around... (I hope you don't mind a
non-list-member posting... that doesn't seem to be discouraged on the web site)
In my experiments with it, I came upon a desire to have a
Jamfile of mine create a symbolic link to a file in a different
directory, which would then be compiled, and which I wanted 'jam
clean' to get rid of for me...
I didn't see an obvious way of doing that (easily) with the
existing Jambase, but I figured this might be a common enough
thing that perhaps an addition to Jambase would be warranted...
So, I tried making a new Jambase file with the following changes...:
--- /home/lindes/src/otherware/devel/jam/jam-2.3/Jambase Thu Jan 4 07:53:08 2001
+++ /usr/tmp/Jambase.SoftLink Sat Mar 9 11:03:15 2002
@@ -681,6 +681,14 @@
SEARCH on $(>) = $(SEARCH_SOURCE) ;
}
+rule SoftLink
+{
+ Depends files : $(<) ;
+ Depends $(<) : $(>) ;
+ SEARCH on $(>) = $(SEARCH_SOURCE) ;
+ Clean clean : $(<) ;
+}
+
rule HdrRule {
# HdrRule source : headers ;
@@ -1539,6 +1547,11 @@
actions HardLink
{
$(RM) $(<) && $(LN) $(>) $(<)
+}
+
+actions SoftLink
+{
+ $(RM) $(<) && $(LN) -s $(>) $(<)
}
actions Install
... and that seems to work just fine for me. I don't know my
way around jam well enough to know if there's something I might
be doing that would be generally problematic and/or naive, and I
certainly don't know if this would create problems (and if so,
what sorts of problems, and/or what a good fix would be) on
platforms that aren't particularly similar to my own... So do
with this what you will, but I think this change (or one
comparable to it) would be a nifty feature...
David Lindes, possible future-jam-addict ;-)
P.S. I also thought about just adding the Clean line to the
HardLink rule, but I can see reasons why that might be bad
in some situations. In mine it would have been fine, but
I figured I'd create a SoftLink rule instead so as not to
be suggesting something that might cause problems for people. :-)
Date: Mon, 11 Mar 2002 17:01:04 -0800
From: rmg@perforce.com
Subject: Re: SubDirHdrs
What revision of the builtin Jambase are you using?
//public/jam/src/Jambase#10, which will be in the upcoming 2.4
release, seems to have the correct definitions (which is the one you
apparently expected):
rmg $ p4 print //public/jam/src/Jambase#10 | egrep SUBDIRHDRS Jambase | grep +=
SUBDIRHDRS += [ FDirName $(<) ] ;
I suspect that the Jambase you are looking at is out of date
WRT the documentation.
None that I'm aware of; for now, you could register as a Perforce
Public Depot user, and at least post your useful jam rules there.
I hope in upcoming months to add infrastructure to the Public Depot to
make it easier to post such things in ways that will make it easy for
interested persons to find them.
Date: Mon, 11 Mar 2002 21:20:12 -0800 (PST)
Subject: Re: LOCATE_TARGET on
You can do either:
SEARCH_SOURCE = src ;
LOCATE_TARGET = objects ;
Main c4p$(SUFEXE) : c4p.c ;
LOCATE on c4p$(SUFEXE) = bin ;
or:
SEARCH_SOURCE = src ;
LOCATE_TARGET = objects ;
Main c4p$(SUFEXE) : c4p.c ;
MakeLocate c4p$(SUFEXE) : bin ;
Note that I changed your output dir to "objects" -- that's to get rid of the warning.
(I also added $(SUFEXE) because I'm on an NT :) (Or :(, depending on how
you want to look at it :)
Date: Tue, 12 Mar 2002 11:37:52 +0100 (CET)
From: Jan Langer <jan@langernetz.de>
Subject: Re: Re: SubDirHdrs
2.3, from the boost version of jam.
I already changed it in my Jambase file and compiled jam again, so I'm
quite comfortable with it if the error occurs only in this version.
Yes, I will do that. But they don't work the way I want them to. The
problem is the following:
I have a grammar file. From this file the actual cpp and h files are
created. Now I want to search the grammar file for include directives
and make them dependencies of the cpp file. Currently I use this
construct:
HDRRULE on $(_grm) = HdrRule ;
HDRSCAN on $(_grm) = $(HDRPATTERN) ;
HDRSEARCH on $(_grm) = $(HDRS) $(SUBDIRHDRS) $(SEARCH_SOURCE) ;
HDRGRIST on $(_grm) = $(HDRGRIST) ;
Date: Tue, 12 Mar 2002 13:44:37 -0000
From: "Joolz " <Joolz@rsd.tv>
Subject: Help With a Small Problem
I have a build working with JAM (god, it's easier than using makefiles), but
I have a small problem that I would like some help with, if possible.
My code in JAMRULES
actions Counter {
ECHO actions Counter $(<) ;
"tools\Counter\Counter" $(<) ;
}
rule IncRevision {
ECHO Rule IncRevision $(<) ;
Depends revision : $(<) ;
Counter $(<) ;
}
My Code in JAMFILE
Depends revision.c : $(MyLibraries) ;
IncRevision revision.c ;
Library revision : revision.c ;
What I am trying to do: when any of the libraries is modified by a change in
the source, the file revision.c (which contains a counter and date
information) gets passed to a tool called Counter, which updates it to
reflect a change in the project.
This then builds the library revision.a, which is then linked into the final image.
From: Craig Allsop <callsop@auran.com>
Date: Wed, 13 Mar 2002 08:46:36 +1000
Subject: file_dirscan
I'm interested to know if other NT jam users had problems with Clean when
using Glob? The del command under cmd shell does not like forward slash
characters in the filenames. I looked through the archives but could not
find any issues on this. The file_dirscan routine is the only place in jam
that is different. The Glob function uses this routine for collecting files
and this is where this character is introduced. If I use Glob to scan for
source files it causes problems with Clean. To solve this issue, I've made
the following change to file_dirscan to use the define for this character.
#ifdef __USE_NT_PATH_DELIM
sprintf( filespec, "%s%c*", dir, PATH_DELIM );
#else // !__USE_NT_PATH_DELIM
sprintf( filespec, "%s/*", dir );
#endif // !__USE_NT_PATH_DELIM
Should this correction be made to the original jam?
Subject: Re: file_dirscan
From: Matt Armstrong <matt@lickey.com>
Date: Tue, 12 Mar 2002 16:14:20 -0700
I think it should, but since PATH_DELIM is around for both NT and Unix
builds, jam can just use your __USE_NT_PATH_DELIM code for both.
From: Craig Allsop <callsop@auran.com>
Subject: RE: file_dirscan
Date: Wed, 13 Mar 2002 12:17:48 +1000
I agree, the #ifdef is only to show what was changed (an internal policy of ours).
Date: Tue, 12 Mar 2002 21:45:48 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: file_dirscan
I'm a little confused: filespec (with the / instead of \) isn't used to
construct full pathnames - path_build() is. path_build() is given the
original directory name and the file within that directory. It should
do the right thing on Windows and put in a \.
Admittedly, passing dir/* to Windows' findfirst/findnext isn't quite
right, but I don't see how that / shows up in the results of Glob.
Which jam are you using? (jam -v)
Date: Wed, 13 Mar 2002 08:54:27 -0800
From: rmg@perforce.com
Subject: Whence Jam 2.4?
As usual, good news and bad news.
The Bad:
As you may have noticed, the 2.4 release hasn't happened yet, two
weeks now beyond my stated target of 3/1/2002.
The Good:
Christopher has been spending time on Jam work, and this will
result in the presence in 2.4 of some features that wouldn't have
been present had we held to the 3/1 target.
Having learned better, I will not venture to predict a revised target
date, but will let slip that Christopher signed a recent email on the
topic with
Date: Tue, 19 Mar 2002 16:09:26 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: jam's attitude to libraries...
I just converted a project from make.
Preamble:
Once I had removed about 9,000 lines of makefile generation from the
makefiles, I was left with something like this:
app: main.c this/libthis.a that/libthat.a other/libother.a ...
$(CC) -o app main.c -lthis/this -lthat/that -lother/other ...
That's not syntactically correct, but you get the idea.
Jam doesn't quite like that approach. It builds all the libraries, and it
builds the application and links in the libraries, but the dependencies
aren't quite what I want.
Question:
I cannot find a pretty way to say that the application depends on
$(MYLIBS), so that the libraries must be built first. Hints?
Subject: Re: jam's attitude to libraries...
From: Matt Armstrong <matt@lickey.com>
Date: Tue, 19 Mar 2002 08:55:23 -0700
Does the LinkLibraries rule that comes with the standard Jambase do
what you want?
Example usage is in Jam's own Jamfile.
Date: Tue, 19 Mar 2002 17:06:35 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: jam's attitude to libraries...
It's meant to. But the project has libmumble.a in directory .../mumble,
and that confuses jam. It doesn't realize that the libmumble.a that's
being built is the mumble/libmumble.a that's needed by the application in
mumble's parent directory.
My options appear to be:
- use my evil hack
- use MakeLocate in each and every Jamfile to put the libraries in one
central location
Subject: Re: jam's attitude to libraries...
From: Matt Armstrong <matt@lickey.com>
Date: Tue, 19 Mar 2002 09:45:38 -0700
Once you start using the SubDir rule, Jam starts using grist (by
default). The library target will have grist stuck onto it, and the
gristed form is the one Jam knows about.
So you have to tell Jam which libmumble.a you want with explicit grist:
LinkLibraries myapp : <mumbledir1!mumbledir2>libmumble.a
Doing "jam -d5 | grep libmumble.a" might clue you into what Jam calls
the .a file.
You can probably make this nicer with a rule like this (untested):
rule MyLinkLibrary {
    local lib_grist = [ FGrist $(3) ] ;
    local lib = $(>:G=$(lib_grist:E)) ;
    LinkLibraries $(<) : $(lib) ;
}
And used like this:
MyLinkLibrary myapp : libmumble : mumbledir1 mumbledir2 ;
It basically takes care of putting the proper grist on libmumble, and
then calls LinkLibraries.
OR, you can eliminate all grist by doing this:
SOURCE_GRIST = ;
after every call to SubDir in your Jamfiles. This will cause problems
if you ever have two files or libraries of the same name in different
dirs, but will make things simpler otherwise.
Date: Tue, 19 Mar 2002 09:08:39 -0800 (PST)
Subject: Re: jam's attitude to libraries...
Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Rather than dealing with all grist stuff, can't you just simply put an
"include" in your Jamfile that references libmumble?
Date: Tue, 19 Mar 2002 18:26:08 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: jam's attitude to libraries...
The project has lots of files by the same name, unfortunately.
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Date: Tue, 19 Mar 2002 18:29:30 +0100
Subject: Re: jam's attitude to libraries...
That's effectively what I do now, not so prettily. (I just stuck a for
loop in there, making the Mail target depend on the gristed form of each
library. Grisly hack.)
The SubDir stuff seems to be the weakest part of Jam{,base} to me.
Date: Tue, 19 Mar 2002 09:52:28 -0800 (PST)
Subject: Re: jam's attitude to libraries...
Sorry, but I don't know what this means, in the sense of why it would
prevent you from using the "include" statement.
Date: Tue, 19 Mar 2002 19:12:33 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: jam's attitude to libraries...
As I understand it, I need gristing to differentiate between files of the
same name. To get gristing, I need to use SubDir, not include.
Date: Tue, 19 Mar 2002 10:31:58 -0800 (PST)
Subject: Re: jam's attitude to libraries...
Using SubDir/SubInclude doesn't prevent you from also using "include".
For example, in your Jamfile that has the Main for foo:
SubDir TOP src ;
Main foo : foo.c ;
LinkLibraries foo$(SUFEXE) : libbumble libmumble ;
include $(TOP)/bumble/Jamfile ;
include $(TOP)/mumble/Jamfile ;
If there's an a.c in both bumble and mumble, SubDir takes care of keeping
them uniquely named (by "gristing" them), so you don't have to deal with
any of that yourself.
From: "EXT-Goodson, Stephen" <stephen.goodson@boeing.com>
Subject: RE: jam's attitude to libraries...
Date: Tue, 19 Mar 2002 13:02:21 -0800
Our project has a similar layout to what you describe, and
LinkLibraries works fine for us. It gets the dependencies exactly
right, without any evil hacks, and without libraries in a central
location. Libraries don't get grist (at least in stock jam 2.3) so as
long as you don't have 2 libraries with the same name, there shouldn't
be any problem with jam determining what library you mean. It's hard
to guess what the problem might be without more details on what you're
doing, but if you're giving the path to libmumble in the LinkLibraries
rule, that might be what is confusing jam. You don't need to give the
path or suffix when you mention libmumble in the LinkLibraries rule.
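As a sketch of that last point (target and library names illustrative): give LinkLibraries the bare library name, with no path and no suffix.

```jam
# Confuses jam: path and suffix spelled out
LinkLibraries myapp : mumble/libmumble.a ;

# Works: bare name; jam matches it to the library target built elsewhere
LinkLibraries myapp : libmumble ;
```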
Subject: Re: jam's attitude to libraries...
From: Matt Armstrong <matt@lickey.com>
Date: Wed, 20 Mar 2002 11:50:03 -0700
I was wrong and confused myself and Arnt. The libraries themselves
are not gristed, so Diane's idea looks like it'll work. In fact, I
just assumed Arnt was already doing this. :-)
From: Vladimir Prus <ghost@cs.msu.su>
Date: Fri, 22 Mar 2002 12:58:50 +0300
Subject: "if" behaviour change from 2.3 to 2.4
the following code:
l = "" a b ;
if $(l) { ECHO "Okay" ; }
Behaves differently in 2.3 and the most recent version from the public
depot. Should this be considered a bug?
The problem is in compile.c:
LIST *
compile_eval(
    PARSE *parse,
    LOL *args )
{
    ...........................
    switch( parse->num ) {
    case EXPR_EXISTS:
        if( ll && ll->string[0] ) status = 1;
            ^^^^^^^^ here's the problem
It should check all the elements of the list.
It appears to be trivial to fix.
From: Craig Allsop <callsop@auran.com>
Date: Mon, 25 Mar 2002 14:55:13 +1000
Subject: Assert?
I'd be interested to know what other people have done to aid debugging
jamfiles. I'm considering adding an Assert rule that can be enabled via the
command line but is otherwise ignored. Does anyone have any suggestions? E.g.
Assert expr : msg ;
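One possible sketch of such a rule (untested; ASSERTS is an assumed variable that you would enable from the command line with "jam -sASSERTS=1"):

```jam
rule Assert {
    # $(<) is the expression's value, $(>) is the message.
    # When ASSERTS is unset, the rule is a no-op.
    if $(ASSERTS) && ! $(<) {
        EXIT "Assertion failed:" $(>) ;
    }
}

# Example usage:
Assert $(CC) : "CC must be set" ;
```

EXIT is the stock jam builtin that prints its arguments and aborts the build.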
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Assert?
Date: Mon, 25 Mar 2002 02:17:26 -0500
Boost.Jam contains a BACKTRACE builtin which is used to implement
assertions that show a trace of rule invocations and line numbers,
enabling you to pinpoint the source of an error.
Date: Tue, 26 Mar 2002 10:38:13 -0800
From: rmg@perforce.com
Subject: jam2.4, release candidate 1, now available
We've just finished rolling what changes we can into jam to make up
the 2.4 release. It is available at:
http://public.perforce.com/public/jam/jam-2.4.tar
http://public.perforce.com/public/jam/jam-2.4.zip
(Release notes only)
http://public.perforce.com/public/jam/src/RELNOTES
We've decided to give it the "rc1" (release candidate 1) tag. If nothing
compellingly broken is reported back in two weeks, we'll quietly remove
the "rc1" designation.
The changes between Jam 2.3[.2] and 2.4 are faithfully noted in the
RELEASE notes, but there were a few bugs introduced and fixed during
the 2.4-dev stage. These are only mentioned here:
- Change 1587 by rmg@rmg:pdjam:chinacat on 2002/03/25 11:32:53
if ( "" a b ) once again returns true.
Caught by Vladimir Prus <ghost@cs.msu.su>
- Change 1539 by seiwald@golly-seiwald on 2002/03/13 15:00:39
Fix definitions of FIncludes/FDefines for OS2 and NT, mistakes
caught by Craig McPheeters.
- Change 1537 by seiwald@golly-seiwald on 2002/03/12 16:29:31
Fix to 1319: make jam's &&, &, |, and || operators short circuit
as they did before. 'in' now short-circuits as well.
- Change 1489 by seiwald@thin-seiwald on 2002/02/27 23:29:36
Revert syntax of "expr : expr `in` expr" to jam 2.3's
"expr : arg `in` list", because:
a) It broke the precedence of `in` so that it was looser than
!, parsing "! xxx in yyy" as "( ! xxx ) in yyy".
b) It didn't allow providing an inline list, as in "$(f) in a b c".
Note that this release includes only the smallest of outside contributions.
Now is the time to examine more closely the major forks of jam to see
how much of them can and should be folded back.
Finally: Thanks again to everybody who's contributed to Jam.
We are very appreciative. Keep it up!
Date: Wed, 27 Mar 2002 10:43:17 +0100
From: "Niklaus Giger" <n.giger@netstal.com>
Subject: re: jam2.4, release candidate 1, now available
I compiled and installed jam 2.4rc1 without any problem on my WindowsNT
cygwin system.
The new Glob function does not work as expected.
I added the following two lines to the Jamfile:
echo Glob1 [ Glob . : jam*.c ] ;
echo Glob2 [ Glob . : *.c ] ;
Then I called ./jam0 and got the following results:
$ ./jam0
Glob1
Glob2 ./builtins.c ./command.c ./compile.c ./execmac.c ./execunix.c
./execvms.c ./expand.c ./filemac.c ./filent.c ./fileos2.c ./fileunix.c
./filevms.c ./glob.c ./hash.c ./headers.c ./jam.c ./jambase.c ./jamgram.c
./lists.c ./make.c ./make1.c ./mkjambase.c ./newstr.c ./option.c ./parse.c
./pathmac.c ./pathunix.c ./pathvms.c ./regexp.c ./rules.c ./scan.c
./search.c ./timestamp.c ./variable.c
...found 161 target(s)...
ng@WS1092 /cygdrive/e/jam/2_4_rc1
I expected the following output for Glob1
Glob1 ./jam.c ./jambase.c ./jamgram.c
Is this a bug or intended behaviour?
My workaround is:
echo Glob3 [ Glob . : ./jam*.c ] ;
Date: Fri, 29 Mar 2002 15:33:17 -0500 (EST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Proposed change to MATCH
I just ran across a situation where a variation of the new match rule
would be really useful. In a header rule, I need to parse a string which
contains a list of whitespace-delimited words into a jam list. The
template file I'm parsing describes includes as:
INCLUDE foo.h bar.h jaz.h
the template is processed to contain real #include's. I need to generate
these dependencies for the template file.
I can create a regex to scan the file to grab the list of headers, but
they are returned as a single string. Decomposing the string into a
real jam list is proving difficult - does anybody know a way to go
from "foo.h bar.h jaz.h" -> "foo.h" "bar.h" "jaz.h" ?
The new match rule looks like my best bet, except that it only performs
a single match against the string. What I want is for the following
to work:
local list = [ MATCH "[ ]*([a-z]+)" : "foo.h bar.h jaz.h" ] ;
With the mainline version, this returns a list with one item, "foo.h".
I've modified a local version to apply the regex against the string
until it runs out of matches. This does change how you use MATCH, but
I think it provides a capability that is otherwise missing from Jam.
It's also useful for quickly decomposing a path into its parts, rather than
the jam looping you need now:
local tokens = [ MATCH "([^/\\]+)" : "/this/is/a/filename.c" ] ;
-> tokens = "this" "is" "a" "filename.c" ;
The change to builtins.c is the modified function below. Basically,
rather than a single 'if' testing regexec() there is now a 'while' and
a little logic to find the end of the previous match. It's a simple change.
---
LIST *
builtin_match(
    PARSE *parse,
    LOL *args )
{
    LIST *l, *r;
    LIST *result = 0;

    /* For each pattern */

    for( l = lol_get( args, 0 ); l; l = l->next )
    {
        regexp *re = regcomp( l->string );

        /* For each string to match against */

        for( r = lol_get( args, 1 ); r; r = r->next )
        {
            char *string = r->string;

            while( string && regexec( re, string ) )
            {
                int i, top;

                /* Find highest parameter, set new string to its end */

                string = 0;

                for( top = NSUBEXP; top-- > 1; )
                    if( re->startp[top] != re->endp[top] )
                {
                    string = re->endp[top];
                    break;
                }

                /* And add all parameters up to highest onto list. */
                /* Must have parameters to have results! */

                for( i = 1; i <= top; i++ )
                {
                    char buf[ MAXSYM ];
                    int l = re->endp[i] - re->startp[i];
                    memcpy( buf, re->startp[i], l );
                    buf[ l ] = 0;
                    result = list_new( result, newstr( buf ) );
                }
            }
        }

        free( (char *)re );
    }

    return result;
}
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Proposed change to MATCH
Date: Fri, 29 Mar 2002 15:42:14 -0500
Boost.Build is doing this (with our SUBST rule inherited from FTJam, but
I think it can easily be made to work with MATCH):
# Returns a list of the following substrings:
# 1) from the beginning till the first occurrence of 'separator', or till the end,
# 2) between each occurrence of 'separator' and the next occurrence,
# 3) from the last occurrence of 'separator' till the end.
# If no separator is present, the result will contain only one element.
#
rule split ( string separator ) {
    local result ;
    local s = $(string) ;
    # Break pieces off 's' until it has no separators left.
    local match = 1 ;
    while $(match) {
        match = [ SUBST $(s) ^(.*)($(separator))(.*) $1 $2 $3 ] ;
        if $(match) {
            result = $(match[3]) $(result) ;
            s = $(match[1]) ;
        }
    }
    # Combine the remaining part at the beginning, which does not have
    # separators, with the pieces broken off.
    # Note that the rule's signature does not allow the initial s to be empty.
    return $(s) $(result) ;
}
Subject: Re: Proposed change to MATCH
From: Matt Armstrong <matt@lickey.com>
Date: Fri, 29 Mar 2002 13:58:04 -0700
You can do it in jam pretty easily. I called the rule Split, and
you'd use it like:
list = [ Split "foo.h bar.h jaz.h" : "[ ]+" ] ;
This rule is written for my MATCH rule, which I think is different
from the one now in stock jam (someday I'll have time to sync up).
This isn't to say that there is no argument for similar functionality
written in C.
Also, recently, Chris installed some new behavior to stock Jam's MATCH
rule that I haven't had time to look at. It might be relevant.
#
# Return a list consisting of a string split where a regexp matches
#
# Usage: var = [ Split string : regexp ] ;
#
# This rule requires the builtin function MATCH, which is not part
# of stock jam (as of this writing).
#
rule Split {
    local match = [ MATCH $(1) : "^(.*)("$(2)")(.*)" ] ;
    if $(match) && $(match[2]) != $(1) {
        local last, element ; {
            last = $(element) ;
        }
        return [ Split $(match[2]) : $(2) ] $(last) ;
    } else {
        return $(1) ;
    }
}
Date: Fri, 29 Mar 2002 16:37:38 -0500 (EST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Proposed change to MATCH
Oops. It turns out this is easily accomplished with the new builtin
MATCH rule, thanks to David and Matt for pointing this out. The rule
I have now which works with the jam mainline is:
rule Split {
    local pat = ([^$(2:E=$(SLASH))]+)(.*) ;
    local match = [ MATCH $(pat) : $(1) ] ;
    local result = $(match[1]) ;
    local string = $(match[2]) ;
    while $(string) {
        match = [ MATCH $(pat) : $(string) ] ;
        result += $(match[1]) ;
        string = $(match[2]) ;
    }
    return $(result) ;
}
---
Running it with:
a = [ Split " foo.h bar.h jaz.h " : " " ] ; # last string is space-tab
ECHO files are .$(a). ;
a = [ Split "foo.h" : " " ] ;
ECHO file is .$(a). ;
a = [ Split "/this/is/a/filename.c" ] ;
ECHO components are .$(a). ;
a = [ Split "this" ] ;
ECHO component is .$(a). ;
yields:
files are .foo.h. .bar.h. .jaz.h.
file is .foo.h.
components are .this. .is. .a. .filename.c.
component is .this.
Hey, I like this new version of Jam!
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Sat, 30 Mar 2002 08:05:36 -0500
Subject: GLOB behavior on NT
The previously-noted behavior that GLOB prepends the entire directory
name to its result has some unintended consequences on NT. In order to
match *.jam in the directory c:\foo\bar\baz, you need the following invocation:
GLOB c:\\foo\\bar\\baz : c:\\\\foo\\\\bar\\\\baz\\\\*.jam
I am using the following to work around the issues:
# A fix for the broken behavior of built-in glob
rule glob ( dirs * : patterns * ) {
    return [ GLOB $(dirs:T) : $(dirs:T)\\$(SLASH)$(>) ] ;
}
Where :T changes backslashes to forward slashes (I encourage Perforce to
adopt :/ and :\ as primitives instead). A workaround that uses no
language extensions would be much more complicated, using MATCH to split
the path and :J=/ to re-join it.
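That extension-free workaround might look roughly like this (untested sketch; it assumes a MATCH that returns every match in the string, as in Craig McPheeters' proposal earlier in this thread, and the names here are illustrative):

```jam
rule glob2 {
    # Split the directory on slashes or backslashes...
    local parts = [ MATCH "([^/\\]+)" : $(<) ] ;
    # ...then re-join the pieces with forward slashes.
    local dir = $(parts:J=/) ;
    return [ GLOB $(dir) : $(dir)/$(>) ] ;
}
```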
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Sun, 31 Mar 2002 21:46:48 -0500
Subject: Odd behavior of MATCH
input:
x = [ MATCH (foo)(.*) : foo ] ;
ECHO -$(x)+ ;
output:
-foo+
Shouldn't this print:
-foo+ -+
instead? The 2nd pattern was matched, after all.
From: Badari Kakumani <badari@cisco.com>
Date: Sun, 31 Mar 2002 21:20:58 -0800
Subject: bug in Archive rule?
i am trying to debug a situation which is dropping one of our object
files (G0_distrib_lib_msg_gen.o) from
the corresponding library (libens.g0.a).
i ran jam with -d9 and have included the relevant output from jam.
jam is simply skipping
<infra!distrib!lib!obj-4k>libens.g0.a(G0_distrib_lib_msg_gen.o).
which should have been part of $(>) for the Archive rule.
does anyone have ideas on what could cause jam to drop items
from the $(>) list? is this a known bug?
relevant portions of 'jam -d9' output:
=======================================
get AR = ar.98r1-v0.mips64 -csr
expanded to ar.98r1-v0.mips64 -csr
expand '$(<)'
expand '$(>)'
Archive /vws/vpw/badari/jam_cleanup/infra/distrib/lib/obj-4k/libens.g0.a
From: Badari Kakumani <badari@cisco.com>
Date: Mon, 1 Apr 2002 07:55:51 -0800
Subject: Re: bug in Archive rule?
this file, G0_distrib_lib_msg_gen.o, is getting archived into THREE libraries.
so it was already present when libens.g0.a was needed to be built.
they had KEEPOBJS enabled for this segment of Jamfile and that kept all
the object files around after they are built and archived.
what if KEEPOBJS is set and all the objects are present AND the library
was accidentally removed? would that cause the library to be NOT generated
at all (since none of the objects themselves got built)?
i think 'updated' should mean that:
a) if the library already had the object AND
b) the object was updated.
In this particular case, since libens.g0.a was non-existent, a) above
is NOT satisfied, so the 'updated' clause should NOT have kicked in and jam
should have proceeded to archive G0_distrib_lib_msg_gen.o.
that might be the bug in jam.
to work around this bug, i tried putting in 'actions' for 'Archive' WITHOUT
the 'updated' modifier. when i put the modified Archive in Jamrules, still
jam has the buggy behaviour.
Only when i overrode the Jambase with -f Jambase.txt, AND my Jambase.txt did
NOT have the 'updated' modifier, did it work properly.
so we may have to over-ride the Jambase to fix this problem.
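For reference, the Jambase Archive actions under discussion carry the modifier in question, and the workaround above amounts to dropping `updated` (sketch based on the stock jam 2.x Jambase; check your copy for the exact flags):

```jam
# Stock form -- only members updated this run are passed in $(>):
actions updated together piecemeal Archive {
    $(AR) $(<) $(>)
}

# Workaround form -- all members are passed, at the cost of
# re-archiving objects that did not change:
actions together piecemeal Archive {
    $(AR) $(<) $(>)
}
```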
Date: Mon, 1 Apr 2002 10:23:47 -0800 (PST)
Subject: Re: bug in Archive rule?
It shouldn't matter that the .o files are kept around -- what matters is
whether the object file inside the archive is older than the .o outside of
the archive, in which case it should be included in the ar update.
If your Jambase is compiled into your 'jam' executable, changes you make
to Jambase won't take effect (unless you explicitly point to it) -- you'd
need to make the changes in jambase.c and rebuild 'jam'.
From: "David Hoogvorst" <dc.hoogvorst@inter.nl.net>
Date: Mon, 1 Apr 2002 20:48:35 +0200
Subject: Jam and static source checks. Any advice?
I'm quite new to jam, but quite enthusiastic I should say, and am trying to
change the build system at my work to jam. Now, we have a home-made system
merely based on jam, but with a lot of Perl and Ruby scripting around it.
We have been developing software since 1982, so parts of the system are in Fortran,
parts in C and parts in C++. Furthermore, a lot of code is generated with
Pascal tools and Perl and Ruby scripts, due to (inter)nationalization needs
in the Fortran bit. The whole codebase is about 10,000 files I guess.
What we do, is that while developing (on NT or W2000), one has a local copy
of the module in a working directory. (We don't use the MS Visual
environment or anything like that). The rest of the sources remain in the
repository. Whenever one tries to build, his sources are put through a
number of static checks (line length, diacritic symbols, lint), to avoid as
many problems as possible when porting the software to the platforms we do:
OpenVMS, AS/400, IBM Mainframe, most Unixes (Unices?), and NT/W2000.
I've been struggling getting my jambase right. What I want to do first, is
to get the sources through these static checks. These checks are all command
line tools that exit with a nonzero status if not successful. Furthermore, they yield
report files.
What I want is the following: If the static check is successful, remove all
the report files. If the static check fails, leave the report files and stop
the build (or leave a clear message where the build started to go wrong).
Jam cleans up when things go wrong, and leaves no trace, so that's just the
opposite of what I want.
Furthermore, I wonder how I should handle this local work directory.
Thanks a lot in advance. About 30 developers are straining at the leash to
get going with jam...
From: "David Hoogvorst" <dc.hoogvorst@inter.nl.net>
Date: Mon, 1 Apr 2002 20:51:08 +0200
Subject: Jam static checks, correction...
Sorry, the current system is not based on jam, but on make...
From: Vladimir Prus <ghost@cs.msu.su>
Date: Wed, 3 Apr 2002 11:41:08 +0400
Subject: "echo" behaviour
I've noted that the following:
echo "hi" ;
outputs the character sequence 'h', 'i', ' ', '\n'.
Note the trailing space. Why would I care? The reason is that when writing tests
for Boost.Build I will surely need to compare jam output with expected
output, and having trailing spaces in tests can lead to many problems. Would
it be reasonable to apply the following trivial patch?
--- lists.c	Sun Mar 31 17:18:56 2002
+++ ../../../../boost-cvs/tools/build/jam_src/lists.c	Sun Mar 31 17:16:32 2002
@@ -172,8 +172,12 @@
 void
 list_print( LIST *l )
 {
-	for( ; l; l = list_next( l ) )
-	    printf( "%s ", l->string );
+	LIST *p = 0;
+	for( ; l; p = l, l = list_next( l ) )
+	    if( p )
+		printf( "%s ", p->string );
+	if( p )
+	    printf( "%s", p->string );
 }
 /*
Date: Wed, 03 Apr 2002 11:21:52 -0800
From: rmg@perforce.com
Subject: jam2.4, release candidate 2, now available
Based on feedback from jam2.4rc1 use, we have made some changes, which
become jam2.4rc2. These are now available for download as
http://public.perforce.com/public/jam/jam-2.4.tar
http://public.perforce.com/public/jam/jam-2.4.zip
ftp://ftp.perforce.com/jam/jam-2.4.tar
ftp://ftp.perforce.com/jam/jam-2.4.zip
From the RELNOTES:
| 0. Changes between 2.4rc1 and 2.4rc2:
|
| THESE NOTES WILL BE REMOVED WITH THE FINAL 2.4 RELEASE, SINCE THEY
| REFER EXCLUSIVELY TO ADJUSTMENTS IN BEHAVIORS NEW BETWEEN 2.3 and
| 2.4:
|
| Make MATCH generate empty strings for () subexpressions that
| match nothing, rather than generating nothing at all.
| Thanks to David Abrahams.
|
| GLOB now applies the pattern to the directory-less filename,
| rather than the whole path. Thanks to Niklaus Giger.
|
| Make Match rule do productized results, rather than
| using just $(1[1]) as pattern and $(2[1]) as the string.
(For more detail on the effect of these changes, please refer to the
change descriptions of changes 1601, 1612, 1616 and 1614 in the Public
Depot, and to the updated Jam.html included in the release.)
If no additional compelling bugs are reported back in two weeks, we'll
quietly remove the "rc2" designation.
Date: Thu, 4 Apr 2002 15:40:10 -0800 (PST)
Subject: MS VC++ and Perforce
We are looking to move to Perforce and have a major
stumbling block. Many people at my company
are familiar with Visual Studio and hope to keep using it
for development as well as debugging. I think
debugging is the biggest hurdle.
Please excuse my Microsoft ignorance, but if we use
the 'cl' command line compiler then we seem forced to
debug using VC++. However to take full advantage of
the MS VC++ debugger we need the binary to be part of
a VC++ project.
How are others handling this? Can you offer some
advice? I'm hoping to avoid checking the .dsp and .dsw
files into source control along with the Jamfile.
Subject: RE: MS VC++ and Perforce
Date: Thu, 4 Apr 2002 17:00:20 -0700
From: "Mike Steed" <msteed@altiris.com>
We use jam with the Microsoft compiler (among others).
To debug an exe built without a Visual Studio project, start the IDE,
then choose File->Open Workspace, change the file type to "Executable Files",
and open your exe. You can set command line parameters for your app with
Project->Settings->Debug.
From: Roger Lipscombe <RLipscombe@sonicblue.com>
Subject: RE: MS VC++ and Perforce
Date: Fri, 5 Apr 2002 01:41:48 -0800
I wrote some (simple) notes after trying to use jam for building Win32 apps.
Start here: http://www.differentpla.net/~roger/devel/jam/
In particular, look here:
http://www.differentpla.net/~roger/devel/jam/tutorial/mfc_app/building_in_devstudio.html
Date: Sun, 7 Apr 2002 18:15:25 -0700
From: "Steve Johnson" <steve_johnson@Equilibrium.com>
Subject: A newbie question
I'm just starting to play with Jam. I'm having trouble figuring out
just how the locations of sources and targets can/should be specified.
I've read the 3 main documents that explain abstractly what SubDir,
SEARCH, LOCATE, SEARCH_SOURCE, LOCATE_TARGET are for, but I can't seem
to make them work quite right. I've had better luck with these if I set
them globally rather than per target, but I know I'm not really supposed
to do that. Unless I'm missing something, no examples are provided in
the main documentation to describe how these should really be used.
Can anyone point me at any additional documentation or examples that can
show me how to use these variables correctly? Are there any actual
public domain projects out there that use Jam that I could download and
look at?
My particular situation is this...I'm building a fairly standard tree
structure using the SubDir rule at the top of each of my Jamfiles. The
tree is populated with various subdirectories, some of which contain
sources for library modules, and others that contain executable sources.
For each module directory, I have source files in a "Sources" subdir,
and headers in an "Includes" subdir. I want to put my object files
(both intermediates and final targets) in a separate tree from my
sources, hopefully in a hierarchy that mirrors the source hierarchy.
Date: Mon, 8 Apr 2002 02:18:07 -0700 (PDT)
Subject: Re: A newbie question
If you don't set $(TOP), all the subdirs are relative instead of full
paths. So set ALL_LOCATE_TARGET in a Jamrules file to point to the
build-tree path (up to where you want the source-tree hierarchy to begin),
and set $JAMRULES in your env to point to that file.
For example:
$ cat $JAMRULES
BUILD_DIR = $(HOME)/build ;
ALL_LOCATE_TARGET = $(BUILD_DIR) ;
$ cd $HOME/work/jam
$ cat Jamfile
SubDir TOP ;
SubInclude TOP src ;
$ cat src/Jamfile
SubDir TOP src ;
Main foo : foo.c ;
$ jam -n
...updating 2 target(s)...
Cc /home/me/build/foo.o
/usr/bin/gcc -c -D__cygwin__ -O -Isrc -o /home/me/build/foo.o src/foo.c
Link /home/me/build/foo.exe
/usr/bin/gcc -D__cygwin__ -o /home/me/build/foo.exe /home/me/build/foo.o
Chmod1 /home/me/build/foo.exe
chmod 711 /home/me/build/foo.exe
...updated 2 target(s)...
Date: Mon, 08 Apr 2002 13:47:36 +0100
From: Derek Burgess <derek.burgess@cursor-system.com>
Subject: Why does changing a variable locally affect a global value?
I have an IDL rule I borrowed from the archive. I want to modify the
LOCAL_IDLFLAGS in Idl1 actions with a line in a local Jamfile like this:
LOCAL_IDLFLAGS += -m $(TOP)/foo/bar/message_ids.decl ;
I find that this value is then fixed. So if I have a number of Jamfiles
that I want to set a local value for LOCAL_IDLFLAGS I get just one of
the values for all invocations of the rule.
Have I missed something ? Any suggestions appreciated.
rule Idl {
    # based on the Yacc rule
    local h = $(<:BS=.h) ;
    MakeLocate $(<) $(h) : $(LOCATE_SOURCE) ;
    # Some places don't have an Idl.
    if $(IDL) {
        Depends files : $(<) $(h) ;
        Depends $(<) $(h) : $(>) ;
        Idl1 $(<) $(h) : $(>) ;
        Clean clean : $(<) $(h) ;
    }
    INCLUDES $(<) : $(h) ;
}
actions Idl1 {
    $(IDL) $(LOCAL_IDLFLAGS) $(IDLFLAGS) -c -i $(>) -o $(LOCATE_SOURCE)/$(>:B)
}
From: Roger Lipscombe <RLipscombe@sonicblue.com>
Subject: RE: Why does changing a variable locally affect a global value?
Date: Mon, 8 Apr 2002 06:02:43 -0700
There's no such thing as local variables (on a per-Jamfile basis). All of
the Jamfiles are wedged together into one big (conceptual) Jamfile. I think
that the closest way to get what you want is either:
1. LOCAL_IDLFLAGS on foo += whatever ;
2. Reset LOCAL_IDLFLAGS in SubInclude or SubDir.
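A sketch of option 1 using jam's target-specific variable syntax (the target name here is illustrative; it must be the actual target the Idl1 action runs on, grist included, not the library that eventually consumes it):

```jam
# Per-target setting: jam expands $(LOCAL_IDLFLAGS) inside the Idl1
# actions using values set "on" the targets being updated.
LOCAL_IDLFLAGS on simulated.c simulated.h +=
    -m $(TOP)/foo/bar/message_ids.decl ;
```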
Date: Mon, 08 Apr 2002 14:55:07 +0100
From: Derek Burgess <derek.burgess@cursor-system.com>
Subject: Re: Why does changing a variable locally affect a global value?
Ah thanks. I think that's put me on the right track but I still have a problem
- the attempt to set LOCAL_IDLFLAGS has seemingly no effect. In my (sub)Jamfile
I have the following:
name = simulated ;
lib = $(PREFLIB)$(name) ;
dll = $(PREFDLL)$(name:S=$(SUFDLL)) ;
LOCAL_IDLFLAGS on $(lib) += -m $(SEARCH_SOURCE)/message_ids.decl ;
Library $(lib) : simulated.idl xxxsimulated.cpp ;
MainFromObjects $(dll) ;
( Jamrules as before )
But I get only this generated:
./bin/idl -m -c -i foo/bar/simulated.idl -o ./foo/bar/simulated
^^^ nothing here
Date: Mon, 8 Apr 2002 07:49:01 -0700 (PDT)
Subject: Re: A newbie question
Never mind. I must've been on drugs when I wrote that -- for some reason,
I saw it including the subdirectory name as well, but of course, it
wasn't, and would just end up putting everything into the same directory.
Once I have several cups of coffee this morning, I'll get back to this
(unless someone more coherent gets to it first :)
From: "EXT-Goodson, Stephen" <stephen.goodson@boeing.com>
Subject: RE: A newbie question
Date: Mon, 8 Apr 2002 13:45:48 -0700
If you are using the SubDir rule, it will set SEARCH_SOURCE, LOCATE_SOURCE,
and LOCATE_TARGET for you in each directory. The other jam rules then
use these values to set target specific SEARCH and LOCATE, which are the
variables that actually control where stuff is searched for and located.
Unfortunately, the SubDir rule won't set them quite the way you want.
You'll either need to modify SubDir, or after every time you use the
SubDir rule, put something like:
LOCATE_TARGET = [ FDirName $(ALL_LOCATE_TARGET) $(SUBDIR_TOKENS) ] ;
LOCATE_SOURCE = [ FDirName $(ALL_LOCATE_TARGET) $(SUBDIR_TOKENS) ] ;
where you have set ALL_LOCATE_TARGET in your Jamrules file to the top of the
tree you want generated files to appear in. [As an aside, wouldn't this
be a more sensible way for the built-in SubDir rule to use ALL_LOCATE_TARGET?]
Then, as long as sources appear in the directory with the Jamfile, and you
want generated files to go to the corresponding place under
ALL_LOCATE_TARGET, you shouldn't have to do anything.
If there are special cases that require setting target specific SEARCH or
LOCATE, don't forget to use the gristed name of the target.
Date: Mon, 8 Apr 2002 16:02:46 -0700 (PDT)
Subject: RE: A newbie question
Just a couple of notes...
If you have executables under your various modules that have the same
name, Stephen's approach won't work right, unless you qualify the names in
the Jamfile because, otherwise, the "LOCATE on" will be the last one set.
In other words, if you have something like:
/home/me/work/src/mod1/Sources/foo.c
/home/me/work/src/mod2/Sources/foo.c
And in your Jamfiles you have:
Main foo : foo.c ;
then 'foo(.exe)' will end up getting linked, first against the mod1 foo.o,
but put into the mod2 build output dir, then linked again against the mod2
foo.o and, again, put into the mod2 build output dir. So you'd need to do
something like:
Main $(LOCATE_TARGET)/foo : foo.c ;
or
Main <mod1>foo : foo.c ;
in order to keep them separate. (Of course, if you never have exe's with
the same name, then there's no problem.)
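A hedged sketch of the gristed-name option: a small wrapper around Main so each module's executable is automatically gristed with its subdirectory (GristedMain is a made-up name, and SOURCE_GRIST is the grist the stock SubDir rule computes; check your Jambase):

```jam
rule GristedMain {
    # Grist the target with the current subdir's grist, so that
    # mod1's foo and mod2's foo stay distinct in the dependency graph.
    Main <$(SOURCE_GRIST)>$(<) : $(>) ;
}
```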
Also, with Stephen's way of doing it, you'll get the entire subdir
structure recreated in the build output tree, including not only the
module subdirs but the Sources subdir under them as well. If that's not
what you want (I doubt I would :), then you can instead use:
LOCATE_TARGET = [ FDirName $(ALL_LOCATE_TARGET) $(SUBDIR_TOKENS[1]) ] ;
LOCATE_SOURCE = [ FDirName $(ALL_LOCATE_TARGET) $(SUBDIR_TOKENS[1]) ] ;
which will put the targets into /path/to/build/tree/{mod1,mod2,modN...}
rather than /path/to/build/tree/{mod1,mod2,modN...}/Sources.
Also, you can put the above lines in a file, then just use the "include"
directive in your individual Jamfiles, so if you ever need to change them,
you can do it in just one place (and it helps keep the Jamfiles a bit less
cluttered, as well).
Guess that's about it (hope this one's a bit more correct :)
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Mon, 8 Apr 2002 21:11:48 -0500
Subject: Force a target's removal in case of failure?
Does anyone have a trick for this?
I want to arrange that if any of a particular target's direct or
indirect dependencies fails to build, that target is removed.
I've tried about 100 different combinations of approaches to get this to
happen, but haven't come up with anything. I'm about to resort to a Jam
patch, but I'd rather not.
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Mon, 8 Apr 2002 22:17:15 -0500
Subject: LEAVES
The LEAVES rule doesn't seem to have the effect I'd expect it to.
Namely, given a dependency graph like this:
Depends A : B ;
Depends B : C ;
LEAVES A ;
with A and C up-to-date, a request to build A will still cause B to be built.
Does anyone know a way to prevent intermediate targets from being built
if the root target is up-to-date?
Date: Tue, 09 Apr 2002 17:29:44 +0100
From: Derek Burgess <derek.burgess@cursor-system.com>
Subject: How do you describe a target when using the 'on' rule?
I have a Jamfile (below) in which I want to set LOCAL_IDLFLAGS.
I had this working briefly once (without understanding why) when ...
s/simulated.cxx/simulated.h/ in the Jamfile
I have had to change the idl rule slightly and can no longer get
LOCAL_IDLFLAGS to have any effect.
I suppose my question is what is the right way to construct the TARGET
in a line like:
THING on TARGET = whatever ;
{Here is the output from ./jam -- see the target of the Idl1 rule is
'simulated.cxx' - so why does the ON rule not have any effect?
Idl1 ./gen/QNXNTO/parts/lmu3/comms/tests/pseudomodem/simulated.cxx
./bin/QNX/idl -s ./aux/idl/symbols.decl -m
./aux/idl/message_ids.decl -c -i
parts/lmu3/comms/tests/pseudomodem/simulated.idl -o
./gen/QNXNTO/parts/lmu3/comms/tests/pseudomodem/simulated ;
}
Jamfile:
name = simulated ;
lib = $(PREFLIB)$(name) ;
dll = $(PREFDLL)$(name:S=$(SUFDLL)) ;
LOCAL_IDLFLAGS on simulated.cxx = -m $(SEARCH_SOURCE)/message_ids.decl ;
Library $(lib) : simulated.idl xxxsimulated.cpp ;
There is a rule in Jamrules for idl files:
rule UserObject {
switch $(>) {
case *.idl : C++ $(<) : $(<:S=.cxx) ;
Idl $(<:S=.cxx) : $(>) ;
case * : EXIT "Unknown suffix on" $(>) "- see UserObject rule
in Jamfile(5)." ;
}
}
rule Idl {
# based on the Yacc rule
local h ;
h = $(<:BS=.h) ;
MakeLocate $(<) $(h) : $(LOCATE_SOURCE) ;
# Some places don't have an Idl.
if $(IDL) {
Depends files : $(<) $(h) ;
Depends $(<:B) $(h) : $(>) ;
Idl1 $(<) : $(>) ;
Clean clean : $(<) $(h) ;
}
INCLUDES $(<) : $(h) ;
}
actions Idl1 {
$(IDL) $(LOCAL_IDLFLAGS) $(IDLFLAGS) -c -i $(>) -o $(<:DB) ;
}
From: "Kai Wang" <wangk@rpi.edu>
Subject: A newbie question - how to delete object file after a DLL building
Date: Thu, 11 Apr 2002 16:29:21 -0400
When building a DLL from both sources and built libraries, I use the
following:
LINKLIBS = .... (external libs) ;
SRCS = foo1.cpp foo2.cpp... (source files) ;
LOCATE_TARGET = $(BINDIR) ;
Main myproj$(SUFSHR) : $(SRCS) ;
LinkLibraries myproj$(SUFSHR) : $(LINKLIBS) ;
The DLL myproj.dll is successfully built and put in $(BINDIR), but foo1.obj,
foo2.obj, ... are not deleted automatically and are messing up the $(BINDIR)
directory. I played with the RmTemps rule, but it doesn't work for me.
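One approach sometimes suggested (a sketch only; I haven't checked it against every Jambase version) is to pass the object list to RmTemps explicitly, so the objects are removed once the DLL links:

```jam
SRCS = foo1.cpp foo2.cpp ;
OBJS = $(SRCS:S=$(SUFOBJ)) ;

LOCATE_TARGET = $(BINDIR) ;
Main myproj$(SUFSHR) : $(SRCS) ;
LinkLibraries myproj$(SUFSHR) : $(LINKLIBS) ;

# Remove the intermediate objects after the DLL has been built.
RmTemps myproj$(SUFSHR) : $(OBJS) ;
```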
From: Toon Knapen <toon.knapen@si-lab.org>
Date: Wed, 17 Apr 2002 15:11:14 +0200
Subject: shell-commands
Since my jam build also needs to call `make` for some small subsystem, I
defined the following in my Jamfile.
The echo in my rule is executed, but not the echo (nor the call to make) in
the actions. What am I doing wrong?
rule call_make_all {
Depends $(<) : $(>) ;
ECHO calling make ;
}
actions call_make_all {
echo executing make all ;
make all ;
}
call_make_all subsystem$(SUFEXE) : subsystem$(SUFOBJ) ;
Date: Wed, 17 Apr 2002 08:26:13 -0700 (PDT)
Subject: Re: shell-commands
If you do 'jam subsystem[.exe]', you'll see your actions get run. In other
words, you don't have a dependency in your rule on anything other than the
target itself. Try adding:
Depends exe : $(<) ;
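Applied to the rule from the question, that gives something like this (a sketch; "exe" is the pseudo-target the stock Jambase hangs executables off, which "all" depends on):

```jam
rule call_make_all {
    Depends $(<) : $(>) ;
    # Hook the target into the default build: "all" depends on "exe",
    # so plain 'jam' will now reach this target and run its actions.
    Depends exe : $(<) ;
}
```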
From: Toon Knapen <toon.knapen@si-lab.org>
Date: Wed, 17 Apr 2002 22:19:01 -0400
Subject: MkDir
If I want jam to just create a directory for me, shouldn't a Jamfile
containing only the following line:
MkDir mydirectory ;
be sufficient? There's no way to add a dependency, AFAICT?
Date: Wed, 17 Apr 2002 13:42:04 -0700 (PDT)
Subject: Re: MkDir
A directory made with MkDir becomes a dependency of the pseudo-target "dirs",
so you'd need to run 'jam dirs'. If you don't want to have to do that step, you can have a rule that does:
rule MakeDir {
Depends all : dirs ;
MkDir $(<) ;
}
then use MakeDir instead of MkDir.
Date: Wed, 17 Apr 2002 15:41:23 -0500 (CDT)
Subject: Re: MkDir
Jam is dependency driven: if nothing depends upon that directory
being there, then there is no reason to create it.
If something does depend upon it, then you'll need to express that:
Depends mydirectory : somethingelse ;
There are some other rules that conveniently create dirs
as needed to place files in them (MakeLocate, I believe).
I'm a little rusty.
From: Derek Burgess <derek.burgess@cursor-system.com>
Date: Mon, 22 Apr 2002 10:39:12 +0100
Subject: Dependency on autogenerated headers
Any idea how do I deal with the following :
2 inital files: foo.idl bar.cpp
An Idl compiler reads foo.idl and creates 2 new files: foo.cpp and foo.h
bar.cpp #includes foo.h
bar.cpp is therefore always older than foo.h and so gets recompiled every time...
Date: Mon, 22 Apr 2002 16:26:38 -0400 (EDT)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Another jam extension in my personal branch
I've added another extension to Jam in my personal branch in the Perforce
public depot. Here it is; if you need something like this in the future, you
know where to look.
The extension is a way to serialize actions in parts of the graph so only
one is run at a time.
This gets a little technical, if you're not interested in jam source
extensions you may want to ignore this.
The problem I had been trying to solve has to do with how you compile
files on NT which use PDB debug information. There are two debug formats
on NT, C7 and PDB. C7 debug info gets placed into the .obj files, which
works really well with Jam. PDB debug info is stored in an external file,
which the compiler must open for writing when it is compiling a file. The
problem had to do with running Jam with multiple jobs, turning on pre-compiled
headers and using PDB debug info. With that combination, you need to ensure
only one compile job is running at a time which references the common .pdb
file.
When you use pre-compiled headers, you compile some source file to generate
the pre-compiled header info. Assume pch.cpp is compiled and produces a
pch.pch pre-compiled header file. That .pch file is provided to all of the
remaining related compiles to provide the pre-compiled header info. When
PDB debugging is being used, the compile which produces pch.pch will also
produce pch.pdb - the debug info. Any compile which is provided pch.pch
must also provide the pch.pdb file. Specifying a different .pdb file
results in a compile error. This is where the compiles are forced to
become serial.
C7 debug info has a hard limit of 64K symbols, which wasn't enough for us.
PCH is a significant compile win for us. We have many collections of files
which use different PCH files, so there is a lot of available work to do
but each of the pools of files must only have one job active at a time.
The approach I used to solve this was to allow you to specify a semaphore
node for targets. This is done with:
JAM_SEMAPHORE on $(target) = nodeName ;
The semaphore node can be placed on a related set of targets, and only one
of those targets will have an active job running on it at a time. The
semaphore node shouldn't be part of the graph; nothing should depend on it.
It's treated specially by jam with this extension, which changes the normal
binding/execution treatment of nodes.
With this facility, my problem can be solved by setting common semaphore
nodes on all .obj files which reference the same .pch file when PDB debug
info is enabled. If no semaphore is set on a node, there is no change from
the old jam behaviour.
Jam semaphores can also be used to ensure that only one I/O-intensive
operation is active at a time. For example, we have an action which links a
shared object. This action is often very I/O intensive, consuming vast
amounts of memory. By setting a common semaphore on this target, only one
shared-object link happens at a time.
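Usage might look like this (a sketch built from the JAM_SEMAPHORE syntax above; PCH_OBJS and the semaphore node names are hypothetical):

```jam
# Serialize every compile that writes to the shared pch.pdb file.
local obj ;
for obj in $(PCH_OBJS) {
    JAM_SEMAPHORE on $(obj) = pch-pdb-lock ;
}

# Likewise, allow only one I/O-heavy shared-object link at a time.
JAM_SEMAPHORE on libfoo$(SUFSHR) = shlib-link-lock ;
```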
These semaphores are used to solve a problem the normal dependency graph
can't express. A dependency over time rather than through files or
relationships. In my experience to date semaphores have a very limited
scope where they are useful, most problems can be fixed using the other
techniques. Yacc rules which use common temporary files can be solved
by creating unique sub-directories for example. So while this is available,
it should be used with a little restraint.
The extension turned out to be relatively minimal, I think, much less bad
than I thought it would be. It's available in:
//guest/craig_mcpheeters/jam/src/...
That area is up-to-date with respect to the jam mainline (which is now
the official 2.4 release - way to go!)
Date: Mon, 22 Apr 2002 18:31:05 -0700 (PDT)
Subject: Re: Dependency on autogenerated headers
Don't foo.{cpp,h} only get re-gen'd when foo.idl is newer -- and then
wouldn't you want bar.cpp to be recompiled?
Date: Tue, 23 Apr 2002 16:23:06 -0700 (PDT)
Subject: Builds don't stop on error
I've seen a few posts about this subject before, but cannot find a clear
answer. When JAM compiles a source file and the compile fails, JAM happily
continues to attempt to link the file into a .lib, and then link the
.libs into an EXE. Of course, these actions all fail, since the original
source file failed to compile in the first place.
Why doesn't JAM stop on the first error? I've tried the -q option, but it
has no effect. Here are my rules:
Main $(TARGET) : ; # Target source files
LinkLibraries $(TARGET) : data.lib # Target libraries
common.lib
... etc. more lib files ;
Library data.lib : tester.c
... etc. more source files ;
Library common.lib : ... etc. more source files ;
From: "Toqir Khalid" <toqir.khalid@openwave.com>
Date: Thu, 25 Apr 2002 10:37:22 -0700
Subject: Setting profile information with JAM
Is it possible to set any flags/options so that you can get profile
information for the binary that you are trying to build, using JAM. For
example with CC you can set -gp to get gprof information. Is there anything
within JAM that you can set to get this information.
From: michael.allard@acterna.com
Date: Thu, 25 Apr 2002 13:31:15 -0500
Subject: Re: Setting profile information with JAM
Here is how I handled this.
1. I created a "GP" variable to hold the profiling flags (architecture/compile
specific - Solaris wants -xpg, AIX wants -pg, etc.)
2. I use an "ARCH" variable for object files - we store our object files in an
architecture-specific subdirectory underneath the source directory. For
example, if the source is in "foo/source", the objects end up in
"foo/source/$(ARCH)".
3. I added support for a "PROF" flag to be set on the command line, e.g. "jam
-sPROF=Y".
4. If $(PROF) is Y, then I add a ".p" to the ARCH variable (e.g., it becomes
"AIX.p" or "SunOS.p"). This makes profiled objects end up in separate object
directories from their non-profiled counterparts (which IMO is only sane :-). I
also then set GP to its proper value; otherwise it's blank.
Snippets from my Jamrules:
if $(OS) = SOLARIS {
ARCH ?= SunOS ;
...
if $(PROF) = Y {
GP ?= -xpg ;
}
}
else if $(OS) = AIX {
ARCH ?= AIX ;
...
if $(PROF) = Y {
GP ?= -pg ;
}
}
if $(PROF) = Y { ARCH = $(ARCH).p ; }
CCFLAGS = $(O) $(DBG) $(GP) ... ... ;
Then, in foo/source/Jamfile (or any Jamfile that compiles C code), I have:
LOCATE_TARGET = $(SEARCH_SOURCE)/$(ARCH) ;
This places the objects in "AIX" or "SunOS" for non-profiled output, and places
profiled object in "AIX.p" or "SunOS.p".
Just typing "jam" makes non profiled output; typing "jam -sPROF=Y" makes
profiled outputs.
Caveat: I haven't played with this in a while, and it's just for testing in the
hopes of improving our build system someday, but it worked the last time I tried it!
Date: Fri, 26 Apr 2002 14:59:49 +0100
From: Derek Burgess <derek.burgess@cursor-system.com>
Subject: User Object rule not being invoked
I am trying to add a rule to generate C++ code from an IDL file and then compile it.
The sequence I am trying to create is:
But UserObject seems to expect that overall
And so loses interest in all my idl files. Is there anything I can do to make this work?
rule UserObject {
switch $(>) {
case *.idl : C++ $(<:S=_idl.o) : $(>:S=_idl.cpp) ;
Idl $(>:S=_idl.cpp) : $(>) ;
case * : EXIT "Unknown suffix on" $(>) "- see UserObject rule in Jamfile(5)." ;
}
}
From: Markus Scherschanski <MScherschanski@dspace.de>
Date: Tue, 30 Apr 2002 15:06:50 +0100
Subject: Dynamic Include-Files?
Is there any way to read other files and use their contents in Jam?
I thought about doing it this way, for example:
#### In Jambase
rule FileExists {
Depends first : $(<) ;
ALWAYS $(<) ;
NOTFILE $(<) ;
}
actions FileExists {
echo $(<)_EXIST = > _tmp_exist_$(<)
if exist $(>) echo TRUE >> _tmp_exist_$(<)
type $(SEMICOLON) >> _tmp_exist_$(<)
}
###### In Jamfile
FileExists TESTFILE : test.fil ;
include _tmp_exist_TESTFILE ;
if $(TESTFILE_EXIST) = TRUE { ECHO "WOW, IT EXISTS" ; }
It's just one case (sad enough that Jam cannot check whether a file exists); I
also want to read in file lists and so on.
Surprisingly, it works too, but only the second time I run the build.
Do you know why?
Right! Actions are performed after the files are read, which sinks the whole idea.
Here's my question:
Can I force Jam to execute some actions before rules and are the changes
included or can I include data from files in another way?
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Another jam extension in my personal branch
Date: Tue, 30 Apr 2002 17:56:47 -0500
I haven't yet tried to use Jam's parallel build facilities, but Scons
(another build tool) lays claim to an interesting feature which most
build tools do not implement: it keeps all of the available build
processors busy at all times. Recursive dependency-tree satisfaction
(like most of what I've seen in Jam's make process) tends to limit the
number of simultaneous build processes to something related to the
branching factor of the dependency graph.
From: Vladimir Prus <ghost@cs.msu.su>
Subject: Re: Dynamic Include-Files?
Date: Fri, 3 May 2002 16:24:37 +0400
It is possible to check if file exists using the GLOB rule in 2.4
if [ GLOB /home/ghost : a.html ] {
# do something
}
I don't know of any way. Maybe invoking jam recursively would work, but I
never tried it myself.
Date: Fri, 3 May 2002 14:54:14 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Dynamic Include-Files?
It's not the Jam Way...
In jam, the jamfiles are a complete, self-contained description of how to
build. Trying to put part of the build description somewhere else goes
against jam's design.
Date: Fri, 3 May 2002 18:08:12 -0700 (PDT)
From: "John D. Mitchell" <johnm-jam@non.net>
Subject: Current Java support?
Base question: What's the current state of Jam's support for Java?
Background: I've hunted through the archives of this mailing list and there
seems to be a real dearth of Java support coverage. Is anyone really using
Jam with Java or has everybody switched to Ant? I really dislike Ant.
FWIW, I've used Jam on some small C/C++ projects but I'm certainly not a Jam guru. :-)
I've got Ames' stuff from the old days but I'm hoping that there's a newer,
better place to start.
Date: Fri, 3 May 2002 18:26:55 -0700 (PDT)
Subject: Re: Current Java support?
I reworked a Jam-based (on "Ames' stuff") Java build process into an Ant
one and took it down from 40 minutes to 4, so I feel just a tad
differently about Ant than you do :) The problem with the Jam-based one
was no wildcarding, so you had these long lists of source files in the
Jamfiles that you had to maintain, and feeding the Java files to the
compiler one at a time, which is clearly gonna make the build incredibly slow.
Just out of curiosity, what is it about Ant that makes you dislike it?
Date: Fri, 3 May 2002 18:38:17 -0700 (PDT)
From: "John D. Mitchell" <johnm-jam@non.net>
Subject: Re: Current Java support?
Ah, I was hoping that someone had overcome the single .java file per
compiler invocation issue. I.e., feeding the java compiler all of the
.java files that it needs to process in each directory. :-(
XML stupidity. See: Humans should not have to grok XML:
http://www-106.ibm.com/developerworks/xml/library/x-sbxml.html
Date: Fri, 3 May 2002 18:53:59 -0700 (PDT)
Subject: Re: Current Java support?
Well, don't get your hopes down just yet -- someone may well turn up to
say they've done just that (the rework I did was about a year and a half ago).
In that case, you can look forward to Ant2:
Ant2, like Ant1, uses build files written in XML as its main input,
but it will not be restricted to it.
But, really, if your build process is relatively straightforward (which,
unfortunately, mine wasn't -- but going with Ant was still a vast
improvement), you can get away with a pretty small, top-level build file
(albeit, still in XML, but at least there wouldn't be all that much of it to look at :)
Date: Sat, 4 May 2002 06:28:41 -0700 (PDT)
From: "John D. Mitchell" <johnm-jam@non.net>
Subject: Re: Current Java support?
Yeah, I've got a lot of Ant stuff -- it's pretty much impossible to not
have these days if you use any open-source Java stuff.
Subject: Re: [ "John D. Mitchell" ] Current Java support?
Date: Sat, 04 May 2002 09:40:12 -0500
From: "Gregg G. Wonderly" <gregg@skymaster.c2-tech.com>
Except for people using Ant as a replacement for CPP (this is what property
files, JAR file meta-info, etc. are for), I generally just do
javac -d . `find ${SRC} -name '*.java' -print`
and be done with it.
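Wrapped in a Jam action, that one-shot compile might look like this (a sketch; SRC and CLASSES are assumed variables, and the target would be a pseudo-target you declare NOTFILE/ALWAYS):

```jam
actions BuildJava {
    # Compile every .java under $(SRC) in a single javac invocation,
    # instead of one compiler start per file.
    javac -d $(CLASSES) `find $(SRC) -name '*.java' -print`
}
```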
I have 700+ class file projects that this works fine on. Now, if you are
going beyond a few thousand files, this might be a problem. But, if Ant can
do your builds for you, then plain old javac can do it with some simple care
to packaging and parameterization of your code.
My number one argument about Ant is that it demands that the source tree
mirror the class packaging structure, which is just not realistic for a number
of reasons. So, I don't use Ant, and you know what, I am still able to build
and distribute code with no problems.
Ant is a solution for a non-problem. It seems that technologies that are in
fact useful for some things (I use XML in lots of places) have been made into
something that is really more of a hindrance. Lots of people complain about
Ant's shortcomings, yet it still continues to get used... It's kinda like MS Windows...
From: "Jan Mikkelsen" <janm@transactionware.com>
Subject: RE: Current Java support?
Date: Mon, 6 May 2002 08:59:53 +1000
I'm actually working on adding the following things to Jam at the moment:
- Parsing .class files to extract dependencies and put them into the
dependency graph.
- Modifying the Jambase to invoke the compiler once for the objects
associated with a particular target (jar, whatever). This is necessary
for correct builds as well as performance because dependencies from
.class files only gives the set of files to be build, not the order. I
know the compiler can do some of this, but I don't always trust it and I
don't want to depend on having source files in directories following
package name conventions.
- Adding support for multiple compilers (Javac, jikes, gcj).
- Adding support for native code generation using gcj from the same
Jamfile as a conventional Java build.
Acunia-Jam has some interesting stuff (particularly the :P modifier).
Acunia-JamJar (or something just like it) will be necessary.
I want to get my first version running in the next week or so, but
that's what I said at the beginning of last week and other stuff
intervened. I'll post here when it is ready.
Date: Mon, 6 May 2002 13:48:49 +0200 (CEST)
From: "chris.gray" <gray@acunia.com>
Subject: Re: Current Java support?
We made our own version to solve the problems we had with Java:
http://wonka.acunia.com/download.html#tools
So far we have failed to submit this to the Perforce depot: more
specifically, I have failed to do so. 8-0 I'll try to do something about
that RSN (I've just downloaded the command-line client).
Hi, we build our software in layers, with lower layers providing services to
upper layers. Lower layers must not depend on upper layers. To help enforce
this we build and _test_ lower layers before building upper layers. That
means that many executables should be built before libraries from upper
layers, and it's this last bit that is tripping me up. Jam seems to prefer
to build libraries and then executables.
To add to the mix one of our layers builds an executable that will then be
used in subsequent layers to generate source code. So we have,
layer 1 shlib + tests
layer 2 shlib + tests
layer 3 codegen + tests
layer 4 shlib + tests
layer 5 shlib + tests
and so on. We really do want layer 1 to build completely before layer 2 is
started. And in any case it is essential that layer 3 is built before layer
4 which would otherwise fail pretty quickly.
I realise that I could probably get what I want by expressing all of the
dependencies but there is something illogical about layer2 shlib depending
on layer 1 unit tests, and in any case the whole thing will become
unmanageable. So I'm hoping for a more elegant solution :-)
Date: Thu, 9 May 2002 16:02:50 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: layering and code generation
Oh. JFYI, I hereby notify you that I ignore your legal nonsense
completely. Please do sue me. ;)
This doesn't follow... I don't see why you should order that strictly. As
long as every build produces an error message for every error, the build
order shouldn't matter.
There are advantages to ordering more sloppily. For one thing, it allows
jam to use all the CPUs all the time.
The executable is no problem.
In jam, you need to define some rule that runs your executable, and use
that rule for various targets in layers 4 and 5. That rule needs to
contain one extra line, saying that $(<) (the target to be built) depends
on the executable.
Jam will then make sure that the actions to build the executable are run
before the actions that use the executable.
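For instance (a sketch; Codegen1, the generator's name, and its command line are made up for illustration):

```jam
rule Codegen {
    # $(<) is the generated source, $(>) the input spec.
    MakeLocate $(<) : $(LOCATE_SOURCE) ;
    Depends $(<) : $(>) ;
    # The extra line: the generated file also depends on the generator
    # executable, so jam builds layer 3's codegen before layers 4 and 5
    # try to use it.
    Depends $(<) : codegen$(SUFEXE) ;
    Codegen1 $(<) : $(>) ;
}

actions Codegen1 {
    codegen -o $(<) $(>)
}
```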
On an SMP machine, I'd think that some parts of layer 2 can be built while
layer 1 is being tested, etc.
You could have a "layer1" target, on which all layer2 parts depend... it's
not pretty.
Date: Thu, 9 May 2002 18:10:40 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: layering and code generation
Maybe I do see. You want to be very, very sure that there are no bad
dependencies, right?
Here's one way you might do that. It's rather experimental and I don't
know exactly how to do it. I have a secret hope that Diane Holt will step
in with the exact syntax you need ;)
Jam generally figures out source code dependencies by itself, by looking
at source files. That's a good idea, IMO. Eliminates a failure mode. It
also means that jam builds an up-to-the-second map of all dependencies.
You can use that map to spot bad dependencies.
You need a rule for the layer-n tests that depends on all those tests and
that has the LEAVES modifier set. Its action should get a list of
everything on which those tests depend. If that list contains something
from the wrong layers, you can give an error message.
This is safe, because if jam misses a dependency for a test, then either
the compile or the execution of that test won't succeed anyway.
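A very rough sketch of that idea (experimental, as stated; check-layering is a hypothetical external script that fails if the dependency list contains something from a higher layer):

```jam
rule AuditLayerDeps {
    # $(<) is a pseudo-target, $(>) is the list of this layer's tests.
    NOTFILE $(<) ;
    ALWAYS $(<) ;
    # LEAVES: the action cares only about the leaves' timestamps.
    LEAVES $(<) ;
    Depends $(<) : $(>) ;
    AuditLayerDeps1 $(<) : $(>) ;
}

actions AuditLayerDeps1 {
    check-layering $(>)
}
```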
Date: Thu, 9 May 2002 19:37:16 -0700 (PDT)
Subject: Re: layering and code generation
I think what you'd need to do is to not use any of the SubDir and
SubInclude stuff in your source-tree top-level Jamfile (which I'm assuming
you're currently doing), but instead chain your layers together via the
'include' directive at the bottom of each of your layers' top-level
Jamfiles, with your source-tree top-level Jamfile just doing a:
SubInclude TOP layer1 ; # the first link in the chain
And layer1's Jamfile having:
SubDir TOP layer1 ;
SubInclude TOP layer1 subdir1 ; #etc., for all subdirs of layer1
# Any targets that live at this level (usually none)
include $(TOP)$(SLASH)layer2$(SLASH)Jamfile
Do that (with layer2 doing an 'include' of layer3's Jamfile, etc.) for
each but the final layer, which doesn't need an 'include', since you're at
the end of the chain.
P.S. Shameless plug: Anyone know of a company in the bay area looking for
a Supremo Build&Release Engineer (that'd be me :)
From: Chris Higgins <chris.higgins@cursor-system.com>
Subject: RE: layering and code generation
Date: Fri, 10 May 2002 13:37:59 +0100
Please keep the abuse coming, it might help me get the nonsense removed :-)
Right, no upward dependencies.
I tried this but had no success and to be honest I don't see what you are
getting at. Why would avoiding SubInclude make much difference? The standard
Jambase SubInclude doesn't look much different than the above to me.
From: Patrick Frants <patrick@quintiq.com>
Date: Fri, 10 May 2002 14:19:52 +0200
Subject: How do I define a rule for .cpp -> .i (preprocessor output)
I would like to define a rule to convert .cpp to .i files.
I cannot do it with the UserObject rule because it looks at the suffix of the
source file, not the target file.
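One way to sidestep UserObject is a standalone rule keyed on the target you actually want (a sketch; it assumes a gcc-style -E preprocess-only flag and the stock C++/C++FLAGS/HDRS variables):

```jam
rule PreprocessOnly {
    # $(<) is foo.i, $(>) is foo.cpp
    MakeLocate $(<) : $(LOCATE_TARGET) ;
    Depends $(<) : $(>) ;
    PreprocessOnly1 $(<) : $(>) ;
}

actions PreprocessOnly1 {
    $(C++) -E $(C++FLAGS) -I$(HDRS) $(>) > $(<)
}
```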
Date: Fri, 10 May 2002 12:02:52 -0700 (PDT)
Subject: RE: layering and code generation
Ack! -- I've been battling one of the nastiest colds I've had in years,
and clearly my proffered "solution" came out of a fever dream that made me
think using the include directive at the end would get all of "layer1"
built before anything in "layer2" was. Sorry about that (hangs head in shame).
There's only two ways I can think of to do what you're asking for, and one
way was getting to be too much work to do for free :) -- so I went with
the easy way, which is to run 'jam' in each of your layers' top-level
Jamfiles (via pseudo-targets -- one to build, one to clean) from your
source-tree's top-level Jamfile.
Here's the Jamrules:
NOTFILE layer cleanall ;
ALWAYS layer cleanall ;
rule Build {
Depends all : $(<) ;
Depends $(<) : $(>) ;
local dir ;
for dir in $(>) {
BuildLayer layer : $(dir) ;
}
}
actions BuildLayer {
echo "Building $(>)..."
cd $(>) && jam
}
rule CleanAll {
Depends cleanall : $(>) ;
local dir ;
for dir in $(>) {
CleanLayer cleanall : $(dir) ;
}
}
actions CleanLayer {
echo "Cleaning $(>)..."
cd $(>) && jam clean
}
The top-level Jamfile:
SubDir TOP ;
Build layer : layer1 layer2 layer3 ;
CleanAll cleanall : layer1 layer2 layer3 ;
From: Chris Higgins <chris.higgins@cursor-system.com>
Subject: RE: layering and code generation
Date: Mon, 13 May 2002 15:13:54 +0100
Thanks for your approach, appreciate it. I think it gives away too much of
why I want to use Jam though (no recursive invocations, complete dependency
graph) so I probably won't use it.
I've come to the conclusion that what I want is outwith Jam's scope (at
least can't be captured by a dependency) and needs to be handled in some
other manner. I'm happy with that for the moment.
Now on to other problems, java looms :-), automated testing :-)
From: "Tim Baker" <dbaker@direct.ca>
Subject: Re: layering and code generation
Date: Wed, 15 May 2002 14:55:39 -0700
Maybe this will work:
Depends layer4 : layer3 ;
Depends layer3 : layer2 ;
Depends layer2 : layer1 ;
NOTFILE layer1 layer2 layer3 layer4 ;
rule L1Main {
Main $(<) : $(>) ;
Depends layer1 : $(<) ;
}
rule L1Library {
Library $(<) : $(>) ;
Depends layer1 : $(<) ;
}
rule L2Main {
Main $(<) : $(>) ;
Depends layer2 : $(<) ;
}
rule L2Library {
Library $(<) : $(>) ;
Depends layer2 : $(<) ;
}
etc etc.
Then from the command line invoke "jam layer4" to build all layers or "jam
layer2" to build layer2 and layer1.
Date: Wed, 15 May 2002 16:17:19 -0700 (PDT)
Subject: Re: layering and code generation
Yeah, that was the other way I thought of that I ended up not going down,
since it involves more work than just what you've shown here -- because
once you hit the Main and Library rules, you're back to hitting
dependencies on the "lib" and "exe" pseudo-targets, which are dependencies
of "all", so you're back to Jam building all the libraries (in all the
layers) then all the exes. IOW, you'd need to go a lot further down the
road of creating new rules and pseudo-targets to distinguish the targets
being built for each layer (and, like I said, that was more than I felt
like doing for free :)
Date: Wed, 15 May 2002 20:19:26 -0700 (PDT)
Subject: Re: layering and code generation
Aaargh... this thread's making me crazy (shouldn't have worked on it at
all that day with that nasty cold, I suppose). Having looked at it again,
yes, this approach does work, so long as you include the "layer"
pseudo-target in your 'jam' command, which I wasn't doing, since I always
(do and aim to) just run 'jam'. So if you don't mind running 'jam layerN'
whenever you're at the top level, this works fine.
Date: Thu, 16 May 2002 09:40:31 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: layering and code generation
That approach may seem to work, but it's begging for problems:
- running "jam" runs a "successful" build, normally satisfying the
invoking user
- running "jam layer5" runs the correct build.
If running "jam" is faster than "jam layer5", I can easily imagine people
running "jam" more and more often...
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Thu, 16 May 2002 09:02:13 -0500
Subject: undefined behavior
My OSF compiler pointed out to me that code in fileunix.c invokes undefined
behavior. The enclosed patch fixes that.
*** fileunix.c.~1.4.~ Mon May 6 15:22:27 2002
--- fileunix.c Thu May 16 07:00:24 2002
***************
*** 218,224 ****
while( read( fd, &ar_hdr, SARHDR ) == SARHDR &&
!memcmp( ar_hdr.ar_fmag, ARFMAG, SARFMAG ) )
{
! char lar_name[256];
long lar_date;
long lar_size;
long lar_offset;
while( read( fd, &ar_hdr, SARHDR ) == SARHDR &&
!memcmp( ar_hdr.ar_fmag, ARFMAG, SARFMAG ) )
{
! char lar_name_[257];
! char* lar_name = lar_name_ + 1;
long lar_date;
long lar_size;
long lar_offset;
Date: Fri, 24 May 2002 13:03:55 +0200
From: Michael Voucko <voucko@fillmore-labs.com>
Subject: Problems finding object files for linking
I'm new to Jam and not able to figure out how to do the following:
Consider a directory layout like this:
... -- UtilLib --- Util1 -- util1.c
                |
                -- Util2 -- util2.c
                |
                ..
                -- Utiln -- utiln.c
What I try to accomplish is to compile all the utilx.c files and link a
library from the resulting objects.
All the Jamfiles in the 'Utilx' sub directories look like this:
# Jamfile in TOP ... UtilLib Utilx
SubDir TOP ... UtilLib Utilx
Objects Utilx.c
The Jamfile from UtilLib should do the linking and looks like this:
# Jamfile in TOP ... UtilLib
SubDir TOP ... UtilLib
LibraryFromObjects UtilLib : Util1$(SUFOBJ) .... Utiln$(SUFOBJ) ;
SubInclude TOP ... UtilLib Util1
SubInclude TOP ... UtilLib Util2
...
SubInclude TOP ... UtilLib Utiln
Problem is that the LibraryFromObjects rule tries to locate
Utilx$(SUFOBJ) in <...!UtilLib> instead of <...!UtilLib!Utilx>
since in the LibraryFromObjects rule any path prefix of the object files
is discarded (at least from what I understand - and as I said I'm a
newbie;-). That is, changing the above rule to
LibraryFromObjects UtilLib : <...!UtilLib!Util1>Util1$(SUFOBJ)
....
<...!UtilLib!Utiln>Utiln$(SUFOBJ) ;
changes nothing at all.
Is it correct that when doing the link step a library is the Jam
'target' and the object files are the 'source' for it?
If so, can I modify the search path for the objects by setting
SEARCH_TARGET ? (I tried this one as well - same result, but I'm not
sure whether another modification might have influenced it).
Any explanation, pointer to documentation or sample jam file is appreciated.
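For the archives, a hedged reading of why the explicit grist "changes nothing at all": Jambase's LibraryFromObjects runs its sources through FGristFiles, which replaces any grist you wrote with the current SOURCE_GRIST. The usual idiom is therefore to drop the parent-level object list entirely and let each subdirectory feed the same archive, since the Library rule may be invoked repeatedly for one target. A sketch (path tokens invented to replace the elided "..."; note SubDir, not SuDir):

```jam
# Jamfile in TOP/UtilLib -- just descend; no link step here
SubDir TOP UtilLib ;
SubInclude TOP UtilLib Util1 ;
SubInclude TOP UtilLib Util2 ;

# Jamfile in TOP/UtilLib/Util1 -- each subdirectory adds its objects
SubDir TOP UtilLib Util1 ;
Library UtilLib : util1.c ;

# Jamfile in TOP/UtilLib/Util2
SubDir TOP UtilLib Util2 ;
Library UtilLib : util2.c ;
```

Each Library call compiles its own sources with that directory's grist and appends the objects to the one UtilLib archive.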
Date: Sat, 25 May 2002 21:46:14 -0700
From: Rich Young <wratchdog@cox.net>
Subject: Mac OS X/Darwin Jambase
Does anyone happen to have a Jambase (other than what Apple uses) file
for Mac OS X/Darwin?
Mac OS X uses Jam for building, but their Jambase is very specific to their
tools and has a lot of extra crap that I don't want to parse through to figure
out how to use it with my jam files.
What I would like is the equivalent of the Jambase that comes with the Jam package
but with the appropriate changes for OS X. I was tweaking the default file (from
I'm a newbie to using Jam but I was hoping it would be better than
GnuMake (which is the make utility I currently use with OS X).
Date: Sat, 25 May 2002 22:47:28 -0700
Subject: Re: Mac OS X/Darwin Jambase
Rich -- be aware that the /usr/bin/jam that is on Mac OS X has also been
customized for use by Project Builder. We do intend to change things such
that jam behaves by default like the standard jam does (I don't think we've
fixed that yet).
So, if you want standard jam behavior, you'll probably want to build it from
the standard sources.
It's possible that may fix some of the other problems you've been having with
Perforce's default Jambase.
From: "Radke, Kevin" <Kevin.Radke@nexiq.com>
Date: Tue, 28 May 2002 09:44:01 -0500
Subject: Treat MSVCNT special?
Having been bit by the "feature" that causes Jam to split
environment variables at spaces, I was wondering why variables
like MSVCNT, LIB, and INCLUDE are not treated "special" like
PATH is in variable.c and split at SPLITPATH instead of space.
In fact, I'm curious why the split variable isn't ';' on windows
instead of ' ', since it is already different (a comma) on OS_MAC.
When I made this mod, I still needed to double quote MSVCNT, but
only once, and not the multiple times other "workarounds" mentioned
on the list have discussed.
Anyone who has used Jam a lot more than me see any downside
in this change? Any way to not have to double quote MSVCNT at all?
Date: Tue, 28 May 2002 10:29:34 -0500 (CDT)
Subject: Re: Treat MSVCNT special?
don't install it on a path with blanks in it.
From: "Radke, Kevin" <Kevin.Radke@nexiq.com>
Subject: RE: Treat MSVCNT special?
Date: Tue, 28 May 2002 10:52:05 -0500
While this workaround may be ok for some, it isn't for us. Most
people unfortunately take install defaults, and MS likes spaces...
(And yes, I know you can use the short 8.3 name as well, but
with NTFS, short name support is optional.)
I've seen many people frustrated by tools that don't "just work"
out of the box. If some simple changes allow Jam to work with
the default visual studio install path, more people will
be less resistant to using Jam. This is a big win in
my opinion. The odds are that most people looking at Jam
will have already installed MSVC, and not want to re-install.
I guess the real question is:
"On Windows we already know MSVCNT is special, why not treat it that way?"
Dave Abrahams sent this by private email, but I haven't explored
its use yet:
Date: Wed, 29 May 2002 09:48:13 +0200
From: Michael Voucko <voucko@fillmore-labs.com>
Subject: linking with shared libraries
I have a problem with linking shared libraries.
On Windows everything works fine when using the LinkLibraries rule, since
it is OK to replace the .dll suffix by .lib.
But if I'm on Unix, replacing .so by .a is not what I want.
Is there a different way to link shared libraries, or do I have to write a
new rule which is able to figure out which suffix has to be appended
when adding a shared library using LinkLibraries?
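One hedged way out, since the stock Jambase has no notion of a shared-library suffix: define one yourself and write a small wrapper instead of LinkLibraries. Everything below -- SUFSHR and the rule name -- is invented, not stock Jambase; FAppendSuffix and NEEDLIBS are the existing Jambase mechanisms.

```jam
if $(UNIX) { SUFSHR = .so ; }
else       { SUFSHR = .lib ; }   # on NT you link against the import .lib

# Invented wrapper: like LinkLibraries, but appending the shared suffix.
rule LinkSharedLibraries
{
    local _t = [ FAppendSuffix $(>) : $(SUFSHR) ] ;
    Depends $(<) : $(_t) ;
    NEEDLIBS on $(<) += $(_t) ;
}
```

You would still need SEARCH/LOCATE settings so jam can find the .so files, and a separate rule to build them; this only covers the consuming side.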
From: Eyal Soha <eyal@procket.com>
Date: Tue, 28 May 2002 19:01:50 -0700
Subject: emacs mode for jam?
Is there an emacs mode for editing jam files?
From: Markus Scherschanski <MScherschanski@dspace.de>
Date: Wed, 5 Jun 2002 16:03:04 +0100
Subject: -q Option
Is it intended that this quickquit option only works when debugging is
enabled (not -d0)?
Or are there any other reasons for this?
Date: Mon, 10 Jun 2002 14:24:24 -0400 (EDT)
From: David Berton <db@research.nj.nec.com>
Subject: jam and qt
Using jam 2.4, I am able to compile a Qt-based project (which uses moc)
using the rules outlined here (hi, Arnt):
However, I am having trouble getting this working when the executable I am
trying to build is a sub directory of a larger project (I keep getting
"don't know how to make <path!to!subdir>.moc.cpp").
Does anyone have rules that will handle moc generated files for Qt-based
projects, yet which also work if that project is a subdir?
Date: Tue, 11 Jun 2002 09:59:02 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: jam and qt
Yeah, I ran into that myself later last year, and made it work. I'm rather
unhappy about jam's SubDir/SubInclude stuff... I'm not sure whether this
is the best way, but it's the way I used.
rule Moc {
# o is the .o file, t is the target .cpp
local t = [ FGristFiles $(<) ] ;
local o = $(t:S=$(SUFOBJ)) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
LOCATE on $(t) = $(LOCATE_TARGET) ;
Clean $(t) ;
RmTemps $(o) : $(t) ;
LEAVES $(o) ;
Depends $(o) : $(t) ;
Depends $(t) : $(>) ;
Moc2 $(t) : $(>) ;
}
actions Moc2 {
$(RM) $(<)
echo $(>) | xargs -n1 -r $(QTDIR)/bin/moc > $(<)
}
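For context, a hypothetical invocation of the rule above from a subdirectory Jamfile (the file names and the "gui" directory are invented; assumes the Moc rule lives in Jamrules and QTDIR is set):

```jam
SubDir TOP gui ;

# Generate moc_mywidget.cpp from mywidget.h, then compile it like any
# other source; the Moc rule above wires up the dependencies and marks
# the generated .cpp as a removable temporary.
Moc moc_mywidget.cpp : mywidget.h ;
Library gui : mywidget.cpp moc_mywidget.cpp ;
```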
Date: Tue, 11 Jun 2002 09:19:30 -0400 (EDT)
Subject: Re: jam and qt
Yes. Left Oslo almost a year ago now, wound up back in the States.
Excellent, thanks. For some workarounds for SubInclude that I think are
useful, see:
Date: Wed, 12 Jun 2002 11:37:24 +0300
From: Yurii Rashkovskii <yrashk@openeas.org>
Subject: OCaml
Has anybody made OCaml support for Jam/MR?
From: "Pitha, Robert" <rpitha@arqule.com>
Date: Mon, 17 Jun 2002 12:12:53 -0400
Subject: Help with "grist"
OK, newbie question... I have looked through a reasonable amount of the
archives, without finding any help there, but I can't help but feel I'm
overlooking an obvious source of information...
Anyway, my question is, what is "grist", or more importantly, where can I
find out how to use it effectively? I have looked through all the documents
linked by the main Jam page, and while they mention "grist", none of them
define or discuss it. As far as I can tell, it's a way to "uniquefy"
filenames, but there has to be more to it than that, and even so, it's
causing me problems and I need to know how to coexist peacefully with it.
Specifically, I have a directory hierarchy, say:
$(BUILD_ROOT), containing (so far):
/afb, containing (so far)
/afb
/arg
I have a Jamrules in $(BUILD_ROOT), and Jamfiles in each directory with
appropriate SubDir and SubInclude lines. If I go to one of the two
lowest-level directories (say, $(BUILD_ROOT)/afb/afb) and run jam, that
directory builds just fine. If, however, I go one level up
($(BUILD_ROOT)/afb) and run jam, I get a lot of complaints like "don't know
how to make <afb!afb>afbAKMCluster.cpp" (that's one of the file names in the
afb/afb directory - although it's actually in afb/afb/gen, the Jamfile is
set up to deal with that). This <afb!afb> looks like what I've seen hinted
at in the documentation as being this grist thing, but without any actual
documentation about "grist" I can't be sure, nor can I really figure out
what it's trying to tell me. Why does this Jamfile work when invoked in one
location and not another? Where is better documentation available?
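For anyone else searching the archive: grist is the <...> prefix jam attaches to a target name so that files with the same basename in different directories remain distinct targets; SubDir derives it from the path tokens after TOP. A small illustration using the names from this message:

```jam
# In $(BUILD_ROOT)/afb/afb/Jamfile:
SubDir TOP afb afb ;          # sets SOURCE_GRIST to "afb!afb"

# Rules like Objects run their sources through FGristFiles, so the
# real target name of the source below is <afb!afb>afbAKMCluster.cpp.
Objects afbAKMCluster.cpp ;
```

The "don't know how to make <afb!afb>afbAKMCluster.cpp" error means something depends on that gristed target but nothing told jam where to find the file -- typically a SEARCH/LOCATE or SubInclude-ordering issue that only shows up when building from a different level of the tree.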
Date: Tue, 18 Jun 2002 15:42:29 -0700
From: Jos Backus <jos@catnook.com>
Subject: Jam 2.4 Makefile buglet
Not everybody has . in her $PATH, so
all: jam0
jam0
in the Makefile should really be changed to
all: jam0
./jam0
?
From: Chris Rumpf <Chris.Rumpf@calix.com>
Date: Wed, 19 Jun 2002 17:49:36 -0700
Subject: Bug in Jam 2.4 - Naming targets the same as directories in the path to the src code
I can't seem to have an executable w/ the same name as part of the
path.
Has anyone else run into this? See below for my example.
This is from freshly unzipped jam 2.4 and a test directory I set up
with the following structure:
/
/jam
/foo/src
Sitting in /
:-> echo $TOP
.
:-> cat Jamfile
SubInclude TOP foo ;
:->
:-> cat foo/Jamfile
SubInclude TOP foo src ;
:-> cat foo/src/Jamfile
SubDir TOP foo src ;
/foo/src/foo
:-> jam/bin.solaris/jam -d2
warning: foo depends on itself
...found 18 target(s)...
...updating 2 target(s)...
Cc foo/src/a.o
gcc -c -o foo/src/a.o foo/src/a.c
MkDir1 foo/src/foo
mkdir foo/src/foo
Link foo/src/foo
gcc -o foo/src/foo foo/src/a.o
ld: cannot open output file foo/src/foo: Is a directory
collect2: ld returned 1 exit status
...failed Link foo/src/foo ...
...failed updating 1 target(s)...
...updated 1 target(s)...
:->
But if I change the foo/src/Jamfile to look like this:
:-> cat foo/src/Jamfile
SubDir TOP foo src ;
:->
It works fine!
:-> jam/bin.solaris/jam -d2
...found 19 target(s)...
...updating 2 target(s)...
Cc foo/src/a.o
gcc -c -o foo/src/a.o foo/src/a.c
Link foo/src/foobar
gcc -o foo/src/foobar foo/src/a.o
Chmod1 foo/src/foobar
chmod 711 foo/src/foobar
...updated 2 target(s)...
:->
Anyone care to explain why Jam has this fundamental problem?
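A hedged guess at the mechanism, for the archive: jam target names carry no path of their own, so the executable target "foo" and the directory target "foo" that MkDir creates while building parent directories are one and the same target -- hence "warning: foo depends on itself", the mkdir of foo/src/foo, and the failed link. Renaming the executable (as in the foobar run above) is the usual dodge, e.g. (the Main line is inferred from the build log; "foo-app" is invented):

```jam
# foo/src/Jamfile -- avoid reusing a path component as a target name
SubDir TOP foo src ;
Main foo-app : a.c ;   # "foo-app" no longer collides with the dir "foo"
```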
From: Tim Docker <timd@macquarie.com.au>
Date: Sat, 22 Jun 2002 02:44:13 +1000
Subject: Confusion on subdirectories
I'm presently examining some alternative build systems, including jam. I'm trying to
build a library, containing object files derived from c++ files, derived from a
grammar file, in a subdirectory.
Setting aside the subdirectory part for now, the following all-in-one Jamfile [1] appears
to do what I need, giving output [2]. (Should I be concerned about the independent target
warnings there?)
When I try to put this into a directory hierarchy and factor out the rules,
everything goes awry. I have the toplevel Jamrules [3] and Jamfile [4],
and the subdirectory Jamfile [5]. If I build in the top level directory,
I get [6]. If I build in the subdirectory, I get [7].
I'm probably missing many subtleties here (I've only been playing with this for
an hour or two). A hint as to what I'm doing wrong would be much appreciated.
C++ = /opt/gcc/gcc-2.95.3/bin/g++ ;
CC = /opt/gcc/gcc-2.95.3/bin/gcc ;
LINK = $(C++) ;
C++FLAGS = -I/vobs/mts/include -I/usr/prod/mts/platform/i386/sybase-11.9.2/include ;
LIBDIR = /tmp ;
rule Antlr {
Depends $(<) : $(>) ;
MakeLocate $(<) : $(LOCATE_SOURCE) ;
Clean clean : $(<) ;
}
actions Antlr {
/vobs/other/antlr/antlr_tool $(>) ;
}
Antlr DateExprParser.cpp DateExprLexer.cpp
DateExprLexer.hpp DateExprParser.hpp DateExprParserTokenTypes.hpp : DateExpr.g ;
Library $(LIBDIR)/libTools :
DateExprParser.cpp DateExprLexer.cpp ;
[timd@AA800315 tools2]$ jam -d2 lib
...found 10 target(s)...
...updating 5 target(s)...
warning: using independent target DateExprLexer.hpp
warning: using independent target DateExprParser.hpp
warning: using independent target DateExprParserTokenTypes.hpp
Antlr DateExprParser.cpp DateExprLexer.cpp DateExprLexer.hpp DateExprParser.hpp
DateExprParserTokenTypes.hpp
/vobs/other/antlr/antlr_tool DateExpr.g ;
C++ DateExprParser.o
/opt/gcc/gcc-2.95.3/bin/g++ -c -o DateExprParser.o -I/vobs/mts/include
-I/usr/prod/mts/platform/i386/sybase-11.9.2/include -O
DateExprParser.cpp
C++ DateExprLexer.o
/opt/gcc/gcc-2.95.3/bin/g++ -c -o DateExprLexer.o -I/vobs/mts/include
-I/usr/prod/mts/platform/i386/sybase-11.9.2/include -O
DateExprLexer.cpp
Archive /tmp/libTools.a
ar ru /tmp/libTools.a DateExprParser.o DateExprLexer.o
Ranlib /tmp/libTools.a
ranlib /tmp/libTools.a
RmTemps /tmp/libTools.a
rm -f DateExprParser.o DateExprLexer.o
...updated 5 target(s)...
[timd@AA800315 tools2]$ jam -d2 clean
...found 1 target(s)...
...updating 1 target(s)...
Clean clean
rm -f DateExprParser.cpp DateExprLexer.cpp DateExprLexer.hpp DateExprParser.hpp
DateExprParserTokenTypes.hpp /tmp/libTools.a
...updated 1 target(s)...
[timd@AA800315 tools2]$
C++ = /opt/gcc/gcc-2.95.3/bin/g++ ;
CC = /opt/gcc/gcc-2.95.3/bin/gcc ;
LINK = $(C++) ;
C++FLAGS = -I/vobs/mts/include -I/usr/prod/mts/platform/i386/sybase-11.9.2/include ;
LIBDIR = /tmp ;
rule Antlr {
Depends $(<) : $(>) ;
MakeLocate $(<) : $(LOCATE_SOURCE) ;
Clean clean : $(<) ;
}
actions Antlr {
/vobs/other/antlr/antlr_tool $(>) ;
}
SubInclude TOP tools ;
SubDir TOP tools ;
Antlr DateExprParser.cpp DateExprLexer.cpp
DateExprLexer.hpp DateExprParser.hpp DateExprParserTokenTypes.hpp : DateExpr.g ;
Library $(LIBDIR)/libTools :
DateExprParser.cpp DateExprLexer.cpp ;
[timd@A