From: "Mark D. Baushke" <mark.baushke@solipsa.com>
Date: Thu, 29 Jul 1999 16:44:04 -0700
Subject: Looking for Jamrules to deal with java
Has anyone already written the rules and actions needed to compile
.java files into .class files and update .jar files with newly built
.class files? If so, I'd love to get a copy of them.
From: linda_farrenkopf@liebert.com
Date: Mon, 09 Aug 1999 14:36:05 -0400
Subject: Setting Environment Variables
I need to set an environment variable and to export it during my JAM build. The
compiler I am using does not accept a command line method of specifying the
include directories, but requires an environment variable setting instead. The
problem is that my project includes multiple trees each of which would need
slightly different settings for this variable, C51INC. Could someone please
explain how to do this from within a Jamfile.
Date: Mon, 09 Aug 1999 12:48:23 -0700
From: Steve Bennett <steveb@portal.com>
Subject: Re: Setting Environment Variables
The simplest way to do this is to write a wrapper script.
On Unix:
#!/bin/sh
#
# Runs cc with arg1 as INCS and arg2 as LIBS
#
INCS="$1"; shift
LIBS="$1"; shift
export INCS LIBS
cc "$@"
You can do something similar on NT.
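A quick way to sanity-check such a wrapper is to substitute a shell echo for the compiler and confirm the variables really reach the child process ("$@" is quoted here to preserve arguments with spaces; the paths are invented for illustration):

```shell
#!/bin/sh
# Recreate the wrapper from the post, then run it with a stand-in
# command that prints the exported variables instead of compiling.
cat > /tmp/ccwrap.sh <<'EOF'
#!/bin/sh
INCS="$1"; shift
LIBS="$1"; shift
export INCS LIBS
"$@"
EOF
sh /tmp/ccwrap.sh -I/usr/local/include -L/usr/local/lib \
    sh -c 'echo "$INCS $LIBS"'
```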
From: Laura Wingerd <laura@perforce.com>
Subject: Re: Setting Environment Variables
Date: Mon, 9 Aug 1999 13:41:45 -0700 (PDT)
Is this something you could set right in the compile action? E.g.,
say you use the C++ rule to compile. You could modify your Jambase's
C++ actions to look something like:
if $(UNIX) {
actions C++ {
C51INC="$(C51INC)"
export C51INC
$(C++) -c $(C++FLAGS) $(OPTIM) -I$(HDRS) $(>)
}
}
if $(NT) {
actions C++ {
set C51INC=$(C51INC)&
$(C++) -c $(C++FLAGS) $(OPTIM) -I$(HDRS) -o$(<) $(>)
}
}
Then, somewhere in the Main or Library rule (or your own versions
thereof), you'd set C51INC on each target, e.g.,
C51INC on $(<) = yourvaluehere ;
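One way to avoid repeating that line in every Jamfile is a small wrapper rule in Jamrules (a sketch only: the rule name and include path are invented, and C51INC is set on the objects because the compile action runs per object, not per executable):

```
# Sketch: wrap Main so each tree sets C51INC on its own objects.
rule C51Main {
    Main $(<) : $(>) ;
    C51INC on $(>:S=$(SUFOBJ)) = $(SUBDIR)$(SLASH)inc ;
}
```

A Jamfile would then say, e.g., C51Main prog : a.c b.c ; and each tree's objects would pick up that tree's include directory.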
From: Roark Hennessy <RHennessy@Stac.com>
Date: Tue, 10 Aug 1999 12:28:48 -0700
Subject: NEWBIE:directory dependencies?
Just learning jam and have a basic question...
TOP/
Jamfile
Jamrules
/include
Xyz.h
/shared/
s1.cpp
s2.cpp
s3.cpp
Jamfile
/test
test.cpp
Jamfile
In TOP/Jamrules:
GZHDRS = $(TOP)/include /usr/include ;
GZFLAG = -g ;
In TOP/Jamfile:
SubInclude TOP shared ;
SubInclude TOP test ;
In TOP/shared:
SubDir TOP shared ;
SubDirHdrs $(GZHDRS) ;
SubDirC++Flags $(GZFLAG) ;
Library s : s1.cpp s2.cpp s3.cpp ;
In TOP/test
SubDir TOP test ;
SubDirHdrs $(GZHDRS) ;
SubDirC++Flags $(GZFLAG) ;
Main RunMe : test.cpp ;
LinkLibraries RunMe : s ;
The problem is that the TOP/test/Jamfile does not know how to build the
TOP/shared library (s) . If I run jam from the TOP/shared subdirectory
manually to produce the s.a file, then switch to the TOP/test subdirectory
and run the jam from there, it skips the link because it can not find the
library.
If I change the LinkLibraries line to LinkLibraries RunMe : ../shared/s ;
then the link succeeds only if the s.a is present, if the s.a is not there
then the TOP/test/jamfile does not know how to build it.
Oh and I'm running on RedHat 6.0 linux, Jam/MR version 2.2.1 on intel
Date: Tue, 10 Aug 1999 16:26:35 -0500 (CDT)
From: Scott McCaskill <scott@pe-i.com>
Subject: Re: NEWBIE:directory dependencies?
I'm also new to jam, and I just tackled this same problem myself. Here's
what I came up with:
# SubIncludeOnce -- like SubInclude, but will only include each Jamfile once.
# This is handy for specifying dependencies between things in different
# directories. Usually SubIncludeOnce has to go at the end of the Jamfile.
rule SubIncludeOnce {
    local i ;
    local include_marker ;
    include_marker = included ;
    # value of include_marker is the concatenated directory names in the
    # path to the directory being included
    for i in $(<) {
        include_marker = $(include_marker)_$(i) ;
    }
    # if the variable whose name is the value of include_marker does not
    # exist, then we know we haven't included that directory yet.
    if ! $($(include_marker)) {
        # Do not include more than once
        $(include_marker) = TRUE ;
        SubInclude $(<) ;
    # } else {
    #     ECHO "Already included: " $(<) ;
    }
}
I think you'll also need something like this to set up the dependency
between the library and the executable:
Depends RunMe : s ;
The Depends line may have to come before the SubIncludeOnce line in the
Jamfile, and the SubIncludeOnce lines may have to go at the end of the
Jamfile (they did for me).
I use this instead of SubInclude to set up dependencies between
executables and libraries. If you forget to use SubIncludeOnce and use
SubInclude instead, you may see some things get built more than once.
I also have RH6, jam 2.2.5. I haven't modified the Jambase.
BTW, if I don't reply, it's probably because I'll be out for a week
starting Thursday.
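Applied to the original example, the test Jamfile might then read something like this sketch (the bare library target name "s" is taken from the post; gristed names may be needed in practice):

```
# TOP/test/Jamfile (sketch)
SubDir TOP test ;
Main RunMe : test.cpp ;
LinkLibraries RunMe : s ;
Depends RunMe : s ;
# at the end, per the caveat above:
SubIncludeOnce TOP shared ;
```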
Date: Tue, 10 Aug 1999 15:47:19 -0700 (PDT)
From: Diane Holt <dianeh@whistle.com>
Subject: Re: NEWBIE:directory dependencies?
When you're sitting in your top-level dir, your SubInclude's say to read in
the Jamfiles in the directories listed.
When you're sitting in the "test" dir, Jam is only going to read in the
local Jamfile, because you've not given it any directive to read in any
other Jamfile. So if you're only reading in the local Jamfile, then it's
only going to know about what's in that one. Having the executable link
against a library that gets built elsewhere isn't enough to make Jam go
someplace else to try and build it -- how would it know where it's supposed
to go? Using a relative (or even a full) path to specify where the library
can be found still isn't telling Jam where to go to build it (for all it
knows, maybe it gets built one place and installed into where it can be found).
Ordinarily, things are organized from a "top-down" perspective, and when
you're sitting in a subdirectory, you only want to build what's in that
directory (and anything below it). But there's no law that says you can't
go build things in other places if that's what you want to do. You just
need to tell it where to go.
BTW: I've gotten rid of things that aren't relevant to this particular thing,
like the header stuff -- oh, and I've used the -L flag to point to where s.a
lives (I didn't bother to make it libs.a...I was feeling lazy :)
So, given your example...
# Top-level Jamfile
SubInclude TOP shared ;
SubInclude TOP test ;
# Jamfile for shared
SubDir TOP shared ;
# To avoid being multiply included
S_INCLUDED = true ;
Library s : s.c ;
# Jamfile for test
# Note: Put this first so SubDir will still get set correctly
if ! $(S_INCLUDED) {
SubInclude TOP shared ;
}
SubDir TOP test ;
Main RunMe : main.c ;
LINKFLAGS on RunMe = -L $(TOP)/shared ;
LinkLibraries RunMe : s ;
Now, if you're in the "test" dir, and (lib)s.a needs to get built, it will:
% cd test
% jam -n
...found 19 target(s)...
...updating 4 target(s)...
Cc /tmp/roark/shared/s.o
cc -c -O -I/tmp/roark/shared -o /tmp/roark/shared/s.o /tmp/roark/shared/s.c
Archive /tmp/roark/shared/s.a
ar ru /tmp/roark/shared/s.a /tmp/roark/shared/s.o
Ranlib /tmp/roark/shared/s.a
RmTemps /tmp/roark/shared/s.a
Cc /tmp/roark/test/main.o
cc -c -O -I/tmp/roark/test -o /tmp/roark/test/main.o /tmp/roark/test/main.c
Link /tmp/roark/test/RunMe
cc -L /tmp/roark/shared -o /tmp/roark/test/RunMe /tmp/roark/test/main.o /tmp/roark/shared/s.a
Chmod /tmp/roark/test/RunMe
chmod 711 /tmp/roark/test/RunMe
...updated 4 target(s)...
And it will also work from the top-level directory:
% cd $TOP
% jam -n
...found 19 target(s)...
...updating 4 target(s)...
Cc /tmp/roark/shared/s.o
cc -c -O -I/tmp/roark/shared -o /tmp/roark/shared/s.o /tmp/roark/shared/s.c
Archive /tmp/roark/shared/s.a
ar ru /tmp/roark/shared/s.a /tmp/roark/shared/s.o
Ranlib /tmp/roark/shared/s.a
ranlib /tmp/roark/shared/s.a
RmTemps /tmp/roark/shared/s.a
rm -f /tmp/roark/shared/s.o
Cc /tmp/roark/test/main.o
cc -c -O -I/tmp/roark/test -o /tmp/roark/test/main.o /tmp/roark/test/main.c
Link /tmp/roark/test/RunMe
cc -L /tmp/roark/shared -o /tmp/roark/test/RunMe /tmp/roark/test/main.o /tmp/roark/shared/s.a
Chmod /tmp/roark/test/RunMe
chmod 711 /tmp/roark/test/RunMe
...updated 4 target(s)...
From: "Binder, Duane" <dbinder@globalmt.com>
Subject: RE: Setting Environment Variables
Date: Tue, 10 Aug 1999 18:05:24 -0500
Is there a reason that '&' is necessary in the NT actions?
It appears to me that Jam adds an extra space that CMD.exe does not ignore.
From: Laura Wingerd [mailto:laura@perforce.com]
Subject: Re: Setting Environment Variables
Is this something you could set right in the compile action? E.g.,
say you use the C++ rule to compile. You could modify your Jambase's
C++ actions to look something like:
if $(UNIX) {
actions C++ {
C51INC="$(C51INC)"
export C51INC
$(C++) -c $(C++FLAGS) $(OPTIM) -I$(HDRS) $(>)
}
}
if $(NT) {
actions C++ {
set C51INC=$(C51INC)&
$(C++) -c $(C++FLAGS) $(OPTIM) -I$(HDRS) -o$(<) $(>)
}
}
From: Roark Hennessy <RHennessy@Stac.com>
Subject: RE: NEWBIE:directory dependencies?
Date: Tue, 10 Aug 1999 18:46:20 -0700
Thanks that worked,
I understood all but, can you explain the:
LINKFLAGS on RunMe = -L $(TOP)/shared ;
Below...
BTW: I've gotten rid of things that aren't relevant to this particular thing,
like the header stuff -- oh, and I've used the -L flag to point to where s.a
lives (I didn't bother to make it libs.a...I was feeling lazy :)
# Jamfile for test
# Note: Put this first so SubDir will still get set correctly
if ! $(S_INCLUDED) {
SubInclude TOP shared ;
}
SubDir TOP test ;
Main RunMe : main.c ;
LINKFLAGS on RunMe = -L $(TOP)/shared ;
LinkLibraries RunMe : s ;
Date: 11 Aug 1999 03:53:30 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: Re: NEWBIE:directory dependencies?
my jamfile:
if ! $(INSTACAST_CLIENT_BACKEND_INCLUDED) {
SubInclude TOP client backend ;
}
SubDir TOP client gtkclient ;
% jam
Top level of source tree has not been set with TOP
If I put "SubDir TOP client gtkclient ;" before and after the if, then all
goes well.
Date: Tue, 10 Aug 1999 23:24:54 -0700 (PDT)
From: Diane Holt <dianeh@whistle.com>
Subject: RE: NEWBIE:directory dependencies?
It was just an illustration of how to point the linker to where
libraries live instead of using a path (you had used ../shared/libs.a
in one of your examples).
You don't actually need it in this case, since the library ends up
with a full-path name. But if you were linking against other libraries
that you weren't trying to build while in your "test" directory but that
weren't in the standard look-for-libraries places, then you'd use it.
Date: Tue, 10 Aug 1999 23:33:42 -0700 (PDT)
From: Diane Holt <dianeh@whistle.com>
Subject: Re: NEWBIE:directory dependencies?
This means you don't have $TOP set.
You wouldn't need to do this if you have $TOP set. My advice would
be to set $TOP.
Date: Thu, 12 Aug 1999 13:30:03 -0700
From: Brendan McCarthy <mccarthy@justintime.com>
Subject: Jam and Java
Has anybody on this list tackled the problem of dependency analysis when
compiling Java source? We currently use a system built on GNU make, and
the system cannot properly detect the targets that need to be built
because of the ambiguity of Java's "import" statement (similar to C's
"#include"). As a result there is no such thing as a "do nothing"
build, and everything is rebuilt every time. Can jam be used to solve
this problem?
Subject: Re: Jam and Java
From: "Mark D. Baushke" <mark.baushke@solipsa.com>
Date: Thu, 12 Aug 1999 13:51:31 -0700
mccarthy> Has anybody on this list tackled the problem of dependency
mccarthy> analysis when compiling Java source?
I've been looking for anyone who has java rules for jam myself. So
far, no one has come forth.
mccarthy> We currently use a system built on GNU make, and the system
mccarthy> cannot properly detect the targets that need to be built
mccarthy> because of the ambiguity of Java's "import" statement
mccarthy> (similar to C's "#include").
Yeah, lines like:
import com.domain.foo.*;
can be parsed by a regexp in the HDRSCAN variable, but it is more
difficult to look in the various .jar files to see which files
satisfy the import, or to notice lines such as
package com.domain;
that tell you in which package particular source files are to be
found. It is also difficult to know that a single .java file may
generate multiple .class files, and to keep the rules that clean
up derived .class files correct.
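For the easy half of that (spotting the import lines), an HDRSCAN setup might look like this sketch; the regexp, the rule name, and the JAVA_SOURCES placeholder are all guesses, and it does nothing about resolving imports against .jar contents:

```
# Sketch: teach jam to notice Java import lines via header scanning.
rule JavaImports {
    # $(>) holds the matched import names, e.g. com.domain.foo.*
    Includes $(<) : $(>) ;
}
HDRSCAN on $(JAVA_SOURCES) = "^[ \t]*import[ \t]+([^;]*)[ \t]*;" ;
HDRRULE on $(JAVA_SOURCES) = JavaImports ;
```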
mccarthy> As a result there is no such thing as a "do nothing" build,
mccarthy> and everything is rebuilt every time. Can jam be used to
mccarthy> solve this problem?
That would be my hope, but I do not yet have any evidence to back up the idea.
Date: 12 Aug 1999 21:22:25 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: shared object rule
this is in my Jamfile:
SUFSO default = .so ;
rule SharedLibrary {
Library $(<) : $(>) ;
MainFromObjects $(<)$(SUFSO) ;
LinkLibraries $(<)$(SUFSO) : $(<) ;
LINKFLAGS on $(<)$(SUFSO) += -shared ;
RmTemps $(<)$(SUFSO) : $(<)$(SUFLIB) ;
}
SharedLibrary foo : foo.c ;
and all works, except everytime I build, it rebuilds everything:
% jam
...found 12 target(s)...
...updating 3 target(s)...
C++ chat.o
Archive chat.a
Ranlib chat.a
Link chat.so
Chmod chat.so
...updated 3 target(s)...
% jam
...found 12 target(s)...
...updating 3 target(s)...
C++ chat.o
Archive chat.a
Ranlib chat.a
Link chat.so
Chmod chat.so
...updated 3 target(s)...
I think it's rebuilding the .so because the .a is missing, and recompiling
the .c's because they are needed to build the .a.
I think what I need is to make the .so depend on the .c's, but I
have no idea how to get that to work.
I tried adding
Depends $(<)$(SUFSO) : $(>) ;
and
Depends $(<) : $(>) ;
in the first line of the rule, but neither worked :(
Date: Thu, 12 Aug 1999 14:59:10 -0700 (PDT)
Subject: Re: Jam and Java
There are various dependency issues with Java that no build
tool has quite caught up with (not even Jam or Gnu make) -- and
of course, Java keeps changing on us. :-)
There are two issues with Java that put constraints on what
you can do with a build tool.
The first issue is that the javac compiler's built-in
dependency-checking still doesn't reliably work for a set of
Java files with circular dependencies. That is, if you try to
compile a specific .java file, or a few specific .java files,
it may miss compiling others that these files depend on. The
only way to reliably catch these dependencies is to run javac
with a wildcard to catch all of them:
javac *.java
This is actually pretty fast, even for directories with a lot of Java files.
The second issue is that the javac compiler creates .class
files with names that it generates on the fly. Nobody but javac
knows what these files will be named, so it becomes a headache
to maintain the build files. If you're going to be doing
something with the .class files, such as making a .jar archive,
you'll need to use a wildcard:
jar cf Cookie.jar *.class
These issues have prompted me to set up my Java development
in directories that are related to packages (a fine idea in its
own right, I think) and to write rules in which the .jar file is
the target and the above commands are in the rules. I let javac
and wildcards handle the finer granularity of the build.
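Rules in that package-per-directory style might be sketched like this (the rule name is invented; the source list only supplies timestamps for dependency checking, and javac's own wildcard does the real work):

```
# Sketch: the .jar is the target; javac and jar wildcards do the rest.
rule JavaJar {
    Depends $(<) : $(>) ;
    Clean clean : $(<) ;
}
actions JavaJar {
    javac *.java
    jar cf $(<) *.class
}
```

It could be invoked as, say, JavaJar Cookie.jar : [ GLOB . : *.java ] ; if your version of jam provides GLOB.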
P.S.: BTW, you might be interested in the Java-handling Jambase
that Ames Carlson wrote. You can find it in the list archives:
Date: Thu, 12 Aug 1999 15:38:41 -0700 (PDT)
From: Diane Holt <dianeh@whistle.com>
Subject: Re: shared object rule
Actually, there's lots of reasons, but they all come from the same
basic thing -- you're removing the .a that MainFromObjects uses,
and that Library wants as well, and so does LinkLibraries.
So the short answer is: don't remove the .a.
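In other words, the rule from the original post minus the RmTemps line; a sketch:

```
rule SharedLibrary {
    Library $(<) : $(>) ;
    MainFromObjects $(<)$(SUFSO) ;
    LinkLibraries $(<)$(SUFSO) : $(<) ;
    LINKFLAGS on $(<)$(SUFSO) += -shared ;
    # no RmTemps here: the .a must survive so jam can compare
    # timestamps against it on the next run
}
```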
From: "Reddy, Jagannatha" <Jagannatha_Reddy@bmc.com>
Subject: Building Shared Library
Date: Thu, 12 Aug 1999 17:50:58 -0500
I am new to Jam. I have a requirement to build
Shared Libraries on multiple OS (AIX, HP-UX, Solaris).
Please let me know if you have any solutions for the same.
Date: Thu, 12 Aug 1999 18:58:04 -0400
From: Randy McCaskill <Randy_McCaskill@lcc.com>
Subject: Problem with jam dependencies
I am having a problem with jam's generated dependency list. I have
trimmed my tree down to a simple example (that is attached in a zip
file). I have a subdirectory with a couple of headers files in it. The
source file (main.c) includes one of the header files using
<SubDir/file1.h>. That include file then includes another include file
"file2.h" (using quotation marks since it is also in SubDir. The source
file (which isn't in SubDir) should be dependent on both files, but jam
doesn't catch the dependency on file2.h since it can't find the file in
the search path. I could add SubDir to my include path so that it is
searched, but outside of my small example, I have quite a few
directories in my include tree and I don't really want to have to add
them all.
Is there a simple solution to my problem? I am not currently on the
mailing list, but am reading via the archives, so please reply directly
as well as to the mailing list.
From: Paul Haffenden <pjh@unisoft.com>
Date: Fri, 13 Aug 1999 11:00:39 BST
Subject: Allowing the rules to determine the target specified on the command line.
We have found it useful to allow our jam rules to find out
how jam has been invoked.
We have several extra pseudo targets
and find that when we are using our
'source' pseudo target, we don't want
all the compile rules to be active.
By adding code in jam.c, we make a symbol $(ARGV)
visible that contains the targets specified
on the command line.
In our Jamrules file we have:
for i in $(ARGV) {
switch $(i) {
case sourceood : TARGET_SOURCE = 1 ; TARGET_NOBUILD = 1 ;
TARGET_SOURCEOOD = 1 ;
Depends sourceood : source ;
case source* : TARGET_SOURCE = 1 ; TARGET_NOBUILD = 1 ;
case push* : TARGET_PUSH = 1 ;
TARGET_NOSUBINCS = 1 ; TARGET_NOBUILD = 1 ;
case chmog* : TARGET_CHMOG = 1 ;
case scen* : TARGET_SCEN = 1 ; TARGET_NOBUILD = 1 ;
case pkgs* : TARGET_PKGS = 1 ; TARGET_NOBUILD = 1 ;
case lang* : TARGET_LANG = 1 ; TARGET_NOBUILD = 1 ;
case install* : TARGET_INSTALL = 1 ;
case root* : TARGET_ROOT = 1 ; TARGET_NOBUILD = 1 ;
case srcpkgs* : TARGET_SRCPKGS = 1 ; TARGET_NOBUILD = 1 ;
case remote : TARGET_REMOTE = 1 ;
}
}
The variables TARGET_* are then tested in our rules to see
if they are required to do something.
e.g:
if $(TARGET_INSTALL){
# now call Makei to do the hard work.
Makei $(<) : $(s) : $(4) :
$(i)$(t5)T : $(stag) : $(tdir) : $(sdir) ;
}
if $(TARGET_PKGS){
Makefilelist pkgs : $(PKG)$(i) :
abits$(SLASH)$(tdir)$(SLASH)$(<) $(4) f ;
}
Here are the code changes:
if( strlen( date ) == 25 )
date[ 24 ] = 0;
var_set( "JAMDATE", list_new( L0, newstr( date ) ), VAR_SET );
}
/* set up argv. This is a UniSoft Addition */
if( argc )
{
    LIST *largv = L0;
    int i;

    /* list_new returns the list head, so always capture its result */
    for( i = 0; i < argc; i++ )
        largv = list_new( largv, newstr( argv[i] ) );

    var_set( "ARGV", largv, VAR_SET );
}
/* load up environment variables */
var_defines( environ );
var_defines( othersyms );
From: Peter Glasscock <peterg@harlequin.co.uk>
Subject: Re: Allowing the rules to determine the target specified on the command line.
Date: Fri, 13 Aug 1999 11:18:57 +0100 (BST)
With large projects, the scanning of source files is quite
time-consuming and can severely slow down a build when only one or two
source files have changed.
I have implemented some new rules (FILEOPEN, FILEWRITE, FILECLOSE) to
allow Jam to write lines to files whilst processing the jambase (or
equivalent). By using these to write valid jam files, which can be read
in on later invocations, I "cache" the dependency information found with
HDRRULE and HDRSCAN.
At the moment, I am using a wrapper around Jam which sets a variable on
the command-line with a list of all the targets that the user has asked
for. A solution like this, if it was incorporated into the main Jam
source would make this hack unnecessary.
If a large number of other people are interested in my dependency
caching, and/or the FILE* rules I've added, I'll post them. The actual
dependency caching use of the FILE* rules is quite complicated and
took some time to get right with my own replacement for the Jambase.
It would probably take a bit of work to incorporate it into the example
one that comes with the Jam source.
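A purely hypothetical use of such rules (the FILEOPEN/FILEWRITE/FILECLOSE names come from the post, but their exact signatures were not published, so this is a guess):

```
# Guessed usage: persist scanned dependencies as valid jam statements,
# then read them back on a later invocation via include.
FILEOPEN depcache.jam ;
FILEWRITE depcache.jam : "Includes main.c : subdir/file2.h ;" ;
FILECLOSE depcache.jam ;
```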
From: "Darrin Edelman" <darrin@aetherworks.com>
Date: Fri, 13 Aug 1999 09:23:18 -0500
Subject: Cross Compiling for VxWorks on NT using Jam
We have run into an issue that we thought others might also have to deal
with or may need to deal with in the future. That is that Jam is not really
cross-compile friendly. It assumes a certain form of library exists on each
platform which is generally a reasonable thing to do since on Windows you
have Windows libraries and on Unix you have Unix style libraries. This
however is not necessarily the case if you are cross-compiling.
The fix is relatively simple -- you need to allow the use of different style
libraries independent of system architecture. Attached are some diffs that
do just this for Unix style libraries under Windows. Note that we haven't
really added any new code -- we have just copied the appropriate code from
fileunix.c as mentioned in the diffs into filent.c to support this feature.
Of course, this is a quick and dirty hack that uses the environment to
determine which libraries should be used. Ideally, this code should be
factored out into a function and supported via a command-line option for
cross-compilation.
We really don't like the notion of using our own private version of Jam, so we
would be more than happy to spend the time to add the command line option
properly but hesitate to do the work without assurance that it will be
included in the next release of Jam. If anyone knows how to make this
happen please contact me.
Also, if someone knows a better way to handle this then please do tell...
From: Temesgen Habtemariam [mailto:temesgen@jeeves.net]
Sent: Thursday, May 13, 1999 11:52 AM
Subject: Cross Compiling for VxWorks on NT
Jam assumes we are using NT style libraries if we are compiling on NT
platform. The vxWorks libraries are archived in Unix style and need to be
scanned in that manner. So I had to copy the code for file_archscan() from
the file fileunix.c to filent.c for the case where we are cross compiling
for VxWorks. I have assumed the variable VXWORKS is defined for VxWorks
compilations (maybe there is a better way of telling whether we are
cross compiling). Here is a file that has the diff for filent.c:
filename="diff.txt"
D:/Jeeves\tools\Jam\filent.c ====
***************
*** 16,24 ****
# include "jam.h"
# include "filesys.h"
# include <io.h>
# include <sys/stat.h>
-
/*
* filent.c - scan directories and archives on NT
*
# include "jam.h"
# include "filesys.h"
+ /* Added the following two includes to use var_get function in
+ the VXWORKS HACK in file_archscan function. */
+ # include "lists.h"
+ # include "variable.h"
+
# include <io.h>
# include <sys/stat.h>
/*
* filent.c - scan directories and archives on NT
*
***************
*** 168,173 ****
char *archive;
void (*func)();
{
+
+ /************************* BEGIN HACK
*******************************/
+
+ /* VXWORKS uses Unix type libraries. The following code is copied from
+ fileunix.c line 164 - 248. */
+ /* FIXIT: We are assuming VXWORKS is defined for VXWORKS compiles */
+
+ if(var_get("VXWORKS")) /* cross-compiling for vxworks */
+ {
+ struct ar_hdr ar_hdr;
+ char buf[ MAXJPATH ];
+ long offset;
+ char *string_table = 0;
+ int fd;
+
+ if( ( fd = open( archive, O_RDONLY, 0 ) ) < 0 )
+ return;
+
+ if( read( fd, buf, SARMAG ) != SARMAG ||
+ strncmp( ARMAG, buf, SARMAG ) )
+ {
+ close( fd );
+ return;
+ }
+
+ offset = SARMAG;
+
+ if( DEBUG_BINDSCAN )
+ printf( "scan archive %s\n", archive );
+
+ while( read( fd, &ar_hdr, SARHDR ) == SARHDR &&
+ !memcmp( ar_hdr.ar_fmag, ARFMAG, SARFMAG ) )
+ {
+ char lar_name[256];
+ long lar_date;
+ long lar_size;
+ long lar_offset;
+ char *c;
+ char *src, *dest;
+
+ strncpy( lar_name, ar_hdr.ar_name, sizeof(ar_hdr.ar_name) );
+
+ sscanf( ar_hdr.ar_date, "%ld", &lar_date );
+ sscanf( ar_hdr.ar_size, "%ld", &lar_size );
+
+ if (ar_hdr.ar_name[0] == '/')
+ {
+ if (ar_hdr.ar_name[1] == '/')
+ {
+ /* this is the "string table" entry of the symbol table,
+ ** which holds strings of filenames that are longer than
+ ** 15 characters (ie. don't fit into a ar_name
+ */
+
+ string_table = malloc(lar_size);
+ lseek(fd, offset + SARHDR, 0);
+ if (read(fd, string_table, lar_size) != lar_size)
+ printf("error reading string table\n");
+ }
+ else if (ar_hdr.ar_name[1] != ' ')
+ {
+ /* Long filenames are recognized by "/nnnn" where nnnn is
+ ** the offset of the string in the string table represented
+ ** in ASCII decimals.
+ */
+ dest = lar_name;
+ lar_offset = atoi(lar_name + 1);
+ src = &string_table[lar_offset];
+ while (*src != '/')
+ *dest++ = *src++;
+ *dest = '/';
+ }
+ }
+
+ c = lar_name - 1;
+ while( *++c != ' ' && *c != '/' )
+ ;
+ *c = '\0';
+
+ if ( DEBUG_BINDSCAN )
+ printf( "archive name %s found\n", lar_name );
+
+ sprintf( buf, "%s(%s)", archive, lar_name );
+
+ (*func)( buf, 1 /* time valid */, (time_t)lar_date );
+
+ offset += SARHDR + ( ( lar_size + 1 ) & ~1 );
+ lseek( fd, offset, 0 );
+ }
+
+ if (string_table)
+ free(string_table);
+
+ close( fd );
+ }
+ /*************************** END HACK ************************************/
+ else /* NOT cross-compiling for vxworks */
+ {
struct ar_hdr ar_hdr;
char *string_table = 0;
char buf[ MAXJPATH ];
***************
*** 255,260 ****
}
close( fd );
+ }
}
# endif /* NT */
From: "Hoff, Todd" <Todd.Hoff@LIGHTERA.com>
Subject: RE: Cross Compiling for VxWorks on NT using Jam
Date: Fri, 13 Aug 1999 09:25:47 -0700
What we do is use the directory names to encode what should be built.
Something like: vx-ppc, win32, vx-x86.
Our crack jam expert made these translate into the right build environment
and rules.
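One hedged way to act on such names in Jamrules (the directory names are from the post; the variable name and toolchain values are invented for illustration):

```
# Sketch: pick a toolchain from the build directory's basename.
switch $(BUILD_DIR:B) {
    case vx-ppc : CC = ccppc ; C++ = ccppc ;
    case vx-x86 : CC = cc386 ; C++ = cc386 ;
    case win32  : CC = cl ;    C++ = cl ;
}
```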
From: Karl Klashinsky <klash@cisco.com>
Subject: Re: Cross Compiling for VxWorks on NT using Jam
We do the similar thing here for our cross-compiling product.
Each piece of our product has one "generic" Jamfile that describes the
targets (i.e., contains rules like Main, Library, etc). Then a
"higher level" Jamfile does a "SubDir" into various obj-<targ> dirs,
and does an "include" of the generic Jamfile.
We have intentionally avoided hacking jam source or Jambase.
From: Laura Wingerd <laura@perforce.com>
Date: Mon, 16 Aug 1999 12:28:02 -0700 (PDT)
Subject: Re: last rule for jam?
Well, I searched the jamming archive, and I've searched my personal
mail files, but didn't find anything, although I could have sworn
I've seen postings about this...
Jam doesn't have any concept of "after the build is complete". If
you have a bunch of things that need to be done to stuff after it
is built, just define a rule that makes its outputs dependent on
its inputs, and invoke that rule using your built things as inputs.
E.g., if your last step is to bundle up everything you build in a
tar file, invoke your tar rule on everything you build. It won't
get run until the build is complete.
In fact, Jambase's Install* rules work this way. Because the Install
rule's outputs (files in the install path) are dependent on its
inputs (files in the build path), everything always gets "installed"
last. Take a look at those rules in Jambase and see if you can use
them as an example.
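A tar rule in that style might be sketched as follows (rule and target names invented):

```
rule Tar {
    # the tar file depends on the built things, so the action
    # cannot run until they are all up to date
    Depends $(<) : $(>) ;
    Depends all : $(<) ;
    Clean clean : $(<) ;
}
actions Tar {
    tar cf $(<) $(>)
}
Tar bundle.tar : exe1 exe2 ;
```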
The problem arises when your last step needs to be run regardless
of whether the build was completely successful. For example, if
the last step is to generate a report that lists what got built,
you can't make that report dependent on built targets -- any failed
target will cause the report not to be generated. But you can't
make it independent of your built targets either, because then Jam
may try to run it before the build is done. In that situation, I
think your best bet is to run a separate post-build script after
jam completes.
From: "Hoff, Todd" <Todd.Hoff@LIGHTERA.com>
Subject: RE: Re: last rule for jam?
The ghost thread. We need a thread sensitive :-)
The problem is that the build builds numerous products. The number
of dependencies would be huge and unmaintainable. It would be
more general to have a start and an end action.
This is what I ended up doing.
Date: Mon, 16 Aug 1999 19:49:50 -0700 (PDT)
From: Diane Holt <dianeh@whistle.com>
Subject: RE: Re: last rule for jam?
I sent mail about doing this not long ago. Maybe neither of you found it
in the archive because the Subject wasn't about doing post-"all" targets --
it was "Re: Jam's dependencies are broken" (someone else thinks they are,
not me :). Oh well...
Anyway, you don't need to make your post-"all" targets depend on every
individual target you need built before them -- you just need to make them
depend on "all" (currently, "all" depends on everything else, but nothing
depends on "all").
You can do it in Jambase, like this:
Depends last : all ;
Depends all : shell files.... (etc.)
<snip>
NOTFILE last all first shell....(etc.)
If you don't want to diddle with Jambase, you can just have it in your
Jamrules file (not [necessarily] in a particular rule).
Then in your post-"all" rule(s) for your after-all-is-done targets, you can have:
rule WrapUp {
Depends last : $(<) ;
etc....
}
If you want "last" targets to get built by default -- i.e., whenever you run
just 'jam' (as opposed to running 'jam last', like you do 'jam install', to
get everything built) -- you need to change jam.c as well:
153c153
< char *all = "all";
285c285
< status |= make( 1, &all, anyhow );
As an example: here's a top-level Jamfile:
# Jamfile for $TOP/src ;
SubInclude TOP src a ;
SubInclude TOP src b ;
SubDir TOP src ;
Boot start ;
WrapUp finish ;
Main foo : foo.c ;
And here are the rules for Boot and WrapUp:
rule Boot { Depends first : $(<) ; }
actions quietly Boot { echo ; echo "Starting build at $(JAMDATE)..." ; echo }
rule WrapUp { Depends last : $(<) ; }
actions quietly WrapUp { echo ; echo "Build done at `date`" ; echo }
And running 'jaml' (which is my 'jam' with "last" as the default target) with
my modified Jambase:
% jaml -f ../Jambase
...found 28 target(s)...
...updating 6 target(s)...
Starting build at Mon Aug 16 19:43:56 1999...
Link /tmp/last/src/a/exe1
Chmod /tmp/last/src/a/exe1
Link /tmp/last/src/b/exe2
Chmod /tmp/last/src/b/exe2
Cc /tmp/last/src/foo.o
Link /tmp/last/src/foo
Chmod /tmp/last/src/foo
Build done at Mon Aug 16 19:43:57 PDT 1999
...updated 6 target(s)...
From: "Raymond Wiker" <raymond@orion.no>
Date: Tue, 24 Aug 1999 17:06:09 +0200 (CEST)
Subject: Creating relative paths, outside current tree
I'm working on a project where the project source files are
located under /DevRoot/src/..., while a number of third-party modules
are placed under /DevRoot/ext/...
I have a top-level Jamfile at //DevRoot/src/TS/Jamfile, which
includes Jamfiles for directories at lower levels. At the moment the
only rules I use throughout the Jamfiles are for compiling idl files into
C++ headers, skeletons and stubs, and the rules assume that the
idl file is in the current directory (i.e., the same directory as the
Jamfile that refers to it), and that the generated files should also
be placed in this directory. In
particular, I want to have Jam call the idl generator in such a way
that the generated files are placed in the current directory, while
the IDL source files can be placed outside the tree spanned by the set
of Jamfiles (e.g, under /DevRoot/ext). An example of a valid IDL
command (for a particular idl compiler) is
idl -B -A -I../../../../ext/ACE_wrappers/TAO/orbsvcs \
-out . \
../../../../ext/ACE_wrappers/TAO/orbsvcs/examples/CosEC/Factory/CosEventChannelFactory.idl
It would be quite acceptable to explicitly list the include
paths for the idl generator, as well as for the idl file, but I want
the paths to be relative. Note that TOP is //DevRoot/src/TS, and I
want to access files under //DevRoot/ext.
(Hum... I just realised that I could use a variable DEVROOT,
and make TOP relative to that, and make the dependencies relative to
that... is there a viable alternative?)
My current Jamrules file looks like this:
filename="Jamrules"
# Jam rules for compiling idl files to C++, using the Orbix idl compiler.
if $(DEVROOT) {
makeDirName IDL : $(DEVROOT) ext orbix bin "idl.exe" ;
} else {
EXIT "Please set the environment variable DEVROOT to the root of your Perforce client" ;
}
IDLFLAGS = "-A" ;
IDLBOAFLAGS = -B ;
NOTFILE idlfiles ;
rule Idl {
MakeLocate $(<) : $(LOCATE_TARGET) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
Depends idlfiles : $(<) ;
Depends $(<) : $(>) ;
}
rule IdlBOA {
MakeLocate $(<) : $(LOCATE_TARGET) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
Depends idlfiles : $(<) ;
Depends $(<) : $(>) ;
}
actions Idl {
$(IDL) $(IDLFLAGS) $(>)
}
actions IdlBOA {
$(IDL) $(IDLFLAGS) $(IDLBOAFLAGS) $(>)
}
rule IdlBOAObject {
local _newExts ;
_newExts = .hh S.CPP C.CPP ;
IdlBOA $(<:B)$(_newExts) : $(<) ;
}
rule IdlObject {
local _newExts ;
_newExts = .hh S.CPP C.CPP ;
Idl $(<:B)$(_newExts) : $(<) ;
}
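For reference, a Jamfile pulling in these rules might look like this (the directory and file names are just examples):

SubDir TOP Factory ;
IdlBOAObject CosEventChannelFactory.idl ;

The rule then locates the generated .hh/S.CPP/C.CPP files via MakeLocate in $(LOCATE_TARGET) and hangs them off the idlfiles pseudotarget, so "jam idlfiles" regenerates whatever is out of date.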
And the top-level Jamfile:
# Top level Jamfile for TradeSys project(s).
TOP = . ;
SubInclude TOP MarketServer ;
SubInclude TOP TradeSysRM ;
SubInclude TOP TradeSysGW ;
SubInclude TOP SessionManagement ;
Date: Wed, 25 Aug 1999 11:28:16 +0200
From: Igor Boukanov <igor.boukanov@fi.uib.no>
Subject: Re: Creating relative paths, outside current tree
Try to put the following at the top of Jamrules (I assume Jamrules is in
.../DevRoot/src/TS):
if ! $(DEVROOT) {
local tmp ;
# Make tmp a relative path from .../DevRoot/src/TS to .../DevRoot
makeSubDir tmp : src TS ;
# Assuming that TOP is a relative path from the jam invocation directory
# to .../DevRoot/src/TS, prefix TOP with tmp and set the result as DEVROOT.
# Note: this does not work if TOP is an absolute path.
DEVROOT = $(TOP:R=$(tmp)) ;
}
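To make the trick concrete, here is a hypothetical walk-through. It assumes makeSubDir sets tmp to "../.." and that jam is invoked from .../DevRoot/src/TS/MarketServer:

TOP = .. ;                    # relative path from MarketServer to .../DevRoot/src/TS
tmp = ../.. ;                 # relative path from TS up to .../DevRoot
DEVROOT = $(TOP:R=$(tmp)) ;   # yields ../../.. -- i.e. .../DevRoot, relative to MarketServer

Since every component involved is "..", rooting TOP at tmp produces a path that reaches DevRoot from wherever jam was invoked.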
Also, I would advise replacing "TOP = . ;" in your top Jamfile with:
SubDir TOP ;
Date: Wed, 25 Aug 1999 15:29:06 -0700
From: Hayden Ridenour <hridenou@interwoven.com>
Subject: file/directory names with whitespace
While trying to build the latest version of Jam/MR from the sources, I've
encountered the problem that -I<path> expansions of paths with a space
become -I<first-part> -I<second-part>. Does Jam/MR not support filenames
with spaces?
From: Laura Wingerd <laura@perforce.com>
Subject: Re: file/directory names with whitespace
Date: Wed, 25 Aug 1999 16:35:02 -0700 (PDT)
When jam parses a Jamfile, it treats spaces as delimiters.
Thus, the assignment:
HDRS = /some/where/over the/rainbow ;
sets HDRS to a list of two values, "/some/where/over" and "the/rainbow".
Luckily, you can use quotes to tell jam that a space is a data value, not
a delimiter:
HDRS = '/some/where/over the/rainbow' ;
Date: Wed, 25 Aug 1999 18:39:45 -0500 (CDT)
Subject: Re: file/directory names with whitespace
Sort of. Jam macros will do a kind of mix-and-match when you concatenate a
string with a macro: each item in the macro is concatenated with
the string, as explained in the manual. Spaces are used to determine
what constitutes the items in a macro.
For example:
includes = /usr/local/include /usr/me/include /usr/project/include ;
then specifying -I$(includes) gives you
-I/usr/local/include -I/usr/me/include -I/usr/project/include
From: Yariv Sheizaf <yarivs@cimatron.co.il>
Date: Tue, 31 Aug 1999 17:29:40 +0200
Subject: Jam/MR & Visual C++
Does anybody have experience with Jam/MR in the MS Visual C++ environment?
From: David.Buscher@durham.ac.uk
Date: Thu, 2 Sep 1999 14:27:46 +0100 (BST)
Subject: InstallFile broken on IRIX 6.2?
I have the following Jamfile and an empty Jamrules:
### My jamfile
SubDir STAGING c40Comms libsrc ;
InstallFile /software/Electra/include : c40string.h stdio40.h ;
### End of jamfile
When I try to do a 'jam install' under IRIX 6.2, I get the following output:
...found 9 target(s)...
...updating 2 target(s)...
Install /software/Electra/include/c40string.h
Chmod /software/Electra/include/c40string.h /software/Electra/include/stdio40.h
Cannot access /software/Electra/include/stdio40.h: No such file or directory
chmod 644 /software/Electra/include/c40string.h /software/Electra/include/stdio40.h
...failed Chmod /software/Electra/include/c40string.h /software/Electra/include/stdio40.h ...
...removing /software/Electra/include/c40string.h
Install /software/Electra/include/stdio40.h
...failed updating 2 target(s)...
It seems as though InstallFile is trying to do a Chmod on all the files
after only the first file has been copied. This Jamfile works fine on
Solaris 2.5 but not on IRIX 6.2. Is this a known bug? I am using jam-2.2.
Date: Thu, 2 Sep 1999 09:47:07 -0700 (PDT)
From: Diane Holt <dianeh@whistle.com>
Subject: Re: InstallFile broken on IRIX 6.2?
Yep, you found a bug. Looks like when INSTALL isn't set, all bets are off:
[From InstallInto]:
for i in $(>) {
Install $(i:G=installed) : $(i) ;
}
if ! $(INSTALL) {
Chmod $(t) ;
if $(OWNER) { Chown $(t) ; OWNER on $(t) = $(OWNER) ; }
if $(GROUP) { Chgrp $(t) ; GROUP on $(t) = $(GROUP) ; }
}
since "t" is set to all the source files.
The fix is to do the Chmod'ing in a for-loop the same way the Install'ing is done:
if ! $(INSTALL) {
for i in $(t) {
Chmod $(i) ;
if $(OWNER) { Chown $(i) ; OWNER on $(i) = $(OWNER) ; }
if $(GROUP) { Chgrp $(i) ; GROUP on $(i) = $(GROUP) ; }
}
}
% jam -n install
...found 8 target(s)...
...updating 2 target(s)...
Install /tmp/install/foo.tmp
cp foo.tmp /tmp/install/foo.tmp
Chmod /tmp/install/foo.tmp
chmod 644 /tmp/install/foo.tmp
Install /tmp/install/bar.tmp
cp bar.tmp /tmp/install/bar.tmp
Chmod /tmp/install/bar.tmp
chmod 644 /tmp/install/bar.tmp
...updated 2 target(s)...
Since this is an actual bug, someone from Perforce should probably pick
this fix up for real.
From: David.Buscher@durham.ac.uk
Date: Thu, 2 Sep 1999 18:19:04 +0100 (BST)
Subject: Re: InstallFile broken on IRIX 6.2?
Having looked at the code, though, I realise that, newbie that I am, I
don't understand why the original code was broken, or why the new code is
better. I think I am confused by the order in which jam executes actions:
obviously it is not in the order they appear in the rules - so what order
is it?
Date: Thu, 02 Sep 1999 15:32:08 -0700
Subject: jam on Solaris and install rule
I've encountered a problem with the Install rule when moving from Linux
to Solaris. This is using an unmodified Jambase. For both of these
platforms, Jambase uses the "install" executable to perform installs.
However, the Linux (GNU) "install" has the source and destination
arguments in a different order than the Solaris "install". Jam expects
the GNU convention, and breaks under Solaris. Has anyone else
encountered this?
My current fix is to define the INSTALL variable for Solaris to point at
a local shell script that swaps the arguments.
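Such a wrapper might look like the following sketch. The flag handling and the argument order expected by the native install(1) are assumptions here; adjust them to match your Jambase's actual INSTALL invocation:

```shell
#!/bin/sh
# Sketch of a wrapper that reorders arguments for a native install(1)
# whose source/destination order differs from GNU's.
swap_last_two() {
    flags=""
    while [ $# -gt 2 ]; do
        flags="$flags $1"
        shift
    done
    # jam passed "... <src> <dest>"; the native tool wants "... <dest> <src>"
    echo "${flags# }${flags:+ }$2 $1"
}
# The real script would end with:  exec /usr/sbin/install $(swap_last_two "$@")
# Here we just print the reordered arguments:
swap_last_two -m 644 foo.h /usr/include
```

Pointing the INSTALL variable at such a script keeps the Jambase itself untouched.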
Subject: Re: jam on Solaris and install rule
From: "Mark D. Baushke" <mark.baushke@solipsa.com>
Date: Thu, 02 Sep 1999 16:45:14 -0700
I believe that you will find that a Solaris box (cf. both SunOS 5.6
and SunOS 5.7) has two different install executables. One is part of
the optional ucb package and lives in /usr/ucb/install; the other is
/usr/sbin/install.
I have never seen any problems with the order of the arguments; rather,
I have seen problems with the additional -m$(MODE), -o$(OWNER) and
-g$(GROUP) arguments.
% /usr/ucb/install
usage: install [-cs] [-g group] [-m mode] [-o owner] file ... destination
install -d [-g group] [-m mode] [-o owner] dir
% /usr/sbin/install
usage: install [options] file [dir1 ...]
%
The one that lives in /usr/ucb/install will tend to work like the GNU
version. I suppose that the Jambase should be changed to use a full
pathname for SOLARIS hosts, but that would assume the administrator
had installed the /usr/ucb tools unless the Jambase was changed to
specify both the /usr/sbin/install pathname as well as its arguments.
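A minimal sketch of that Jambase change, assuming the ucb tools are present (INSTALL is the variable Jambase already uses; the guard shown is illustrative):

if $(OS) = SOLARIS { INSTALL ?= /usr/ucb/install ; }

Using ?= rather than = would still let a site override INSTALL on the command line or in Jamrules.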
From: "Dowdy, Mark" <mark@ciena.com>
Date: Tue, 7 Sep 1999 15:33:02 -0700
Subject: Dependency Generation "Broken"
We're using Jam to build a fairly large, multi-directory
project using GNU tools and have uncovered a problem with
Jam's dependency generation. It appears that the compiler's
algorithm used to locate included files differs from Jam's
method in a way that causes some dependencies to be missed.
According to the GNU documentation, the compiler first
searches the directory where the current input file came
from and then searches the -I directories (the directories
in the HDRS variable). The problem arises when a file is
included with a directory name (i.e. #include "foo/fooFile.h")
and this header file includes another file from its directory
(i.e. #include "fooFile2.h"). If the .../foo directory is not
in the HDRS list, the dependencies for fooFile2.h are missed
even though the compiler doesn't find any problems.
Has anyone else seen this problem? If so, do you have a
Jam fix? The obvious workaround for us is to include
directories on any file included in a header file, but
it would certainly be better if Jam behaved the way the
compiler does.
Date: Tue, 7 Sep 1999 17:42:08 -0500 (CDT)
Subject: Re: Dependency Generation "Broken"
We had a related problem where the header directories were not being
searched in the same order the compiler would, so a duplicate include
file was being found. We fixed up the order and that solved that problem.
Your problem is a little different.
Date: Tue, 07 Sep 1999 17:07:19 -0700
Subject: Finding the project libraries
Help -- my executable Jamfiles can't find my libraries.
I have a project which builds multiple executables and multiple libraries
(shared among the executables). The source is distributed among various
directories, and I've set up a series of related Jamfiles to build the
directory tree. The output files (libraries, .o, and executables) are
placed in a build/<platform> subdirectory below each project, via
ALL_LOCATE_TARGET. So far, so good.
But the system only works when I build from within the topmost directory
(via "jam" or "jam myprog".) If I go to, say, the "myprog" directory
and type "jam", it cannot find any libraries previously built for the
project, claiming that they're missing. Thus I effectively cannot do
partial rebuilds at all.
Imagine "myprog" links in "libxxx", like so:
SubDir TOP myprog ;
Main myprog : myprog.cpp ;
LinkLibraries myprog : libxxx ;
libxxx has been built, and is in the directory
$(TOP)/libxxx/build/linux-x86. The LinkLibraries rule makes myprog
depend on libxxx, but Jam can't find libxxx when invoked from inside the
myprog directory. If I invoke it from the top-level directory via say
"jam myprog", it finds the libxxx fine and links it in.
I'm trying to figure out how Jam expects to solve this problem, so if
this isn't the right way to do things, please let me know. I have
various modules in various directories, some of which depend on the
targets of other modules. How do I set things up to allow modules to
find the built libraries they need during local builds, while still
allowing for a consistent global build?
I do not want to add all the module build directories to the link search
paths, if possible.
I have tried using an install rule to install libraries to a single
external directory, then making the LinkLibraries arguments contain the
path to that directory. This seemed to cause dependency problems when
doing a top-level build. It also feels clunky.
Are there any examples of Jamfile trees like this out there? I've
browsed the jamming archives, but haven't been able to find a jamfile
that does this. It seems like a very basic thing, so I have the feeling
I'm missing something.
Date: Tue, 7 Sep 1999 21:40:34 -0700 (PDT)
From: Diane Holt <dianeh@whistle.com>
Subject: Re: Finding the project libraries
Jam only knows where to find stuff based on two things: where it
currently is, or where it's been told to look.
If you have:
LinkLibraries myprog : libxxx ;
and libxxx isn't in the local directory, then you need to tell Jam where
to find it. Since it sounded like you're setting ALL_LOCATE_TARGET to
the build/<platform> subdir of <whatever> directory, then, when you're
in myprog, Jam will look for everything in ..../myprog/build/<platform> --
which isn't where libxxx lives -- it lives in ..../libxxx/build/<platform>.
So you need to have Jam look for it where it does live.
One way would be to have a symbol (e.g., LIBDIRS) that lists all the
library directories, then use SEARCH to include that list.
For example, in Jamrules (or if you'd rather, in a separate file that you
include from Jamrules), you could have:
LIBDIRS = $(TOP)/libxxx/build/$(PLATFORM)
$(TOP)/libyyy/build/$(PLATFORM)
$(TOP)/libzzz/build/$(PLATFORM)
;
Then, in the Jamfiles where you need to, you would add:
SEARCH = $(LOCATE_TARGET) $(LIBDIRS) ;
Note that doing it this way will mean that if you're in, say, the myprog
directory, and any of the libraries that myprog links against are out-of-date
with their source, they *won't* be rebuilt -- Jam will simply link against
whatever's there. But since it sounded like you didn't want it rebuilding
the libraries when you were in myprog anyway...
Even if you have a zillion library directories, the list should be pretty
trivial to generate.
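For instance, one hypothetical way to generate that list from just the library names, using jam's product expansion (LIBNAMES is an invented variable):

LIBNAMES = libxxx libyyy libzzz ;
LIBDIRS = $(LIBNAMES:R=$(TOP)) ;
LIBDIRS = $(LIBDIRS)/build/$(PLATFORM) ;

Each element of LIBNAMES is rooted at $(TOP) and then suffixed with /build/$(PLATFORM), yielding one search directory per library.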
If this idea won't work for you, let me know -- there are other ways of
doing it.
Date: Wed, 08 Sep 1999 13:05:09 +0200
From: Igor Boukanov <igor.boukanov@fi.uib.no>
Subject: Re: Finding the project libraries
The trick is to have a rule similar to SubInclude that processes the given
subdir only once, even if you use it several times on the same dir. I
call this rule ImportDir, and in your case to use it you simply add at
the end of your `TOP myprog' Jamfile:
ImportDir TOP libxxx build linux-x86 ;
You will also need to replace SubInclude by ImportDir in the topmost
Jamfile. Then you can run jam from any directory you like and it will
build only targets in that subdirectory and everything they depend on.
I put ImportDir (plus some other useful/useless stuff) into Jambase, but
you can add them to your topmost Jamrules file -- see the attached Jambase-ext:
# Additions to Jambase by Igor Boukanov, Igor.Boukanov@fi.uib.no
# new rules
# ImportDir TOP d1 d2 ... ; include a subdirectory Jamfile
# if not already included
# ImportFile TOP d1 d2 ... file ; include the given file
# if not already included
# IncludeFile TOP d1 ... dn file ; include the given file
#
# new utilities
# addFileName var : d1 d2 ... file ; $(var) += path from root to file
# makeFileName var : d1 d2 ... file ; $(var) = path from root to file
rule ImportFile {
if ! $(<[1]) {
EXIT "ImportFile syntax error: TOP should be given" ;
}
if ! $($(<[1])) {
EXIT "ImportFile syntax error: TOP should be already set" ;
}
if ! $(<[2]) {
EXIT "ImportFile syntax error: should have at least 2 arguments" ;
}
local ImportFile__marker ImportFile__i ;
ImportFile__marker = "imported__" ;
# build a unique marker name from the root value and the path components
for ImportFile__i in $($(<[1])) $(<[2-]) {
ImportFile__marker = $(ImportFile__marker)__$(ImportFile__i) ;
}
if ! $($(ImportFile__marker)) {
# Do not include more than once
$(ImportFile__marker) = TRUE ;
local ImportFile__path ;
makeFileName ImportFile__path : $($(<[1])) $(<[2-]) ;
include $(ImportFile__path) ;
}
}
rule ImportDir { ImportFile $(<) $(JAMFILE) ; }
rule IncludeFile {
if ! $(<[1]) {
EXIT "IncludeFile syntax error: TOP should be given" ;
}
if ! $(<[2]) {
EXIT "IncludeFile syntax error: should have at least 2 arguments" ;
}
local IncludeFile__path ;
makeFileName IncludeFile__path : $($(<[1])) $(<[2-]) ;
include $(IncludeFile__path) ;
}
rule addFileName {
if ! $(>) {
EXIT "Second argument in addFileName should have at least 1 component to form a file name" ;
}
if ! $(>[2]) { $(<) += $(>) ; }
else {
# In Jam one cannot get $(>[all except last]) directly
local addFileName__list addFileName__base addFileName__i ;
addFileName__list = $(>[1]) ;
addFileName__base = $(>[2]) ;
for addFileName__i in $(>[3-]) {
addFileName__list += $(addFileName__base) ;
addFileName__base = $(addFileName__i) ;
}
local addFileName__dir_path ;
makeDirName addFileName__dir_path : $(addFileName__list) ;
$(<) += $(addFileName__base:D=$(addFileName__dir_path)) ;
}
}
rule makeFileName { $(<) = ; addFileName $(<) : $(>) ; }
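A quick illustration of the two utility rules (the path components are invented; the exact separator comes from the makeDirName helper, which is not shown here):

makeFileName p : $(TOP) src util foo.c ;
# p is now the path from $(TOP) down to foo.c, e.g. $(TOP)/src/util/foo.c
addFileName p : $(TOP) include bar.h ;
# p now also contains $(TOP)/include/bar.h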
From: Paul Bleisch <PBleisch@digitalanvil.com>
Date: Tue, 7 Sep 1999 16:31:28 -0500
Subject: New to Jam
I am new to Jam, but it appears that I need to tweak
a lot of variables/rules to get anything to build on
NT. For instance, the default Jambase does not add
/D"WIN32" /D"_WINDOWS" to the default CCFLAGS (and
derivatives), the default linker command line does
not allow one to add library search directories
(-L/SomeDir) similar to the -I include directives, and
there is no default .rc build rule. I've hacked up
a Jamfile that attends to these problems and I
understand why they are not defaults, but...
Has anyone done all of this before? Is there a public
repository of rule files and Jambases for more extensive
support of compiler features? I've also noticed that the
default Jambase does not enable any of the more useful
compiler options for MSVC (optimizations, debugging,
etc). While I am not sure this is something that
should be in the default distribution, I would think
someone has done this already. If not...
What would be the "most correct" way to handle things like:
1) Building .PDB, .BSC files? (MS debug and browse info
files) I would think this is some kind of implicit
target for a Main target. i.e. Main foo would actually
build foo.exe, foo.bsc, and foo.pdb
2) How do I handle precompiled headers? Specifically, MSVC
(and other cc's I assume) allows one to specify a precompiled
header file on the command line (/Fp, and the /Y switches).
3) Is there an elegant way to handle "MSVC style" project
configurations in a Jamfile? i.e. I would like to build multiple
versions of the same target with different options (optimizations,
debugging, etc) depending on the "configuration" chosen. I
am thinking currently of multiple Main targets and somehow using
per-target binding to set the per-configuration settings.
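One hedged sketch of that idea: rather than per-target binding on shared objects (which would collide when two Main targets compile the same sources into the same .obj files), select the configuration when jam is invoked and branch in Jamrules. All names and flag values below are invented examples:

# invoke as:  jam -sBUILD=debug   or   jam -sBUILD=release
BUILD ?= release ;
if $(BUILD) = debug {
    OPTIM = /Zi /Od ;
} else {
    OPTIM = /O2 ;
}
ALL_LOCATE_TARGET = $(TOP)/built/$(BUILD) ;

With each configuration's output in its own directory, debug and release objects never collide.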
From: David.Buscher@durham.ac.uk
Date: Wed, 8 Sep 1999 14:55:45 +0100 (BST)
Subject: Re: New to Jam
I would just like to echo the plea for (a) a FAQ (even one which says
"There is no FAQ" - I've spent a lot of time looking for one) and (b) a
repository of examples. The lack of these two is the biggest hurdle for us
newbies getting started with Jam. You have the feeling that you are
re-solving problems that countless others have already solved, but you
can't find out how they did it. Surely Open Source is all about not having
to re-invent wheels?
Date: Fri, 10 Sep 1999 18:58:01 -0700 (PDT)
From: Diane Holt <dianeh@whistle.com>
Subject: Re: New to Jam
What kinds of questions would you like to see answered in an FAQ? And
what types of things would you like to see examples of? (There are
actually probably lots of questions answered and examples of things in
the mailing-list archive -- is there a problem finding what you're
looking for that way? I haven't looked through it, so I don't know how
useful (or not) the archive is.)
From: David.Buscher@durham.ac.uk
Date: Sun, 12 Sep 1999 21:17:59 +0100 (BST)
Subject: Re: New to Jam
The question *I* would most like to see answered in a FAQ is about
cross-compiling & multiple variants in general, i.e. making multiple
versions of libraries and executables from a single set of sources. There
have been quite a few questions on this sort of topic in the archive, but
I've had to piece the answers together, and I'm still not sure I know how
to do it in a general way, i.e. catering for both libraries and
executables, and extending in an easy way to a multiple-directory project.
An ideal answer would include examples of the actual Jamfiles in a real
project that builds several variants.
I think the mailing-list archive is useful, but when you see the same type
of question recurring, in slightly modified forms, it seems to me that a
FAQ is a better way to address it.
As far as a repository goes, I'd imagine it would have example Jamrules
and Jamfiles from projects on various platforms, perhaps examples on
platforms which the standard Jambase does not cater for, e.g. the
compilers mentioned in the original post on this thread. Another sort of
thing in the repository might be examples of neat ways of solving
particular problems, and general 'tricks of the trade'. The
include-only-once rule that was recently posted would be a good example of this.
The way I learned to do Makefiles was to read other people's makefiles
(e.g. the Linux kernel makefiles) and see how they tackled the problems I
was having. There aren't a lot of publicly-available Jamfiles out there at
the moment (that I'm aware of), so a repository would help to make the
learning curve a little less steep.
From: "Joan Yuen" <joany@ecdirect.com>
Date: Fri, 10 Sep 1999 09:53:51 -0700
Subject: Jam vs other tools
We are a startup Java shop and I'm looking into various make tools for our
build system here. Currently everything is built either with DOS batch
scripts or UNIX shell scripts. What are the pros and cons with using Jam vs
something like gnu make? Our number one requirement is cross-platform
development, as we have developers on both NT and Linux. Any feedback will
be appreciated.
Date: Mon, 13 Sep 1999 12:27:55 -0700
From: sweeney@informix.com (Tony Sweeney)
Subject: Re: Jam vs other tools
The problem with GNU _anything_ is that it assumes GNU _everything_. GNU
is Not Unix, but a replacement for it, so if you go the GNU route, you will
end up spending a considerable amount of time building GNU utilities and
installing all over the shop. Jam's big advantage is that it is idempotent,
comparatively speaking. You simply need a functioning C compiler, and an
understanding of how Jam works. Add in your specific rules for your own
environment(s), and you're done. Easy peasy. ;-)
Date: Mon, 13 Sep 1999 14:43:12 -0700
From: Brendan McCarthy <mccarthy@justintime.com>
Subject: an UPDATE variable?
I've been combing through the list archives for a couple of weeks now,
and I must say that the feedback and response times on this list are
remarkably good. Kudos, praises, etc. to all!
Question: (using jam 2.2.5 on Solaris)
Does a variable exist that tells whether a source is marked for
updating? In other words, I'd like to say in my rule definition:
if ( $(this source is marked for an updating action) ) {
$(list of sources that will be updated) .= $(this source) }
Essentially, I'd like to call an action that changes a variable, but
from what I see you can't muck with variables from within actions.
I've written some rules and actions to handle java compilations in a
large directory tree. Basically, I'm establishing a one to one
correspondence between each java source and its compiled object (we
aren't planning to allow multiple class files generated from a single
source, so this is a safe assumption for now). If the compiled object
is out of date, then I'd like to add the source to a list variable for
an updating action which occurs as the last step of the build. This
way, all of the java files (not from a single directory, but from the
whole directory tree!) are fed to the compiler at once. This is a
pretty crude solution to the problem of dependency analysis and may not
scale well, but it's doing the trick for now. The only problem is that
it's updating every target every time.
It seems as though the together modifier would be appropriate, but
this works for multiple sources going into a single target. I've also
considered just writing the names of the sources to a temporary file,
but I'm thinking someone might know of a better way...
Date: Mon, 13 Sep 1999 16:56:56 -0500 (CDT)
Subject: Re: an UPDATE variable?
I ended up having them go to a temporary file...
Date: Tue, 14 Sep 1999 14:57:21 -0700 (PDT)
From: Diane Holt <dianeh@whistle.com>
Subject: Re: an UPDATE variable?
I'm not sure what you mean by:
>from what I see you can't muck with variables from within actions.
If you mean can you change the value of a variable inside an actions
block, the answer is yes -- you can change the value of it -within-
that actions block (that's how I did the "sparse-tree" thing someone
asked for). But if you mean, can you change the value of a variable
within an actions block and then have some rule that also references
that variable see the value as what you changed it to, the answer is no.
You can do that with rules (to non-local variables), but not with actions.
As to if there's a special variable that holds the names of the sources
that are out-of-date (e.g., like gmake's "$?"), no -- a file that's newer
is marked internally, with T_FATE_NEWER, but that's not something you
have access to (but I can't think of why you'd want to even if you could).
And even gmake's $? only holds what's newer for an individual target --
it doesn't accumulate across the entire tree. It's not clear to me what
you'd do with a variable that kept a list of all the source-files over the
whole tree that were newer. (But there's no law that says you couldn't
create one. :) As long as your dependencies are set up correctly, then
all that should be getting passed to the actions are those files that need building.
I've sent mail about that a couple of times. You can have a "last" target
(or call it whatever you choose) that depends on "all". (Although from
what you've described, I'm not sure if you'd even need that -- if you
have some big target at the end that depends on all the individual
objects being up-to-date first, then you should be able to just say
that, without having to have anything special.)
But I'm not up on java, so I'm not sure what it is you're trying to do.
Maybe you could be a little more specific about what it is, and how a
list of all the newer sources would help you?
Date: Thu, 16 Sep 1999 09:05:25 -0700 (PDT)
Subject: Re: Jam vs other tools
I strongly disagree. GNU follows the same "software tools"
methodology as Unix itself, and you can use GNU Make with any
set of tools you'd like to. I've used it with native Unix
tools, VMS DCL commands, and even DOS.
Now, it might be *advisable* to install GNU tools all over
the shop, because they're usually better than the native tools
and the same tools work exactly the same way from platform to
platform; but it certainly isn't necessary.
I do agree with you that Jam is a better solution overall
for multi-platform development, because of that extra level of
abstraction it provides.
Date: Sat, 18 Sep 1999 14:26:10 -0500
From: "Frot" <frot@earthling.net>
Subject: Jam, development trees and executable/library/headers finding
I have a general question concerning the use of development tree structure.
Till now I have been using Jam with a 'flat'
directory structure (e.g. all sources, headers,
objects, libraries & executables in one directory).
This was OK and worked all of the time.
But currently I am working on a bigger project
with more sources & deliverables, so I decided to
introduce a development tree
structure for jam. First of all, all objects will be
put in a subdir called "bin.<OS>" (as used in the JAMDOS jamfile).
I also decided to split up independent sources
into different directories.
Example :
library abc
<deliverable 1>
tool xyz
<deliverable 2>
tool 123
<deliverable 3, uses library abc>
example code
<deliverable 4, uses tools xyz>
In my development tree this would look like :
<project-x>\
\abc\.......
(sources for library abc)
\abc\bin.<OS>\.......
(objects for library abc, incl. the library itself)
\xyz\.......
(sources for tool xyz)
\xyz\bin.<OS>\.......
(objects for tool xyz, incl. the executable)
\123\.......
(sources for tool 123)
\123\bin.<OS>\.......
(objects for tool 123, incl. the executable)
\exp\.......
(source for example code)
\exp\bin.<OS>\.......
(objects for example code, incl. the executable)
I have been able to use such a structure by using
the jam SubDir rule in my jam files, and by tweaking the SubDir rule
so that the LOCATE_TARGET variable always gets a bin.<OS> portion attached.
So far so good. Now, what is my question?
Well, problems start when trying to build tool 123
& example code. The reason the other two have no problems is that
they are independent of other deliverables in the
development tree (library abc only needs its own
sources to build, so does tool xyz).
In the above tree the following problems arise:
1. tool 123 experiences problems linking as the linker does not find library abc
2. example code experiences problems as it does not find executable xyz
Possible solutions could be :
1. Never use such a structure (don't want that)
2. Adapt Jambase so that all bin.<OS> directories are added as search
paths for libraries (HOW ???)
3. Adapt Jambase so that all bin.<OS> directories are added to the
search path for starting executables (HOW ???)
4. Upon creation of executables/libraries, copy them to a fixed
place in the tree (e.g. <project-x>\lib) and add this path to the
search path for both libraries and executables (AGAIN HOW ??)
5. Upon creation of executables/libraries, copy libraries to a
linker-aware directory outside of the dev. tree and executables
to an OS-aware directory outside of the dev. tree, so finding
libraries becomes a task for the linker and finding executables
a task for the shell.
I have looked at all of these possibilities and they all have some
dirty tricks and consequences attached that I would rather avoid.
My question to you is to help me decide which to take (and preferably
how), and whether you might have some other ideas on how to solve this.
I also would like to know how you all tackle this problem of
development trees & interdependent deliverables.
Date: Sat, 18 Sep 1999 18:08:15 -0700
Subject: Re: Jam, development trees and executable/library/headers finding
This is very similar to a problem I was just having. My solution was to
define a new rule, "Uses":
rule Uses {
local LOCATE_LIBDIRS ;
#
# Uses <target> : <Dir1> <Dir2> ...
#
# This modifies variables that a SubDir rule has set up.
LOCATE_LIBDIRS = $(>[1-])/$(BUILT) ;
# Make generated files go into a platform-specific
# subdirectory
LOCATE_TARGET = $(SUBDIR)/$(BUILT) ;
# Look for needed things in subdirectory first, then
# in local built/xxx dir, then in remote built/xxx dirs
# mentioned with Uses:
SEARCH_SOURCE = $(SUBDIR) $(LOCATE_TARGET) $(LOCATE_LIBDIRS) ;
ECHO "SEARCH_SOURCE for " $(SUBDIR) " = " $(SEARCH_SOURCE) ;
}
In this rule, BUILT is a variable that resolves to
built.<os-specific-string>. The rule is invoked right after SubDir for
every Jamfile that needs the output of some other Jamfile, and lists
explicitly the absolute path to each of the other project directories:
SubDir src services dm ;
Uses dmd : $(KUDZU_ROOT)/src/emlib/delivery
$(KUDZU_ROOT)/src/emlib/socket
$(KUDZU_ROOT)/src/emlib/util ;
I don't know if this is the best way to do things, but it seems to work
out OK (so far).
Date: Sat, 18 Sep 1999 18:51:36 -0700 (PDT)
From: Diane Holt <dianeh@whistle.com>
Subject: Re: Dependency Generation "Broken"
The short answer is: Jam uses SubDirHdrs to find header-files like your
example shows. SubDirHdrs is just a list of the directories you want Jam
to look in -- and it does include a -I<dir> for each one you list. From what
you said, you didn't want that, and you didn't want to have to provide the
list of directories in the first place -- you wanted Jam to be able to figure
it out for you. (That's why it took me a while to think about it.)
What I came up with was to add a new variable (which gets set in headers.c)
that allows you to see the boundname of the file Jam is scanning for includes.
Then I modified HdrRule to use that variable and include the directory of it
in HDRSEARCH (if it's not already in there). (BTW: I'm not thrilled with the
name of the variable I added, but I couldn't think of a better one -- can you?)
headers.c diff:
120,122d119
< /* Add a variable that holds the full-pathname of the file being scanned. */
< var_set( "SCANFILE", list_new( L0, newstr( file ) ), VAR_SET );
<
HdrRule diff (in Jambase):
<
< if ! $(SCANFILE:D) in $(HDRSEARCH)
< {
< HDRSEARCH = $(HDRSEARCH) $(SCANFILE:D) ;
< }
<
If the diff line numbers from Jambase are off -- I put this in after the:
INCLUDES $(<) : $(s) ;
and before the
SEARCH on $(s) = $(HDRSEARCH) ;
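Putting the two diffs together, the middle of HdrRule would then read something like this (a sketch; the surrounding lines are abbreviated to match whatever your Jambase has):

```
rule HdrRule
{
    # ... grist handling as in your Jambase ...
    INCLUDES $(<) : $(s) ;
    if ! $(SCANFILE:D) in $(HDRSEARCH)
    {
        HDRSEARCH = $(HDRSEARCH) $(SCANFILE:D) ;
    }
    SEARCH on $(s) = $(HDRSEARCH) ;
    # ... rest of HdrRule unchanged ...
}
```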
I put together a slightly more elaborate example than the one you provided
so I could test things out and make sure everything else still worked okay:
% grep "include" xxx.c
#include "hdr/xxx.h"
#include <stdio.h>
#include "../hdr/ggg.h"
%
% cat hdr/*.h
/* xxx.h */
#include "yyy.h"
/* yyy.h */
#include "zzz.h"
/* zzz.h */
#include "hdr1/aaa.h"
%
% cat hdr/hdr1/*.h
/* aaa.h */
#include "bbb.h"
/* bbb.h */
#include "hdr2/ccc.h"
%
% cat hdr/hdr1/hdr2/*.h
/* ccc.h */
#include "ddd.h"
/* ddd.h */
#include "hdr3/eee.h"
%
% cat hdr/hdr1/hdr2/hdr3/*.h
/* eee.h */
#include "fff.h"
%
% cat ../hdr/*.h
/* ggg.h */
#include "hhh.h"
I included Echo's in my Jambase so you can see what file is being scanned,
and how HDRSEARCH is modified accordingly (don't include them in real life :).
% myjam -f Jambase.echo -d2
SCANFILE is /tmp/mark/lib/xxx.c
HDRSEARCH is now /tmp/mark/lib /usr/include
SCANFILE is /tmp/mark/lib/hdr/xxx.h
HDRSEARCH is now /tmp/mark/lib /usr/include /tmp/mark/lib/hdr
SCANFILE is /tmp/mark/lib/hdr/yyy.h
HDRSEARCH is now /tmp/mark/lib /usr/include /tmp/mark/lib/hdr
SCANFILE is /tmp/mark/lib/hdr/zzz.h
HDRSEARCH is now /tmp/mark/lib /usr/include /tmp/mark/lib/hdr
SCANFILE is /tmp/mark/lib/hdr/hdr1/aaa.h
HDRSEARCH is now /tmp/mark/lib /usr/include /tmp/mark/lib/hdr /tmp/mark/lib/hdr/hdr1
SCANFILE is /tmp/mark/lib/hdr/hdr1/bbb.h
HDRSEARCH is now /tmp/mark/lib /usr/include /tmp/mark/lib/hdr /tmp/mark/lib/hdr/hdr1
SCANFILE is /tmp/mark/lib/hdr/hdr1/hdr2/ccc.h
HDRSEARCH is now /tmp/mark/lib /usr/include /tmp/mark/lib/hdr /tmp/mark/lib/hdr/hdr1 /tmp/mark/lib/hdr/hdr1/hdr2
SCANFILE is /tmp/mark/lib/hdr/hdr1/hdr2/ddd.h
HDRSEARCH is now /tmp/mark/lib /usr/include /tmp/mark/lib/hdr /tmp/mark/lib/hdr/hdr1 /tmp/mark/lib/hdr/hdr1/hdr2
SCANFILE is /tmp/mark/lib/hdr/hdr1/hdr2/hdr3/eee.h
HDRSEARCH is now /tmp/mark/lib /usr/include /tmp/mark/lib/hdr /tmp/mark/lib/hdr/hdr1 /tmp/mark/lib/hdr/hdr1/hdr2 /tmp/mark/lib/hdr/hdr1/hdr2/hdr3
SCANFILE is /usr/include/stdio.h
HDRSEARCH is now /tmp/mark/lib /usr/include
SCANFILE is /usr/include/sys/types.h
HDRSEARCH is now /tmp/mark/lib /usr/include /usr/include/sys
SCANFILE is /usr/include/machine/endian.h
HDRSEARCH is now /tmp/mark/lib /usr/include /usr/include/sys /usr/include/machine
SCANFILE is /tmp/mark/lib/../hdr/ggg.h
HDRSEARCH is now /tmp/mark/lib /usr/include /tmp/mark/lib/../hdr
...found 32 target(s)...
...updating 2 target(s)...
Cc /tmp/mark/lib/xxx.o
cc -c -O -I/tmp/mark/lib -o /tmp/mark/lib/xxx.o /tmp/mark/lib/xxx.c
Archive /tmp/mark/lib/libxxx.a
[etc...]
...updated 2 target(s)...
%
% touch hdr/hdr1/hdr2/hdr3/fff.h
% myjam -f Jambase -d7
bind -- fff.h: /tmp/mark/lib/hdr/hdr1/hdr2/hdr3/fff.h
time -- fff.h: Sat Sep 18 17:08:51 1999
made* newer fff.h
...found 32 target(s)...
...updating 2 target(s)...
Cc /tmp/mark/lib/xxx.o
[etc...]
...updated 2 target(s)...
%
% touch ../hdr/hhh.h
% myjam -f Jambase -d7
bind -- hhh.h: /tmp/mark/lib/../hdr/hhh.h
time -- hhh.h: Sat Sep 18 17:17:28 1999
made* newer hhh.h
...found 32 target(s)...
...updating 2 target(s)...
Cc /tmp/mark/lib/xxx.o
Archive /tmp/mark/lib/libxxx.a
Ranlib /tmp/mark/lib/libxxx.a
RmTemps /tmp/mark/lib/libxxx.a
...updated 2 target(s)...
%
From: "Scotte Zinn" <szinn@sentex.net>
Date: Fri, 17 Sep 1999 22:29:18 -0400
Subject: Request for FAQ
I'm trying to set up the following kind of environment
/thirdparty - top level of third-party package
/thirdparty/include - include files for third-party package
/thirdparty/lib - library files for third-party package
/root - top level of system
/root/include - include files for system (source files)
/root/lib - libraries that are built (output files)
/root/src - top level of source
/root/src/dir1 - one module to be built (produces objects and libraries)
/root/src/dir2 - another module
/root/src/dir... - etc
/root/object/dir1 - objects for module dir1
/root/object/dir2 - objects for module dir2
/root/object/dir... - etc
/root/bin - resulting binaries from the build
I'd like to be able to execute jam in /root to build the complete system
I'd like to be able to execute jam in /root/src/dir1 to build only module in dir1
Some of the modules in dirXXX may depend on other modules in dirXXX
A set of JamRules / JamBase / JamFile files for this kind of hierarchy would
be a good addition to an FAQ. I am very new to Jam and don't know where to
start with this. I have tried using MakeLocate and setting LOCATE_SOURCE,
and have gotten the binaries and libraries to go to the correct
location; however, if the objects go to a directory other than where
the source files reside, Jam seems to want to rebuild the
libraries every time I execute it.
Date: Sun, 19 Sep 1999 20:27:26 +0200 (MEST)
From: Igor Boukanov <boukanov@sentef3.fi.uib.no>
Subject: Re: Jam, development trees and executable/library/headers finding
At the end of your Jamfile in 123 directory add
SubInclude TOP abd ;
and at the end of your Jamfile in example dir add
SubInclude TOP xyz ;
Then remove references to example and 123 directories from your TOP
directory Jamfile.
Now of course you need to 'cd 123; jam' and 'cd examples; jam'
to build them. If you would prefer to also be able to build everything,
including 123 and examples, from the TOP, then modify SubInclude so it
includes the given dir only once - see my ImportDir rule, which I posted
to the jam mailing list a couple of weeks ago.
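The guard Igor describes can be sketched roughly as follows (untested; SubIncludeOnce and SEEN_SUBDIRS are invented names, and the element-wise `in` test is only an approximation for multi-token arguments):

```
# Sketch: only SubInclude a directory the first time it is requested.
rule SubIncludeOnce
{
    if ! $(1) in $(SEEN_SUBDIRS)
    {
        SEEN_SUBDIRS += $(1) ;
        SubInclude $(1) ;
    }
}
```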
From: "Harry Callahan" <boner_ear@hotmail.com>
Date: Fri, 01 Oct 1999 11:45:20 EDT
Subject: Rsh to NT
Is anyone using Jam via a remote shell to NT?
We're primarily a Unix shop, but we have a requirement to
generate some targets on NT. I've got a Jamfile functioning
just fine on Unix and NT (via cmd prompt); however, if I
remsh in and run jam.exe, I get an illegal memory reference.
Wait, it gets better ... Only targets with corresponding
"actions" procedures give me the error. So, 'jam.exe -n'
and 'jam.exe clean' and 'jam.exe install' all work fine.
I even have a simple help action that just does some
echoes ... and it aborts ... Funny thing is, if I jump
to the NT box and issue the jam command natively, it works like a charm.
I've tried a couple of flavors of third-party rshd for NT.
Each runs as a service on NT allowing remote shell connections.
From: Stephen Dennis <stephen.dennis@onyx.ca>
Subject: RE: Rsh to NT
Date: Fri, 1 Oct 1999 12:02:02 -0400
The problem is that most of the rshd's for NT run as a service, so they do not have
the same access to 'desktop' resources as a logged in user and are usually running
on a different 'WindowStation'. Same goes for the telnetd's, smbrun and others.
See the docs on 'CreateWindowStation' for more information about this mess.
My solution was to build an rshd from the publicly available code (I used BSD 4.4 rshd)
and wired in the appropriate NT process starting stuff, and ran this on the desktop of a
logged in NT machine. Ultimately I ended up building a custom client as well just to
make sure everything was semantically correct.
Thus, jam or make or whatever, has access to all the network connections, memory,
files and whatever you would have expected.
Ugly, but functional.
Write me for the code for the custom client and server if you would like.
From: Peter Glasscock <peterg@harlequin.co.uk>
Date: Wed, 13 Oct 1999 12:03:56 +0000 (GMT)
Subject: An option some of you may find useful...
Some colleagues have been complaining that Jam continues to build
non-dependent targets when one fails. I myself consider this a feature,
but they would like a way of stopping the build on the first error.
In response to this, I have made a tiny change to the Jam sources to
select this behaviour. I am posting the change here, so that others who
might have had similar thoughts can use it. You can also find the
change in my guest branch //guest/peter_glasscock, which also contains
some important fixes that haven't yet made it into the "official" Jam
sources.
If you think it's a useful feature, lobby Perforce to include this in
the "official" sources too.
Change 233:
edit //guest/peter_glasscock/jam/src/jam.c#5
edit //guest/peter_glasscock/jam/src/jam.h#3
edit //guest/peter_glasscock/jam/src/make1.c#5
Here is the change for those of you not familiar with (or without) the p4
client software:
==== //guest/peter_glasscock/jam/src/jam.c#4 - c:\users\peterg\perforce\guest\jam\src\jam.c ====
129a130
173c174
< if( ( n = getoptions( argc, argv, "d:j:f:s:t:ano:v", optv ) ) < 0 )
182a184
206a209,211
==== //guest/peter_glasscock/jam/src/jam.h#2 - c:\users\peterg\perforce\guest\jam\src\jam.h ====
309a310
==== //guest/peter_glasscock/jam/src/make1.c#4 - c:\users\peterg\perforce\guest\jam\src\make1.c ====
409a410,411
From: Peter Glasscock <peterg@harlequin.co.uk>
Subject: Re: An option some of you may find useful...
Date: Thu, 14 Oct 1999 08:44:02 +0000 (GMT)
True, but DEBUG_MAKE is on for all debug levels above 0. Since 1 is the
default, you would have to explicitly choose to have no output at all in
order to turn this behaviour off.
The reason I put it within this conditional is that the main reason
people want the build to stop after the first error is to see what failed.
In order for Jam to print out the command that failed,
it must be at least debug level 1 (i.e. DEBUG_MAKE). In fact, the line
that prints out the failed command is just a few lines above the one I
added (in the same block!).
There doesn't seem to be any point in quitting after the first error if
you can't see what it was. But I suppose you'd be right in saying that
the behaviour isn't entirely consistent.
This is only a suggestion, so you are free to put the line wherever you
like. I don't have any use whatsoever for the 0 debug level, so I
really don't mind what happens at that level :-)
Date: Wed, 20 Oct 1999 19:50:02 +0200
From: Igor Boukanov <igor.boukanov@fi.uib.no>
Subject: Too small updated action list or do I have to modify jam for this?
To write a set of rules to compile Java sources I had to change the jam
sources, for the following reason:
It is rather difficult to predict all Java '.class' file names, so I
decided to avoid introducing any dependencies between Java
sources and class files, and I wrote something like:
rule JavaMain {
Depends $(<) : $(>) ;
Depends all : $(<) ;
}
actions together updated JavaMain {
javac $(>) && touch $(<)
}
with usage:
JavaMain project-name : java-sources ;
The JavaMain rule will touch the 'project-name' file to change its time
stamp so it is >= any of its sources.
I use the 'together' and 'updated' modifiers to compile all sources in a
single command, and only those that are newer. But then I faced a
well-known problem:
Although javac tries to compile not only the given sources but also other
files that depend in some sense on them, javac can still miss some.
For example, given a.java with single line
class a { }
and b.java with
class b extends a { }
the command 'javac a.java' does not compile 'b.java' even if a.java is
newer. Thus I have to tell Jam to compile b.java if anything in a.class
that b.class depends on is changed. To make life easy I translated this
into a requirement to tell Jam to issue
javac a.java b.java
even if only a.java is modified, and at the same time simply
javac b.java
for changes only in b.java.
First I added:
Depends b.java : a.java ;
But this does not work: for changes only in a.java, jam will
constantly recompile everything, because nothing makes b.java newer than
a.java. So instead I added a new rule:
rule JavaDepends {
Depends $(<) : $(>) ;
}
actions JavaDepends {
touch $(<)
}
to make b.java newer than a.java after
JavaDepends a.java b.java
but of course it has the very annoying side effect of changing b.java's
time stamp. So I needed something like JavaDepends that does not modify
b.java. After some attempts to implement this in jam-2.2 I gave up and
added a new built-in rule (see the attached patch for jam-2.2; apply it
via 'patch -lp0 < jam-2.2.AsIfUpdated.patch'):
AsIfUpdated targets : sources ;
### If any of the targets is stable (no updates) but any of the AsIfUpdated
### sources requires an update or is newer, do not skip this target from the
### source lists of actions with the "updated" modifier.
I can write with it:
JavaMain my-project : a.java b.java ;
AsIfUpdated b.java : a.java ;
which will compile both a.java and b.java for a-modifications and only
b.java for b-modifications.
And now the question is:
Do I really need to patch jam for this?
diff -r -bBdc jam-2.2.orig/compile.c jam-2.2/compile.c
*** jam-2.2.orig/compile.c Wed Nov 12 10:22:24 1997
--- jam-2.2/compile.c Wed Oct 20 18:43:24 1999
***************
*** 43,48 ****
* builtin_echo() - ECHO rule
* builtin_exit() - EXIT rule
* builtin_flags() - NOCARE, NOTFILE, TEMPORARY rule
+ * builtin_as_if_updated() - ASIFUPDATED rule
*
* 02/03/94 (seiwald) - Changed trace output to read "setting" instead of
* the awkward sounding "settings".
***************
*** 66,71 ****
static void builtin_echo();
static void builtin_exit();
static void builtin_flags();
+ static void builtin_as_if_updated();
int glob();
***************
*** 121,126 ****
bindrule( "Temporary" )->procedure
bindrule( "TEMPORARY" )->procedure
parse_make( builtin_flags, P0, P0, C0, C0, L0, L0, T_FLAG_TEMP );
+
+ bindrule( "ASIFUPDATED" )->procedure
+ bindrule( "AsIfUpdated" )->procedure
+ parse_make( builtin_as_if_updated, P0, P0, C0, C0, L0, L0, 0 );
+
}
/*
***************
*** 765,770 ****
}
/*
+ * builtin_as_if_updated() - ASIFUPDATED rule
+ *
+ * If one of targets is stable but any of ASIFUPDATED sources requires
+ * update, do not skip this target from source lists of actions with
+ * "updated" modifier
+ */
+
+ static void
+ builtin_as_if_updated( parse, args )
+ PARSE *parse;
+ LOL *args;
+ {
+ LIST *targets = lol_get( args, 0 );
+ LIST *sources = lol_get( args, 1 );
+ LIST *l;
+
+ for( l = targets; l; l = list_next( l ) )
+ {
+ TARGET *t = bindtarget( l->string );
+ t->as_if_update_deps = targetlist( t->as_if_update_deps, sources );
+ }
+ }
+
+ /*
* debug_compile() - printf with indent to show rule expansion.
*/
diff -r -bBdc jam-2.2.orig/make1.c jam-2.2/make1.c
*** jam-2.2.orig/make1.c Wed Nov 12 10:22:36 1997
--- jam-2.2/make1.c Wed Oct 20 18:43:24 1999
***************
*** 577,583 ****
continue;
if( ( flags & RULE_NEWSRCS ) && t->fate <= T_FATE_STABLE )
! continue;
/* Prohibit duplicates for RULE_TOGETHER */
continue;
if( ( flags & RULE_NEWSRCS ) && t->fate <= T_FATE_STABLE )
! {
! /* Skip only if all t->as_if_update_deps are also stable or unknown
! */
! int should_skip = 1 ;
! TARGETS *cursor;
! for( cursor = t->as_if_update_deps; cursor; cursor = cursor->next )
! {
! if( cursor->target->fate > T_FATE_STABLE )
! {
! should_skip = 0;
! break;
! }
! }
! if( should_skip ) continue;
! }
/* Prohibit duplicates for RULE_TOGETHER */
diff -r -bBdc jam-2.2.orig/rules.h jam-2.2/rules.h
*** jam-2.2.orig/rules.h Tue Nov 18 18:32:17 1997
--- jam-2.2/rules.h Wed Oct 20 18:43:24 1999
***************
*** 157,162 ****
int asynccnt; /* child deps outstanding */
TARGETS *parents; /* used by make1() for completion */
char *cmds; /* type-punned command list */
+
+ TARGETS *as_if_update_deps; /* If this target is stable but
+ * any of as_if_update_deps targets requires
+ * update, do not skip this target from
+ * source lists of actions with "updated"
+ * modifier
+ */
+
} ;
RULE *bindrule();
Date: Fri, 22 Oct 1999 12:22:20 -0700
Subject: Q on "on"
I just ran into a problem using "on", and I'd like to validate my
solution. Here's the problem: I want to add a library to LINKLIBS for a
particular target, and no others. The following code sets LINKLIBS for
ex to be _just_ -limsdk, ignoring the global LINKLIBS variable completely:
LINKLIBS on ex += -limsdk ;
Main ex : ex.cpp ;
If I manually include the global LINKLIBS, it seems to work:
LINKLIBS on ex = $(LINKLIBS) -limsdk ;
I had naively expected that "LINKLIBS on ex" would have a default value
equal to $(LINKLIBS), but it looks like the default is no value. Is
this the right thing to do? As a side question, is there a way to print
out the value of LINKLIBS on ex for debugging purposes? I usually use
ECHO to see selected variables, but I can't get ECHO to print out
$(LINKLIBS on ex) or "$(LINKLIBS) on ex".
Date: Fri, 22 Oct 1999 13:46:32 -0700
Subject: Q on multi-Jamfile projects (again)
I just spent a few hours tracking down an annoying bug. I had a header
file which (for some reason) #included itself:
#ifndef FOO_H
#define FOO_H
#include "foo.h"
#endif
This caused Jam's bind phase to get totally out of whack; the symptom
was an inability to find things which were supposed to exist on the
current SEARCH path. Looking at the debugging output, it seemed that
additions to SEARCH weren't getting added (they did not show up in the
debug output). Comment out the above #include, however, and everything
works fine.
IWBNI (it would be nice if) Jam protected itself against such (twisted but legal) #include issues :)
Date: Fri, 05 Nov 1999 18:07:24 -0600
From: "Stanford S. Guillory" <guillory@vignette.com>
Subject: Setting a Jam variable and not getting a space at the end
I have the following action:
actions vEncryptFile1 {
$(ENCRYPTFILE_EXPORT)
set ICU_DATA="$(THIRDPARTY)$(SLASH)icu$(SLASH)data$(SLASH)winnt"
$(ENCRYPT) $(>)
}
The encrypt invocation uses the environment variable to attach
additional directory components to the path. However, jam insists on
putting a space at the end of the line, so the actual environment
variable is:
[ICU_DATA="$(THIRDPARTY)$(SLASH)icu$(SLASH)data$(SLASH)winnt" ],
so there is a space in the path. How does one get rid of it?
Date: Sat, 6 Nov 1999 11:57:43 -0800 (PST)
Subject: Re: Setting a Jam variable and not getting a space at the end
It wasn't clear to me from your example how having the trailing space was
getting in the way, but I can tell you where it comes from:
lists.c:
list_print( l )
LIST *l; {
for( ; l; l = list_next( l ) )
printf( "%s ", l->string );
}
The space is there so things don't get mooshed together:
Echo foo bar blat ;
results in:
foo bar blat
instead of:
foobarblat
I suppose you could swap it around and have a leading space, if that would be
better for you -- or you could get rid of the space in list_print, and be
responsible for it in your rules/targets (but that could get messy pretty fast).
Alternatively, since your example was an "actions", you could just use the
shell to get rid of it.
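To illustrate that last suggestion, here is one way a Bourne shell can strip the trailing space Jam leaves after expansion (a sketch; the variable names and path are illustrative, not from the original action):

```shell
# As Jam would expand it: value with a trailing space.
VAL='/third/icu/data/winnt '
# Unquoted expansion word-splits, and echo re-joins the words,
# so the trailing space disappears before the assignment.
ICU_DATA=$(echo $VAL)
```

The same trick works inside an "actions" body on Unix, since each action runs through the shell anyway.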
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Mon, 8 Nov 1999 10:21:20 +0100
Subject: Re : Setting a Jam variable and not getting a space at the end
I also had the same problem.
It comes from the var_string() function, which expands strings containing variables.
This function adds a space after each item in the expanded variable, even the last.
I modified variable.c and added a line before line 203 :
for( ; l; l = list_next( l ) ) {
int so = strlen( l->string );
if( out + so >= oute ) return -1;
strcpy( out, l->string );
out += so;
==> if( list_next( l ) ) *out++ = ' ';
}
list_free( l );
This removes the last space in variable expansion.
Date: Mon, 08 Nov 1999 14:11:09 +0100
From: Sven Havemann <s.havemann@tu-bs.de>
Subject: Jam/Irix 6.5
The jam Makefile contains a target
all: jam0
For careful people who don't have . in their $PATH (like me) this should
be changed to
all: jam0
./jam0
Date: Mon, 8 Nov 1999 10:32:44 +1100
From: Graeme Gill <graeme@colorbus.com.au>
Subject: Re: Re: Setting a Jam variable and not getting a space at the end
void
list_print( l )
LIST *l; {
int ptd = 0;
for( ; l; ptd = 1, l = list_next( l ) )
printf( "%s%s", ptd ? " " : "", l->string );
}
From: "Hoff, Todd" <Todd.Hoff@ciena.com>
Date: Mon, 29 Nov 1999 08:47:20 -0800
Subject: simultaneous builds in jam?
Can jam handle builds from multiple CPUs in the same
directory tree? I've parallelized our build into 3 simultaneous
phases and I'm seeing build failures I don't see when
the build is done serially. I'm wondering if this is a jam
issue or something we have done.
The first step is to completely sync the build area with sources.
Then 3 build targets are executed in jam simultaneously
on 3 different hosts. So, multiple builds are running
at the same time in the same tree. The build targets
should be non-overlapping. For example, windoze libraries
and vxworks libraries are built at the same time. There
shouldn't be a conflict.
The problem I'm seeing is:
MkDir1 Z:\x\build\obj\Actor
mkdir Z:\x\build\obj\Actor
A subdirectory or file Z:\x\build\obj\Actor already exists.
mkdir Z:\x\build\obj\Actor
...failed MkDir1 Z:\x\build\obj\Actor ...
...skipped Z:\x\build\obj\Actor\win32 for lack of Z:\x\build\obj\Actor...
...skipped Z:\x\build\obj\Actor\win32\debug for lack of
Z:\x\build\obj\Actor\win32...
...skipped <Build!obj!Actor!win32!debug>Actor.obj for lack of
Z:\x\build\obj\Actor\win32\debug...
...skipped lib_Actor_win32_dbg.lib for lack of
<Build!obj!Actor!win32!debug>Actor.obj...
What seems to be happening is that the first target to reach a certain
directory wins; the other two targets will not be built. In the above
example the vxworks release target was built, but the win32 and vxworks
debug targets were skipped.
Time is a debt i pay only momentarily.
Date: Mon, 29 Nov 1999 10:37:19 -0800
From: "Olivier Brand" <olivier@intraware.com>
Subject: Jam and Java
I am working on a way to define makefiles for big projects. I have come up
with a solution mixing gmake, jikes and perl (to build dependencies).
Everything works pretty well, but I cannot express the following:
- Circular dependencies.
- Ordering the files to compile (building a hierarchy tree).
Is it possible with Jam to do these 2 things ?
Where can I find Java resources for Jam ?
From: "Hoff, Todd" <Todd.Hoff@ciena.com>
Subject: RE: simultaneous builds in jam?
Date: Wed, 1 Dec 1999 15:19:55 -0800
Steve Babiak suggested changing the make directory rule to:
actions MkDir1 {
if not exist $(1)\nul $(MKDIR) $(1)
}
This gates the make directory and works for me! Steve's explanation is:
The "if not exist" is understood to test for existence of a file, not
a directory, on NT. That is why the "nul" file is tacked onto the end
of $(1). So, if the nul file does not exist in the directory, then
call MKDIR. MKDIR on NT creates a directory _and_ that directory
will always contain a nul file!
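On Unix the same fix can be sketched as an action that tolerates the directory already existing, so whichever parallel build loses the race still succeeds (mkdir_safe is an invented name, not from the thread):

```shell
# Race-tolerant directory creation for a parallel build.
mkdir_safe() {
    # mkdir -p succeeds on an existing dir; the trailing test covers
    # the narrow window where another process creates it mid-call.
    mkdir -p "$1" 2>/dev/null || [ -d "$1" ]
}
```

A MkDir1 action could then simply run `mkdir_safe $(1)` instead of a bare mkdir.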
Date: Wed, 08 Dec 1999 16:46:44 -0700
From: Lance Johnston <lance@scmlabs.com>
Subject: Future Plans for Jam?
Does anyone know what the future plans for Jam are? Will there be
ongoing releases, and if so, what's the planned functionality? Does any
future functionality include adding some string-processing capabilities?
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Fri, 10 Dec 1999 11:00:54 -0800
Subject: IBM MVS Open Edition (OE) aka OS/390 Unix System Services (USS)
I see in the http://public.perforce.com/public/jam/src/RELNOTES that Jam
has been ported to MVS OE. Does anybody have any tips or pointers on using
Jam on MVS? Thanks.
From: "Dowdy, Mark" <mark@ciena.com>
Date: Tue, 21 Dec 1999 13:13:03 -0800
Subject: Maximum "actions" length
Could anyone familiar with the internals of Jam tell
me whether or not there is a maximum line length for
an action? The reason I ask is that we have a java
compilation action that has a line that just reached 1023
characters (before variable expansion). When we recently
modified the action increasing the length of this very long
line, Jam would no longer run the action, even if a target
was out of date. When we remove our new additions to the
long line in the action, the action is run on the out-of-date targets.
Date: Tue, 21 Dec 1999 13:37:32 -0800 (PST)
Subject: Re: Maximum "actions" length
I believe it's MAXLINE in jam.h.
From: Karl Klashinsky <klash@cisco.com>
Subject: Re: Maximum "actions" length
Date: Mon, 27 Dec 1999 15:12:54 -0800
What version of Jam are you running? I'll assume you're running the
"official" 2.2 release.
Is your java compilation action defined with PIECEMEAL? If not, you
might want to re-engineer it so that it uses PIECEMEAL.
We had this problem when running "jam clean". Basically, jam used a
heuristic to decide how many targets to pass to each action,
more or less saying "yeah, that should be < NN chars, do it", but the
expanded action would actually be > NN chars, and Jam would catch it
and abort.
Mark Baushke <mdb@acm.org> found this problem, and contributed a patch
to the perforce archive that improved the heuristic so that it would
take a few stabs at coming up with an actions line that didn't
overflow the buffer. It's been working like a charm for us.
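For reference, a PIECEMEAL version of such a Java action might look like the following (a sketch; the rule name JavaC and the JAVAC variable are illustrative). With piecemeal, jam splits an over-long source list across several invocations instead of building one huge command line:

```
rule JavaC
{
    Depends $(<) : $(>) ;
}
actions updated together piecemeal JavaC
{
    $(JAVAC) $(>)
}
```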
From: "Randy Roesler" <rroesler@mdsi.bc.ca>
Date: Wed, 29 Dec 1999 14:37:01 -0800
Subject: Possible Bug in Jam
Jam version 2.2.5
I think I might have discovered a bug in the Jam engine itself.
I can post my Jamrules if anybody thinks that will help.
Here is the scenario.
a) Source file exists in some source directory.
Let's call this file X.h
The SEARCH for X is correctly set up so that
the target X.h binds to $TOP/src/Cmp/X.h
where Cmp is the component that contains X
b) We want to export X.h to some include directory using
symbolic links. To do this
Depends Cmp/X.h : <exported>X.h
Depends <exported>X.h : <source>X.h
Depends exports : <exported>X.h
set LOCATE on Cmp/X.h = $TOP/include
set LOCATE on <exported>X.h = $TOP/include/Cmp
# create the symbolic link.
SymLink <exported>X.h : <source>X.h
c) Some source file (X.cpp) includes Cmp/X.h
set HDRRULE on <src!Cmp>X.cpp = HdrRule
<etc as required for dependency scanning>
So, we now have two targets referencing the same file
$TOP/src/Cmp/X.h and two referencing the exported file
$TOP/include/Cmp/X.h.
Dependency scanning should scan X.cpp, and propagate the
HDR* variables to Cmp/X.h (which, in turn, might propagate them
to other included files). But there is no reason for the
HDR* variables to be propagated to <exported>X.h or <source>X.h.
Now the trouble is that somehow HDRRULE and HDRSCAN
are getting defined on <exported>X.h and <source>X.h, even though
the Jamrules never explicity requests dependency analysis on these
files. It's also not added by the HdrRule.
[Why am I doing this? Because if dependency analysis were allowed
on <exported>X.h, it would notice that X.h includes Y.h (say). Y.h
might be newer than X.h, which would cause Jam to think that it needed
to rebuild Cmp/X.h, which causes the whole system to rebuild.
<exported>X.h is separated from Cmp/X.h so that the make process
can bind <exported>X.h and leave Cmp/X.h unbound. We need to delay
the binding of Cmp/X.h so that HDR* variables can be propagated to it BEFORE
it is bound. Note, "jam exports obj" will bind the <export>* targets first,
and the Cmp/X.h targets are part of the dependency analysis of the obj target.]
The following is the output from
jam -n -d5 exports | grep QueueTestHandlerCallback
If you trace through it, you can see the HdrRule being called
with <exported>QueueTestHandlerCallback.idl even though
HDRRULE is never set on this target.
(sorry about the line wrapping)
<exported>QueueTestHandlerCallback.idl
/u/rroesler/continous/Top/include
<Srvs!Queue!tests>QueueTestHandlerCallback_c.cpp
QueueTestHandlerCallback_s.h
<Srvs!Queue!tests>QueueTestHandlerCallback_s.cpp :
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
<Srvs!Queue!tests>QueueTestHandlerCallback_c.cpp
QueueTestHandlerCallback_s.h
<Srvs!Queue!tests>QueueTestHandlerCallback_s.cpp :
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
/u/rroesler/continous/Top/Srvs/Queue/tests
/u/rroesler/continous/Top/Srvs/Queue/gen
/u/rroesler/continous/Top/Srvs/Queue/gen/Queue
<Srvs!Queue!tests>QueueTestHandlerCallback_c.cpp
QueueTestHandlerCallback_s.h
<Srvs!Queue!tests>QueueTestHandlerCallback_s.cpp :
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
<Srvs!Queue!tests>QueueTestHandlerCallback.idl = HdrRule
<Srvs!Queue!tests>QueueTestHandlerCallback.idl = ^[ ]*#[
]*include[ ]*[<"](.*)[">].*$
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
/u/rroesler/continous/Top/include /opt/wle/include
/u/oracle/product/8.1.5/rdbms/demo /u/oracle/product/8.1.5/plsql/public
/u/oracle/product/8.1.5/network/public
/u/rroesler/continous/Top/Srvs/Queue/tests
/u/rroesler/continous/Top/Srvs/Queue/gen
/u/rroesler/continous/Top/Srvs/Queue/gen/Queue /usr/include
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
<Srvs!Queue!tests>QueueTestHandlerCallback_c.cpp
QueueTestHandlerCallback_s.h
<Srvs!Queue!tests>QueueTestHandlerCallback_s.cpp :
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
<source>QueueTestHandlerCallback.idl
/u/rroesler/continous/Top/Srvs/Queue/tests
/u/rroesler/continous/Top/Srvs/Queue/gen
/u/rroesler/continous/Top/Srvs/Queue/gen/Queue
/u/rroesler/continous/Top/include/Queue
<source>QueueTestHandlerCallback.idl
QueueTestHandlerCallback.idl QueueTestHandlerCallback_obj.cpp
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
<Srvs!Queue!tests>QueueTestHandlerCallback_obj.cpp
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
/u/rroesler/continous/Top/Srvs/Queue/tests
/u/rroesler/continous/Top/Srvs/Queue/gen
/u/rroesler/continous/Top/Srvs/Queue/gen/Queue
<Srvs!Queue!tests>QueueTestHandlerCallback.idl = HdrRule
<Srvs!Queue!tests>QueueTestHandlerCallback.idl = ^[ ]*#[
]*include[ ]*[<"](.*)[">].*$
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
/u/rroesler/continous/Top/include /opt/wle/include
/u/oracle/product/8.1.5/rdbms/demo /u/oracle/product/8.1.5/plsql/public
/u/oracle/product/8.1.5/network/public
/u/rroesler/continous/Top/Srvs/Queue/tests
/u/rroesler/continous/Top/Srvs/Queue/gen
/u/rroesler/continous/Top/Srvs/Queue/gen/Queue /usr/include
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
/u/rroesler/continous/Top/Srvs/Queue/tests
/u/rroesler/continous/Top/Srvs/Queue/gen
/u/rroesler/continous/Top/Srvs/Queue/gen/Queue
<Srvs!Queue!tests>QueueTestHandlerCallback.idl = HdrRule
<Srvs!Queue!tests>QueueTestHandlerCallback.idl = ^[ ]*#[
]*include[ ]*[<"](.*)[">].*$
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
/u/rroesler/continous/Top/include /opt/wle/include
/u/oracle/product/8.1.5/rdbms/demo /u/oracle/product/8.1.5/plsql/public
/u/oracle/product/8.1.5/network/public
/u/rroesler/continous/Top/Srvs/Queue/tests
/u/rroesler/continous/Top/Srvs/Queue/gen
/u/rroesler/continous/Top/Srvs/Queue/gen/Queue /usr/include
<Srvs!Queue!tests>QueueTestHandlerCallback.idl
make -- <exported>QueueTestHandlerCallback.idl
Queue/QueueHandlerCallback.idl
bind -- <exported>QueueTestHandlerCallback.idl:
/u/rroesler/continous/Top/include/Queue/QueueTestHandlerCallback.idl
time -- <exported>QueueTestHandlerCallback.idl: Tue Nov 16
make -- <source>QueueTestHandlerCallback.idl
Queue/QueueHandlerCallback.idl
bind -- <source>QueueTestHandlerCallback.idl:
/u/rroesler/continous/Top/Srvs/Queue/tests/QueueTestHandlerCallback.idl
time -- <source>QueueTestHandlerCallback.idl: Tue Nov 16
made stable <source>QueueTestHandlerCallback.idl
made stable <exported>QueueTestHandlerCallback.idl
From: "Dowdy, Mark" <mark@ciena.com>
Subject: RE: Maximum "actions" length
Date: Wed, 29 Dec 1999 16:50:23 -0800
FYI, the problem turned out to be a stack corruption
because a line in one of our actions contained a token
with a pair of variables that when expanded, were longer
than MAXSYM. When var_expand() was doing strcpy's, there
wasn't a check to see if the size of out_buf was exceeded
and strcpy happily scribbled all over the stack.
From: mzukowski@bco.com
Date: Tue, 4 Jan 2000 10:47:05 -0800
Subject: backwards rule...
I use noweb which takes a .nw file and makes any number of other source
files, such as .c and .h files. When I change a .nw file, it may only
change one of the .c files, but not all of them. I'm not sure how to handle
that with Jam.
What I really need to do is have a rule which says to always run noweb on
the .nw files and then check to see if any of the .c or .h files have
changed. I don't want to say that the .c files depend on the .nw file
because sometimes they do and sometimes they don't, depending on which part
of the .nw file has changed. It's kind of a conditional dependency.
Has anyone dealt with a similar situation before?
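One technique often used for exactly this situation (a hedged sketch, not taken from this thread; the function name is invented) is "install if changed": always run noweb into a scratch location, then copy each generated file over the real one only when its content actually differs. Unchanged .c and .h files keep their old timestamps, so a timestamp-driven tool like jam or make won't rebuild their dependents.

```python
import os
import shutil

def install_if_changed(generated, destination):
    """Copy 'generated' over 'destination' only when the content differs.

    Unchanged outputs keep their old timestamps, so a timestamp-based
    build tool will not rebuild their dependents.
    Returns True when the destination was actually updated.
    """
    if os.path.exists(destination):
        with open(generated, "rb") as f_new, open(destination, "rb") as f_old:
            if f_new.read() == f_old.read():
                return False          # identical: leave the timestamp alone
    shutil.copyfile(generated, destination)
    return True
```

The noweb action would then generate into a temp directory and call this for each output, turning the "conditional dependency" into ordinary timestamp behavior.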
From: "John Avery" <javery@taxcut.com>
Date: Tue, 4 Jan 2000 14:00:10 -0500
Subject: Re: backwards rule...
For a somewhat similar situation, I run jam twice, the first time with a
"setup" target that may write files needed by the second, main, build.
From: "Johnston, Keith" <johnston@vignette.com>
Date: Thu, 6 Jan 2000 10:52:26 -0600
Subject: Running Jam on NT in a Samba-Mounted Directory
I'm new to this list and to Jam, but I can't find anything about this in the
archives or the FAQ.
In the environment here, we have Jam files that are used to build both on
Unix and NT. I was hoping I could set up a shared directory on Unix, and
then run Jam on NT from inside that shared directory.
However, when I try this, Jam says it cannot find the source files:
$ jam
don't know how to make <foo!bar!fileA>fileA.cpp
don't know how to make <foo!bar!fileA>fileB.cpp
...
Jam works fine when I run it in a directory that really contains the source
files on NT instead of the mounted directory.
Any ideas? Is this a Samba configuration problem or a Jam problem?
Date: Mon, 10 Jan 2000 18:05:48 -0800 (PST)
Subject: Re: Future Plans for Jam?
Well, I can't speak for anyone at Perforce (or anywhere else for that
matter), but October/November/December were exceedingly busy months for
me. I have several letters in my Jam mail folder that I've been meaning to
respond to -- including Lance's original one on this subject. I usually
try to respond immediately to any mail on this list that I have a response
to (I did manage to for one that only took two seconds to do) -- but since
I'm only now starting my Christmas shopping, any response that's going to
take a little more thought is still going to have to wait until time gets
a little less crunched (including Lance's original one on this subject :)
From: "Eric Johnson" <ejohnson@metrotools.com>
Date: Tue, 25 Jan 2000 19:04:03 -0500
Subject: Proteus - An alternative to make
I posted this to comp.software.config-mgmt, but I thought
some folks on this list might find it interesting. I hope
this isn't viewed as intrusive. So far the reaction has
been mildly lukewarm. Someone's bound to like it.
Proteus - A New Approach To Make
* A Criticism of Make
Proteus was born out of frustration. A make utility is a crucial part
of the software development process. Yet make alone is never
enough. Most developers and source code managers build up a warehouse
of scripts and utilities to squeeze out the desired behavior. But even
after building up such an arsenal, development groups must continually
wage war with their ad hoc make system.
The costs of the silent war build through the years. Most developers
within an organization are incapable of making bug fixes or
enhancements to their build process. Those who can, fear the complex
dependencies of the various utilities cobbled together. Educating new
developers about the vagaries of the build process becomes a rite of
passage. As the source code base grows, the build system fails to
scale and creaks along like band-aids applied to a sinking ship.
While there are a number of flavors of make, this will focus on
critiquing the functionality common to Unix make, GNU make, Digital's
MMS, and Microsoft's NMAKE. The following critique will refer to the
collective common feature set as make.
The only prior knowledge required is an understanding of the
prototypical make relationship, which is this:
target : dependent1 dependent2 dependent3 ...
shell action1
shell action2
In words, the relationship operates like so - if any dependent on the
right hand side of the relationship is out of date with respect to the
target, the shell actions are invoked. Each dependent's out of date
state is determined by recursively locating a target and analyzing its
out of date relationship to its dependents.
* Out of Date Relationship
The simplest hardwired assumption is the method of determining if
something is out of date. In general, make assumes that the target and
dependent have a direct file counterpart from which a timestamp can be
extracted. From here, make performs a time stamp comparison between
the target file and dependent file. Should the dependent's time stamp
be newer than the target's time stamp, the target is deemed out of
date. When the target is out of date, the shell actions are invoked to
bring the target up to date.
For this discussion, this relationship will be called the out of date
relationship. To put it formally, make uses a timestamp out of date
relationship, which is a grave mistake. Consider the following
development scenario.
Suppose there are two development teams. The Tools Team designs a
low-level class framework for implementing an object persistence
model. The Tools Team has its own build and release cycle separate
from others in the organization. The release process consists of
delivering their source code base in its entirety to other internal
development groups.
Now consider the timestamp out of date relationship from the
perspective of those receiving the work of the Tools Team. Suppose the
Applications Team has been hard at work with v1.0 of the Tools Team's
object persistence framework. Since the Applications Team has been
under much pressure, they are under a daily build system to speed the
QA process and fold in the rest of the final development. The effect
of this cycle is to cause each object module and executable to have a
very recent timestamp.
Let's say the Tools Team has finished work on v1.5 of their object
persistence framework. After testing their library of source code,
they release it to others within the organization. For sake of
completeness, let's say the Tools Team's last build was on October 1st
with all timestamps for their source code base dating from the
previous day, September 30th. On October 10th, the Applications Team
receives the release and decides to rebuild their system against
these new changes.
Now comes the challenge. The source code that the Tools Team delivered
is in some sense old. It dates from September 30th. But given that the
Applications Team rebuilds everyday, the Applications Team will have
object modules and executables that date from after October 1st. Yet,
those binaries were built using the previous version of the Tools
Team's library.
As a result of this situation, when one goes to rebuild the system
with the new object persistence framework source files, nothing will
be rebuilt. That's because, from a timestamp perspective, all of the
resulting object modules and executables are in fact up to date in
relation to the files they depend on. From a conceptual perspective,
the files have changed.
A common solution to this problem is to modify the timestamp of all
the source files of the Tools Team's library. This would cause all
object modules and executables to rebuild which would guarantee a
correct build. Yet this is very unappealing for two significant
reasons: it is quite hackish, and it is terribly inefficient. Surely not
everything has changed in the source code base; why waste so much time
rebuilding when perhaps very little has changed? The timestamp out of
date relationship is not an accurate way to communicate changed source
files. Yet make does not allow one to control the behavior of the
out of date relationship. There are of course better out of date
algorithms to use; rather than hardwiring this choice into the tool,
the make developer should be given a choice.
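The two policies can be contrasted in a few lines (an illustrative model only, not Proteus code; the function names are invented). A content-digest check flags the Tools Team's September 30th sources as changed even though their timestamps are older than the Applications Team's binaries:

```python
import hashlib

def out_of_date_by_time(target_mtime, dep_mtime):
    """make's rule: rebuild only when the dependent is newer than the target."""
    return dep_mtime > target_mtime

def out_of_date_by_digest(dep_path, recorded_digests):
    """Content rule: rebuild when the dependent's bytes changed since the
    last successful build, regardless of what its timestamp says."""
    with open(dep_path, "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()
    return recorded_digests.get(dep_path) != digest
```

Under the digest rule, delivering old-dated but changed sources still triggers a rebuild, and touching every file to force one becomes unnecessary.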
* Dynamic Dependency Generation
Any large-scale system implemented in C or C++ will have a substantial
source to header file dependency structure. No development group can
be expected to manually create this information. Yet as a vital part
of the build process, make does not offer a simple mechanism to
conveniently generate this information. The root of the problem is
the way in which make evaluates the makefile. Make has two phases of
operation. The first phase is the syntactical parsing of the
makefile. During this phase, the underlying tree of target to
dependent information is built up. In the second phase, the tree is
evaluated and then brought up to date.
In order for make to operate, the complete dependency tree needs to be
computed as per the first phase. After all, there's no way to know
what to actually rebuild in the second phase unless one has the whole
dependency tree. But to compute the whole dependency tree can be time
consuming, especially for a large system. Thus, we're left with a
system where the second phase imposes a high cost on the execution of
the first phase.
A large source code base makes it too expensive to compute the whole
dependency tree each time. A more efficient process would be to
compute the dependencies only for any files that have changed. But
this is the very type of operation that only the second phase of make
can perform. The result is a situation where we need the features of
the evaluation phase to help us generate the information for the tree
population phase.
Some utilities solve this issue via a recursive invocation mechanism,
but this is an inefficient hack. The recursive invocations are
inherently unable to share information without further hacks. The
effect here is that each visited header file must be recursively
evaluated for its complete chain of headers. More machinery would need
to be introduced in order to eliminate this inefficiency.
The basic cause is that the two phases should really exist as one
integrated phase. Why not allow one to populate the tree as it's
evaluated? Not only would this result in greater flexibility, but it
would result in greater efficiency too. The tree is grown only to the
size that is needed when it is needed. The theme here is lazy
evaluation.
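The lazy-evaluation theme can be sketched as a memoized, on-demand include scanner (a toy model under stated assumptions: only `#include "..."` lines and a single flat source directory; all names are invented). Each file is parsed at most once, and only when some target's evaluation actually asks for its dependencies:

```python
import os
import re

_INCLUDE_RE = re.compile(r'#\s*include\s*"([^"]+)"')

def make_scanner(root):
    """Return a function that lazily computes a file's transitive
    '#include "..."' closure. Results are memoized, so headers shared
    by many sources are scanned once, and nothing is scanned until
    the evaluation phase actually needs it."""
    cache = {}
    def deps(path):
        if path in cache:
            return cache[path]
        cache[path] = set()           # placeholder guards against cycles
        full = os.path.join(root, path)
        found = set()
        if os.path.exists(full):
            with open(full) as f:
                for name in _INCLUDE_RE.findall(f.read()):
                    found.add(name)
                    found |= deps(name)
        cache[path] = found
        return found
    return deps
```

The dependency tree grows only to the size the build needs, which is the integration of the two phases argued for above.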
* Parallel Build Capabilities
As the complexity of the products created grows, so grows the source
code base. With this growth comes the increased build time associated
with it. While great strides have been made in distributed systems and
multi-tasking operating systems, make is largely unable to put those
resources to use. A make system should be able to independently build
multiple components, but also have the skill to synchronize those
components that do depend on each other. A make system is more than
just file dependency; it's that plus conceptual dependency at the global
scope.
* Shell Commands Are Inadequate Actions
As part of every make process, one needs to implement the actions that
actually bring the target up to date with respect to the
dependents. Unfortunately, make implements these update actions as
shell operations. For those operating systems with a strong shell,
this direct access to the shell is a blessing given make's weak
variable handling tools. But for those make developers running under
weaker shells, this turns into a hideous curse. Apart from the Unix
based shells, OpenVMS's DCL and Microsoft's infamous "dos box" offer
a very weak feature set.
Even if these shells were stronger, the point still remains - direct
invocation of shell commands is the wrong approach. This hampers
portability of the make system. In order to accommodate the
ever-growing list of operating systems and Byzantine shells, the make
developer has to push their makefile through mind-numbing contortions.
Rather than rely on the shell as a warmed over interpreter, the good
make system should offer its own notion of a programmable
function. This would allow the make developer to implement more
complex behavior than could be achieved directly in the shell. In
addition, by using an actual programming language, the update actions
can become more operating system independent.
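What an update action written as a real function, rather than a shell command, might look like (a sketch only; `copy_action` is an invented name, not part of any tool described here). Because nothing is delegated to DCL or the dos box, the same action behaves identically wherever the interpreter runs:

```python
import os
import shutil

def copy_action(target, source):
    """A portable 'install' update action expressed as a function.

    No shell is involved: directory creation and the copy are done
    through the language's own library, so the action needs no
    per-operating-system contortions in the build description.
    """
    os.makedirs(os.path.dirname(target) or ".", exist_ok=True)
    shutil.copyfile(source, target)
    return target
```

A build tool with this design registers such functions against targets instead of emitting shell text, gaining error handling and return values for free.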
* A VPATH that Works
The goal of any large-scale make system is to minimize the amount of
work that needs to be done for local development. Actually reaching
this goal proves to be frustratingly difficult. One of the ways to
reach this goal is to have a make system that draws upon centrally
built binaries so that local development only rebuilds what's been
changed locally.
To make the issue clear, let's consider the following example. This
isn't the ideal way or the common way in which source code is shared,
but it will give you an idea as to the issue involved here.
/shared_directory/src ; Complete set of source files
/shared_directory/src/bin ; Binaries produced from the above
In the /shared_directory/src directory, we find a complete set of
source files that forms a complete library. The src directory contains
all headers and sources needed to build the library. And under the bin
directory, we'll find all of the binaries that are to be produced from
that source code base. Now, if a developer were to perform any local
development, they would have a directory chain that looks something
like this.
/usr/ejohnson/devel/src
/usr/ejohnson/devel/src/bin
Note that the hierarchy of directories for local development is the
same as that of the centralized development. In order for the user to
do any useful local development, the library must be completely
rebuilt. Thus the local directory becomes a complete mirror of the
central directory. If the library is sufficiently large, this will
consume a considerable amount of time and disk space. This becomes
particularly painful for the developer when the change is minuscule
compared to the overall size of the package.
The ideal way to handle this issue is to allow the local developer to
invoke a make system that can draw upon the centrally built binaries
when possible. More importantly though, the make system should have
enough smarts to know when the local changes require rebuilding of
central source files. To concretize this last point, let's assume a
simple source code base. The library to be built consists of three
source files, foo.h, foo.cpp, and bar.cpp. Furthermore, let's assume
that both source files, foo.cpp and bar.cpp, include foo.h. Thus, any
changes to foo.h would require the recompiling of foo.cpp and
bar.cpp.
For simplicity's sake, let's assume that the centrally built library
is completely up to date. Furthermore, let's assume that the developer
would like to modify foo.h without directly modifying any other source
file. This means that in the local directory, the developer will only
have foo.h and no other source file.
In this scenario, the make system should recompile both source files
from the central directory and place the binary output into the local
directory. The desired source to be recompiled should not be placed
into the local directory nor should the binary be placed into the
central directory. Once both binaries are placed into the local binary
directory, the library would be relinked. The difficulty in the above
scenario is in handling the recompilation of the source file. That's
because when the source file is recompiled, the binary is placed into
the local directory. This means that the target we were looking
for has been given a new home. To put it differently, the link
action for the library will need to be told that the object module was
placed into the local directory rather than the central directory.
This point is particularly important to grasp, yet difficult to
convey, so let's reconsider this issue from make's perspective. When
rebuilding the library, the make system will first consider a
relationship like this.
foo.exe : foo.obj bar.obj
[link actions]
The foo.exe will be produced locally, but the object modules, foo.obj
and bar.obj, could be found either centrally or locally. Let's
suppose that there are no copies locally, so when the make system goes
to look for them, the object modules will be found in the central
directory.
Thus the make system is now effectively working with a relationship
like this.
/central/bin/foo.obj : foo.cpp foo.h
[compiler actions]
As with foo.obj and bar.obj, foo.cpp and foo.h will need to be
searched for in a similar path-like way. Thus, we'll look for those
source files locally and then centrally. In this case, we'll discover
that foo.h (the local copy) is out of date with respect to the object
module in the central directory. This will result in the source file
being recompiled into the local binary directory.
This is all well and good, but we're left with the odd effect of the
target not actually being built. In some sense, foo.obj was
rebuilt. But the actual target, as reported or known to the make
system, /central/bin/foo.obj, is still out of date. With a little hand
waving, we've brought a different foo.obj up to date.
In order to be completely correct though, the make system needs to
have a way to push the new name of that target back up the
evaluation sequence. This means that the dependencies for the link
relationship, the one with foo.obj and bar.obj on the right hand side
would need to be informed of their target's new home. This push back
of new target home information is critical for the success of a shared
directory build system.
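The "push back the new home" idea can be modeled in a few lines (a hedged illustration, not a real make system; all names are invented). Resolution consults a rebound table before the local and central trees, so when a target is rebuilt into the local directory, the later link step automatically picks up its new location:

```python
import os

def resolve(name, local_dir, central_dir, rebound):
    """Find 'name' VPATH-style: a rebuilt copy recorded in 'rebound'
    wins, then the local tree, then the central tree. The 'rebound'
    map is how a rebuild pushes a target's new home back up to the
    relationships that consume it."""
    if name in rebound:
        return rebound[name]
    for d in (local_dir, central_dir):
        candidate = os.path.join(d, name)
        if os.path.exists(candidate):
            return candidate
    return None

def rebuild_locally(name, local_dir, rebound):
    """Simulate recompiling 'name' into the local directory and
    recording its new home for later consumers (e.g. the link action)."""
    new_home = os.path.join(local_dir, name)
    with open(new_home, "w") as f:
        f.write("rebuilt\n")
    rebound[name] = new_home
    return new_home
```

Before the rebuild, foo.obj resolves centrally; afterwards the same lookup yields the fresh local copy, so the link relationship is never left pointing at the stale central binary.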
* Make is a Lousy Programming Language
What sums up all of the previous criticisms against make is this
simple observation. A makefile really needs to be thought of as a
program. It's a tool to be used and customized by developers for their
own needs. But as a language in which to write a program, make's suite
of functionality leaves much to be desired.
Thus, the final and fundamental criticism of make is that it is not a
professional language in which one could write any large scale,
portable program. The flow control constructs are weak. There's no
support for procedures with return values thus preventing any top-down
design. Error handling is cryptic and handicapped. In addition,
there's no real notion of structures to create aggregated data
units. Make, as a development language, is deficient and stunted.
The birth of Proteus really began with the above observations. It
began with the goal of implementing a good make utility that had a
real language in it. But rather than write yet another scripting
language, a simple laundry list of must-have language features was
developed.
* Broad based support across popular and fringe operating systems
* Intelligent variable handling - including scoped variables
* Primitive OOP support - classes and polymorphism
* Thread support to implement parallel evaluation of build trees
* Strong ties to an operating system's shell
* Some measure of error handling
The two most popular scripting languages that match most of the above
criteria are Python and Perl. Unfortunately, Perl's thread support is
experimental and is incomplete. Thus, the only scripting language that
satisfies the above requirements is Python.
To summarize -
Proteus is a framework for building a make system. It's written entirely
in Python and is freely distributable. If you'd like a copy, send me email.
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Date: Thu, 3 Feb 2000 10:54:07 -0500
Subject: Compiling Debug and Product targets in the same Jamfile.
We're in the process of migrating to JAM, with some difficulty I might add.
In the old environment, we recursively called the same makefile to generate
product (non-debug) and debug variants of the objects, libraries and executables.
The different variants had their own destination directories so that they
could coexist in harmony.
With JAM I tried creating ProdApp and DebugApp rules. I was expecting
both the debug objects and executable to end up in a bin/debug directory
and the product objects and executable to end up in a bin/prod directory.
rule ProdApp {
LOCATE_TARGET = $(TARGET_PROD) ;
Depends $(<) : $(>) ;
Main $(<) : $(>) ;
}
rule DebugApp {
LOCATE_TARGET = $(TARGET_DEBUG) ;
Depends $(<) : $(>) ;
Main $(<) : $(>) ;
}
HDRS += ..$(sep)include ;
source = file1.c file2.c file3.c ;
DebugApp progd : $(source) ;
ProdApp prog : $(source) ;
Results with:
...found n target(s)...
...updating n target(s)...
MkDir1 bin.nt
MkDir1 bin.nt\debug
MkDir1 bin.nt\release
Cc bin.nt\release\file1.obj
file1.c
Cc bin.nt\release\file1.obj
file1.c
Cc bin.nt\release\file2.obj
file2.c
Cc bin.nt\release\file2.obj
file2.c
Cc bin.nt\release\file3.obj
file3.c
Cc bin.nt\release\file3.obj
file3.c
Link bin.nt\debug\progd.exe
Chmod bin.nt\debug\progd.exe
Link bin.nt\release\prog.exe
Chmod bin.nt\release\prog.exe
...updated n target(s)...
Do I have to resort to changing the names of the object files (i.e. appending
a 'd' to the debug variants)? That seemed to do the trick with the executables.
If I have to run JAM twice with different settings, so be it, but I wished
to avoid that.
Date: Thu, 3 Feb 2000 10:54:13 -0600 (CST)
Subject: Re: Compiling Debug and Product targets in the same Jamfile.
Without getting into it in detail, I guess it would be that the
executables are stated to depend upon the sources rather than the binaries,
so once jam builds from the sources it could link both exes
from them.
If you run with debugging output, you can nail it down exactly, although it's a lot
of output to look at.
We do a similar thing here, but I seem to remember a rule MainFromObjects
that expresses the exe to obj dependency directly.
What we do is run jam twice with a BUILD_TYPE=debug and BUILD_TYPE=release
and use the same rules.
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Thu, 3 Feb 2000 17:55:58 +0100
Subject: Re: Compiling Debug and Product targets in the same Jamfile.
This doesn't work, because the 2 invocations of the rule "Main" refer to the
same target.
And this target, of course, has only one location and one set of variables.
We run JAM twice, with and without a "-sDEBUG=1" option.
But you might try to distinguish the targets with a GRIST :
rule ProdApp {
SOURCE_GRIST = prod ;
LOCATE_TARGET = $(TARGET_PROD) ;
CCFLAGS = ... ;
LINKFLAGS = ... ;
Main $(<:G=prod) : $(>) ;
}
rule DebugApp {
SOURCE_GRIST = debug ;
LOCATE_TARGET = $(TARGET_DEBUG) ;
CCFLAGS = .... ;
LINKFLAGS = ... ;
Main $(<:G=debug) : $(>) ;
}
From: Randy Roesler <rroesler@mdsi.bc.ca>
Subject: RE: Compiling Debug and Product targets in the same Jam file.
Date: Thu, 3 Feb 2000 10:30:18 -0800
Use GRIST ... change the Main rule (and other
rules as required) so that the executable and
object files have different GRIST.
For example, add '-release' or '-debug' to the GRIST.
Date: Thu, 3 Feb 2000 15:57:37 -0800 (PST)
Subject: Re: Compiling Debug and Product targets in the same Jamfile.
So far everyone seems to be suggesting you try using GRIST, but I'm not
sure it would do it for you (but maybe I'm just being slow today). For
things like this, I generally just use full paths -- that way, you end up
with two different targets (when you run 'jam', not in your Jamfile).
So, here's some stuff that works. It's basically just a quickie, so you'll
probably need to adjust things for your specific needs. For one thing, I
didn't bother using $(SLASH) (I was being lazy), so you'll need to change
any /'s (I did it on a FreeBSD machine, but it should work on Windoze once
you change those.) I also assumed you always want both flavors built -- if
you want to be able to do one or the other, you can always add that.
Dir struct is:
/tmp/jason/
Jamfile Jamrules src/
Jamfile foo.c
Jamrules:
# Build both debug and production executables
rule myMain {
local i t ;
for i in debug prod
{
t = $(SUBDIR)/bin/$(i)/$(<) ;
#Because we aren't using MakeLocate, which ordinarily does the mkdir's
myMkDir $(SUBDIR)/bin/$(i) ;
# Use full-path targets
myMainFromObjects $(t) : $(>:S=$(SUFOBJ)) ;
myObjects $(t:D) : $(>) ;
}
}
rule myMkDir {
Depends first : dirs ;
Depends dirs : $(<) ;
MkDir $(<) ;
}
rule myMainFromObjects {
local d s t ;
#Set directory, source, and target names
d = $(<:D) ;
s = $(d)/$(>) ;
makeSuffixed t $(SUFEXE) : $(<) ;
#Link -s for production executables
if $(d:B) = prod { LINKFLAGS on $(t) += -s ; }
#If executables have suffixes...
if $(t) != $(<) {
Depends $(<) : $(t) ;
NOTFILE $(<) ;
}
# Make compiled sources a dependency of target
Depends exe : $(t) ;
Depends $(t) : $(s) ;
Clean clean : $(t) ;
Link $(t) : $(s) ;
}
rule myObjects {
for i in $(>)
{ #Compile -g for debug
if $(<:B) = debug { CCFLAGS on $(<)/$(i:S=$(SUFOBJ)) += -g ; }
Object $(<)/$(i:S=$(SUFOBJ)) : $(i) ;
Depends obj : $(<)/$(i:S=$(SUFOBJ)) ;
}
}
Jamfiles:
# Top-level Jamfile
SubInclude TOP src ;
# Jamfile for src
SubDir TOP src ;
myMain foo : foo.c ;
'jam -d2' output (lightly edited so it's not so long):
...found 22 target(s)...
...updating 7 target(s)...
mkdir /tmp/jason/src/bin
mkdir /tmp/jason/src/bin/debug
mkdir /tmp/jason/src/bin/prod
Cc /tmp/jason/src/bin/debug/foo.o
cc -c -g -O -I/tmp/jason/src -o /tmp/jason/src/bin/debug/foo.o /tmp/jason/src/foo.c
Link /tmp/jason/src/bin/debug/foo
cc -o /tmp/jason/src/bin/debug/foo /tmp/jason/src/bin/debug/foo.o
chmod 711 /tmp/jason/src/bin/debug/foo
Cc /tmp/jason/src/bin/prod/foo.o
cc -c -O -I/tmp/jason/src -o /tmp/jason/src/bin/prod/foo.o /tmp/jason/src/foo.c
Link /tmp/jason/src/bin/prod/foo
cc -s -o /tmp/jason/src/bin/prod/foo /tmp/jason/src/bin/prod/foo.o
chmod 711 /tmp/jason/src/bin/prod/foo
...updated 7 target(s)...
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Fri, 4 Feb 2000 10:00:02 +0100
Subject: Re: Compiling Debug and Product targets in the same Jamfile.
You are right:
t = $(SUBDIR)/bin/$(i)/$(<) ; (where $(i) is either 'debug' or 'prod')
will do the same thing as
t = $(<:G=$(i));
LOCATE on $(t) = $(SUBDIR)/bin/$(i) ;
Grists and complete paths are 2 ways to make a target unique.
(A grist appears on the target's name in Jam, but not in the target's
file name used in actions)
The advantage of grists is that you don't have to rewrite all the rules,
and the directory structure can be more complex. It is simpler to extract
the grist from a target's name than from a $(SUBDIR)/bin/$(i) construct.
Date: Fri, 4 Feb 2000 08:23:33 -0800 (PST)
Subject: Re: Compiling Debug and Product targets in the same Jamfile.
I stand corrected (fighting the flu the past two weeks has clearly clouded my brain).
So, here's a much more elegant solution (Jason, I'd recommend you use this
one instead since it's a lot cleaner):
# Build both debug and production executables
rule myMain {
local i s t ;
for i in debug prod
{
SOURCE_GRIST = $(i) ;
t = $(<:G=$(i)) ;
s = $(>:G=$(i)) ;
if $(i) = debug { ObjectCcFlags $(s) : -g ; }
if $(i) = prod { LINKFLAGS on $(t) += -s ; }
Main $(t) : $(s) ;
LOCATE on $(s:S=$(SUFOBJ)) = $(SUBDIR)/bin/$(i) ;
LOCATE on $(t) = $(SUBDIR)/bin/$(i) ;
}
}
Date: Fri, 4 Feb 2000 09:16:13 -0800 (PST)
Subject: Re: Compiling Debug and Product targets in the same Jamfile.
Ack! I'm obviously not functioning well. I forgot about making the
directories. So...here's what should be the final answer (famous last
words, right?)
# Build both debug and production executables
rule myMain {
local i s t ;
for i in debug prod
{
SOURCE_GRIST = $(i) ;
t = $(<:G=$(i)) ;
s = $(>:G=$(i)) ;
if $(i) = debug { ObjectCcFlags $(s) : -g ; }
if $(i) = prod { LINKFLAGS on $(t) += -s ; }
Main $(t) : $(s) ;
Depends $(s:S=$(SUFOBJ)) : $(SUBDIR)/bin/$(i) ;
MkDir $(SUBDIR)/bin/$(i) ;
LOCATE on $(s:S=$(SUFOBJ)) = $(SUBDIR)/bin/$(i) ;
LOCATE on $(t) = $(SUBDIR)/bin/$(i) ;
}
}
Date: Thu, 03 Feb 2000 11:22:37 -0800
From: Iain McClatchie <iain@10xinc.com>
Subject: Re: Compiling Debug and Product targets in the same Jamfile.
Jason> In the old environment, we recursively called the same makefile
Jason> to generate product(non debug) and debug variants of the
Jason> objects, libraries and executables.
You can and should handle this in Jam with a single Jam run.
The rule is: every file you wish to build should exist as a separate
"target" in the Jamfile hierarchy. That means you should have
separate targets for the debug and production versions of your app,
and those files should be built from separate debug and production
versions of each object file.
It is perfectly reasonable to have these target names distinguished by
different grist, or by different subdirectory names, or by adding ".d"
before the suffix of every debug object file or executable. We use
suffixes.
It is totally unreasonable to have to specify your build process twice
over in order to maintain these parallel builds. Instead, you write
rules in Jam to do it for you.
At 10x, we build three versions of every executable: a debug version,
an optimized "production" version, and a profiling version used for
performance tuning. This takes no more code in our Jamfiles than
building a single production version would. Here's how we do it:
"Jamrules" contains:
CC = /usr/local/bin/gcc ;
C++ = /usr/local/bin/gcc ;
CC_OPT_FLAGS = -m486 -malign-functions=4 -O2 -g2 ;
C++_OPT_FLAGS = -m486 -malign-functions=4 -O2 -g2
-fno-exceptions -fno-implicit-templates ;
CC_DBG_FLAGS = -m486 -malign-functions=4 -g -gstabs+ ;
C++_DBG_FLAGS = -m486 -malign-functions=4 -g -gstabs+
-fno-exceptions -fno-implicit-templates ;
CC_PROF_FLAGS = -m486 -malign-functions=4 -O2 -g2 -pg ;
C++_PROF_FLAGS = -m486 -malign-functions=4 -O2 -g2 -pg
-fno-exceptions -fno-implicit-templates ;
LINKFLAGS_OPT = -lm ;
LINKFLAGS_DBG = -lm ;
LINKFLAGS_PROF = -lm -pg ;
# MyMain basename : sources : libraries
# generates basename -- fast executable
# basename.dbg -- debuggable executable
# basename.prof -- fast executable with profiling
# libraries listed should be package libraries
rule MyMain {
local s t_fast t_dbg t_prof ;
makeGristedName s : $(>) ;
makeGristedName t_fast : $(<) ;
makeGristedName t_dbg : $(<).dbg ;
makeGristedName t_prof : $(<).prof ;
MainFromObjects $(t_fast) : $(s:S=.opt$(SUFOBJ)) ;
MainFromObjects $(t_dbg) : $(s:S=.dbg$(SUFOBJ)) ;
MainFromObjects $(t_prof) : $(s:S=.prof$(SUFOBJ)) ;
LinkPackageLibraries $(t_fast) : $(3) ;
LinkPackageLibraries $(t_dbg) : $(3) ;
LinkPackageLibraries $(t_prof) : $(3) ;
Depends dbg : $(t_dbg) ;
Depends profile : $(t_prof) ;
LINKFLAGS on $(t_fast) = $(LINKFLAGS_OPT) ;
LINKFLAGS on $(t_dbg) = $(LINKFLAGS_DBG) ;
LINKFLAGS on $(t_prof) = $(LINKFLAGS_PROF) ;
for i in $(s) {
CCFLAGS on $(i:S=.opt$(SUFOBJ)) += $(CC_OPT_FLAGS) ;
C++FLAGS on $(i:S=.opt$(SUFOBJ)) += $(C++_OPT_FLAGS) ;
CCFLAGS on $(i:S=.dbg$(SUFOBJ)) += $(CC_DBG_FLAGS) ;
C++FLAGS on $(i:S=.dbg$(SUFOBJ)) += $(C++_DBG_FLAGS) ;
CCFLAGS on $(i:S=.prof$(SUFOBJ)) += $(CC_PROF_FLAGS) ;
C++FLAGS on $(i:S=.prof$(SUFOBJ)) += $(C++_PROF_FLAGS) ;
Object $(i:S=.opt$(SUFOBJ)) : $(i) ;
Object $(i:S=.dbg$(SUFOBJ)) : $(i) ;
Object $(i:S=.prof$(SUFOBJ)) : $(i) ;
}
}
Here's a typical Jamfile: (in $WORKAREA/gemini/src)
SubDir TOP gemini src ;
MAINSRC = alloc.c chains.c deduce.c equate.c
fingers.c gemini.c hash.c match.c
nxtarg.c properties.c queue.c readgraph.c
simalloc.c simnet.c simread.c skipto.c
userdef.c ;
MyMain <gemini!src>gemini : $(MAINSRC) ;
Here's another: (in $WORKAREA/timepath/src)
SubDir TOP timepath src ;
SubDirHdrs $(TOP)/timepath/include ;
SUBDIRCCFLAGS = -W -Wall ;
SUBDIRC++FLAGS = -W -Wall ;
MAINSRC =
main.cc sigslex.cc sigsgram.cc
sigs.cc hashloc.cc testcase.cc
testcaselex.cc testcasegram.cc
;
MyMain <timepath!src>timepath : $(MAINSRC) : libappbase.a ;
Yacc sigsgram.cc : sigs.y : sigs ;
Lex sigslex.cc : sigs.l : sigs ;
Yacc testcasegram.cc : testcase.y : testcase ;
Lex testcaselex.cc : testcase.l : testcase ;
Note that we've also redefined the Yacc and Lex rules to use the GNU
versions of the tools, and also to cleanly handle multiple scanners
and parsers in the same executable:
# Lex lex.c : lex.l : lexprefix
rule Lex {
defaultGrist csource : $(<) ;
defaultGrist lsource : $(>) ;
Depends $(csource) : $(lsource) ;
MakeLocate $(csource) : $(LOCATE_SOURCE) ;
SEARCH on $(lsource) = $(SEARCH_SOURCE) ;
Clean clean : $(csource) ;
if $(3) { LEXOPTS on $(csource) = -P$(3) ;
} else { LEXOPTS on $(csource) = ; }
Lex1 $(csource) : $(lsource) ;
}
actions Lex1 { $(LEX) $(LEXFLAGS) $(LEXOPTS) -o$(<) $(>) }
# Yacc yacc.c : yacc.y : yaccprefix
rule Yacc {
defaultGrist csource : $(<) ;
defaultGrist ysource : $(>) ;
defaultGrist others : $(<:S=.cc.h) $(<:S=.cc.output) ;
Depends $(csource) $(others) : $(ysource) ;
MakeLocate $(csource) : $(LOCATE_SOURCE) ;
MakeLocate $(others) : $(LOCATE_SOURCE) ;
SEARCH on $(ysource) = $(SEARCH_SOURCE) ;
Clean clean : $(csource) $(others) ;
if $(3) { YACCPREFIX on $(csource) = -p $(3) ;
} else { YACCPREFIX on $(csource) = ; }
Yacc1 $(csource) : $(ysource) ;
}
actions Yacc1 { $(YACC) -o $(<) $(YACCFLAGS) $(YACCPREFIX) $(>) }
# defaultGrist targvarname : targnames
rule defaultGrist {
local i res ;
res = ;
for i in $(>) {
if $(i:G) { res += $(i) ;
} else { res += $(i:G=$(SOURCE_GRIST)) ; }
}
$(<) = $(res) ;
}
(Yes, Laura, I know I'm supposed to check these into the Jam public
repository, and I haven't yet...)
Date: Thu, 03 Feb 2000 13:12:03 -0800
From: Iain McClatchie <iain@10xinc.com>
Subject: Re: Compiling Debug and Product targets in the same Jamfile.
My example referenced a rule called LinkPackageLibraries. You
should change that out to use the LinkLibrary rule, unless you want
to pick up the 10x package system as well. In the latter case,
I've posted it to the Jam mailing list before, you should get it
from the archives.
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Fri, 4 Feb 2000 18:52:38 +0100
Subject: Re: Compiling Debug and Product targets in the same Jamfile.
Or shorter, using the LOCATE_TARGET variable :
# Build both debug and production executables
rule myMain {
local i s t ;
for i in debug prod {
SOURCE_GRIST = $(i) ;
LOCATE_TARGET = $(SUBDIR)/bin/$(i) ;
t = $(<:G=$(i)) ;
s = $(>:G=$(i)) ;
if $(i) = debug { ObjectCcFlags $(s) : -g ; }
if $(i) = prod { LINKFLAGS on $(t) += -s ; }
Main $(t) : $(s) ;
}
}
Date: Fri, 4 Feb 2000 13:36:12 -0800 (PST)
Subject: Re: Compiling Debug and Product targets in the same Jamfile.
Yes, that's cleaner -- and takes care of the directories getting made, too.
(I guess not working with Jam for so long [can it really be six years? --
ouch!] has caused me to lose even more of it than I thought I
had...bummer. Oh well, at least Jason now knows lots of different ways to
not do it :)
Date: Fri, 11 Feb 2000 15:26:26 -0500
From: Stefan Vorkoetter <stefan@freedomintelligence.com>
Subject: Jam is jamming my Win98 box
I use Jam at work on a Win2000 box to build a large database product,
implemented primarily in C++. We use MSVC 5 at the moment. I have an
identical setup at home _except_ that the OS is Win98 2nd Edition
instead of Win2000.
Whenever I run Jam, I encounter two types of problems:
1. Jam doesn't seem to see that certain things are already there.
On my home machine, it always attempts to create certain
directories, whether they are there or not.
2. When Jam tries to build something, after a number of commands
have been issued by Jam, the Command Prompt window hangs.
I don't know if these two problems are related, but they are keeping
me from being able to use Jam.
I tried downloading the source, in hopes of being able to debug
things, and the same thing happens once the initial Makefile starts up jam0.
Before I spend a whole lot of time debugging, I was wondering if
anyone has had (and hopefully solved) this problem.
From: "Robert Morgan" <robertm@captivation.com>
Subject: RE: Jam is jamming my Win98 box
Date: Fri, 11 Feb 2000 12:58:15 -0800
If I recall, this is caused by Jam not reading timestamps due to
potentially mal-formed pathnames. I fixed this in my local source -- I
suppose it should be migrated into the main source.
The change to FILENT.C, file_dirscan():
sprintf( filespec, "%s/*", dir );
becomes:
if (dir[strlen(dir)-1] == '\\')
sprintf( filespec, "%s*", dir );
else
sprintf( filespec, "%s/*", dir );
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Date: Mon, 14 Feb 2000 16:05:09 -0500
Subject: Fairly new to JAM
I've been looking through the documentation and looked at the mail archives,
but I cannot figure out why this doesn't work:
rule GenAppProj { Depends $(<) : $(>) ; }
actions GenAppProj { genproj.py -tguiapp $(<) $(>) }
source = file1.c file2.c file3.c ;
GenAppProj prog.dsp : $(source) ;
The rule is referenced, but the action is never called. Looking at an earlier
example in the mailing list, I found a reference to generating man pages from
header files. If I add "Depends files : $(<) ;" to the rule, my action is
called, but I don't know why. I can't find any reference to "Depends files"
anywhere in the documentation.
Thanks. In case you're wondering, I'm trying to generate a Visual C project
file from within Jam. If we let the programmers do this by hand, they tend
to set the warning level too low (Gee! It worked for me).
Date: Mon, 14 Feb 2000 13:22:05 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: Fairly new to JAM
Jam builds the 'all' target by default. If you want something built
when you just type 'jam' you need to make 'all' depend on it.
Your GenAppProj makes the .dsp depend on all the .c files, but the
'all' target doesn't depend on the .dsp.
Does it work when you run 'jam proj.dsp'?
The 'files' target that you used is part of the Jambase file built
into jam. If you look at the Jambase file (included with the source
distribution of jam) you'll see that it makes 'all' depend on 'files'.
So you had the following dependency chain when you had "Depends files ..."
in there:
all <- files <- proj.dsp
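Put in rule form, the fix is a one-line dependency (a sketch based on the
rule from the question; whether every generated .dsp should be built by a
plain 'jam' run is a policy choice):
rule GenAppProj {
    Depends $(<) : $(>) ;
    Depends all : $(<) ;    # hook the .dsp into the default 'all' target
}
actions GenAppProj { genproj.py -tguiapp $(<) $(>) }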
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Subject: RE: Fairly new to JAM
Date: Tue, 15 Feb 2000 09:43:58 -0500
Yes, that is the answer. Running 'jam proj.dsp' does execute
the GenAppProj action. I mistakenly thought 'jam -a'
would achieve the same effect, but I guess it only rebuilds
the targets associated with 'all'.
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Date: Tue, 15 Feb 2000 16:51:26 +0100
Subject: FW: Jam/MR questions
Hi, I just subscribed to the JAM mailing list and I'm pretty new to
JAM. As I have bothered Christopher with a problem, I think it's
better suited for the mailing list. Perhaps someone can help me with it.
From: Robert M. Muench [mailto:robert.muench@robertmuench.de]
Sent: Tuesday, February 15, 2000 11:25 AM
Subject: Jam/MR questions
I'm using Jam for building my OpenAmulet library project, but I have some problems with it.
I have a source tree structure like:
.../source/XYZ
Where XYZ are functional parts of the library. Inside .../source I
have a 'jamrules' file which sets some variables like C++FLAGS etc.
Then I have a 'jamfile' in this directory, which contains a line for
each XYZ directory of the following form:
SubInclude TOP XYZ ;
Inside each XYZ directory I have further 'jamfile' like this, with
'Object ...' for every source file in the directory XYZ.
SubDir TOP XYZ ;
Object ABC : ABC.cpp ;
Then I started JAM with:
set TOP=f:\openamulet\source
jam -a -n
...found 7 target(s)...
Why does nothing get built? It seems that JAM doesn't recognize the
'Object' directive. I wanted to use 'Objects' but don't know where to
put this command (.../source/jamfile ?) or how to specify it.
I'm trying to build a DLL; this means that the source files need to be
compiled to obj files, and then linked together into a DLL.
Date: Tue, 15 Feb 2000 10:48:37 -0600 (CST) (robert.muench@robertmuench.de)
Subject: Re: FW: Jam/MR questions
Object is a pretty low-level rule, and I believe that the dependencies
it (actually the C++ rule) sets are only between the object and the .cpp
file. (Objects would probably be a better choice, or one of the library rules.)
The upshot is that if you just say jam, or jam all it won't get built
because the all target does not depend upon ABC.
I just looked up -a, and it says to build all targets, even if they are
up to date. The default target is still 'all', so it will build the
targets that 'all' depends upon -- and since 'all' depends on none of
yours, it does nothing.
jam is very dependency driven. Nothing will happen unless a dependency
exists and needs to be updated.
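In Jamfile terms, hooking into the default target usually means using one
of the higher-level rules that the built-in Jambase already ties to 'all'
(a sketch; 'libxyz' and the source names here are placeholders):
SubDir TOP XYZ ;
# 'Objects' attaches each object to the 'obj' pseudo-target,
# which the built-in Jambase makes a dependency of 'all'.
Objects ABC.cpp ;
# Or, to archive the objects into a library as well:
Library libxyz : ABC.cpp ;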
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Subject: RE: FW: Jam/MR questions
Date: Wed, 16 Feb 2000 13:15:28 +0100
Hi, first I can say that I have solved the "nothing happens" problem.
I have written a script which generates Jamfiles automatically for all
subdirectories. Unfortunately the directory names given in SubDir were
shifted by one directory :-| I finally used the Objects rule, pretty
neat and easy to set up.
However, what I now have is that JAM starts building all objects,
nothing more. That's OK as there is no target, library etc. specified yet.
Here are some more questions:
1. I would like to call the compiler with a bunch (around 10) of
filenames to compile at once, and not for each single file... speeds
up compilation a lot. Is this possible?
2. The project I have at hand here uses macros for includes (I know
not very nice but it's old code). The problem is that JAM won't
recognize this and therefore doesn't find all dependencies. Is there
any other solution than replacing all macros with real #include statements?
3. Is it possible to start several compiler threads to speed up compiling?
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Date: Fri, 18 Feb 2000 10:35:19 -0500
Subject: Has anyone written a JAM syntax file for VIM?
Before I start writing one myself, I was wondering if anyone had already
done this.
Date: Fri, 18 Feb 2000 09:08:19 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: Has anyone written a JAM syntax file for VIM?
" Vim syntax file
" Language: Jam M/R file
" URL: none
" Last Change: Feb 10, 2000
" Remove any old syntax stuff hanging around syn clear
syn case match
" Jam keywords
syn keyword jamstyleKeywords all if else for in actions rule local switch include case on default
syn keyword jamstyleActionModifiers bind existing ignore piecemeal quietly together updated
syn keyword jamstyleBuiltinVars UNIX NT VMS MAC OS2 OS OSVER OSPLAT
	\ JAMVERSION JAMUNAME HDRSCAN
syn keyword jamstyleBuiltins ALWAYS Depends ECHO EXIT DIE INCLUDES LEAVES
	\ NOCARE NOTFILE NOUPDATE TEMPORARY SEARCH LOCATE HDRRULE
" Jambase stuff
syn keyword jamstyleJambase first shell files lib exe obj dirs clean uninstall
	\ Archive As BULK Bulk C++ Cc CcMv Chgrp Chmod Chown Clean CreLib FILE File Fortran
	\ GenFile GenFile1 HDRRULE HardLink HdrRule Install InstallBin InstallFile
	\ InstallInto InstallLib InstallMan InstallShell Lex Library LibraryFromObjects
	\ Link LinkLibraries Main MainFromObjects MakeLocate MkDir MkDir1 Object
	\ ObjectC++Flags ObjectCcFlags ObjectHdrs Objects Ranlib RmTemps Setuid Shell
	\ SubDir SubDirC++Flags SubDirCcFlags SubDirHdrs SubInclude Undefines
	\ UserObject Yacc Yacc1 addDirName makeCommon makeDirName makeGrist
	\ makeGristedName makeRelPath makeString makeSubDir makeSuffixed
	\ unmakeDir AR ARFLAGS AS ASFLAGS AWK BINDIR C++ C++FLAGS CC CCFLAGS
	\ CHMOD CP CRELIB CW CWGUSI CWMAC CWMSL DOT DOTDOT EXEMODE FILEMODE
	\ FORTRAN FORTRANFLAGS HDRS INSTALL JAMFILE JAMRULES JAMSHELL LEX
	\ LIBDIR LINK LINKFLAGS LINKLIBS LN MANDIR MKDIR MSIMPLIB MSLIB
	\ MSLINK MSRC MV NOARSCAN OPTIM RANLIB RCP RM RSH RUNVMS SED
	\ SHELLHEADER SHELLMODE SLASH STDHDRS STDLIBPATH SUFEXE SUFLIB SUFOBJ
	\ UNDEFFLAG WATCOM YACC YACCFILES YACCFLAGS SUBDIRCCFLAGS
	\ RELOCATE SEARCH_SOURCE SUBDIRHDRS MODE OSFULL SUBDIRASFLAGS SUBDIR_TOKENS
	\ LOCATE_SOURCE LOCATE_TARGET UNDEFS SOURCE_GRIST GROUP OWNER NEEDLIBS SLASHINC
" Comments
syn match jamstyleComment "^\s*#.*"
" Errors
syn match jamstyleError "[^ \t];"hs=s+1
if !exists("did_jamstyle_syntax_inits")
let did_jamstyle_syntax_inits = 1
hi link jamstyleKeywords Keyword
hi link jamstyleActionModifiers Keyword
hi link jamstyleBuiltins Special
hi link jamstyleBuiltinVars Special
hi link jamstyleError Error
hi link jamstyleComment Comment
hi link jamstyleJambase Identifier
"hi link jamstyleOption String
"hi link jamstyleTag Special
"hi link jamstyleTagN Identifier
"hi link jamstyleTagError Error
endif
let b:current_syntax = "jamstyle"
Date: Mon, 21 Feb 2000 14:59:55 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Prob with spaces in variables
I have this Jamfile:
TEST = "foo bar" ;
TEST += baz ;
ECHO \"$(TEST)\" ;
TEST2 = "foo bar baz" ;
ECHO \"$(TEST2)\" ;
It produces this:
"foo bar" "baz"
"foo bar baz"
Is there any way I can convince Jam at variable expansion time that
the variable being expanded is a scalar and not a list? I'd like a
way to make the TEST variable expand just like the TEST2 variable
regardless of how it got its value.
I'm running into this with the MSVCNT variable on WinNT. It has a
"Program Files" component, and since it comes from the environment Jam
doesn't know not to break it up. So it is impossible to use the default
MSVCNT value (without using the space-free 8.3 version of the name,
which is undesirable).
Date: Mon, 21 Feb 2000 17:26:04 -0600 (CST)
Subject: Re: Prob with spaces in variables
We do not install MSVC++ in a path with blanks in it; this is part of our SOP.
PS. But isn't TEST the way you'd want it, so that a path with blanks would
keep the quotes around it?
Date: Mon, 21 Feb 2000 21:59:29 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Referencing target specific variables within rules
Say I set a variable on a target:
VAR on target = value ;
Then later on, in some other rule I want to see what the value of "VAR
on target" is. Is this possible?
I know that VAR will get bound to value within actions that update
target, but I want to examine the value within a rule.
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Date: Tue, 22 Feb 2000 10:51:09 -0500
Subject: A couple of questions: Recursive Jam, the Clean rule.
Does Jam support a hierarchy of Jamfiles, where a parent Jamfile will call
its child Jamfiles? Our old build environment was made up of a large tree
of directories and subdirectories. Previously we used scripts and a
directory list to determine the order in which the makefiles were called.
It would be nice to add a simple rule
in our Jamfile describing the relationship between it and its children. This
way it would be a simple matter to go to the root of the product and build
everything or go to a subdirectory and only build that branch.
In our old build environment, we used to have a release rule that would
wipe/clean all targets and rebuild the product and debug targets. When I tried to
implement the same rule in Jam (Depends release : clean prod debug ;) Jam would stop
after the clean rule. This makes sense, but it would be nice to have it
continue on and rebuild the targets. Is this moot since 'jam -a' will rebuild
everything? Does it clean the existing targets first?
Date: Tue, 22 Feb 2000 11:03:53 -0500
From: Donald Sharp <sharpd@cisco.com>
Subject: Re: A couple of questions: Recursive Jam, the Clean rule.
Yep, Look at:
http://public.perforce.com/public/jam/src/Jamfile.html
Specifically the section labeled, Handling Directory trees.
Subject: Re: Referencing target specific variables within rules
From: "Mark D. Baushke" <mark.baushke@solipsa.com>
Date: Mon, 21 Feb 2000 23:50:28 -0800
Nope. Once you have set the value for the target, you won't be able
to fetch it out for later processing. If you need a copy of it, you
will have to set up your own copy:
VAR on $(>) = $(VALUE) ;
VAR_$(>) = $(VALUE) ;
VAR_saved += $(>) ;
or something might do the trick. Then later, you can play with going
through $(VAR_saved) to find all of the VAR_$(xxx) values again:
for xxx in $(VAR_saved) {
local myvar ;
myvar = $(VAR_$(xxx)) ;
# examine $(myvar) as desired
# do something weird like add a new element to the front
VAR on $(xxx) = something weird to the front $(myvar) ;
}
Of course, you must be careful that the VAR_saved value is never
a valid target as in 'VAR on saved = $(VALUE) ;'
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Date: Thu, 24 Feb 2000 15:57:44 -0500
Subject: SubDir, SubInclude and relative paths.
Following the documentation, I've started to use SubInclude to set up a tree
of Jamfiles.
I had originally setup a Jamfile for testing purposes in
$(TOP)/common/utilities:
#
# ... and the directory hierarchy.
#
SubDir TOP common utility bin.nt ;
#HDRS += $(DOTDOT)$(SLASH)include ;
HDRS += $(COMMONINC) ;
source = cxdrcout.c cxdrhout.c cxdrmain.c
cxdrpars.c cxdrscan.c cxdrutil.c ;
#
# EXE Targets
#
Main cxdrgend : $(source) ;
This Jamfile worked fine, and since I added 'bin.nt' to the end of the
SubDir clause, objects and executable ended up in that subdirectory.
Hurray! And even though the source was in $(TOP)/common/utility, Jam
was able to locate the source.
I've now added a parent jamfile in $(TOP)/common:
SubInclude TOP common utility ;
#
# ... and the directory hierarchy.
#
SubDir TOP common ;
Because of this, I was forced to use absolute paths in my original
Jamfile's HDRS definition.
When I run jam from $(TOP)/common/utility everything works fine, but when I
run it from one directory up ( $(TOP)/common ) I get the following errors:
D:\ws\main\common>jam -a
don't know how to make <common!utility!bin.nt>cxdrcout.c
don't know how to make <common!utility!bin.nt>cxdrhout.c
don't know how to make <common!utility!bin.nt>cxdrmain.c
don't know how to make <common!utility!bin.nt>cxdrpars.c
don't know how to make <common!utility!bin.nt>cxdrscan.c
don't know how to make <common!utility!bin.nt>cxdrutil.c
...found 26 target(s)...
...can't find 6 target(s)...
...can't make 7 target(s)...
...skipped <common!utility!bin.nt>cxdrcout.obj for lack of
<common!utility!bin.nt>cxdrcout.c...
...skipped <common!utility!bin.nt>cxdrhout.obj for lack of
<common!utility!bin.nt>cxdrhout.c...
...skipped <common!utility!bin.nt>cxdrmain.obj for lack of
<common!utility!bin.nt>cxdrmain.c...
...skipped <common!utility!bin.nt>cxdrpars.obj for lack of
<common!utility!bin.nt>cxdrpars.c...
...skipped <common!utility!bin.nt>cxdrscan.obj for lack of
<common!utility!bin.nt>cxdrscan.c...
...skipped <common!utility!bin.nt>cxdrutil.obj for lack of
<common!utility!bin.nt>cxdrutil.c...
...skipped cxdrgend.exe for lack of
<common!utility!bin.nt>cxdrcout.obj...
...skipped 7 target(s)...
Why is the SubInclude screwing up the search for my source? Do I have
to prefix my source with absolute paths?
I would appreciate any help ... I thought I had a handle on Jam, but this
one throws me.
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Subject: RE: SubDir, SubInclude and relative paths.
Date: Thu, 24 Feb 2000 16:05:27 -0500
Adding the absolute path to the source fixes the immediate
issue, but creates another one.
While the executable ends up in bin.nt subdirectory, the
object files are generated in the same directory as the source.
Date: Thu, 24 Feb 2000 13:20:15 -0800 (PST)
Subject: RE: SubDir, SubInclude and relative paths.
I thought we already went thru this stuff and, after several roundabout
ways on my part, eventually gave you the succinct way to get things built
into where you need them to go. Is that not working for you?
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Date: Thu, 2 Mar 2000 17:10:13 -0500
Subject: Is this a Bug?
I've been writing Jam files recently (with your help) with the intent that
they are to be used across many platforms.
Recently, I stopped using GRISTS to identify target directories because
objects, libraries and executables are all going to different directories.
In our Jamfile, I've been generating names for object files by combining
LOCATE_TARGET (../../common/utility/bin.nt) and source names, e.g.:
rule debugMain {
t_objs = $(>:S=d$(SUFOBJ)) ;
ourMainFromObjects $(t_dbg) : $(t_objs) ;
}
rule ourMainFromObjects {
local s t ;
s = $(LOCATE_TARGET)$(SLASH)$(>) ;
}
The rules work well under NT, but on UNIX (Solaris and Linux) the object
name picks up an extra copy of $(LOCATE_TARGET). So instead of the C++ action
being told to compile target ../../utility/common/bin.linux/file.o, the C++
action is being told to compile target
../../utility/common/bin.linux/../../utility/common/bin.linux/file.o
By temporarily replacing the Object and C++ rules I've been able to confirm
that those rules are receiving the correct target name, but somehow the C++ action is
receiving the screwed up target name as $(<).
Has anyone run across this problem before?
What the problem boils down to is that the object target picks up an
extra copy of $(LOCATE_TARGET).
Date: Thu, 02 Mar 2000 22:32:51 +0000
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: Is this a Bug?
It sounds like you may be fully specifying the target filenames as well
as setting the LOCATE var on them. You should probably stick to doing
one or the other.
Read up on the built in LOCATE variable and maybe take a look at the use
of the MakeLocate rule in the default Jambase.
Basically, the deal with LOCATE is if you do this (which is one of the
things the Jambase MakeLocate does):
LOCATE on foo.exe = some/directory/somewhere ;
Then you can (and should) refer to foo.exe by "foo.exe" in all your
rules. But when an action gets run, the foo.exe gets magically bound to
the "some/directory/somewhere/foo.exe" filename.
Date: Fri, 03 Mar 2000 10:40:10 -0800
From: "andy nguyen" <aknguyen@onebox.com>
Subject: how to lock a branch in P4 (newbie)
I'm totally new to Perforce. Question: how can we lock a branch in
Perforce? I tried to lock it, and P4 came back with a message "locked
0 files". Also there is no online help for lock.
From: "Dowdy, Mark" <mark@ciena.com>
Date: Fri, 3 Mar 2000 11:48:32 -0800
Subject: Meaning of "parents" in Debug Output
Could someone explain to me what the "parents" status means in
the debug output (i.e. "time -- <filename> : parents"). It
seems that files with this designation are later marked "stable"
even if they don't exist (and need to be built). Thanks.
Date: Fri, 3 Mar 2000 14:21:02 -0600 (CST)
Subject: Re: how to lock a branch in P4 (newbie)
If you want to keep anybody from changing it, it's probably better to
do a 'p4 protect' and set the branch to read-only.
online help on lock says:
$ p4 help lock
lock -- Lock an opened file against changelist submission
p4 lock [ -c changelist# ] [ file ... ]
The open files named are locked in the depot, preventing any
user other than the current user on the current client from
submitting changes to the files. If a file is already locked
then the lock request is rejected. If no file names are given
then lock all files currently open in the changelist number given
or in the 'default' changelist if no changelist number is given.
which doesn't sound quite like what you want, I think.
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Date: Sun, 5 Mar 2000 13:30:23 +0100
Subject: Jam & includes by macro
The project I have at hand here uses macros for includes (I know
not very nice but it's old code). The problem is that JAM won't
recognize this and therefore doesn't find all dependencies. Is there
any other solution than replacing all macros with real #include
statements?
Can Jam be configured to recognize such a macro usage?
Date: Sun, 05 Mar 2000 22:56:57 +0000
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: Jam & includes by macro
That depends -- do the header filenames appear when the macros are
used? If so, you might be able to tweak HDRPATTERN to suit your needs.
Check out how HDRPATTERN is used in the built in Jambase.
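For reference, the stock pattern matches only literal #include lines; it
looks roughly like this (quoted from memory of the shipped Jambase, so the
exact regexp may differ in your copy):
HDRPATTERN = "^[ \t]*#[ \t]*include[ \t]*[<\"](.*)[\">].*$" ;
Tweaking it only helps if the header filename actually appears at the
point of use.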
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Subject: RE: Jam & includes by macro
Date: Mon, 6 Mar 2000 09:44:32 +0100
Hi, unfortunately not, it looks like this:
#include MYINCLUDE__H
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Date: Thu, 9 Mar 2000 10:34:21 -0500
Subject: String deconstruction.
Jam is quite capable of constructing strings, but I can't find anything
related to pulling a string apart like strtok().
I would love to be able to create a rule that generates a grist from
an actual directory specification.
Rather than:
makeGrist gristVar : dir1 dir2 dir3 ;
I would like to be able write a function:
myMakeGrist gristVar : /dir1/dir2/dir3 ;
Am I missing something? I've looked all through the documentation and
haven't seen anything related to pulling strings apart.
Date: Thu, 09 Mar 2000 00:13:37 +0100
From: Ullrich Koethe <koethe@informatik.uni-hamburg.de>
Subject: Newbie Q: Automated unit test
I'm trying to build Jam rules which ensure that a package is only built
if a number of unit tests have been executed successfully. However, in
my Jamfiles, the package gets built even if the unit tests fail. What
needs to be changed to make the idea work?
# Jamrules
Depends obj lib exe : unittest ;
NOTFILE unittest ;
rule TestedLibrary {
UnitTest $(<) : $(3) ;
# this rule shouldn't succeed if the UnitTest failed
Library $(<) : $(>) ;
}
rule UnitTest { Depends unittest : $(<) ; }
actions UnitTest { $(>) # produces nonzero exit code upon failure
}
# Jamfile
Main test : test.c ;
TestedLibrary libmod1 : mod1.c : test ;
# Output
UnitTest libmod1
/home/koethe/C++/make/sandbox/src/mod1/test
...failed UnitTest libmod1 ...
Cc /home/koethe/C++/make/sandbox/src/mod1/mod1.o
Archive /home/koethe/C++/make/sandbox/src/mod1/libmod1.a
(The last line shouldn't be there)
Date: Thu, 09 Mar 2000 10:01:22 -0700
From: Lance Johnston <lance@scmlabs.com>
Subject: Re: String deconstruction.
Sorry, there ain't no way. You've encountered one of Jam's biggest limitations.
Date: Thu, 9 Mar 2000 10:27:32 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: String deconstruction.
If you want to split at the directory separator, then you're in luck.
The :P variable expansion modifier gives just enough functionality to
get what you want. Here is what I've written:
# splitDir var : dir ;
#
# Split variable 'dir' containing a filename into its component
# parts and assign it to 'var'.
#
rule splitDir {
$(1) = ;
splitDirImpl $(1) : $(2) ;
}
rule splitDirImpl {
local parent ;
parent = $(2:P) ;
if $(parent) && $(parent) != $(SLASH) && $(parent) != $(2) {
splitDirImpl $(1) : $(parent) ;
$(1) += $(2:BS) ;
} else {
$(1) += $(2) ;
}
}
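Reading through the recursion, on Unix it bottoms out when :P yields
$(SLASH), so a call would go roughly like this (my reading of the code,
not a tested result):
splitDir parts : /dir1/dir2/dir3 ;
# parts should end up holding: /dir1 dir2 dir3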
Date: Thu, 9 Mar 2000 11:08:01 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: Newbie Q: Automated unit test
You need to make the library depend on the unit test. You probably
want to create a fake intermediate target for the unit test and then
have the library depend on it. Something like:
rule TestedLibrary {
local test library ;
test = $(<:S=.unittest) ;
NOTFILE $(test) ;
library = $(<:S=$(SUFLIB)) ;
Depends $(library) : $(test) ;
UnitTest $(test) : $(3) ;
Library $(library) : $(>) ;
}
Since the library depends on the fake "libname.unittest" target, the
library shouldn't get built if the unit test fails.
Date: Thu, 9 Mar 2000 16:54:30 -0800 (PST)
Subject: Re: String deconstruction.
Unless you specify something else for it to use, Jam does generate grist
from directory paths (well, subdirectory paths anyway). If you need it to
contain the full path for some reason, you could always just prepend $TOP to it.
But, you know, grist isn't really meaningful -- it's just purposeful. I
suppose you could try to make it be, but I'm not sure why you'd need to.
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Date: Fri, 10 Mar 2000 10:58:48 -0500
Subject: Is there a bug in Header file dependencies?
I've settled on using fully qualified target names with rooted directories.
Everything seems to be working well, but there is one fly in the ointment
The situation is as follows:
- Jamfile #1 uses the SubInclude rule to include Jamfiles #2 and #3.
- Jamfile #2 uses a custom rule to generate a Header and a C++ file,
based on some text files. This rule also associates the generated
files with the 'files' pseudo target.
- Jamfile #3 builds a library from several static C++ files and the
generated C++ file in Jamfile #2. Some of the static C++ files also
include the Header file generated in Jamfile #2.
If I force Jamfile #2 to regenerate the C++ and header file by touching
one or more of their dependent text files ...
Jamfile #3 sees that the generated C++ file has been updated, so Jam
compiles it and sticks it in the library. For some reason Jam fails to
realize that the static C++ files, which depend on the generated header
file, also need to be recompiled.
If I run Jam a second time it finally clues in and realizes that some of
the static C++ files need to be recompiled.
Why is this happening? I don't want to have to start writing complex header
file dependency trees. The automatic handling of that chore was one of the
major reasons we picked Jam. It also defeats the purpose of SubInclude
and treating all the Jamfiles as one.
I've tried adding the generated files to the 'first' pseudo target instead,
but this doesn't help.
Any suggestion? I'm currently running Jam 2.2.1 on Linux.
Date: Fri, 10 Mar 2000 10:22:55 -0800 (PST)
Subject: Re: Is there a bug in Header file dependencies?
Try running jam at a high enough debug level to get the dependencies shown
(I think they come in at d5...I just always go to d7), and see what shows
up depending on what. Then go from there.
Date: Fri, 10 Mar 2000 22:46:17 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: Is there a bug in Header file dependencies?
My guess is that you're running into a problem caused by fully
qualifying your filenames everywhere. Since you're using fully
qualified target names I'm assuming you've somehow ditched grist.
The header targets found through jam's implicit dependency rule have
no grist or path names attached to them. If your rules are adding
grist or pathnames to the *generated* header file targets, then jam
will not know that the two files are the same.
So you might get this dependency tree:
all
files
/home/jason/src/gen/generated.cc
/home/jason/src/gen/source.txt
/home/jason/src/gen/generated.h
/home/jason/src/gen/source.txt
libs
/home/jason/src/libs/mylib.a
/home/jason/src/libs/generated.o
/home/jason/src/gen/generated.cc
generated.h
/home/jason/src/libs/static.o
/home/jason/src/somewhere/static.c
generated.h
Notice that Jam thinks both generated.cc and static.cc depend on
generated.h not /home/jason/src/gen/generated.h. The two are not the
same to jam. Jam may know that it is updating
/home/jason/src/gen/generated.h but it doesn't then automatically know
that generated.h will be changing too. You have to tell it with:
Depends generated.h : /home/jason/src/gen/generated.h ;
Or in a generic way, assuming the fully qualified header name is in
the "header" variable:
if $(header:BS) != $(header) { Depends $(header:BS) : $(header) ; }
To see if this is what is happening, touch your source .txt files and
run "jam -d+3 -n" and look at what files the generated.cpp and
static.cpp depend on. Pay special attention to the name jam uses for
the generated.h files. If they are fully qualified some places and
just the basename others, there ya go.
[As a general rule, I'd avoid fully qualified pathnames in Jamfiles.
Jam is designed to work best by using just the basenames as target
names. The SEARCH and LOCATE vars can be used on targets to tell jam
where to find or put the files. Grist can be used to differentiate
files of the same name in two different directories.]
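A minimal sketch of that advice (directory names invented for illustration): keep the target name bare and point jam at the directories instead.

```jam
# Find the source here ...
SEARCH on generated.h = $(TOP)/gen ;
# ... and write the built file there.
LOCATE on generated.h = $(TOP)/gen ;
# Grist, not paths, distinguishes same-named files:
# <gen>util.c and <libs>util.c are distinct targets to jam.
```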
From: Karl Klashinsky <klash@cisco.com>
Subject: Re: String deconstruction.
Date: Fri, 10 Mar 2000 12:30:43 -0500
I've attached an alternative approach that we use here. It has the
add'l feature of being able to recognize $(TOP) and omit it from the
gristing. So:
foo/bar/foo.c becomes <foo!bar>foo.c
foo/bar becomes <foo>bar (no file vs dir check)
/abs/path/foo/bar/foo.c becomes <foo!bar>foo.c
(assuming $(TOP) = /abs/path)
From: Karl Klashinsky <klash@cisco.com>
Subject: Re: String deconstruction.
Date: Mon, 13 Mar 2000 13:51:10 -0800
Oops, forgot to include the rules I mentioned. See bottom of this one...
# Assign to the first arg the grist of the dir part of the second
# argument. Typical usage:
# IosGristDir dir_name : $(file) ;
# You get back the !-separated dir components, with $(TOP) stripped
# off.
# Examples:
# foo/bar/foo.c becomes foo!bar!foo.c (no file vs dir check)
# foo/bar becomes foo!bar
# /abs/path/foo/bar becomes foo!bar (assuming $(TOP) = /abs/path)
#
rule IosGristDir {
if ! $(>:D) { # If dir part is empty
$(<) = $(>) ;
} else if $(>:D) = $(TOP) { # If dir part is $(TOP)
$(<) = $(>:BS) ;
} else {
IosGristDir $(<) : $(>:D) ; # Build grist w/ dir and append
$(<) = $($(<))!$(>:BS) ; # !tail
}
}
# Assign to the first arg the gristed name of the file path given by
# the second argument. Typical usage:
# IosGristPath file_name : $(file) ;
# Uses IosGristDir and adds <>foo.c around it to complete the job.
# Examples:
# foo/bar/foo.c becomes <foo!bar>foo.c
# foo/bar becomes <foo>bar (no file vs dir check)
# /abs/path/foo/bar/f becomes <foo!bar>f (assuming $(TOP) = /abs/path)
#
rule IosGristPath {
local exec_d ;
IosGristDir exec_d : $(>:D) ;
$(<) = <$(exec_d)>$(>:BS) ;
}
From: Jason Koloseike <koloseij@home.com>
Date: Thu, 9 Mar 2000 23:38:56 -0500
Subject: Dependency Error with Header files?
Finally settled on using absolute paths to minimize difficulty in dependency
checking, but ....
Everything seemed to work well. But either I'm missing something or there
is a bug in Jam. The situation is as follows:
- Jamfile #0 Includes Jamfile #1 and #2
- Jamfile #1 generates a header and a C++ file whenever one or more
*.str files are modified in a subdirectory. In the rule that
generates the header and C++ file, I've associated these two targets
with the 'files' pseudo target.
- Jamfile #2 sees that the C++ file has been modified (in Jamfile #1) and
builds/rebuilds a library based on it and some static (ungenerated) C++
files.
Now some of the static C++ files (in Jamfile #2) also indirectly include
the generated header file (in Jamfile #1), but for some reason they are
not recompiled.
If I run Jam a second time, it finally realizes that the generated header
file has changed and triggers the C++ files to be recompiled.
Why do I have to run Jam twice? This sort of defeats the purpose of SubInclude.
I've tried adding the generated C++ and header files to the 'first' pseudo
target, but that doesn't help.
From: "Koloseike, Jason" <Jason.Koloseike@Cognos.COM>
Subject: RE: Dependency Error with Header files?
Date: Tue, 14 Mar 2000 13:17:20 -0500
Sorry, I don't know why this is popping up now, but
Matt Armstrong already posted an answer to a similar question of mine.
I guess I need to have a talk with my ISP.
Date: Tue, 14 Mar 2000 16:04:43 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Problems with TEMPORARY
I've run into a situation where jam isn't generating a temporary file
when it should. I have attached a jamfile that exhibits the problem.
I build a dependency tree like this:
child
father
grandfather
mother
It works fine until I mark father as TEMPORARY. Then it seems to work
until I touch the mother file; at that point, jam tries to build child
without father being present.
======================================================================
make -- all
time -- all: missing
make -- child
time -- child: Tue Mar 14 15:46:32 2000
make -- father
time -- father: parents
make -- grandfather
time -- grandfather: Tue Mar 14 15:45:00 2000
made stable grandfather
made stable father
make -- mother
time -- mother: Tue Mar 14 15:54:26 2000
made* newer mother
made+ old child
made+ update all
...found 5 target(s)...
...updating 1 target(s)...
Cat child
cat father mother > child
cat: father: No such file or directory
...failed Cat child ...
...failed updating 1 target(s)...
======================================================================
Jam knows that 'child' depends on 'father' yet it isn't rebuilding it.
Is there some way I can deal with this problem?
The Jamfile I'm using to test is here...
rule Cp { Depends $(<) : $(>) ; }
actions Cp { cp $(>) $(<)}
rule Cat { Depends $(<) : $(>) ; }
actions together Cat { cat $(>) > $(<)}
rule RmTemps { TEMPORARY $(>) ;}
actions quietly updated piecemeal together RmTemps { rm $(>) }
rule MakeChild {
Cp father : grandfather ;
Cat $(<) : mother father ;
RmTemps $(<) : father ;
}
MakeChild child ;
Depends all : child ;
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Tue, 21 Mar 2000 11:26:27 -0800
Subject: Writing a new rule
I'm a new jam user, trying to get my head around this thing. I've been
using Opus Make for a long time and I'm pretty comfortable with that. I've
immersed myself in all the jam info I could find, but it's still a bit
alien to me.
I'm trying to write a new rule that runs a program and touches
"myprogram.run" to signify success. The idea is that we build a test
program, then we run it as part of our build process.
In make, it would look like this:
myprogram.run : myprogram.exe
myprogram
touch myprogram.run
(This could also be written as an inference rule.)
In jam, I tried adapting the Objects/Object rules and ended up with this:
# Add a new "run" target to the "all" target
Depends all : run ;
NOTFILE run ;
# based on the Objects rule
rule Run {
local i s ;
# add grist to filenames
makeGristedName s : $(<) ;
for i in $(s) {
Running $(i:S=.run) : $(i) ;
Depends run : $(i:S=.run) ;
}
}
# based on Object rule
rule Running {
Clean clean $(<) ;
MakeLocate $(<) : $(LOCATE_TARGET) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
}
actions Running {
$(>) ;
touch $(<) ;
}
Run myprogram ;
This actually seemed to work, but with a couple of glitches:
"jam clean" doesn't erase "myprogram.run".
If I put "badcommand" instead of $(>) in actions Running (to force a failed
return code), it still touches the .run file. This defeats the point of
the .run file.
Overall, I'm not sure I really understand how the rules and actions work
together, why some rules have no action and vice versa.
Could somebody explain what I've done wrong and what I've done right?
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Tue, 21 Mar 2000 15:37:00 -0800
Subject: Re: Writing a new rule
Right, it's similar to the "keep going" option in some make programs.
Maybe I can use the "updated" attribute to only run tests on new .exes that
have been built, but I still think I need some way of knowing that tests
from previous runs have been successful.
Subject: Re: Writing a new rule
Also, Jam does not quit if a target fails. It just does not try to make
anything that depends upon that target.
Well, you can think of it differently. More directly: you need to run the
test whenever the .exe is built, so you don't need a .run file to figure
out whether the .exe is going to be updated. That's the theory (TM), and
I'm pretty sure that's right.
Now, if running the test is dependent upon if the test failed last time
and/or the .exe, then you'd need to express that in the dependencies, and
a success file would be good.
Date: 22 Mar 2000 10:29:10 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: Re: Writing a new rule
How did you get the clean to work?
Replying to problems like this in public helps others see what to do in the future.
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Wed, 22 Mar 2000 06:40:33 -0800
Subject: Re: Writing a new rule
Sorry, I forgot to "reply all". I thought the later quoting included the context.
I was missing the colon after the second "clean".
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Mon, 27 Mar 2000 08:00:04 -0800
Subject: Order of execution for rules and actions
In writing my own rules, I'm running into trouble because I don't have a
solid feeling for what order things happen in. And I'm mostly falling into
make-style thinking which is getting me into trouble. What I need is some
grounding in Jam theory.
For example, in Jambase, the Link rule calls the Chmod action, and there is
also a Link action (but no Chmod rule). Somehow, this causes the Chmod
action to happen after the Link action, but I'm not sure why.
Is there a document somewhere that explains more of this sort of thing?
(I've been through all the stuff on the jam web page, but I may have missed
it.) It would be especially interesting to see a side-by-side chart that
contrasts make and jam and points out the conceptual differences. Thanks.
Date: Mon, 27 Mar 2000 09:56:27 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: Order of execution for rules and actions
I've found that many of the finer points like this are not clearly
documented. I frequently find myself writing little test Jamfiles to
test out theories about how Jam really works. Sometimes I look
through the source.
This is the best overall reference, though admittedly somewhat out of
date: http://www.perforce.com/jam/doc/jam.paper.html.
I usually stay out of trouble when I realize that jam runs in a few
distinct passes:
1) Parsing and rule execution. This is when jam parses all the
jamfiles, executing the rules as it goes. The key to remember here is
that all the Jamfiles are basically concatenated together into a big
global namespace as they are parsed. The product of this phase is a
dependency tree and maybe some output if you use the ECHO rule anywhere.
2) Binding. This is when jam binds target names to actual files,
checks modification times and decides what actions to run. If a target
has HDRRULE set on it, that rule is executed and the dependency tree
is possibly updated by that rule.
3) Actions are run.
I'm fuzzy on what really happens during phases 2 and 3, which not
surprisingly plays into why your example above is confusing.
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Wed, 29 Mar 2000 08:07:51 -0800
Subject: Re: Order of execution for rules and actions
I found an answer about the Link/Chmod order in
http://public.perforce.com/public/jam/src/Jamlang.html:
"When a rule is invoked, its action definition, if any, is automatically
the first updating action to be associated with targets. Any other actions
invoked from a rule's procedure definition statements will be executing
during updating in the order in which they were invoked."
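The same ordering can be seen in a stripped-down pair of actions (names invented): the First rule's own action is attached first, and the Second action invoked from its body runs afterwards, mirroring the Link-then-Chmod ordering in Jambase.

```jam
rule First { Second $(<) ; }
# During updating, "first" is echoed before "second".
actions First { echo first $(<) }
actions Second { echo second $(<) }
```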
Meanwhile, I've found something that seems to work for my "run" rule. In
Opus Make, it would look like this:
%.run : %.exe
$(.SOURCE)
touch $(.TARGET)
If the exe failed, the target wouldn't be touched and Opus would notice the failure.
As I learned earlier, Jam runs all the actions at once and only sees the
last return code. So I headed down the path of trying to have two separate
rules/actions: one to set a pseudotarget that depends on the run, and the
other to create the .run flag based on the pseudotarget. I kept getting
tangled trying to line up all the dependencies.
So I went back to the idea that I was still really only trying to create
one target (the .run file), but it took two actual commands to do it: first
the .exe itself, then a touch command that required the .exe to be
successful. So I put both commands in the same actions definition, with
some logic to check the return codes, and a method of sending back a bad
return code if either of the commands failed (see below).
Of course I'll need a different action implementation for unix. Does this
look basically correct and proper? Did I violate any fundamentals of good
jam design? Is there a cleaner way to do it? Thanks for your help.
##################################
# Custom rules
# add a new pseudotarget for running the test programs
Depends all : run ;
NOTFILE run ;
# Our version of main that builds and tests the program
rule Ourmain {
Main $(<) : $(>) ;
Testmain $(<) ;
}
rule Testmain {
# define the .run file and set the location for it.
local runfile ;
runfile = $(<:S=.run) ;
MakeLocate $(runfile) : $(LOCATE_TARGET) ;
# set up the dependencies
Depends $(runfile) : $(<) ;
Depends run : $(runfile) ;
# set up the clean rule
Clean clean : $(runfile) ;
# set up the action
Runmain $(runfile) : $(<) ;
}
# check return code after each command
# we have to make sure the last thing we run is a "bad command" to indicate failure
actions Runmain {
$(>) ;
if errorlevel 1 goto complain ;
touch $(<) ;
if errorlevel 1 goto complain ;
goto end ;
:complain ;
bad.returncode.code ;
:end ;
}
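For the different Unix implementation mentioned above, a sketch relying on the shell's && short-circuit, so touch only runs if the test succeeded:

```jam
# Unix counterpart: touch the .run flag only if the test exits 0.
actions Runmain {
	$(>) && touch $(<)
}
```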
#################################
# What to build
# directory to put targets in
LOCATE_TARGET = bin ;
# we don't need Jam's libs
LINKLIBS = ;
# all our flags
C++FLAGS = /J /MLd /W3 /GX /Z7 /Od /DWIN32 /D_DEBUG /D_CONSOLE /YX ;
Library fund :
root.cpp hashcsr.cpp can.cpp array.cpp track.cpp buffer.cpp
bbuffer.cpp
hash.cpp hashfifo.cpp sarray.cpp dstring.cpp qstring.cpp queue.cpp
table.cpp outofmem.cpp bitarray.cpp fmttbl.cpp date.cpp
timeobj.cpp allocate.cpp xmlite.cpp ;
Ourmain ta : ta.cpp fundmain.cpp ;
LinkLibraries ta : fund ;
#################################
Date: Wed, 29 Mar 2000 09:23:18 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: Order of execution for rules and actions
That isn't the case. Jam stops trying to build a target as soon as
one action fails. Try this Jamfile; it'll never get to the Print action.
rule MakeIt {
InduceError $(<) ;
Print $(<) ;
}
actions InduceError { exit 1 }
actions Print { echo Got to print rule: $(<) }
MakeIt a ;
Depends all : a ;
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Wed, 29 Mar 2000 09:06:02 -0800
Subject: Re: Order of execution for rules and actions
What I meant was if you have more than one line in an actions definition,
jam will run all the lines even if one of them fails. For example:
rule MakeIt { InduceError $(<) ; }
actions InduceError {
badcommand
echo Got past the command: $(<)
}
MakeIt a ;
Depends all : a ;
This is different from make's shell lines that will stop on the first
nonzero return code.
Meanwhile, I think I've finally figured out a more elegant way to take
advantage of the action sequence without having to resort to shell logic.
The Testmain-Runmain-Touch setup is very similar to the
MainFromObjects-Link-Chmod setup:
##################################
# Custom rules
Depends all : run ;
NOTFILE run ;
rule Ourmain {
Main $(<) : $(>) ;
Testmain $(<) ;
}
rule Testmain {
local runfile ;
runfile = $(<:S=.run) ;
MakeLocate $(runfile) : $(LOCATE_TARGET) ;
Depends $(runfile) : $(<) ;
Depends run : $(runfile) ;
Clean clean : $(runfile) ;
Runmain $(runfile) : $(<) ;
}
rule Runmain { Touch $(<) ; }
actions Runmain { $(>) ; }
actions Touch { touch $(<) ; }
#################################
# What to build
# [...]
Ourmain ta : ta.cpp fundmain.cpp ;
#################################
Subject: Re: Order of execution for rules and actions
From: "Mark D. Baushke" <mark.baushke@solipsa.com>
Date: Mon, 27 Mar 2000 09:22:11 -0800
All rules are processed and then all actions are processed that are
required to satisfy building the target specified by the jam command.
(There are a few odd cases like the header parsing rules which can do
some action-like work during rules processing, but no files are
getting written during the rules processing phase at all.)
A way to think of how jam works is that the rules processing phase
just builds the dependency tree and associates the actions needed to
build the various targets. After it has determined how to build
everything, it will then go through to build the target you specified
for it to build and any required actions to build the other targets
that are considered part of the final target.
If you type 'jam files', then jam goes through all of the jam rules,
finds any targets associated with the 'files' target, and when it has
completed the rules processing it goes to the leaves and starts building
the prerequisite targets, finally finishing with the 'files' target.
One of the more difficult things for jam to do is to have one action
generate multiple targets. It much prefers to have a single action
have a single result. It also does not like to have a circular
dependency. If you code one, then you will likely always get jam doing
some work if you rerun it after it has completed once.
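A toy Jamfile makes the two phases visible (target name invented): the ECHO runs while the Jamfiles are being read, before any updating action fires.

```jam
# Phase 1 (rules): this prints as soon as the line is parsed.
ECHO now building the dependency tree ;

rule Make { Depends all : $(<) ; }
# Phase 2/3 (binding + updating): this runs later, and only if
# jam decides the target needs building.
actions Make { echo updating $(<) }

Make hello ;
```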
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Order of execution for rules and actions
Date: Wed, 29 Mar 2000 10:42:43 -0800
Would somebody like to try:
actions InduceError { badcommand && echo Got past the command: $(<) }
Seems to work on NT; not sure about Win98/95. Definitely works in ksh.
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Thu, 30 Mar 2000 13:55:19 -0800
Subject: Compiling more than once
We have several test drivers that all link to the same obj. When I build
them with jam, the obj gets recompiled each time. Why is that?
Here's the Jamfile:
SubDir TOP ;
Library fund :
root.cpp hashcsr.cpp can.cpp array.cpp track.cpp buffer.cpp
bbuffer.cpp
hash.cpp hashfifo.cpp sarray.cpp dstring.cpp qstring.cpp queue.cpp
table.cpp outofmem.cpp bitarray.cpp fmttbl.cpp date.cpp
timeobj.cpp allocate.cpp xmlite.cpp ;
Ourmain ta : ta.cpp fundmain.cpp ;
Main easyparm : easyparm.cpp fundmain.cpp ;
Ourmain tqueue : tqueue.cpp fundmain.cpp ;
Ourmain thash : thash.cpp fundmain.cpp ;
Ourmain tbuff : tbuff.cpp fundmain.cpp ;
Ourmain talloc : talloc.cpp fundmain.cpp ;
Ourmain ttable : ttable.cpp fundmain.cpp ;
Ourmain tsarray : tsarray.cpp fundmain.cpp ;
LinkLibraries ta easyparm tqueue thash tbuff talloc ttable tsarray : fund ;
Here's the output after touching fundmain.cpp:
C:\disposable\fund>jam exe
...found 133 target(s)...
...updating 9 target(s)...
C++ bin\fundmain.obj
fundmain.cpp
C++ bin\fundmain.obj
fundmain.cpp
C++ bin\fundmain.obj
fundmain.cpp
C++ bin\fundmain.obj
fundmain.cpp
C++ bin\fundmain.obj
fundmain.cpp
C++ bin\fundmain.obj
fundmain.cpp
C++ bin\fundmain.obj
fundmain.cpp
C++ bin\fundmain.obj
fundmain.cpp
Link bin\ta.exe
Chmod bin\ta.exe
Link bin\easyparm.exe
Chmod bin\easyparm.exe
Link bin\tqueue.exe
Chmod bin\tqueue.exe
Link bin\thash.exe
Chmod bin\thash.exe
Link bin\tbuff.exe
Chmod bin\tbuff.exe
Link bin\talloc.exe
Chmod bin\talloc.exe
Link bin\ttable.exe
Chmod bin\ttable.exe
Link bin\tsarray.exe
Chmod bin\tsarray.exe
...updated 9 target(s)...
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Thu, 30 Mar 2000 13:57:02 -0800
Subject: How to put a dependency on the jamrules and jamfiles
If I change a jamfile, I would like everything referenced in that jamfile to be rebuilt.
If I change jamrules, I would like everything rebuilt.
Is there a straightforward way to do that?
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Thu, 30 Mar 2000 14:16:25 -0800
Subject: Re: Compiling more than once
Another twist: When I use -d2, I see that the C++FLAGS are specified 8
times on each compile. I suspect the two problems are related.
If I put grist on each fundmain.cpp, then the multiple compile flags go
away, and the order of the compile/link is changed. It now does C++, Link,
Chmod for each, instead of doing 8 C++ first. That's slightly better, but
still gives me 8 compiles instead of one. Is there a proper way to compile
an obj once and link it several times?
Here's the gristed version:
SubDir TOP ;
Library fund :
root.cpp hashcsr.cpp can.cpp array.cpp track.cpp buffer.cpp
bbuffer.cpp
hash.cpp hashfifo.cpp sarray.cpp dstring.cpp qstring.cpp queue.cpp
table.cpp outofmem.cpp bitarray.cpp fmttbl.cpp date.cpp
timeobj.cpp allocate.cpp xmlite.cpp ;
Ourmain ta : ta.cpp <ta>fundmain.cpp ;
Main easyparm : easyparm.cpp <easyparm>fundmain.cpp ;
Ourmain tqueue : tqueue.cpp <tqueue>fundmain.cpp ;
Ourmain thash : thash.cpp <thash>fundmain.cpp ;
Ourmain tbuff : tbuff.cpp <tbuff>fundmain.cpp ;
Ourmain talloc : talloc.cpp <talloc>fundmain.cpp ;
Ourmain ttable : ttable.cpp <ttable>fundmain.cpp ;
Ourmain tsarray : tsarray.cpp <tsarray>fundmain.cpp ;
LinkLibraries ta easyparm tqueue thash tbuff talloc ttable tsarray : fund ;
Date: Thu, 30 Mar 2000 15:23:38 -0800
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: How to put a dependency on the jamrules and jamfiles
Not with the stock Jambase, but you might use "jam -a" to make sure
everything is getting rebuilt correctly.
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Thu, 30 Mar 2000 15:06:45 -0800
Subject: Re: How to put a dependency on the jamrules and jamfiles
Yes, I could always use jam -a, but what I really want is to have jam
figure out that the jamfile or jamrules are new and automatically do the -a
(or subset) for me.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: How to put a dependency on the jamrules and jamfiles
Date: Thu, 30 Mar 2000 16:23:44 -0800
You could simply add the dependency to your "Main" rule.
The Jamfile and/or Jamrules can be depended upon just like any other target in jam.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Compiling more than once
Date: Thu, 30 Mar 2000 16:59:30 -0800
The way that Jam works: if you have a rule and action like
rule DoIt { }
actions DoIt { echo $(<) ; }
then each time a Jamfile calls the DoIt rule, Jam
adds a "pending action" to the target.
So, calling
DoIt A ;
three times adds the DoIt action to the target "A" three times.
What you need to do is add protection to the rule ...
rule DoIt {
if ! $($(<)-done) {
$(<)-done = 1 ;
ReallyDoIt $(<) ;
}
}
actions ReallyDoIt {
echo $(<) ;
}
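With the guard in place, repeated invocations attach the action only once, e.g.:

```jam
DoIt A ;  # first call: sets A-done and attaches ReallyDoIt
DoIt A ;  # subsequent calls: the guard is set, nothing is added
DoIt A ;
```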
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Fri, 31 Mar 2000 14:44:57 -0800
Subject: foo.exe depends on itself
I ran into what seems to be a quirk:
If I try to build an executable called "foo" in a directory also called
"foo" I get:
warning: foo.exe depends on itself
Looking at a d3 trace, I see something like the first trace below.
I believe what's happening is that jam can't distinguish between the
directory "foo" and the pseudotarget "foo" that gets created by the
MainFromObjects rule.
If I use a different directory name (like bar), I get the second trace.
Is there a way to help jam distinguish a directory name from a pseudotarget?
foo.exe in foo directory:
make -- exe
time -- exe: unbound
make -- first
make -- foo.exe
bind -- foo.exe: foo\bin\foo.exe
time -- foo.exe: missing
make -- <foo>foo.obj
bind -- <foo>foo.obj: foo\bin\foo.obj
time -- <foo>foo.obj: Fri Mar 31 15:15:11 2000
make -- foo\bin
time -- foo\bin: Fri Mar 31 15:15:11 2000
make -- foo
time -- foo: unbound
make -- foo.exe
warning: foo.exe depends on itself
made stable foo
made stable foo\bin
make -- <foo>foo.cpp
bind -- <foo>foo.cpp: foo\foo.cpp
time -- <foo>foo.cpp: Fri Mar 31 15:05:28 2000
foo.exe in bar directory:
make -- exe
time -- exe: unbound
make -- first
make -- foo.exe
bind -- foo.exe: bar\bin\foo.exe
time -- foo.exe: missing
make -- <bar>foo.obj
bind -- <bar>foo.obj: bar\bin\foo.obj
time -- <bar>foo.obj: Fri Mar 31 15:23:01 2000
make -- bar\bin
time -- bar\bin: Fri Mar 31 15:23:01 2000
make -- bar
time -- bar: Fri Mar 31 15:22:50 2000
made stable bar
made stable bar\bin
make -- <bar>foo.cpp
bind -- <bar>foo.cpp: bar\foo.cpp
time -- <bar>foo.cpp: Fri Mar 31 15:22:28 2000
Date: Fri, 31 Mar 2000 17:56:35 -0600 (CST)
Subject: Re: foo.exe depends on itself
I considered setting grist on directories, adding a special string like
_dir_ to make the target distinct from others. This seemed to fix
the problem, but then we hit another problem whose cause I never
determined; I got sidetracked.
Date: Fri, 31 Mar 2000 16:15:52 -0800 (PST)
Subject: Re: foo.exe depends on itself
I'm confused. How many things are named "foo"? ... your target (which in
the end becomes "foo.exe"), your directory, and some pseudo-target?
In any case, I've just tried several scenarios (including having a
pseudo-target named "foo"), and all of them worked just fine, so there's a
problem with your rule(s) somewhere.
Whenever you have dependency problems, though, you should run jam at a
high enough debug level to actually see the Depends.
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Fri, 31 Mar 2000 15:29:47 -0800
Subject: Re: foo.exe depends on itself
I have a foo directory, a foo.exe, and a foo.cpp to build it with. (I
don't think the cpp matters.)
Here's what I get if I filter out the Depends for "foo". (Nothing unusual there.)
I think the key is "Subdir TOP foo ;" and running jam from the directory
above. And TOP is not set.
Date: Fri, 31 Mar 2000 17:06:07 -0800 (PST)
Subject: Re: foo.exe depends on itself
If that's the case then where does this "foo" come from:
Somewhere along the line, you've got a "foo" depending on "foo.exe".
I've tried several different ways to get it to break for me, and so far --
even when with a Depends foo : foo.exe -- it still works fine. Since I'm
not able to reproduce the error you're getting, I can't say for sure where
it might be, but it's undoubtedly somewhere in your own rule(s).
No, if it was a TOP-not-set problem, you'd get:
Top level of source tree has not been set with TOP
(BTW: I'm assuming you meant SubDir and just typed it wrong here.)
Date: Fri, 31 Mar 2000 22:44:02 -0600 (CST)
Subject: Re: foo.exe depends on itself
Here's what I remember from what I think was a similar problem.
Our situation was quite similar, we created an executable, and at some
point, we decided to name the directory the same name as the executable.
I believe the main rule looks something like this:
Main foo : foo.cpp slag.cpp etc... ;
it creates a chain of dependencies with "foo" in it. At some point,
it does a MakeLocate on the executable to be produced. I noticed in
the example it was foo/bin. This does a MkDir on foo/bin, which
proceeds to make every dir from top on down to bin. One of these
is dir "foo". The rule then finds itself dealing with a target which
already has stuff going on, and things get a bit mixed up from there.
You will find that if you set the location for the foo executable to
be in a path without the foo directory as part of it, then the problem
will go away. (that's the LOCATE macro, isn't it?)
Date: 2 Apr 2000 16:53:06 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: Re: foo.exe depends on itself
SubDir TOP ;
SubInclude TOP foo ;
SubDir TOP foo ;
Main foo : foo.c ;
The above two Jamfiles will cause the same error you witnessed, right?
I ran into this exact problem and i solved it by changing the Main line to:
Main <foo>foo : foo.c ;
All seemed to work fine then, even when running jam from TOP directory.
Date: Mon, 3 Apr 2000 14:03:15 -0700 (PDT)
Subject: Re: foo.exe depends on itself
$ cat Jamfile
SubDir TOP ;
SubInclude TOP foo ;
$ cat foo/Jamfile
SubDir TOP foo ;
Main foo : foo.c ;
$ jam
...found 17 target(s)...
...updating 2 target(s)...
Cc /temp/foo/foo.o
Link /temp/foo/foo
Chmod /temp/foo/foo
...updated 2 target(s)...
I've tried everything I could think of, including pseudo-targets named
"foo", having explicit dependencies of "foo" to "foo.exe", having the
target "foo" build into TOP/foo/bin, etc., and I can't reproduce what you
guys (I think it's up to 3 now, right?) have seen happen. I can't remember
ever having seen Jam confuse directories and files, and since I can't get
it to do it even by trying to, I still have to strongly suspect it's
something in the rules you're using.
Date: Mon, 3 Apr 2000 16:17:43 -0500 (CDT)
Subject: Re: foo.exe depends on itself
check your rules for MkDir, Laura may have put this into
the source a while back: (from feb 99)
How about if you hack the MkDir rule to grist *its* targets? So
that instead of building "axe" it builds "<_dir>axe"? Try these
changes in the MkDir rule:
929c929
< s = $(<:P) ;
---
> s = $(<:PG=_dir) ;
933c933
< switch $(s)
---
> switch $(s:G=)
Let me know if it works okay. (You don't have a directory called
"_dir" do you?)
If this fix isn't a problem for you (or anyone else), I'll put it
in the public depot source.
Date: 3 Apr 2000 21:20:40 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: Re: foo.exe depends on itself
here's my transcript:
% ls -lR
.:
total 8
-rw-r--r-- 1 nirva users 34 Apr 3 16:08 Jamfile
drwxr-xr-x 2 nirva users 4096 Apr 3 16:09 foo/
foo:
total 8
-rw-r--r-- 1 nirva users 36 Apr 3 16:08 Jamfile
-rw-r--r-- 1 nirva users 11 Apr 3 16:08 foo.c
% cat Jamfile
SubDir TOP ;
SubInclude TOP foo ;
% cat foo/Jamfile
SubDir TOP foo ;
Main foo : foo.c ;
% cat foo/foo.c
main() { }
% jam
Jamrules: No such file or directory
warning: foo depends on itself
...found 10 target(s)...
...updating 2 target(s)...
Cc foo/foo.o
MkDir1 foo/foo
Link foo/foo
/usr/bin/ld: cannot open output file foo/foo: Is a directory
collect2: ld returned 1 exit status
cc -o foo/foo foo/foo.o
...failed Link foo/foo ...
...failed updating 1 target(s)...
...updated 1 target(s)...
% jam -v
Jam/MR Version 2.2.5. Copyright 1993, 1999 Christopher Seiwald.
exact same setup as yours. The only difference is that I don't set the TOP
environment variable, and you probably do. If I set TOP to pwd, then it
works fine. I get around this issue by adding grists to the foo executable
target. Jam is able to determine which directory to look in for Jamrules
correctly without setting TOP, so I think this is a jam bug.
Date: Mon, 3 Apr 2000 16:58:52 -0700 (PDT)
Subject: Re: foo.exe depends on itself
Okay, here's the poop:
- If you have a "SubDir TOP ;" in your top-level Jamfile (which
I went ahead and included in mine last time, so I'd match what
you had, but which I wouldn't ordinarily have had, because you
don't actually need it, and in fact I'd consider it wrong for
it to be there, since TOP's not a subdirectory), and
- You don't have TOP set, and
- You run 'jam' from the top-level directory, then
- Instead of getting the "Top level of source tree has not been set
with TOP" error, you'll get the "warning: foo depends on itself" error.
So, the correct solution is to (choose one):
- Delete the "SubDir TOP ;" from your top-level Jamfile, or
- Make sure you have TOP set if you're using "Sub{Dir,Include}"
rules (I'm not sure why you haven't been having it set), or
- Put the same test for TOP-being-set into the SubDir rule that
the SubInclude rule uses:
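The quoted bail-out from SubInclude gives the shape of that check; here is a
minimal sketch (the exact rule body is my assumption, modeled on the error
text quoted earlier in this thread):

    rule SubDir
    {
        # bail out early, as SubInclude does, if TOP is not set
        if ! $(TOP)
        {
            Exit Top level of source tree has not been set with TOP ;
        }
        # ... rest of the stock SubDir rule unchanged ...
    }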
Date: 4 Apr 2000 00:18:02 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: Re: foo.exe depends on itself
I'm not understanding this.. how can I have this work without having to set
the TOP environment variable?
All your solutions seem to require TOP to be set. They also seem to just
be ways to make sure that TOP is set in the environment.
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Mon, 3 Apr 2000 17:18:38 -0700
Subject: Re: foo.exe depends on itself
Why is TOP required to be set? According to
http://public.perforce.com/public/jam/src/Jamfile.html,
"When you have set a root variable, e.g., $(TOP), SubDir constructs path
names rooted with $(TOP), e.g., $(TOP)/src/util. Otherwise, SubDir
constructs relative pathnames to the root directory, computed from the
number of arguments to the first SubDir rule, e.g., ../../src/util. In
either case, the SubDir rule constructs the path names that locate source files."
Date: Mon, 3 Apr 2000 18:55:11 -0700 (PDT)
Subject: Re: foo.exe depends on itself
Fine -- SubDir doesn't require TOP be set. I stand corrected. But
SubInclude does -- there's a bail-out in it for exactly that.
So I should have said: If you're going to use the SubInclude rule, then
you need to have TOP set. If you don't want to have
TOP set, then don't use SubInclude -- go ahead and use SubDir in your
top-level Jamfile so that TOP gets set for you, then just use regular include(s).
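A minimal sketch of that arrangement (the foo directory is the example from
this thread; passing the full path to a plain "include" is my assumption):

    # top-level Jamfile
    SubDir TOP ;                   # sets TOP (to a relative path) for you
    include $(TOP)/foo/Jamfile ;   # plain include instead of SubInclude TOP foo ;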
Date: 4 Apr 2000 05:18:21 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: Re: being forced to set TOP outside jam
I don't get it.. if the first thing my toplevel Jamfile does is
SubDir TOP ;
then what is the difference between that and setting the TOP env var? The first
thing the SubDir command does is set TOP if TOP isn't already set.
Setting the env TOP outside of jam is horrible if you have many many trees
you might work with. There has to be a way to get this to work.
I don't understand what using include instead of SubInclude buys you.
SubInclude just does the include after concatenating the dir names, right?
Date: Tue, 04 Apr 2000 09:57:19 +0200
From: Igor Boukanov <igor.boukanov@fi.uib.no>
Subject: Re: being forced to set TOP outside jam
Instead I suggest using the simple
TOP = $(DOT) ;
I had some problems with "SubDir TOP ;" once and did not have time to
find out where things go bad in SubDir, but then I changed it to "TOP
= $(DOT) ;" and the problems went away.
Date: 4 Apr 2000 14:45:02 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: Re: being forced to set TOP outside jam
This can't possibly work if you don't jam from the TOP.
Date: 4 Apr 2000 16:13:02 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: Re: being forced to set TOP outside jam
Things are not fine! If you have a directory in your tree named foo, and
inside it you have a Jamfile with Main foo : foo.c ;, you will have errors
unless you set the TOP env var.
This discussion is about getting around the setting of TOP, and getting
rid of that error without adding a grist.
Date: Tue, 4 Apr 2000 11:09:48 -0500 (CDT)
Subject: Re: being forced to set TOP outside jam
I find this discussion very confusing. In our system, the top level Jamfile
does a "SubDir TOP ;" and we never explicitly set TOP, jam sets it
to a relative value. This works out much better than having to set it.
All the lower-level jamfiles do a SubInclude etc. and things are fine.
Date: Tue, 4 Apr 2000 11:54:48 -0500 (CDT)
Subject: Re: being forced to set TOP outside jam
I think you are better off using the grist than setting TOP explicitly. Better
still is to find out exactly why it is happening; a -d5 run usually gives all the
info you need. I know that's easier said than done.
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Tue, 4 Apr 2000 19:23:21 +0100
Subject: Re: foo.exe depends on itself
Excuse me if I'm wrong, but I think I found something
about a "foo" pseudo-target.
The MainFromObject rule says:
makeSuffixed t $(SUFEXE) : $(<) ;
if $(t) != $(<) {
Depends $(<) : $(t) ;
NOTFILE $(<) ;
}
In your example, ( Main foo : foo.cpp ; )
$(<) is "foo"
$(t) is "foo.exe".
To remove the pseudo-target, I suggest calling the Main rule with the suffixed file:
Main foo$(SUFEXE) : foo.cpp ;
I guess this "foo" pseudo-target can be useful when calling "jam foo"
from the command line.
For my part, I never call Main with the unsuffixed name
because I almost always have a line such as
LINKFLAGS on foo.exe = ... ;
where the suffixed target name is required.
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Tue, 4 Apr 2000 16:35:54 -0700
Subject: Jam on Win9x?
Is the NT version of Jam meant to run on Windows 9x? I got a bunch of
"invalid switch" errors, and Jam didn't seem to realize when a command
failed. It also doesn't seem to exit cleanly but instead locks up the
command prompt.
Date: Wed, 05 Apr 2000 13:02:38 +0200
From: Igor Boukanov <igor.boukanov@fi.uib.no>
Subject: Re: being forced to set TOP outside jam
I use actually in top level Jamfile (not Jamrules!):
if !$(TOP) { TOP = $(DOT) ; }
I had to do it due to problems with "SubDir TOP ;" with a jam/jambase
port tailored for GCC on Windows.
Of course this does not work if you include your top-level Jamfile from
some subdirectory before the SubDir declaration, but then "SubDir TOP ;"
would not work either.
From: "Hoff, Todd" <Todd.Hoff@ciena.com>
Date: Tue, 11 Apr 2000 16:18:07 -0700
Subject: targets mysteriously not getting built
We're trying to recreate our build environment in a remote location.
For some reason it's not working even though everything is the "same".
Jam looks like it is going to build everything and then it just stops
after building a couple files.
It looks like jam is just stopping and it's not saying anything.
The -d7 debug didn't yield anything obvious.
Does anyone have any ideas what could be happening? I've included some
output from the build log.
jam_cmd changing directory to Z:\allbuilds\build_main_2000-04-11_txn.
System Command=->Z:\allbuilds\build_main_2000-04-11_txn\bin\Jam.exe -d1
vx-ppc-rel 2>&1<-...patience...
...patience......patience......patience......patience......patience...
...patience......found 7052 target(s)......updating 2359 target(s)...
MkDir1 Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc
if not exist
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\nul mkdir
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc
MkDir1 Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release
if not exist
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release\nul
mkdir Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release
C++_gnu
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release\Actor.o
Z:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\host\x86-win32\bin\ccppc
-c -mcpu=603e -mstrict-align
-BZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\host\x86-win32\lib\gcc-
lib\ -ansi -nostdinc -DRW_MULTI_THREAD -D_REENTRANT -fvolatile -fno-builtin
-fno-defer-pop -Wall -DCPU=PPC603 -DVX -DRWDEBUG -O
-IZ:\allbuilds\build_main_2000-04-11_txn\component
-IZ:\allbuilds\build_main_2000-04-11_txn\txn
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\component
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\cm
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\lm
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\tm
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\config\all
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\h
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\src\config
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\src\drv
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\h\arch\ppc
-IZ:\allbuilds\build_main_2000-04-11_txn\component\Actor -o
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release\Actor.
o Z:\allbuilds\build_main_2000-04-11_txn\component\Actor\Actor.cpp
Hamilton C shell(tm) Release 2.3.b
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release\OpDispatcher.o
Z:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\host\x86-win32\bin\ccppc
-c -mcpu=603e -mstrict-align
-BZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\host\x86-win32\lib\gcc-
lib\ -ansi -nostdinc -DRW_MULTI_THREAD -D_REENTRANT -fvolatile -fno-builtin
-fno-defer-pop -Wall -DCPU=PPC603 -DVX -DRWDEBUG -O
-IZ:\allbuilds\build_main_2000-04-11_txn\component
-IZ:\allbuilds\build_main_2000-04-11_txn\txn
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\component
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\cm
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\lm
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\tm
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\config\all
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\h
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\src\config
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\src\drv
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\h\arch\ppc
-IZ:\allbuilds\build_main_2000-04-11_txn\component\Actor -o
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release\OpDisp
atcher.o
Z:\allbuilds\build_main_2000-04-11_txn\component\Actor\OpDispatcher.cpp
Hamilton C shell(tm) Release 2.3.b
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release\OpRunner.o
Z:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\host\x86-win32\bin\ccppc
-c -mcpu=603e -mstrict-align
-BZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\host\x86-win32\lib\gcc-
lib\ -ansi -nostdinc -DRW_MULTI_THREAD -D_REENTRANT -fvolatile -fno-builtin
-fno-defer-pop -Wall -DCPU=PPC603 -DVX -DRWDEBUG -O
-IZ:\allbuilds\build_main_2000-04-11_txn\component
-IZ:\allbuilds\build_main_2000-04-11_txn\txn
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\component
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\cm
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\lm
-IZ:\allbuilds\build_main_2000-04-11_txn\txn\tm
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\config\all
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\h
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\src\config
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\src\drv
-IZ:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\target\h\arch\ppc
-IZ:\allbuilds\build_main_2000-04-11_txn\component\Actor -o
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release\OpRunn
er.o Z:\allbuilds\build_main_2000-04-11_txn\component\Actor\OpRunner.cpp
Hamilton C shell(tm) Release 2.3.b
Archive_gnu
Z:\allbuilds\build_main_2000-04-11_txn\build\lib\lib_Actor_vx-ppc_rel.a
Z:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\host\x86-win32\bin\arppc
-d Z:\allbuilds\build_main_2000-04-11_txn\build\lib\lib_Actor_vx-ppc_rel.a
Actor.o OpDispatcher.o OpRunner.o
Z:\allbuilds\build_main_2000-04-11_txn\Tornado\Ppc\host\x86-win32\bin\arppc
-q Z:\allbuilds\build_main_2000-04-11_txn\build\lib\lib_Actor_vx-ppc_rel.a
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release\Actor.
o
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release\OpDisp
atcher.o
Z:\allbuilds\build_main_2000-04-11_txn\build\obj\Actor\vx-ppc\release\OpRunn
er.o
Done target.Done BuildActor=
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Wed, 12 Apr 2000 09:31:14 +0100
Subject: Re: targets mysteriously not getting built
Yes, jam usually ends with messages like "...updated xxx targets...",
and a message is printed on every call to the exit() function (I checked this).
Did you look at the return code of the Jam process?
It may show a reason why the program stopped.
Date: Thu, 13 Apr 2000 18:08:38 -0500 (CDT)
Subject: INCLUDES
Well, maybe I'm brain-dead, but I don't understand what the
INCLUDES rule does/means.
It says:
INCLUDES targets1 : targets2 ;
makes targets2 dependencies of anything of which
targets1 are dependencies.
so I think, if:
Depends A : B ;
Depends A : C ;
INCLUDES A : F ;
means that
Depends F : B ;
Depends F : C ;
Date: Fri, 14 Apr 2000 17:09:56 -0500 (CDT)
Subject: waif child found!
What does this mean:
...patience...
...found 3641 target(s)...
...updating 228 target(s)...
vC++ ./debug-solaris/config/client/menus.o
waif child found!
Compilation exited abnormally with code 1 at Fri Apr 14 16:59:40
Date: Sat, 29 Apr 2000 19:47:29 -0500 (CDT)
From: Nikolas Kauer <kauer@pheno.physics.wisc.edu>
Subject: interprocedure optimizations
I would like to facilitate more interprocedure optimizations by
compiling several source files in one compiler call. I've been
using jam for a while essentially compiling my program like this:
cxx -c -O src1.cpp
cxx -c -O src2.cpp
cxx -c -O src3.cpp
cxx -o executable src1.o src2.o src3.o
Say, I know code in file src1.cpp calls functions defined in file src2.cpp
and interprocedure optimizations would strongly increase program performance.
I would then manually compile in the following way:
cxx -c -O4 src2.cpp src1.cpp
cxx -c -O src3.cpp
cxx -o executable src1.o src2.o src3.o
or in cases with a small number of source files (in one programming language):
cxx -o executable -O4 src1.cpp src2.cpp src3.cpp
How would I write a Jamfile that results in these actions? Do I need rules
that are not defined in the default Jambase?
PS I've been using jam for almost two years now and like it quite a bit.
PPS I'm not on the jamming list, so please reply to this message.
Date: Mon, 1 May 2000 12:58:52 -0700
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: interprocedure optimizations
First I'd try to create a new .cpp that just #included the other two source
files.
If that won't work, you'll probably want to write your own rule/action pair to
do the compiling. The key is to use the "together" modifier on the compilation
action. The existing rules will always call the C++ rule for .cpp files, so
you'll have to somehow avoid that.
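A sketch of such a rule/action pair (the rule name here is made up; the stock
Jambase has no rule like this), using the "together" modifier so that all
sources bound to the same target are handed to one compiler call:

    rule C++Together
    {
        Depends $(<) : $(>) ;
    }
    actions together C++Together
    {
        $(C++) -c -O4 $(>)
    }

With "together", $(>) expands to the combined source list of every invocation
of the action on the same target.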
From: "Temesgen Habtemariam" <temesgen@aetherworks.com>
Date: Mon, 8 May 2000 13:07:57 -0500
Subject: Accessing target-specific variables values inside a rule.
Is there any way to look at the value of a target-specific variable
inside a rule? I know I can do something like VARNAME on TARGET = VALUE
to set the target-specific value for variable 'VARNAME'. Does anyone know if
there is a syntax for using target-specific values as the right-hand side
of an assignment, something like VARNAME_ON_TARGET = $(VARNAME on TARGET)
Date: Mon, 08 May 2000 11:44:39 -0700
From: Matt Armstrong <matt@corp.phone.com>
Subject: Re: Accessing target-specific variables values inside a rule.
There isn't. A workaround somebody on this list suggested
to me was to set a different variable keyed by the target name. So
every time you set the variable you do:
FOO on $(target) = VALUE ;
FOO_on_$(target) = VALUE ;
Then when you want to read the value you use the FOO_on_$(target) copy.
Ugly but it works.
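For example (a sketch using hypothetical names), a rule can then read the
shadow copy back through nested variable expansion:

    rule ShowFoo
    {
        # $(FOO_on_$(<)) expands the name FOO_on_<target> first,
        # then fetches that variable's (global) value
        local v = $(FOO_on_$(<)) ;
        Echo FOO for $(<) is $(v) ;
    }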
From: "Brian Mosher" <bmosher@digimarc.com>
Date: Mon, 15 May 2000 20:42:43 -0700
Subject: Help compiling jam on the mac and/or need Mac Binaries...
After working all day to get the latest version of jam to build on my G4, I
am at wit's end. I'm using a G4 with OS9.04, the latest update of Code
Warrior Pro 5, MPW 3.5 and I am compiling using Universal Headers v3.3.1
I downloaded v1.83 of CWGUSI and fixed up all of the include and lib paths
in the MPW build file. What I soon discovered was that the version of GUSI I
was using was no longer compatible with my current MSL. It seems that some
of the file routines' internals have changed.
I was able to get around this problem with a total hack where I pulled out
the GUSI routines that jam needs and built with just those. It looks as
though jam uses GUSI only for the dirent routines opendir, readdir, &
closedir. After a bunch of munging I was able to get the whole business to
build.
Here is my problem: it doesn't work. As soon as I try to run it in mpw I
randomly get one of the following problems:
1. A Sioux window pops up with the message "Jamfile: No error". This Sioux
window opens in the context of mpw, so closing it kills mpw. There is no way
to get back to mpw and there are two menus on the menu bar, one for mpw and
2. I get the following error message in my mpw worksheet:
### MPW Shell - Unable to load code fragment "jam" of "jam".
# Fragment container format unknown (OS error -2806)
3. MPW disappears without any warning.
I am giving up for today. If anyone has any advice I'd love to hear it. I
would especially appreciate it if some kind soul could find it in their
heart to email me the binaries for the PPC(hqx'ed if possible, my email
system eats mac attachments.)
I am very new to Mac programming, being mostly a win/unix guy, but I don't
think I've made any obvious screw-ups so I am really unsure what to do next...
Date: Thu, 18 May 2000 10:12:00 -0700
From: Scott RoLanD <shr@chat.net>
Subject: Trying to justify Jam
I am a fan of Perforce and therefore have seen a couple of references
to jam as a better make. I've poked around a bit and tried to find a
concise comparison of jam vs. make but still don't understand the
difference.
Now I am looking at starting a big project at work with either Jam or
GNU make. I'd like to know if Jam is really that much better that I
should promote it, and if it is I need a quick reasoning to explain
why I chose it rather than make. The project is complex, but will only
be running on SUN Solaris systems and is actually a hardware
simulation (using Verilog, but with a bunch of perl and other scripts).
Date: Thu, 18 May 2000 17:38:37 -0500 (CDT)
Subject: Re: Trying to justify Jam
Jam is faster than make.
The rule system means that individual Jamfiles become very simple.
The dependency tracking is more sophisticated, and C and C++ files
are scanned for include dependencies automatically, so that info does
not need to be kept or updated by hand.
Jam deals with dependencies across the entire build, so dependencies
between directories are automatically dealt with. I had to deal
with that by hand when I converted a small system from jam to make.
On the other hand, it is different and hard to understand in some ways,
and make can do a perfectly good job and people know how to use it.
From: "Michael Graff" <michael.graff@diversifiedsoftware.com>
Date: Thu, 18 May 2000 17:23:05 -0700
Subject: Re: Trying to justify Jam
One big thing is the way Jam handles subdirectories. It knows the whole
tree at once and can automatically deal with things like .../this/subdir
needs to include headers from .../that/subdir.
The syntax is more abstract and more flexible. It's not just targets,
dependencies, and shell lines. It's more like a programming language.
Dependency generation is built in and automatic.
While it doesn't apply to your case with one platform, the
platform-independent syntax of the jamfiles is handy.
Date: Fri, 19 May 2000 12:02:38 +0200 (METDST)
From: Igor Boukanov <Igor.Boukanov@fi.uib.no>
Subject: Re: Trying to justify Jam
If you need to update a big system after small changes, make's output will
in general be very cluttered, and it is sometimes hard to tell whether the
build was successful or not. In contrast, jam prints just the essential
information. This is very handy with big systems.
Date: Thu, 18 May 2000 12:10:28 -0700
From: Iain McClatchie <iain@10xinc.com>
Subject: Re: Trying to justify Jam
Scott> The project is complex, [...] and is actually a hardware
Scott> simulation (using Verilog, but with a bunch of perl and
Scott> other scripts).
Do you use your verilog files in lots of different ways? Are
some of them for gate-level simulation, others for formal
validation, others for behavioral-level representations of
blocks that have other verilog representation too? Make's
suffix-based rules are terrible for this kind of thing. Jam's
explicit procedural definition of dependencies and actions is
way more controllable. This is basically why I went with Jam.
I'm not unhappy with the result, but two years later I'm still
screwing around with the regression system (which is part of the
build system and implemented in Jam).
Most hardware design ends up doing iterations. Maybe Magma's got
the answer, but the rest of us iterate. Make, Jam, and all the
rest want to represent dependencies as a DAG. I have not seen a
well thought through solution to representing hardware design
iterations in a build system. We do it here: some tools essentially
use locally cached results from previous runs; the cache files do
not show up in the dependency graph, and each jam invocation runs
just one iteration.
The downside is that everyone who ever has to mess with the build
system (tools folks, logic designers, the guy who uses a perl
script to generate layout and RTL for some wacky full-custom ROM,
etc) has to learn a different way of doing things. They have to
learn about grist, which isn't complicated but it is different.
If you're going to have to hand this project off to other people
later, for instance, if you're building some piece of IP where the
build system is really part of the IP that gets transferred or
reused, then I'd either do it in make, build a make interface to
the Jam scripts right from the very start, or get more of a buy-in
for Jam from the rest of the folks in your organization.
If you're going to be making a change from make, you would do well
to evaluate everything out there. There is a perl-based make
replacement that might be good too: check out
http://www.dsmit.com/cons/
and other stuff on
http://software-carpentry.codesourcery.com/sc_build
Basically, though, you have a problem because build systems are
software-oriented and hardware is a different kind of thing.
The closest software gets to the kind of incremental updates that
end up happening in ECO flows are the incremental updates to .a
files from component .o files. Make has this particular rule
hardwired into the program as an exception. Ugh.
Date: Thu, 18 May 2000 12:15:56 -0600
From: Ray Caruso <rayman@powerplay.com>
Subject: Problems Defining my own rule
I am using Jam 2.2.5 on Solaris 2.6.
I am trying to define a new rule in my own Jamrules file.
The rule, named Moc, is used to build a new .cpp file from
a special .h file. To build the new .cpp file, I must run
a program called moc. The moc program takes three
arguments: moc file.h -o moc_file.cpp
So I set up a new rule called Moc in my jamrules file:
# These are the compilers we use.
CC = cc ;
C++ = CC ;
# This is the moc program.
MOC = moc ;
# Rule Moc states that the file on the left side of the : depends
# on the file on the right side of the :
# Namely, moc_file.cpp depends on file.h
rule Moc { Depends $(<) : $(>) ; }
# To build moc_file.cpp we must run moc like this...
actions Moc { $(MOC) $(>) -o $(<) }
This Jamrules file sits in /home/me/Devel.
The jam file sits in /home/me/Devel/project/src.
My Jamfile looks like this:
SubDir TOP project src ;
HDRS = $(QTDIR)/include ;
Main application :
main.cpp moc_file.cpp ;
Moc moc_file.cpp : file.h ;
I know Jam is reading the Jamrules file because it is using the correct
compiler. (By default it would use gcc, not my CC.)
When I run jam I get the following result:
$ jam
don't know how to make <as-ovm!src>moc_file.cpp
...found 112 target(s)...
...updating 8 target(s)...
...can't find 1 target(s)...
...can't make 2 target(s)...
C++ ../../as-ovm/src/main.o
And then it starts to build the .o files.
I don't get it. What am I missing here??
Date: Mon, 22 May 2000 11:41:20 -0500 (CDT)
Subject: Re: Problems Defining my own rule
The key here (at least part of it) is that it doesn't know how to make
<as-ovm!src>moc_file.cpp. Your rule tells it how to make moc_file.cpp,
which is a slightly different critter.
You should grist the args to the rule and send 'em to the action, which
probably means creating a slightly differently named action to call.
local hfile ;
MakeGristedName hfile : $(>) ;
MakeLocate $(<) : something ;
Depends $(<) : $(hfile) ;
Moc1 $(<) : $(hfile) ;
You need to look up the real names of those rules and commands and
verify that I got the $(<) and $(>) things in the right places!
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Mon, 22 May 2000 18:45:47 +0100
Subject: Re: Problems Defining my own rule
Your problem comes from the "grist" that Jam adds to
every target when using the SubDir rule.
In Jambase, look at the rule Objects: it contains a line
MakeGristedName s : $(<) ;
that adds this grist.
You should rewrite your rule:
rule Moc {
local s t ;
# Add grist to file names
MakeGristedName s : $(>) ;
MakeGristedName t : $(<) ;
Depends $(t) : $(s) ;
}
# Rule moc states that the file on the left side of the : is dependant on
on the file on the right side of the :
# Namely, moc_file.cpp is dependant on file.h
rule Moc { Depends $(<) : $(>) ; }
# To build moc_file.cpp we must run moc like this...
actions Moc { $(MOC) $(>) -o $(<) }
This Jamrules file sits in /home/me/Devel.
From: "Michael O'Brien" <mobrien@pixar.com>
Subject: Re: Problems Defining my own rule
Date: Mon, 22 May 2000 09:53:38 -0700
Jam alters a file name with grist to create a unique file description. This
allows multiple files with the same name in different directories. The
default gristing rewrites dir1/dir2/file.cpp as <dir1!dir2>file.cpp.
rule MakeMoc {
local gristedLhs gristedRhs ;
MakeGristedName gristedLhs : $(<) ;
MakeGristedName gristedRhs : $(>) ;
Moc $(gristedLhs) : $(gristedRhs) ;
}
rule Moc {
Depends $(<) : $(>) ;
Clean $(<) ;
}
actions Moc {
# your action goes here...
}
The gristing for *.h files, by default, is nothing, so the gristedRhs in
the above example is actually the same as the *.h file. The gristing for
*.h files is nothing because header files need to be located during the header search.
Anyway, I didn't really test the above snippets, so your mileage may vary. If
you have any questions, feel free to shoot me an e-mail.
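Putting the advice in this thread together, a complete Jamrules fragment might look something like the following. This is an untested sketch: the exact spellings of MakeGristedName and MakeLocate, and where the generated file should be located, should be checked against your own Jambase.

```jam
# Untested sketch combining the suggestions above.  Check the exact
# rule names (MakeGristedName, MakeLocate) against your Jambase.
rule Moc {
    local s t ;
    MakeGristedName t : $(<) ;   # moc_file.cpp -> <project!src>moc_file.cpp
    MakeGristedName s : $(>) ;   # *.h files get empty grist, so s == $(>)
    MakeLocate $(t) : $(LOCATE_SOURCE) ;
    Depends $(t) : $(s) ;
    Clean clean : $(t) ;
    Moc1 $(t) : $(s) ;
}
# A separately named action does the actual moc invocation.
actions Moc1 {
    $(MOC) $(>) -o $(<)
}
```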
Date: Tue, 23 May 2000 17:01:56 +0530
From: Amitava Bhattacharjee <amitav@cisco.com>
Subject: JAM for HPUX & AIX
- What does JAM stand for?
- Can you give any pointer for JAM for HPUX 11.0 & AIX 4.3.* ?
From: "Kolarik, Tony" <TKolarik@Verbind.com>
Date: Wed, 24 May 2000 13:37:04 -0400
Subject: FW: timestamps
So far I know about 'cons', anyone know of any other makers, commercial or
not that are not based solely on timestamps? Thanks,
From: "Kolarik, Tony" <TKolarik@Verbind.com>
Date: Wed, 24 May 2000 13:28:13 -0400
Subject: timestamps
I'm trying to find a maker that will *always* rebuild things correctly -
whenever the contents of say a header file have changed since the last build
- regardless of timestamps. Getting an earlier version of a perforce
controlled file using the modtime keyword in the client spec is a common
example of this.
Looking at the Jam doc I get the impression that dependencies are based on
file's timestamps. Is that true? If so is it the only method used?
I know 'cons' can handle it, anyone know of any other tools, commercial or not? Thanks,
Date: Thu, 25 May 2000 14:37:27 -0700 (PDT)
Subject: Re: JAM for HPUX & AIX
Just Another Make (actually, the full name is Jam/MR, because there was
already another product out in the world called JAM...I think the MR
stands for Make Replacement).
Just pick up the source for it and build it on the platforms. The README
mentions AIX, without a specific release, and HPUX at 9.0, but unless
there's something hugely different between 9.0 and 11.0, there's probably
no reason why it shouldn't work. Can't hurt to give it a try anyway.
Apache's Jakarta project includes a build tool called "ant" that lets you
say what criteria to use to determine whether it should be rebuilt (e.g.,
different OS, different compiler, etc.), so you might be able to get it to
build based on file contents.
It's primarily geared towards doing Java things (e.g., javac'ing, jar'ing,
zip'ing, etc.), and comes with those types of "tasks" (as they refer to
them) defined, much the same way Jam comes with certain rules already
defined -- but you can add new "task definitions", assuming you know how
to write Java code (which is what Ant is written in; the build files are
in XML, so it probably wouldn't hurt to know something about that as well).
It's still a really new tool, so it's still going thru changes, but if you
want to check it out, go to jakarta.apache.org, and click on Ant under SubProjects.
From: "Allan Anderson" <a@be.com>
Date: Wed, 07 Jun 2000 11:44:39 -0700
Subject: templated targets
I'm trying to set up some default lists of link libraries and header
locations that something in my build tree can use. My goal is to be
able to just say something like 'TYPE=driver' in a Jamfile and have it
build stuff with the appropriate pre-defined link options and dependencies.
I suppose that I could have it depend on a particular dummy target that
sets up the link options and always gets called. I guess that could get
slow, though. Maybe it could be done with a bunch of variables defined at
the top that get expanded when appropriate -- macros, I guess. Does my
make background show? :)
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: templated targets
Date: Wed, 7 Jun 2000 12:41:18 -0700
Do you mean something like this:
BuildType default = Normal ;
LINK_MODE default = Link ;
switch $(BuildType) {
case NoLink* : ECHO Linking is being disabled ;
LINK = echo Skipping... ;
case FullDebug* : ECHO Generating full debug for all components ;
OPTIM = $(DEBUG) ;
case Purify : ECHO Setting up for Purify ;
OPTIM = $(DEBUG) ;
LINK_MODE = Purify ;
SUFEXE = .purified ;
case Quantify : ECHO Setting up for Quantify ;
OPTIM = $(DEBUG) ;
LINK_MODE = Quantify ;
SUFEXE = .quantified ;
DEBUG += -DNDEBUG ;
OPTIM += -DNDEBUG ;
case Optimize : ECHO Optimizing all components ;
ECHO "(no debugging symbols)" ;
DEBUG = $(OPTIM) ;
case NoDebug* : ECHO Disabling All Debugging Information ;
DEBUG = $(OPTIM) ;
DEBUG += -DNDEBUG ;
OPTIM += -DNDEBUG ;
case Normal : ECHO Starting normal build ;
case * : EXIT "Unknown option (" $(BuildType) ") for BuildType" ;
}
Subject: RE: templated targets
From: "Allan Anderson" <a@be.com>
Date: Wed, 07 Jun 2000 13:07:26 -0700
Sure. But how would I specify this in lots of different Jamfiles (not
on the command line) and have it automatically differ for each?
With make, I'd have each file do an include of this logic. I want to do
the switch for each Jamfile. Passing the info in from the jamfile just
above it is no good, because this needs to work regardless of where it
is in the tree.
Date: Wed, 7 Jun 2000 16:00:34 -0500 (CDT)
Subject: Re: templated targets
I think what you'd do is make up a rule with the logic in it and invoke
it with an argument to do the differentiation.
like:
SetFlagsFor driver ;
I probably don't fully understand what you want.
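As a sketch of that idea (untested; the type names, libraries, and include paths here are purely illustrative):

```jam
# Untested sketch of a per-Jamfile template rule.  The type names,
# libraries, and include paths are purely illustrative.
rule SetFlagsFor {
    switch $(<) {
        case driver : LINKLIBS = -lkernel ;
                      SUBDIR_HDRS += $(TOP)/drivers/include ;
        case app    : LINKLIBS = -lgui -lX11 ;
        case *      : EXIT "SetFlagsFor: unknown build type" $(<) ;
    }
}
```

Each Jamfile would then invoke SetFlagsFor driver ; (or app, etc.) right after its SubDir line. Note that plain variable assignments are global in Jam, so a Jamfile that wants different settings must invoke the rule itself rather than inherit them.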
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Date: Wed, 7 Jun 2000 16:01:22 -0700
Subject: Jam and AIX 4.3
AIX 4.3 changed the format of ar archives. They
refer to the new format as AR_BIG; it extends the
size of the file names which can be stored in the
main part of the archive header to 20 bytes (from 16).
All AIX 4.3 commands ONLY produce archives in the
big format (they can read the older "SMALL" format as well).
Otherwise, the archive header has not changed.
The problem is that the header file, by default,
selects the AR_SMALL format.
Here are the two patches to get Jam to work with
the big format archives. I've assumed that
Jam does not need to be able to handle the small
archive format, as Jam really only needs to
parse/scan/timestamp libraries which it itself
has created.
(There was a very small change in Jamfile for Jam itself, the line
if $(OS)$(OSVER) = AIX43 { CFLAGS += -D_AIX43 ; }
needs to be added.)
From: Randy Roesler <rroesler@mdsi.bc.ca>
Subject: diff jam.h
Date: Wed, 7 Jun 2000 15:47:45 -0700
*** ../../orginal/src/jam.h Thu Sep 16 21:06:13 1999
--- jam.h Wed Jun 7 13:21:23 2000
***************
*** 151,160 ****
# ifdef _AIX
# define unix
+ # ifdef _AIX43
+ # define OSSYMS "UNIX=true","OS=AIX","OSVER=43"
+ # else
# ifdef _AIX41
# define OSSYMS "UNIX=true","OS=AIX","OSVER=41"
# else
# define OSSYMS "UNIX=true","OS=AIX","OSVER=32"
+ # endif
# endif
From: Randy Roesler <rroesler@mdsi.bc.ca>
Subject: diff fileunix.c
Date: Wed, 7 Jun 2000 15:48:14 -0700
*** ../../orginal/src/fileunix.c Thu Sep 16 21:06:01 1999
--- fileunix.c Wed Jun 7 15:22:46 2000
***************
*** 44,49 ****
# else
# if !defined( __QNX__ ) && !defined( __BEOS__ )
+ # ifdef _AIX43
+ /* AIX 43 ar SUPPORTs only __AR_BIG__ */
+ # define __AR_BIG__
+ # endif
# include <ar.h>
# endif /* QNX */
# endif /* MVS */
***************
*** 274,282 ****
if( ( fd = open( archive, O_RDONLY, 0 ) ) < 0 ) return;
! if( read( fd, (char *)&fl_hdr, FL_HSZ ) != FL_HSZ ||
strncmp( AIAMAG, fl_hdr.fl_magic, SAIAMAG ) ) {
close( fd );
return;
}
if( ( fd = open( archive, O_RDONLY, 0 ) ) < 0 ) return;
! if( read( fd, (char *)&fl_hdr, FL_HSZ ) != FL_HSZ ||
! #ifdef _AIX43
! strncmp( AIAMAGBIG, fl_hdr.fl_magic, SAIAMAG ) )
! #else
strncmp( AIAMAG, fl_hdr.fl_magic, SAIAMAG ) )
+ #endif
{
+ printf( "magic number wrong on %s\n", archive );
close( fd );
return;
}
Date: Thu, 08 Jun 2000 08:51:25 -0700
From: Iain McClatchie <iainmcc@ix.netcom.com>
Subject: Include file dependencies
All build systems have quirky weaknesses when it comes to handling
#include files, and Jam is no different. When Jam runs the "make"
phase on a .c or .cc file, it scans that file for #include files,
and marks them as NOCARE and as dependencies, which is great.
Ordinarily, these .h files are source code. Jam gets a timestamp
for each, and figures if the .c or .cc file should be rebuilt.
I have a situation in which the .h file is not source code. It's
"built" by an installation rule which copies it from the source
location in a different directory tree. Jam copies direct
dependencies correctly.
But it appears that Jam does not run the HdrRule on these .h files
that it finds. As a result, when these .h files further include
other .h files, Jam does not copy those over, and compilation fails.
Right now, my "workaround" is that multiple invocations of Jam copy
successive levels of the #include hierarchy, until they're all copied
in and the build works.
I can dig into Jam's source code to attempt to fix this problem, but
first I could use a little guidance. I think the HdrRule doesn't
get run on built .h files because it gets run during the bind phase,
before any of the update actions are run. As a result, I suspect
a fix would involve a change to the basic phased operation of Jam,
and now I'm probably talking about a totally different build tool.
Do you have any ideas on the matter?
From: David Moore <david.moore@dialogic.com>
Date: Sun, 11 Jun 2000 22:42:18 GMT
Subject: CORBA IDL rule
I'm trying to define a rule for CORBA IDL which will allow us to
build an executable by specifying the IDL files.
e.g. Main server : server.cc
pet.idl
petimpl.cc
owner.idl
ownerimpl.cc
;
My problem is the header file generated from one .idl file may depend
on that generated by another .idl file.
e.g. owner.idl #includes pet.idl, so when owner.h is generated it
has a #include "pet.h", but Jam tries to build owner.o from
owner.cc and owner.h before it has run the IDL rule for pet.h
and of course fails.
This is what I have...
rule UserObject {
switch $(>:S) {
case .idl : C++ $(<) : $(<:S=.cc) ;
Idl $(<:S=.cc) : $(>) ;
case * : EXIT "Unknown suffix on" $(>) "- see UserObject rule in
Jamfile(5)." ;
}
}
rule Idl {
# based on the Yacc rule
local h ;
h = $(<:BS=.h) ;
MakeLocate $(<) $(h) : $(LOCATE_SOURCE) ;
# Some places don't have an Idl.
if $(IDL) {
Depends $(<) $(h) : $(>) ;
Idl1 $(<) $(h) : $(>) ;
Clean clean : $(<) $(h) ;
}
INCLUDES $(<) : $(h) ;
}
actions Idl1 {
$(IDL) $(IDLFLAGS) $(>)
}
I have seen Jam used in two steps:
- first to generate all the IDL files into C++ source regardless of
if they have changed or are even used
- then, to compile those source files into executables as specified
in a Jamfile.
I think Jam should be able to do better than that though, I want
it to manage the dependencies on the IDL files themselves.
Date: Mon, 12 Jun 2000 17:26:02 -0700 (PDT)
Subject: RE: CORBA IDL rule
You might try adding the "files" pseudo-target as a dependency on your Idl
targets, since "files" gets built before "lib" and "exe" do:
rule Idl {
Depends files : $(<) $(h) ;
Depends $(<) $(h) : $(>) ;
}
I don't have any IDL stuff, so I don't have any way of testing it to make
sure it'll do what you need, but it seems like it should.
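Folded into the Idl rule posted earlier in the thread, the suggestion might look roughly like this (untested):

```jam
# Untested sketch: the earlier Idl rule with the "files" dependency
# added, so generated sources are built during the "files" pass,
# before any objects are compiled.
rule Idl {
    local h ;
    h = $(<:BS=.h) ;
    MakeLocate $(<) $(h) : $(LOCATE_SOURCE) ;
    if $(IDL) {
        Depends files : $(<) $(h) ;
        Depends $(<) $(h) : $(>) ;
        Idl1 $(<) $(h) : $(>) ;
        Clean clean : $(<) $(h) ;
    }
    INCLUDES $(<) : $(h) ;
}
```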
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: CORBA IDL rule
Date: Mon, 12 Jun 2000 19:17:32 -0700
My Idl (Web Logic) produces 4 output files from the single Idl
source file. The files produced for X.idl are X_c.h X_c.cpp (the
client stubs) and X_s.h and X_s.cpp (server skeletons).
So, here is my IDL rule.
IdlRm removes targets if they are links
IdlMv moves targets to the correct directory.
My Idl compiler does not let you specify the output directory for the files !
I also build a default "interface control file" for
tuxedo and arrange for dependency checking ...
rule Idl {
local g s c n ;
if ! $($(<:G=)-idl) {
# Cheesy gate to prevent multiple invocations
$(<:G=)-idl = true ;
makeGristedName g : $(<:G=) ;
n = $(<:G=) ;
s = $(n:S=_s.h) $(g:S=_s.cpp) ;
c = $(n:S=_c.h) $(g:S=_c.cpp) ;
IdlRm $(c) $(s) : $(g) ;
IdlDo $(c) $(s) : $(g) ;
IdlMv $(c) $(s) : $(g) ;
}
}
rule IdlDo {
local h i ;
# special case because of how idl.pl works
MakeLocate $(<[1]) $(<[3]) : $(LOCATE_COMPONENT) ;
MakeLocate $(<[2]) $(<[4]) : $(LOCATE_SOURCE) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
Depends $(IDLS) : $(<) ;
Clean clean : $(<) ;
# alias to non gristed form
for i in $(<) {
if $(i) != $(i:G=) {
Depends $(i:G=) : $(i) ;
}
}
HDRS on $(<) = $(SEARCH_SOURCE) $(HDRS) $(SUBDIR_HDRS) ;
# Build a "default" ICF file
Depends $(<) : $(>:S=.xx) ;
Depends $(>:S=.xx) : $(>) $(ICFTMPLT) ;
MakeLocate $(>:S=.xx) : $(LOCATE_SOURCE) ;
RmIfLink $(>:S=.xx) ;
IdlIcf $(>:S=.xx) : $(>) $(ICFTMPLT) ;
ICFFILE on $(<) += $(>:S=.xx) ;
# If the source file is in a distant directory, look there.
# Else, look in "" (the current directory).
ScanFile $(>) ;
}
You will notice how the dependency is set up:
X_s.h X_s.cpp X_c.h X_c.cpp depend on X.xx
X.xx depends on X.idl
This is because X.xx is used by the Idl command, and
needs to exist before X_s.h X_s.cpp X_c.h X_c.cpp can be built.
My Object rule now looks like this. I use the phony extensions
.skel and .stub to build only part of the Idl compiler's output.
I have another macro which expands something like
Main main : X.idl ; into something like
Main main : X_c.cpp X_s.cpp ;
But if I only want the skeleton or the stub, I would invoke
Main main : X.skel ; (or X.stub) instead.
rule Object {
local h ;
# locate object and search for source, if wanted
Clean clean : $(<) ;
MakeLocate $(<) : $(LOCATE_TARGET) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
# Save HDRS for -I$(HDRS) on compile.
# We shouldn't need -I$(SEARCH_SOURCE) as cc can find headers
# in the .c file's directory, but generated .c files (from
# yacc, lex, etc) are located in $(LOCATE_TARGET), possibly
# different from $(SEARCH_SOURCE).
HDRS on $(<) = $(SEARCH_SOURCE) $(HDRS) $(SUBDIR_HDRS) ;
# handle #includes for source: Jam scans for headers with
# the regexp pattern $(HDRSCAN) and then invokes $(HDRRULE)
# with the scanned file as the target and the found headers
# as the sources. HDRSEARCH is the value of SEARCH used for
# the found header files. Finally, if jam must deal with
# header files of the same name in different directories,
# they can be distinguished with HDRGRIST.
# $(h) is where cc first looks for #include "foo.h" files.
# If the source file is in a distant directory, look there.
# Else, look in "" (the current directory).
ScanFile $(>) ;
RmIfLink $(<) ;
switch $(>:S) {
case .asm : As $(<) : $(>) ;
case .c : Cc $(<) : $(>) ;
case .C : C++ $(<) : $(>) ;
case .cc : C++ $(<) : $(>) ;
case .cpp : C++ $(<) : $(>) ;
case .pc : Cc $(<) : $(>:S=.c) ;
ProC $(<:S=.c) : $(>) ;
case .f : Fortran $(<) : $(>) ;
case .idl :
switch $(<:S=) {
case *_c : C++ $(<) : $(>:S=_c.cpp) ; Idl $(>) ;
case *_s : C++ $(<) : $(>:S=_s.cpp) ; Idl $(>) ;
}
case .skel : C++ $(<) : $(>:S=_s.cpp) ; Idl $(>:S=idl) ;
case .stub : C++ $(<) : $(>:S=_c.cpp) ; Idl $(>:S=idl) ;
case .l : C++ $(<) : $(<:S=.cpp) ;
Lex $(<:S=.cpp) : $(>) ;
case .s : As $(<) : $(>) ;
case .y : C++ $(<) : $(<:S=.cpp) ;
Yacc $(<:S=.cpp) : $(>) ;
case * : UserObject $(<) : $(>) ;
}
}
From: David Moore <david.moore@dialogic.com>
Date: Tue, 13 Jun 2000 22:19:29 GMT
Subject: RE: CORBA IDL rule
Adding that extra Depends works magic!
[jamming] CORBA IDL rule:
From: john@nanaon-sha.co.jp (John Belmonte)
Date: Wed, 28 Jun 2000 14:12:32 +0900
Subject: Library rule quirks with LOCATE
Has anyone run into the following? A known bug? I'm running on linux with
recent binutils.
When LOCATE is set on a library target (or MakeLocate rule used), strange
things start happening:
* The target is always remade even when it's current. Setting NOARSCAN
option flag seems to fix this (at a performance cost).
* If the KEEPOBJS option flag is set, the library won't be created.
Date: Thu, 29 Jun 2000 10:06:44 -0700
From: Scott RoLanD <shr@chat.net>
Subject: Need help with Rules
I am trying to get Jam to help me create a set of files. These files
can either be generated by calling a program "my_script" or they can
be custom files prepared by the user.
Here is sort of a shell scripting mock up of what I want...
tribs = "01 02 03 04" ;
for trib in $tribs do
begin
if [ -f custom/gen_$trib.cmnd ]; then
cp custom/gen_$trib.cmnd output/gen_$trib.cmnd
else
my_script gen.cfg $trib > output/gen_$trib.cmnd
fi
end
I want to be able to change the value of tribs depending on the setup
for this test.
Also I only want the cp to run if custom/gen_$trib.cmnd is newer than
output/gen_$trib.cmnd.
Finally I only want my_script to run if gen.cfg is newer than
output/gen_$trib.cmnd.
I can't seem to write Jam rules that let me say something like this:
tribs = 01 02 03 04 ;
for trib in $(tribs) {
Forge $(trib) ;
}
Date: Thu, 29 Jun 2000 17:09:45 -0700 (PDT)
Subject: Re: Need help with Rules
Works for me:
$ cat Jamfile
tribs = 01 02 03 04 ;
for trib in $(tribs) {
Echo $(trib) ;
}
$ jam
01
02
03
04
...found 7 target(s)...
Maybe it's your Forge rule that's the problem?
From: john@nanaon-sha.co.jp (John Belmonte)
Date: Fri, 30 Jun 2000 17:00:52 +0900
Subject: Library rule quirks with LOCATE
Has anyone run into the following? A known bug? I'm running on linux with
recent binutils.
When LOCATE is set on a library target (or MakeLocate rule used), strange
things start happening:
* The target is always remade even when it's current. Setting NOARSCAN
option flag seems to fix this (at a performance cost)
* If the KEEPOBJS option flag is set, the library won't be created.
Date: Fri, 30 Jun 2000 07:54:47 -0700
From: Scott RoLanD <shr@chat.net>
Subject: Re: Need help with Rules
Right, I was saying that I couldn't write a Forge rule (or set of
rules) that would mimic the shell pseudo-code to do what I want:
tribs = "01 02 03 04" ;
for trib in $tribs do
begin
if [ -f custom/gen_$trib.cmnd ]; then
cp custom/gen_$trib.cmnd output/gen_$trib.cmnd
else
my_script gen.cfg $trib > output/gen_$trib.cmnd
fi
end
Date: Fri, 30 Jun 2000 08:52:08 -0700
From: Scott RoLanD <shr@chat.net>
Subject: Re: Need help with Rules
I played around with gmake and managed to get it to do what I want...
First the simple form:
tribs = 01 02 03 04
all : $(foreach trib,$(tribs),output/gen_$(trib).cmnd)
output/gen_%.cmnd: custom/gen_%.cmnd
cp -a $(<) $@
output/gen_%.cmnd: gen.cfg
my_script gen.cfg $* > $@
This works because gmake reads implicit rules from the top of the file
and once one matches it runs it and marks the target as updated.
I started with a more brute force form:
tribs = 01 02 03 04
all : $(foreach trib,$(tribs),output/gen_$(trib).cmnd)
output/gen_01.cmnd: $(shell if [ -f custom/gen_01.cmnd ] \; then \
echo custom/gen_01.cmnd \; else \
echo gen.cfg \; fi )
output/gen_02.cmnd: $(shell if [ -f custom/gen_02.cmnd ] \; then \
echo custom/gen_02.cmnd \; else \
echo gen.cfg \; fi )
output/gen_03.cmnd: $(shell if [ -f custom/gen_03.cmnd ] \; then \
echo custom/gen_03.cmnd \; else \
echo gen.cfg \; fi )
output/gen_04.cmnd: $(shell if [ -f custom/gen_04.cmnd ] \; then \
echo custom/gen_04.cmnd \; else \
echo gen.cfg \; fi )
output/gen_%.cmnd:
@if [ -f custom/gen_$*.cmnd ] ; then \
echo Copying custom $* ; \
cp custom/gen_$*.cmnd $@ ; \
else \
echo Creating $* ; \
my_script gen.cfg $* > $@ ;\
fi
This works because the dependencies are created on the fly. And then
when the rule is called it checks to see why it was run and does the
appropriate action.
So the question still remains.... Can I do this in Jam?
Date: Mon, 3 Jul 2000 13:15:13 -0500 (CDT)
Subject: Re: Library rule quirks with LOCATE
We have not encountered this on Solaris or NT.
I would guess that this means the dependencies are always out of date, possibly
because the thing built is not the same thing that the dependency is expressed on.
Date: Mon, 3 Jul 2000 19:55:03 -0700 (PDT)
Subject: Re: Need help with Rules
As far as I know, Jam doesn't do wildcards in source-file names. For
example, you can't do something like:
Bulk doc : *.html ;
If my assumption of what you're trying to do is correct:
If files are in custom, and they're newer or not in outdir, copy them
Else generate them if they don't exist in outdir or gen.cfg is newer
then I can't think of any clean way (just clunky ones) in Jam to do what
you're looking to do, other than to have a pseudo-target that always runs,
which just hands off to some other tool, say a perl script, that does the
work for you. It'd be a pretty straightforward script, but it would mean
having a separate tool to maintain.
If Jam is doing what you need it to for the most part, and these files are
a relatively small part of your build, then working around it is probably
reasonable. But if this is a huge part of what you do, you might want to
consider a build-tool that does deal with wildcarding.
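The always-run pseudo-target workaround might be sketched like this; "forge.pl" is a hypothetical script name, standing in for whatever tool does the copy-or-generate check itself:

```jam
# Sketch of the pseudo-target workaround.  "forge.pl" is a
# hypothetical script that copies custom/gen_$trib.cmnd when it is
# present and newer, and otherwise regenerates the file from gen.cfg.
tribs = 01 02 03 04 ;
NotFile forge ;
Always forge ;
Depends all : forge ;
TRIBS on forge = $(tribs) ;
actions Forge {
    perl forge.pl $(TRIBS)
}
Forge forge ;
```

Jam then just guarantees the script runs on every build; all the timestamp logic lives in the script.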
Date: Mon, 3 Jul 2000 20:07:36 -0700 (PDT)
Subject: Re: Library rule quirks with LOCATE
I meant to reply to the guy who said this works for him, but I
accidentally deleted it. I'd like to see how you're doing it to make it work.
The behaviour I see is exactly as John described. If you set LOCATE on the
library target, it'll get put where you specify, but the object modules
for it will recompile every time, because they depend on a local library
that doesn't exist (e.g., libfoo.a(foo.o)). The same is true if you use
MakeLocate.
If a Library target is specified without a directory, the
LibraryFromObjects rule calls MakeLocate with LOCATE_TARGET as the
directory, which, if you're using SubDir, is set to the local directory,
and if you're not using SubDir, is unset. So in either case, you end up
with the target library having LOCATE set to the local directory, and the
object modules being dependent on that local library.
If you specify LOCATE (or LOCATE_TARGET) generically (as opposed to "on"
the library target), you'll get the object modules depending on the right
(full-path) library, but the modules themselves will be compiled into the
directory the library goes in rather than in the local source directory
(which probably isn't what you want, especially if you keep your objects,
since you could potentially end up with name conflicts).
If you set KEEPOBJS, then the object modules become dependencies of "obj"
and will get recompiled when necessary, but no dependency is set for the
library $(l) on "lib", so the library is never built. This looks to be an
actual bug. I'm not sure what the thinking was for the if on KEEPOBJS
setting different Depends, but commenting that all out makes it work, or adding:
Depends lib : $(l) ;
in the if $(KEEPOBJS) makes it work. Either way.
The only way I know of to get building-a-library-directly-into-somewhere-
other-than-the-local-directory to work correctly is to get your library
target into a full-path before handing it off to the Library rule.
Historically, when Jam was first being put together, that's how things
were done -- we had a CONFIG file, part of which was a list of
library-name symbols (e.g., LIBFOO), all of which had as their values a
full-path name (e.g., $(BUILDDIR)/lib/libfoo.a), and library targets were
specified as:
Library LIBFOO : foo.c bar.c ;
If you don't want to go that route, you could instead have a wrapper rule
that does something like:
rule myLib {
local l ;
l = $(LIBDIR)$(SLASH)$(<:S=$(SUFLIB)) ;
Library $(l) : $(>) ;
}
This will get the object modules to be dependencies of the full-path
library (e.g., /work/lib/libfoo.a(foo.o) -- the object modules will get
built into the local source directory, the library will get built into
$(LIBDIR), and the modules will only recompile as needed.
Alternatively, you could just let the library get built into the local
directory then do an InstallLib on it. (Note: Since install isn't part of
the dependencies on "all", if you want to both build and install, you have
to run 'jam install').
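For illustration, the wrapper might be used like this (the LIBDIR value is an assumed Jamrules setting, not anything Jam defines for you):

```jam
# Assumed setting in Jamrules; the path is made up:
LIBDIR = $(TOP)$(SLASH)build$(SLASH)lib ;
# Then, in a Jamfile:
myLib foo : foo.c bar.c ;
# The objects build in the local source directory, and the library
# is archived as foo$(SUFLIB) under $(LIBDIR).
```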
Subject: RE: Library rule quirks with LOCATE
Date: Tue, 4 Jul 2000 14:18:33 -0700
I've just started to use jam (under Windows NT), and I'm very interested in
putting my libs in a different directory from my obj, which is different
from my source.
I saw the email earlier today about this, and I thought I would share my
solution and see what others might have to say about it.
The default implementation of the Library rule looks like:
rule Library {
LibraryFromObjects $(<) : $(>:S=$(SUFOBJ)) ;
Objects $(>) ;
}
I created an override for the Library rule in my Jamrules file:
rule Library {
    LOCATE_TARGET = $(OBJ_TARGET) ;
    if ! $(<:D) {
        LibraryFromObjects $(<:D=$(LIB_TARGET)) : $(>:S=$(SUFOBJ)) ;
    } else {
        LibraryFromObjects $(<) : $(>:S=$(SUFOBJ)) ;
    }
    Objects $(>) ;
}
and I use LIB_TARGET and OBJ_TARGET to control where the libraries go.
This seems to work for me, and it doesn't appear to rebuild anything unnecessarily.
Does anyone see any flaws with this?
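For what it's worth, a Jamrules setup using that override might look like this (the directory names are just examples):

```jam
# Example settings for the overridden Library rule; the directory
# names here are made up.
OBJ_TARGET = $(TOP)$(SLASH)obj ;
LIB_TARGET = $(TOP)$(SLASH)lib ;
# An ordinary Jamfile invocation then puts the objects under
# $(OBJ_TARGET) and the library under $(LIB_TARGET):
Library libfoo : foo.c bar.c ;
```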
From: Martine Habib <mhabib@microsoft.com>
Date: Thu, 6 Jul 2000 11:59:35 -0700
Subject: Using different flags for single file
I am trying to not just add cflags to a single file by using the
ObjectCcFlags rule, but substitute them entirely (as in the module is
compiled as debug, but the single file foo.c will have full optimization).
Is there any way I can do that ?
Date: Thu, 6 Jul 2000 14:22:51 -0700 (PDT)
Subject: Re: Using different flags for single file
You can just set CCFLAGS directly on the target (foo.o) itself. For example:
CCFLAGS on foo.o = -O ; #and whatever other flags you need
Note that you do need to do it on the object filename, not the source
filename, so you might want to be more platform-independently correct and
say instead:
CCFLAGS on foo$(SUFOBJ) = -O ;
If you're using SubDir, you'll need to specify it by the gristed name:
CCFLAGS on <$(SOURCE_GRIST)>foo$(SUFOBJ) = -O ;
Alternatively, if you do have a number of flags you use, you might want to
consider using OPTIM (which is passed on the compile line in the Cc
actions) to hold your debug (-g) flag, then set it to your optimize (-O)
flag on foo.o, since that would be pretty much maintenance free, whereas
if you use CCFLAGS and at some point you changed the flags you compile with,
you'd need to remember to change them for foo.o as well.
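For example, the OPTIM variant might look like this in a Jamfile (the module and file names are illustrative; the grist is whatever SubDir established for the directory):

```jam
# Module compiled for debug by default...
OPTIM = -g ;
Main mymodule : foo.c bar.c ;
# ...but foo.o alone gets optimization instead.  The grist shown is
# whatever SubDir established for this directory.
OPTIM on <$(SOURCE_GRIST)>foo$(SUFOBJ) = -O ;
```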
From: Martine Habib <mhabib@microsoft.com>
Subject: RE: Using different flags for single file
Date: Thu, 6 Jul 2000 19:36:28 -0700
Can I do this with any variable ?
I mean for example, in the jamfile:
BLA on <$(SOURCE_GRIST)>foo$(SUFOBJ) = true ;
And in the Jambase, in the Cc rule:
if $(BLA) {
echo "BLA is true !" ;
}
It does not seem to work, but maybe I am doing something wrong.
Date: Fri, 7 Jul 2000 12:03:19 +0200 (METDST)
From: Igor Boukanov <boukanov@fi.uib.no>
Subject: RE: Using different flags for single file
When you assign a variable on a target, its assigned value will be
visible only in the target update action. There is no way to get
the value before that stage.
What you can do is store the value in some variable, e.g.:
BLA on <$(SOURCE_GRIST)>foo$(SUFOBJ) = true ;
XX_BLA_$(SOURCE_GRIST)_foo$(SUFOBJ) = true ;
# Now you can query
if $(XX_BLA_$(SOURCE_GRIST)_foo$(SUFOBJ)) {
Echo "BLA is true !" ;
}
Here are some rules that can simplify life in your case:
rule targetVarSet {
$(1) on $(2) = $(3) ;
__tVar_$(1)_$(2) = $(3) ;
}
rule targetVarGet { $(3) = $(__tVar_$(1)_$(2)) ; }
rule targetVarEcho { Echo $(__tVar_$(1)_$(2)) ; }
Usage can be like:
targetVarSet BLA : <$(SOURCE_GRIST)>foo$(SUFOBJ) : true ;
targetVarEcho BLA : <$(SOURCE_GRIST)>foo$(SUFOBJ) ;
targetVarGet BLA : <$(SOURCE_GRIST)>foo$(SUFOBJ) : tmp ;
if $(tmp) { Echo "BLA is true !" ; }
From: Martine Habib <mhabib@microsoft.com>
Date: Thu, 6 Jul 2000 11:30:13 -0700
Subject: Using different flags for single file
I am trying not just to add cflags for a single file by using the
ObjectCcFlags rule, but to substitute them entirely (as in: the module is
compiled as debug, but the single file foo.c gets full optimization).
Date: Fri, 7 Jul 2000 15:39:43 -0700 (PDT)
Subject: RE: Using different flags for single file
[ Just a reminder: Nowadays Jambase is compiled into the jam executable,
so if you make a change to the Jambase file and it doesn't seem to be
working, you might have forgotten (I usually do :) that you need to
specify the Jambase file on the command line ('jam -f /path/to/Jambase').
You can also just make the change to jambase.c and rebuild jam (but you
probably want to do that after you've worked it out in Jambase first :) ]
As to what you're trying to do...as far as I know:
- If you set a variable-value "on" a target, you can't access
that value in a rule, but it will carry through to the actions.
- If you set the value generally, you can access that value in a
rule, but if you access it in the actions it will be whatever
it was last (as in finally, in the end) set to.
Example variable-value set "on" a target:
Jamfile:
Main foo : foo.c ;
BLAH on foo = true ;
Jambase:
actions Link bind NEEDLIBS {
[ $(BLAH) ] && echo "Blah is true"
$(LINK) $(LINKFLAGS) -o $(<) $(UNDEFS) $(>) $(NEEDLIBS) $(LINKLIBS)
}
$ jam -f /usr/local/lib/Jambase
....found 10 target(s)...
....updating 2 target(s)...
Cc foo.o
Link foo
Blah is true
Chmod foo
....updated 2 target(s)...
Example variable-value set generally:
Jambase:
rule Cc {
if $(BLAH) { Echo BLAH is $(BLAH) ; }
[...]
}
Jamfile:
rule BlahFalse {
BLAH = ;
Main $(<) : $(>) ;
}
rule BlahTrue {
BLAH = true ;
Main $(<) : $(>) ;
}
BlahTrue foo : foo.c ;
BlahFalse bar : bar.c ;
$ jam -f /usr/local/lib/Jambase -n
BLAH is true
....found 13 target(s)...
....updating 4 target(s)...
Cc foo.o
cc -c -O -o foo.o foo.c
Link foo
[ ] && echo "Blah is true"
cc -o foo foo.o
Chmod foo
[etc.etc...]
....updated 4 target(s)...
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Library rule quirks with LOCATE
Date: Fri, 7 Jul 2000 18:28:30 -0700
Basically, the way that Jam is implemented, there is a
requirement on the Jamrules that library members have the
same LOCATE value as the library itself. This stems from
the fact that the timestamp() function, when applied to
a library member, looks for the library using the LOCATE
value associated with the member.
The default LibraryFromObjects rule implements this
requirement. But you can break your build if you override
the LOCATE on the library without also resetting LOCATE
on all of the members.
Thus, do not call MakeLocate on the library.
Don't like this behavior? You can change your Jamrules
to place libraries in LOCATE_LIBRARIES instead of LOCATE_TARGET
by modifying the LibraryFromObjects rule. You would also
want to modify the SubDir rule.
Another option would be to change jam itself so that the
timestamp function would first look up/bind the library node
(and thus the library's location), and then use this to
scan the archive, effectively ignoring the LOCATE associated
with the library member (if any).
Perhaps I'll post a patch for this when builds on NT become
an issue for me (symbolic links start failing).
Re the KEEPOBJS flag: in the default Jamrules, if it is set,
libraries are not built unless explicitly required by another
target. You can see that the dependency is established from the
objects to the obj pseudo-target, rather than the library
being tied to the lib target.
From: "Kimpton, Andrew" <awk@pulse3d.com>
Date: Tue, 11 Jul 2000 12:08:47 -0700
Subject: Multiple 'independent' targets
I have (what I think is) a pretty straightforward question but browsing the
docs couldn't answer it.
I need to build two 'independent' binaries from a single execution of Jam -
something that
all: foo bar
foo : foo.c
cc -o foo foo.c
bar : bar.c
cc -o bar bar.c
would do in Make. To add to the confusion, one of the things I'm building is
a shared library built from objects kept in a $(TOP)/obj.i386 hierarchy, so I
use the MainFromObjects rule.
Date: Tue, 11 Jul 2000 15:25:22 -0500 (CDT)
Subject: Re: Multiple 'independent' targets
Depends exe : foo bar ;
main foo : foo.c ;
main bar : bar.c ;
LinkLibrary foo : obj.i386 ;
LinkLibrary bar : obj.i386 ;
or something like that.
We actually have customized versions of the Main and LinkLibrary rules, as
well as others.
Date: Thu, 13 Jul 2000 08:38:56 -0400
From: Alex Nicolaou <alex@freedomintelligence.com>
Subject: adding an incremental mode
I'm trying to add an incremental mode to Jam 2.2.5, since building the
dependency tree is pretty expensive and is taking a long time. The
approach I've taken is to modify make() to include a loop, and
re-initialize the target's fate, hfate and progress to their initial
values. This doesn't seem to work properly, as nothing is ever built
after the initial build. I imagine that I'm just not initializing some
element of the target structure and so I am hitting a short-circuit
somewhere; any suggestions?
Date: Thu, 13 Jul 2000 07:23:44 -0700
Subject: Re: adding an incremental mode
From: Matt Armstrong <matt@corp.phone.com>
Try searching the mail archives for this list. A patch to do just
this was circulated about 1.5 years ago.
Date: Thu, 13 Jul 2000 11:41:26 -0700
Subject: Re: adding an incremental mode
From: Matt Armstrong <matt@corp.phone.com>
You're right. I probably mailed one of the people in that thread for
the patch. Unfortunately, I've since lost track of the patch after
changing jobs.
From: Karl Klashinsky <klash@cisco.com>
Subject: Re: adding an incremental mode
Date: Fri, 14 Jul 2000 11:41:23 -0700
I, too, am interested in this patch. Does it exist? The few postings
I read on the archive didn't contain any patch or pointer to a patch.
Date: Thu, 13 Jul 2000 10:55:41 -0700
From: Iain McClatchie <iain@10xinc.com>
Subject: Re: adding an incremental mode
I'm not sure what you mean by incremental mode.
One of the troubles I have with Jam is that it first builds the
dependency tree, and then it figures out what to execute and does
that. I would very much like to be able to modify the dependency
tree during execution -- essentially, I'd like an action to be able
to call a rule.
I thought this was too large a change to make to Jam without a
total rewrite. Is your incremental mode something like what I'm
talking about?
Date: Mon, 17 Jul 2000 14:28:25 -0400
From: Alex Nicolaou <alex@freedomintelligence.com>
Subject: Re: adding an incremental mode
No, it's almost the opposite of what you want. What I want to do
is persist the dependency graph so that it doesn't need to be
recomputed each time - the time it takes to read all the headers
and determine what includes what is the lion's share of my build time.
From: "Hoff, Todd" <Todd.Hoff@ciena.com>
Subject: RE: adding an incremental mode
Date: Mon, 17 Jul 2000 11:30:58 -0700
I like the idea. But for this to work well, don't you need the OS to
tell you about file system changes so you can reevaluate which
dependencies need updating?
Date: Mon, 17 Jul 2000 11:39:43 -0700
Subject: Re: adding an incremental mode
From: Matt Armstrong <matt@corp.phone.com>
The OS tells you by updating the modification time on the file. Say
you cache dependency information for foo.h. Next time you run jam,
jam checks the modification time of foo.h and uses the cached
dependency information only if foo.h hasn't changed.
If I remember correctly, jam's internal data structures are amenable
to this kind of thing. The patch to do this was not large.
Date: Mon, 17 Jul 2000 12:09:42 -0700
Subject: Re: adding an incremental mode
From: Matt Armstrong <matt@corp.phone.com>
Jam has to check the timestamp on all the files in the dependency tree
anyway -- how else would it know what to build?
From: "Hoff, Todd" <Todd.Hoff@ciena.com>
Subject: RE: adding an incremental mode
Date: Mon, 17 Jul 2000 12:17:07 -0700
If you are a listener for OS file changes, then you can update your out-of-date
table from just those changes. You don't have to check every file on every build;
the table should always be up to date. This requires a "server" component to act
as the listener.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Date: Fri, 21 Jul 2000 15:34:06 -0700
Subject: variable default = value
It's not explicit in the jam documentation, but
variable = ;
is equivalent to unsetting the variable.
This is normally not a problem, except if you use
variable default = x ;
the semantics of which should be documented as:
"Set variable to x if variable is unset or variable has zero length (zero elements)"
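A small illustration of that behavior (a sketch; `?=` is jam's shorthand for `default =`):

```jam
VAR = ;              # equivalent to unsetting VAR
VAR default = x ;    # takes effect, because VAR has zero elements
ECHO VAR is $(VAR) ; # prints: VAR is x
VAR ?= y ;           # ?= is a synonym for default= ; no effect now
```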
From: john@nanaon-sha.co.jp (John Belmonte)
Date: Sun, 23 Jul 2000 23:24:16 +0900
Subject: bug in HDRPATTERN
Here is a line pulled from a header in SGI's STL:
#include <time.h> /* XXX should use <ctime> */
Jam's default HDRPATTERN interprets the header name as "time.h> /* XXX
should use <ctime" because the .* in the regexp does greedy matching.
Here is a corrected HDRPATTERN. Be sure to replace each {t} with a real tab.
HDRPATTERN = "^[ {t}]*#[ {t}]*include[ {t}]*[<\"]([^\">]*)[\">].*$" ;
From: john@nanaon-sha.co.jp (John Belmonte)
Date: Mon, 24 Jul 2000 11:16:42 +0900
Subject: MakeLocate problem
I'm still gathering details, but here is what I have so far...
I ran into a problem of targets being remade for no reason, and on top of
that the set of remade targets seemed to change cyclically on every jam run.
This was just for a simple Main rule with a few sources. After pulling my
hair out for a long time I realized that the problem was timing dependent...
it seemed to be related to compile or link times.
After trudging through jam debug output I noticed that all my targets had a
dependency on the current directory. In unix a directory's date is updated
whenever its contents change, which in turn causes my targets to be tagged
as old. If say, object and exe targets are put in the same directory, and
the link takes more than 1 second, the writing of the exe will make the
directory date newer than the object files.
The cause of the dependency is LOCATE_TARGET (and the MakeLocate rule),
which is being set by the SubDir rule. I understand that the reason for the
dependency is to ensure that the target directory exists, but surely we
don't want to be remaking targets when a directory date changes...?
I'm also wondering if this is not related to the problem I posted previously
about using MakeLocate on library targets.
It seems hard to believe that no one else has run into this problem. I must
be making a mistake somewhere?
From: john@nanaon-sha.co.jp (John Belmonte)
Date: Mon, 24 Jul 2000 12:54:14 +0900
Subject: conditional bug?
The jam language docs state that ( cond ) can be used for precedence
grouping. However the following expression will evaluate to true:
TESTVAR = Hello ;
if $(TESTVAR) && ($(TESTVAR) != $(TESTVAR)){ ECHO "true" ; }
If the grouping ( ) is removed the expression evaluates correctly.
From: john@nanaon-sha.co.jp (John Belmonte)
Subject: Re: MakeLocate problem
Date: Mon, 24 Jul 2000 13:11:43 +0900
It turns out that this problem only occurs for targets in the TOP directory,
in other words when LOCATE is set to the current directory ("." or DOT).
Although MkDir correctly sets the NOUPDATE rule on the directory targets, it
does not do this when the directory is DOT. MakeLocate however still does a
Depends in this case, resulting in a dependency on DOT without a NOUPDATE.
Here is a proposed correction to MakeLocate (doing nothing when the
directory is DOT). I'm not that confident that it won't break something else.
rule MakeLocate {
if $(>) && $(>[1]) != $(DOT) {
LOCATE on $(<) = $(>) ;
Depends $(<) : $(>[1]) ;
MkDir $(>[1]) ;
}
}
From: "Greg Loucks" <gloucks@msmail.tti.bc.ca>
Subject: RE: conditional bug?
Date: 25 Jul 2000 09:33:00 -0700
I believe you need spaces around the grouping ()'s as in:
TESTVAR = Hello ;
if $(TESTVAR) && ( $(TESTVAR) != $(TESTVAR) ) { ECHO "true" ; }
Date: Tue, 25 Jul 2000 14:14:21 -0700 (PDT)
Subject: Converting to jam
After having used jam at my last company, I've
come to love and depend on it. However, the new
company I am working at is still using smake.
I'm looking to convert their existing
makefiles to jam on my own and then maybe maintain
them in tandem for a while until the other developers
buy into it.
In any case, here's my question: does anyone have some
automated scripts/tools that will convert makefiles to jamfiles?
Date: 26 Jul 2000 17:47:48 +0200
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Subject: How to: Link libraries?
Hi, I have a bunch of Jamfiles in different sub-directories. All use
the 'Objects' rule and just list the *.cpp filenames. All obj files
are being generated - no problem. Now I want to link them all together
into one DLL, therefore I added a MainFromObjects rule. But well...
the files are missing. How can I refer to all the compiled/needed
object files with one variable?
TOP = f:/openamulet/source ;
MainFromObjects oa.dll $OBJS;
^^^^^^ ???
Subject: RE: How to: Link libraries?
Date: Wed, 26 Jul 2000 11:37:18 -0700
So I was able to get this to work under NT using the following:
SUFEXE = $(SUFLIB:S=.dll) ;
LINKLIBS += $(NAMES_OF_LIB_FILES) ;
LINKFLAGS += "/dll /def:$(NAME_OF_DEF_FILE) /implib:$(NAME_OF_IMPLIB)
/LIBPATH:$(PLACE_TO_LOOK_FOR_LIBS)" ;
Main MyProject :
SomeSource.cpp
;
If the import library goes in the same directory as the .dll file then you
can eliminate the /implib portion. If your libraries are fully qualified,
then you can eliminate the /LIBPATH portion.
Date: 27 Jul 2000 14:16:10 +0200
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Subject: RE: How to: Link libraries?
What are all these $(<) and $(>) ??
Perhaps more detailed information helps:
1. I have a jamrules file in the base directory of the source tree. This
file only contains the compiler/linker flags stuff and sets the TOP
variable.
2. In the base directory I have a jamfile, which contains all the
'SubInclude' statements.
SubDir TOP opal ;
Objects opal.cpp opal_code.cpp opal_operations.cpp ...
That's it! I now tried to add 'MainFromObjects oa.dll ;' to my
jamrules file. (Perhaps it's better to put this into the base
directory jamfile?) But of course the linker states: no files to link...
Ok, but then you have to state the libraries by hand inside the
jamrules file, right? I thought JAM was smart enough to set an implicit
variable (like $objs) which I can refer to from a rule.
From: Walter Boggs <wboggs@cybercrop.com>
Date: Wed, 26 Jul 2000 11:43:55 -0600
Subject: Newbie question
I have built Jam 2.2 and want to use it for Java. However, when I run it, it
wants an environment variable pointing to my C compiler. I built Jam on a
Win2k box that had MSVC, then moved Jam to my current box that has no C
compiler. Do I need one to use Jam with Java?
Subject: RE: How to: Link libraries?
Date: Thu, 27 Jul 2000 13:43:41 -0700
When you specify stuff in Jam, you're specifying a rule with "targets" and
"dependents". Typically it looks something like:
Main foo : foo.c ;
Main is the name of the rule. $(<) refers to the list of stuff between Main
and the colon, and $(>) refers to the list of stuff between the colon and
the semi-colon.
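A throwaway rule makes this concrete (illustrative only):

```jam
rule Show { ECHO targets: $(<) sources: $(>) ; }
Show foo : foo.c bar.c ;
# prints: targets: foo sources: foo.c bar.c
```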
You shouldn't have to set the TOP variable. It gets set automatically by the
first SubDir call.
Ahhh. Specifying "MainFromObjects oa.dll ;" basically says to build oa.dll
from no objects, because no objects are listed on the right-hand side of the
colon ("MainFromObjects oa.dll ;" is really the same as "MainFromObjects
oa.dll : ;").
Currently, jam builds up a list of dependents on an obj target, but I don't
think you can access this as a variable.
I created a little test to see if I could get this to work, and got it working.
I made my top most jamfile look like:
SubDir TOP ;
rule MyObjects {
Objects $(<) ;
local s ;
makeGristedName s : $(<) ;
MY_OBJS += $(s:S=$(SUFOBJ)) ;
}
MyObjects foo.c ; # Only needed if there is source in the topmost directory.
SubInclude TOP Foo1 ;
SubInclude TOP Foo2 ;
ECHO "MY_OBJS =" $(MY_OBJS) ;
SubDir TOP ; # re-execute SubDir to reset grist, locate, etc.
SUFEXE = $(SUFLIB:S=.dll) ;
LINKFLAGS += "/dll /def:foo.def" ;
MainFromObjects foo : $(MY_OBJS) ;
and the subdirectory jamfiles looked like:
SubDir TOP Foo1 ;
MyObjects foo1.c ;
I had to put in a second invocation of SubDir to reset the SOURCE_GRIST and
LOCATE_TARGET etc, so that the invocation of MainFromObjects would work
properly. This is because the SubInclude'd jamfiles execute SubDir and
MainFromObjects uses makeGristedSource, so it would get the grist from the
last SubInclude'd jamfile rather than the one we want.
So basically, it provides a MyObjects rule which builds up a variable
MY_OBJS which contains a list of object files, which is then used by the
MainFromObjects rule.
A better place to put the MyObjects rule would be in the Jamrules file
located in the topmost directory.
This example builds a .dll.
Subject: RE: Newbie question
Date: Thu, 27 Jul 2000 14:26:54 -0700
I think you should be able to define one of MSVC, MSVCNT, or BCCROOT and
just point it to some directory (existent or not). As long as you don't
try to compile any C/C++, I think that this will work (although I haven't
tested it).
If you feel really ambitious, you could modify the Jambase located in the
src directory to not generate an error if one of these is not defined
(assuming of course that pointing one to some arbitrary location works).
Date: Thu, 27 Jul 2000 15:03:09 -0700 (PDT)
Subject: Re: Newbie question
You don't need a compiler -- you just need to satisfy Jam's need to have
one of BCCROOT, MSVCNT, or MSVC set. If I'm on an NT, and not intending to
do any C compiles (which I assiduously try to avoid ever doing on an NT
:), I usually just set MSVC to foo.
On the other hand, if you know for sure you won't ever be doing any C/C++
compiling/linking/etc., you could just get rid of all that if'ing and
else'ing (and its associated variables-setting) in the Jambase file (do it
in jambase.c and recompile to have it be the default, so you don't have to
point to Jambase on the command-line).
Just as a side note: Jam isn't really particularly geared towards Java, so
you might want to consider using a build tool that is.
From: Morgan Fletcher <morgan.fletcher@luna.com>
Subject: RE: Newbie question
Date: Thu, 27 Jul 2000 15:07:06 -0700
I know about Ant. (http://jakarta.apache.org/ant/index.html) Are there others?
Date: 29 Jul 2000 11:31:34 +0200
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Subject: RE: How to: Link libraries?
Thanks for the detailed explanation. I'm going to try it out and let
you know but it looks like it's exactly what I need. Thanks again
Date: Mon, 31 Jul 2000 12:57:21 -0700 (PDT)
Subject: Java build tools (was: RE: Newbie question)
There were a number of them in development for a while, but I think most of
them died off once Ant came along. There are a couple, though, that got
out before it did, but it's hard to say what their current state (as in
fate) actually is. I haven't used any of them, so I can't make any
comments about how good (or not) they are.
There's one called JarMaker:
www.gjt.org/servlets/JCVSlet/show/gjt/org/gjt/jem/jarmaker/README/1.3
And there's 'jmk', which is more or less 'make' written in Java, but it
does appear to have a strong Java-use slant to it:
www.ccs.neu.edu/home/ramsdell/make/edu/neu/ccs/jmk/jmk.html
I also ran across a tool the other day called STIX, which isn't available
yet (it isn't strictly Java-centric, but would work for it, as well as for
C/C++, etc.) -- it sounds like it could be pretty interesting:
A couple of articles that might be of interest:
www.inf.cbs.dk/staff/nielsj/research/make/tool.html (a little outdated,
but with some reasonably good info)
www.geosoft.no/javamake.html (Make-oriented, but ditto above "but")
Date: 1 Aug 2000 20:19:06 +0200
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Subject: RE: How to: Link libraries?
Hi, I now tried your solution. It's getting me closer. However, now I
have the problem that the command line for the linker is too long :-|
Is there an easy way to let Jam split it up and call the linker
several times, or to redirect the content to a file?
Date: Tue, 1 Aug 2000 14:44:26 -0400
From: Donald Sharp <sharpd@cisco.com>
Subject: Re: How to: Link libraries?
look at the actions piecemeal modifier.
http://public.perforce.com/public/jam/src/Jamlang.html
It's under Rules/Action Modifiers.
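For reference, a piecemeal action is re-invoked with as many of $(>) as fit within the command-line limit each time, so it only helps commands that can safely be run repeatedly against the same target. A sketch along the lines of an archive action (illustrative, not necessarily the stock Jambase version):

```jam
# jam splits $(>) across as many invocations as needed;
# "together" first gathers all sources declared for the same target.
actions together piecemeal Archive {
    ar ru $(<) $(>)
}
```

A one-shot linker doesn't fit this model, which is why a response-file approach comes up later in this thread.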
Subject: RE: How to: Link libraries?
Date: Tue, 1 Aug 2000 11:52:12 -0700
Unfortunately, the linker doesn't really lend itself to running multiple
times (i.e. using the piecemeal option), and jam doesn't seem to support the
notion of "response" files (maybe I'm missing something). One approach would
be to merge several libraries into a bigger library and pass that on the
command line (thus reducing the size of the command line). If you're feeding
raw objects, creating libraries is probably the easiest way to go.
Another approach is to rewrite the Link rule/action to do something like:
rule Link {
MODE on $(<) = $(EXEMODE) ;
Chmod $(<) ;
local i ;
StartLink $(<) : $(>) ;
for i in $(>) { LinkItem $(<) : $(i) ; }
FinishLink $(<) : $(>) ;
}
actions quietly Link {}
actions quietly StartLink { $(RM) $(<:S=.rsp) }
actions quietly LinkItem { ECHO $(>) >> $(<:S=.rsp) }
actions FinishLink bind NEEDLIBS {
$(LINK) $(LINKFLAGS) /out:$(<) $(UNDEFS) @$(<:S=.rsp) $(NEEDLIBS)
$(LINKLIBS)
}
There doesn't appear to be any way to delete an action once it's been
created, so I've been using the form above for Link. The word "quietly"
means that it won't print the "Link someoutput" message.
I tested this under NT and it seems to work for the simple test that I have.
You may wish to add the line
$(RM) $(<:S=.rsp)
to the FinishLink action to remove the .rsp file when you're done.
Date: 2 Aug 2000 09:38:04 +0200
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Subject: RE: How to: Link libraries?
Hi, I did, but either I don't get it or it's not working as expected. I
collected the object files in a variable. Then I added:
actions piecemeal OAObjects {}
in the hope that the action would be run until a 'shell-buffer-overrun'
was approaching, and that Jam would continue normal processing and later
return to where it left off to continue the build process. But this
failed :-(
I expect that I have to add the link command inside the 'piecemeal'
scope, right?
Date: Wed, 2 Aug 2000 08:29:41 -0400
From: Donald Sharp <sharpd@cisco.com>
Subject: Re: How to: Link libraries?
That sounds right. I would have put the piecemeal
subcommand under the Link action.
From: "Greg Loucks" <gloucks@msmail.tti.bc.ca>
Date: 2 Aug 2000 10:14:00 -0700
Subject: multiple target platforms?
I would like to be able to use a single run of Jam to build multiple targets
for an embedded system (pSOS+). I have projects that have targets that can be
built for:
1. simulation environment (an addin to Visual Studio)
2. evaluation board (my target processor with distinct Board Support Package)
3. final hardware (my hardware and my BSP)
4. host OS, Windows NT (for source generation executables)
And, if that weren't enough, I also require the ability to build some tools
for the above target platforms _and_ Windows CE:
5. WinCE simulation environment
6. WinCE CEPC target platform (like an Eval Board)
7. WinCE on target device (different CPU, different BSP)
Has anyone come across this issue before?
I've fiddled a little bit with the code and came up with a hardware "grist"
that I can add to every file which specifies the target system, os, config etc.
Then I have a target-system-related action for every environment.
It seems pretty hokey to me.
E.g.
a Jamfile
=========
P1 = d.estp.pp ; # d for debug; estp is the eval board; pp is pRISM+ environ
P2 = d.wipc.ps ; # d for debug; wipc is wintel pc; ps is pSOSim
P3 = r.mybd.pp ; # r for release; mybd is my board; pp is pRISM+
S = exception.cpp object.cpp task.cpp queue.cpp mutex.cpp semaphore.cpp ;
LINKFLAGS$(P1:S) += -e _START ram-estp.dld ;
LINKFLAGS$(P3:S) += -e _START ram-mybd.dld ;
CCFLAGS$(P1:S) += -Xsmall-data=8 -ei1683 -g2 -Xno-optimized-debug ;
CCFLAGS$(P3:S) += -Xsmall-data=8 -ei1683 -g2 -Xno-optimized-debug ;
C++FLAGS$(P1:S) += -Xsmall-data=8 -ei1683 -g2 -Xno-optimized-debug ;
C++FLAGS$(P3:S) += -Xsmall-data=8 -ei1683 -g2 -Xno-optimized-debug ;
PLibrary $(P1) $(P2) $(P3) : tools-PSos : $(S) ;
PMain $(P1) : PSos-test-eval : test.cpp ; # generates an .elf file
PMain $(P2) : PSos-test-sim : test.cpp ; # generates an .exe file
PMain $(P3) : PSos-test : test.cpp ;
PLinkLibraries $(P1) $(P2) $(P3) : tools-PSos-test :
$(TOP)/system/pSOS/pSOS.a tools-PSos.a ;
The Jambase file contains rules like PLibrary which applies my hardware "grist"
to each target and source and then refers to the regular Library rule:
rule PLibrary # platform(s) : library : source(s)
{ local i ; for i in $(1) { Library $(2:H=$(i)) : $(3:H=$(i)) ; } }
Then I have to modify all the rules and actions all the way down to check the
hardware settings and call the appropriate actions:
rule Link {
local p ;
makePlatform p : $(<) ; # get the Hardware grist
if $(p) {
LINK on $(<) = $(LINK$(p:S)) ;
LINKFLAGS on $(<) = $(LINKFLAGS$(p:S)) ;
UNDEFS on $(<) = $(UNDEFS$(p:S)) ;
LINKLIBS on $(<) = $(LINKLIBS$(p:S)) ;
switch $(p:S) {
case .pp : Link.pp $(<) : $(>) ;
case .vs : Link.vs $(<) : $(>) ;
# etc...
}
}
MODE on $(<) = $(EXEMODE) ;
Chmod $(<) ;
}
Date: Wed, 2 Aug 2000 12:53:08 -0700 (PDT)
Subject: RE: How to: Link libraries?
Have you tried just cranking up the value of MAXLINE in jam.h? I have it
set to 32768 on an NT and I've never had a problem (w.r.t. too-long lines,
anyway).
Date: Thu, 3 Aug 2000 16:07:29 +1000 (EST)
From: David Funk <d.funk@photonics.com.au>
Subject: Making non-unique target names fails
I'm using the latest jam sources from http://www.perforce.com/jam/jam.html
(version 2.2.1), compiled for FreeBSD 4.0.
In real life, I have a project that contains multiple libraries in
subdirectories, with each generating its own host test file to do unit
tests. I've simplified this in the following test setup:
/tmp
|
/\ SubInclude TOP sub1 ;
/ \ SubInclude TOP sub2 ;
/ \
/ \
| SubDir TOP sub2 ;
| Main hosttest : hosttest.c ;
|
SubDir TOP sub1 ;
Main hosttest : hosttest.c ;
TOP is set to /tmp/jamtest. I have an empty /tmp/jamtest/Jamrules.
Running jam gives me:
1 $ jam -d2
2 ...found 17 target(s)...
3 ...updating 3 target(s)...
4 Cc /tmp/jamtest/sub1/hosttest.o
5
6 cc -c -O -I/tmp/jamtest/sub1 -o /tmp/jamtest/sub1/hosttest.o \
7 /tmp/jamtest/sub1/hosttest.c
8
9 Cc /tmp/jamtest/sub2/hosttest.o
10
11 cc -c -O -I/tmp/jamtest/sub2 -o /tmp/jamtest/sub2/hosttest.o \
12 /tmp/jamtest/sub2/hosttest.c
13
14 Link /tmp/jamtest/sub2/hosttest
15
16 cc -o /tmp/jamtest/sub2/hosttest /tmp/jamtest/sub1/hosttest.o
17
18 Chmod /tmp/jamtest/sub2/hosttest
19
20 chmod 711 /tmp/jamtest/sub2/hosttest
21
22 Link /tmp/jamtest/sub2/hosttest
23
24 cc -o /tmp/jamtest/sub2/hosttest /tmp/jamtest/sub2/hosttest.o
25
26 Chmod /tmp/jamtest/sub2/hosttest
27
28 chmod 711 /tmp/jamtest/sub2/hosttest
29
30 ...updated 3 target(s)...
31 $
Lines 6 and 11 compile the two hosttest.c files as expected.
However, line 16 links sub1/hosttest.o into sub2/hosttest instead of
sub1/hosttest. Then line 24 links sub2/hosttest.o into sub2/hosttest,
overwriting the previous link. sub1/hosttest is never made.
Am I doing something wrong? Is jam designed to have non-unique target names in
a project? Is this a bug?
Date: 4 Aug 2000 12:28:19 +0200
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Subject: RE: How to: Link libraries?
I tried that, but the shell states: command line too long. How have you changed the
limit of the command-line length in NT? Robert
Subject: RE: Making non-unique target names fails
Date: Sun, 6 Aug 2000 11:11:35 -0700
I recreated your little example, and then did the following:
jam -d6 | grep Depends
I got the following output (note that I did this under NT, which is why the
extensions are a little different than what your exact example would
produce):
In particular notice the two lines:
The second dependency will overwrite the first one, and this is why you're
only getting a single target built. Jam uses the notion of "grist" to aid in
distinguishing duplicate targets. You'll see that hosttest1.obj is prefixed
by <sub1>. <sub1> is referred to as the "grist".
I think of "targets" in jam as arbitrary text strings or labels. Each label
gets bound to a real file on disk.
When you add grist, you are creating labels which are more unique. When you
do this it is very important that the grist is added both when you identify
the target and when you use it. For example, in Jambase, the Objects rule
makes object targets which associate object files with source files. The
Objects rule applies grist to both the source and object names. When the
MainFromObjects rule wants to associate objects with a final output target,
it needs to make sure that it applies the exact same grist.
I modified your example by creating the following rule (and I would put this
rule in your Jamrules file):
rule MyMain {
local t ;
makeGristedName t : $(<) ;
Main $(t) : $(>) ;
}
and modified the lower level Jamfile to use MyMain rather than Main. If you
repeat the jam -d6 | grep Depends with these modifications you'll see:
and now the same corresponding two lines that I extracted earlier look like:
and you get two executable programs built rather than one.
Anybody who is dealing with multi-directory jam projects should take the
time to become familiar with, and understand, the concept of grist. It can
be a very powerful tool. I found grist to be a little mysterious when I
started to use jam, but now that I understand it, it gives me a great deal
of control over how things work.
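To make the idea concrete, a tiny hypothetical sketch: once grist is applied, two files that share a basename become distinct labels, each of which can bind to its own file on disk (directory and file names below are illustrative).

```jam
# <sub1>hosttest.c and <sub2>hosttest.c are different targets to jam:
SEARCH on <sub1>hosttest.c = sub1 ;
SEARCH on <sub2>hosttest.c = sub2 ;
Object <sub1>hosttest$(SUFOBJ) : <sub1>hosttest.c ;
Object <sub2>hosttest$(SUFOBJ) : <sub2>hosttest.c ;
```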
Date: Mon, 14 Aug 2000 13:44:58 +0100
From: Paul Haffenden <pjh@unisoft.com>
Subject: Regular expression variable editing
I've added some changes to expand.c to implement
regular expression substitutions on variables. It uses a :E modifier.
e.g.
instring = "Now let all good men lend me five pounds" ;
# Delete the first 'o'
first = $(instring:E=/o//) ;
# Delete all the 'o's. Uses the global flag g.
second = $(instring:E=/o//g) ;
# Use () to remember part of the pattern string, and use
# it to replace \1.
third = $(instring:E="/good (...)/good wo\\1/") ;
# Use the magic character '&', that replaces the whole matched
# string. Note use of a different delimiter from '/'
fourth = $(instring:E="'good'& &'") ;
ECHO $(instring) ;
ECHO $(first) ;
ECHO $(second) ;
ECHO $(third) ;
ECHO $(fourth) ;
produces:
Now let all good men lend me five pounds
Nw let all good men lend me five pounds
Nw let all gd men lend me five punds
Now let all good women lend me five pounds
Now let all good good men lend me five pounds
It's a bit quirky; you have to be careful with your
'\', ':', '[' and ']' characters.
If anyone would find this change useful, email me and
I'll send you the amended source.
Date: Mon, 14 Aug 2000 10:58:59 -0500
From: Eric Scouten <scouten@Adobe.COM>
Subject: Re: Regular expression variable editing
Yes, I would certainly find this useful. This sounds like something that
belongs in the public P4 depot at public.perforce.com. I'll be happy to
submit it there if that's okay with you.
Date: Tue, 15 Aug 2000 17:23:46 -0700
From: Jos Backus <josb@corp.webtv.net>
Subject: SEARCH_SOURCE question
I'm very new to Jam so I may be missing something very obvious here...
How do I go about building a library consisting of sources that are grouped in
separate directories?
hash.c hash_bigkey.c live in ../hash; bt_close.c bt_conv.c live in ../btree. I tried
SRC1 = hash.c hash_bigkey.c ;
SRC2 = bt_close.c bt_conv.c ;
SEARCH_SOURCE = ../hash ../btree ;
Library libdb.a : $(SRC1) $(SRC2) $(SRC3) $(SRC4) $(SRC5) $(MISC) ;
This works but is ugly because I know that hash.c lives in ../hash, not in
../btree. I thought it would be possible to say something like
SEARCH_SOURCE on $(SRC1) = ../hash ;
SEARCH_SOURCE on $(SRC2) = ../btree ;
but that doesn't work: jam says
don't know how to make hash.c
Subject: RE: SEARCH_SOURCE question
Date: Wed, 16 Aug 2000 11:33:52 -0700
I think you want to use
SEARCH on $(SRC1) = ../hash ;
SEARCH on $(SRC2) = ../btree ;
but I'm pretty sure that this won't work exactly. The Library rule calls the
Objects rule which in turn sets SEARCH on each source file, and you would
need to use gristed versions of $(SRC1), not $(SRC1) itself. So the complete
solution using this technique would need to look like:
Library libdb.a : $(SRC1) $(SRC2) $(SRC3) $(SRC4) $(SRC5) $(MISC) ;
local s ;
makeGristedName s : $(SRC1) ;
SEARCH on $(s) = ../hash ;
makeGristedName s : $(SRC2) ;
SEARCH on $(s) = ../btree ;
It's important that the "SEARCH on" appear after the Library since the
Library rule calls "SEARCH on" which would wipe out any "SEARCH on"'s done
prior to the Library rule.
Another thing that you should be able to do is this:
SEARCH_SOURCE = ../hash ;
Library libdb.a : $(SRC1) ;
SEARCH_SOURCE = ../btree ;
Library libdb.a : $(SRC2) ;
I haven't tried either of these, so you'll need to try these and figure out
if they do what you want and which approach will satisfy your needs.
Subject: RE: SEARCH_SOURCE question
Date: Wed, 16 Aug 2000 21:29:49 -0700
This example won't work for 2 reasons:
1 - $(SRC1) is the raw source file names and doesn't include grist.
2 - The SEARCH on got wiped out by the SEARCH on that happens through the
Library rule.
If you call SubDir, then grist (which is derived from the directory
information passed to SubDir) is added to source files and object files. If
you don't call SubDir then the grist is empty and you'd get the same results
with or without grist being applied. Since many people use SubDir, I always
assume that it's being used and code accordingly. In the examples below,
I've assumed that SubDir (and hence grist) is being used.
When you say
Library libdb.a : $(SRC1) ;
this is the same as saying
Library libdb.a : hash.c hash_bigkey.c ;
The Library rule calls LibraryFromObjects with $(<) and $(>:S=$(SUFOBJ)),
which is effectively:
LibraryFromObjects libdb.a : hash.obj hash_bigkey.obj ;
The Library rule calls Objects with $(>) so this is effectively:
Objects hash.c hash_bigkey.c ;
Objects makes gristed names and calls Object for each one, so you'll get:
Object <Dir1!Dir2>hash.obj : <Dir1!Dir2>hash.c ;
Object <Dir1!Dir2>hash_bigkey.obj : <Dir1!Dir2>hash_bigkey.c ;
Object executes
SEARCH on $(>) = $(SEARCH_LOCATE) ;
so in effect it's doing:
SEARCH on <Dir1!Dir2>hash.c = ../hash ;
LibraryFromObjects adds grist to $(>), and for the simple case you wind up
with libdb.a depending on <Dir1!Dir2>hash.obj and
<Dir1!Dir2>hash_bigkey.obj.
<Dir1!Dir2>hash.obj depends on <Dir1!Dir2>hash.c
So the dependency tree is built up using the gristed names. You can see this
quite clearly if you run
jam -d6 | grep Depends
Saying
SEARCH on hash.c = blah ;
is quite different from saying
SEARCH on <Dir1!Dir2>hash.c = blah ;
hash.c and <Dir1!Dir2>hash.c are two completely different targets.
This will only work in the situation where grist is not used (i.e. you don't
call SubDir); when you don't call SubDir, gristed names are the same as
ungristed names.
This point is crucial in jam, since unlike make, jam does everything when
it's executed, and nothing is deferred.
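[Ed.: to make the gristed-name point concrete, here is a minimal sketch; the directory names are illustrative, and it assumes SubDir has been called with Dir1 Dir2 so that the grist is <Dir1!Dir2>. The :G variable modifier adds grist to each element of a list.]

```
# Sketch only: assumes "SubDir TOP Dir1 Dir2 ;" has run earlier,
# so sources and objects are gristed as <Dir1!Dir2>name.
SRC1 = hash.c hash_bigkey.c ;

Library libdb.a : $(SRC1) ;

# Override SEARCH on the *gristed* source targets, after Library has run.
# $(SRC1:G=Dir1!Dir2) expands to <Dir1!Dir2>hash.c <Dir1!Dir2>hash_bigkey.c
SEARCH on $(SRC1:G=Dir1!Dir2) = ../hash ;
```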
I'm glad I could help. Examples do help. I've also learned a lot by trying to
figure out other people's problems. I find the act of working through the
problem always gives me a new perspective, and allows me to learn more about
how the program works.
Date: Sun, 20 Aug 2000 20:35:51 -0500
From: Eric Scouten <eric@scouten.com>
Subject: Emacs syntax coloring for Jamfiles
For those of you who are in the habit of using Emacs to edit your Jamfiles,
you might want to take a look at jam-mode.el which I just submitted to
public.perforce.com (see //guest/eric_scouten/jam-mode/...).
It does a reasonably good job of syntax-coloring Jam files. It's my first
significant attempt at Emacs-lisp coding, so let me know if there are problems.
From: Jos Backus <josb@microsoft.com>
Subject: RE: SEARCH_SOURCE question
Date: Wed, 16 Aug 2000 14:44:07 -0700
...but this doesn't work. In fact, it was the first thing I tried. Then I
read about the Library rule setting
SEARCH to SEARCH_SOURCE and tried that. I still don't grasp why it doesn't
work. It would be so OO-ish :)
Forgive my ignorance, but why is this so?
This works. Interestingly, this also works:
Library libdb.a : $(SRC1) $(SRC2) ;
SEARCH on $(SRC1) = ../hash ;
SEARCH on $(SRC2) = ../btree ;
So what's important is _when_ the SEARCH (attribute) is bound to the files.
Yes, this one works as well.
Imo, it's examples like these that are really helpful in understanding and using Jam.
From: Jos Backus <josb@microsoft.com>
Subject: RE: SEARCH_SOURCE question
Date: Thu, 17 Aug 2000 10:52:14 -0700
OK, this bites you in the SubDir case because the names become gristed.
I still don't quite understand why
SEARCH_SOURCE on $(SRC1) = ../hash ;
SEARCH_SOURCE on $(SRC2) = ../btree ;
Library libdb.a : $(SRC1) $(SRC2) ;
doesn't work, as the Object rule (called from Library -> LibraryFromObjects
-> Objects)
does use SEARCH_SOURCE to set SEARCH on each target, right?
I understand that when using SubDir you would need to set SEARCH_SOURCE
on the gristed names, but in this case LibraryFromObjects wouldn't do
any gristing because there is none (because I'm not using SubDir).
Surely you mean SEARCH_SOURCE here.
Date: Sun, 20 Aug 2000 20:33:05 -0500
From: Eric Scouten <eric@scouten.com>
Subject: Re: Regular expression variable editing
Hello... I've just submitted Paul Haffenden's regexp mods to
public.perforce.com. Paul wrote that he had tested on Linux and Solaris
only; I was able to confirm that it works on Windows NT as well. Perhaps
others could comment on other platforms...
For now, you can read this version from //guest/eric_scouten/jam/...
From: "Thorsten Schiller" <tschiller@ifhl.com>
Date: Mon, 28 Aug 2000 16:21:31 -0400
Subject: v2.2.5 deleting source files
I'm new to JAM and will apologize in advance if this issue has been
discussed before. While I have visited the archive, I'm stuck on a very
slow link right now and couldn't search through all the old messages.
I grabbed the most current jam build (2.2.5) and set up a little test
directory just to ensure that I got the basics right. I didn't, of course.
I made a dumb typo and, while the mistake is entirely my own, jam's
behaviour of deleting targets (RELNOTES for 2.2.5) may need some ...
fine-tuning.
It turns out that if you fail to leave a space between the target name and
the colon, jam may end up deleting your source code. I gather this is
because jam sees the source files as additional targets that need to be
deleted when the build fails?
I understand that the parsing is kept simple on purpose to make jam as
flexible as possible; however, anything that can result in the accidental
destruction of source code likely warrants some form of warning mechanism
(even a flag that I need to manually enable that reports that one of my
targets contains a colon would be perfectly adequate). I'm running linux
with gcc 2.95.3. I've included steps to recreate my result:
$ jam -v
Jam/MR Version 2.2.5. Copyright 1993, 1999 Christopher Seiwald.
$ echo blah >test1.c
$ echo blah >test2.c
$ cat Jamfile
Main test: test1.c test2.c ;
$ jam
warning: no targets on rule Objects
...found 10 target(s)...
...updating 1 target(s)...
Link test: test1.c test2.c
test1.c:1: unterminated character constant
test2.c:2: unterminated character constant
cc -o test: test1.c test2.c
...failed Link test: test1.c test2.c ...
...removing test1.c
...removing test2.c
...failed updating 1 target(s)...
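[Ed.: for reference, the intended Jamfile only needs whitespace around the colon, since jam's scanner splits statements purely on whitespace:]

```
# Correct form: "test" is the target; test1.c and test2.c are the sources.
Main test : test1.c test2.c ;
```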
From: john@nanaon-sha.co.jp (John Belmonte)
Date: Thu, 31 Aug 2000 21:25:55 +0900
Subject: header search order & the current directory
If you have a header file in the current directory that has the same name as
a header in your include path (say because you are installing the header
into somewhere in your include path), then the jam header scanning and a
compiler such as gcc will not agree about which header is used. The jam
header scan will use the header in the include path, while gcc will use the
header in the current directory. Of course it can be argued that a project
should never be including a directory that it's installing headers into.
A workaround is to put $(DOT) at the start of your HDRS variable to force
gcc and jam to agree about where the current directory is in search order.
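[Ed.: a minimal sketch of that workaround, using the $(DOT) and HDRS variables as in the stock Jambase:]

```
# Put the current directory first so jam's header scanner and gcc
# search the include directories in the same order.
HDRS = $(DOT) $(HDRS) ;
```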
Date: Mon, 04 Sep 2000 17:53:40 +0200
From: Dominic WILLIAMS <d.williams@csee-transport.fr>
Subject: Parallel builds on NT ?
The home page (http://www.perforce.com/jam/jam.html) states among
features that: "On UNIX and NT, jam can do this with multiple, concurrent
processes. " On the other hand, the documentation on the jam command
(http://public.perforce.com/public/jam/src/Jam.html) says that the -j option
(number of concurrent jobs) is "UNIX only".
From: Alexander Nicolaou <anicolao@mud.cgl.uwaterloo.ca>
Subject: Re: Parallel builds on NT ?
Date: Mon, 4 Sep 2000 12:49:29 -0400 (EDT)
I assume -j is implemented for NT, but if you're using MSVC's compiler
you cannot use it. The issue is that to save space and time cl creates
a ".pdb" file which is a database of debugging information for your
program, and only one compile process at a time can update this file;
the second simultaneous compile will fail (just like the irritating
feature that you can't compile while also running the debugger).
With g++ you will not have this problem, but g++ is about twice as
slow as MSVC at compile time, so there is zero gain on a dual
processor machine. (Although g++ accepts a more complete subset of
ANSI C++ than MSVC, particularly in regard to templates.)
From: Martine Habib <mhabib@microsoft.com>
Subject: RE: Parallel builds on NT ?
Date: Mon, 4 Sep 2000 10:16:44 -0700
You can use the -j option if you do not use "compiler pdb files" (obtained
by using the /Zi option) but "linker" pdb files (obtained by using the /pdb:
linker option). The price is that either you get an "all release" build with
limited debug info, or you have to build using /Z7 with full debug info.
Date: Mon, 4 Sep 2000 16:33:32 -0700 (PDT)
Subject: RE: Parallel builds on NT ?
Just to clarify, you mean that if I want to use the -j
option with jam and MSVC, then I just use /Z7 with cl,
right? What are the disadvantages to using /Z7 vs. /Zi
? I presume that you would lose the ability to do
"edit and debug" but I don't tend to use this feature anyways.
From: Martine Habib <mhabib@microsoft.com>
Subject: RE: Parallel builds on NT ?
Date: Mon, 4 Sep 2000 16:52:06 -0700
You also lose incremental linking and compilation. This can still be OK if
your modules are not too large, as you gain quite a lot of speed by building
in parallel. You do, of course, also lose "Edit and Continue".
Date: Tue, 05 Sep 2000 15:21:42 -0400 (EDT)
From: Lee Marzke <lmarzke@kns.com>
Subject: Setting environment variables for compiler
How would you set an environment variable
e.g. GCC_EXEC_PREFIX
in a Jamfile? ( on Unix ) We have different targets that use
different version of GCC and this changes often.
Date: Wed, 06 Sep 2000 13:48:30 +0200
From: Dominic WILLIAMS <d.williams@csee-transport.fr>
Subject: automagically listing source files
Thanks for your useful information on parallel builds on NT. One thing I
can say for Jam is that it has a friendly and competent user group !
With gnu make, I am used to listing source files automatically, e.g.
myprog: $(wildcard *.cpp)
This way, the makefile can be set up once and for all, and does not need
to be modified each time a new source file is added to the project. Is this
sort of thing possible using Jam ?
From: "Sawhney, Davinder" <DSawhney@ciena.com>
Date: Fri, 8 Sep 2000 10:41:58 -0400
Subject: Anyone looking for Jamming job - Maryland-Baltimore area
I am not sure if this is the correct use of this group
but any interested in consulting or permanent job in the
Maryland area can email me with their resume
Date: Fri, 8 Sep 2000 12:36:39 -0400
From: "Lex Spoon" <lex@cc.gatech.edu>
Subject: Re: automagically listing source files
In my view, one of the nice things about Jam is that it gets away from
wildcard stuff. What's so hard about:
SourceFile foo.c ;
SourceFile bar.c ;
(etc)
If you can write the code for a file, then you can certainly add a line
to the Jamfile for it. And the really nice thing is, you can say what
*kind* of source file it is, even if the filename doesn't help:
ExtraOptimizedSourceFile zap.c ;
Furthermore, if you have any automatically generated *.c files, they
won't get picked up.
So anyway, you *can* do things like what you describe, but I actually
like having things listed out, better.
Date: Fri, 8 Sep 2000 12:33:43 -0400
From: "Lex Spoon" <lex@cc.gatech.edu>
Subject: Re: Setting environment variables for compiler
What does this have to do with environment variables, exactly? You can
do things like:
CC on foo.o = /bin/gcc ;
(that is jam's target-specific variable syntax). Anyway, then the regular
Jam commands will use that version of CC just for foo.o.
Date: Fri, 08 Sep 2000 14:57:26 -0400 (EDT)
From: Lee Marzke <lmarzke@kns.com>
Subject: Re: Setting environment variables for compiler
According to the manual, variables are not exported to the shell.
See excerpt below:
Jam/MR variables are not re-exported to the shell that executes the updating
actions, but the updating actions can reference
Jam/MR variables with $(variable).
Unfortunately I need to export the variable.
Date: Fri, 8 Sep 2000 13:10:26 -0700
Subject: Re: Setting environment variables for compiler
From: Matt Armstrong <matt@corp.phone.com>
Since this is unix only, maybe try
CC = env GCC_EXEC_PREFIX=/whatever /wherever/this/one/is/gcc ;
You'll end up with long command lines, but unix tends to handle that okay.
Date: Fri, 8 Sep 2000 17:41:52 -0400
From: "Lex Spoon" <lex@cc.gatech.edu>
Subject: Re: Setting environment variables for compiler
Surely you can export variables explicitly by changing commands like this:
mycommand myargs
to this:
FOO=$FOO mycommand myargs
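[Ed.: in Bourne shell, a leading VAR=value assignment is exported only into that one command's environment, not into the calling shell. A small illustration; the variable name is just the example from this thread:]

```shell
#!/bin/sh
# The assignment prefix is visible to the child process...
GCC_EXEC_PREFIX=/opt/gcc/lib/ sh -c 'echo "child: $GCC_EXEC_PREFIX"'
# ...but the calling shell's own environment is untouched.
echo "outer: ${GCC_EXEC_PREFIX:-unset}"
```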
Date: Thu, 14 Sep 2000 08:56:01 +0200
From: Dominic WILLIAMS <d.williams@csee-transport.fr>
Subject: Newbie: simplest system for release/debug variants ?
I have a simple jam setup for a project with a complex directory
structure, using SubDir and SubInclude. Everything (obj, lib, exe) gets
built in the source directories.
Now I need to add release/debug variants. I am happy to build these
separately (invoking jam with -sDEBUG=1, for example), but I don't want
to get the targets mixed up.
What is the simplest way to do this ?
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Thu, 14 Sep 2000 14:46:09 +0100
Subject: RE: Newbie: simplest system for release/debug variants ?
You could let the release build in the source directories,
and have the debug files built in a subdirectory of each SubDir.
After the SubDir invocation, just write:
if $(DEBUG) { LOCATE_TARGET = $(SUBDIR)/debug ; }
Date: Thu, 14 Sep 2000 14:52:52 +0200
From: Dominic WILLIAMS <d.williams@csee-transport.fr>
Subject: RE: Newbie: simplest system for release/debug variants ? -REPONSE
Just out of interest, could this be put in some kind of rule, to avoid having
to write it in each Jamfile of the directory tree ?
You could let the release build in the source directories,
and have the debug files built in a subdirectory of each SubDir.
After the SubDir invocation, just write:
if $(DEBUG) {
LOCATE_TARGET = $(SUBDIR)/debug ;
}
Date: Thu, 14 Sep 2000 14:58:30 +0200
From: Dominic WILLIAMS <d.williams@csee-transport.fr>
Subject: MAXLINE problem
Jam is producing a command (a final link) that is too long for the
command shell (on NT 4.0).
1) I left MAXLINE in jam.h set to 996 for NT, but this does not seem to be
having any effect.
2) The README that comes with the jam distribution actually mentions
that NT 4.0 is no longer limited to 996. So how come I am having a
problem with this ?
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Thu, 14 Sep 2000 15:16:49 +0100
Subject: RE: Newbie: simplest system for release/debug variants
You could create a new SubDir rule (in your Jamrules file):
rule MySubDir {
SubDir $(<) ;
if $(DEBUG) { LOCATE_TARGET = $(SUBDIR)/debug ; }
}
And use this one instead of the standard rule.
Date: Thu, 14 Sep 2000 08:41:45 -0500
From: Eric Scouten <scouten@Adobe.COM>
Subject: RE: Newbie: simplest system for release/debug variants ? -REPONSE
If you look at the definition of SubDir (in Jambase), the first Jamfile
that gets read in relies on SubDir to read Jamrules, so your build will
fail with an unknown command "MySubDir" if you start jam from a directory
with a Jamfile that uses this rule because it hasn't seen the definition in
Jamrules yet. :-(
For this to work, you'd have to add the rule to Jambase and rebuild Jam.
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Thu, 14 Sep 2000 16:43:58 +0100
Subject: Re: MAXLINE problem
The only way is to create a response file, containing the files to link,
and use it in the link command:
actions Link bind NEEDLIBS IMPLIB {
echo $(>) > $(<:S=.tmp)
echo $(LINKLIBS) >> $(<:S=.tmp)
echo $(NEEDLIBS) >> $(<:S=.tmp)
$(LINK) $(LINKFLAGS) /out:$(<) $(UNDEFS) @$(<:S=.tmp)
}
Following the old make(1) utility, I also hacked Jam to handle diversions:
in actions, all the text between <+ and +> is expanded, written into a
temporary file, and the file name is put on the command line:
actions Link bind NEEDLIBS {
$(LINK) $(LINKFLAGS) /out:$(<) $(UNDEFS) @<+$(>) $(NEEDLIBS) $(LINKLIBS)+>
}
I suppose it would be useful to put this in the Public Depot,
if there is general agreement on the <+ +> syntax.
Unfortunately I made lots of other changes in Jam since the original 2.1.5,
and it would require some work to integrate the changes back to the trunk.
Date: Thu, 14 Sep 2000 10:16:23 -0500 (CDT)
Subject: Re: MAXLINE problem
Here is our solution to the line too long problem:
# Notice the solution for the line too long problem
# create a file for the items, and use this trick, courtesy
# of Laura Wingerd to output the items
# This works due to the mix 'n match composition of macros
# by jam, each item in the extraobjects or $(>) is concatenated
# with the rest of the line. The period after the echo is
# ignored, I guess, but serves to make the whole thing one
# string. The newline macro splits it into individual lines.
actions vLink bind NEEDLIBS EXTRAOBJECTS {
copy nul: linkobjs.txt
echo.$(>)>>linkobjs.txt$(NEWLINE)
}
This will work no matter how big the $(>) or $(LINKLIBS) macros get
NEWLINE macro looks like:
NEWLINE = "
" ; # used to break up long lines for echo to a file
From: Karl Klashinsky <klash@cisco.com>
Subject: Re: MAXLINE problem
Date: Thu, 14 Sep 2000 09:35:35 -0700
Can you copy/paste the exact error message?
FYI, we had a similar snag here on our Solaris boxes. Turned out that
a faulty heuristic in the command-line "chunking" code would make a
guess at how many target/filenames could be tacked onto a single
command line. But that guess would cause the command buffer to overflow.
When Mark Baushke was still with us, he fixed this code to do a
"backtrack and try again" approach. His patch was submitted to the
official Jam depot @ perforce, but the patch has never been included
in an official jam release. You might want to retrieve that patch
from the jam depot and try it out.
[We've been using the patch in production for approx. a year now, with no problems.]
Date: Thu, 14 Sep 2000 11:37:37 -0700 (PDT)
Subject: Re: MAXLINE problem
Are you sure it's NT that the line's too long for? Maybe it's too long for
Jam because of what you have MAXLINE set to? I have mine (on NT) set to
32768, with no problem. (Don't remember now if I chose that as just a
reasonably large number, or whether I thought that was NT's limit. You
might want to give it try and see what happens -- maybe even crank it up
higher and see what it does).
Date: Fri, 15 Sep 2000 10:19:16 +0200
From: Dominic WILLIAMS <d.williams@csee-transport.fr>
Subject: Re: MAXLINE problem
set to 32768, with no problem.
Well, I am fairly sure it's NT for two reasons:
1- the error message is in French (my NT is in French, but I don't think
Jam has French error messages ;-)
2- if I open an MS-DOS window and type an extra-long command by
hand, it stops after about 2000 (2048?) characters; I just can't type
any further.
What makes people say that this limit no longer exists on NT4.0 ? Can I
make it go away ? Is it a problem with the French version of NT ?
What is the difference between these ? In Jambase I found an "actions
Link bind NEEDLIBS", without IMPLIBS, and found no "actions vLink...".
Should I modify the "actions Link..." in Jambase, or add one of these ?
Date: Fri, 15 Sep 2000 14:09:34 -0500 (CDT)
Subject: Re: MAXLINE problem
Oh, the vLink is our own link rule. The important difference is that
no matter how big the jam macro gets, it will be properly echoed into
the response file (unless jam cannot handle the length of it!)
so, if $(>) = a.o b.o c.o d.o e.o f.o etc. ;
then
copy nul: linkobjs.txt # this creates an empty linkobjs.txt file
echo.$(>)>>linkobjs.txt$(NEWLINE)
This line is expanded by jam to look like:
echo.a.o>>linkobjs.txt
echo.b.o>>linkobjs.txt
echo.c.o>>linkobjs.txt
echo.d.o>>linkobjs.txt
so that each item in the macro gets its own echo line. This is
important if the macro gets really big, with lots of items, otherwise
you can hit the dos line length limit.
From: "Paul Moore" <gustav@morpheus.demon.co.uk>
Subject: RE: MAXLINE problem
Date: Fri, 15 Sep 2000 20:20:02 +0100
The OS itself has no discernible limit (I've started commands with 3 Mbyte
command lines using the CreateProcess API - this was under Windows 95, as
well, so the limit doesn't even exist there). However, the command
interpreters have limits - COMMAND.COM is a pathetic 256 bytes, IIRC. I'm
not sure what CMD.EXE is - it may not be documented. JP Software's 4NT.EXE
has a limit of 1023 bytes.
It would be nice if Jam used the OS (CreateProcess) when it doesn't need
shell services (such as redirection). I don't know if it does, though...
Date: 16 Sep 2000 01:10:00 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: RE: Newbie: simplest system for
Why can't the if statement be added to Jamrules itself?
Date: Sat, 16 Sep 2000 08:38:12 -0700 (PDT)
Subject: RE: MAXLINE problem
Actually, even if jam does need redirection services,
they can be easily implemented without resorting to CMD.EXE.
Date: Sat, 16 Sep 2000 11:11:10 -0500
From: Eric Scouten <scouten@Adobe.COM>
Subject: RE: Newbie: simplest system for
Good question. Jamrules gets read once and only once, regardless of how
many subdirectories you have. $(SUBDIR) changes per-SubDir invocation.
There might be some sort of skanky hack involving overriding SubDir in
Jamrules... I haven't tried that approach (yet).
Date: Thu, 7 Sep 2000 18:56:58 -0700 (PDT)
Subject: Multiple targets with JAM/MR
I was wondering if somebody could point me
to an example Jamfile which can build multiple
targets. I was primarily interested in
having Jam produce different libraries into
different directories from the same C code
based on compiler options (eg -g for a debug
build, -O2 for a performance build etc).
PS: I would have looked at the email archives before asking, but they seem to be missing
a global search index.
From: john@nanaon-sha.co.jp (John Belmonte)
Subject: Re: Multiple targets with JAM/MR
Date: Tue, 19 Sep 2000 10:47:51 +0900
Jam is not too pretty about this kind of thing. I'd recommend the post
"multiple target platforms?" from Aug 3, 2000.
Subject: Multiple targets with JAM/MR
From: "Michael O'Brien" <mobrien@pixar.com>
Date: Tue, 19 Sep 2000 09:43:41 -0700
Subject: Re: Multiple targets with JAM/MR
I've been using the grist for the file to allow multiple targets. Essentially,
I just make two targets that bind to two different object files. All generated
files need to have this duality (for example, we have some lex stuff).
So far, it's worked great. For reference, check out the makeGristedName rule in Jambase.
From: Grant_Glouser@palm.com
Date: Tue, 19 Sep 2000 15:37:09 -0700
Subject: Newbie: simplest system for
My solution to this is to add a "hook" to the SubDir rule. This
requires changing the Jambase and rebuilding Jam - but only once!
After that you can change the behavior of SubDir without rebuilding
Jam every time.
At the end of SubDir in the Jambase, I add this:
# invoke SubDirHook for customization by Jamrules
SubDirHook $(<) ;
Then add an empty SubDirHook to the Jambase (this prevents Jam
from complaining if you decide not to use a SubDirHook in your Jamrules):
rule SubDirHook {
# Override this rule in the Jamrules
}
Notice that the SubDirHook is invoked *after* reading Jamrules.
So, the new SubDirHook rule in your Jamrules will always be
used, even in the first call to SubDir.
In the Jamrules, in this example, you could put a SubDirHook like this:
rule SubDirHook {
if $(DEBUG) { LOCATE_TARGET = $(SUBDIR)/debug ; }
}
This technique (hooks that can be overridden in Jamrules) could
be used in other places, but SubDir is where I've found it most useful.
From: Grant_Glouser@palm.com
Date: Tue, 19 Sep 2000 17:23:05 -0700
Subject: Re: Newbie: simplest system for
Someone else explained this in another message, but I will try to explain it
again. Short answer: SubDir is a special case when it comes to redefining
standard rules.
The problem with redefining SubDir in the Jamrules is that Jamrules is included
by SubDir. The first time SubDir is invoked, it will *always* use the SubDir in
the Jambase because you have not had the opportunity to redefine it. A new
SubDir in the Jamrules would take effect on subsequent invocations of SubDir,
but the first one will always be wrong (a real problem when you try to build
from a leaf directory in the hierarchy). You could redefine SubDir in every
Jamfile, but that is not a good solution.
SubDirHook gets around this by invoking another rule after you have had the
chance to override it. This way, it affects all uses of SubDir, including the first.
A better question might be: Why bother adding this hook mechanism when you can
just change the Jambase? I'm sure many sites have custom Jambases anyway, so
why not just change the Jambase and rebuild Jam when you need to? My only
answer is that it is a matter of choice and style - development style and usage
style. SubDirHook is another option, which I have found useful because I prefer
changing the Jamrules to changing the Jambase and rebuilding Jam.
Date: Tue, 19 Sep 2000 16:02:42 -0700
From: Iain McClatchie <iain@10xinc.com>
Subject: Re: Newbie: simplest system for
Grant> My solution to this is to add a "hook" to the SubDir rule.
Grant> This requires changing the Jambase and rebuilding Jam - but
Grant> only once!
Hmm. You can redefine SubDir, or any other rule, in any jamfile
(typically Jamrules). Why bother recompiling jam? That just adds
more dependencies, potentially circular.
From: "Ivetta Estrin" <ivetta@schema.com>
Date: Thu, 21 Sep 2000 15:27:57 +0200
Subject: PDB file
In my Jamrules file I want to write the following rule, which should create
a PDB file, named after the target, in the target directory. How can I tell
jam to use the target name for the PDB file name on every build?
I think it should be something like this:
C++FLAGS += -Fd$(ALL_LOCATE_TARGET)$(SLASH)<target_name>.pdb ;
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Thu, 21 Sep 2000 18:40:45 +0100
Subject: Re: [jamming] PDB file
You could rewrite the C++ rule and add
C++FLAGS += -Fd$(<:R=$(LOCATE_TARGET):S=.pdb) ;
But it's much better to create your own rule:
rule MainWithPdb {
Main $(<) : $(>) ;
local i s ;
# Add grist to file names
makeGristedName s : $(>) ;
for i in $(s) {
C++FLAGS on $(i:S=$(SUFOBJ)) += -Fd$(i:R=$(LOCATE_TARGET):S=.pdb) ;
}
}
Then replace each call to the Main rule with MainWithPdb!
From: Alfred Landrum <alandrum@s8.com>
Date: Thu, 21 Sep 2000 11:54:32 -0700
Subject: How to set per-target link flags
I'm looking for a Link version of "ObjectC++Flags", so that
Main badprogram : badprogram.cpp ;
could be made to link with a compiled version of libefence.a.
I see the variable LINKFLAGS; is there a target-specific way of setting it?
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Fri, 22 Sep 2000 14:29:01 +0100
Subject: Re: How to set per-target link flags
You can use
LinkLibraries badprogram : libefence ;
This is roughly the same as setting the LINKFLAGS variable
for this target:
LINKFLAGS on badprogram += libefence.a ;
But the latter is not portable.
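[Ed.: as a sketch of the target-specific approach, where the flag values are only examples and, on NT, the executable target name may carry the $(SUFEXE) suffix:]

```
Main badprogram : badprogram.cpp ;
# Unix-style example: add extra flags to the link of this one target only;
# other Main targets keep the global LINKFLAGS.
LINKFLAGS on badprogram += -L/usr/local/lib -lefence ;
```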
From: "Temesgen Habtemariam" <temesgen@nxnetworks.com>
Date: Wed, 11 Oct 2000 18:45:43 -0500
Subject: using c-shell on NT.
I am trying to get around the command line length limitation of cmd on NT.
I have been able to execute longer compile lines on the NT c-shell I got
from MKS Unix toolkits. I was thinking of making changes to jam so that it
executes all actions on NT using the c-shell instead of cmd. Does this sound
like something that can be done? I was wondering if someone out there has
done something similar...
From: "Ivetta Estrin" <ivetta@schema.com>
Date: Wed, 18 Oct 2000 11:13:03 +0200
Subject: ResourceCompiler rule
I need some help.
I want to write a rule that compiles *.rc files into *.res files.
My Jamfile is:
Main ST : STMain.cpp ;
LibraryFromObjects ST : STMain.res ;
Object STMain.res : STMain.rc ;
My Jamrules looks like this:
rule UserObject {
switch $(>:S) {
case .rc : ResourceCompiler $(>:S=.res) : $(>) ;
case * : ECHO "unknown suffix on" $(>) ;
}
}
rule ResourceCompiler {
ECHO $(<) $(>) ;
Depends $(<) : $(>) ;
Clean clean : $(<) ;
}
actions ResourceCompiler {
ECHO "I am in action" ;
rc /D _AFXDLL /fo $(<) $(RCFLAGS) $(>)
}
But it doesn't work. I receive this message:
"Don't know how to make STMain.res"
Where is my mistake? Has anybody managed to compile .rc files into .res files?
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Wed, 18 Oct 2000 13:45:49 +0100
Subject: RE: ResourceCompiler rule
You should not use the UserObject rule, which only builds *.obj files.
For resources, I use the following rules:
# Resource : builds a resource file
#
rule Resource {
SEARCH on $(>) = $(SEARCH_SOURCE) ;
MakeLocate $(<) : $(LOCATE_TARGET) ;
Depends $(<) : $(>) ;
Clean clean : $(<) ;
RCFLAGS on $(<) = $(RCFLAGS) /d$(RCDEFINES) ;
}
actions Resource {
RC $(RCFLAGS) /Fo$(<) $(>)
}
# LinkResource : Links the resource file into an executable
#
rule LinkResource {
local t r ;
if $(<:S) { t = $(<) ; }
else { t = $(<:S=$(SUFEXE)) ; }
r = $(>:S=.res) ;
Depends $(t) : $(r) ;
NEEDLIBS on $(t) += $(r) ;
}
Then I write something like:
Main program : program.c ;
Resource resource.res : resource.rc ;
LinkResource program : resource.res ;
Date: Thu, 26 Oct 2000 16:04:34 +0100
From: "Niklaus Giger" <n.giger@netstal.com>
Subject: Compile Problem on Windows NT
I would like to test whether jam is a better alternative to make, as our
make problems get tougher and tougher to solve. But I am having problems
getting jam working on NT.
We are using NT as a cross development platform for embedded PPC.
Therefore I just have a working GNU-Cross-Compiler and a cygwin (1.0)
native compiler I rarely use.
Neither make nor build.bat works. Could somebody mail me directly a
working jam.exe file or provide me with a working makefile?
From: "Ivetta Estrin" <ivetta@schema.com>
Date: Mon, 30 Oct 2000 14:37:17 +0200
Subject: build library from two sources with the same name
I need to build a library, for example guess.lib, from files that have the
same name but different locations, for example:
guess\eval\hash\of\int2.cpp
guess\prd\vec\plt\int2.cpp
guess\sol\vec\int2.cpp
In jamrules I use following:
ALL_LOCATE_TARGET = $(TOP)$(SLASH)lib.$(OS)$(SLASH)debug ;
if $(NT) {
OPTIM = -Zi ;
LINKFLAGS += /debug ;
C++FLAGS += -MDd ;
rule Library {
local name browse ;
LibraryFromObjects $(<) : $(>:S=$(SUFOBJ)) ;
for name in $(>) {
ObjectC++Flags $(name)
: -Fr$(ALL_LOCATE_TARGET)$(SLASH)$(name:S=.sbr) -Fd$(ALL_LOCATE_TARGET)$(SLASH)$(<:S=.pdb) -D_DEBUG -MDd ;
}
Objects $(>) ;
}
}
In this case all object files are saved in one directory, and every new int2.obj
overwrites the previous one.
How can I mirror my source directory tree in the target directory and then
build the library from those target object files?
From: Behrad Mehraie <Behrad_Mehraie@creoscitex.com>
Subject: RE: build library from two sources with the same name
Date: Mon, 30 Oct 2000 08:27:32 -0800
To fix this problem, you have to use grists. I had the same issue.
I have attached a sample project I was working on, for you.
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Tue, 31 Oct 2000 10:51:47 +0100
Subject: RE: build library from two sources with the same name
Using grists is not enough with libraries:
Jam cannot handle 2 files with the same name in the same library.
That is, when scanning the .lib file, all the targets will have the same name
guess.lib(int2.obj)
I think this comes from libraries in Unix, where libraries don't store the
directory names of their members.
As NT libraries do store the complete path of the .OBJ files, you may try
to hack Jam this way:
=======================
in filent.c:
replace:
- /* strip leading directory names, an NT specialty */
-
- if( c = strrchr( name, '/' ) )
- name = c + 1;
- if( c = strrchr( name, '\\' ) )
- name = c + 1;
-
- sprintf( buf, "%s(%.*s)", archive, endname - name, name );
- (*func)( buf, 1 /* time valid */, (time_t)lar_date );
by:
+ sprintf( buf, "%s(%.*s)", archive, endname - name, name );
+ for(c = buf; *c; c++)
+ if( *c == '\\' )
+ *c = '/';
+
+ (*func)( buf, 1 /* time valid */, (time_t)lar_date );
======================
in Jambase, rule LibraryFromObjects:
replace:
- local i l s s2 ;
-
- # Add grist to file names
-
- makeGristedName s : $(>) ;
-
by:
+ local i l s s2 ;
+
+ # Add grist to file names
+
+ makeGristedName s : $(>) ;
+
+ # NT uses full-path member names
+ if $(MSVCNT)
+ {
+ s2 = $(s:R=$(LOCATE_TARGET)) ;
+ }
+ else
+ {
+ s2 = $(>:BS) ;
+ }
+
replace:
- if ! $(l:D)
- {
- MakeLocate $(l) $(l)($(s:BS)) : $(LOCATE_TARGET) ;
- }
by:
+ if ! $(l:D)
+ {
+ MakeLocate $(l) $(l)($(s2:G=)) : $(LOCATE_TARGET) ;
+ }
replace:
- # If we can scan the library, we make the library depend
- # on its members and each member depend on the on-disk
- # object file.
-
- Depends $(l) : $(l)($(s:BS)) ;
-
- for i in $(s)
- {
- Depends $(l)($(i:BS)) : $(i) ;
- }
by:
+ # If we can scan the library, we make the library depend
+ # on its members and each member depend on the on-disk
+ # object file.
+ Depends $(l) : $(l)($(s2:G=)) ;
+ for i in $(s2)
+ {
+ Depends $(l)($(i:G=)) : $(i:GBS) ;
+ }
From: Alfred Landrum <alandrum@s8.com>
Date: Tue, 31 Oct 2000 18:10:38 -0800
Subject: Cross Including Jamfiles - Bad Idea?
[see dir structure below]
If I cd into /top, everything works as planned. But if I cd into
/top/exec, it can't find the targets libfoo or libbar.
I want to be able to run jam from /top/exec, and have it find (and
potentially update) libfoo and libbar.
I see two solutions. One, make libfoo and libbar globally defined. Or, I
could try to include libfoo's and libbar's Jamfile from exec's Jamfile.
This will take some work, because I think we'll need to put "header guards"
in the library's Jamfiles.
I don't want to globally define them; it won't scale.
# Example directory structure
/top/exec
/top/libfoo
/top/libbar
# Jamfile in /top/exec
SubDir TOP exec ;
Main exec : main.c ;
LinkLibraries exec : libfoo libbar ;
# Jamfile in /top
SubDir TOP ;
SubInclude TOP exec ;
SubInclude TOP libfoo ;
SubInclude TOP libbar ;
From: Behrad Mehraie <Behrad_Mehraie@creoscitex.com>
Subject: RE: build library from two sources with the same name
Date: Tue, 31 Oct 2000 08:56:47 -0800
Sorry, it seems that the attachment did not go through because of the
antivirus installed on our server, so please rename the attached file
from .zi1 to .zip and then unzip it.
From: Behrad Mehraie <Behrad_Mehraie@creoscitex.com>
Subject: RE: build library from two sources with the same name
Date: Tue, 31 Oct 2000 08:48:43 -0800
If you open the attached .zip file, you'll see that everything works without
any problem, although all the source files have the same name.
The output files will also have the same names.
Again, I have attached the .zip file. Run Jam from the directory which includes
the Jamrules and you'll see what happens.
From: "Amaury FORGEOT-d'ARC" <Amaury.FORGEOTDARC@atsm.fr>
Date: Thu, 2 Nov 2000 09:46:02 +0100
Subject: Re: [jamming] Cross Including Jamfiles - Bad Idea?
You can avoid these "header guards" in the library's Jamfiles
with this trick:
in a sub-Jamfile, the $(s) variable is set to the arguments of
the current invocation of the SubInclude rule (see Jambase).
In your case, you could add to the Jamfile in /top/exec:
# if we called Jam from this directory, build libraries
if ! $(s) {
SubInclude TOP libfoo ;
SubInclude TOP libbar ;
}
Date: 6 Nov 2000 23:05:03 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: 1 file to two dependencies
I have an object file that needs to go into two libraries. How can I do
that if both libraries and the object file are in the same directory?
Date: Mon, 06 Nov 2000 17:35:05 +0100
From: "Niklaus Giger" <n.giger@netstal.com>
Subject: Command line limit on WindowsNT
It took me about a day to convert my vxWorks-makefiles to jam, which is not too bad.
Using this for some of our real projects, I ran into the infamous command line length
limit of Windows NT.
I was really frustrated, as I never had this problem with the cygwin make.
After unsuccessfully fiddling around trying to circumvent the problem, I decided to go ahead
and merge in the solution the cygwin make utility uses to exec commands. This took
me another day, and now everything is more or less working (no big tests done yet).
I think there are a few problems involving I/O redirection which also plague
the cygwin make utility, so I still want to polish the whole thing a little bit.
Is anybody else interested in a cygwin-port?
Is it okay to send the patches and new files to this mail list?
Do you think there are any problems incorporating these changes as they come under
the GNU license?
Date: Tue, 7 Nov 2000 21:53:59 -0800 (PST)
Subject: Re: Command line limit on WindowsNT
I'd be interested in your changes, Niklaus. I never
really did understand what's the precise problem with
the command line length though. This discussion has
been raised before on this mailing list and there
didn't seem to be a definitive answer. How did you
resolve the problem?
From: "Rukun Wei" <Rukun.Wei@sybase.com>
Date: Tue, 28 Nov 2000 11:13:53 -0800
Subject: How to check file status
We need to make sure that "runtime.zip" exists before compiling some
Java files. Is there any way to tell jam to check that this file exists
before it even tries to build?
From: Amaury.FORGEOTDARC@atsm.fr
Subject: Re: How to check file status
Date: Thu, 30 Nov 2000 09:25:51 +0100
You could make it the first target of your build:
rule checkFirst {
Depends first : $(<) ;
ALWAYS $(<) ;
}
if $(NT) { actions checkFirst { if not exist $(<) exit 1 } }
if $(UNIX) { actions checkFirst { test -f $(<) } }
checkFirst runtime.zip ;
Then use the -q option of Jam to make it quit on the first failed action.
Date: Wed, 06 Dec 2000 15:20:48 +0100
From: "Niklaus Giger" <n.giger@netstal.com>
Subject: Re: jamming digest, Vol 1 #142 - 2 msgs
I achieved the same result by patching the SubInclude rule like this:
rule SubInclude {
local s ;
# That's
# SubInclude TOP d1 [ d2 [ d3 [ d4 ] ] ]
#
# to include a subdirectory's Jamfile.
if ! $($(<[1])) {
EXIT Top level of source tree has not been set with $(<[1]) ;
}
makeDirName s : $(<[2-]) ;
#ECHO "SubInclude $(s) -> $($(s)-SubIncluded) " ;
if ! $($(s)-SubIncluded) {
# Gated entry.
$(s[1])-SubIncluded = TRUE ;
# ECHO SubIncludexy $(JAMFILE:D=$(s):R=$($(<[1]))) ;
include $(JAMFILE:D=$(s):R=$($(<[1]))) ;
# ECHO "End of SubInclude $(s) -> $($(s)-SubIncluded) " ;
} else {
#ECHO "Skipping as already included $(s[1])-included " ;
}
}
Personally I prefer to modify the rule, as everybody will get the correct result.
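To illustrate the gate (a sketch; libfoo is the placeholder directory from the earlier example):

```jam
# Both the top-level Jamfile and a sibling Jamfile may now contain:
SubInclude TOP libfoo ;
# The first invocation includes /top/libfoo/Jamfile and records
# libfoo-SubIncluded = TRUE ; any later invocation is skipped.
```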
Date: Wed, 06 Dec 2000 15:18:47 +0100
From: "Niklaus Giger" <n.giger@netstal.com>
Subject: Jam for WindowsNT (cygwin)
I successfully compiled Jam under cygwin (Version 1.1x (Beta)).
As far as I have tested it in my environment, there is no limit to the command
line length (> 15 kB), nor any other apparent bug.
I would appreciate it if anybody would cross-check this readme and tell me
whether they managed to install it on their machine as well.
Date: Thu, 7 Dec 2000 13:08:18 +0200
Subject: DEPENDENCE
I have the following situation: two different libraries use the same header file,
which was changed.
I want the linking of one library to also trigger the linking of all libraries
that use the same changed header file.
From: "Ivetta Estrin" <ivetta@schema.com>
Date: Tue, 26 Dec 2000 16:22:02 +0200
Subject: recompilation
Every time I run a build, Jam compiles all files again, even though
nothing was changed.
I think there should be an option that causes it to compile only changed
files and not touch files that were not changed since the last build.
Date: Fri, 5 Jan 2001 12:07:27 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: ANNOUNCE: jam 2.3 available at www.perforce.com/jam/jam.html
After 3 years, jam 2.3 is out to replace jam 2.2.
This is not a major release, but includes a number of user-contributed
changes from the Perforce Public Depot, as well as bug fixing and
enhancements done at Perforce Software.
The complete release notes are at http://public.perforce.com/public/jam/src/RELNOTES
The highlights are:
Jam code is now ANSI C, so it can be compiled with a C++
compiler, but no longer with a K&R compiler.
Experimental support for rules returning values.
Miscellaneous bug fixes.
Lots of porting changes.
This release is being made in anticipation of more aggressive development
of Jam in the next few months. I wanted to get what we had out so that
any user-contributed development can then be more easily merged.
The starting page for Jam information is http://www.perforce.com/jam/jam.html
From: Alfred Landrum <alandrum@s8.com>
Subject: RE: ANNOUNCE: jam 2.3 available at www.perforce.com/jam/jam.html
Date: Fri, 5 Jan 2001 13:39:09 -0800
We (Scale8) are shortly going to move to Jam.
I was wondering what the wish list is for jam? You mention aggressive
development; what new features are you planning?
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: ANNOUNCE: jam 2.3 available at www.perforce.com/jam
Date: Fri, 5 Jan 2001 14:10:21 -0800
I'm happy that development has restarted on Jam.
I have a few questions about the just announced release ...
I did not see in the release notes that my
patch to handle the new AIX 3.4 ar format was applied.
I also did not see mention of the regular expression stuff.
The problem with LOCATE on libraries and library members
(they must be the same or Jam fails) was not mentioned.
Was the bug about grouping fixed:
TESTVAR = Hello ;
if $(TESTVAR) && ($(TESTVAR) != $(TESTVAR)) { }
Do we (the Jam users at large) need to resubmit
our patches/bugs/etc.?
Finally, what are these planned enhancements, and
do you need any help?
Date: Sun, 7 Jan 2001 21:30:57 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: RE: ANNOUNCE: jam 2.3 available at www.perforce.com/jam/jam.html
Several have written with comments and questions about the jam 2.3 release.
I should have been more emphatic: jam 2.3 includes only SOME of the user
contributed changes. What got left behind? Anything done after September,
as that is when I last pulled in changes submitted to the public depot;
the variable substitution using regular expressions, as that went too
far in a direction ($(X:do-everything)) I'm trying to re-evaluate; and
some things no doubt I simply missed.
Development of Jam has never really stopped, though it slowed and all
of what was done was kept internal to Perforce. This release was to
push things out the door, so that user contributions can come from
similar code.
Jam does not have a full-time curator. As some of you may know, I have
other responsibilities at Perforce that take my time. We are looking
to hire an open source curator, for both Jam and other projects here.
I still plan to control its direction (which remains lean and mean, in
case you're wondering), but it's the machinations that take considerable
effort.
Still, the best way to get changes into Jam is to submit them to the
public depot. Eventually, I plan to incorporate all changes or send
email explaining why the change is not being incorporated.
As to what's being planned, here are the crib notes:
- a directory scan operation, like
files = [ glob $(SRCDIR) *.c ] ;
That would populate the $(files) variable with the .c
files in $(SRCDIR).
- Better manipulation of variables, moving away from
$(X:do-everything).
- (Big) Allowing UNIX-path target names on all platforms,
which get bound properly in non-unix envs (VMS, OS2,
NT, MAC), so that Jamfiles could be written entirely in
UNIX format.
- An overhaul of the way Jamrules works. Sharing Jamrules
is right now rather clunky.
- Fixing the scanner, so that =, :, and ; don't need
whitespace surrounding them. Whitespace was the sacred
character 8 years ago. Now everyone seems to have
whitespace in file names.
- Splitting defines out of CCFLAGS and C++FLAGS, so that
they can be expressed in a system independent fashion.
I thank you all for your interest and support, and ask your forgiveness
for Jam's somewhat absentee landlord.
From: "Ivetta Estrin" <ivetta@schema.com>
Date: Mon, 8 Jan 2001 16:39:24 +0200
Subject: multi-project build
I have following structure of my projects:
Proj1 -|
|-ProjWatcom
|-Jamfile
|-ProjUnix
|-Jamfile
Proj2- |
|-ProjMSVCNT
|-Jamfile
Tools-
|-Jambase
|-bin.linuxx86
|-jam
|-bin.ntx.86
|-jam
|-Jambase
|-Jamrules
My task is to create Jamfiles in such a way
that I can compile for different environments (Linux, NT, Watcom)
using the same Jambase and Jamrules.
My problem is how to define TOP. I know that TOP is the directory that contains the Jamrules.
How can I create a pointer from my Jamfiles to the TOP directory that
contains my Jamrules?
From: Miklos Fazekas <boga@mac.com>
Date: Mon, 8 Jan 2001 16:12:52 +0100
Subject: jam crash
The attached file causes a crash on Windows NT 2000, and MPW (unmapped memory exception).
(I've tested with 2.3, and that version contained the error too.)
Can anyone reproduce this? I used:
jam -fki.txt
Or is there something illegal with the file?
Date: Mon, 8 Jan 2001 11:03:09 -0600 (CST)
Subject: Re: jam crash
I imagine that it is just generating a macro that is too big. We had to
up the macro string size once...
The attached file causes a crash on Windows NT 2000, and MPW. (Unmapped
memory exception).
(I've tested with 2.3, and that one contained the error too.)
Can anyone reproduce this?
I used:
jam -fki.txt
Or is there something illegal with the file?
ECHO "Begin" ;
LFLAGS = "$(LFLAGS) $(alma)$(alma)" ;
ECHO "End" ;
ECHO "Begin" ;
LFLAGS = "$(LFLAGS) $(alma)$(alma)" ;
ECHO "End" ;
EXIT ;
From: Miklos Fazekas <boga@mac.com>
Subject: Re: jam crash
Date: Mon, 8 Jan 2001 18:25:14 +0100
I was not aware of this limit. (Probably 1024 then.)
I see, so instead of generating one long LFLAGS like:
LFLAGS = "$(LFLAGS) $(newflag)" ;
I should generate it into a list:
LFLAGS += $(newflag) ;
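The difference can be sketched as (hypothetical flag values):

```jam
# Growing one quoted string re-expands $(LFLAGS) on every assignment,
# eventually overflowing the fixed-size macro buffer:
LFLAGS = "$(LFLAGS) -newflag" ;

# Appending to a list keeps each flag as a separate element, so no
# single string ever grows:
LFLAGS += -newflag ;
```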
From: "Tim Baker" <dbaker@direct.ca>
Subject: Re: ANNOUNCE: jam 2.3 available at www.perforce.com/jam/jam.html
Date: Mon, 8 Jan 2001 17:52:33 -0800
Here's a simple 'glob' command for jam 2.3. The syntax is
glob DIR PATTERN PATTERN ...
The pattern matching uses the builtin glob() function.
I added this to the end of compile.c.
#include "filesys.h"
static LIST *_glob_pat;
static LIST *_glob_list;
/* Callback to file_dirscan() */
static void glob_func(
char *file,
int status,
time_t t ) {
FILENAME f;
LIST *l;
file_parse( file, &f );
f.f_dir.len = 0;
file_build( &f, file, 0 );
for ( l = _glob_pat; l; l = list_next( l ) )
{
if ( !glob( l->string, file ) ) {
_glob_list = list_append( _glob_list,
list_new( L0, newstr( file ) ) );
}
}
}
static LIST *
builtin_glob(
PARSE *parse,
LOL *args ) {
/* FIXME: check number of args */
LIST *l = lol_get( args, 0 );
char *dir = l->string;
_glob_pat = list_next( l );
_glob_list = L0;
file_dirscan(dir, glob_func);
return _glob_list;
}
Then add this to the end of compile_builtins():
bindrule( "glob" )->procedure =
parse_make( builtin_glob, P0, P0, P0, C0, C0, 0 );
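Assuming jam 2.3's experimental "rules returning values" syntax, a Jamfile could then call it like this (a sketch; src is a placeholder directory):

```jam
# Collect all .c and .h files under src into a variable
SOURCES = [ glob src *.c *.h ] ;
ECHO Found sources: $(SOURCES) ;
```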
From: Behrad Mehraie <Behrad_Mehraie@creoscitex.com>
Subject: RE: multi-project build
Date: Mon, 8 Jan 2001 12:18:26 -0800
The solution is simple. Just put a Jamfile in the root of your projects,
with the Jamrules file in the same place.
The Jamfile in the root must have this line as its first statement: SubDir
TOP ;
It also has to have the loop below to tell Jam which folders are included in
the project.
Note that this loop must be at the end of the Jamfile in the project root.
CORE_PROJECTS = Proj1 Proj2 ;
for proj in $(CORE_PROJECTS) {
SubInclude TOP $(proj) ;
}
I have corrected your project tree. I hope it helps.
Jamrules
Jamfile ---> SubDir TOP ;
Proj1 -|
|-ProjWatcom
|-Jamfile ---> SubDir TOP Proj1 ProjWatcom ;
|-ProjUnix
|-Jamfile --> SubDir TOP Proj1 ProjUnix ;
Proj2- |
|-ProjMSVCNT
|-Jamfile --> SubDir TOP Proj2 ProjMSVCNT ;
Tools-
|-Jambase
|-bin.linuxx86
|-jam
|-bin.ntx.86
|-jam
From: Miklos Fazekas <boga@mac.com>
Date: Wed, 10 Jan 2001 17:56:57 +0100
Subject: Multiple Dependents and order of actions.
I have the following Jamfile, and the files a, b, c, d, e:
actions copy { echo "copy $(<) " > $(<) }
rule copy { Depends $(<) : $(>) ; }
actions copy2 {
echo "copy $(>) " > $(<[1])
echo "copy $(>) " > $(<[2])
}
rule copy2 { Depends $(<) : $(>) ; }
copy a : c ;
copy2 a b : d ;
copy b : e ;
Depends all : b a ;
NOTFILE all ;
If I edit the files 'c' and 'e', the order of actions will be:
copy2 a b :d ;
copy b : e ;
copy a : c ;
That is not the same order as I defined! I'd expect this order:
copy a : c ;
copy2 a b : d ;
copy b : e ;
Is it a bug in Jam or is there any missing dependency in my example?
(Adding Depends b : c won't help.)
From: Amaury.FORGEOTDARC@atsm.fr
Subject: Re: Multiple Dependents and order of actions.
Date: Wed, 10 Jan 2001 19:06:17 +0100
According to your rule "Depends all : b a ;",
you ask Jam to generate b before a,
and that's why the actions having b as a target are executed first.
Date: Wed, 10 Jan 2001 12:25:12 -0600 (CST)
Subject: Re: Multiple Dependents and order of actions.
My experience is that, outside of dependencies, there is no specific
order in which actions occur. Logically, if there are no
dependencies, then it should not matter...
My experience with make vs. jam made me realize that lots of dependencies
are left unspecified in make, such that the procedural order of things
in make is important. It is much less important in jam.
Date: Wed, 10 Jan 2001 13:09:09 -0600 (CST)
Subject: Re: Multiple Dependents and order of actions.
Well, in that case, just ask it to update that target instead of all of them.
Though I suppose somebody could do a newest-first ordering on top
of equally choosable targets...
From: <boga@mac.com>
Subject: Re: Multiple Dependents and order of actions.
Date: Wed, 10 Jan 2001 21:00:39 -0000
No! I tell jam that target 'all' needs to be updated if either 'a' or 'b'
was updated.
That is the meaning of "Depends all : b a ; NOTFILE all ;", isn't it?!
I have a simpler example:
actions copy { cp $(>) $(<) }
rule copy { Depends $(<) : $(>) ; }
actions append { cat $(>) >> $(<) }
rule append { Depends $(<) : $(>) ; }
rule mergefiles {
copy $(<) : $(>[1]) ;
append $(<) : $(>[2]) ;
}
mergefiles a : b c ;
Q1:
Is it true that:
a.) if either 'b' or 'c' is updated, both the 'copy' and 'append' actions
will be executed!? (Both, and not just one of them.)
b.) if 'a' is updated, the action 'copy' will be executed first and then
'append'!?
And if I change:
rule mergefiles {
copy $(<[1]) : $(>[1]) ;
append $(<) : $(>[2]) ;
}
mergefiles a d : b c ;
Q2:
Is it true that:
a.) if either 'b' or 'c' is updated, both copy and append will be executed?
(Both, and not just one of them.)
b.) if 'a' is updated, the action 'copy' will be executed first and then
'append'!?
Date: Wed, 10 Jan 2001 20:04:40 +0100
From: Arnt Gulbrandsen <arnt@trolltech.com>
Subject: Re: Multiple Dependents and order of actions.
Well, logic aside, it matters. I want the file I was just editing to be
compiled first, because that's the file I'm thinking about right now.
Some file has to be the first to be compiled, and the one I'm currently
working on and saved just before I started jam is better than a random
choice. Often not _much_ better, but I personally really hate it when I
'p4 sync' and the next time I compile, I have to wait a minute or two to
see the errors for the file I'm working on.
Date: Wed, 10 Jan 2001 20:20:24 +0100
From: Arnt Gulbrandsen <arnt@trolltech.com>
Subject: Re: Multiple Dependents and order of actions.
Yes, that's exactly what I tried to implement some time ago, but then a
deadline at work intervened. I'll see if I can port it to 2.3. It's so
good to see a new version of jam.
What I tried was this:
1. The "score" of a target is set to the number of seconds since it has
been modified, plus the number of targets with the same modification time.
2. A target's score is reduced to the lowest of the scores on which it depends.
3. Targets are built lowest-score-first, to the degree the dependency graph permits.
The addition in step 1 is because things that modify many files aren't me.
I am human and almost always think about one file at a time. Perhaps two.
Things like "p4 sync" work on many.
Date: 11 Jan 2001 05:12:45 -0000
From: nirva@ishiboo.com (Danny Dulai)
Subject: two results from one action
When compiling a .dll on win2k, the link /dll command will produce two
important files, the .dll and the .lib. The .lib needs to be understood as
"existing" by jam, so my LinkLibraries rule can find it (in the binding
phase) later.
What I need is to improve my dll build rules so that they somehow make the
side-effect files (.lib) available to the binding phase.
I'm not sure if any of that made sense.. any suggestions?
From: Amaury.FORGEOTDARC@atsm.fr
Subject: Re: Multiple Dependants and order of actions.
Date: Thu, 11 Jan 2001 09:38:28 +0100
Yes, it would be a nice feature.
but it's NOT a random choice.
it's the first file appearing in the dependency tree, which is built from the Jamfile
In Miklos' original Jamfile, the dependency tree is:
all _____ b _____ d
| \__ e
|
\___ a _____ c
\__ d
Suppose c and e are modified:
- b needs rebuild because e is newer
- a needs rebuild because c is newer
actions having b as target are executed first, so the order:
copy2 a b : d ;
copy b : e ;
copy a : c ;
From: Amaury.FORGEOTDARC@atsm.fr
Subject: Re: Multiple Dependants and order of actions.
Date: Thu, 11 Jan 2001 10:48:20 +0100
Yes, but since 'all' has no associated actions...
NOTFILE only means that the target doesn't exist as a file and has no timestamp.
If either 'b' or 'c' are updated, then 'a' is outdated, and yes, both actions
having 'a' as targets will be executed, in the order they appear in the Jamfile.
The Depends rules are as follows (run jam -d5 to see the rule invocations):
Depends a : b
Depends a d : c
Then everything depends on the order of the targets 'a' and 'd' in the dependency tree.
1) if 'a' appears before 'd' (add the rule "Depends all : a d ;" or invoke jam as
"jam a d"):
all _____ a _____ b
| \__ c
|
\__ d _____ c
if b or c is updated, then a will be rebuilt first,
and Jam executes all actions having 'a' as target, in the order they appear
in the jamfile.
(When jam comes to the d --> c dependency, the corresponding action has already
been executed, so nothing more is done.)
2) if 'd' appears before 'a' (add the rule "Depends all : d a ;" or invoke
jam as
"jam d a"):
all _____ d _____ c
|
\__ a _____ b
\__ c
if 'b' is updated, then all actions having 'a' as target are fired,
in the original order.
BUT if 'c' is updated, the actions having 'd' as target are fired first,
the actions having 'a' as target come after.
so the 'append' command is run before the 'copy' command...
From: Miklos Fazekas <boga@mac.com>
Subject: Re: Multiple Dependants and order of actions.
Date: Thu, 11 Jan 2001 12:07:33 +0100
OK, maybe I misunderstood Jam.
Then my question is: is
Depends a b : c ;
the same as
Depends a : c ;
Depends b : c ;
I think the first one is:
a
\
*--- c
/
b
The second one is:
a
\
c
/
b
The difference is that the first one implies an update on 'b' any time an
update on 'a' is done. For example, I use the first style of rule for linking:
from one or more sources I link:
- a dll and an importlib
However, they are made at the same time! So if the dll needs to be relinked, the
importlib should be updated too. And any targets depending on the importlib
should be updated too.
From: Amaury.FORGEOTDARC@atsm.fr
Subject: Re: Multiple Dependants and order of actions.
Date: Thu, 11 Jan 2001 12:33:17 +0100
What is confusing you (I think) is that actions
are independent of dependencies.
One rule invocation with two targets (as in your mergefiles a d : b c ;)
is not the same as two invocations with one target each,
because there is one *action* on 2 targets in the first case,
and 2 actions in the second case, which Jam runs separately.
From: Arnt Gulbrandsen <arnt@trolltech.com>
Subject: Re: Multiple Dependants and order of actions.
If there isn't a rationale for the action execution order, the order is arbitrary.
I can't think of any rationale for this and didn't see any in the
documentation; can any other jam users?
Date: Thu, 11 Jan 2001 10:14:07 -0600 (CST)
Subject: Re: two results from one action
OK, I know we had this problem too, so I looked around in our
Jamrules (I should know, I put the fix in, but...)
Here is what we did:
# vLink is identical to Link in Jambase, but explicitly uses $(<[1]) as
# the output file. This allows us to pass down extra targets that
# get ignored by the action, but makes jam think we're building them.
# This is used to let jam know that Link will build a .lib in addition
# to a .dll when building a .dll. See the vMainFromObjects rule.
From: leon glozman <leonid_g@schema.com>
Date: Mon, 15 Jan 2001 14:14:22 +0200
Subject: build library from two or more sources with the same name
I want to build a library from two or more sources with the same name in a LINUX
environment (we use g++ for compiling and linking), as follows:
For example, I have MyDir/utils/dir1/source.cc, MyDir/utils/dir2/source.cc
and MyDir/utils/dir3/source.cc.
I have Jamrules & Jambase in MyDir. I create the objects and targets (libs &
executables) in MyDir/lib.LINUX.
I want to create the objects as follows:
MyDir/lib.LINUX/utils/dirx/source.o (x is 1-3).
Generally, if my source path is MyDir/utils/srcdir/src_name.cc, it will be
compiled to MyDir/lib.LINUX/utils/srcdir/src_name.o.
When all the objects have been created, I want to link the library utils.a from
the objects and put it under MyDir/lib.LINUX/.
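A sketch of one way to get this layout, assuming a Jambase recent enough to
honor ALL_LOCATE_TARGET (which mirrors each directory's SUBDIR_TOKENS beneath
it, so same-named sources land in distinct subdirectories):

# In MyDir/Jamrules (sketch): route all objects and targets under lib.LINUX,
# mirroring the source tree, e.g. utils/dir1/source.cc -> lib.LINUX/utils/dir1/source.o
ALL_LOCATE_TARGET = [ FDirName $(TOP) lib.LINUX ] ;

The utils.a library can then be placed at the top of that tree with an explicit
MakeLocate in the Jamfile that builds it.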
From: "Ivetta Estrin" <ivetta@schema.com>
Date: Wed, 17 Jan 2001 10:00:13 +0200
Subject: compile all sources from current directory
I have the following Jamfile:
SubDir TOP dir subdir ;
Library BaseLib : foo1.cc foo2.cc foo3.cc foo4.cc foo5.cc ;
SubInclude TOP dir subdir subdir1 ;
SubInclude TOP dir subdir subdir2 ;
Instead of sources list (*.cc) I want to write expression that will take all
*.cc files that are in current directory as sources (something like
$(>:S=.cc))
Does somebody know how to do it?
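A sketch using the GLOB built-in that appeared in jam 2.3 (GLOB matches
against the on-disk directory, so sources generated during the build won't be
picked up; the :BS modifier strips the directory part so the Library rule can
grist and SEARCH the names as usual):

SubDir TOP dir subdir ;
local SRCS = [ GLOB $(SUBDIR) : *.cc ] ;
Library BaseLib : $(SRCS:BS) ;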
Date: Wed, 17 Jan 2001 12:30:00 +0100
From: Arnt Gulbrandsen <arnt@trolltech.com>
Subject: making #include scanning look in more directories
In a Jambase, I've set the C++ compilation flags to include an extra -I,
and all has been well for a very long time. But a few days ago I
discovered that the #include files in that directory aren't found. Hasn't
mattered up to now, because they practically never change.
It seems that I need to set SEARCH on all my source files to include that
directory, and I don't know how.
The only relevant rules I have are Main - three Main rules in all. Can I
do something to a Main invocation that'll magically propagate more SEARCH
directories to the source files it pulls in?
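One sketch, assuming the stock Jambase: header scanning binds the scanned
#includes against HDRS/HDRSEARCH, which the Object rule fills from SUBDIRHDRS,
so declaring the directory with SubDirHdrs (instead of a raw -I in C++FLAGS)
feeds it to both the compile line and the scanner. Here "extra" is a
hypothetical path element and myprog a hypothetical target:

# in the Jamfile, before the Main rules
SubDirHdrs $(TOP) extra ;
Main myprog : main.cpp util.cpp ;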
From: "Robert Cowham" <robert@vaccaperna.co.uk>
Subject: RE: compile all sources from current directory
Date: Wed, 17 Jan 2001 11:40:32 -0000
Christopher just announced that this sort of thing would be available in the
upcoming version of Jam.
Alternatively, there was a message about implementing it a week or two
back - check the archives.
From: "Patrick Frants" <patrick@quintiq.nl>
Date: Fri, 19 Jan 2001 14:04:45 +0100
Subject: newbie: Include problem
I am playing around with jam and would like to solve the following problem:
My TOP is at GlobalProject. I have jamfiles in ProjectX and
SubProjectXX. The include files for ProjectX are located in the 'hdr'
subdirectory in ProjectX. The only way I get /I$(TOP)/Project1/hdr on
the command line is by specifying it with the SubDirHdrs rule in the jamfile
on the SubProject level. It would be more appropriate however to specify
it in the jamfile on the Project level because it is the same for all
SubProjects of the Project. I tried to put the SubDirHdrs rule on that
level, but it is overridden in the jamfile of the SubProject... How can
I add $(TOP)/Project1/hdr to the include path in the jamfile in
$(TOP)/Project1?
Is there any variable (HDRS?) which I can set directly?
Also I use cygwin and gcc is the compiler instead of cc. Therefore I
created a jamrules in $(TOP) which contains one line: 'C++ = gcc'. Is
that the right way to do it?
GlobalProject
Project1
SubProject11 (contains .cpp files)
SubProject12
SubProject13
hdr
SubProject11 (contains .h files)
SubProject12
SubProject13
Project2
SubProject21
SubProject22
SubProject23
hdr
SubProject21
SubProject22
SubProject23
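For a layout like the above, one sketch is a small helper rule in Jamrules;
since each SubDir invocation resets SUBDIRHDRS, each SubProject Jamfile calls
the helper after its own SubDir line (ProjectHdrs is a hypothetical name):

# In Jamrules (sketch)
rule ProjectHdrs
{
    # $(<) is the project directory name, e.g. Project1
    SubDirHdrs $(TOP) $(<) hdr ;
}

# In each SubProject Jamfile, e.g. Project1/SubProject11/Jamfile:
# SubDir TOP Project1 SubProject11 ;
# ProjectHdrs Project1 ;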
From: <boga@mac.com>
Date: Sun, 21 Jan 2001 10:37:56 -0000
Subject: Q: compiling multiple sources together
I'd like to compile multiple sources with one(!) compiler invocation.
My compiler supports compiling multiple sources at once, in the form:
cc $(CFLAGS) a.c b.c c.c -o bin/
this is the same as:
cc $(CFLAGS) a.c -o bin/a.c.o
cc $(CFLAGS) b.c -o bin/b.c.o
cc $(CFLAGS) c.c -o bin/c.c.o
It's just that the first one is much faster. I'd like to implement such an
optimization with jam. Is it possible?
I'd like to have a rule like:
OBJECTS = [ multiccompile $(SOURCES) : $(DESTDIR) ] ;
What has to work is:
1. Should handle INCLUDE rules on SOURCES
2. If only some of the objects need to be updated, only those sources should
be passed to the compiler.
3. $(SOURCES) might contain sources from different directories, but not
sources with the same name.
The framework for multiccompile is something like:
rule multiccompile {
    local OBJECTS ;
    for i in $(1) {
        local OBJECT = $(i:d=$(2)).o ;
        OBJECTS += $(OBJECT) ;
    }
    return $(OBJECTS) ;
}
What I've tried:
- I tried to use an action with the 'updated' modifier; it didn't work because
INCLUDES aren't handled.
- The compiler also supports reading options from a file (cc @filename), so I
tried to generate that file by echoing to it. It did not work,
because it generated actions like:
Echo $(CFLAGS) > compile.cmd
Echo a.c >> compile.cmd
cc @compile.cmd
Echo b.c >> compile.cmd
I did this with something like:
rule multiccompile {
    local OBJECTS ;
    for i in $(1) {
        local OBJECT = $(i:d=$(2)).o ;
        OBJECTS += $(OBJECT) ;
    }
    _starccompile $(OBJECTS) ;
    for i in $(1) {
        local OBJECT = $(i:d=$(2)).o ;
        _compile $(OBJECT) : $(i) ;
    }
    _endcompile $(OBJECTS) ;
    return $(OBJECTS) ;
}
Date: Sun, 21 Jan 2001 14:59:28 +0100
From: Arnt Gulbrandsen <arnt@trolltech.com>
Subject: Re: Q: compiling multiple sources together
You'll need a wrapper around your compiler so that jam can call it the
same way as it calls e.g. rm, and then you'll need an Object-like rule
that uses together and/or piecemeal the way e.g. the rm rule does.
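A minimal sketch of that second suggestion (rule and target names are
hypothetical; the 'together' action modifier merges the $(>) of all pending
invocations on the same target into one command line, the way Jambase's Clean
actions batch their rm):

rule MultiCc
{
    Depends $(<) : $(>) ;
}
actions together MultiCc
{
    cc $(CFLAGS) $(>) -o bin/
}

# invoked once per source against a single batch target, e.g.:
# for i in a.c b.c c.c { MultiCc batch-objs : $(i) ; }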
From: Stephen.Riehm@varetis.de
Date: Mon, 5 Feb 2001 18:16:37 +0100
Subject: Jam can't handle libraries on AIX?
we've been using jam on AIX 4.1 for some time now, and it's been working
great (which is why you haven't heard much from me :-)
However, now we're trying to use jam on AIX 4.3 - and it appears that jam
can't access the contents of library files any more - has anyone else seen
this problem before?
We tried using the binary from 4.1, a recompiled (jam 2.1.1) binary and a
brand new jam 2.3 binary, but they all show the same problems.
Any information would be greatly appreciated.
varetis COMMUNICATIONS GmbH, Munich, Germany
Date: Mon, 5 Feb 2001 12:22:45 -0800 (PST)
From: Matt Watson <mwatson@apple.com>
Subject: Re: Jam can't handle libraries on AIX?
Has AIX 4.3 changed the size of off_t, or the header hierarchy? We noticed
that the lack of a prototype for lseek() was causing archives to be read
incorrectly. I believe that this change was sent back upstream...
From: Stephen.Riehm@varetis.de
Date: Wed, 7 Feb 2001 15:08:26 +0100
Subject: updating from jam 2.1.1 to 2.3
we've been using a slightly modified version of jam for a few years
now, and upgrading to jam 2.3 isn't going all that well, as our
modifications are of course missing in the new version. Our
modifications were minimal, but I'd like to know if the same
functionality is now available in the current version.
We had the following extras:
A project path ($PRJPATH) was used to refer to multiple
development trees. (ie: <build_tree>:<private_tree>:<reference_tree>)
A routine was used to split a variable on any character
(similar to perl's split() function), this was used to split
PRJPATH into a list of paths (ie: a list of TOP directories).
(I believe such splitting is now standard for environment
variables whose name ends in PATH)
A second routine was added to jam to determine the directory
name relative to one of the TOP directories in $(PRJPATH).
Since jam could be started in a directory without a Jamfile
(i.e.: the Jamfile is only in the <reference_tree>) - the
Jambase set up a SEARCH for $(RELDIR)/Jamfile (SubDir thus
couldn't be used, because the Jamfile hasn't been found yet).
The effect was that a central source directory exists (the
<reference_tree>), in which the "official" code is placed. The
developers then work in a private directory (<private_tree>), again
only with source code. Finally, the developer has a third directory
(<build_tree>) where the build takes place. A complete "clean
re-build" simply requires this build directory to be deleted and a new call
to jam. The other side effect is that no-one can pollute the central
directory with objects etc., but everyone can build
using the same sources. No object or binary files are ever created in
the source directories - and no Jamfiles need to exist in the build directories.
My Question:
How do these concepts fit in to the current version of jam? Is it
possible to determine the difference between the current directory and
a parent directory, and then use this difference for searching for
Jamfiles in other trees?
If this is not the case, would you [Christopher] mind if I sent you my
patch for the ability to split a path into TOP and RELATIVE parts, for
inclusion in the next version of jam?
(PS: there was a little discussion of this in 1996, but it kinda died,
and I haven't been working on the build environment for at least 3
years now - so sorry if I'm a little out of touch)
From: Leon Glozman <leonid_g@schema.com>
Date: Thu, 8 Feb 2001 15:02:13 +0200
Subject: Link of many objects in WATCOM
I work in an NT WATCOM environment. For linking I use the wlink command. I have
many objects to link, but wlink can't take that many objects on the command
line. For this situation, I can write the object names into a file and wlink
will read the names from that file, as follows:
wlink @[file_name]
How can I do this in the Jamrules file?
Date: Thu, 8 Feb 2001 10:29:28 -0600 (CST)
Subject: Re: Link of many objects in WATCOM
We do it like this:
# vLink is identical to Link in Jambase, but explicitly uses $(<[1]) as
# the output file. This allows us to pass down extra targets that
# get ignored by the action, but makes jam think we're building them.
# This is used to let jam know that Link will build a .lib in addition
# to a .dll when building a .dll. See the vMainFromObjects rule.
#
# Notice the solution for the line too long problem
# create a file for the items, and use this trick, courtesy
# of Laura Wingerd to output the items
# This works due to the mix 'n match composition of macros
# by jam, each item in the extraobjects or $(>) is concatenated
# with the rest of the line. The period after the echo is
# ignored, I guess, but serves to make the whole thing one
# string. The newline macro splits it into individual lines.
actions vLink bind NEEDLIBS EXTRAOBJECTS {
copy nul: linkobjs.txt
echo.$(>)>>linkobjs.txt$(NEWLINE)
echo.$(EXTRAOBJECTS)>>linkobjs.txt$(NEWLINE)
set LIB=$(LIBPATH)$(EXTRALIBPATH)
$(LINK) $(LINKFLAGS) /out:$(<[1]) /PDB:$(<[1]:S=.pdb) $(COMLIBRESOURCE)
$(DEFEXPORT) $(UNDEFS) @linkobjs.txt $(NEEDLIBS) $(LINKLIBS)
$(RM) linkobjs.txt
}
newline is defined like this:
NEWLINE = "
" ; # used to break up long lines for echo to a file
Date: Fri, 9 Feb 2001 14:08:26 -0800 (PST)
From: Mark Lakata <lakata@mips.com>
Subject: jam as cad glue
I'm trying to use jam as CAD glue for chip design. That means I don't use
any of the built-in rules in Jambase; instead I've got my own set of rules and
actions.
One thing that is sorely missing is access to the shell, like the GNU
makefile $(shell cmd) feature. What I would like is something like this:
files = `cat filelist`
or
files = [ shell cat filelist ]
Is this feature there already? Has anyone hacked it in?
Subject: Re: jam as cad glue
From: Matt Armstrong <matt.armstrong@openwave.com>
Date: 09 Feb 2001 15:37:49 -0800
It is not there as far as I know, though jam 2.3 added support for
filename globbing.
But if the first line of your filelist file is:
files
and the last line is
;
then
include filelist ;
will work. ;-) Jam takes input from the environment and the jamfiles,
that's it.
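For example, a filelist written like this (a sketch; note the "=" after the
variable name, which jam's assignment syntax requires):

files =
a.c
b.c
c.c
;

After "include filelist ;", $(files) then holds a.c b.c c.c.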
Date: Tue, 13 Feb 2001 11:32:01 -0800 (PST)
From: Mark Lakata <lakata@mips.com>
Subject: 'system' built-in rule
I hacked together a built-in rule called 'system' which is like the
backtick operator in csh/perl, and the $(shell ...) command in GNU make.
The system command you run must output one list item per output line, each
line terminated with a newline character. The command is run under the
Bourne shell (/bin/sh).
This compiles on Solaris 2.6, perhaps on other unix machines too.
syntax:
variable = [ system "cmd arg arg ..." ] ;
example:
listOfFiles = [ system "find . -name '*.c' -mtime +1" ] ;
ECHO $(listOfFiles) ;
The purists will argue that this is a bad idea, since the output of the
system command is not reproducible, and therefore builds are time
dependent. Well, my answer is that I am not building an executable, I'm
gluing together 3rd party tools in a design verification flow.
# Make this addition to Jamfile
LINKLIBS += -lgen ;
Make these modifications to compile.c:
/* add this declaration after the line that declares builtin_flags */
static LIST *builtin_system( PARSE *parse, LOL *args );
/* add this to the compile_builtins() routine */
bindrule( "system" )->procedure =
parse_make( builtin_system, P0, P0, P0, C0, C0, 0 );
/* add this definition at the end; p2open/p2close come from libgen (-lgen) */
static LIST * builtin_system( PARSE *parse, LOL *args ) {
/* FIXME: check number of args */
LIST *cmd_list = L0;
LIST *l = lol_get( args, 0 );
char *cmd = l->string;
FILE* fp[2];
char buf[BUFSIZ];
if (p2open("/bin/sh", fp) == 0) {
write(fileno(fp[0]),cmd,strlen(cmd));
write(fileno(fp[0]),"\n",1);
fclose(fp[0]);
while (fgets(buf, BUFSIZ, fp[1]) != NULL) {
char* lastchr = &buf[strlen(buf)-1];
if (*lastchr == '\n') {
*lastchr = '\0';
} else {
if (strlen(buf) >= BUFSIZ - 1) {
fprintf(stderr,"FATAL: command \"%s\" generated line too long\n",cmd);
} else {
fprintf(stderr,"FATAL: command \"%s\" generated line with no newline termination\n",cmd);
}
exit( EXITBAD );
}
cmd_list = list_append( cmd_list,
list_new( L0, newstr( buf ) ) );
}
p2close(fp);
} else {
fprintf(stderr,"FATAL: failed to spawn /bin/sh\n");
exit( EXITBAD );
}
return cmd_list;
}
From: "Czura, Wojtek" <Wojtek.Czura@cognos.com>
Date: Tue, 13 Feb 2001 15:55:54 -0500
Subject: Build tree on VMS...
I started to install Jam 2.3 build environment on Alpha OpenVMS 7.1 box.
The build is large and deep and covers almost all existing platforms. We
want our Jamrules file to be in DEV:[dir.name.top], while builds span
between sources in top/our_group/lev1/lev2/.../levN and targets in
top/platform/public/lev1/.../levM. I have a problem setting the root
directory TOP which I use in local Jamfiles as: TOP = TOP: ;
If I use: define TOP dev:[dir.name.top.], I have access to all directories
in the build, except TOP itself. As a result, Jam will not start because
TOP:Jamrules. is not found.
If I use: define TOP dev:[dir.name.top], I have access to TOP:Jamrules. but
not to any of the branches.
If I don't use TOP at all, Jam tries to use relative paths
[-.-.-.-.vms.public.lev1..] and at some point the number of levels is
greater than VMS can handle and targets are not created or copied.
Included is a VMS listing which illustrates the problem:
$ pwd
PATH$SRVC:[SRVC.ME.SB.LEV1.LEV2.COMMON.COMMON]
$ sho log top*
"TOP" = "$1$DIA34:[SERVICES_AXP.SRVC.ME.SB.]"
"TOPSB" = "$1$DIA34:[SERVICES_AXP.SRVC.ME.SB]"
$ dir top
%DIRECT-E-OPENIN, error opening
$1$DIA34:[SERVICES_AXP.SRVC.ME.SB.][SRVC.ME.SB.LEV1.LEV2.COMMON.COMMON]*.*;*
as input
-RMS-E-DNF, directory not found
-SYSTEM-W-NOSUCHFILE, no such file
$ dir topsb
Directory $1$DIA34:[SERVICES_AXP.SRVC.ME.SB]
JAMRULES.;8 31 13-FEB-2001 14:03:31.09 (R,R,R,)
VMS.DIR;1 1 12-FEB-2001 13:04:13.54 (RWE,RWE,RWE,)
Total of 12 files, 322 blocks.
$ dir top:[vms]
Directory $1$DIA34:[SERVICES_AXP.SRVC.ME.SB.][VMS]
Total of 1 file, 2 blocks.
$ dir topsb:[vms]
%DIRECT-E-OPENIN, error opening TOPSB:[VMS] as input
-RMS-F-DIR, error in directory name
From: "Ducharme, Gregory" <Gregory.Ducharme@Cognos.COM>
Date: Thu, 15 Feb 2001 08:34:30 -0500
Subject: How do I do this in Jam?
I am stuck on what should be a trivial problem in Jam (it certainly is in
make): how do I create a simple rule to allow use of an arbitrary code
generator and still maintain dependencies?
The usage would be similar to:
Library X : a.c b.c ;
Generator b.c : b.txt : "command generating b.c from b.txt" ;
It's easy to make this work if LOCATE_TARGET and LOCATE_SOURCE are left at
the defaults (i.e. cwd). However, as soon as they are changed, the build
fails as follows:
jam
cannot make yadda!yadda!b.c
Generator locate!target!b.c (depends on files so
will run anyway)
skipping locate!target!b.o due to missing yadda!yadda!b.c
skipping locate!target!X due to missing locate!target!b.o
I've tried things like:
SEARCH on b.c = $(LOCATE_SOURCE) ;
but to no avail.
Has anyone solved this problem, and is willing to publish a solution in this newsgroup?
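One sketch that has worked for similar setups (rule body is hypothetical): the
"cannot make yadda!yadda!b.c" symptom usually means the two rules gristed b.c
differently, so running the Generator's target and source through FGristFiles
makes its b.c the same target the Library rule depends on, and MakeLocate
tells jam where to put it:

rule Generator
{
    # $(<) = generated file, $(>) = input, $(3) = command string
    local _t = [ FGristFiles $(<) ] ;   # same grist the Library rule will use
    local _s = [ FGristFiles $(>) ] ;
    MakeLocate $(_t) : $(LOCATE_SOURCE) ;
    SEARCH on $(_s) = $(SEARCH_SOURCE) ;
    Depends $(_t) : $(_s) ;
    COMMAND on $(_t) = $(3) ;
}
actions Generator
{
    $(COMMAND)
}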
Date: Tue, 20 Feb 2001 16:52:24 -0800
From: Donald Blackfield <dtb@cisco.com>
Subject: Missing jam 2.2 jambase rules in jambase for jam 2.3
I am somewhat new to jam and have been given the task of getting jam 2.3
"up and running". We have been using jam 2.2. I understand that jam 2.3 is
supposed to be backwards compatible with jam 2.2. However, I have found that
there are at least 4 rules that were defined in the Jambase for jam 2.2
which appear to be missing from the Jambase for jam 2.3.
Below is an excerpt from an e-mail detailing some of the differences that we
have found:
In jam 2.3, there are some rules defined to replace corresponding ones
in jam 2.2, since rules can return values in jam 2.3.
To keep compatibility with jam 2.2, the old rules are redefined in jam 2.3
as wrappers around their new peers.
The problem is that some of them are not redefined in the new Jambase.
Here's the list of all such rules.
Jam2.2 Jam2.3
======================================================================
addDirName FDirName
makeDirName FDirName
makeGristedName FGristSourceFiles
makeRelPath FRelPath
makeSuffixed FAppendSuffix
makeCommon _makeCommon !! not redefined !!
makeGrist FGrist !! not redefined !!
makeString FConcat !! not redefined !!
makeSubDir FSubDir !! not redefined !!
======================================================================
A quick fix is to put these 4 lines in the new Jambase
rule makeCommon { _makeCommon $(<) : $(>) ; }
rule makeGrist { $(<) = [ FGrist $(>) ] ; }
rule makeString { $(<) = [ FConcat $(>) ] ; }
rule makeSubDir { $(<) = [ FDirName $(>) ] ; }
Besides performing this "quick" fix, I am wondering why these rules have not
been defined. Have they been replaced by rules with a different name? Are we not
supposed to invoke these rules? If so, then why and what rules should we be
using instead? Have I "grabbed" the wrong version? I downloaded the jam-2.3.tar
file from your web site.
Any information which you can provide would be greatly appreciated.
From: Simon Cornish <simon.cornish@calix.com>
Date: Mon, 26 Feb 2001 13:41:54 -0800
Subject: Multiple recompiles of common object.
Jam 2.2.1 seems to have a bug determining dependencies of objects that are
required by a number of targets.
Consider the following Jamfile:
# Jamfile
TOP = . ;
Main A : a.c common.c ;
Main B : b.c common.c ;
# End Jamfile
Jam will compile common.c twice in order to build the targets A and B. This
seems wrong to me and running jam with maximum debugging yields no clues.
Even worse, only building one target (ie. running "jam A") still compiles
common.c twice!!
Any ideas how to avoid this? Building the common objects into a library is
not applicable for my target environment, but even if it was I think the
behaviour Jam exhibits here is incorrect.
Date: Mon, 26 Feb 2001 10:35:00 -0500
From: Beman Dawes <bdawes@acm.org>
Subject: Win 2K path in quotes?
Jambase doesn't seem to have quotes around paths based on MSVCNT, as
required by Windows for directory and file names with embedded spaces.
From: Grant_Glouser@palm.com
Date: Tue, 27 Feb 2001 12:13:22 -0800
Subject: Re: Multiple recompiles of common object.
I have encountered this many times. It's a common thing to try in a Jamfile.
The reason it happens is that common.c is passed to the Cc rule twice, and each
invocation of the Cc rule adds the Cc actions block to the target "common.o".
Main A : common.c ;
Objects common.c ;
Object common.o : common.c ;
Cc common.o : common.c ; # compile common.c (add the Cc
actions block to the actions for common.o)
Main B : common.c ;
Objects common.c ;
Object common.o : common.c ;
Cc common.o : common.c ; # compile common.c (add the Cc
actions block to the actions for common.o)
This happens during the evaluation of the Jamfiles, not during the processing of
the dependency graph. The target named "common.o" always has two Cc actions
blocks attached to it. So even if you just "jam A", common.o will be compiled twice.
This behavior is useful in some circumstances, which is why I hesitate to agree
that it's a bug. I think you want a library here, or else compile all the files
by hand, something like this:
Objects a.c b.c common.c ;
MainFromObjects A : a.o common.o ;
MainFromObjects B : b.o common.o ;
You could even modify the Jambase such that Main allows objects in $(>), which
makes the final Jamfile slightly cleaner, IMHO. We've done that to our
equivalent in-house rules, and it seems natural and obvious to me now to have a
Jamfile like this:
Objects common.c ;
Main A : a.c common.o ;
Main B : b.c common.o ;
Subject: Re: Multiple recompiles of common object.
From: Matt Armstrong <matt.armstrong@openwave.com>
Date: 27 Feb 2001 07:43:53 -0800
Some comments:
Don't set TOP like this, instead use SubDir like this:
SubDir TOP ;
That sets up a whole slew of other needed variables.
Instead of that, do this:
local common_src = common.c ;
local common_obj = $(common_src:S=$(SUFOBJ)) ;
Objects $(common_obj) ;
Main A : a.c $(common_obj) ;
Main B : b.c $(common_obj) ;
Date: Wed, 28 Feb 2001 16:28:29 +0100
From: David Turner <david.turner@freetype.org>
Subject: Jam with Windows 95/98
I've been a new jam user for a few weeks now, and it's
really a fascinating tool. Thanks a lot to Christopher Seiwald
and all other people involved in the development of Jam.
I have managed to patch jam to run under Windows 95/98
(mostly) correctly, but I don't know if my approach is the
correct one. I'd appreciate input on this, and I'd be
glad to contribute my changes to the main sources if
you find them suitable.
The current jam (2.3) binaries do not run correctly
under Windows 95/98, and there are several reasons for this:
- first of all, there is no shell named "cmd.exe" that
comes with this version of Windows. Instead, we have
the incredibly clumsy "command.com"
- if you patch "execunix.c" to use "command.com /c" instead
of "cmd.exe /c/q" when a Windows95/98 system is detected,
the following problems appear:
- the trailing newline ('\n') at the end of the
"string" variable used to hold the command line
is interpreted as an additional argument by command.com
that is passed to the action being launched.
Most programs (compilers/linkers) are unable to deal
with it, so a quick patch is also applied to strip the
newline before calling "command.com" (this is minor,
but was really tough to track !!)
- "command.com /c" seems to always return 0, even if the
command that was launched through it failed. Jam will
think that every action is successful, leading to really
messy builds.
- the "del" command doesn't accept multiple arguments, i.e.
something like "del a.o b.o" will not work, making
"jam clean" completely useless with this shell..
To overcome these problems, I have written a new source file
named "execnt.c" to be used under Windows 95/98 and NT. It
acts exactly like the normal "execunix.c" on NT, while having
a very special behaviour under Windows 95/98:
- it detects the commands built into "command.com" like
"del", "copy", "rename", etc.. and specifically
executes them through the ANSI "system" call
(which really invokes command.com)
- with the exception of "del" and "erase", which are
specially filtered in order to ignore any toggles/flags
and accept multiple targets/arguments..
- all other commands are called directly through a
synchronous "spawnvp" (which returns the program's
exit code). Given that W95/98 doesn't support multiple
processors, I don't think it's really a problem..
It seems to work very well here, given that I didn't need to
change anything in the Jambase, though I'm not exceptionally
proud of what I've done.
I have not tested this under a different shell, e.g. the Cygwin
bash one (though I wonder if it is supported ??)
I have also looked at the way actions and processes are handled
in GNU Make under Win32, but the method they chose involves
complex process setup/loading/invocations that would complicate
things. Even if they seem more generic and "clean", it's
a _lot_ more code to write..
In any case, I'd be glad to send my modifications to anyone
who would like them.
Date: Wed, 28 Feb 2001 19:16:56 +0100
From: David Turner <david.turner@freetype.org>
Subject: Re: Jam with Windows 95/98
I'm the main author of a _very_ portable library (see www.freetype.org),
and we need to support a large set of build platforms. I have managed
to build a rather strange build system with GNU Make and a set of
specially crafted sub-Makefiles, but the end result is _really_
hard to understand and maintain, even if it supports a wide variety
of compilers and platforms.
Jam is able to do the same thing in a dramatically simpler way, and
it's also capable of automatically compiling executables and running
them. This is really useful for automated test suites :-)
I'm now trying to scrap our old build system in favour of Jam, but I need
to ensure that it supports all the platforms we currently do..
This means that I also intend to work on the Jambase to support
the following toolsets (on Windows):
- Intel C/C++
- Watcom C/C++
- GCC (MingW)
- Win32-LCC
The only thing I've done about it for now is in the "del"/"erase"
filter, which effectively supports double quotes in filenames.
Otherwise, I believe that more work is required in jam itself, but
I've not taken the pain to solve this (relatively minor) problem.
I just followed the naming convention used by "filent.c" and "pathnt.c" :-)
Of course, changing this wouldn't be too difficult but the rest of Jam
uses "NT" quite extensively (in macros, in the Jambase, etc..)
And even if MS is phasing out the NT name, I believe we're more
interested in working tools than marketing fluff..
PS: On a related note, has anyone considered the ability to automatically
generate Makefiles from Jam ? I know, I know, it's really a strange
proposal :-)
Date: Wed, 28 Feb 2001 22:25:45 +0100
From: Arnt Gulbrandsen <arnt@trolltech.com>
Subject: Re: Jam with Windows 95/98
Fairly easily done, if all you want is the thing I anticipate wanting.
Say, a thirty-line perl hack.
1. Run jam clean, taking note of the commands executed.
2. touch every file.
3. Run jam -v, taking note of all commands executed.
4. Write a Makefile with two targets: 'clean' that does what jam did
during 'jam clean' and removes all files created during the 'jam -v',
and 'all' that depends on every file whose atime changed during the
'jam -v', and whose commands are all the commands jam executed during
'jam -v'.
Evil? Yes. But it'll produce a working makefile, enough that people who
don't have jam can compile the thing.
Date: Thu, 1 Mar 2001 12:57:25 -0800
From: "Neil Okamoto" <neil_o@my-deja.com>
Subject: question about SubDir* rules
I'm trying to understand something about jam's SubDir* rules.
As long as I build from the top level directory everything is fine.
However, if I try to build from a subdirectory in the project,
any dependencies that are not beneath the current directory in the
hierarchy are not found. I'd like to be able to build from any
arbitrary directory in the project.
Date: Fri, 02 Mar 2001 20:32:30 +0100
From: David Turner <david.turner@freetype.org>
Subject: Re: Jam with Windows 95/98
Well, I was thinking about generating the Makefiles from Jam itself.
Given that it knows all about dependencies, and that it has pretty good
string manipulation routines, it shouldn't be that hard..
(Also, relying on Perl or Python on the host isn't something I really
enjoy for such a simple task).
My biggest problem is that I want to preserve the use of macros
like $(CC), $(RM), $(CP), etc.. in the Makefile rules, as well
as their definitions (which depend on the toolset).. The solution
you're suggesting wouldn't allow this..
Date: Fri, 02 Mar 2001 20:41:54 +0100
From: David Turner <david.turner@freetype.org>
Subject: Jam binaries for Windows 95/98 and OS/2
I've finally publicly posted my changes. Have a look at the following addresses:
ftp://ftp.freetype.org/pub/contrib/jam/jam-win.zip
contains a pre-compiled Jam binary for Windows NT
and Windows 95/98, that supports the following
compilers: Visual C++, Intel C++, Borland C++,
Watcom C++, Mingw (gcc) as well as LCC-Win32
ftp://ftp.freetype.org/pub/contrib/jam/jam-os2.zip
contains a pre-compiled Jam binary for OS/2, that
supports the EMX and Watcom compilers/toolsets
(Jam 2.3 only supports Watcom). VisualAge is in
the works..
ftp://ftp.freetype.org/pub/contrib/jam/jam-src.zip
is my version of the Jam sources, based on version
2.3. they were used to build the two binaries above
I'm releasing these files because several people have
already asked me for the W95/98 binaries, and because
I want to use them as soon as possible in order to get
rid of the ugly build system in FreeType 2.
Note that it's just a "quick hack", that many things may
be re-written later, and that I really hope that these
changes will be accepted by the Jam community.
I'm sorry for not using Perforce yet, not sending patches,
etc.. If these changes do not reflect the current developments
in Jam, I'd be glad to adapt them in any way consistent with
what Christopher thinks best..
I'm attaching the README for the new sources, as it explains the
rather important changes that were adopted here..
This is a special version of the Jam/MR tool.
For more information about Jam, see the file README.ORG, as well as:
http://www.perforce.com/jam/jam.html
Note that this code is based on Jam release 2.3
However, it includes the following enhancements:
- it runs under Windows 95/98 (mostly) correctly
(jam 2.3 only runs under Windows NT, as well as
UNIX, OS/2, BeOS, MacOS, VMS, etc..)
this required writing a new source file called
"execnt.c" ("execunix.c" is still used on OS/2 though)
- it runs under OS/2 with either EMX (gcc) or Watcom C/C++
- it contains a new builtin rule named HDRMACRO
this rule is used to indicate that a source file
contains macro definitions that are used in
#include statements, like the following:
public.h:
#define MY_FILE_H <myfile.h>
#define OTHER_FILE_H "otherfile.h"
such files are parsed when a line like:
HDRMACRO public.h ;
is found in a Jamfile, in order to detect and record
the macro definitions in a global dictionary.
when other source files are parsed for #include statements,
lines like:
#include MY_FILE_H
are resolved through the global macro dictionary.
this new rule is required to compile FreeType 2 with
Jam, as well as a few other interesting projects..
(it is implemented by "hdrmacro.c", some changes were also
necessary in "compile.c" and "headers.c")
- it supports the following toolsets on Windows 95/98 and NT:
- MS Visual C/C++
- Intel C/C++
- Watcom C/C++
- Borland C/C++
- LCC-Win32
- MingW (GCC for Windows, but _not_ Cygwin)
even though it is compatible with the old jam 2.3 windows
support (i.e. you can define MSVC, MSVCNT or BCCROOT), the
toolset is now selected through the following scheme:
- define one of the following environment variables, with the
appropriate value according to this list:
Variable Toolset Description
BORLANDC Borland C++ BC++ install path
VISUALC Microsoft Visual C++ VC++ install path
VISUALC16 Microsoft Visual C++ 16 bit VC++ 16 bit install
INTELC Intel C/C++ IC++ install path
WATCOM Watcom C/C++ Watcom install path
MINGW MinGW (gcc) anything..
LCC Win32-LCC LCC-Win32 install path
- define the JAM_TOOLSET environment variable with the name
of the toolset variable you want to use.
as an example, you can use the Microsoft Visual C++ compiler with
something like:
set VISUALC=C:\Visual6 (really the path to the VC++ installation)
set JAM_TOOLSET=VISUALC
jam
a similar scheme is used under OS/2 to select between EMX and WATCOM
note that Watcom support is not fully tested, especially with shared
libraries..
I plan to add support for IBM Visual Age C/C++ to both the Windows
and OS/2 ports.
- I added a new variable expansion modifier, named "T", that is
used to toggle "\" and "/" in strings. This is required to correctly
support GCC on Windows. As an example:
VAR = "c:\mydir\myfile" ;
ECHO $(VAR:T) ;
will print:
c:/mydir/myfile
(this was a quick hack in "expand.c")
Hoping that these changes will be integrated into the official version
of Jam, or at least that they'll drive the inclusion of similar features.
Don't forget that all of this is a quick hack over the Jam 2.3 sources,
and that some cleanup should certainly occur in the source code (HDRMACRO
might as well be renamed to something different, by the way).
Of course, everything is released under the Jam license..
Date: Fri, 2 Mar 2001 22:03:30 +0100
From: Arnt Gulbrandsen <arnt@trolltech.com>
Subject: Re: Jam with Windows 95/98
Probably about as hard as its current job, I'd guess.
Might I ask why you'd want to do this?
Date: Sun, 04 Mar 2001 09:57:59 -0500
From: Beman Dawes <bdawes@acm.org>
Subject: Re: Win 2K path in quotes?
>Jambase doesn't seem to have quotes around paths based on MSVCNT, as
>required by Windows for directory and file names with embedded spaces.
>What am I missing?
What I was missing was the obvious workaround of using the old 8.3 names
that Windows generates for each real directory and file name. Ugly, but it
does appear to work.
It might be a good idea to add something to the docs. Perhaps add a final
paragraph to jam.html - LANGUAGE - Lexical Features:
Directory and file names with embedded whitespace characters will not work
correctly, because Jam treats the whitespace as a token separator. The
workaround for MS Windows is to use the 8.3 form of names. For example,
"c:\Program Files\Microsoft Visual Studio\vc98" might become
"c:\progra~1\micros~2\vc98". The exact translation to 8.3 names is system dependent.
Date: Mon, 05 Mar 2001 07:53:01 -0500
From: Beman Dawes <bdawes@acm.org>
Subject: Build multiple flavors of a library?
It is common to need to build multiple flavors of a library: for example,
release and debug versions, each with single-threaded, multi-threaded, and
dynamic linking. 2 * 3 = 6 libraries in total. Each should go in a different directory.
How would I go about doing this with a single invocation of Jam?
From: Leon Glozman <leon.glozman@schema.com>
Date: Mon, 5 Mar 2001 14:56:32 +0200
Subject: How can I count object files for any exe or lib in Jam language?
How can I count object files for any exe or lib in Jam language?
Date: Tue, 6 Mar 2001 06:14:51 -0800
From: "Chris Antos" <chrisant@Exchange.Microsoft.com>
Subject: Precompiled headers
I can't quite get the dependencies right with Jam for precompiled
headers (MSVCNT). Anyone already have rules with proper dependencies
for the pch?
The closest I've gotten was that it worked fine as long as I let it
build "all", but if I tried to build a specific "foo.exe" then it tried
to build the Main foo : foo.cpp ; object prior to the PCH (that's the
only dep problem remaining, but it's a real bugger trying to eradicate
it).
Moving foo.cpp into a Library FOOLIB : foo.cpp otherfiles.cpp ; kind of
works except that the linker complains because I'm only linking from
libs so it doesn't know for sure the right target platform (ok,
whatever). I can work around that a few different ways, but the bottom
line is I should just be able to get the dependencies right and have the
whole problem go away.
In an attempt to get the deps totally correct, I've tried making
surgical changes to jambase to introduce support for a SubDirPrecompHdr
rule that sets up some per-subdir variables used by the C++ rule (to
indicate the PCH name, etc) and a new Pch rule (to create the PCH).
(for whatever reason MSVC is choosing to often ignore the PCH if I use
the /YX flag for automatic pch). This approach is not working very
well, because although it has the deps for the pch file right, the deps
are busted for everything else such that it recompiles everything (minus
the pchs) each time.
From: Leon Glozman <leon.glozman@schema.com>
Date: Wed, 7 Mar 2001 15:06:10 +0200
Subject: Different compilation flags for dll & exe creation
I work on a Watcom project. I want to create some DLLs & executables. The
problem is that the compilation flags for DLLs & executables are different (the
DLL compilation flags include -bd, the exe compilation flags don't), but the
C++ rule in Jam doesn't know whether it is creating objects for a DLL or for an
executable. How can I solve the problem?
Date: Fri, 9 Mar 2001 21:26:33 -0800
From: "Chris Antos" <chrisant@Exchange.Microsoft.com>
Subject: Multiple target files from one action?
I finally seem to have gotten the dependencies right for MSVC
precompiled headers (.pch) and interface files (.idl). It was no small
task, so I feel compelled to share: if anyone wants the rules/actions,
let me know. In particular, they work for both "jam -j" and "jam
target". Very minor tweaks to the C++, Library, and SubDir rules.
From: "Jan Mikkelsen" <janm@transactionware.com>
Subject: Re: Multiple target files from one action?
Date: Sat, 10 Mar 2001 17:07:13 +1100
I'm certainly interested in seeing the rules.
I'm new to Jam, and I'm creating rules for Antlr (a compiler compiler)
which also generates multiple targets from a single action. I haven't
really started to think about it yet, so looking at a working approach
would certainly be useful.
From: john@nanaon-sha.co.jp (John Belmonte)
Date: Sun, 11 Mar 2001 19:18:50 +0900
Subject: flag ordering problem / target-specific variables
The current flag ordering for the As, Cc, and C++ rules is:
<target-specific flags> <global flags> <subdir flags>
With this ordering it's not possible for target-specific flags to override
global/subdir flags. (I'm assuming that most compilers are like gcc in that
later flags have precedence.) For example I have my global C++FLAGS set to
"-fno-rtti" but on a certain target I would like to override this with
"-frtti". Hence I would like the ordering to be:
<global flags> <subdir flags> <target-specific flags>
but Jam does not support this neatly because there is no way to access
target-specific variables outside an action other than indirectly with the
+= operator.
I know the workaround would be to put the global/subdir flags under a
different variable and have the action take care of the ordering, but I'm wondering...
Wouldn't it be useful if Jam provided a way to read target-specific
variables? The current situation where append is possible (via +=), but
read is not, seems a bit strange.
Subject: RE: Multiple target files from one action?
Date: Mon, 12 Mar 2001 15:26:54 -0800
From: "Chris Antos" <chrisant@Exchange.Microsoft.com>
As far as I can tell, Jam seems to have a bug in how it treats a
rule/action that updates multiple targets from one source. If a
rule/action updates say 4 targets (for example, the Midl compiler for
interface definitions produces one .h file and three .c files), "jam
-j2" will run the Midl action, but will simultaneously start running the
Cc action to update the object files from the 3 generated .c files. But
of course the .c files do not yet exist.
I worked around this by inserting dependencies to fake Jam into
believing that only the .h file is generated by the Midl rule, and that
the three .c files are dependent on the .h file. I also use RmTemps on
the .c files, which ought to generally avoid the problem of not knowing
how to generate them independently if the .h file already exists.
However, now I've run into a third instance of this bug. (The first was
with .pch/.obj, the second was with .h/.c/.idl). Now I'm trying to hook
up .sbr/.bsc Extended Browse Symbols to be automatically built, but
unfortunately for me the compiler generates the .sbr file prior to
generating the .obj file. In order to apply my earlier trick, I would
need to make .obj depend on .sbr depend on .cpp, but that will cause
havoc all around. I guess I'll have to make the Cc/Cpp actions touch
the .sbr file after it's updated, to fake the dependencies in such a way
that Jam can understand them.
In any case, the source for my rules for the .pch/.idl files will be
mostly not useful to you. The only part I can see that would be useful
is just the trick about telling Jam something like this:
Depends parenttarget : fourgeneratedtargets ;
Depends threeofthetargets : theoneparticulartarget ;
UpdateRuleFor theoneparticulartarget : thesourcetarget ;
As indicated above, this has the side effect that if
theoneparticulartarget exists but one or more of threeofthetargets is
missing, Jam doesn't know how to build threeofthetargets. In my case
this is acceptable, and even unlikely since threeofthetargets are really
just temporary files anyway, and I enforce this with "RmTemps
parenttarget : threeofthetargets ;".
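For what it's worth, spelled out for the Midl case described above, the trick might look something like this in a Jamfile (the Midl rule name and the generated file names are hypothetical, used here only to make the pattern concrete):

```
# Pretend only foo.h is produced by the Midl action:
Midl foo.h : foo.idl ;
# Claim the three generated .c files are derived from foo.h:
Depends foo_p.c foo_i.c dlldata.c : foo.h ;
# Treat the .c files as temporaries, so a foo.h without its .c
# files is unlikely to be seen:
RmTemps parenttarget : foo_p.c foo_i.c dlldata.c ;
```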
Longer term, I hope to track down exactly why Jam doesn't deal well with
multiple targets generated by a single action. Hopefully it will be
something relatively trivial, in which case I'll see what I can do to fix it.
Date: Mon, 12 Mar 2001 18:02:09 -0600 (CST)
Subject: Re: Multiple target files from one action?
I don't think it's a bug - jam just executes actions in the order
that is necessary, and since it waits for an action to complete
before doing the next, things are fine. -jn is a hack which just
makes it issue n actions at one time. They are issued in the
correct order, but it doesn't have the logic to wait for important
stuff to finish. I think it alludes to this in the description for -j.
One approach is to make stuff dependent upon various phases, so that
a jam -j8 obj is fine because all source is already generated.
Subject: RE: Multiple target files from one action?
Date: Mon, 12 Mar 2001 16:56:31 -0800
From: "Chris Antos" <chrisant@Exchange.Microsoft.com>
You're right that -jn starts actions in the correct order. But both
code inspection and empirical testing show that the way in which it
determines the correct order is via the dependencies. Pretty much any
build that successfully completes will at one or more points starve the
-jn pipeline. For example, "jam -j8" waits until all objects are
updated before it starts the Link action. If there are some actions
that don't depend on the target of the Link action, then it will start
actions to work on those while the Link is in progress, but if there are
no more actions, it starves (AFAICT) temporarily.
This works successfully and consistently at several points during our
build, so Jam seems to basically have it right, except when the
dependency graph says that multiple targets are updated by a single
action (i.e., the dependency is M:1 or M:M). In that case it seems to
not wait at all, or perhaps it is only waiting for the first target to
have been updated; I haven't investigated that detail yet.
For example:
rule CreateFiles {
    local cfiles = x.c y.c z.c ;
    Depends $(<) : $(cfiles) ;
    Depends $(cfiles) : $(>) ;
    LibraryFromObjects xyzlib : $(cfiles) ;
}
actions CreateFiles {
    $(CREATEFILES) /input=$(>) /out1=$(<[1]) /out2=$(<[2]) /out3=$(<[3])
}
CreateFiles myprogram : createfiles.src ;
Since all 3 $(cfiles) depend on $(>), success is dubious if it starts
the CreateFiles action and simultaneously starts the LibraryFromObject
rule (or more precisely the Cc actions). It should wait to build the
dependents (i.e. $(cfiles)) until the action that was invoked on $(>)
(i.e. createfiles.src) is completed. Again, because it does wait
correctly when an action updates only one target, I interpret this as a
limitation in the wait logic (I'll try to be P.C. and avoid calling it a
bug ;-). Since this limitation makes things quite difficult sometimes,
I plan to investigate it and hopefully enhance the wait logic to
successfully support actions with M:M dependency, rather than only 1:M
as currently.
I've recently run into a situation where at best I will need to Touch
files to help massage the dependency graph into a shape that Jam can
understand with its current 1:M wait logic.
Disclaimer: I'm expressing this in empirical terms based on what I'm
seeing. Perhaps in the end this will turn out to be as simple as a
flipped boolean somewhere. In the code, this may not strictly be a
limitation in the wait logic, but the manifest symptom is that the wait
logic needs improvement.
From: Jean-Daniel.Aussel@bull.net
Date: Tue, 13 Mar 2001 11:38:30 +0100
Subject: substitution built-in command for jam
For those interested, I added a substitution built-in command for jam,
supporting regular expressions. The following changes have been tested only
on Windows NT.
The subst built-in syntax is [ subst <sourcestring> <pattern>
<substitutionstring> ].
A sample use of the subst built-in is
#--- JamFile --- start ---
# simple replacement
#
SOURCESTRING = x:\\private\\@WORKSPACE@\\src\\test ;
TARGETSTRING = [ subst $(SOURCESTRING) @WORKSPACE@ dummy ] ;
ECHO $(TARGETSTRING) ;
# regular expression matching
#
SOURCEPATH = x:\\private\\dummy\\src\\test ;
PATHWITHOUTDRIVE = [ subst $(SOURCEPATH) ([A-Za-z]:)(.*) "$2" ] ;
ECHO $(PATHWITHOUTDRIVE) ;
DRIVEWITHOUTPATH = [ subst $(SOURCEPATH) ([A-Za-z]:)(.*) "$1" ] ;
ECHO $(DRIVEWITHOUTPATH) ;
#--- Jamfile --- end ---
Which results in the output:
x:\private\dummy\src\test
\private\dummy\src\test
x:
Building none
...found 11 target(s)...
To implement the subst built-in:
1. Add the following file to your jam sources
/*
* Permission is granted to anyone to use this software for any
* purpose on any computer system, and to redistribute it freely,
* without restrictions.
* ALL WARRANTIES ARE HEREBY DISCLAIMED.
*/
/*
* _substcmd.c implements subst built-in command
*/
#include "lists.h"
#include "newstr.h"
#include "parse.h"
static LIST *_subst_list;
LIST *
builtin_subst(
PARSE *parse,
LOL *args )
{
/* FIXME: check number of args */
LIST* l;
char* pszIn;
char* pszToReplace;
char* pszReplacement;
char szOut[4096];
int iOnce;
void __cdecl replace( const char* szIn, const char* szOld, const char*
szNew, char* szOut );
_subst_list = L0;
for( iOnce=1; iOnce; iOnce-- ) { /* one pass; break out early if an argument is missing */
l = lol_get( args, 0 );
if(!l) { break;}
pszIn = l->string;
l = list_next( l );
if(!l) { break; }
pszToReplace = l->string;
l = list_next( l );
if(!l) { break; }
pszReplacement = l->string;
replace( pszIn, pszToReplace, pszReplacement, szOut );
_subst_list = list_append( _subst_list,
list_new( L0, newstr( szOut ) ) );
}
return _subst_list;
}
2. Add this code to the end of compile_builtins() in compile.c:
{
extern LIST* builtin_subst( PARSE *parse, LOL *args );
bindrule( "subst" )->procedure =
bindrule( "SUBST" )->procedure =
parse_make( builtin_subst, P0, P0, P0, C0, C0, 0 );
}
3. Add the following four files to your jam sources: _replace.cpp,
_perlclass.cpp, _perlclass.h, _regexp.h. The last three files were
written by Jim Morris, are public domain, and are available as part of
the toogl tool on the SGI web site.
/*
* Permission is granted to anyone to use this software for any
* purpose on any computer system, and to redistribute it freely,
* without restrictions.
* ALL WARRANTIES ARE HEREBY DISCLAIMED.
*/
/*
* _replace.cpp string replacement function supporting regular expressions;
* uses the PerlString class written by Jim Morris of SGI as
* part of the toogl tool. See copyright in _perlclass.cpp
*/
#include "_perlclass.h"
extern "C" void replace(
const char* pszIn,
const char* pszOld,
const char* pszNew,
char* pszOut )
{
int bHasChanged;
PerlString str( pszIn );
PerlString strOld( pszOld );
PerlString strNew( pszNew );
bHasChanged=str.s( strOld, strNew );
strcpy( pszOut, str );
}
/*
* Version 1.6
* Kudos to Larry Wall for inventing Perl
* Copyrights only exist on the regex stuff, and all
* have been left intact.
* The only thing I ask is that you let me know of any nifty fixes or
* additions.
* Credits:
* I'd like to thank Michael Golan <mg@Princeton.EDU> for his critiques
* and clever suggestions. Some of which have actually been implemented
*/
#include <iostream.h>
#include <string.h>
#include <malloc.h>
#include <stdio.h>
#ifdef __TURBOC__
#pragma hdrstop
#endif
#include "_perlclass.h"
// VarString Implementation
VarString& VarString::operator=(const char *s) {
int nl= strlen(s);
if(nl+1 >= allocated) grow((nl-allocated)+allocinc);
assert(allocated > nl+1);
strcpy(a, s);
len= nl;
return *this;
}
VarString& VarString::operator=(const VarString& n) {
if(this != &n){
if(n.len+1 >= allocated){ // if it is not big enough
# ifdef DEBUG
fprintf(stderr, "~operator=(VarString&) a= %p\n", a);
# endif
delete [] a; // get rid of old one
allocated= n.allocated;
allocinc= n.allocinc;
a= new char[allocated];
# ifdef DEBUG
fprintf(stderr, "operator=(VarString&) a= %p, source= %p\n", a,n.a);
# endif
}
len= n.len;
strcpy(a, n.a);
}
return *this;
}
void VarString::grow(int n) {
if(n == 0) n= allocinc;
allocated += n;
char *tmp= new char[allocated];
strcpy(tmp, a);
#ifdef DEBUG
fprintf(stderr, "VarString::grow() a= %p, old= %p, allocinc= %d\n", tmp, a, allocinc);
fprintf(stderr, "~VarString::grow() a= %p\n", a);
#endif
delete [] a;
a= tmp;
}
void VarString::add(char c) {
if(len+1 >= allocated) grow();
assert(allocated > len+1);
a[len++]= c; a[len]= '\0';
}
void VarString::add(const char *s) {
int nl= strlen(s);
if(len+nl >= allocated) grow(((len+nl)-allocated)+allocinc);
assert(allocated > len+nl);
strcat(a, s);
len+=nl;
}
void VarString::add(int ip, const char *s) {
int nl= strlen(s);
if(len+nl >= allocated) grow(((len+nl)-allocated)+allocinc);
assert(allocated > len+nl);
memmove(&a[ip+nl], &a[ip], (len-ip)+1); // shuffle up
memcpy(&a[ip], s, nl);
len+=nl;
assert(a[len] == '\0');
}
void VarString::remove(int ip, int n) {
assert(ip+n <= len);
memmove(&a[ip], &a[ip+n], len-ip); // shuffle down
len-=n;
assert(a[len] == '\0');
}
//
// PerlString stuff
//
// assignments
PerlString& PerlString::operator=(const PerlString& n) {
if(this == &n) return *this;
pstr= n.pstr;
return *this;
}
PerlString& PerlString::operator=(const substring& sb) {
VarString tmp(sb.pt, sb.len);
pstr= tmp;
return *this;
}
// concatenations
PerlString PerlString::operator+(const PerlString& s) const {
PerlString ts(*this);
ts.pstr.add(s);
return ts;
}
PerlString PerlString::operator+(const char *s) const {
PerlString ts(*this);
ts.pstr.add(s);
return ts;
}
PerlString PerlString::operator+(char c) const {
PerlString ts(*this);
ts.pstr.add(c);
return ts;
}
PerlString operator+(const char *s1, const PerlString& s2) {
PerlString ts(s1);
ts = ts + s2;
// cout << "s2[0] = " << s2[0] << endl; // gives incorrect error
return ts;
}
// other stuff
char PerlString::chop(void) {
int n= length();
if(n <= 0) return '\0'; // empty
char tmp= pstr[n-1];
pstr.remove(n-1);
return tmp;
}
int PerlString::index(const PerlString& s, int offset) {
for(int i=offset;i<length();i++){
if(strncmp(&pstr[i], s, s.length()) == 0) return i;
}
return -1;
}
int PerlString::rindex(const PerlString& s, int offset) {
if(offset == -1) offset= length()-s.length();
else offset= offset-s.length()+1;
if(offset > length()-s.length()) offset= length()-s.length();
for(int i=offset;i>=0;i--){
if(strncmp(&pstr[i], s, s.length()) == 0) return i;
}
return -1;
}
PerlString::substring PerlString::substr(int offset, int len) {
if(len == -1) len= length() - offset; // default use rest of string
if(offset < 0){
offset= length() + offset; // count from end of string
if(offset < 0) offset= 0; // went too far, adjust to start
}
return substring(*this, offset, len);
}
// this is private
// it shrinks or expands string as required
void PerlString::insert(int pos, int len, const char *s, int nlen) {
if(pos < length()){ // nothing to delete if not true
if((len+pos) > length()) len= length() - pos;
pstr.remove(pos, len); // first remove subrange
}else pos= length();
VarString tmp(s, nlen);
pstr.add(pos, tmp); // then insert new substring
}
int PerlString::m(Regexp& r) {
return r.match(*this);
}
int PerlString::m(const char *pat, const char *opts) {
int iflg= strchr(opts, 'i') != NULL;
Regexp r(pat, iflg?Regexp::nocase:0);
return m(r);
}
int PerlString::m(Regexp& r, PerlStringList& psl) {
if(!r.match(*this)) return 0;
psl.reset(); // clear it first
Range rng;
for(int i=0;i<r.groups();i++){
rng= r.getgroup(i);
psl.push(substr(rng.start(), rng.length()));
}
return r.groups();
}
int PerlString::m(const char *pat, PerlStringList& psl, const char *opts) {
int iflg= strchr(opts, 'i') != NULL;
Regexp r(pat, iflg?Regexp::nocase:0);
return m(r, psl);
}
//
// I know! This is not fast, but it works!!
//
int PerlString::tr(const char *sl, const char *rl, const char *opts) {
if(length() == 0 || strlen(sl) == 0) return 0;
int cflg= strchr(opts, 'c') != NULL; // thanks Michael
int dflg= strchr(opts, 'd') != NULL;
int sflg= strchr(opts, 's') != NULL;
int cnt= 0, flen= 0;
PerlString t;
unsigned char lstc= '\0', fr[256];
// build search array, which is a 256 byte array that stores the index+1
// in the search string for each character found, == 0 if not in search
memset(fr, 0, 256);
for(int i=0;i<strlen(sl);i++){
if(i && sl[i] == '-'){ // got a range
assert(i+1 < strlen(sl) && lstc <= sl[i+1]); // sanity check
for(unsigned char c=lstc+1;c<=sl[i+1];c++){
fr[c]= ++flen;
}
i++; lstc= '\0';
}else{
lstc= sl[i];
fr[sl[i]]= ++flen;
}
}
int rlen;
// build replacement list
if((rlen=strlen(rl)) != 0){
for(i=0;i<rlen;i++){
if(i && rl[i] == '-'){ // got a range
assert(i+1 < rlen && t[t.length()-1] <= rl[i+1]); // sanity check
for(char c=t[i-1]+1;c<=rl[i+1];c++) t += c;
i++;
}else t += rl[i];
}
}
// replacement string that is shorter uses last character for rest of string
// unless the delete option is in effect or it is empty
while(!dflg && rlen && flen > t.length()){
t += t[t.length()-1]; // duplicate last character
}
rlen= t.length(); // length of translation string
// do translation, and deletion if dflg (actually falls out of length of t)
// also squeeze translated characters if sflg
PerlString tmp; // need this in case dflg, and string changes size
for(i=0;i<length();i++){
int off;
if(cflg){ // complement, ie if NOT in f
char rc= !dflg ? t[t.length()-1] : '\0'; // always use last character for replacement
if((off=fr[(*this)[i]]) == 0){ // not in map
cnt++;
if(!dflg && (!sflg || tmp.length() == 0 || tmp[tmp.length()-1] != rc))
tmp += rc;
}else tmp += (*this)[i]; // just stays the same
}else{ // in fr so substitute with t, if no equiv in t then delete
if((off=fr[(*this)[i]]) > 0){
off--; cnt++;
if(rlen==0 && !dflg && (!sflg || tmp.length() == 0 ||
tmp[tmp.length()-1] != (*this)[i])) tmp += (*this)[i]; // stays the same
else if(off < rlen && (!sflg || tmp.length() == 0 ||
tmp[tmp.length()-1] != t[off]))
tmp += t[off]; // substitute
}else tmp += (*this)[i]; // just stays the same
}
}
*this= tmp;
return cnt;
}
int PerlString::s(const char *exp, const char *repl, const char *opts) {
int gflg= strchr(opts, 'g') != NULL;
int iflg= strchr(opts, 'i') != NULL;
int cnt= 0;
Regexp re(exp, iflg?Regexp::nocase:0);
Range rg;
if(re.match(*this)){
// OK I know, this is a horrible hack, but it seems to work
if(gflg){ // recursively call s() until applied to whole string
rg= re.getgroup(0);
if(rg.end()+1 < length()){
PerlString st(substr(rg.end()+1));
// cout << "Substring: " << st << endl;
cnt += st.s(exp, repl, opts);
substr(rg.end()+1)= st;
// cout << "NewString: " << *this << endl;
}
}
if(!strchr(repl, '$')){ // straight, simple substitution
rg= re.getgroup(0);
substr(rg.start(), rg.length())= repl;
cnt++;
}else{ // need to do subexpression substitution
char c;
const char *src= repl;
PerlString dst;
int no;
while ((c = *src++) != '\0') {
if(c == '$' && *src == '&'){
no = 0; src++;
}else if(c == '$' && '0' <= *src && *src <= '9')
no = *src++ - '0';
else no = -1;
if(no < 0){ /* Ordinary character. */
if(c == '\\' && (*src == '\\' || *src == '$'))
c = *src++;
dst += c;
}else{
rg= re.getgroup(no);
dst += substr(rg.start(), rg.length());
}
}
rg= re.getgroup(0);
substr(rg.start(), rg.length())= dst;
cnt++;
}
return cnt;
}
return cnt;
}
PerlStringList PerlString::split(const char *pat, int limit){
PerlStringList l;
l.split(*this, pat, limit);
return l;
}
//
// PerlStringList stuff
//
int PerlStringList::split(const char *str, const char *pat, int limit){
Regexp re(pat);
Range rng;
PerlString s(str);
int cnt= 1;
if(*pat == '\0'){ // special empty string case splits entire thing
while(*str){
s= *str++;
push(s);
}
return count();
}
if(strcmp(pat, "' '") == 0){ // special awk case
char *p, *ws= " \t\n";
TempString t(str); // can't hack users data
p= strtok(t, ws);
while(p){
push(p);
p= strtok(NULL, ws);
}
return count();
}
while(re.match(s) && (limit < 0 || cnt < limit)){ // find separator
rng= re.getgroup(0); // full matched string (entire separator)
push(s.substr(0, rng.start()));
for(int i=1;i<re.groups();i++){
push(s.substr(re.getgroup(i))); // add subexpression matches
}
s= s.substr(rng.end()+1);
cnt++;
}
if(s.length()) push(s);
if(limit < 0){ // strip trailing null entries
int off= count()-1;
while(off >= 0 && (*this)[off].length() == 0){ off-- ;}
splice(off+1);
}
return count();
}
PerlString PerlStringList::join(const char *pat) {
PerlString ts;
for(int i=0;i<count();i++){
ts += (*this)[i];
if(i<count()-1) ts += pat;
}
return ts;
}
PerlStringList::PerlStringList(const PerlStringList& n) {
for(int i=0;i<n.count();i++){
push(n[i]);
}
}
PerlStringList& PerlStringList::operator=(const PerlList<PerlString>& n) {
if(this == &n) return *this;
// erase old one
reset();
for(int i=0;i<n.count();i++){
push(n[i]);
}
return *this;
}
int PerlStringList::m(const char *rege, const char *targ, const char *opts)
{
int iflg= strchr(opts, 'i') != NULL;
Regexp r(rege, iflg?Regexp::nocase:0);
if(!r.match(targ)) return 0;
Range rng;
for(int i=0;i<r.groups();i++){ // collect $&, $1, $2, ...
rng= r.getgroup(i);
push(PerlString(targ).substr(rng.start(), rng.length()));
}
return r.groups();
}
PerlStringList PerlStringList::grep(const char *rege, const char *opts) {
PerlStringList rt;
int iflg= strchr(opts, 'i') != NULL;
Regexp rexp(rege, iflg?Regexp::nocase:0); // compile once
for(int i=0;i<count();i++){
if(rexp.match((*this)[i])){
rt.push((*this)[i]);
}
}
return rt;
}
// streams stuff
istream& operator>>(istream& ifs, PerlString& s) {
char c;
#if 0
char buf[40];
#else
char buf[132];
#endif
s= ""; // empty string
ifs.get(buf, sizeof buf);
// This is tricky because a line terminated by end of file that is not terminated
// with a '\n' first is considered an OK line, but ifs.good() will fail.
// This will correctly return the last line if it is terminated by eof with the
// stream still in a non-fail condition, but at eof, so next call will fail as
// expected
if(ifs){ // previous operation was ok
s += buf; // append buffer to string
// cout << "<" << buf << ">" << endl;
// if it's a long line, continue appending to the string
while(ifs.good() && (c=ifs.get()) != '\n'){
// cout << "eof1= " << ifs.eof() << endl;
ifs.putback(c);
// cout << "eof2= " << ifs.eof() << endl;
if(ifs.get(buf, sizeof buf)) s += buf; // append to line
}
}
return ifs;
}
istream& operator>>(istream& ifs, PerlStringList& sl) {
PerlString s;
// Should I reset sl first?
sl.reset(); // I think so, to be consistent
while(ifs >> s){
sl.push(s);
// cout << "<" << s << ">" << endl;
};
return ifs;
}
ostream& operator<<(ostream& os, const PerlString& arr) {
#ifdef TEST
os << "(" << arr.length() << ")" << "\"";
os << (const char *)arr;
os << "\"";
#else
os << (const char *)arr;
#endif
return os;
}
ostream& operator<<(ostream& os, const PerlStringList& arr) {
for(int i=0;i<arr.count();i++)
#ifdef TEST
os << "[" << i << "]" << arr[i] << endl;
#else
os << arr[i] << endl;
#endif
return os;
}
/*
* Version 1.6
* Kudos to Larry Wall for inventing Perl
* Copyrights only exist on the regex stuff, and all have been left intact.
* The only thing I ask is that you let me know of any nifty fixes or
* additions.
*
* Credits:
* I'd like to thank Michael Golan <mg@Princeton.EDU> for his critiques
* and clever suggestions. Some of which have actually been implemented
*
* 01/08/01 (jda) - fixed PerlListBase<T> operator= prototype for VC++ error
*                  renamed regexp.h to _regexp.h for name collision with
*                  jam regexp.h
*/
#ifndef _PERL_H
#define _PERL_H
#include <string.h>
//#include "regexp.h"
// replaced as follows (jda)
#include "_regexp.h"
#if DEBUG
#include <stdio.h>
#endif
#define INLINE inline
// This is the base class for PerlList, it handles the underlying
// dynamic array mechanism
template<class T>
class PerlListBase {
private:
enum{ALLOCINC=20};
T *a;
int cnt;
int first;
int allocated;
int allocinc;
void grow(int amnt= 0, int newcnt= -1);
protected:
void compact(const int i);
public:
#ifdef USLCOMPILER
// USL 3.0 bug with enums losing the value
PerlListBase(int n= 20)
#else
PerlListBase(int n= ALLOCINC)
#endif
{
a= new T[n];
cnt= 0;
first= n>>1;
allocated= n;
allocinc= n;
# ifdef DEBUG
fprintf(stderr, "PerlListBase(int %d) a= %p\n", allocinc, a);
# endif
}
PerlListBase(const PerlListBase<T>& n);
//PerlListBase<T>& PerlListBase<T>::operator=(const PerlListBase<T>&n);
// replaced as follows (jda)
PerlListBase<T>& operator=(const PerlListBase<T>& n);
virtual ~PerlListBase(){
# ifdef DEBUG
fprintf(stderr, "~PerlListBase() a= %p, allocinc= %d\n", a, allocinc);
# endif
delete [] a;
}
INLINE T& operator[](const int i);
INLINE const T& operator[](const int i) const;
int count(void) const{ return cnt; }
void add(const T& n);
void add(const int i, const T& n);
void erase(void){ cnt= 0; first= (allocated>>1);}
};
// PerlList
class PerlStringList;
template <class T>
class PerlList: private PerlListBase<T> {
public:
PerlList(int sz= 10): PerlListBase<T>(sz){}
// stuff I want public to see from PerlListBase
T& operator[](const int i){return PerlListBase<T>::operator[](i);}
const T& operator[](const int i) const{return PerlListBase<T>::operator [](i);}
PerlListBase<T>::count;
// add perl-like synonym
void reset(void){ erase(); }
int scalar(void) const { return count(); }
operator void*() { return count()?this:0; } // so it can be used in tests
int isempty(void) const{ return !count(); } // for those that don't like the above (hi michael)
T pop(void) {
T tmp;
int n= count()-1;
if(n >= 0){
tmp= (*this)[n];
compact(n);
}
return tmp;
}
void push(const T& a){ add(a);}
void push(const PerlList<T>& l);
T shift(void) {
T tmp= (*this)[0];
compact(0);
return tmp;
}
int unshift(const T& a) {
add(0, a);
return count();
}
int unshift(const PerlList<T>& l);
PerlList<T> reverse(void);
PerlList<T> sort();
PerlList<T> splice(int offset, int len, const PerlList<T>& l);
PerlList<T> splice(int offset, int len);
PerlList<T> splice(int offset);
};
// just a mechanism for self-deleting strings which can be hacked
class TempString {
private:
char *str;
public:
TempString(const char *s) {
str= new char[strlen(s) + 1];
strcpy(str, s);
}
TempString(const char *s, int len) {
str= new char[len + 1];
if(len) strncpy(str, s, len);
str[len]= '\0';
}
~TempString(){ delete [] str; }
operator char*() const { return str; }
};
/*
* This class takes care of the mechanism behind variable length strings
*/
class VarString {
private:
enum{ALLOCINC=32};
char *a;
int len;
int allocated;
int allocinc;
INLINE void grow(int n= 0);
public:
#ifdef USLCOMPILER
// USL 3.0 bug with enums losing the value
INLINE VarString(int n= 32);
#else
INLINE VarString(int n= ALLOCINC);
#endif
INLINE VarString(const VarString& n);
INLINE VarString(const char *);
INLINE VarString(const char* s, int n);
INLINE VarString(char);
~VarString(){
# ifdef DEBUG
fprintf(stderr, "~VarString() a= %p, allocinc= %d\n", a, allocinc);
# endif
delete [] a;
}
VarString& operator=(const VarString& n);
VarString& operator=(const char *);
INLINE const char operator[](const int i) const;
INLINE char& operator[](const int i);
operator const char *() const{ return a; }
int length(void) const{ return len; }
void add(char);
void add(const char *);
void add(int, const char *);
void remove(int, int= 1);
void erase(void){ len= 0; }
};
class PerlStringList;
//
// Implements the perl specific string functionality
//
class PerlString {
private:
VarString pstr; // variable length string mechanism
public:
class substring;
PerlString():pstr(){}
PerlString(const PerlString& n) : pstr(n.pstr){}
PerlString(const char *s) : pstr(s){}
PerlString(const char c) : pstr(c){}
PerlString(const substring& sb) : pstr(sb.pt, sb.len){}
PerlString& operator=(const char *s){pstr= s; return *this;}
PerlString& operator=(const PerlString& n);
PerlString& operator=(const substring& sb);
operator const char*() const{return pstr;}
const char operator[](int n) const{ return pstr[n]; }
int length(void) const{ return pstr.length(); }
char chop(void);
int index(const PerlString& s, int offset= 0);
int rindex(const PerlString& s, int offset= -1);
substring substr(int offset, int len= -1);
substring substr(const Range& r){ return substr(r.start(), r.length ());}
int m(const char *, const char *opts=""); // the regexp match m/.../ equiv
int m(Regexp&);
int m(const char *, PerlStringList&, const char *opts="");
int m(Regexp&, PerlStringList&);
int tr(const char *, const char *, const char *opts="");
int s(const char *, const char *, const char *opts="");
PerlStringList split(const char *pat= "[ \t\n]+", int limit= -1);
int operator<(const PerlString& s) const { return (strcmp(pstr, s) < 0); }
int operator>(const PerlString& s) const { return (strcmp(pstr, s) > 0); }
int operator<=(const PerlString& s) const { return (strcmp(pstr, s) <= 0); }
int operator>=(const PerlString& s) const { return (strcmp(pstr, s) >= 0); }
int operator==(const PerlString& s) const { return (strcmp(pstr, s) == 0); }
int operator!=(const PerlString& s) const { return (strcmp(pstr, s) != 0); }
PerlString operator+(const PerlString& s) const;
PerlString operator+(const char *s) const;
PerlString operator+(char c) const;
friend PerlString operator+(const char *s1, const PerlString& s2);
PerlString& operator+=(const PerlString& s){pstr.add(s); return *this;}
PerlString& operator+=(const char *s){pstr.add(s); return *this;}
PerlString& operator+=(char c){pstr.add(c); return *this;}
friend substring;
private:
void insert(int pos, int len, const char *pt, int nlen);
// This idea lifted from NIH class library -
// to handle substring LHS assignment
// Note if subclasses can't be used then take external and make
// the constructors private, and specify friend PerlString
class substring
{
public:
int pos, len;
PerlString& str;
char *pt;
public:
substring(PerlString& os, int p, int l) : str(os)
{
if(p > os.length()) p= os.length();
if((p+l) > os.length()) l= os.length() - p;
pos= p; len= l;
if(p == os.length()) pt= 0; // append to end of string
else pt= &os.pstr[p];
}
void operator=(const PerlString& s)
{
if(&str == &s){ // potentially overlapping
VarString tmp(s);
str.insert(pos, len, tmp, strlen(tmp));
}else str.insert(pos, len, s, s.length());
}
void operator=(const substring& s) {
if(&str == &s.str){ // potentially overlapping
VarString tmp(s.pt, s.len);
str.insert(pos, len, tmp, strlen(tmp));
}else str.insert(pos, len, s.pt, s.len);
}
void operator=(const char *s) {
str.insert(pos, len, s, strlen(s));
}
};
};
class PerlStringList: public PerlList<PerlString> {
public:
PerlStringList(int sz= 6):PerlList<PerlString>(sz){}
// copy lists, need to duplicate all internal strings
PerlStringList(const PerlStringList& n);
PerlStringList& operator=(const PerlList<PerlString>& n);
int split(const char *str, const char *pat= "[ \t\n]+", int limit= -1);
PerlString join(const char *pat= " ");
int m(const char *rege, const char *targ, const char *opts=""); // makes list of sub exp matches
PerlStringList grep(const char *rege, const char *opts=""); // tries rege against elements in list
};
// This doesn't belong in any class
inline PerlStringList m(const char *pat, const char *str, const char *opts="")
{
PerlStringList l;
l.m(pat, str, opts);
l.shift(); // remove the first element which would be $&
return l;
}
// Streams operators
template <class T>
istream& operator>>(istream& ifs, PerlList<T>& arr) {
T a;
// Should I reset arr first?
arr.reset(); // I think so, to be consistent
while(ifs >> a){
arr.push(a);
// cout << "<" << a << ">" << endl;
};
return ifs;
}
template <class T>
ostream& operator<<(ostream& os, const PerlList<T>& arr)
{
for(int i=0;i<arr.count();i++){
#ifdef TEST
os << "[" << i << "]" << arr[i] << " ";
}
os << endl;
#else
os << arr[i] << endl;
}
#endif
return os;
}
istream& operator>>(istream& ifs, PerlString& s);
istream& operator>>(istream& ifs, PerlStringList& sl);
ostream& operator<<(ostream& os, const PerlString& arr);
ostream& operator<<(ostream& os, const PerlStringList& arr);
// Implementation of template functions for perllistbase
template <class T>
INLINE T& PerlListBase<T>::operator[](const int i)
{
assert((i >= 0) && (first >= 0) && ((first+cnt) <= allocated));
int indx= first+i;
if(indx >= allocated){ // need to grow it
grow((indx-allocated)+allocinc, i+1); // index as yet unused element
indx= first+i; // first will have changed in grow()
}
assert(indx >= 0 && indx < allocated);
if(i >= cnt) cnt= i+1; // it grew
return a[indx];
}
template <class T>
INLINE const T& PerlListBase<T>::operator[](const int i) const {
assert((i >= 0) && (i < cnt));
return a[first+i];
}
template <class T>
PerlListBase<T>::PerlListBase(const PerlListBase<T>& n) {
allocated= n.allocated;
allocinc= n.allocinc;
cnt= n.cnt;
first= n.first;
a= new T[allocated];
for(int i=0;i<cnt;i++) a[first+i]= n.a[first+i];
#ifdef DEBUG
fprintf(stderr, "PerlListBase(PerlListBase&) a= %p, source= %p\n", a,
n.a);
#endif
}
template <class T>
PerlListBase<T>& PerlListBase<T>::operator=(const PerlListBase<T>& n){
// cout << "PerlListBase<T>::operator=()" << endl;
if(this == &n) return *this;
#ifdef DEBUG
fprintf(stderr, "~operator=(PerlListBase&) a= %p\n", a);
#endif
delete [] a; // get rid of old one
allocated= n.allocated;
allocinc= n.allocinc;
cnt= n.cnt;
first= n.first;
a= new T[allocated];
for(int i=0;i<cnt;i++) a[first+i]= n.a[first+i];
#ifdef DEBUG
fprintf(stderr, "operator=(PerlListBase&) a= %p, source= %p\n", a,
n.a);
#endif
return *this;
}
template <class T>
void PerlListBase<T>::grow(int amnt, int newcnt){
if(amnt <= 0) amnt= allocinc; // default value
if(newcnt < 0) newcnt= cnt; // default
allocated += amnt;
T *tmp= new T[allocated];
int newfirst= (allocated>>1) - (newcnt>>1);
for(int i=0;i<cnt;i++) tmp[newfirst+i]= a[first+i];
#ifdef DEBUG
fprintf(stderr, "PerlListBase::grow() a= %p, old= %p, allocinc= %d\n",
tmp, a, allocinc);
fprintf(stderr, "~PerlListBase::grow() a= %p\n", a);
#endif
delete [] a;
a= tmp;
first= newfirst;
}
template <class T>
void PerlListBase<T>::add(const T& n){
if(cnt+first >= allocated) grow();
a[first+cnt]= n;
cnt++;
}
template <class T>
void PerlListBase<T>::add(const int ip, const T& n){
assert(ip >= 0);
if(first == 0 || (first+cnt) >= allocated) grow();
assert((first > 0) && ((first+cnt) < allocated));
if(ip == 0){ // just stick it on the bottom
first--;
a[first]= n;
}else{
for(int i=cnt;i>ip;i--) // shuffle up
a[first+i]= a[(first+i)-1];
a[first+ip]= n;
}
cnt++;
}
template <class T>
void PerlListBase<T>::compact(const int n){ // shuffle down starting at n
int i;
assert((n >= 0) && (n < cnt));
if(n == 0) first++;
else for(i=n;i<cnt-1;i++){
a[first+i]= a[(first+i)+1];
}
cnt--;
}
// implementation of template functions for perllist
template <class T>
void PerlList<T>::push(const PerlList<T>& l) {
for(int i=0;i<l.count();i++)
add(l[i]);
}
template <class T>
int PerlList<T>::unshift(const PerlList<T>& l) {
for(int i=l.count()-1;i>=0;i--)
unshift(l[i]);
return count();
}
template <class T>
PerlList<T> PerlList<T>::reverse(void) {
PerlList<T> tmp;
for(int i=count()-1;i>=0;i--)
tmp.add((*this)[i]);
return tmp;
}
template <class T>
PerlList<T> PerlList<T>::sort(void) {
PerlList<T> tmp(*this);
int n= tmp.scalar();
for(int i=0;i<n-1;i++)
for(int j=n-1;i<j;j--)
if(tmp[j] < tmp[j-1]){
T temp = tmp[j];
tmp[j] = tmp[j-1];
tmp[j-1]= temp;
}
return tmp;
}
template <class T>
PerlList<T> PerlList<T>::splice(int offset, int len, const PerlList<T>& l) {
PerlList<T> r= splice(offset, len);
if(offset > count()) offset= count();
for(int i=0;i<l.count();i++){
add(offset+i, l[i]); // insert into list
}
return r;
}
template <class T>
PerlList<T> PerlList<T>::splice(int offset, int len) {
PerlList<T> r;
if(offset >= count()) return r;
for(int i=offset;i<offset+len;i++){
r.add((*this)[i]);
}
for(i=offset;i<offset+len;i++)
compact(offset);
return r;
}
template <class T>
PerlList<T> PerlList<T>::splice(int offset) {
PerlList<T> r;
if(offset >= count()) return r;
for(int i=offset;i<count();i++){
r.add((*this)[i]);
}
int n= count(); // count() will change so remember what it is
for(i=offset;i<n;i++)
compact(offset);
return r;
}
// VarString Implementation
INLINE VarString::VarString(int n) {
a= new char[n];
*a= '\0';
len= 0;
allocated= n;
allocinc= n;
# ifdef DEBUG
fprintf(stderr, "VarString(int %d) a= %p\n", allocinc, a);
# endif
}
INLINE VarString::VarString(const char* s) {
int n= strlen(s) + 1;
a= new char[n];
strcpy(a, s);
len= n-1;
allocated= n;
allocinc= ALLOCINC;
# ifdef DEBUG
fprintf(stderr, "VarString(const char *(%d)) a= %p\n", allocinc, a);
# endif
}
INLINE VarString::VarString(const char* s, int n) {
a= new char[n+1];
if(n) strncpy(a, s, n);
a[n]= '\0';
len= n;
allocated= n+1;
allocinc= ALLOCINC;
# ifdef DEBUG
fprintf(stderr, "VarString(const char *, int(%d)) a= %p\n", allocinc, a);
# endif
}
INLINE VarString::VarString(char c) {
int n= 2;
a= new char[n];
a[0]= c; a[1]= '\0';
len= 1;
allocated= n;
allocinc= ALLOCINC;
# ifdef DEBUG
fprintf(stderr, "VarString(char (%d)) a= %p\n", allocinc, a);
# endif
}
INLINE ostream& operator<<(ostream& os, const VarString& arr) {
#ifdef TEST
os << "(" << arr.length() << ")" << (const char *)arr;
#else
os << (const char *)arr;
#endif
return os;
}
INLINE const char VarString::operator[](const int i) const {
assert((i >= 0) && (i < len) && (a[len] == '\0'));
return a[i];
}
INLINE char& VarString::operator[](const int i) {
assert((i >= 0) && (i < len) && (a[len] == '\0'));
return a[i];
}
INLINE VarString::VarString(const VarString& n) {
allocated= n.allocated;
allocinc= n.allocinc;
len= n.len;
a= new char[allocated];
strcpy(a, n.a);
#ifdef DEBUG
fprintf(stderr, "VarString(VarString&) a= %p, source= %p\n", a, n.a);
#endif
}
#endif
/*
* version 1.6
* Regexp is a class that encapsulates the Regular expression
* stuff. Hopefully this means I can plug in different regexp
* libraries without the rest of my code needing to be changed.
*
* 01/08/01 (jda) - renamed regexp.h to _regexp.h for name collision with
*                  jam regexp.h
*
*/
#ifndef _REGEXP_H
#define _REGEXP_H
#include <iostream.h>
#include <stdlib.h>
#include <malloc.h>
#include <string.h>
#include <assert.h>
#include <ctype.h>
//#include "regex.h"
// replaced as follows (jda)
extern "C" {
#include "regexp.h"
}
/*
* Note this is an inclusive range where it goes
* from start() to, and including, end()
*/
class Range {
private:
int st, en;
public:
Range() { st=0; en= -1; }
Range(int s, int e) { st= s; en= e; }
int start(void) const { return st;}
int end(void) const { return en;}
int length(void) const { return (en-st)+1;}
};
class Regexp {
public:
enum options {def=0, nocase=1};
private:
regexp *repat;
const char *target; // only used as a base address to get an offset
int res;
int iflg;
#ifndef __TURBOC__
void strlwr(char *s) {
while(*s){
*s= tolower(*s);
s++;
}
}
#endif
public:
Regexp(const char *rege, int ifl= 0) {
iflg= ifl;
if(iflg == nocase){ // lowercase fold
char *r= new char[strlen(rege)+1];
strcpy(r, rege);
strlwr(r);
if((repat=regcomp(r)) == NULL){
cerr << "regcomp() error" << endl;
exit(1);
}
delete [] r;
}else{
if((repat=regcomp (rege)) == NULL){
cerr << "regcomp() error" << endl;
exit(1);
}
}
}
~Regexp() { free(repat); }
int match(const char *targ) {
int res;
if(iflg == nocase){ // fold lowercase
char *r= new char[strlen(targ)+1];
strcpy(r, targ);
strlwr(r);
res= regexec(repat, r);
target= r; // looks bad but is really ok, really
delete [] r;
}else{
res= regexec(repat, targ);
target= targ;
}
return ((res == 0) ? 0 : 1);
}
int groups(void) const {
int res= 0;
for(int i=0;i<NSUBEXP;i++){
if(repat->startp[i] == NULL) break;
res++;
}
return res;
}
Range getgroup(int n) const {
assert(n < NSUBEXP);
return Range((int)(repat->startp[n] - (char *)target),
(int)(repat->endp[n] - (char *)target) - 1);
}
};
#endif
From: "Fabio Parodi" <fabio.parodi@libero.it>
Date: Tue, 13 Mar 2001 11:43:07 +0100
Subject: jam for Windows 98 - spawn: No such file or directory
I am trying to use jam on windows 98. The fourth phase of operation fails:
spawn: No such file or directory
The problem is in function execmd, file execunix.c. I guess this depends on
the differences in the fork/exec model between 98 and NT. Does anyone
have a workaround ready?
Date: Tue, 13 Mar 2001 13:20:26 +0100
From: Rainer Wiesenfarth <Rainer.Wiesenfarth@inpho.de>
Subject: Using jam with Trolltech's Qt
We have a product based on Trolltech's Qt library. This product uses
Rule and Action for the "moc" tool that work quite well. However, Qt
now includes a GUI builder that also uses another tool ("uic"). I
tried to define Rule(s) and Action(s) for this also, but as I am a
"jam-rookie", I did not succeed.
Does anyone have an idea how to handle this?
For those that do not know about the tools, I try to describe it:
mygui.ui =(uic)=> mygui.h
=(uic)=> mygui.cc
mygui.h =(moc)=> mygui.moc
mygui.c <-dep--- mygui.h (#include'd)
myguiimpl.h <-dep--- mygui.h (#include'd)
=(moc)=> myguiimpl.moc
myguiimpl.cc <-dep--- mygui.h (#include'd)
<-dep--- mygui.moc (#include'd)
<-dep--- myguiimpl.moc (#include'd)
mytarget <-dep--- myguiimpl.cc
<-dep--- mygui.ui
where "=(tool)=>" means "generates using tool" and "<-dep---" means
"depends on".
Sorry, this may be explained badly, but I find it hard to
describe. The files known to be present are mygui.ui, myguiimpl.h,
and myguiimpl.cc; the other files are generated.
Date: Tue, 13 Mar 2001 15:54:28 +0100
From: Arnt Gulbrandsen <arnt@trolltech.com>
Subject: Re: Using jam with Trolltech's Qt
I can suggest one way, yes. I don't use uic myself, but I do use Jam and
Qt, and based on my Moc rule I can write a Uic rule that should work. You
may have to tinker a bit.
First of all, here's the Moc stuff I use. From Jamrules:
rule Moc {
TEMPORARY $(<) ;
NOCARE Moc ;
NOTFILE Moc ;
Clean $(<) ;
RmTemps $(<:S=.o) : $(<) ;
# are both of the following necessary?
Depends $(<) : $(>) ;
Depends $(<:S=.o) : $(<) ;
}
actions Moc { $(RM) $(<) }
And in Jamfile, I do something like this
Main myapp : ... .moc.cpp ;
LINKLIBS on myapp += -L$QTDIR/lib -lqt ;
Moc .moc.cpp : header1.h header2.h ... headern.h ;
This causes jam to keep a single .o file around, .moc.o, and says that
.moc.o depends on the temporary file .moc.cpp, which in turn depends on
all the header files named by the Moc rule. Jam will figure out whether it
needs to create .moc.cpp and update .moc.o, and if it does it will delete
.moc.cpp again after compiling it.
I hope this is understandable. I'll use it to build an Uic rule that
generates one .h and one .cpp file from one .ui file. Since Jam doesn't
really like multi-target rules, I hack.
My goal is to have a rule that I can use like this:
Uic mumble.h : mumble.ui ;
And to achieve it, I'll crudely hack that rule so that it'll make a
mumble.cpp file as well.
rule Uic {
TEMPORARY $(<:S=.cpp) ; # the .cpp file is temporary
Clean $(<) ; # jam clean deletes the .h
Clean $(<:S=.cpp) ; # jam clean deletes the .cpp
RmTemps $(<:S=.o) : $(<:S=.cpp) ; # jam deletes .cpp after compiling .o
Depends $(<) : $(>) ; # the .h file depends on the .ui file
Depends $(<:S=.cpp) : $(>) ; # the .cpp file depends on the .ui file
Depends $(<:S=.o) : $(<:S=.cpp) ; # the .o file depends on the .cpp file
}
I don't have time to dig into uic right now. The action for Uic has to be
something like this:
actions Uic { uic $(>) -o $(<) }
I can probably look into it more thoroughly tomorrow, but I'm pressed for time today.
From: "Fabio Parodi" <fabio.parodi@libero.it>
Date: Fri, 16 Mar 2001 14:43:25 +0100
Subject: jam for windows 9x
I needed a version of Jam for Windows 9x. I made some small changes to
2.3.1 and now it works fine. The actions are serialized, one at a time.
It needs a working rm.
It autodetects the os, so the same Jam.exe works well on Windows NT and 9x.
I had some problems with a firewall and could not put the sources in the
Perforce public depot. Please find the sources attached. I'd like to see
them in the next official release.
From: Matt Bruce <mbruce@instipro.com>
Date: Mon, 19 Mar 2001 11:44:45 -0500
Subject: jamming apache
Has anyone had any luck getting apache to build with Jam?
I'm trying to build it on Solaris and was hoping someone had
a nice Jamfile to speed things up.
Date: Wed, 28 Mar 2001 00:39:02 -0800 (PST)
Subject: JAM documentation
I am in need of proper documentation on JAM.
I am unable to configure JAM and run a small program that compiles a .c file.
I think a good starter is
http://public.perforce.com/public/jam/src/Jamfile.html
which is easy reading on creating a Jamfile and using jam.
Date: Tue, 27 Mar 2001 23:00:08 -0800 (PST)
Subject: Documentation on JAM
I am in urgent need of detailed documentation of JAM
with lots of examples about how to run programs with
it. I am finding it very difficult without documentation.
Can anyone kindly tell me where I can get it?
From: Leon Glozman <leon.glozman@schema.com>
Date: Thu, 29 Mar 2001 17:47:29 +0200
Subject: Link of library with other library
I compile my project in WATCOM. I have libraries that are linked with other libraries.
JAM supports linking an executable with libraries (the LinkLibraries rule),
but not linking a library with another library.
From: Amaury.FORGEOTDARC@ubitrade.com
Subject: Re: Link of library with other library
Date: Fri, 30 Mar 2001 09:54:45 +0100
(I didn't know it was possible to link a library with another library.)
This is not part of the standard Jambase,
but it seems easy to write in Jamfile:
Library main.lib : file1.c file2.c ;
Archive main.lib : second.lib ;
Depends main.lib : second.lib ;
and main.lib is built with file1.obj, file2.obj and second.lib.
This is done in a single invocation of wlib.
This works at least with MSVCNT, where .obj and .lib arguments can be mixed.
After a quick look at the wlib doc, it seems to work also with WATCOM.
From: "David Abrahams" <abrahams@mediaone.net>
Date: Sun, 1 Apr 2001 17:07:32 -0400
Subject: negative success
I have some unit tests which are expected to fail compilation and/or linking
if everything is working correctly. Is it possible to keep Jam from halting
in these cases? The only approach I can imagine involves building a negation
tool which passes its arguments to system() and negates the result. I'd
rather not have to do that.
From: matt@corp.phone.com
Subject: Re: negative success
Date: 02 Apr 2001 09:09:28 -0700
That is in fact what you have to do.
Unless you don't care whether the commands succeed or fail. Then you
can use "actions ignore Foo {}"
Date: Mon, 2 Apr 2001 19:59:08 -0700
From: "Chris Antos" <chrisant@Exchange.Microsoft.com>
Subject: DST and Jam
Is anyone else finding that with MSVCNT (VC6), Jam's .obj archiving
doesn't work during daylight savings time? Jam expects the timestamp in
the .lib file to be in UTC, but it's off by one hour. This throws off
the dependencies quite badly. Anyone have a workaround or fix?
Date: Mon, 2 Apr 2001 22:33:13 -0500 (CDT)
Subject: Re: DST and Jam
funny you should mention this. we are having this problem with
gmake where it thinks everything is an hour in the future, which
upsets it mightily. I switched the time back to regular and moved
the time 1 hr ahead and it started working again.
I haven't noticed the problem with jam yet, but it may be just a little
more hidden than gmake's squawking.
I think that the time offset is not corrected for daylight savings
time, and this screws things up.
I'd appreciate any *real* fix for this problem.
ps no problem on solaris, just nt 4 sp6
Subject: RE: DST and Jam
Date: Mon, 2 Apr 2001 21:19:16 -0700
From: "Chris Antos" <chrisant@Exchange.Microsoft.com>
I found an MSDN article that says there was a bug in the C runtime time
conversion logic that will manifest itself for one week during the week
of April 1, 2001, and is self-correcting after that period. I'm not
surprised: my Palm handheld also freaked out this year about DST (the
3rd party DST auto-switcher thought "last Sunday in March" was the
rule). The DST transition rules are more complicated than that, and
this year seems to hit a boundary case in the rules.
Anyway, the bug is supposedly fixed by VC6 SP3, and is not related to
Y2K at all. I had VC6 SP4, but to be safe I just now upgraded to VS6
SP5 (couldn't find VC6 SP5) and rebuilt Jam, and the problem seems to
have disappeared. YMMV.
Date: Tue, 03 Apr 2001 14:55:36 +1000
From: Graeme Gill <graeme@colorbus.com.au>
Subject: Re: DST and Jam
I haven't noticed any problems, but then my version of Jam
was modified some time ago (June '97) to use Win32 GetFileTime(),
so that it would compile with the IBM WinNT compiler (I posted
my changes to this list at the time.)
Note that Win98 has a problem in that its FILETIME is
always local, not GMT.
If you're interested I can mail you my filent.c, but you
may need to port it to the latest Jam revision.
From: "RobertWoodcock" <rmw@fractalgraphics.com.au>
Date: Tue, 3 Apr 2001 13:04:39 +0800
Subject: Header dependency caching
Reading through the archives I noted a few places where people had mentioned
modifications to jam that support caching of the file header dependency
checks. I couldn't find any postings of the source modifications though. If
anyone is willing to put these up or give some pointers on what needs to be
done it would be appreciated. Since moving to jam our build times have
dropped dramatically, but when making small changes the dependency checking
takes far too long for our patience.
Also, a special thank you to all of you who have posted solutions to the
mailing list over the past year or so. I read through the archives when
setting up our new system, and suffice it to say your input made the job
much, much easier.
Date: Tue, 3 Apr 2001 00:10:15 -0500 (CDT)
Subject: Re: DST and Jam
check out http://news.cnet.com/news/0-1007-200-5424581.html?tag=nbs
From: mihir@eparle.com
Date: Wed, 28 Mar 2001 12:28:36 +0100
Subject: JAM documentation
I am in urgent need of detailed documentation of JAM with lots of
examples about how to run programs with it. I am finding it very
difficult without documentation.
Is it available on a website?
Can anyone kindly tell me where I can get it?
Date: Wed, 04 Apr 2001 16:34:14 +0200 (CEST)
From: Werner LEMBERG <wl@gnu.org>
Subject: omissions in the manual
I couldn't find documentation about the macro delimiting tokens `['
and `]'. Similarly, the `return' keyword is undocumented.
That `Depends' and `DEPENDS' are the same isn't documented
either... Is there any reason why both forms are used in Jambase? I
find this quite irritating.
Date: Wed, 04 Apr 2001 16:30:43 +0200 (CEST)
From: Werner LEMBERG <wl@gnu.org>
Subject: targets and variables
I have a problem with setting variables for a specific target.
In our library (FreeType) we have two compile `modes'
. compile master1.c which itself includes file11.c, file12.c, ...
compile master2.c which itself includes file21.c, file22.c, ...
...
build library from master1.o, master2.o, ...
This is the default.
. compile file11.c, file12.c, ..., file21.c, file22.c, ...
...
build library from file11.o, file12.o, ...
This should be target `multi'.
The solution with GNU make is to scan MAKECMDGOALS, which contains all
command line targets. If `multi' is found, a variable `MULTI' is
set, which can then be used during the parse phase to select
the proper set of files to compile. It seems to me that this feature
is not available with jam. Is this intentional?
I tried some hours for a workaround but wasn't successful (I know that
`jam -sMULTI=true' would work).
A MAKECMDGOALS equivalent within jam would be useful for other things
also. For example, another target in FreeType is `devel' which
bypasses the configure script and sets a bunch of special compilation flags.
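In the absence of such a variable, one workaround is to branch on a variable supplied on the command line, as the author's `jam -sMULTI=true' already suggests. A hypothetical Jamfile fragment (library and file names taken from the description above, for illustration only):

```
# Select the source set with `jam -sMULTI=true';
# the default build uses the master files.
if $(MULTI) {
    Library libfreetype : file11.c file12.c file21.c file22.c ;
} else {
    Library libfreetype : master1.c master2.c ;
}
```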
From: "David Abrahams" <abrahams@altrabroadband.com>
Date: Tue, 10 Apr 2001 20:47:54 -0400
Subject: Bug Fix
The built-in FConcat rule doesn't work. Here is a fix. This version takes an
optional second parameter which acts as a separator:
rule FConcat {
# Puts the variables together, removing spaces.
local _t _r ;
_r = $(<[1]) ;
local sep = $(>) ;
if ! $(sep){ sep = "" ; }
for _t in $(<[2-]) {
_r = $(_r)$(sep)$(_t) ;
}
return $(_r) ;
}
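A usage sketch (hypothetical invocation; per the description, the second parameter is an optional separator):

```
x = [ FConcat a b c : - ] ;
ECHO $(x) ; # intended result: a-b-c
```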
From: "Malloy, Michael" <MMalloy@TRADEC.com>
Date: Mon, 9 Apr 2001 18:20:22 -0700
Subject: Using JAM with VB and VC++
The project I am working on is heavily dependent on Visual Basic. Has
anyone used Jam with VB and other Visual Studio products? In particular, I
need rules to create .ocx files based on .bas and .frm files. Then, .cab
files are needed to be created from the resulting executable files. If
anyone has already done this, please let me know!
From: "David Abrahams" <abrahams@altrabroadband.com>
Date: Wed, 11 Apr 2001 08:45:39 -0400
Subject: Bug Report
I don't know if anybody is maintaining Jam at the moment, but...
The following rule produces surprising (at least!) results when the marked
lines are interchanged:
rule split-path {
local parent = $(<:P) ;
if ! $(parent) {
ECHO "split-path =" $(<) ; ######
return $(<) ; ######
} else {
local p ;
p = [ split-path $(parent) ] ;
local b = $(<:B) ;
p += $(b) ;
ECHO "split-path =" $(p) ;
return $(p) ;
}
}
ECHO [ split-path a/b ] ;
From: "David Abrahams" <abrahams@mediaone.net>
Date: Tue, 24 Apr 2001 16:12:58 -0400
Subject: Bug (?) report
clearing the grist on a variable seems to "normalize" the path slashes, at
least under Win32:
a = <foo>bar/baz ;
ECHO $(a:G=) ;
prints:
bar\baz
This behavior is at the very least surprising!
From: "David Abrahams" <abrahams@mediaone.net>
Date: Tue, 24 Apr 2001 17:17:11 -0400
Subject: bug (?) report
The result of using :D or :P on a path with no $(SLASH) separators seems to
be not an empty list, but an empty string. This makes code like the
following fail to work as expected:
$(x:D?=$(DOT))
or,
if $(x:D) { ... }
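A minimal illustration of the reported behavior (variable name is
arbitrary):

x = foo ;
ECHO "dir part:" $(x:D) ; # an empty string, not an empty list
if $(x:D) { ECHO "branch taken, surprisingly" ; }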
From: "Dowdy, Mark" <mark@ciena.com>
Date: Tue, 8 May 2001 10:26:59 -0700
Subject: What Happened to Jamlang.html?
I apologize if I missed this discussion earlier, but
what happened to jamlang.html? Sadly, Christopher
deleted this file with the 2.3 submission. Personally,
I find this to be the most useful reference document
for Jam. I know I can always use an old version of the
file but I would think new users of Jam would find the
information in this file useful.
Date: Wed, 9 May 2001 11:21:36 +0200
Subject: Bad "warning using independent target" messages
Sometimes I get the following warning:
"warning using independent target xy."
But target 'xy' is not independent. Or at least I couldn't find any reason
for this message....
Has anyone experienced such problems with Jam 2.3?!
By debugging into the Jam executable i found the following:
This message comes from:
make1bind() [make1.c line 624]
which is called by
make1list() [make1.c line 549]:
>/* Sources to 'actions existing' are never in the dependency */
>/* graph (if they were, they'd get built and 'existing' would */
>/* be superfluous, so throttle warning message about independent */
>/* targets. */
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Is '!(flag & RULE_EXISTING)' correct here?! From the comment I'd rather
write
'(flag & RULE_EXISTING)'.
From: <boga@mac.com>
Date: Thu, 10 May 2001 15:41:21 +0200
Subject: Recovering from errors
If there's a syntax error in the jam makefile, jam continues the execution.
(WinNT)
Is this useful for anyone?
Isn't it dangerous? Some variables are pointing to source directories and
some others are going to be cleaned, so a bad jam file might clean your sources.
Date: Mon, 14 May 2001 15:56:39 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: Jam binaries for Windows 95/98 and OS/2
Thanks for the fix, I'll include it shortly. For those that may be
interested in them, I've put my version of Jam in the Perforce public
depot, under the path //guests/david_turner/jam/src.
A summary of the changes applied is readable at:
http://www.freetype.org/jam/index.html
From: "Fabio Parodi" <fabio.parodi@fredreggiane.com>
Date: Mon, 14 May 2001 15:16:02 +0200
Subject: Re: Jam binaries for Windows 95/98 and OS/2
I've only found one bug. It happens when the action specified in Jambase is
large:
the program tries to execute it as a batch file, but the name of the
temporary file is NULL.
It was easy to correct. Now it works fine on my windows 98 box.
Please find the attachment.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Tue, 22 May 2001 17:59:27 +0100
Subject: Generated Header files
I'm just starting out using Jam, so I apologise if this is old hat. I'm
investigating using it to replace make(1) on a medium-sized body of code, by
attempting to build some of the libraries.
I've got a directory containing a bunch of .cpp files. These are to be
built into a library. Some of these include a file called errors.h, which
is generated from a corresponding errors.mes file (it's a list of #define
error codes, and the corresponding text).
In short, I'd like to do the following:
Library something : a.cpp b.cpp c.cpp errors.mes ;
and have it invoke the relevant command to build errors.h from errors.mes,
then build something.lib from the .cpp files.
At present, of course, it attempts to build a .obj file from the .mes file,
and then UserObject complains.
My questions:
a) How do I specify that the .cpp files are dependent on errors.h? Do I need to?
b) How do I specify that errors.h is dependent on errors.mes? How do I do
this in a nice parameterised way? I've got .mes files in some of the other
library directories.
c) How do I tell Jam what commands to execute? It's got to run a Python
script to generate the header.
Date: Tue, 22 May 2001 19:24:39 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Generated Header files
In your Jamfile:
MyCustomRule errors.h : errors.mes ;
Writing MyCustomRule is left as an exercise for the reader.
The actions for MyCustomRule run your script.
rule MyCustomRule {
Depends $(<) : $(>) ;
Clean $(<) ; # so that jam clean will kill errors.h
}
Assuming that the script has -i input -o output:
actions MyCustomRule {
myScript -o $(<) -i $(<)
}
Warning: this is all typed straight in. Hopefully it'll make sense, though.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: Generated Header files
Date: Tue, 22 May 2001 19:02:30 +0100
Hmmm, that didn't work. Maybe I'm being stupid. Probably am.
I've pared the problem down to bare essentials. My Jamfile looks like this:
# Jamfile
if $(NT) { C++FLAGS += -D "WIN32" -D "WINDOWS" -D "_WIN32" -D "_WINDOWS"
; }
Main bing : bing.cpp ;
bing.cpp looks like this:
// bing.cpp
#include "bing_errors.h"
int main(void) { return 0; }
This fails because the compiler can't find 'bing_errors.h'
So I add (before the Main clause), the following:
rule ErrBuild { #1
Depends $(<) : $(>) ;
Clean $(<) ;
}
actions ErrBuild { #2 copy $(>) $(<) }
ErrBuild bing_errors.h : bing_errors.mes ; #3
Same error.
Interestingly, if I rename bing_errors.mes to something else, I get
"...skipped bing_errors.h for lack of bing_errors.mes..."
...which suggests that jam knows that it wants a .mes file for the .h file.
I guess that this is done by the #1 and #3 stuff, yes?
However, it doesn't seem to know anything about the #2 stuff. What have I
got wrong? It's probably something simple.
Do I need to add bing_errors.mes to the Main line? If I do, I get the
UserObject error.
Subject: Re: [ Arnt Gulbrandsen ] Generated Header files
Date: Tue, 22 May 2001 13:52:28 -0500
From: "Gregg G. Wonderly" <gregg@skymaster.c2-tech.com>
One of the above '$(<)' should be '$(>)' should it not?
Date: Tue, 22 May 2001 22:37:22 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: [ Arnt Gulbrandsen ] Generated Header files
You're right, of course.
myScript -o $(>) -i $(<)
Date: Tue, 22 May 2001 13:10:04 -0700 (PDT)
From: sales@perforce.com
Subject: Re: [ Arnt Gulbrandsen ] Generated Header files (CALL#192429)
One of the above '$(<)' should be '$(>)' should it not?
Date: Tue, 22 May 2001 22:59:14 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Generated Header files
The "...skipped" message means that jam didn't even try to run the #2
stuff, because bing_errors.mes didn't exist.
I suspect that if you touch bing_errors.h into existence, all will be
well until the next time you delete it.
Here's my guessw^Wreasoning. Jam doesn't attempt to build .h files that it
sees included. It will use such a dependency to decide whether or not to
compile the .c, but if the .h does not exist, jam assumes it's a case of
#if defined(UNIX)
#include <mumble.h>
#else
#include <gargle.h>
#endif
where exactly one of mumble.h or gargle.h exists on any given system, but
compilation works nevertheless.
So, how to fix it properly? Adding a hard dependency should do it. Perhaps
something like this, but I'm hoping one of the perforce people will Know:
rule Depends { Depends $(<) : $(>) ; }
Depends bing.cpp : bing_errors.h ;
I've never tried this :)
Date: Tue, 22 May 2001 16:01:27 -0700 (PDT)
Subject: Re: Generated Header files
Yep -- there's nothing about Jam's checking for included headers that does
anything about building generated header files.
It'd probably be better to keep things more general. What you want to do
is say "generated header files need to get built before any object files".
So create a rule (say, GenHdr) that depends on the "files" pseudo-target.
For example:
rule GenHdr {
Depends files : $(<) ;
Depends $(<) : $(>) ;
Clean clean : $(<) ;
}
actions GenHdr {
copy $(>) $(<)
}
Then, in your Jamfile, you'd have:
Main bing : bing.cpp ;
GenHdr bing_errors.h : bing_errors.mes ;
(If your generated headers are always <something>_errors.mes -> .h, you
could beef up the rule to handle the filename/suffix stuff so you don't
need to include all that in your Jamfile and could instead just have
"GenHdr bing ;" -- I was just too lazy to do it myself :)
Now when you run 'jam', it'll do:
[BINKY:dianeh]: jam -n
...found 12 target(s)...
...updating 3 target(s)...
GenHdr bing_errors.h
cp bing_errors.mes bing_errors.h
C++ bing.o
cc -c -D__cygwin__ -O -o bing.o bing.cpp
Link bing.exe
gcc -D__cygwin__ -o bing.exe bing.o
Chmod1 bing.exe
chmod 711 bing.exe
...updated 3 target(s)...
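The filename/suffix beef-up left as an exercise could look something like
this (GenErrHdr is a made-up name wrapping the GenHdr rule above):

rule GenErrHdr {
# GenErrHdr bing ; is shorthand for GenHdr bing_errors.h : bing_errors.mes ;
GenHdr $(<)_errors.h : $(<)_errors.mes ;
}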
Date: Tue, 22 May 2001 19:32:14 -0400 (EDT)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Generated Header files
We ran into this problem earlier and took a slightly different approach.
There is a problem with 'files' depending on the generated headers, as
there is not a direct dependency between the object file which requires the
header, and files. When doing a build with multiple jobs, there is nothing
in the dependency graph to stop Jam from building the objects before the
files (and failing). Running Jam with a single job will probably give what
you expect, as it just so happens that the order of the graph has 'files'
before 'obj'.
A slightly more complex alternative, which works with multiple jobs is
to create a new node - derived-files.
Depends all : derived-files ;
NOTFILE derived-files ;
NOUPDATE derived-files ; # (requires the NOUPDATE fix posted a while ago.)
and then do not create any object before all the derived-files are built.
In the 'rule Object' code, add:
Depends $(<) : derived-files ;
...or something like that.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Generated Header files
Date: Tue, 22 May 2001 16:52:49 -0700
I believe that Jam would build the included header files except that
they are marked as NOCARE in HdrRule.
In fact, I had the same problem and worked around it as
Diane suggests -- but only because in my case I had
include files driving included files (CORBA, don't you know).
What I was tempted to do (and did attempt, but it did not solve
my problem) was to have NOCARE be a NOOP if the target had an
action attached to it.
This works because NOCARE was designed to allow Jam to ignore
platform specific header files which might be hidden behind
#ifdefs.
If in your local code, you never use ifdefs to include/hide
LOCAL header files, then you could modify the HdrRule to
only NOCARE header files which are not part of your project.
Date: Tue, 22 May 2001 21:28:44 -0400 (EDT)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Generated Header files
I think my previous post warning about multiple jobs was misleading,
although I believe that solution works. There are other things going
on in our Jambase which required that approach - grist I believe.
If A.c has '#include <der.h>'
and <der.h> is created from der.tmpl, the graph should be something like:
Depends A.o : A.c ;
INCLUDES A.c : der.h ;
NOCARE der.h ;
Depends der.h : der.tmpl ;
I don't think the NOCARE rule interferes with this - since the
header has the same dependency graph node name, everything works out. The
trick to making that work is either to avoid grist on derived files or to
somehow know what grist to apply in the HdrRule. I think avoiding grist
demands that header files have unique names across the system you're
building. The GenFile rule avoids grist on header files, unless SOURCE_GRIST
is set.
The HdrRule is going to be called as:
HdrRule A.c : der.h ... ;
If you know which <der.h> is intended, you can INCLUDE it and the
dependencies will work out. If there is more than one <der.h> that may be
intended, depending on the SEARCH path at bind time, then you're in
trouble. In that case my prior post gives an approach to take - build all
derived files before any objects.
Date: Wed, 23 May 2001 08:26:46 +0200
Subject: Re: jamming digest, Vol 1 #196 - 3 msgs
I think the problem is caused by the
"NOCARE bing_errors.h ;"
call in HdrRule.
So actually you have to explicitly write that bing.cpp includes
bing_errors.h !
Try adding:
HDRGRIST = "hdr" ;
INCLUDES bing.cpp : bing_errors.h ;
Or:
INCLUDES bing.cpp : <generated>bing_errors.h ;
ErrBuild <generated>bing_errors.h : bing_errors.mes ; # Replaces 3
From: Tony Smith <tony@perforce.com>
Subject: Re: Generated Header files
Date: Wed, 23 May 2001 11:28:22 +0100
The simplest way I've found is to use GenFile which does that for you.
Again, use GenFile. Here's a small example:
rule GenHdr {
Depends first : $(<) ;
GenFile $(<) : $(>) ;
}
GenHdr errors.h : genfile.py errors.mes ;
Library mylib : file.c ;
Note the additional bit of trickery in that the rule GenHdr makes the target
of the rule dependent on the pseudotarget "first". That'll make jam build the
header file very early on so you can be sure it's there by the time the
library gets built. That should avoid your UserObject errors.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: Generated Header files
Date: Wed, 23 May 2001 15:10:15 +0100
Thanks to everyone for their help with the original question. Once I'd
upgraded to jam-2.3.2, it worked properly.
Except when trying to build recursively. I've expanded my original example
to the following:
marmalade/
lib/
peel/
src/
I think I've got the SubInclude and SubDir rules right. Without the .mes ->
.h rules (in TOP/Jamrules), it builds correctly. With the .mes -> .h rules,
I can only get it to build correctly if I start in lib/peel.
What am I missing?
You can grab a tarball of what I'm playing with at
http://differentpla.net/~roger/jam/marmalade.tar.gz if you want to look at
the files I'm using (or attempting to use <grin>).
In fact, I might write an article about my experiences and put it up there
as well. If I can get this working, it might serve as a useful tutorial.
From: Grant Glouser <Grant.Glouser@corp.palm.com>
Subject: RE: Generated Header files
Date: Tue, 22 May 2001 14:29:01 -0700
Are you using Jam 2.3? If you are, you could be running into a bug that was
fixed in a recent patch (2.3.2). I haven't encountered this one myself, but
it sounds like Jam was failing to run any actions that build header files.
This would cause the symptoms you show in your bing example.
Try 2.3.2. You might have to get it from the public perforce depot, because
the Jam homepage may not have the latest tarballs.
http://public.perforce.com/public/jam/index.html
From the 2.3.2 release notes:
"0. Bugs fixed since 2.3.1
PATCHLEVEL 2 - 3/12/2001
NOCARE changed back: it once again does not apply to targets
with sources and/or actions. In 2.3 it was changed to apply to
such targets, but that broke header file builds: files that are
#included get marked with NOCARE, but if they have source or
actions, they still should get built."
From: Tony Smith <tony@perforce.com>
Subject: Re: Generated Header files
Date: Wed, 23 May 2001 17:34:39 +0100
A couple of things I think, but the main one is that "lib" is also the name
of a built-in target in Jambase and that conflicts with the name of the lib
subdirectory.
Changing the directory name is the easiest option, but see "Using Jamfiles
and Jambase" for the alternatives.
The other thing that's missing is that your ErrBuild rule doesn't locate the
created header file in the same directory as the source, so if you build from
the top, it will create it at the top. Add a line like:
MakeLocate $(<) : $(LOCATE_SOURCE) ;
to sort that out. Here's a better version of your Jamrules which uses the
File rule to do the actual copy.
rule ErrBuild {
Depends $(<) : $(>) ;
Depends first : $(<) ;
MakeLocate $(<) : $(LOCATE_SOURCE) ;
File $(<) : $(>) ;
Clean clean : $(<) ;
}
Note no HDRS line, I'd just use "SubDirHdrs" in the Jamfile in question. i.e.
in src/Jamfile:
SubDirHdrs $(TOP) mylib peel ;
where "mylib" is the renamed "lib" directory. Also, in your top Jamfile, you can use:
SubDir TOP ;
to avoid having to set the TOP environment variable in your shell. Just saves
a bit more setup time.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: Generated Header files
Date: Wed, 23 May 2001 18:42:00 +0100
OK. That could be a problem with the project I'm actually trying to use
this on. The directory's already called 'lib', and it lives in CVS, which
could make things a bit messy. I'll look at the alternatives.
Can I implicitly make jam look somewhere else for Jambase, or are my only
options using -f or rebuilding it?
I also found that I needed to add a line like:
MakeLocate $(>) : $(LOCATE_SOURCE) ;
in order to tell it that the .mes file lived there, also. Don't know why.
It wouldn't work without both.
Yeah, I know about the File rule. I'm not (in the actual code) just doing a
copy. Maybe I should have changed the example to run grep over the file or
something, to make that more obvious.
Excellent. I wondered how to do that.
Well, that answers all of today's questions. I'm off to a beer festival
now, so I'll probably be back tomorrow with some more questions -- and a hangover :-)
From: Grant Glouser <Grant.Glouser@corp.palm.com>
Subject: RE: Generated Header files
Date: Wed, 23 May 2001 11:24:53 -0700
I think you also need to set SEARCH because otherwise Jam doesn't know where
to look for the .mes source file. The default IIRC is to look only in the
current directory (the one you run jam in). That's why it works in the peel
directory but not in the toplevel. Add this to your ErrBuild rule:
SEARCH on $(>) += $(SEARCH_SOURCE) ;
SEARCH_SOURCE is set automatically by the SubDir rule. (In Tony's example,
the File rule does this for you.)
This shows the advantage of using the GenFile rule (which many other people
have been suggesting), because GenFile does these things for you!
PS Works fine for me *without* "Depends first : $(<) ;".
rule ErrBuild {
SEARCH on $(>) += $(SEARCH_SOURCE) ;
MakeLocate $(<) : $(LOCATE_TARGET) ;
Depends $(<) : $(>) ;
Clean $(<) ;
}
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Thu, 24 May 2001 12:47:59 +0100
Subject: Response files (Microsoft LINK command line length)
I've got a directory with 94 C++ files in it. These all get built (by a
Main rule) into a single executable. Unfortunately, the resulting command
line is too long for Microsoft LINK (about 4500 characters, rather than the
rather arbitrary 996 character limit).
Question: How do I get jam to output response files (that I can then feed to
link using the @ operator)? I can then edit the relevant section in
Jambase.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: Response files (Microsoft LINK command line length)
Date: Thu, 24 May 2001 13:55:43 +0100
OK, on closer inspection, it seems that this limit is imposed by jam itself,
at line 506 of make1.c. My rule needs to have piecemeal defined, or MAXLINE
needs to be longer. It's set to 996 if NT is defined. And it is.
Date: Fri, 25 May 2001 11:55:14 +0200
Subject: Re: Response files
Jam contains no built-in response file generation. The limit of the
NT (4.0/2000) shell is higher than 996 characters, but it is not infinite,
so if you have many object files, increasing MAXLINE won't really help.
You can use piecemeal to generate response files. I use something like this:
# Old link, replaced by the response version below
actions Link bind NEEDLIBS { $(LINK) $(LINKFLAGS) -o $(<) $(UNDEFS) $(>) $(NEEDLIBS) $(LINKLIBS) }
# Link with response files
actions dolink { $(LINK) $(LINKFLAGS) -o $(<) $(UNDEFS) @$(CMDFILE) $(NEEDLIBS) $(LINKLIBS) }
actions Link {}
rule Link {
CMDFILE on $(<) = $(<[1]).cmd ;
initcmdfile $(<) ;
echofilestocmdfile $(>) ;
dolink $(<) : $(>) ;
closecmdfile $(<) ;
# Insert old Link body here...
}
# initcmdfile, echofilestocmdfile, closecmdfile
actions quietly initcmdfile { copy nul: "$(CMDFILE)" > nul }
actions piecemeal quietly echoparamstocmdfile { echo $(>) >> "$(CMDFILE)" }
rule echoparamstocmdfile { NOTFILE $(>) ; }
actions piecemeal quietly echofilestocmdfile { echo "$(>)" >> "$(CMDFILE)" }
actions quietly closecmdfile { DEL "$(CMDFILE)" }
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Fri, 25 May 2001 16:26:49 +0100
Subject: Building DLLs... (and precompiled headers)
I've got an executable, which depends on a couple of DLLs, which also need
to be built. So, I cloned the 'Main' rule in Jambase, like this:
rule SharedLibrary {
SharedLibraryFromObjects $(<) : $(>:S=$(SUFOBJ)) ;
Objects $(>) ;
}
rule SharedLibraryFromObjects {
local _s = [ FGristFiles $(>) ] ;
local _t = [ FAppendSuffix $(<) : $(SUFSHR) ] ;
LINKFLAGS on $(_t) += /dll ;
Link $(_t) : $(_s) ;
}
Then I can use this rule to build the DLL. And it works.
My question is: how do I modify the rule to tell jam that this also emits
the import library? What I mean is: I've got:
SharedLibrary shared_lib : shr1.cpp shr2.cpp ;
Main my_exe : exe1.cpp exe2.cpp ;
LinkLibraries my_exe : lib1 lib2 shared_lib ;
Jam doesn't know that the SharedLibrary step builds shared_lib.lib. How do
I tell it that it does?
Also, I managed to implement a rule to use Visual C++ precompiled headers,
and I was wondering if anyone could offer me a critique of it:
rule PrecompileHeader {
Depends $(<) : $(>) ;
Depends first : $(<) ;
SubDirC++Flags /Fp$(LOCATE_TARGET)/$(<:S=.pch) /Yu"$(>:S=.h)" ;
MakeLocate $(>) : $(LOCATE_SOURCE) ;
# If you uncomment the following line, and I don't think that you ought to,
# remove the $(LOCATE_TARGET) from the /Fp, above.
# MakeLocate $(<) : $(LOCATE_TARGET) ;
Clean $(<) ;
}
# This one's been wrapped for clarity
actions PrecompileHeader {
$(C++) /c $(C++FLAGS) $(OPTIM)
/Fp$(LOCATE_TARGET)/$(<:S=.pch)
/Fd$(LOCATE_TARGET)/
/Fo$(LOCATE_TARGET)/
/I$(HDRS)
/I$(STDHDRS)
/Tp$(>)
/Yc$(<:S=.h)
}
There are a couple of problems with it:
a) Setting the dependency of the other C++ files on the generated .pch file.
I'd like to remove the 'Depends first', but I'm not convinced that it'll
build in the correct order.
b) It gets invoked as PrecompileHeader foo.pch, where everything else is
invoked as C++ a\b\c.cpp (i.e. with the path). Is this something I should worry about?
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: Building DLLs... (and precompiled headers)
Date: Fri, 25 May 2001 16:46:10 +0100
Ah, no problem. Found the fix:
MakeLocate $(_t:S=$(SUFLIB)) : $(LOCATE_TARGET) ;
...in my SharedLibraryFromObjects rule seems to have fixed it.
Subject: RE: Building DLLs... (and precompiled headers)
Date: Fri, 25 May 2001 22:13:37 -0700
From: "Chris Antos" <chrisant@windows.microsoft.com>
It took me a long time to get my PCH rules and dependencies working no
matter what directory I built from, etc. The changes were extensive,
because I had to hack around the fact that Jam doesn't properly handle
the multiple target case. I still mean to crack open the dependency
code inside jam.exe and fix that, but I haven't found the time yet.
I've attached my Jambase file for your perusal. As you can see, my
attempt at a solution is a bit of a hack, but it does seem to work. Uh,
my Jambase is tailored in other ways as well, FYI. It also includes
rules for .sbr/.bsc browse information files, and .idl files.
The key changes regarding PCH files involve these rules (and also their
actions, for many of the rules):
- rule C++
- rule HdrRule (very nasty, gristing is tricky, it won't necessarily
figure out it needs to rebuild the PCH file when headers it included are updated!)
- rule Library
- rule Object
- rule Pch
- rule Res
- rule SubDir
- rule SubDirPrecompHdr
- rule SubDirPrecompHdrEnd
Maybe also these, but since I think Main is not involved, these probably
aren't either:
- rule DllLinkFlags
- rule DllMain
A sample Jamfile using the PCH stuff:
SubDir TOP foo ;
SubDirHdrs $(TOP) bar ;
SubDirPrecompHdr ;
Main foo : main.cpp ;
LinkLibraries main : user32.lib ;
The SubDirPrecompHdr rule usage is:
SubDirPrecompHdr [cppfile [: hdrfile]] ;
where brackets indicate optionalness. If cppfile is omitted, it
defaults to precomp.cpp. If hdrfile is omitted, it defaults to precomp.h.
I've even tried making several directories share a single PCH file, and
that does seem to work, except that if you build from somewhere whose
scope doesn't include the directory that is primarily responsible for
the PCH file, then if I remember correctly it doesn't know how to
rebuild the PCH file in that case.
Subject: RE: Building DLLs... (and precompiled headers)
Date: Sun, 27 May 2001 10:43:20 -0700
From: "Chris Antos" <chrisant@windows.microsoft.com>
I should clarify my comments about HdrRule -- I meant that without my
changes to the gristing, it failed to get the dependencies right and was
not rebuilding the PCH file when headers it included changed. The
HdrRule in my Jambase should have the gristing right so that it rebuilds
the PCH file when appropriate.
I think that gristing could be improved to automatically use the actual
path where the header file was found, rather than being given an
arbitrary prefix. For example, in my project, that would consolidate
the headers uniquely, and drop the number of dependencies from ~6000 to
~1000. I may address this eventually but for now it's not a big issue
for me, it's just a little annoying that Jam takes longer than it needs
to, when figuring out dependencies.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Thu, 31 May 2001 15:58:18 +0100
Subject: C++ rule and current directory...
When invoking 'jam -d2' from the root of the source tree, I see things like
this:
C++ lib\httpd\core\ByteRanges.obj
cl [...] /I ./lib /IP:\MSSDK\Include /Folib\httpd\core\ByteRanges.obj
/IP:\VStudio\VC98\include /Tplib\httpd\core\ByteRanges.cpp
(I've snipped and wrapped it for readability)
It seems that this occurs with most other rules, too. The current directory
is left as the root of the source tree. This causes problems with #include
"", since the path to the actual source files must be named in a /I
statement (added with SubDirC++Flags).
This strikes me as counter-intuitive -- if I was building something with the
Visual C++ IDE, or with a recursive make (i.e. make -C), I wouldn't need to
worry about this.
So:
a) Am I misunderstanding it? It is in the correct directory, and something
else is wrong?
b) Do I have to furtle with the $(HDRS) stuff? If so, how? What have I forgotten?
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Thu, 31 May 2001 16:00:05 +0100
Subject: LinkLibraries and system libraries
I've got a Jamfile like this:
Main my_server : my_file.cpp ;
LinkLibraries httpd_server :
my_library
ws2_32.lib ;
ws2_32.lib is a system library.
Jam says that it can't build the project, since it doesn't know how to build
ws2_32.lib. How do I tell jam that the file exists?
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Thu, 31 May 2001 16:00:52 +0100
Subject: External Makefiles
Simple question: What's the correct way to get jam to spawn make to build
something that came with its own Makefile (in this case, id3lib).
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Thu, 31 May 2001 16:35:15 +0100
Subject: Re: C++ rule and current directory...
Doh! Further reading of the supplied Jambase reveals the -I $(HDRS) in the
C++ actions, and the HDRS on $(<) in the Object rule. Seems my header file
was in the wrong place. Plus the fact that the compiler looks in the same
directory as the .cpp file, rather than the working directory anyway.
Date: Thu, 31 May 2001 19:43:35 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: LinkLibraries and system libraries
Don't add system libraries to your dependency graph.
Use LINKLIBS or NEEDLIBS instead.
From: "David Abrahams" <abrahams@altrabroadband.com>
Date: Thu, 31 May 2001 14:05:27 -0400
Subject: emacs editing mode for Jam?
One thing has been driving me crazy recently: I can't find a decent emacs
mode for editing Jam files. sh-mode, perl-mode, and python-mode all come
close in various ways, but fail in others. Hacking modes in emacs is such a
PITA that I can't easily figure out what to do. Have you come across/heard
of anything?
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: LinkLibraries and system libraries
Date: Thu, 31 May 2001 19:26:28 +0100
Ah, of course. At the moment, I've gone with something like:
rule SystemLibraries
{
    local _t = [ FAppendSuffix $(<) : $(SUFEXE) ] ;
    LINKLIBS on $(_t) += $(>:S=$(SUFLIB)) ;
}
Main foo : foo.cpp ;
SystemLibraries foo : ws2_32.lib ;
...which works (once I thought to steal the FAppendSuffix stuff from
LinkLibraries).
However, this becomes a problem when I try to use it with my SharedLibrary
rule (which builds a DLL) instead of Main, because it assumes the $(SUFEXE) suffix.
The same problem applies to the supplied LinkLibraries rule.
Any suggested fixes? Would it be a good idea to generate some kind of
LINKLIBS-foo variable, which I could then apply in the
{Main|SharedLibrary}FromObjects rule, once I've figured out the correct suffix?
Can I change the LinkLibraries/SystemLibraries rules to know what the
Main/SharedLibrary is going to build? Should I just put up with it and
create LinkLibrariesShared/SystemLibrariesShared rules that use $(SUFSHR) instead?
From: "Brett Calcott" <brett.calcott@paradise.net.nz>
Date: Fri, 1 Jun 2001 11:38:27 +1200
Subject: layered dependencies and shared libs
The project I am working on consists of several independently usable layers,
which are separated into subdirs and are independent cvs modules. Because
each of the layers is independent, they are not arranged in a hierarchy.
I check all of the modules out to a single directory and define this subdir as TOP :
utilities\
layer1\
layer2\
app1\
app2\
where utilities and layer1 & 2 build shared libs (.so or .dll) and app1 & 2
build executables.
dependencies are as follows:
layer1 : utilities
layer2 : layer1 utilities
app1 : layer1 utilities
app2 : layer1 layer2 utilities
I have 2 questions:
1. How do I get a shared library to depend correctly on another shared
library. When I use LinkLibraries it attempts to create a static library as
the dependency.
2. The way I am doing things does not fit the standard hierarchical method -
and I would like the Jamrules to be in cvs. With Make I can put a
Makefile.include in another subdir & "include
$TOP/setup_dir/Makefile.include" in the other Makefiles. The SubDir rules in
Jam don't seem amenable to this. Is creating a link the easiest way?
Date: Fri, 01 Jun 2001 10:53:11 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: layered dependencies and shared libs
I don't think that this is possible with Jam currently; however, there are
ways to circumvent this (more on this below).
Well, you could just use "include ...." in your Jamfile. The SubDir rule
is specially designed to deal with sub-directories of a single project.
I believe that the solution to your problem is to treat each one of
your independent layers as independent projects, that is:
- define a "Jamfile" in each of "layer1", "layer2" and "utilities",
and optionally a Jamrules file if you need one.
- you're not forced to use a variable named "TOP" in your SubDir rules,
so use something project-specific instead, like "LAYER1_TOP",
"LAYER2_TOP", "UTILITIES_TOP", etc..
in one of my projects:
# This Jamfile is used to compile the ZLib source code.
#
# We need to invoke a SubDir rule if the ZLib source directory top
# is not the current directory. This allows us to build the ZLib
# as part of another project easily.
#
ZLIB_TOP ?= $(DOT) ;
if $(ZLIB_TOP) != $(DOT)
{
SubDir ZLIB_TOP ; # this includes Jamrules if any..
}
#only use the source files that are required by LibPNG !!
# (we don't compile gzio.c, compress.c and uncompr.c)
#
zlib_sources = crc32.c deflate.c inflate.c zutil.c adler32.c infblock.c
inftrees.c infcodes.c inffast.c infutil.c trees.c ;
ZLIB_INCLUDE = $(ZLIB_TOP) ;
ZLIB_NEEDLIBS = $(LIBPREFIX)zlib$(SUFLIB) ;
Library $(LIBPREFIX)zlib : $(zlib_sources) ;
Notice that this example only builds a static library, but
you could easily change it for a DLL. The important points are:
- the "SubDir" rule is only called if ZLIB_TOP is already
defined (to something that isn't the current directory)
This lets you compile the ZLib independently in its
directory, or as part of a larger project, using the
same Jamfile..
- the Jamfile defines ZLIB_INCLUDE and ZLIB_NEEDLIBS that
are used by other packages that depend on the ZLib
(in my example, LibPNG) when they use this version
of the library.
My main application Jamfile/Jamrules are organized as follows:
- there is a default Jamrules file, used on all systems except
Unix, where the same file is generated from a "Jamrules.in"
template through "configure"
- the Jamrules file contains a configuration variable named
"USE_SYSTEM_ZLIB". When it is "true", the Jamrules must also
define "ZLIB_INCLUDE" and "ZLIB_NEEDLIBS" (which are typically
filled automatically by "configure" on Unix).
- the Jamfile tests the value of "USE_SYSTEM_ZLIB". If it is not
true, then it defines ZLIB_TOP relative to the current path,
then calls "SubInclude ZLIB_TOP"
And of course, I use ZLIB_NEEDLIBS to link any application or DLL
that needs to link to the ZLib (even indirectly), while ZLIB_INCLUDE
should be used in "SubDirHdrs" rules for source code that needs to
#include the ZLib public headers..
This works flawlessly here, hope this helps.. :-)
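The conditional described above might look something like this in the application's Jamfile (a sketch only; the bundled-zlib location and the FDirName usage are illustrative assumptions, not the author's actual layout):

```
# If the system's zlib is not used, build the bundled copy instead.
if $(USE_SYSTEM_ZLIB) != true
{
    # Hypothetical location of the bundled zlib under the project top.
    ZLIB_TOP = [ FDirName $(TOP) external zlib ] ;
    SubInclude ZLIB_TOP ;   # pulls in the zlib Jamfile shown above
}
```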
From: Amaury.FORGEOTDARC@ubitrade.com
Subject: Re: emacs editing mode for Jam?
Date: Fri, 1 Jun 2001 11:57:15 +0200
There is the jam-mode.el from Eric Scouten
in the Perforce Public Depot:
Date: Fri, 1 Jun 2001 09:47:43 -0500 (CDT)
Subject: Re: layered dependencies and shared libs
Just hack the LinkLibraries rule to generate a SharedLinkLibraries rule:
rule vSharedLinkLibraries
{
    local t s ;
    # make library dependencies of target
    # set NEEDLIBS variable used by 'actions Main'
    if $(<:S) { t = $(<) ; }
    else      { t = $(<:S=$(SUFEXE)) ; }
    s = $(>:S=$(SUFSHR)) ;
    Depends $(t) : $(s) ;
    NEEDLIBS on $(t) += $(s) ;
    SEARCH on $(s) += $(BUILT_LIBS) $(SHADOW_BUILT_LIBS) ;
}
There's a bit of extra stuff in there specific to our rules, so I
suspect it might be easier to hack the original LinkLibraries rule.
From: "Kimpton, Andrew" <awk@pulse3d.com>
Subject: RE: layered dependencies and shared libs
Date: Fri, 1 Jun 2001 09:27:54 -0700
We use a similar 'conditional include' mechanism here to deal with 'layers'
of libraries etc. such as you described.
One problem we have found, however, is that the link rule, when used with the
Microsoft library manager (we mostly build for Windows), supplies a 'TOP-relative'
path. Unfortunately, this path seems to change depending on the 'depth'
relative to TOP at which you invoked Jam. This in turn means that the same
object (a.obj) can be passed to link as ../../build/a/a.obj and
../../../build/a/a.obj. The Microsoft lib tool treats these as different
members of the archive since the paths are different, although in reality
they are the same - which is not good.
Perhaps this is more related to our use of a separate build tree which is
actually a peer to TOP not a child. Any thoughts or suggestions on fixing
this (right now we just have to be careful about not changing the apparent
'depth' of our trees to avoid this) would be gratefully received.
Date: Fri, 01 Jun 2001 19:59:18 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: layered dependencies and shared libs
The "Library" and "LibraryFromObjects" rules use the "grist" path
that is re-computed by each SubDir invocation. Indeed, this is done
by the following line of the Jambase:
rule SubDir {
...
SOURCE_GRIST = [ FGrist $(<[2-]) ] ;
...
}
As you can see, the grist is composed only from the second and following
parameters to the SubDir rule. This means that the two following commands:
SubDir PROJECT1_TOP src ;
SubDir PROJECT2_TOP src ;
will produce the same grist, even if "PROJECT1_TOP" and "PROJECT2_TOP"
correspond to completely different directories
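A contrived illustration of the point:

```
# Both invocations compute SOURCE_GRIST from "src" alone, so targets
# declared under them get the same grist <src> even though
# PROJECT1_TOP and PROJECT2_TOP name different directories.
SubDir PROJECT1_TOP src ;   # targets get grist <src>
SubDir PROJECT2_TOP src ;   # targets also get grist <src>
```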
Maybe this helps you solve your problem..
By the way, I don't quite understand why you'd want to link the same
object file into two different libraries or programs that are placed
in different directories. Could you enlighten us with a practical example?
From: "Kimpton, Andrew" <awk@pulse3d.com>
Subject: RE: layered dependencies and shared libs
Date: Fri, 1 Jun 2001 11:53:59 -0700
Yeah - re-reading my description I realized it wasn't as clear as it could
have been 8-) Here's what we do :
In each Jamfile for each library/executable we have something like
OBJDIR = obj.$(OSFULL[1]:L).debug ;
BUILD_OUTPUT_PATH = $(TOP)\\..\\build\\$(OBJDIR)\\dynamic_crt ;
LOCATE_TARGET = $(BUILD_OUTPUT_PATH)\\ia ;
(Actually OBJDIR and BUILD_OUTPUT_PATH can have a couple of different values
depending on release or debug builds, and linking against a static or
dynamic version of the Microsoft C runtime library.)
The problem is that since $(TOP) is set based on the location of the Jamfile
that is the 'starting point' $(TOP) may be '.' or '..' or '../..' depending
on whether the build was 'launched' from $(TOP) or a subdirectory 1 or 2
levels beneath it.
When the dependency actions run the Microsoft library manager to extract
the build date information from the library archive, in order to determine
what may need to be rebuilt, the contents of $(BUILD_OUTPUT_PATH) is used as
part of the arguments. So depending on the value of $(TOP) we get different
'results' for the build dates.
I could solve this by using something other than $(TOP) in the build output
path but our engineers place the source trees in different directories on
their machines and in fact some (such as myself) have multiple copies of the
source tree (we use perforce - so I have multiple perforce client
definitions with a different client root for each tree). Using a relative
path based on $(TOP) makes things fairly neat.
I had hoped that setting $(NOARSCAN) and/or $(KEEPOBJS) might avoid the
confusion but that doesn't seem to be the case (unless my brief testing
suffered from some other problem too).
From: "Brett Calcott" <brett.calcott@paradise.net.nz>
Date: Sat, 2 Jun 2001 21:48:39 +1200
Subject: PCCTS - parser generator rules
Firstly, thanks for all the help on my previous question. I have got a
preliminary setup doing pretty much what I want.
Except for this:
Jam has support for yacc - but I use a parser generator called PCCTS.
You run 2 programs:
antlr $(ANTLR_OPTIONS) -o $(OUTPUT_DIR) $(INPUT_GRAMMAR)
this generates the following:
$(OUTPUT_DIR)/tokens.h
$(OUTPUT_DIR)/OipParser.h
$(OUTPUT_DIR)/OipParser.cpp
$(OUTPUT_DIR)/OipGrammar.cpp
then you run this:
dlg $(DLG_OPTIONS) -o $(OUTPUT_DIR) $(OUTPUT_DIR)/parser.dlg
to generate these:
$(OUTPUT_DIR)/DLGLexer.cpp
$(OUTPUT_DIR)/DLGLexer.h
Can I use GenFile?
Any hints would be welcome...
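No ready-made PCCTS rules appear here, but a hedged sketch (untested; the rule names, the grammar file name, and declaring several outputs for one action are all my assumptions, not existing Jambase rules) along the lines of the yacc support might be:

```
# Sketch: one action produces the four antlr outputs, a second
# runs dlg over the generated parser.dlg.
rule RunAntlr
{
    Depends $(<) : $(>) ;
    Clean clean : $(<) ;
}
actions RunAntlr
{
    antlr $(ANTLR_OPTIONS) -o $(OUTPUT_DIR) $(>)
}

rule RunDlg
{
    Depends $(<) : $(>) ;
    Clean clean : $(<) ;
}
actions RunDlg
{
    dlg $(DLG_OPTIONS) -o $(OUTPUT_DIR) $(>)
}

RunAntlr tokens.h OipParser.h OipParser.cpp OipGrammar.cpp : grammar.g ;
RunDlg DLGLexer.cpp DLGLexer.h : parser.dlg ;
```

Note that jam's handling of actions with multiple targets is imperfect (a bug report on this appears later in this archive), so the generated .cpp files should still be fed through the usual Object/Main rules.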
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Thu, 7 Jun 2001 11:13:10 +0100
Subject: Bogus LOCATE_TARGET
I've got a directory tree something like this:
TOP
lib
core
app
tests
Which I've strung together with SubInclude and SubDir rules, like this:
# TOP/Jamfile
SubDir TOP ;
SubInclude TOP lib ;
SubInclude TOP app ;
SubInclude TOP tests ;
# TOP/lib/Jamfile
SubDir TOP lib ;
SubInclude TOP lib core ;
...etc....
To the default MSVCNT C++ rule, which reads like this:
$(C++) /c $(C++FLAGS) $(OPTIM) /Fo$(<) /I$(HDRS) /I$(STDHDRS) /Tp$(>)
...I've added /Fd, so it reads like this:
$(C++) /c $(C++FLAGS) $(OPTIM) /Fd$(LOCATE_TARGET)/ /Fo$(<) /I$(HDRS)
/I$(STDHDRS) /Tp$(>)
The /Fd switch tells Visual C++ where to put the .pdb file, containing debug
information, etc. However, it always passes the last directory listed in
the tree, i.e. in this case, it'll always pass /Fdtests/
I was expecting it to pass the name of the subdirectory in which the C++
rule was invoked. Now, I suspect that what is happening is that when the
actions are invoked, the value of LOCATE_TARGET is different from what I was
expecting.
The SUBDIRC++FLAGS, which ought to suffer from the same problem, work
because of this line:
C++FLAGS on $(<) += $(C++FLAGS) $(SUBDIRC++FLAGS) ;
in 'rule C++'.
My question: Do I need to do something like this to fix the problem? Should
I just add the /Fd switch to this part of the rule?
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Thu, 7 Jun 2001 11:14:50 +0100
Subject: Compiling multiple C++ files at once
Microsoft's C++ compiler has a natty feature where you're allowed to pass
multiple filenames to it at once. This reduces the compile time, since the
compiler only has to be spun up once for each batch.
Obviously, you have to ensure that the switches are the same for all of the files.
My question: How to do this in jam? Using 'together' on the rule doesn't
appear to do anything.
Date: Thu, 07 Jun 2001 13:18:22 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: Bogus LOCATE_TARGET
Action rules are invoked after the complete build of the dependency graph,
i.e. after parsing all other rules. By default, the action command expansion
uses the "current" (i.e. last) values for each variable, which is why your
LOCATE_TARGET is "tests/" here.
You can however alter this by using the "bind VARNAME" modifier in the
action rule definition. This causes the expansion of VARNAME to use the
value that the variable had when the corresponding C++ rule was invoked..
(as an example, this is also what is used for the NEEDLIBS variable, in the CC rule)
This is another example of target-specific variable expansion. Note
that this should be read as:
$(C++FLAGS) when used in actions generating $(<) should
be expanded as the _current_ value of
"$(C++FLAGS) $(SUBDIRC++FLAGS)"
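Applying that same target-specific idea to the /Fd problem, one sketch (untested; PDB_DIR is a hypothetical variable name, not part of the stock Jambase) is to snapshot LOCATE_TARGET onto each object when the rule runs, then reference the snapshot in the actions:

```
# In rule C++ (or Object), record the current value on the target:
PDB_DIR on $(<) = $(LOCATE_TARGET) ;

# In the actions, $(PDB_DIR) then expands to the recorded per-target
# value rather than to the last global value of LOCATE_TARGET:
actions C++
{
    $(C++) /c $(C++FLAGS) $(OPTIM) /Fd$(PDB_DIR)/ /Fo$(<) /I$(HDRS) /I$(STDHDRS) /Tp$(>)
}
```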
Date: Thu, 07 Jun 2001 13:26:53 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: Compiling multiple C++ files at once
That's because "together" concatenates the source targets that apply to a single
destination target. Calling the Visual C++ (or even GNU C) compiler with
multiple source files, as in:
cl file1.c file2.c
really creates two distinct object-file targets, and there is no way
to indicate this to Jam currently. You could probably hack some custom
rules using pseudo/temporary targets, but I wouldn't recommend it.
Another way to achieve what you need is to use a "wrapper" source code
that simply #includes other sources, as in:
#include <file1.c>
#include <file2.c>
and compile it in one pass, into a single object. Of course, this supposes
that this multiple inclusion will not create conflicts (mainly in static
data and function names), but it works pretty well with FreeType 2 :-)
From: <boga@mac.com>
Date: Fri, 8 Jun 2001 09:23:58 +0200
Subject: Compiling multiple C++ files at once
We'd like to use the same feature too. I'm not sure if it's possible with
Jam (using custom rules).
This is the feature I'd like to use with Jam. Having two object files means
that if only one of the objects needs to be updated, only that one has to be
recompiled. Has anyone implemented such a custom rule? Has anyone got any ideas?
I've tried the following:
I'd like to implement a rule like this:
local SOURCES = file1.c file2.c file3.c file4.c file5.c file6.c ;
local OBJECTS = [ multicppcompile $(TARGETDIR) : $(SOURCES) : $(CFLAGS) ] ;
Where multicppcompile is something like:
rule multicppcompile {
    local i ;
    local OBJECTS ;
    for i in $(2) {
        Depends $(i:D=$(1):S=.obj) : $(i) ;
        OBJECTS += $(i:D=$(1):S=.obj) ;
    }
    # ????
    return $(OBJECTS) ;
}
What I'd need is:
- if only file1.c needs to be recompiled, the command should be: >cl -o
$(TARGETDIR) file1.c
- if file1.c and file2.c have to be recompiled, the command should be: >cl -o
$(TARGETDIR) file1.c file2.c
What I have tried:
1. 'updated' action modifier: then the object files would have to be the
$(>) of the action, but then I couldn't get the corresponding source files
for the objects.
2. 'together' action modifier: $(<) would have to be the same, so it's useless.
3. response files:
rule multicppcompile {
    local i ;
    local OBJECTS ;
    for i in $(2) {
        Depends $(i:D=$(1):S=.obj) : $(i) ;
        OBJECTS += $(i:D=$(1):S=.obj) ;
    }
    initcmdfile $(OBJECTS) : $(OBJECTS[1]).cmd ;
    for i in $(2) {
        addfiletocmdfile $(i:D=$(1):S=.obj) : $(i) ; # !!!!
    }
    execcl $(OBJECTS) ;
    closecmdfile $(OBJECTS) ;
    return $(2:D=$(1):S=.obj) ;
}
But this won't work, as the line marked with # !!!! would have to be
addfiletocmdfile $(OBJECTS) : $(i) ;
otherwise execcl might be executed before the second addfiletocmdfile!
And if it's $(OBJECTS), all object files will be updated.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Compiling multiple C++ files at once
Date: Fri, 8 Jun 2001 10:55:50 -0700
I have a suggestion... I have not tried this, but could
the compile step be modeled after the archive step in the build?
In the standard rules, libX depends on libX(A.o), libX(B.o), etc.
Then libX(A.o) depends on A.o, libX(B.o) depends on B.o, etc.
Finally, A.o depends on A.cpp, B.o depends on B.cpp.
Could the rules be changed so that...
libX depends on libX(A.o), libX(B.o)
and libX(A.o) depends on A.cpp, libX(B.o) depends on B.cpp,
And then, instead of the Archive rule and Objects rule, use a ArcCpp rule
That rule might be written as
actions updated together piecemeal ArcCpp {
$(CPP) -c $(>) etc ...
$(AR) $(<) $(>:S=.o)
$(RM) $(>:S=.o)
}
(in its most basic form)
Yes, a lot of the Jamrules would have to be duplicated to make this
work (if it can work).
From: <boga@mac.com>
Date: Sun, 10 Jun 2001 17:55:07 +0100
Subject: [Bug] jam and action with more than one target + fix.
If an action has more than one target, then before executing the action, jam
should update all dependents of the targets.
However, jam will update only the dependents of the first target!
Here's a very simple jam file to demonstrate the bug:
(Only tested with Jam 2.3.1 but this bug should be in 2.3.2 too.)
Test.jam:
=======
#       _ a <--- a_src
# all <-/
#       \_ b <-- b_tmp <-- b_src
#
# 'upd a b' : a_src b_tmp ; is executed first and not 'upd b_tmp : b_src' ;
#
actions upd { ECHO Updating $(<) : $(>)}
upd a b : a_src b_tmp ;
upd b_tmp : b_src ;
# BUG!: Jam won't update dependents of 'b' before executing this action!
# Jam will update only dependents of 'a'
Depends a : a_src ;
Depends b : b_tmp ;
Depends b_tmp : b_src ;
Depends all : a b ;
NOTFILE all ;
NOTFILE a_src ;
NOTFILE b_src ;
The output of the rules shows that the 'upd b_tmp : b_src' action is
executed after(!) 'upd a b : a_src b_tmp', but b depends on b_tmp!
A possible fix to this problem is to insert the following code into the
make1a() function in make1.c:
{
    ACTIONS *actions;

    for( actions = t->actions; actions; actions = actions->next ) {
        TARGETS *targets;

        for( targets = actions->action->targets; targets;
             targets = targets->next ) {
            if( targets->target != t )
                make1a( targets->target, t );
        }
    }
}

t->progress = T_MAKE_ACTIVE;

/* Now that all dependents have bumped asynccnt, we can now */
/* decrement our reference to asynccnt. */
make1b( t );
}
Date: Sun, 10 Jun 2001 18:22:05 +0100
Subject: Re: Compiling multiple C++ files at once
Here's a solution for compiling multiple sources at once. Jam's handling of
multiple targets in an action has to be fixed in order for this to work
(see the bug report above).
[This version works with Microsoft CL, using response files.]
Thanks to Randy for the idea.
# OBJECTS = MultiCppCompile $(TARGETDIR) :
# $(SOURCES) : $(CFLAGS) ;
#
# This rule will compile $(SOURCES) to the
# targetdir, and will return the result objects.
#
rule MultiCppCompile {
    local destdir = $(1) ;
    local cflags = $(3) ;
    local objects srcrefs i ;
    # Set up dependencies and create reference files for each source.
    # This way reference files will be updated.
    for i in $(2) {
        local srcref = $(i:D=$(destdir)).file ;
        local object = $(i:D=$(destdir):S=.obj) ;
        mksrcref $(srcref) : $(i) ;
        objects += $(object) ;
        srcrefs += $(srcref) ;
        Depends $(object) : $(srcref) ;
    }
    cflags += "/Fo$(destdir:G=)\\" ;
    initcmdfile $(objects) : $(objects[1]).cmd ;
    for i in $(srcrefs) {
        appendupdatedfiletocmdfile $(objects) : $(i) ;
    }
    addparamstocmdfile $(objects) : $(cflags) ;
    domulticppcompile $(objects) : $(srcrefs) ;
    closecmdfile $(objects) ;
    return $(objects) ;
}
# actions domulticppcompile:
actions quietly updated domulticppcompile { cl /nologo @$(CMDFILE) }
actions quietly mksrcref { ECHO "$(>)" > "$(<)" }
rule mksrcref { Depends $(<) : $(>) ; }
# Response file utility rules:
actions quietly initcmdfile { copy nul: $(CMDFILE) > nul: }
rule initcmdfile { CMDFILE on $(<) = $(>) ; NOTFILE $(>) ; }
actions quietly closecmdfile { DEL $(CMDFILE) }
actions updated quietly appendupdatedfiletocmdfile { type $(>[1]) >> $(CMDFILE) }
actions quietly together piecemeal addparamstocmdfile { ECHO $(>) >> $(CMDFILE) }
rule addparamstocmdfile { NOTFILE $(>) ; }
From: "David Abrahams" <abrahams@altrabroadband.com>
Date: Tue, 12 Jun 2001 13:10:49 -0400
Subject: INCLUDES documentation bug?
The documentation for the INCLUDES rule reads:
INCLUDES targets1 : targets2 ;
Builds a sibling dependency: makes each of targets2 depend on
anything upon which each of targets1 depends.
But the example given doesn't seem to agree with the documentation:
Depends foo.o : foo.c ;
INCLUDES foo.c : foo.h ;
"foo.h" depends on "foo.c" and "foo.h" in this example.
According to the documentation, this would:
1. Make foo.o depend on foo.c
2. Make foo.h depend on anything on which foo.c depends
But in the example, there is no reason to think that foo.c depends on
anything. So what causes foo.h to depend on foo.c and foo.h? Is there some
information missing here? Also, isn't the circular self-dependency of foo.h
a problem?
From: Amaury.FORGEOTDARC@ubitrade.com
Subject: Re: INCLUDES documentation bug?
Date: Tue, 12 Jun 2001 20:51:49 +0200
You're right, the documentation seems incorrect.
The sentence should be something like:
Builds a sibling dependency: makes each of targets2 a
dependency of anything depending on targets1.
From: "David Abrahams" <abrahams@altrabroadband.com>
Subject: Re: INCLUDES documentation bug?
Date: Tue, 12 Jun 2001 14:39:36 -0400
I think I am having a problem getting across a language barrier here. Just
to clarify a bit, let me rephrase what you wrote. Please tell me if I got
your intention correctly:
Builds a sibling dependency: makes each of targets2
depend on every target that depends on a member of targets1.
Hmm, it could also mean:
Builds a sibling dependency: makes each of targets2
depend on every target that depends on all members of targets1.
                                        ^^^^^^^^^^^
Regardless, that would make foo.h depend on foo.o. So perhaps you didn't
mean either of these.
Maybe you meant:
Builds a sibling dependency: for each target X that depends
on a member of targets1, makes X depend on each of targets2
This is closer to the meaning of "sibling dependency", and it makes sense in
the context of the example (it would make foo.o depend on foo.h) but it
still doesn't produce the result described in the example (that foo.h
depends on foo.c and on itself).
One would hope. But what /is/ "the right thing"? ;-)
From: "David Abrahams" <abrahams@altrabroadband.com>
Date: Tue, 12 Jun 2001 15:33:49 -0400
Subject: building multiple products from a single action
Many build actions produce more than one output file. For example, when a
DLL is built on Windows, it generates an import library (.lib) and the
dynamic library(.dll). The .lib file gets statically linked into programs
that use the dynamic library, and causes the dynamic library to be loaded
when needed.
How can this be captured in Jam?
In particular, one would like:
a. A dependency on the .lib will not cause it to be rebuilt if it is
present but the .dll is not
b. a dependency on both the .lib and the .dll not cause them to be built
twice if neither is present
c. a dependency on the .dll will cause it to be rebuilt if it is not present.
The best I have been able to do so far uses INCLUDES to make the .lib a
sibling of the .dll, but this violates condition (a) above:
rule dll {
Depends $(<) : $(<).lib $(<).dll ;
Depends $(<).lib $(<).dll : $(>) ;
INCLUDES $(<).lib : $(<).dll ;
mkdll $(<).lib $(<).dll : $(>) ;
}
actions mkdll { ECHO "sources: $(>)" }
rule main { Depends $(<) : $(>) ; }
actions main { ECHO "sources: $(>)" }
dll a : foo.cpp ; # neither a.lib nor a.dll exists
dll b : foo.cpp ; # b.lib exists; b.dll doesn't
dll c : foo.cpp ; # c.lib and c.dll exist
main x : a.lib ;
main y : b.lib ;
main z : c.lib ;
Depends all : a b c x y z ;
Without the invocation of INCLUDES, I get the warning about a.dll being an
"independent target". The documentation for this warning doesn't seem to
agree with observed facts, since in this case a.dll appears in both $(<) and
$(>) of a Depends rule (right at the top of rule dll). Does anyone have a
better explanation for the warning?
Finally, it's quite unclear from the documentation exactly what it means
when a rule with build actions has multiple elements in $(<). Does that
capture the notion somehow that the elements of $(<) are built together?
Subject: RE: building multiple products from a single action
Date: Tue, 12 Jun 2001 13:00:36 -0700
From: "Chris Antos" <chrisant@windows.microsoft.com>
Check the jamfile I posted on 5/27/2001.
In addition to the Pch, Idl, and other rules* it includes a DllMain rule
which builds the DLL. It does not currently note that it produces a
.lib file, but you can add that relatively easily, see the Idl rules for
an example -- they do the same kind of thing, just for .idl files that
produce .h and three .c files. In this analogy, the .h file would be
the .dll and the .c file would be the .lib (or vice versa, whatever). I
can't remember for sure if it knows how to rebuild the .c files if the
.h file exists. It's easy to get it to avoid unnecessary rebuilding,
getting it to do necessary building can be trickier. ;-)
* some bugs (mainly with Bsc file) have been fixed since 5/27/2001, plus
new rule UsePrecompHdr lets multiple directories share a pch file
generated by another directory via SubDirPrecompHdr. If people want me
to post an updated copy, let me know.
Date: Tue, 12 Jun 2001 17:20:05 -0700 (PDT)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: INCLUDES documentation bug?
| Depends foo.o : foo.c ;
| INCLUDES foo.c : foo.h ;
| "foo.h" depends on "foo.c" and "foo.h" in this example.
That should read:
"foo.o" depends on "foo.c" and "foo.h" in this example.
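With that correction, the semantics can be illustrated as:

```
# From the example under discussion:
Depends foo.o : foo.c ;
INCLUDES foo.c : foo.h ;
# foo.o now effectively depends on both foo.c and foo.h:
# touching foo.h makes anything that depends on foo.c
# (here, foo.o) out of date, without making foo.c itself
# a target that must be "built".
```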
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: building multiple products from a single action
Date: Wed, 13 Jun 2001 09:51:22 +0100
A line like the following inside the DllMain rule does the trick for me.
I've not checked to ensure that it fulfills all of the requirements, however.
# Tell jam where it can find the import library
MakeLocate $(_t:S=$(SUFLIB)) : $(LOCATE_TARGET) ;
From: "Chris Antos" <chrisant@windows.microsoft.com>
Sent: Tuesday, June 12, 2001 9:00 PM
Subject: RE: building multiple products from a single action
I'd be interested in seeing another copy.
Date: Wed, 13 Jun 2001 13:53:25 -0400
From: Steve Leblanc <steven.leblanc@mayahtt.com>
Subject: Problem with Clean in SubDirs
A few C++ compilers that I use create a directory where they
put the files used to instantiate templates, so that
my directory tree looks like this after a build:
dir_a
 |
 +- dir_b
 |    +- ii_files
 +- dir_c
      +- ii_files
The Jamfiles in dir_b and dir_c contain the line
Clean clean : ii_files ;
If I do a 'jam clean' in dir_b or dir_c, all the created files
are removed, as well as the ii_files directory (I've set $(RM)
to 'rm -rf'). However, if I execute the same command in dir_a,
whose Jamfile does a SubInclude of those in dir_b and dir_c,
everything gets removed, except for the ii_files directories.
I tried adding the SOURCE_GRIST to ii_files in the Clean
command, but it didn't help. Any ideas?
Maya Heat Transfer Technologies Ltd.
Date: 14 Jun 2001 02:12:38 IST
From: Parth venkat <rvp_dac223@usa.net>
Subject: Build of Jam from sources fails:
I am trying to build a jam binary from source on a Red Hat Linux
7.1 installation with:
gcc version 2.96 (2.96-81)
GNU make 3.79.1 for i386-redhat-linux-gnu
I downloaded the sources, and what I could gather from the README was to just
issue a make command in the source directory.
Here is the error I get
$ make
jam0
make: jam0: command not found
make: *** [all] Error 127
1) I am sorry if this issue was already addressed before.
2) I am not subscribed to the list as yet, so please include my email id in
the reply.
3) I could not get any help from a search on the Perforce site.
4) If I am missing any system config, I will be most happy to furnish any
further details.
Thank you very much in advance.
Date: Thu, 14 Jun 2001 17:16:23 +1000
From: Milton Taylor <mctaylor@ingennia.com.au>
Subject: JAM Questions: Building for debug
We're looking at using jam to standardise our builds. I have read the
Laura Wingerd paper on how it was implemented at Sybase. Our own systems
are not nearly on the same scale as that, but we do have multiple
platforms and compilers to support, not to mention a reasonably complex
layering of C++ libraries.
I have not even tried jam yet. Before I do, I would be interested in any
insight on these issues:
Questions:
1. The paper describes some in-house customisations done to Jam for
Sybase's purposes. The relative path naming one interested me. Did these
customisations make it back into the version of Jam that exists today? (Ver 2.3)
2. I am not sure how to handle the issue of debug building, with respect
to the source file paths that are embedded into the exe or debug
databases. (e.g. on MSVC 6 there is a .pdb file that contains all this
stuff.) The problem is exacerbated when a project links in with debug
versions of shared libraries. Basically, I want to avoid having to enter
paths to source modules in the debugger. I cannot always guarantee that
the developers 'root' workspace will be the same, so I am probably in
trouble whenever absolute path names get embedded in the debug info.
Matters are complicated by the fact that the shared library may not sit
in the same workspace location when its being used to build an
application, as when it was itself built. In this case, there are
problems with both absolute and relative pathing approaches.
What do others do in respect of this?
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: Build of Jam from sources fails:
Date: Thu, 14 Jun 2001 15:30:37 +0100
You don't have . (dot, the current directory) in your path. Just run ./jam0
at the prompt when this fails.
From: "Parth venkat" <rvp_dac223@usa.net>
Sent: Thursday, June 14, 2001 3:12 AM
Subject: Build of Jam from sources fails:
I am working on a build system constructed on top of Jam which is designed
to address the multiple-compiler multiple-build-variant problem. I hope to
have a version ready for public inspection in the next few days, and would
welcome participation from members of this list. I have attached a
preliminary (incomplete) version of the documentation.
The system is being designed for use in my professional work and by boost
(www.boost.org), an organization developing open-source C++ libraries. It
will be hosted at boost as the Boost Build system.
The boost build system handles this. So far, the only customization has been
to the Jambase rules; we haven't had to change Jam's C++ source code. There
are a few changes we anticipate wanting to make, though, mostly for
compatibility with a wider range of platforms.
I'm not intimately familiar with the issues of where PDBs need to be located
and exactly how they work. Each toolset supported by the boost build system
has an associated toolset definition file, written in simple Jam code; I'm
hoping that experts on particular toolsets will be able to help me fill in
these details.
Hmm, sounds thorny. If you can figure out how you want the MSVC tools to be
invoked so that everything works for you, we can surely get the build system
to invoke them that way... but someone other than me will have to figure out
the toolset-specific stuff.
From: "Prabhune, Abhijeet" <APrabhun@ciena.com>
Date: Fri, 15 Jun 2001 11:32:02 -0400
Subject: Invoking batch files from within jamfile?
I read on the website that on windows NT, batch files can be invoked (from
within jamfile I presume). Does anybody know how this can be done? I have a
bunch of batch files to build dlls and then I want to use these dlls to
build an object file.
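One plausible approach (a sketch, not from the thread; the rule name, file
names, and "dlls" pseudo-target below are all hypothetical): Jam actions are
handed to the command interpreter, which on NT is cmd.exe, so a .bat file can
be invoked directly from an actions block:

```jam
# Hypothetical rule: rebuild a DLL by running the batch file that makes it.
# Usage: BatDll mylib.dll : makedll.bat ;
rule BatDll
{
    Depends $(<) : $(>) ;   # the DLL depends on its batch file
    Depends dlls : $(<) ;   # collect all DLLs under one pseudo-target
    NotFile dlls ;
}
actions BatDll
{
    call $(>)
}
```

Targets that consume the DLL would then be tied to it with further Depends
statements so the batch file runs first.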
From: "Brett Calcott" <brettc@paradise.net.nz>
Subject: Re: Build of Jam from sources fails:
Date: Fri, 15 Jun 2001 09:26:16 +1200
./jam0
This fooled me for a while too - it should probably get flagged as a bug and
fixed in the next release. All it needs is the ./ to be put in the Makefile.
From: "EXT-Goodson, Stephen" <Stephen.Goodson@PSS.Boeing.com>
Date: Fri, 15 Jun 2001 19:40:23 -0700
Subject: Need help with SubDir
I'm just starting to use Jam and I've run into a problem using SubDir. I've constructed
a simple example that shows my problem. Here's the Jamfile:
# Jamfile in $(TOPDIR)/Sub/
SubDir TOPDIR Sub ;
rule CatRule {
Depends $(<) : $(>) ;
Clean clean : $(<) ;
}
actions CatRule { cat $(>) > $(<) }
CatRule foo.c : somefile ;
Main foo : foo.c ;
#end of Jamfile
There is an empty Jamrules file in $(TOPDIR), and Sub/somefile exists. Originally
I had CatRule in the Jamrules file, but moved it while trying to figure this out.
When I invoke jam, it complains that it doesn't know how to make <Sub>foo.c, but
if I type "jam foo.c" it makes it just fine. Also, if I remove the SubDir rule it works fine.
What am I doing wrong? Thanks for your help.
Date: Sat, 16 Jun 2001 00:58:22 -0500 (CDT)
Subject: Re: Need help with SubDir
The reason would be that the main rule establishes a dependency
between <Sub>foo and <Sub>foo.o and <Sub>foo.c, but the
CatRule rule just makes a dependency between somefile and foo.c.
Since <Sub>foo.c is a different target than foo.c, the CatRule
action is not invoked.
This is just a guess. If you run jam with debug turned on, say -d5
you will see all the info...
The <Sub>foo.c notation is called grist, and it is used to make targets
unique across directories, even if multiple directories have files of
the same name in them.
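To illustrate (a sketch; the directory names are made up): two Jamfiles can
each mention a file called util.c, and grist keeps the resulting targets
distinct:

```jam
# In $(TOPDIR)/liba/Jamfile: SubDir sets the current grist to <liba>,
# so util.c becomes the target <liba>util.c.
SubDir TOPDIR liba ;
Library liba : util.c ;     # compiles <liba>util.c

# In $(TOPDIR)/libb/Jamfile: the same file name gets different grist.
SubDir TOPDIR libb ;
Library libb : util.c ;     # compiles <libb>util.c - a distinct target
```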
From: "David Abrahams" <abrahams@altrabroadband.com>
Date: Mon, 18 Jun 2001 18:20:22 -0400
Subject: Boost Build System prerelease
As I announced last week, the proposed Boost Build System is now available
for inspection.
It is available via:
Now supports building DLLs and (I think) shared libraries with GCC under unix.
Major code cleanup and commenting; the Jam code should be relatively
understandable now.
I'm still shoring up the documentation, but even that has been improved
quite a lot, including a gentle introduction at one end, and a guide to Jam
internals at the other.
The time has come for others to contribute. I have implemented 4 toolset
descriptions, for GCC, Metrowerks, Borland, and MSVC, and I have tested them
under Windows 2000. I need experts in various compilers and platforms
(including these) to step forward with their own toolsets and tweaks for the
4 I've got. Various other jobs that someone needs to take on are listed in
the TODO.txt file in boost/build.
http://users.rcn.com/abrahams/build_system_2001_6_18.zip
From: "David Abrahams" <abrahams@altrabroadband.com>
Date: Mon, 18 Jun 2001 18:29:05 -0400
Subject: Boost Build System prerelease (2nd try!)
That last email went out before I had edited it. It was mostly a copy of an
announcement I made to the boost.org group. Sorry!
As I announced last week, the proposed Boost Build System is now available for inspection.
It is available via:
1. http://users.rcn.com/abrahams/build_system_2001_6_18.zip (I do not intend
to keep this link current; I think the Perforce public depot might be a
better choice eventually, but I haven't got up to speed with that yet).
2. The boost files repository (for boost members).
3. Anonymous CVS to the "build-development" branch of boost's "build" module at SourceForge.
Now supports building DLLs and (I think) shared libraries with GCC under unix.
Major code cleanup and commenting; the Jam code should be relatively
understandable now.
I'm still shoring up the documentation, but even that has been improved
quite a lot, including a gentle introduction at one end, and a guide to Jam
internals at the other.
As I mentioned, I would welcome contributions from members of this list. If
you are interested in the planned direction of this project, please see the
TODO.txt file enclosed in the project archive.
From: "EXT-Goodson, Stephen" <Stephen.Goodson@PSS.Boeing.com>
Subject: RE: Need help with SubDir
Date: Tue, 19 Jun 2001 17:20:22 -0700
Thanks. That helps, but I'm still not quite getting it.
Jam doesn't complain about not knowing how to make <Sub>foo.c any more,
but it won't make it either, even though it appears to know how.
If I run jam, it tries to Cc Sub/foo.o, but fails because foo.c doesn't exist.
If I type 'jam foo.c', it creates foo.c but issues the "warning: using independent
target foo.txt". After that, jam will build foo.o and foo just fine.
Interestingly, if I then 'touch foo.txt', that will cause jam to remake foo.o
(from the out of date foo.c). It won't ever make foo.c if it already exists
(even with an explicit 'jam foo.c').
My updated Jam file, and some output from jam -d is below.
As before, it works fine without the SubDir command.
# Jamfile in $(TOPDIR)/Sub/
SubDir TOPDIR Sub ;
rule CatRule {
SEARCH on $(>) = $(SEARCH_SOURCE) ;
MakeLocate $(<) : $(LOCATE_SOURCE) ;
Depends [ FGristFiles $(<) ] : $(>) ;
Clean clean : $(<) ;
}
actions CatRule { cat $(>) > $(<) }
CatRule foo.c : foo.txt ;
Main foo : foo.c ;
made stable /home/xgoodson/Top/Sub
make -- <Sub>foo.c
bind -- <Sub>foo.c: /home/xgoodson/Top/Sub/foo.c
time -- <Sub>foo.c: Tue Jun 19 16:50:35 2001
make -- foo.txt
bind -- foo.txt: /home/xgoodson/Top/Sub/foo.txt
time -- foo.txt: Tue Jun 19 16:51:14 2001
made* newer foo.txt
made+ old <Sub>foo.c
made+ update <Sub>foo.o
From: "David Abrahams" <abrahams@altrabroadband.com>
Subject: Re: Boost Build System prerelease (2nd try!)
Date: Wed, 20 Jun 2001 07:49:36 -0400
<<I took a quick look at your build system. It sounds very interesting.>>
<<I would really appreciate if your clarifications in the section
"Internals" about a few missing pieces in the Jam documentation.>>
I'm sorry, something is getting lost in the translation. Are you asking for
clarification? If so, what would you like clarified?
<<It took me quite a while to figure out, what you describe very clearly.
(And I never had the time to formulate it for my colleagues).>>
Well, I hope it helps. I felt I had to write it down carefully just to
understand it myself. Also, the more I wrote, the more questions I
uncovered. I would do an experiment or two to answer the questions, and
write down the answers.
<<I will discuss it with my co-workers and I think we might try a rewrite of
our adaptions using your boost system as a base.
We are using WindowsNT and cross-compile (GCC) for our PPC403-target
embedded vxWorks-system). But this will take some time (1-2 weeks).>>
This is an open-source project. If you are interested in contributing or
collaboration, it will be appreciated.
From: "Brett Calcott" <brett.calcott@paradise.net.nz>
Subject: Re: Boost Build System prerelease
Date: Thu, 21 Jun 2001 09:29:56 +1200
I am new to Jam and was going to use it for a project of mine in C++ that
I want to use on both Win32 & Linux (which uses boost, by the way).
Firstly, well done on an amazing rework of the jam system to allow complex
multiple builds.
The approach that you have taken is quite a change on top of the basic Jam
system. Judging by some of the questions that occur on the list (multiple
builds - dlls etc) it would be good if the whole system could be
incorporated into jam. What does perforce think of this?
The current version of Jam only works with NT, not the other versions of
Win32. (I am betting that quite a few boost users have Win98/95 installed).
David Turner at www.freetype.org has made the necessary changes for it to
work on the other Win32 platforms but has added some extra definitions as
well. There seems to be some overlap here.
Lastly, why isn't allyourbase.jam just called Jamrules? This is the
automatic toplevel that is read in without using the -f option. Or have I
missed something here?
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Boost Build System prerelease
Date: Wed, 20 Jun 2001 19:09:23 -0400
I have corresponded with Christopher Seiwald, the Jam maintainer, about
making changes to the underlying Jam (C/C++) source code. He seemed
generally receptive to a collaboration on a few modifications. I have not
asked him about folding new Jamrules into the Jambase. I am guessing it
would be a hard sell, however. He (and others I suppose) have an investment
in projects built around the existing Jambase. My stuff adds a lot of code
to that, with capabilities and complexity that these existing projects
apparently don't need. I have made some effort to keep the functionality of
existing Jambase rules available in my work, but there's no guarantee that
everything works exactly as it used to.
Yes, in fact, I am beginning to find that to get certain things right the
freetype-specific "subst" rule
(http://freetype.sourceforge.net/jam/index.html#diff) may be necessary.
Jam's built-in string/path manipulation facilities are pretty weak, and can
only get you so far, unfortunately.
Two things:
1. allyourbase.jam is a modified Jambase; I needed to replace some of the
Jambase rules, including SubDir, which gets called before Jamrules is read.
2. I still think it is useful for a project to have a Jamrules file which
starts as a blank slate. I didn't want project users to have to muck about
in allyourbase.jam just to add a few rules or variable definitions of their own.
Of course, the system is under development. Having all those files floating
around is obviously a bit inconvenient. When things get more stable, I think
it would make sense to compile allyourbase.jam and boost-base.jam into Jam
as a new Jambase. For now, I thought it would be useful if people using an
out-of-the-box Jam could try out the build system.
From: "EXT-Goodson, Stephen" <Stephen.Goodson@PSS.Boeing.com>
Subject: RE: Need help with SubDir
Date: Wed, 20 Jun 2001 17:08:50 -0700
I'm still a little baffled by this, but I have figured one more
thing out. Adding grist within the rule doesn't work, but if the
caller adds grist manually it works fine:
Depends [ FGristFiles $(<) ] : $(>) ; #doesn't work,
Depends $(<) : $(>) ;
along with
CatRule <Sub>foo.c : foo.txt ; #works
Is there anyone out there who could explain this to me? Obviously I'd
prefer not to have to add grist manually each time. How do I go about
creating a rule that works as expected with the SubDir command?
From: "Brett Calcott" <brett.calcott@paradise.net.nz>
Subject: Re: Need help with SubDir
Date: Thu, 21 Jun 2001 13:00:58 +1200
I'm not sure why this does not work. The only difference I can think of
(from some of things that I am doing) is that all of my rules and actions
are in the top Jamrules file, rather than the Jamfile?
From: john@nanaon-sha.co.jp (John Belmonte)
Subject: Re: Boost Build System prerelease
Date: Thu, 21 Jun 2001 14:57:05 +0900
One point about your build system is that it doesn't address dependencies
between different targets. (I understand this is not a requirement for
Boost.) For example my embedded targets often have dependencies on native
targets, such as code generators or data converters. It may not be good to
add a higher level system such as yours to Jam unless it is sufficiently general.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Boost Build System prerelease
Date: Thu, 21 Jun 2001 08:26:44 -0400
Yes, it does (though perhaps not at quite the level you're speaking of).
Right now, there is the <lib>path construct to generate a dependency between
different targets. You can always use Depends directly if necessary. But I
get the sense you're talking about something else.
Yes, it is!
Could you explain a bit more about what you mean? If you have a straw-man
proposal for syntax and semantics of a construct that would allow it in
Boost.Build, that would be very useful as well.
From: john@nanaon-sha.co.jp (John Belmonte)
Subject: Re: Boost Build System prerelease
Date: Thu, 21 Jun 2001 21:44:31 +0900
I'm sorry, I wasn't following the terminology of your design document. By
"target" I meant your build variant, or in other words target platform.
I'd like to do dependencies between variants. How hard would it be to support that?
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Boost Build System prerelease
Date: Thu, 21 Jun 2001 08:51:17 -0400
Oh, I see. If I understand correctly, you're not actually talking about
linking the targets of different variants with one another, but you will
produce an executable with one variant that must be used to build targets in
a different variant. I don't think that would be too hard to do. In fact,
this sounds a bit like something we need for the boost test framework. See
the last sections of TODO.txt for details. I would be very happy to
collaborate with you on this.
From: "David Abrahams" <abrahams@altrabroadband.com>
Date: Thu, 21 Jun 2001 09:23:23 -0400
Subject: Possible error in Jambase MkDir rule?
I copied the following fragment from MkDir for a rule of mine and was
surprised that it didn't work as expected:
if $(NT) {
switch $(s) {
case *: : s = ;
case *:\\ : s = ;
}
}
Is the intention of the 2nd line to match a string ending in
colon-backslash? If so, I think it won't match unless you use a quadruple
(yes!) backslash. At least, that's what my experiments tell me.
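For the record, a version that does match in these experiments might look like
this (assuming the quadruple-backslash observation holds):

```jam
if $(NT) {
    switch $(s) {
        case *:     : s = ;     # strip a bare drive spec like "c:"
        case *:\\\\ : s = ;     # strip "c:\" - note the quadrupled backslash
    }
}
```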
Date: Wed, 20 Jun 2001 11:30:34 +0200
From: "Niklaus Giger" <n.giger@netstal.com>
Subject: Re: Boost Build System prerelease (2nd try!)
I took a quick look at your build system. It sounds very interesting.
I would really appreciate if your clarifications in the section "Internals"
about a few missing pieces in the Jam documentation.
It took me quite a while to figure out, what you describe very clearly.
(And I never had the time to formulate it for my colleagues).
I will discuss it with my co-workers and I think we might try a rewrite of
our adaptions using your boost system as a base.
We are using WindowsNT and cross-compile (GCC) for our PPC403-target (
embedded vxWorks-system). But this will take some time (1-2 weeks).
From: Hamish Macdonald <hamish@tropicnetworks.com>
Date: 21 Jun 2001 15:18:50 -0400
Subject: Interested in "incremental" builds....
I'm interested in enabling our developers to build incrementally upon
a daily loadbuild. The idea is that only files that have changed
since the daily loadbuild need to be rebuilt; only libraries whose
component object files needed to be built need to be re-archived
(archiving in other objects from the daily loadbuild). Only
executables whose component objects or libraries need to be rebuilt
would be re-linked.
I've got a mechanism working that uses GNU make and the VPATH/vpath
mechanism, but I'd really like to use Jam to do my builds.
Unfortunately, Jam doesn't seem to have a mechanism that works enough
like GNU makes vpath to be useful. With VPATH, GNU make will use the
target found via VPATH if it is up-to-date. If it is *not*
up-to-date, it throws away the VPATH-based path for the target and
uses the (relative) path specified in the makefile. I can thus point
VPATH/vpath at my daily loadbuild result, but have anything that is rebuilt
be placed in the local build output directory. Any subsequent
builds will use the local target for dependency checking also.
I thought that I could use:
LOCATE on $(target) = ...
and
SEARCH on $(target) = ... ...
to do this with Jam, but if $(LOCATE) is set for a target, it appears
to ignore the $(SEARCH) variable. Ideally Jam would use $(SEARCH) to
find an already existing file for a target, and use that to decide if
it needs to build it; if it needs to build it, it would then build it
in the $(LOCATE) location.
Has anyone else out there done anything like this with Jam, or have any
suggestions? (I'd like to avoid copying the daily loadbuild results to the
user's build tree since there are 180M of object/library/debug files).
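The setup being attempted looks roughly like this (the paths below are made
up); the problem described is that once LOCATE is set on a target, stock Jam
binds the target there and never consults SEARCH:

```jam
LOADBUILD = /builds/daily ;     # hypothetical daily loadbuild area

# Hoped-for behaviour: find an up-to-date foo.o via SEARCH, but put
# any rebuilt foo.o in the local obj directory via LOCATE.
LOCATE on foo.o = obj ;
SEARCH on foo.o = obj $(LOADBUILD)/obj ;
```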
From: "David Abrahams" <abrahams@altrabroadband.com>
Date: Fri, 22 Jun 2001 14:17:18 -0400
Subject: Jam bug?
The jam docs advertise:
Start-up
Upon start-up, jam imports environment variable settings into jam
variables. Environment variables are split at blanks with each word becoming
an element in the variable's list of values. Environment variables whose
names end in PATH are split at $(SPLITPATH) characters (e.g., ":" for Unix).
On my platform, however (Win2K), $(SPLITPATH) appears to be undefined.
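A quick way to check this at parse time (a sketch) is to echo the variable
alongside a PATH-style variable:

```jam
# Expected per the docs: SPLITPATH is the path-list separator
# (":" on Unix, ";" on NT), and variables ending in PATH are split on it.
Echo SPLITPATH is $(SPLITPATH) ;    # reportedly empty on Win2K
Echo PATH is $(PATH) ;              # each element should be one directory
```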
From: <boga@mac.com>
Date: Mon, 25 Jun 2001 18:33:12 +0200
Subject: Multiple Jam processes [WinNT]
Is there any known problem with running several Jam builds at the same time
on Windows2000? (Not the -j option).
While building one program, I'd like to build a different one.
If I do this, both builds will terminate with very strange problems:
'itConnection' is not recognized as an internal or external command,
operable program or batch file.
Where 'itConnection' is different from build to build.
Has anyone else seen this behaviour?
I have the same problem with the MSVC and CodeWarrior compilers, so either the
build-system or jam uses some file/... globally.
From: "EXT-Goodson, Stephen" <Stephen.Goodson@PSS.Boeing.com>
Subject: Solved: Need help with SubDir
Date: Mon, 25 Jun 2001 14:49:55 -0700
I should have been using the gristed name everywhere, including in the
actions. Apparently the way to do that is to create a second rule
that only has actions associated with it and then call that rule
with the gristed name.
So, I think the following is the minimum needed for a rule to work
with the SubDir command:
rule CatRule {
local t = [ FGristFiles $(<) ] ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
MakeLocate $(t) : $(LOCATE_SOURCE) ;
Depends $(t) : $(>) ;
Clean clean : $(t) ;
CatRuleDo $(t) : $(>) ;
}
actions CatRuleDo {
cat $(>) > $(<)
}
It would be nice if the documentation contained an example like this,
along with an explanation of how/why it works.
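For completeness, a caller then uses plain, ungristed names exactly as in the
original Jamfile:

```jam
SubDir TOPDIR Sub ;
CatRule foo.c : foo.txt ;   # the rule grists foo.c internally
Main foo : foo.c ;
```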
On a related note, I'd like to ask people who have experience replacing an
existing Imake-based build system. Our current system is quite
complicated and I am finding it extremely difficult to understand
and modify, so I am looking for something simpler. I am hoping that
jam will meet our needs, but after the difficulty I've had constructing
the above "hello world" type example, I'm having my doubts.
To get the above rule to work was not exactly straight-forward. Am I
likely to continue to encounter similar problems as I use jam, or having
figured this out, am I over the hump? I'm worried that I'll be creating
a build system that is just as complicated and mysterious to the
next person who comes along as our current system is to me.
I imagine that most jam users started in a similar situation, so I'd
be interested in any experiences or advice that anyone has to share
related to switching a moderately large project over to jam (I have
read the SCM7 paper, and noted that they chose not to use the SubDir
rule at all). I'm especially interested in comments related
to how maintainable and understandable the resulting system is.
Date: Mon, 25 Jun 2001 18:09:12 -0500 (CDT)
Subject: Re: Solved: Need help with SubDir
I have several comments.
I think that grist causes many problems, and if it was automatically
done, that would simplify jam a great deal.
I think that executing all commands from the dir where jam was invoked
also causes problems when doing some sorts of things, and it runs against
"tradition" and the way people expect compilers to be invoked.
The debug output is very good, compared to make. You can figure everything
out, if you can wade thru the zillions of lines of output.
The rule-based approach makes it easy for the end-users, people who just
want to get their stuff compiled etc. and this creates a much more
uniform environment build-wise. I think this is the strongest aspect. Many
machine-dependent details can be hidden in the rules, and the resultant
Jamfiles look quite simple.
When users want to alter or make a new rule, they are mightily frustrated.
My personal experience is that when I get into writing the rules, it starts
going pretty well. My other experience is that it is hard to get back into
writing them. I think extensive comments in rules would help this a lot.
Jam's dependency processing is greatly superior to make, and the fragmented
specification of dependencies that usually goes along with make. I once
converted a small system from jam to make, and I had to discover a lot of
top-level dependencies that jam handled automatically.
We are currently using ant for java compiles, and I am beginning to wish
we had just stayed with jam. I thought that ant actually took care of
figuring out java dependencies! duh.
We use the SubDir rule, and I don't see any problems with using it. Of course,
I have never *not* used it, so my perspective may be poor.
Date: Mon, 25 Jun 2001 17:49:24 -0700 (PDT)
Subject: Re: Solved: Need help with SubDir
Jam has a number of things going for it. For one thing, it's awfully fast.
And, as Dave mentioned, the Jamfiles themselves can be very simple, so
people who deal with it at the level of just adding or deleting a file to
a list from time-to-time don't have to wade through the "guts" just to do
that. Having the rules for most common operations already set up in
Jambase makes it fairly turn-key, so getting a build-process in place can
be pretty quick -- for the first make -> jam conversion I did, I just
wrote a script that generated about 90% of the Jamfiles from the makefiles
('course, I probably couldn't have done that if I hadn't already gone
through and cleaned up all the make stuff several months earlier :)
As with any new language/tool, it does require that you do some reading
and experimenting to get the hang of how it works, but once you do,
writing your own rules for any special needs you have is usually pretty
straightforward as well. (Using the rules in Jambase as examples of how it
works is also a Good Thing, as is running in debug mode so you can see
exactly what all it's doing.)
As for making grist automatic: I wouldn't want it to be something that
always happened, since there are times when you don't want it used.
BTW: I wouldn't recommend Jam for java-based builds -- I converted one
that took 40 minutes using Jam into one that took 4 minutes using Ant.
That's not a knock against Jam -- it's just that handing the Java compiler
the .java files one at a time was slooooow. Dave, does Ant's <depend> task
not do what you need it to?
Date: Mon, 25 Jun 2001 21:10:31 -0500 (CDT)
Subject: Re: Solved: Need help with SubDir
oh, I had the rule set up so that it would compile all the out of date
java files in one whack. You are right about the slowness if you do them
one-by-one. I think this is why for java, people just give up and compile
everything each time. The other thing is that really figuring out the
dependencies is not as simple as c++ include scanning, or at least so
I've been told. ant's depends task only orders the sequence of stuff
to do. my understanding is that if you ask a target to be built, it just
does everything in the dependency tree for that target. some of our
xml files are over 1,000 lines long now, and of course if you make
a change to a standard target, you must edit each of the xml build files...
assuming there *are* any standard targets.
Date: Tue, 26 Jun 2001 10:23:50 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: Solved: Need help with SubDir
My work is to write portable software for embedded systems.
I routinely build with different compilers, on different platforms.
When I started, each project had several Makefiles, one per
toolset. Since this led to important maintenance headaches
(and a clear waste of my time), I finally designed a brand
new "build system" on top of GNU Make.
This was a _horrible_ hack, made of several Makefiles and
sub-Makefiles, that was capable of detecting the current
platform, and support multiple toolsets easily. The price
for this flexibility, using GNU Make's stupid syntax, came in several forms:
- the rules needed to compile even the simplest
project sub-directory were complex and hard
to understand if you didn't know precisely how
the build system works
- the rest of the build system was really hard to
understand properly, and not the easiest thing
to maintain or extend
Sure, I could compile with several compilers and a single
set of Makefiles, but clearly, that wasn't ideal (and it was slow as well)
So I began looking at alternatives, and found Jam, and
never regretted it. Simply put:
- I'm still able to use several compilers from
the same set of control files. Adding support
for a new toolset is also trivial
- Typical Jamfiles are a lot simpler than anything
I could find, and I do find writing new rules
easy. Of course, that's because I've studied the
"Jambase" in detail in order to support new
toolsets and pretty much know how it works.
- the 'Jambase' file is pretty understandable once
you take the time to read it entirely (and slowly :-).
I think that Jam's biggest problem is the documentation,
which isn't clear enough about its inner workings.
It clearly isn't perfect, but once you master Jam, it's possible
to do very interesting things. Just have a look at the recently
released Boost build system for a very remarkable example.
You need to have a good understanding of the Jam inner workings in
order to write new rules. Unless you want really advanced features,
you should not encounter great difficulties to extend Jam.
Consider also that you won't need to write new rules for each
and every new file in your projects.
Compared to IMakefiles, I'm pretty certain that Jamfiles are
an order of magnitude more maintainable, IMHO.
Date: Tue, 26 Jun 2001 10:24:30 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: Boost Build System prerelease
I just finished studying the Boost control files, and I'd like first
to congratulate you for your work. What you've done is properly amazing !!
I'd very much appreciate to be able to collaborate with you on the
Boost build system. The Jamfiles you submitted brought several questions to mind:
- your modifications are rather important, since you've replaced most
of the original Jam build rules with different ones. Do you expect
this work to be incorporated back into Jam itself, or do you
intend to create a fork, in order to create an independent build tool ?
- if you intend to fork the tools, would you be interested in
extensions of the Jam C source files in order to support:
- built-in implementations of "sort", "unique", "intersection", etc..
- additional language logic (e.g. variable rule call, like in
[ $(RULE) $(PARAM1) : $(PARAM2) ], substitution, globbing, etc.. )
- other kinds of enhancements that could simplify the Boost
control files tremendously.
- would you be interested in creating new features ? The one I'm
interested in would be "<ansi>on|off" to toggle ANSI compliant
compilation of C source files.
If you're interested, I'm ready to create a source archive that would
compile the Boost build system into a single executable file for easier
distribution. Simply let me know if you're interested..
I'll try to contribute toolset support in the next few days. I should
be able to provide control files for Intel C, Watcom, LCC and a few other things..
From: "David Abrahams" <abrahams@altrabroadband.com>
Subject: Re: Boost Build System prerelease
Date: Tue, 26 Jun 2001 09:01:43 -0400
Well, I planned to gauge reactions and make a determination about how to
approach things. I would love to compile those rules into Jam directly, but
I haven't got any reason to expect that Mr. Seiwald would incorporate my
work back into Jam, so I figured it was safest to assume I was going to be
stuck with -f allyourbase.jam for the foreseeable future. The boost community
has rejected the idea of a completely independent fork (and I agree), so I
am trying to stay compatible with a stock Jam executable (though some of our
users certainly need your Win95/98 work). I think something that was likely
to be merged back into the Jam release (i.e. in the right part of the public
depot, and somewhat blessed by the Jam world) would be acceptable, though.
You may have more insight into the best approach here than I do, however,
since you've been part of the Jam community longer and have made valuable
source code modifications. Suggestions?
I think these are low-priority. Informal tests show that Jam spends much
more time checking file dates than it does in any of these processing rules.
The worst thing they do is to obscure debugging (-d+5) output. My tests
could be wrong, of course.
Yes, the first one would be a huge help. There's one place in particular
that needs it. I have spoken to Mr. Seiwald about that and he seemed open to
the idea. What is globbing?
Simplicity is good. Ultimately I would love to have a Python interpreter
under the covers, but the boost community is not ready to accept Python as a
requirement for builds.
Absolutely. The supported feature set is just a proof-of-concept.
I am very interested. A little guidance with the Perforce public depot would
also go a long way. There are so many things to do, and so little time to
learn new things...
Date: Tue, 26 Jun 2001 17:08:15 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: Boost Build System prerelease
First, I'd like to remark that you can use my "improved" version of Jam today
to run Boost on Windows 9x, since it is completely backwards compatible with
Jam 2.3.1 :-) The changes have been submitted a long time ago to the Perforce
team, and I'd love to see them rolled back into the main Jam source tree.
On the other hand, it seems to me that Jam and Boost differ enormously in
their designs, even if they share a common command language (and interpreter);
users of both systems need to think in vastly different ways to build their
projects with either one of them, since even the location of object files
is different between them.
Jam is a marvelous piece of code, and many companies have already made a
rather significant investment in it. For example, did you know that the
Apple MacOSX App.Builder IDE uses Jam as its internal build tool ?
This investment is also why I'd be really surprised to see the changes
required for Boost integrated into the Jam source tree any time soon, since
this would risk breaking too many existing installations/builds.
On one hand, I don't think that creating a fork is going to hurt any
Jam users. On the other hand, it will certainly simplify the use of
Boost, as well as its development.
Agreed, though they could be used by other project-specific Jam/Boostfiles
I'll have a look and see if I can't implement the first one easily. I'm
pretty comfortable with the Jam sources, and I'm pretty certain that this
should be trivial to do..
Well, the name isn't correct. It's basically the equivalent of the
"wildcard" function in GNU Make, which is capable of returning a list
of files or directories in a variable. It's generally something useful
to scan sub-directories automatically, instead of having to patch a
fixed list in a control file. Here are examples:
include [ wildcard $(BOOST_INSTALL_PATH)/*-tools.jam ] ;
include [ wildcard src/*/Jamfile ] ;
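[Editorial note: for what it's worth, stock Jam already ships a built-in GLOB rule with roughly these semantics. A minimal sketch, assuming GLOB behaves as in the Jam versions I've seen (directory list, then patterns, returning matching files with the directory prefix attached):]

```jam
# Include every *-tools.jam file found in the install directory.
# GLOB is a Jam built-in; the for loop avoids relying on "include"
# accepting a whole list at once.
local f ;
for f in [ GLOB $(BOOST_INSTALL_PATH) : *-tools.jam ]
{
    include $(f) ;
}
```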
Well, that's probably what the guys at the Software Carpentry project
want to do. Unfortunately, it seems they're taking a long time to
develop it (fortunately, the end result should be brilliant when completed !!) :-)
In the meantime, we'll need to design our own tools..
Agreed, but I consider that the time spent improving software tools is
nothing compared to the savings this allows in the future !!
Date: Tue, 26 Jun 2001 17:40:41 +0200
From: David Turner <david.turner@freetype.org>
Subject: new release of "FT Jam"
I'd like to inform you that I've just released a new version
of "ftjam", the improved version of Jam that runs under
Windows 9x (originally written for FreeType, but fully
backwards compatible with Jam 2.3.1).
This new release fixes a few annoying things in the original
Makefile/Jamfile (they assumed that the current directory was
in the path, which rarely happens on secure Unix systems).
This new release also supports indirect rule invocation, as in
[ $(RULE) params ]
which calls the rule whose name is given by the expansion
of the RULE variable. This should be of great value to the
Boost community and to other Jam hackers..
You can download it directly (in source form, as well as a
Win32 binary), at the following address:
http://sourceforge.net/project/showfiles.php?group_id=3157&release_id=41130
Alternatively, you can grab the sources from the Perforce Public
depot. Have a look in //guests/david_turner/jam/src
Date: Tue, 26 Jun 2001 17:56:57 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: new release of "FT Jam"
Mmmm, it seems that SourceForge takes some time before updating the file
list for a new release. For the impatient, try going to:
ftp://ftp.freetype.org/pub/tools/jam
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: new release of "FT Jam"
Date: Tue, 26 Jun 2001 12:41:44 -0400
Fabulous! Does it also work without the square brackets if you don't need a
return value?
$(RULE) params ;
Another moderately-high priority for me, and one I've just discovered, is to
open up the MAXLINE constant for Windows NT. I am not interested in
supporting NT3.5 (sorry!) and it is fairly easy to come up with a link
command line that exceeds the 996 characters allocated by the default Jam on
NT. If you don't feel comfortable about folding that change into FTJam, I
would be "forced" to make the modification myself, resulting in (IMO) a very
silly code fork.
From: "David Abrahams" <abrahams@altrabroadband.com>
Subject: Re: Boost Build System prerelease
Date: Tue, 26 Jun 2001 13:13:04 -0400
Yes, I've been referring Win9x users to your version, thanks.
Well, to be fair, the Boost design is based almost entirely on the Jam
design. The things I learned by studying the Jambase code were essential to
getting the boost stuff working.
Really? Maybe I'm just naive. Could you elaborate?
I don't think the file locations should be that much of a leap; almost
anyone who's built debug and release versions of a project or built with
multiple compilers has had to use a scheme something like this.
The subvariant path branching structure is a little unorthodox: I've had
requests from boosters for a flat build directory scheme (e.g. directories
like pc-linux-gnu-release), but I don't know how to handle platforms with
filename length restrictions (MacOS) or how to come up with a simple
translation between properties and directory name components. I think the
best solution would be to provide a user- (Jamrules-) overridable hook
function for translating paths.
I think I was making something like this argument in
All the same, I have never worked on a major project that could afford to
use a build system structured the way the Jambase rules are, with no simple
way to change compilers, build a different variant of the project, or ensure
link compatibility between separately-compiled objects. It is hard for me to
understand how a cross-platform project, e.g. Perforce, can use Jam
effectively without these facilities. Not that I consider any of this to be
a major failing of Jam, mind you: it provides nearly all of the necessary
infrastructure to do the job, with most pieces incredibly well-thought-out.
Perhaps that makes the most sense.
Sure; but they can be used that way now. What am I missing?
BTW, one of the hardest things to get right was the split-path rule. It
generates reams of debugging output and might be better as an intrinsic
rule. Also, it's unable to remove the trailing slash from the top path
component, so:
[ split-path a:\b\c\d ] = a:\b c d
You might also notice that the boost build system hijacks unix-style
pathnames to do various things like specify multiple default build
subvariants and library dependencies:
<lib>../foo/bar/my_lib # build and link in this library
<runtime-link>static/dynamic # build both versions
In order to work properly cross-platform, this will require extending the
intrinsic path parsing code for platforms other than Windows and Unix. A
change to uniform unix-style path parsing would be fine, so long as it works for /all/
supported platforms. Do you have any idea how VMS pathnames work <0.2wink>?
Oh, of course: you meant /globbing/ ;-). I'm not sure I have a use for it,
actually. Did you see a way it could help?
I've corresponded with Steven Knight about this. It's hard for me to tell
whether he actually has a solid idea of the necessary foundation for such a
system. I think it might be fairly easy to slip calls to Python into the Jam
source and come up with an excellent system, but as I've said, that wouldn't
serve my users' needs.
I think my point was just that having a "partner in crime" makes it easier
to get over some of the little bumps in the road that are less about development
than technology rasslin'.
Date: Tue, 26 Jun 2001 14:40:38 -0400
From: Beman Dawes <bdawes@acm.org>
Subject: Re: Boost Build System prerelease
>From: "David Abrahams" <abrahams@altrabroadband.com>
>I think my point was just that having a "partner in crime" makes it easier
>to get over some of the little bumps in the road that are less about development
>than technology rasslin'.
I'm one of the Boost people who hasn't wanted to see a fork in Jam.
I was assuming that such a fork would be of interest to Boost members only,
and so we would have to maintain a piece of software only tangentially
related to our C++ library goals.
But what seems to be happening in real life is that Dave Abrahams'
innovative work on the Boost build system is of interest way beyond
www.boost.org. Developers unrelated to Boost like David Turner seem
interested enough that they might be willing to climb on board a fork.
So while I still hope that the Perforce folks can find a way to be
responsive, if the two Daves decide to fork, then they will have my
support, and I expect will get the support of a lot of other Boost members.
Date: Tue, 26 Jun 2001 20:36:17 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: new release of "FT Jam"
No, because this would involve non-trivial changes to the Jam
parser. For now, you'll need to assign a dummy variable with the
call as in:
_ignore = [ $(RULE) params ] ;
hideous, but it works..
Well, that should be trivial to change too. However, I'd appreciate if
you could make a small wish list for the next changes. I'd like to
avoid making several releases a day ;-)
Date: Tue, 26 Jun 2001 23:44:46 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: new release of "FT Jam"
Actually, a better solution would probably be to define
a new rule like the following:
##########################################################
#
# invoke VARIABLE : params1 : params2 : params3 : ....
#
# a special rule used to invoke rules indirectly.
# $(<) must be a variable name and will be expanded
# to determine which rule to call
#
rule invoke  # VARIABLE : params1 : params2 : ....
{
local _ignore ;
_ignore = [ $($(1)) $(2) : $(3) : $(4) : $(5) : $(6) : $(7) : $(8) : $(9) ] ;
}
and a simple example would be:
RULE = "ECHO" ;
invoke RULE : "hello world" ;
Subject: RE: new release of "FT Jam"
Date: Tue, 26 Jun 2001 17:18:50 -0700
From: "Chris Antos" <chrisant@windows.microsoft.com>
| Another moderately-high priority for me, and one
| I've just discovered, is to open up the MAXLINE
| constant for Windows NT. I am not interested in
| supporting NT3.5 (sorry!) and it is fairly easy
| to come up with a link command line that exceeds
| the 996 characters allocated by the default Jam
| on NT. If you don't feel comfortable about folding
| that change into FTJam, I would be "forced" to make
| the modification myself, resulting in (IMO) a very
| silly code fork.
I had two problems with the 996 limit -- (1) the limit was frequently
hit when a rule has a bug, but I couldn't see what the bug was, because
it only told me "too big"; (2) some actions are multiple lines long, and
get run as batch files, therefore it may be that the actions are say 4
lines where line 1 is 700 chars long, line 2 is 20, line 3 is 300, line
4 is 20. But the batch file would have run fine. In particular, this
causes problems for my C++/Sbr and Pch/Sbr rules, which have to call
"touch" on the long target filenames after they're produced, to force
them to have consistent timestamps.
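[Editorial note: not from Chris's actual rules, but the timestamp step he describes might be sketched like this. The TOUCH variable and action name are illustrative, and a touch utility is assumed to be on the PATH:]

```jam
TOUCH ?= touch ;  # assumed external utility

# "together" batches all listed targets into a single invocation,
# so the outputs receive near-identical timestamps after the build.
actions together TouchOutputs
{
    $(TOUCH) $(<)
}
```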
However, Jam does need some concept of a maximum line length, so it can
optimize the PIECEMEAL actions. Btw, according to comments in the Jam
source, NT isn't able to execute command lines longer than 996
characters (I don't know if that's really true, I've never tried). So
just increasing MAXLINE may not solve your problem. Perhaps in your
case it would be better to rewrite various actions to create/update an
input file, so the Link actions can simply refer to the input file.
Anyway, here's what I did for a quick improvement to address my two
issues described above:
In jam.h, added this after the #ifndef MAXLINE around line 421:
# ifndef MAXCMD
# define MAXCMD 10240 /* longest command string */
# endif
In command.h, tweaked the "struct _cmd" so:
char buf[ MAXCMD ]; /* actual commands */
In command.h and command.c, tweaked prototype for "cmd_new" so:
LIST *shell, /* $(SHELL) (freed) */
int outsize ); /* max number of chars */
In command.c, tweaked the code for "cmd_new" so:
if( var_string( rule->actions, cmd->buf, outsize, &cmd->args ) < 0 )
In make1.c, inserted this line immediately at the top of the "do..while"
loop in "make1cmds":
int outsize = ( rule->flags & RULE_PIECEMEAL ) ? MAXLINE : MAXCMD;
In make1.c, added an extra parameter to the end of the "cmd_new" call:
list_copy( L0, shell ),
outsize );
In make1.c, parameterized the "actions too long (max ###)" message:
printf( "%s actions too long (max %d)!\n",
rule->name, outsize );
End result -- Jam is still able to tailor PIECEMEAL actions to each OS's
line-length limit, while other actions can grow to MAXCMD before being
reported as too long. Hope this helps!
Date: Tue, 26 Jun 2001 22:26:16 -0500 (CDT)
Subject: Re: new release of "FT Jam"
Our experience is that NT has a rather limited command line length, so
I don't think changing maxline is going to get you very far.
We use response files for long lists of .objs and similar things.
if you concat the macro containing the files with a macro containing
a newline, then you can get it to produce a very long sequence of
echo stmts to create the response file. I can dig up the specifics
if you are interested.
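[Editorial note: the exact newline-macro incantation isn't shown in the message; a simpler sketch with a similar effect uses a piecemeal action to build the response file in safely sized chunks. Rule and action names here are illustrative, not from the poster's actual setup:]

```jam
# "piecemeal" splits $(>) into chunks that fit the command-line
# limit, re-running the action until every source is appended.
actions piecemeal AppendToRsp
{
    echo $(>) >> $(<)
}

# Link using the response file instead of a long command line.
actions LinkFromRsp
{
    $(LINK) /out:$(<) @$(<:S=.rsp)
}
```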
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: new release of "FT Jam"
Date: Wed, 27 Jun 2001 07:57:20 -0400
The Jam documentation claims NT 4.0 and up have a maximum length of
something near 10K characters. Are you saying that's incorrect, or does 10K
correspond to your idea of "rather limited" length?
Well, at this point, I don't know; since we do not generate long lines anymore
(except on Unix) I don't have any current knowledge. The error msg seemed to indicate
that DOS didn't like the long lines, but perhaps it was just an error msg from
jam. Try it and see. I have had to increase the macro string size, though.
Maybe we were using a slightly different DOS...
I don't think that 10k is that small.
From: "Prabhune, Abhijeet" <APrabhun@ciena.com>
Date: Wed, 27 Jun 2001 13:03:45 -0400
Subject: Queries regarding jam usage.
Questions:
1) What's the utility of JAMSHELL? Can it be used to start another command
shell and call another executable from that shell, e.g. a batch file invoked
from within the new shell, or will it just start a new shell and execute jam
in its context?
2) The second question is also related to batch files: is it possible to invoke
batch files from within a Jamfile? If yes, how?
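[Editorial note: for context on JAMSHELL — as I understand the Jam documentation, it controls how Jam spawns each action's commands, with % replaced by the command text. A hedged sketch of possible settings, with the batch file name purely illustrative:]

```jam
# On Unix, the default behaviour is roughly equivalent to:
JAMSHELL = /bin/sh -c % ;

# On NT, routing actions through the command shell also lets
# them use shell built-ins and run .bat files:
JAMSHELL = cmd /q /c % ;

# A batch file can then be invoked from an action like any
# other command (mybatch.bat is a made-up example):
actions RunBatch
{
    call mybatch.bat $(<)
}
```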
From: Paul Moore <gustav@morpheus.demon.co.uk>
Date: Wed, 27 Jun 2001 23:15:03 +0100
Subject: Re: new release of "FT Jam"
I tested once. Win32's raw CreateProcess API managed to handle a command line of
3M (yes, Megabytes) or so, IIRC. But if you go through the shell, you are
limited by that. The Win 9x shells (COMMAND.COM) tend to have silly limits like
128 bytes. I'm not sure about CMD.EXE, but experimentation puts it at 2046 bytes. 4NT
(a CMD.EXE replacement) takes 2047, but crashed on anything over 2045 for me.
So the long & short of it seems to be, on NT you should limit lines to something
on the order of 2000 characters if you are going via a shell, but if you invoke
CreateProcess directly, you can have arbitrarily long lines.
[I'd be tempted to suggest that the normal behaviour should be limit to 2000
characters and use the shell, but have a special command form which goes direct
via CreateProcess (and so doesn't support things like redirection, shell
internal commands, etc) and allows an unlimited line length, for specialist
cases.]
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Re: new release of "FT Jam"
Date: Wed, 27 Jun 2001 20:31:50 -0400
So, Jam writes a .bat file which it invokes. How does that square with what
you're suggesting here?
From: Paul Moore <gustav@morpheus.demon.co.uk>
Subject: Re: Re: new release of "FT Jam"
Date: Thu, 28 Jun 2001 20:46:30 +0100
The limits for .bat files are those of the shell, so you'd have to limit
individual lines to 2000 characters (obviously, the whole file can be as long as you
like) on NT. Dunno for 9x...
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Re: new release of "FT Jam"
Date: Thu, 28 Jun 2001 17:54:50 -0400
Can you escape lines with backslashes, or does that amount to "cheating"?
Date: Fri, 29 Jun 2001 10:48:52 +0200
From: David Turner <david.turner@freetype.org>
Subject: RFC: On the future of Jam, "FT Jam" and Boost
I'd like to collect opinion from a large pool of Jam users
regarding potential and upcoming enhancements to Jam. I'm
sorry if the following is a bit long, but I've tried to
make it as clear and accurate as possible.
To summarize, I propose the following:
- to rename "FT Jam" to something a bit more pleasant
- to create a SourceForge project for it, and use it for:
* distribution of source and binary packages
* providing a mailing list related to the
developments / improvements made to the
new "FT Jam" ( using the current list for
normal Jam / Jamfile usage questions )
* providing a CVS repository for the improved
sources. This seems necessary for a lot of
people who would like to contribute but do
not master Perforce, nor want to take the
time to install and learn it on their systems.
- the project would contain two "modules":
* the first one being the enhanced Jam sources themselves
* the second one being the "boost" build
system; it will depend on the first one.
It's important to understand that all improvements integrated
into "FT Jam" should be *completely* backwards compatible with
the official Jam sources, in order to avoid breaking existing
Jamfiles. As was said previously, some companies have made
a tremendous investment in Jam.
On the other hand, the boost build system will use some of
the C sources provided by the enhanced Jam module, but
also provide its own set of control files (i.e. the equivalent
of "Jambase") as well as other C source files of its own.
This should allow the creation of a single-executable "boost.exe".
The goal of all of this is to be able to experiment nearly
freely with new "features" in the "boost" module, while
slowly moving the tested/validated ones into the "enhanced
jam" one. The Boost control files will never migrate to
this module though..
I welcome any comments. More importantly, I welcome any
suggestions for the new "enhanced jam" module. Please, please,
do not suggest cryptic acronyms :-)
I. Jam and FT Jam:
I have made some critical improvements to the base Jam sources, which are
distributed under the name "FT Jam". You'll be able to
find more information about it at the following address:
http://www.freetype.org/jam/index.html
I'd like to stress that the improvements
present in this version of Jam are *completely* backwards
compatible with the Jam 2.3.2 sources, and have been
submitted to the Jam maintainer. They can be classified
into two classes of improvements:
- modifications to the C Jam sources, in order to
run correctly on Win 9x systems, as well as implement
new built-in rules (namely HDRMACRO and SUBST), etc..
- modifications to the Jambase itself, in order to
support more toolsets on Windows and OS/2 systems.
In all cases, it's important to note that _existing_ Jamfiles
should work without a single change with "FT Jam", and I'm
committed to always meeting this requirement in any further improvements
that could be made in the future. If you find something in
FT Jam that seems to "break" one of your builds that otherwise
works perfectly with the official release of Jam, please
contact me to get this fixed !!
As for the name, I admit "FT Jam" isn't ideal, but I couldn't
find a better one for now. I welcome any suggestion for
something more appealing (possibly avoiding strange
acronyms, _please_, I like being able to spell my tools'
names in my basic English :-)
II. Boost:
Recently, David Abrahams announced on this list the release
of a new build system named 'Boost.Build', which I'll call
'boost' in the rest of this document.
Boost is a set of control files that override the original
Jambase and provide a different set of rules to developers
when they write "Boostfiles" (instead of Jamfiles).
Boost manages advanced concepts that are completely alien
to the original Jam/Jambase, like build variants,
features/properties, requirements, etc.. and makes a
professional developer's life a lot easier.
The differences are so great that from a user point of
view, Boost and Jam can even be thought of as radically
different designs.
III. Jam limitations (wrt Boost):
Using Boost is currently a bit awkward for at least two
reasons:
- to use it, you need to copy several boost control
files to your project's top directory. Since boost
is still in rather heavy development, you need to
update these files continuously if you use them
- you need to invoke Jam (or FT Jam) with the "-f"
flag in order to not use the default Jambase
Meanwhile, Boost is currently limited by some drawbacks
of the original Jam design, and would benefit greatly from
a few improvements made to the Jam C sources themselves.
I have myself released a new version of FT-Jam recently
that addressed one of these issues (while still maintaining
compatibility, I insist !! :-)
Because of these inconveniences, a recent proposal was
made to fork the Jam sources in order to enhance Boost
capabilities, while being able to build a single "boost"
executable that would be, indeed, much easier to use than
the current scheme.
The problem with this approach is that improvements to
one branch (e.g. Jam/FT Jam) would not benefit the
other one (respectively, the Boost version of the Jam
sources).
IV. Forking isn't necessary:
After some thought, it seems however that we do not
need to make a decision as drastic as forking the
Jam source tree entirely.
Since Boost is really a set of control "Jamfiles", the
original Jam (or FT Jam) sources can be used _directly_,
to build a single "boost.exe" that would incorporate
all "Boostfiles". To explain this, I'll detail the way
the Jam sources are currently organized:
- a first set of C source files is used to create
a library, called "libjam", that provides the
base Jam functionality (i.e. control file parsing
and execution).
- a single control file, named "Jambase", contains default
rule definitions for Jam, including "Cc", "Link",
"Library", as well as various compiler-specific
variable definitions and actions
- the "mkjambase.c" program, used to convert a text
file into an embeddable C source. It is currently
used to convert "Jambase" into "jambase.c"
- a front-end program named "jam.c" which is statically
linked with "jambase.c" and "libjam", used to generate
the single executable known as "jam.exe" or "jam"
This scheme allows us to design Boost as follows:
- a set of control files, like "all-your-base.jam",
"features.jam", etc.. that can be processed through
"mkjambase" in order to convert them to C source files
- a front-end program, named "boost.c", which is
statically linked with "all-your-base.c", etc.. and
"libjam". It would be used to generate a single
executable named "boost.exe" or "boost".
- optionally, some other C source files used to augment
"libjam" with new features (e.g. new rules).
These two designs are not incompatible and allow boost to
benefit from all improvements made to the Jam sources.
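[Editorial note: the scheme above might be expressed in a Jamfile roughly like this. GenJambase is a made-up rule wrapping mkjambase, and libjam is assumed to be declared elsewhere with the Library rule:]

```jam
# Turn each Boost control file into an embeddable C source...
GenJambase all-your-base.c : all-your-base.jam ;
GenJambase features.c : features.jam ;

# ...then link the front-end and generated sources with libjam
# into a single "boost" executable.
Main boost : boost.c all-your-base.c features.c ;
LinkLibraries boost : libjam ;
```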
V. Source Code Location :
The FT Jam sources are currently available through the
public Perforce Depot, (see //guests/david_turner/jam/src).
Though I submitted my changes more than a month ago,
the Perforce Jam team hasn't found the time to review them
to incorporate them into the official Jam sources (or simply
reject them).
I do not blame them for this, since they most probably have
different priorities to deal with. However, as time passes,
two things are happening:
- I'm likely to add more and more enhancements
to FT Jam, which only widens the gap between it and the
official Jam sources (NB: while still preserving
backwards compatibility). This will make the review
and integration/rejection of FT Jam enhancements
simply harder for the Jam team when they start
doing it.
- other people are starting to contribute changes, or
willing to do so. Many of them are only familiar with
CVS, and do not want to install or take the time to
learn a new tool. I admit that I'm not really comfortable
with Perforce myself, though I've tried rather hard :-(
I thus propose to create a new CVS repository on a SourceForge
project page to handle both the "FT Jam" and "Boost" sources.
Using SourceForge has several benefits:
- an easier management of access rights for different
writers on the CVS repository than what can be done
with the guests branch of the public Perforce depot
(it seems).
- the ability for _many_ developers to easily download
the latest sources or stable releases through CVS,
submit fixes, view revision history, etc..
- the ability to browse the CVS sources from the web
- a dedicated web page/address, plus download locations
and information through HTTP/FTP.
The current public depot sources will be kept as is, and
updated periodically in order to submit only widely tested
and stable enhancements, just in case the Jam team finds
the time to review them..
I know that Perforce is a lot better than CVS in many
respects, especially for complex projects with lots of
developers. However, I believe that using CVS for something
as simple as the Jam+Boost sources should not hurt us.
From: "Robert Cowham" <robert@vaccaperna.co.uk>
Subject: RE: RFC: On the future of Jam, "FT Jam" and Boost
Date: Fri, 29 Jun 2001 11:57:58 +0100
Regarding the CVS thing - it should be possible to maintain a CVS
repository in parallel with the Perforce depot using the revml tools.
These are available from CPAN (currently in beta), and the mailing list is
revml at http://maillist.perforce.com/mailman/listinfo
Date: Fri, 29 Jun 2001 13:08:01 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Well, I'd like to seriously counter that !!
I know quite a few Unix people who would _really_ love to get
away from the _atrocities_ of the damned GNU build tools when a
sufficiently mature alternative is available.
I also know some Windows developers who would like to develop
for Unix, but are less than thrilled at the idea of coping with
the "gang of four" (i.e. AutoMake+AutoConf+LibTool+Make) [1]
I believe that Jam has a _big_ potential, but is still rather
limited currently (e.g. its inability to build DLLs or programs
that use them correctly). Fortunately, it's sufficiently flexible
to allow the addition of custom rules to overcome some of its
shortcomings, and for most developers, it's a real godsend.
It has, at least, drastically simplified the build and testing
process of a couple cross-platform projects.
Boost is a drastic improvement over the original Jam design
and promises to bring industrial-strength builds with a very
simple system.
In short, the benefits of using Jam and Boost are tremendous,
even if they still require some learning.
(Yes, I'm passionate about this.. but I used to work in a
large company that used its own complex build system based
on make and a bunch of Perl scripts, and believe me, that
was really tough..)
By contrast, I believe that the benefits of switching from
CVS to Perforce, while being real and proven, are of lesser
importance to the casual developer.
That's why I think that once Jam and/or Boost mature enough,
you'll see developers from all over the planet literally switch
to Jam in droves, and ditch the Makes of the world :-)
Well, it's just opinion anyway ;-)
[1] And yes, I realise that AutoConf and LibTool will still be
needed with Jam/Boost on Unix systems..
Date: Fri, 29 Jun 2001 14:29:22 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Exactly, but this kind of branching is no problem for CVS either.
What concerns me is relocating files in one branch, and not the
other, while still being able to merge them correctly..
(i.e. FTJAM/libjam/file.c would be "merged" with JAM/file.c when needed)
Does Perforce support such a thing ? If it doesn't, we'll need
another depot or repository (independent of the tool used).
Another solution is to make all file relocations before the
branch is created in the official Jam sources, and I'm
pretty certain it's not going to happen too soon, and for
very good reasons :-)
Actually, I don't think we need to keep track of revision numbers
and comments between the Jam and FTJam depot/repositories which is,
I believe, the most complex part of this process..
I had intended to do something along these lines whenever a stable
release of FT Jam comes out:
- copy files from libjam/ and jam/ to my client
view of //guests/david_turner/jam/src
- integrate them, using no comments (or minimal ones)
Now, the Perforce team would be free to integrate these
changes back to the official Jam sources. I intend to
maintain a ChangeLog in order to ease this task..
What do you think about it ?
Date: Fri, 29 Jun 2001 15:42:32 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
If you do that, Perforce falls back to CVS standards, which makes
Perforce users complain a lot.
That's called integration in Perforce-speak.
People branch "all files in .../mumble/..." into another directory, which is
much easier on the brain. But Perforce can branch any file into any other file.
Keeping track of partly integrated changes is the hard part.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Date: Fri, 29 Jun 2001 09:49:49 -0400
If the boost build system finds wide use, it might make sense to use the
list you mention for that as well.
I think it might be unwise to start new projects at SourceForge, given the following:
http://iwsun4.infoworld.com/articles/hn/xml/01/06/27/010627hnvalinux.xml?062
We at www.boost.org are currently investigating alternatives.
Agreed.
FWIW, I like "ftjam" and wouldn't waste time finding another name. I tried
to do the same with my python/c++ binding library for boost, but just ended
up with "the boost python library" (Boost.Python).
This seems to happen every time something of interest outside the hardcore
C++ community appears at boost. People often refer to the boost python
library as simply "boost". I don't mind you calling Boost.Build just "boost"
as a kind of shorthand, as long as it's clear that boost has a very
different identity: www.boost.org is an open-source peer-reviewed C++
library development group.
Except they're still called "Jamfile".
Giving credit where it's due, the design of Boost.Build draws HEAVILY on the
design of Jam.
Just to clarify, these files don't have to live in your project's top
directory. For example, the boost Jamrules file currently contains:
BOOST_BUILD_INSTALLATION ?= $(TOP)$(SLASH)build ;
BOOST_BASE_DIR ?= $(BOOST_BUILD_INSTALLATION) ;
Which places these files in a subdirectory called "build". You could also
specify absolute paths which get them from some other location (e.g. a server).
That's quite inconvenient, and something I'd like to address.
Hmm; that's not what I envisioned.
1. allyourbase.jam is really a modified Jambase. For a while I tried to
ensure that it would be strictly compatible with the original Jambase, but
eventually gave up. Still, as long as users' Jamfiles stay away from using
variables with certain naming conventions (I'm thinking of names like
"gALL_UPPER_CASE") I think it should be possible to roll the changes back
into the Jambase from FTJam without breaking any code. There are two issues:
a. The original Jambase would cause an error if you didn't set variables
describing your single toolset. That behavior is inappropriate for
potentially multi-compiler builds.
b. The original Jambase rules are underspecified and there are no unit
tests for them, so it's hard to ensure that you've preserved the intended
behavior. We could deal with this by writing improved specification and unit
tests for the Jambase rules we modify.
2. I /like/ the fact that features.jam and the toolset definitions are not
compiled into the Jam executable. It keeps the build system configurable and
customizable. It should be possible to add features, toolsets, and variants
without recompiling the executable.
Back to the naming issue: it's a small thing, but I wouldn't like to
distribute an executable called "boost" unless the www.boost.org
participants agreed that it was appropriate.
All that said, I like the general direction you're going in.
<snip>
I'd like to suggest that you consider hosting FT Jam where boost ends up
being hosted, since we are currently exploring another CVS host with better
long-term prospects. The other advantage to this is that we anticipate
having the ability to perform server-side maintenance jobs, such as moving
files in the repository, etc., for which you currently have to petition (the
almost certainly understaffed) SourceForge.
Date: Fri, 29 Jun 2001 09:30:45 -0700
From: Jos Backus <josb@cncdsl.com>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Hear, hear. Maybe when Jam was designed speed was an issue, but it seems to be
a much cleaner design to keep the interpreter (jam) and the script
(Jam{base,file}, etc.) separate. This has the added advantage of avoiding
binary rebuilds when the script changes, which can be a pain when you have to
maintain versions of jam for multiple platforms.
Date: Fri, 29 Jun 2001 13:29:35 -0400
From: Beman Dawes <bdawes@acm.org>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
That will cause endless trouble. Boost is a repository of free C++
libraries, not a build system. Please use Boost.Build or some other name
so you don't dilute all the work Boost developers have done to make the
Boost name synonymous with high quality C++ libraries.
I'd also like to second the comments of Dave Abrahams and Joe Backus to the
effect that the Boost.Build rules shouldn't be compiled into the Jam binary.
For a site like www.boost.org which updates our libraries regularly, we
don't want to require users to download a Jam executable every time they
download the Boost libraries. The Jam binary should stay very stable, IMO.
Date: Fri, 29 Jun 2001 20:08:21 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Humm.. there seems to be some misunderstanding. The reason for
embedding control files within the executable is to simplify
distribution by including the _defaults_ in a single file. This
doesn't mean that using other scripts should be restricted.
I have myself done quite some hacking on the Jambase in order
to support a few more toolsets, and I value the "-f" flag in
Jam very dearly :-)
I'll try to give more info on this next week (it's week-end
time here in France :o)
PS: And I agree that a different name is required for the
build system. I feel that "Boost.Build" is likely to
be abbreviated by most users as simply 'boost', so I'd
rather favor a drastic name change.
"Marmalade" has been suggested, it seems nice :-)
Date: Fri, 29 Jun 2001 11:23:40 -0700
From: Jos Backus <josb@cncdsl.com>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Yes, I am aware of this distinction. I guess my position is that jam should
not have any built-in default behaviour, it should merely be an interpreter
for the Jam language. Just like, say, BSD make, which doesn't have any rules
compiled into the binary; they live in /usr/share/mk/*.mk.
How about "Jelly" or "GLU" :)
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Date: Fri, 29 Jun 2001 15:45:33 -0400
I think people would like some simple way to configure the executable to use
a different "base" file without the need for "-f".
I suppose it's easy enough to fake that with the appropriate shell script,
but it might make sense to give people a compiled-in mechanism, like the use
of a .jamrc file.
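Pending such a compiled-in mechanism, faking it with a shell script is easy enough. Here is a minimal sketch of the base-file selection such a wrapper might do; the JAMBASE variable, the ~/.jamrc convention, and the stock path are all assumptions for illustration, not existing jam behavior:

```shell
# pick_jambase: choose the base file a wrapper would hand to "jam -f".
# Search order (all assumed): explicit JAMBASE override, then a
# per-user ~/.jamrc, then a stock system path.
pick_jambase() {
    base="${JAMBASE:-/usr/local/share/jam/Jambase}"
    if [ -f "$HOME/.jamrc" ]; then
        base="$HOME/.jamrc"
    fi
    printf '%s\n' "$base"
}

# A real wrapper would then run: exec jam -f "$(pick_jambase)" "$@"
```
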
It's cute (and above all, very French), but I have these concerns:
1. It's a long name to type. Anything longer than "make" will deter adoption
2. I think I'd like to keep the boost identity attached to the architecture
somehow.
From: Paul Moore <gustav@morpheus.demon.co.uk>
Subject: Re: Re: new release of "FT Jam"
Date: Sat, 30 Jun 2001 18:00:02 +0100
Yes. Not with backslashes - you use ^ instead. So
dir ^
/?
works - both on CMD.EXE and 4NT.EXE. This is *definitely* not Win 9x compatible, though.
Date: Sun, 01 Jul 2001 08:36:57 +0200 (CEST)
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
From: Werner LEMBERG <wl@gnu.org>
`jelly' is even better, as someone else suggested. So I withdraw my
suggestion in favour of this.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Date: Sun, 1 Jul 2001 08:54:20 -0400
The low-level build system is Jam -> FTJam -> whatever you want to call it.
The high-level part written in the (FT)Jam language is what I've been
calling the Boost Build System. I would like to maintain that association
with boost.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Date: Mon, 2 Jul 2001 11:15:40 +0100
Except that it's got nothing to do with jam outside North America. :-)
Personally, I quite liked marmalade. As someone mentioned, it's a lot to
type. How about 'curd', as in Lemon?
Date: Mon, 02 Jul 2001 15:41:01 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Yes, but that scheme is not going to translate well on Windows, OS/2,
and a few other platforms supported by Jam, where hard-coded paths
aren't exactly the default..
I think that both approaches have their merits. After all, we can
easily choose to use the "/usr/share/jam/..." thing on Unix-style
systems, and the single-exe one on other ones.
As long as the tool is easily extensible by users, whether through
environment variables, .jamrc files, command-line flags, etc.,
it really doesn't make much of a difference where the defaults are stored :-)
Date: Mon, 02 Jul 2001 15:55:20 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Yep. Either a ".jamrc" file or an environment variable should do
the trick. I think it's wiser to implement _both_ schemes, since
users have different expectations depending on the system they're
working on, typically:
- Unix users all have a HOME variable defined and can use
a ".jamrc" file easily.
- Windows users typically don't; for them, an environment variable pointing
to a configuration file is simpler than defining a HOME variable
and then a file named ".jamrc" :-)
And of course, the command-line flags should still be there for other
users too (shell scripts invoking dynamically-generated Jamfiles
is just an example :-)
It's not French actually :-)
jam <=> confiture
marmalade <=> marmelade
jelly <=> gelée
I believe that the difference between these three words
is the sugar/fruit ratio, though I'm unsure..
OK. Moreover, it's more than 8 letters long; think about how ugly
"marmal~1" is going to be on Win9x systems ;-)
Humm.. maybe we should choose two names:
A - one for the "enhanced jam" thing (currently FT Jam)
B - one for the "Boost.Build" thing
For "A", I think that any name would fit, because the final
executable should ideally still be called "jam", since it
will be 100% backwards compatible with the official Jam.
For "B", I propose "booster".
- it's short
- it has the "boost" identity
- space aeronautics are cool :-)
From: <boga@mac.com>
Date: Mon, 2 Jul 2001 16:56:58 +0200
Subject: RE: FT Jam
100% agree with it. Having no binary package was very frustrating when I was
getting started with jam.
I think that a jam user has two options:
1. Use your own Jambase (-f option, or compile it into jam.exe). This is what
Apple's ProjectBuilder and many others do (including us).
2. Try to live with the built in Jambase without any modification to it.
Generally I cannot imagine #2 for a complex build system. The current
Jambase is simply too limited for it.
I think Jambase should be for Jam what the standard C libraries are to ANSI
C. It's not that now. For example, since the current Jambase doesn't follow
any naming rules, just adding a new variable name to Jambase can break
compatibility with existing makefiles!
I vote for a brand new Jambase!!!
I think that boost's jambase should be kept in the boost project.
What we've done to avoid the problem of compiling the Jambase into jam.exe:
1. We store a set of jamfiles in a folder named Jam inside the project root (ROOT).
2. We compiled a thin Jambase into jam.exe that contains only the
implementation of a 'SubDir'-like rule that does the following:
- calculates the project root (ROOT)
- includes ROOT/Jam/Base.jam
- calls _SubDir (which can be defined in ROOT/Jam/Base.jam)
With this design we can edit our Jambase without recompiling the jam.exe,
and without using the -f option.
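The thin compiled-in rule described above might look roughly like this sketch; the FindRoot helper, the file names, and the rule bodies are assumptions reconstructed from the description, not the poster's actual code:

```jam
# Hypothetical sketch of the "thin Jambase" scheme described above.
rule SubDir
{
    if ! $(ROOT)
    {
        # Locate the project root once, e.g. by walking up from the
        # invoking Jamfile until a marker file is found.  FindRoot is
        # an assumed helper, not a Jam builtin.
        ROOT = [ FindRoot $(1) ] ;
        include $(ROOT)$(SLASH)Jam$(SLASH)Base.jam ;
    }
    # Delegate the real work to a rule defined in ROOT/Jam/Base.jam,
    # which can be edited without recompiling jam.exe.
    _SubDir $(1) ;
}
```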
Date: Mon, 2 Jul 2001 11:07:48 -0700
From: Jos Backus <josb@cncdsl.com>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
True. One could use registry settings or OS2.INI entries in those environments.
From: Paul Moore <gustav@morpheus.demon.co.uk>
Subject: Re: RFC: On the future of Jam, "FT Jam" and Boost
Date: Mon, 02 Jul 2001 20:37:43 +0100
On Windows, it is trivial (i.e., a single API call) to get the name of the jam
executable. Locating the default Jambase alongside it (i.e., jambase.jam in the
same directory as jam.exe) makes sense, and is quite a common model.
From: "Jerry Nettleton" <nett@mail.com>
Date: Fri, 6 Jul 2001 00:53:18 -0500
Subject: message & resource compilers
I am new to jam and have been experimenting with building a Windows
program. I've read about using UserObject and supporting new file
extensions. But after several days of changes, I still can't get jam to
essentially recreate the desired flow of compilation.
questions
1. Since file.rc and file.h are generated from file.mc, how can I make
sure the message compiler (mc) is always run before the resource compiler
(rc) creates file.res or before file.c (which depends on file.h) is compiled?
2. With this setup, jam is looking for a way to compile file.res into
file.obj. How can I change this behavior so that file.res is linked with
Main (prog)?
simulated desired flow
cc lib1.c
cc lib2.c
rem generate file.rc, file.h
mc file.mc
rem generate file.res
rc /r file.rc
copy file.res ..\lib
copy file.h ..\include
cc file.c (dependent on file.h)
lib /out:libutil.lib lib1.obj lib2.obj
cc prog1.c
cc prog2.c
link /out:prog prog1.obj prog2.obj libutil.lib file.res
jamfile
LIBSRC = lib1.c lib2.c file.c ;
PROGSRC = prog1.c prog2.c ;
RCFLAGS = /r ;
Main prog : $(PROGSRC) ;
MainFromObjects prog : file.res ;
LinkLibraries prog : libutil ;
Library libutil : $(LIBSRC) ;
InstallLib ../lib : libutil ;
#GenFile file.h : mc file.mc ;
#GenFile file.rc : mc file.mc ;
#GenFile file.res : rc /r file.rc ;
UserObject file.h : file.mc ;
UserObject file.rc : file.mc ;
UserObject file.res : file.rc ;
InstallFile ../lib : file.res ;
InstallFile ../include : file.h ;
jamrules
rule UserObject {
switch $(>) {
case *.mc : MessageCompiler $(<) : $(>) ;
case *.rc : ResourceCompiler $(<) : $(>) ;
case * : ECHO "unknown suffix on" $(>) ;
}
}
rule ResourceCompiler {
Depends $(<) : $(>) ;
Clean clean : $(<) ;
}
actions ResourceCompiler { rc $(RCFLAGS) $(>) }
rule MessageCompiler { Depends $(<) : $(>) ; Clean clean : $(<) ; }
actions MessageCompiler { mc $(MCFLAGS) $(>) }
From: "Yannick Cornet" <yannick.cornet@openwave.com>
Date: Fri, 6 Jul 2001 16:33:58 +0200
Subject: Multiple Jamfile
I would like to compile a component multiple times, using different flag
options defined in separate jamfiles, all in one pass (one jam invocation).
However, I could not find a way to tell jam to run through more than
one Jamfile in the same subdirectory. I suppose this must have been asked
before, can anyone help me answer this?
Date: Fri, 6 Jul 2001 17:03:09 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Multiple Jamfile
In Jam, I'd say you want to create several targets from the same
source(s), and set the CPPFLAGS, CFLAGS or somesuch on each target.
OPTIM on fastTarget = -O2 ;
OPTIM on debugTarget = -g ;
In Jamfile: "include otherfile ;", methinks. I only ever use that to pull
in Jamrules, but it ought to work for you too.
From: "Yannick Cornet" <yannick.cornet@openwave.com>
Subject: Re: message & resource compilers
Date: Fri, 6 Jul 2001 17:17:39 +0200
I am not sure if this will help; I am also quite new to JAM, but this works for us:
JAMRULES:
RSC = rc ;
MSC = mc ;
RSC_FLAGS = "/l 0x409" ;
rule UserObject {
switch $(>:S) {
case .idl : ObjectFromIdl $(<) : $(>) ;
case .mc : MessageCompiler $(<) : $(>) ;
case .rc : ResourceCompiler $(<) : $(>) ;
case * : EXIT "Unknown suffix on" $(>) "- see UserObject rule in Jamfile(5)." ;
}
}
rule ResourceCompiler { Depends $(<) : $(>) ; Clean clean : $(<) ; }
rule MC { Depends $(<) : $(>) ; Clean clean : $(<) ; }
rule MessageCompiler {
local _h = $(>:S=.h) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
MC $(<) $(_h) : $(>) ;
}
actions ResourceCompiler { $(RSC) /fo "$(<)" $(RSC_FLAGS) $(>) }
actions MC { $(MSC) $(MSC_FLAGS) $(>) }
JAMFILE:
MessageCompiler MSG00409.bin : UIStrings.mc ;
INCLUDES res$(SLASH)Basic.rc2 : MSG00409.bin ;
LinkLibraries Basic.dll : <libs> ;
Dynlib Basic.dll :
Basic.rc
StdAfx.cpp
Basic.cpp
;
where Dynlib simply calls Main but changes some link flags and also
builds the lib target, and LinkLibraries creates the library dependencies
of targets. I use the standard Jambase.
* * *
Date: Fri, 6 Jul 2001 13:02:59 -0500 (CDT)
Subject: Re: Multiple Jamfile
We have done this by creating new sibling directories and putting Jamfiles
in them that use the source from the original directory. This seems to work
quite well. You just set one of the search macros to point to the source dir.
Otherwise, all the rules stay the same; you just set some of the subdir flags differently.
From: Alain KOCELNIAK <alain@corys.fr>
Subject: Installation on NT fails
Date: Mon, 9 Jul 2001 08:59:57 +0200
I'm a new jam user.
I encountered a problem during jam installation on NT:
- I followed the README file step by step:
* uncomment in Makefile the lines for the NT platform
* set MSVCNT="C:\Program Files\Microsoft Visual Studio V60\Vc98"
* run nmake
- The first step of compilation (nmake) works well:
G:\Jam\Jam.2.3\nmake
Microsoft (R) Program Maintenance Utility Version 6.00.8168.0
cl /nologo /Fejam0 -I "C:\Program Files\Microsoft Visual Studio
V6.0\Vc98"/include -DNT command.c compile.c execunix.c ...
command.c
compile.c
execunix.c
...
Generating Code...
Compiling...
parse.c
pathunix.c
pathvms.c
...
Generating Code...
- The second step (jam0) fails:
jam0
Compiler is Microsoft Visual C++
...found 131 target(s)...
...updating 29 target(s)...
Cc bin.ntx86\command.obj
command.c
jam.h(73) : fatal error C1083: Cannot open include file: 'fcntl.h': No
such file or directory
cl /nologo /c /DNT /Fobin.ntx86\command.obj /I"C:\Program\include
/IFiles\Microsoft\include /IVisual\include /IStudio\include
/IV6.0\Vc98"\include command.c
...failed Cc bin.ntx86\command.obj ...
Cc bin.ntx86\compile.obj
compile.c
jam.h(73) : fatal error C1083: Cannot open include file: 'fcntl.h': No such file or directory
cl /nologo /c /DNT /Fobin.ntx86\compile.obj /I"C:\Program\include
/IFiles\Microsoft\include /IVisual\include /IStudio\include
/IV6.0\Vc98"\include compile.c
...
It seems that the MSVCNT path is not used correctly: a /I directive is
added after each space character in it...
I also tried with MSVCNT defined without double quotes, but then the first
step (nmake) failed.
What is wrong with the procedure I followed?
From: Alain KOCELNIAK <alain@corys.fr>
Subject: RE: Installation on NT fails
Date: Mon, 9 Jul 2001 11:01:41 +0200
I put your hack in the Jambase file but it changes nothing.
Should I put it directly in the Jambase file or in the jambase.c file?
Date: Mon, 9 Jul 2001 09:55
Nothing, but the default Jambase doesn't like the spaces in the
MSVCNT variable. I made the following change to Jambase
to fix this:
#if $(OS) = SUNOS && $(TZ)
#{
# Echo Warning: you are running the SunOS jam on Solaris. ;
#}
# Rule to unsplit a variable imported from the environment
# MSVCNT for us. We need it in a single variable with
# spaces and all. Jam splits it on space e.g.
# MSVCNT=D:/Program Files/DevStudio/VC
# turns into:
# $(MSVCNT[1]) = "MSVCNT=D:/Program"
# $(MSVCNT[2]) = "Files/DevStudio/VC"
# This rule puts the spaces back.
# Call it with the name of the variable
rule respace {
local _re ;
local _name ;
# Get the contents of the variable ;
_name = $($(1)) ;
if $(_name[2-]) {
local i ;
local space = " " ;
_re = $(_name[1]) ;
for i in $(_name[2-]) {
_re = $(_re)$(space)$(i) ;
}
$(1) = $(_re) ;
}
}
if $(UNIX) {
if $(OS) = QNX {
AR default = wlib ;
CC default = cc ;
...
#$(MSVC)\\lib\\kernel32.lib
;
LINKLIBS default = ;
NOARSCAN default = true ;
OPTIM default = ;
STDHDRS default = $(MSVC)\\include ;
UNDEFFLAG default = "/u _" ;
} else if $(MSVCNT) {
ECHO "Compiler is Microsoft Visual C++" ;
respace MSVCNT ;
AR default = lib ;
AS default = masm386 ;
CC default = cl /nologo ;
CCFLAGS default = ;
C++ default = $(CC) ;
C++FLAGS default = $(CCFLAGS) ;
LINK default = link ;
Date: Mon, 25 Jun 2001 19:50:39 -0700 (PDT)
Subject: Re: Solved: Need help with SubDir
[A bit off-topic for the Jam list -- if you're not curious about using
Ant, you can skip this.]
The build I was dealing with couldn't do that since it had build-order and
different-classpath issues, and trying to deal with getting that all
correct in Jam would've been a lot less straightforward than just putting
it all into Ant (plus it has all the other Java-oriented built-in stuff).
I don't think so -- what it does is associate a dependency between files
that are depended on and the files that depend on them. E.g., if Foo.java
is depended on by Bar.java and Blat.java, and Foo.java is newer than its
classfile, using the <depend> task before the <javac> task will cause not
only Foo.java to be recompiled, but Bar.java and Blat.java as well.
Very few tasks will run regardless of whether the files involved in the
target are up-to-date (<echo> is the only one that comes to mind). The
<javac> task checks source-files against class-files and only hands off to
the compiler those source-files that are newer than their class-files or
that don't have a corresponding class-file.
If all your .java files can be compiled at once, I can't think of any
reason why you'd need to duplicate a compile target in every build-file.
If you do need to for some reason, you should probably be able to use the
XML include mechanism (for now -- a better include should be coming with
Ant2) and still just have a "standard" in one place, which gets read in --
or you could have the subproject build-files use an <ant> task that runs a
target in some "standard targets" build-file.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Fri, 13 Jul 2001 17:23:59 +0100
Subject: jam -d0 returns failure?
I'm using the external makefile feature of Visual C++ to build some code
using Jam. If I invoke jam with -d1 or -d2, it works fine. If I invoke jam
with -d0, jam returns an error, but builds the project correctly anyway.
From: andreas.held@gretagimaging.ch
Date: Mon, 16 Jul 2001 11:52:50 +0200
Subject: Dependencies and SubInclude
I am quite new to Jam so please excuse if I am trying to do something stupid.
Anyway, I am trying to compile a DLL using several subprojects that are located
in subfolders. What I am doing now is to place a Jamfile into each subfolder,
specifying the compilation rules for the files contained in those folders. Using
LOCATE_TARGET I copy all object files to a central location. In the main Jamfile
I then add the targets for building my library. However, how can I make sure
that the SubInclude statements are being processed before the MainFromObjects
rule? What actually happens is that MainFromObjects is processed first, and
only then are the different SubIncludes carried out. Is there a way to
control this ordering?
On a more general note, what I am actually trying to do is to build a DLL by
including a configurable part of all object files. Is there a general way for
doing this?
Here is my top Jamfile:
advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib
;
ImgProcPhoenix\\Debug\\ImgProcPhoenixD.lib IntelPLSuite2.5\\lib\\msvc\\ipl.lib ;
From: "Bill Clark" <bill@occamdev.com>
Date: Fri, 13 Jul 2001 14:31:53 -0700
Subject: jam vs. cons?
We're looking at make replacements. Does anyone have thoughts
on the relative merits of Jam and Cons (http://www.dsmit.com/cons/)?
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Tue, 17 Jul 2001 15:28:44 +0100
Subject: Changing behaviour of default SubDir rule, w.r.t. ALL_LOCATE_TARGET -- a good idea?
I've been attempting to get my output files placed in "Debug" or "Release"
directories[1]. The LOCATE_TARGET variable does exactly what I want, except
that it's reset every time the SubDir rule is invoked.
The ALL_LOCATE_TARGET doesn't do what I want -- it causes all of the output
from every subdirectory to appear in the same place. In fact, I'm not sure
what the SubDir rule was attempting to do:
LOCATE_SOURCE = $(ALL_LOCATE_TARGET) $(SUBDIR) ;
LOCATE_TARGET = $(ALL_LOCATE_TARGET) $(SUBDIR) ;
This appears (to me at least) to do the following:
1. If $(ALL_LOCATE_TARGET) is not set, then LOCATE_SOURCE and LOCATE_TARGET
become $(SUBDIR). This makes sense - the output files are dumped into the
correct subdirectory.
2. If $(ALL_LOCATE_TARGET) is set to "Debug", for example, then they're set
to Debug $(SUBDIR).
In case (2), above, only the first token of $(LOCATE_TARGET) is ever used.
So why doesn't it just do it with an 'if'?
Anyway, I've changed the SubDir rule in my local Jambase so that the
relevant lines look like this:
if $(ALL_LOCATE_TARGET) {
LOCATE_SOURCE = [ FDirName $(SUBDIR) $(ALL_LOCATE_TARGET) ] ;
LOCATE_TARGET = [ FDirName $(SUBDIR) $(ALL_LOCATE_TARGET) ] ;
} else {
LOCATE_SOURCE = $(SUBDIR) ;
LOCATE_TARGET = $(SUBDIR) ;
}
Which does exactly what I want. My question: Is this a good idea? Will it
have bad side-effects that I've not foreseen?
[1] It's a little more complicated than that, but this serves for the example.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Date: Tue, 17 Jul 2001 18:46:04 +0100
Subject: Invoking external build processes
What's the best way to invoke external build processes from jam?
Three scenarios:
1. External Makefile; e.g. Linux kernel needs to be built as part of the build process.
2. MS Developer Studio .dsp file; e.g. id3lib builds with a .dsp file.
3. Jamfile; e.g. freetype needs to be built and linked with, but we don't
want to touch the included Jamfiles.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: Invoking external build processes
Date: Wed, 18 Jul 2001 16:54:20 +0100
Nice try, Arnt, but I'm being a little slow today. How do I persuade jam to
always build something?
I've got to deal with the following:
I've got a copy of id3lib in my TOP/lib/id3lib directory. It's to be built
on Win32, so the simplest thing to do is to invoke developer studio as follows:
msdev $(TOP)\lib\id3lib\prj\id3lib.dsp /MAKE "id3lib - Win32 Debug"
or
msdev $(TOP)\lib\id3lib\prj\id3lib.dsp /MAKE "id3lib - Win32 Release"
I'd like to invoke the rule something like this:
MSDev id3lib : TOP lib id3lib prj ;
This would make a target id3lib, which would require the same-named .dsp
file in the directory named as the second argument to be built.
I'd then want to invoke another rule, like this:
UseMSDev mainapp : id3lib ;
Which would cause mainapp to depend on id3lib.
1. Does this make sense?
2. Does it seem like a good way to do it?
3. How do I communicate to the linker when building mainapp that it needs to
drag in the stuff generated in the MSDev rule? I considered adding a third argument:
MSDev id3lib : TOP lib id3lib prj : id3lib.lib ;
This would (implicitly) attempt to build
$(TOP)/lib/id3lib/prj/$(DEBUG_OR_RELEASE)/id3lib.lib by invoking $(2).
But, how do I communicate $(3) to the UseMSDev rule?
Date: Wed, 18 Jul 2001 18:31:53 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Invoking external build processes
ALWAYS target ;
I can't really say. It kind of seems to make sense, but the sense eludes
me. A sure sign that I'm too tired. I'll try and reread it tomorrow morning.
Wouldn't a simple Depends mainapp : thestuffgenerated ; do that?
Date: Fri, 20 Jul 2001 11:44:28 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Invoking external build processes
The UseMSDev rule could say either
Depends $(<) : $(>).lib ;
or
LINKLIBS on $(<) += $(>).lib ;
or both. That ought to be enough. Of course, the hardcoding of .lib is
rather a hack. Jam's a bit weak on library handling, I think.
For .lib, substitute .dll, or whatever.
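Putting the thread's suggestions together, the MSDev/UseMSDev pair might be sketched as below; the rule bodies, the DSPFILE variable, and the hard-coded configuration name are guesses for illustration, not tested code:

```jam
# Hypothetical sketch of the MSDev/UseMSDev idea from this thread.
rule MSDev
{
    # $(<) is the pseudo-target; $(>) holds the directory elements
    # leading to the same-named .dsp file.
    local _dir = [ FDirName $(>) ] ;
    DSPFILE on $(<) = $(_dir)$(SLASH)$(<).dsp ;
    ALWAYS $(<) ;        # jam can't see inside the .dsp, so always run msdev
    Depends all : $(<) ;
}
actions MSDev
{
    msdev "$(DSPFILE)" /MAKE "$(<) - Win32 Debug"
}

rule UseMSDev
{
    # Make $(<) wait for the external build, and link its output in.
    Depends $(<) : $(>) ;
    LINKLIBS on $(<) += $(>).lib ;
}
```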
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Sun, 22 Jul 2001 19:28:33 -0400
Subject: Jam's dangerous syntax
I think this must be the 2nd time in the past 3 weeks I've found a bug
lurking in my Jam code because I forgot a semicolon:
my-rule $(args) : $(more-args) # oops
another-rule an-arg ;
The 2nd line is silently interpreted as part of the first rule invocation!
I'm not sure what the answer might be other than to surround all calls with [].
At least I'd find out about the problem by the end of the enclosing block.
Date: Mon, 23 Jul 2001 09:28:16 +0100
From: Julian Gardner <joolz@rsd.tv>
Subject: New User
I have just spent the weekend playing with Jam and found that once I had
made the separate jamfiles I have got most of my project building.
I need some help here please
I have a file called TEXT.C and this includes numerous .LNG files, some
of the .LNG files are taken from raw unicode and converted using an
in-house convertor. How do I go about setting up the dependencies and
conversion?
E.g., in my makefile:
arabic.lng: arabic.unicode
convertArabic arabic.unicode arabic.lng
text.c: arabic.lng english.lng
$(CC) text.c
Date: Mon, 30 Jul 2001 12:45:51 +0200
From: David Turner <david.turner@freetype.org>
Subject: FTJam 2.3.5 release
I'd like to inform you that "FT Jam" release "2.3.5" is now
out. The purpose of this release is mainly to:
- clean up the build system (Makefile, Jamfile, etc.) to
perform proper compilation on Unix and Windows systems
- add a new directory named "builds" containing specific
Makefiles to build the program with Visual C++,
Borland C++ and Mingw (gcc) on Windows
- update the documentation. A new file named "INSTALL"
was added, detailing how to compile and install the
program on your system
- implement a new builtin, FAIL_EXPECTED, for the Boost.Build system...
Source packages, as well as Win32 and Linux binaries, are
available. Please have a look at:
http://www.freetype.org/jam/index.html
(or one of the FreeType mirror sites).
Note that the web pages have been slightly updated. The
complete description of changes between FT Jam and Jam is now at:
http://www.freetype.org/jam/changes.html
Note that I'll try to make a second RFC on this list in
order to explain my intentions regarding Jam/FTJam for
the near future..
PS: Since the Jam documentation itself is so "rough", I have
also started a new documentation for the Jam command
language. A draft can be read at:
http://www.freetype.org/jam/syntax.html
Date: Mon, 30 Jul 2001 13:56:53 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Jam's dangerous syntax
Compilers usually "solve" such problems with warnings. In this case, if
jam sees an argument at the beginning of a line, and that word is also the
name of a rule, then warn.
Of course, compilers generally don't have the option of changing the
language to remove the problem.
Date: Mon, 30 Jul 2001 13:35:11 +0200
From: David Turner <david.turner@freetype.org>
Subject: Re: Invoking external build processes
I won't comment on 1. and 2. here, but the FreeType 2 Jamfiles are already
designed in a way that lets you use them directly in other projects
(i.e. without touching them).
To do that, put something like this in your own Jamfile:
FT2_TOP = [ FSubDir path to freetype2 ] ;
SubInclude FT2_TOP ;
I use it for a custom project; it seems to work very well..
Date: Mon, 30 Jul 2001 13:54:09 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: New User
You need a custom rule for each type of action, and you invoke that rule
for each "pair" of files. I say "pair" because it really is one target to
be built and zero or more that form the basis for building.
Maybe something like this:
rule FromUnicode {
Clean clean : $(<) ;
RmTemps $(<:S=.o) : $(<) ;
Depends $(<) : $(>) ;
INCLUDES text.c : $(<) ; # a bit of a hack, really
}
actions FromUnicode { convertArabic $(>) $(<) }
FromUnicode arabic.lng : arabic.unicode ;
Main yourfinalname : text.c other.c third.c ;
Jam will understand that text.c includes arabic.lng, and thus that text.o
depends on arabic.lng. It also knows that arabic.lng depends on
arabic.unicode, and all the right things will get executed when you run
'jam yourfinalname'.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Jam's dangerous syntax
Date: Mon, 30 Jul 2001 09:19:51 -0400
I think I'd want to be able to turn that warning off for general
consumption, but leave it on during development. Aside from that small
refinement, I think you have hit on a beautiful solution.
From: Vladimir Prus <ghost@cs.msu.su>
Date: Tue, 24 Jul 2001 19:40:46 +0400
Subject: Unnecessary recompilation
Consider a simple Jamfile
Main a : a.cpp helper.cpp ;
Main b : b.cpp helper.cpp;
The problem with it is that helper.cpp gets compiled twice. Is it a bug, or a
feature I don't understand? How can it be changed? I understand that I can
simply grab the C++ rule from Jambase and add the "together" modifier, and then do
the same with the C rule. Is there a better way?
Date: Thu, 26 Jul 2001 18:39:15 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: Anyone else using Jam for BIG projects?
I work for Cisco Systems where JAM has been used for about 3 years as the
primary build tool on an embedded systems project I have been involved with
for 1 year. We have been working in somewhat of a vacuum with respect to
JAM (though we have communicated directly with Christopher Seiwald, we
haven't really participated on this mailing list much since Karl Klashinsky
left our team). As a first step to changing that isolation, I have read
all recent traffic on the archive for this list (since June 1, 2001 and
selected bits prior) and I have peeked at both Boost.Build and ntjam -- way
cool stuff by the way -- which makes me regret being so insular over here.
I have three questions for you. Any answers will be appreciated...
Our project currently consists of about 15,000 source files (several
million LoC) including 18 different source suffix types, two different
target CPUs, many different code variants for different
platforms/products/build types. The code is all located in a large,
complex directory structure managed by local Perl code over ClearCase (to
give us a copy-out model, and multi-site capabilities) with several hundred
engineers working at many sites on four continents rapidly evolving and
growing the source base. We have about 10,000 lines of Jam source to
define rules and actions for our many source types and another 65,000 lines
of code that consists mostly of rule invocations to build things. On a
typical 20 CPU Sun Enterprise E6500 build server, we are looking at around
30-60 minutes to build the main targets in one of our workspaces depending
on server load. My guess at this point is that we may be the biggest kid
on the JAM block. So my first question is:
Is anyone else out there using JAM for anything as complex as this?
Unfortunately we find that JAM has some problems in our large and complex
environment. Most particularly, dynamic header scanning is becoming
prohibitively time consuming for us. We believe it is the most significant
redundant operation performed at each jam (most of our source and header
files are read end-to-end for the regexp pattern matching at each jam --
maybe 25,000 files). We do have a few optimizations coded into our huge
home-grown header scanning rules that we use to avoid big chunks of this
pain, and we also use a redundant second set of jam infrastructure (one
automatically generated from the other to mangle the dependency tree for
speed based on hints from the engineers workspace). So my second question is:
Has anyone been working on easing the pain of header scanning large numbers
of source files?
Today my group is at the point where we need to make some big changes for
the stability, and extensibility of our tooling and we feel that JAM in its
current state makes this very difficult. I am looking at either:
- altering the JAM executable significantly to fix some things we see
as design weaknesses and to add some major new features we need (beyond
those I have read about here), or
- really biting the bullet and switching to some other build tools
(e.g., I am looking at GNU Cons, GNU Make 3.79.1, and others).
By the way, GNU Cons <http://www.dsmit.com/cons> looks interesting --
similar dynamic header scanning to JAM, but nice extensible OO Perl and
infinitely more flexible than regexp scanning -- maybe slow though, I
haven't tried it. GNU Make 3.79.1 is used widely here at Cisco for other
even larger more complex code bases so we have some local expertise with it
and its predecessors. That version has some JAM-like parallelism features
and several of our users would love us to switch to it, but I am guessing
it would still likely require some recursive make-invoking-make nonsense,
which would likely be costly in our environment due to our richness of
source types. The make team here at Cisco has some nifty tooling for
handling dynamic header dependency calculations without scanning though,
but I think we can migrate that to the JAM environment. So anyway, my
third question is:
Does anyone have any thoughts on other build tools that might be worth
exploring to handle a project of this complexity (like GNU Cons, GNU make
3.79.1, or others)?
I would be interested in comparing notes with any of you who use JAM for a
large complex project. Maybe we can exchange some ideas about this
stuff. And I promise, if we do decide to go down that road of modifying
JAM, I will be back here at this mailing list to discuss what we are thinking
about doing and to seek feedback and possibly collaboration opportunities.
Date: Tue, 31 Jul 2001 11:24:01 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Unnecessary recompilation
(I suppose the missing space on the second line is just a typo.)
It sounds like a pure bug in jam - jam doesn't understand that the two
helper.o targets are the same.
From: Vladimir Prus <ghost@cs.msu.su>
Date: Tue, 31 Jul 2001 16:04:20 +0400
Subject: Problems with: custom rule; cross-directory dependencies
The practical task motivating my post is the following. I need to make an
executable from two source files: main.cpp and parser/asm.wd.
The transformations that need to be applied to parser/asm.wd are:
- parser/asm.wd -> parser/asm_parser.whl, parser/asm_lexer.dlp
- parser/asm_parser.whl -> parser/asm_parser.cpp, parser/asm_parser.h
- parser/asm_lexer.dlp -> parser/asm_lexer.cpp, parser/asm_lexer.h
There are three utilities which can perform these transformations. The resulting
*.cpp files should be compiled and linked into the executable.
The problems are:
1. Jam assumes each source given to the Main rule ends up as one object, which
is not the case here. I can write
Main main : main.cpp parser/asm_parser.cpp parser/asm_lexer.cpp ;
but this is hardly convenient.
2. Writing rules for each transformation is boring. I wrote a rule called
'UserAction' and used it like this:
UserAction asm_parser.cpp asm_lexer.h : asm_parser.whl : whale ;
It worked when the wd file was in the same directory as main.cpp, but when it
was put in the parser dir, a problem appeared:
Main main : main.cpp parser/asm_parser.cpp parser/asm_lexer.cpp ;
Here, Main rule adds grist to parser/asm_parser.cpp
UserAction asm_parser.cpp asm_lexer.h : asm_parser.whl : whale ;
Here UserAction adds grist to asm_parser.cpp, and we have two targets:
<somegrist>parser/asm_parser.cpp and
<somegrist!parser>asm_parser.cpp, which are not considered equivalent. Note
that the Main rule erases existing grist on sources, so writing something like
<parser>asm_parser.cpp in the Main rule accomplishes nothing.
I wonder if anybody has ready solutions for those problems, or plans to do
something about them, or simply ideas which changes are required.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Anyone else using Jam for BIG projects?
Date: Tue, 31 Jul 2001 15:21:52 -0400
I've been developing a build system for boost (www.boost.org) based on Jam
that is designed to handle problems like the ones you're describing, but
have not thrown any really huge build jobs at it yet.
I think my build system may have some similar problems eventually. When you
are building a collection of targets on multiple compilers and build
variants, what do you really need to do to identify header files properly?
Well, if each header target gets scanned only once for #includes, then you
need to grist header files with:
1. The header name
2. The header search algorithm used by the compiler
3. The complete list of user (#include "...") and system (#include <...>)
paths being used to compile the source file
4. Possibly even the directory of the #including file, depending on the
search algorithm.
That means the same header file must be gristed differently when used in
different builds in order to get the correct dependencies registered. If
each distinct target (as opposed to file) is scanned separately, the same
header will be scanned many, many times.
I've been wondering if it would be reasonably easy to use Jam to generate
the equivalent of gmake's .d files, which could help us to avoid some of the
scanning. I don't have any idea about the feasibility of that idea, though.
It's probably worth noting that cons' Python derivative, scons, is now under
development at sourceforge. I think that will be a more suitable tool for
large projects, mainly because I know that a large project build system is a
codebase all its own and I think Python is better suited to large projects
than Perl. Also, a bunch of experienced and energetic people are working on it.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Anyone else using Jam for BIG projects?
Date: Wed, 1 Aug 2001 12:38:16 -0700
Our situation is not as large as yours, but it's comparable, I think.
We build on several platforms, including NT, with 6 variants (various levels of debug, purify, quantify).
We have some 6000 source files with about 20 file types.
We have approximately 500 Jamfiles with 12000 lines of rule
invocations. (Rules are never defined in Jamfiles, only in Jamrules.)
Our Jambase is minimal; it contains just 37 lines. All it really
does is call $TOP/Jamrules and ./Jamfile. We have not recompiled
or relinked the jam executable in the last 2 years. We last recompiled
to fix a bug in jam's AIX archive support (the bug still exists
in the official jam release).
Our Jam system has some 30,000 targets.
It takes about 8 hours to build the system on our fastest
machines, 1 hour if we use -j10. On Solaris it takes
3 days! (4 hours if we use multiple machines, -j 6, and
JamShell to rsh the commands across the network.)
We probably have 1.5 million lines of source code.
No, and I don't think it is really required. And something
like the Boost build system is the wrong way to go, in my
opinion.
On our system we have implemented some very straightforward
solutions to shorten build times. We also have a system which
is as flexible as the Boost system (I believe) but much easier
to work with and understand.
Here is what we do.
We have a Bld component which contains all build and build support
code. This component includes some little gems such as idl.pl
and link.pl which augment these portions of the build. For
example, we need to pre- and post-process IDL files and the IDL
output. Our jam rules know about the build dependencies but nothing
about the complexities associated with idl compilation. Link.pl
has specialized knowledge about how to handle the various linker
types that we use on each platform. It preprocesses command lines
to make jam's job easier. It also handles modifying the link
for Purify and Quantify. Jam communicates with link.pl via
environment variables.
Periodically, our build team produces reference builds on each
platform. Developers then "shadow" the reference. The shadow
process produces a copy of the development tree, but with only links
to the files (on Unix anyway). Shadowing takes only a few minutes. An
unmodified shadow will build nothing, and takes less than 1 minute
normally.
Our build process uses a build/export/release model. So "jam"
will build local targets, "jam exports" builds and exports (to
the rest of the development tree) those targets that need exporting, and
"jam release" builds a release (installation tree) ready for packaging.
Calling jam in any directory ONLY builds that directory's targets,
sub-directory targets, and dependencies (wherever they may be).
So if you modify a header file, say, and then change to the Vman/src
directory and "jam exports", only Vman is built. None of the other
200-some-odd executables are built. Header file scanning only occurs
for the files directly/indirectly required for Vman.
Calling jam from the top of the development tree builds all targets.
Even when building the whole tree, the header file scan takes less than
5 minutes, and since that is a very small fraction of the total build
time, there is no need to optimize it further.
Note, each file is scanned only once during a build, regardless
of how many times it is referenced. This is because exported
header files are NOT gristed. Of course, we include header files as
#include <Cmp/File.h>
so the component (Cmp) becomes part of the jam node name and effectively
grists the header file.
jam is so much easier on the end users. Gmake would not handle
our whole build tree; at least the version we had 3 years ago
would not. Remember, our jam tree is equal to a makefile
with 20000 lines (or more) and 30000 targets. Further, we wanted
a build process which was local (current working directory)
sensitive, which is hard to achieve in make unless you digress
into a non-declarative build system.
I would not switch from jam, there is nothing to equal it at the present.
The rule set in the distributed Jambase is a fine example of jam rules,
but it will always need to be rewritten for any serious project.
Just as with make, I have always ended up completely redefining
the built-in set of rules.
The new syntax helps some, but not enough to cause me to upgrade.
Jam has its problems, no doubt -- but I'll keep them to myself until I
have time to do something about them :)
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Unnecessary recompilation
Date: Wed, 1 Aug 2001 12:49:17 -0700
Main a : a.cpp helper.cpp ;
calls Objects a.cpp helper.cpp ;
which calls Object helper.o : helper.cpp ;
which calls C++ helper.o : helper.cpp ;
Main b : ...
also makes a similar call sequence,
including C++ helper.o : helper.cpp ;
Since the C++ rule was invoked on
helper.o twice, Jam feels the need to
invoke the C++ action twice.
That's just the way that Jam was written.
You can see lots of evidence of this
behavior in the distributed jam rules.
Look for lines like
if ! $($(<)-included) { $(<)-included = true ; }
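The call sequence above can be modeled in a few lines of Python (a toy sketch only, nothing like jam's actual C internals; all names here are made up for illustration):

```python
# Toy model: invoking a rule twice on the same target queues the
# action twice, unless the rule guards itself with a "-included"
# style variable, as the distributed jam rules do.
queued = []          # actions jam would later execute
guarded = set()      # targets a guarded rule has already seen

def cpp_rule(target, source, guard=False):
    if guard and target in guarded:
        return       # second invocation becomes a no-op
    guarded.add(target)
    queued.append(("C++", target, source))

# Unguarded: Main a and Main b both reach "C++ helper.o : helper.cpp"
cpp_rule("helper.o", "helper.cpp")
cpp_rule("helper.o", "helper.cpp")
print(len(queued))   # 2 -- the action is queued twice
```

With guard=True the second call returns early and the action is queued only once, which is exactly what the `$(<)-included` idiom buys in Jambase.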
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Problems with: custom rule; cross-directory depende
Date: Wed, 1 Aug 2001 13:09:44 -0700
I would add extension .wd to your Objects rule
(or UserObjects rule),
something like:
case .wd :
switch $(<:S=) {
case *_parser : C++ $(<) : $(>:S=_parser.cpp) ; Whale $(>) ;
case *_lexer : C++ $(<) : $(>:S=_lexer.cpp) ; Whale $(>) ;
}
Modify your Objects rule like this (don't take me too literally):
rule Objects {
    local i j s x ;
    makeGristedName s : $(<) ;
    for i in $(s) {
        objectFiles x : $(i) ;
        for j in $(x) {
            Object $(j) : $(i) ;
        }
    }
}
where objectFiles converts .cpp to .o, and .wd to _parser.o and _lexer.o
And Whale rule
rule Whale {
    if ! $($(<)-whaled) {
        $(<)-whaled = true ;
        WhaleDo $(<:S=_parser.cpp) $(<:S=_parser.h)
            $(<:S=_lexer.cpp) $(<:S=_lexer.h) : $(<) ;
    }
}
rule WhaleDo { Depends $(<) : $(>) ; }
actions WhaleDo { Whatever }
Variations on this are possible; for example, you could divide Whale into
two rules, Lexer and Parser, say.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Anyone else using Jam for BIG projects?
Date: Wed, 1 Aug 2001 18:05:50 -0400
It seems a little unfair for you to toss that remark off without an
explanation. I respect your experience in building large projects. What
about Boost.Build do you think is inappropriate?
Subject: RE: Anyone else using Jam for BIG projects?
Date: Wed, 1 Aug 2001 16:55:42 -0700
Yes, sorry, I guess it's a bit unfair. A few weeks ago (months ago :)
I intended to send you a more formal (and private) criticism
of the Boost system, but one thing and then another, ...
By the time I sat down to write a reply, I had convinced myself
that you were far enough along that major criticism could add no value.
(From my frequent job as code reviewer, I know that most often,
going back five steps is not an option people will entertain.)
If I was wrong, then I'll spend a Saturday and give you some feedback
that you can use.
What do I think is inappropriate? In a nutshell, I think that
the tack of adding "requirements" in the way that you have makes
the user's job of specifying jam files more difficult. What
would I propose as a solution -- well, that's what I would need
a day to formulate.
Date: Wed, 01 Aug 2001 17:39:54 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: Re: Anyone else using Jam for BIG projects?
Thanks very much for your prompt and thought-provoking reply! Comments below....
I downloaded "build_system_2001_6_18.zip" and have read a little bit about
your system. I will read more about it when I get a chance. It looks very
interesting. With the large amount of JAM source we have in our workspaces
and with lots of vocal engineers, I am a bit leery of adopting anything
that will require major changes to our users' Jamfile code, but I am not
dismissing Boost.Build out of hand. I will familiarize myself with it
before making any big decisions about it. So you can probably count on
some naive questions from me on this list when that happens. :-) At the
very least, I expect to learn some new things while digging into
Boost.Build. Thanks for your efforts on this project!
We don't grist them with this information but we do store all of this
information (associated with the header name as it will be included) by
using indirectly named variables which we can retrieve later. For example:
# early on, in the Parsing Phase-executed code in our header-exporting
# code (we require explicit exports for our public header files so we can
# control and track API access) we do this
x = <header name and directory as it will be included> ; # e.g., x = foo/bar.h ;
$(x)_blah = <information we need later in the build> ;
...
# later on, in the Binding Phase, when scanning files and executing our
# local header rules
if $($(header_name_as_included)_blah) {
    # This header has the "blah" information set for it, so go ahead
    # and use it...
    ... $($(1)_blah) ...
}
We also attach some info to just the header name base/suffix-keyed indirect
variable since some headers get included this way and the search paths take
care of picking up the right ones. I.e., as above but:
x = <header name base and suffix only> ; # e.g., x = bar.h ;
$(x)_twiddle = <information we need later in the build> ;
Yes, but we have decided not to worry about being perfect about this. We
don't want re-scanning so we accept a few spurious dependencies. In the
spirit of JAM's built-in promiscuity with respect to adding header
dependencies based on regexp pattern matching, without regard to
conditional compilation, we simply accept that a few spurious dependencies
will arise from our technique when two different public headers in
different directories have the same base/suffix (which occurs surprisingly often).
Excellent! I had wondered if anyone else out there had been thinking along
these lines. I am very excited that someone else is interested in this,
David. The GNU make 3.79.1 community elsewhere in Cisco has set up their
build to use these .d files very effectively. They essentially get 100%
accurate header dependency information (re-scanned in every context where
every header is included, with appropriate preprocessor activity) and they
get it at near-zero processing cost since it gets generated by the compiler
(C, C++, and we could easily make our other dozen compilers do the same) as
a byproduct of building the files. At first glance this looks like a bit
of a chicken and egg problem in that you get the dependency information
off-by-one in the sense that you have to build your code to get its
dependency info in order to know whether to build your code in the first
place. But since the only way that the dependency profile is able to
change is by the alteration of one of these dependent files, you can be
assured of knowing when this dependency info is inaccurate, and you can drive
the build accordingly. I am very interested in incorporating this
strategy into JAM. Not exactly making jam generate ".d" equivalent files
in Jam syntax, but since the compilers create these things in "make"
syntax, just make it possible for JAM to use this information (maybe using
new internal code, an external code mechanism like a plug-in module of some
kind, I think there are a few ways we could do this...). Anyway, I hope
something like this is feasible, but there are certainly some issues with
respect to generated code that will make this challenging in our (Cisco's)
environment, but I have some thoughts about that. I would love to talk
more about this with any interested folks.
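For the curious, the .d files in question are just make-syntax dependency lines, so consuming them outside make is mostly string processing. A minimal Python sketch of the parsing step (my own illustration; the function name and sample data are invented, not part of jam or any compiler):

```python
def parse_dfile(text):
    """Parse one gcc-style .d fragment, e.g. 'main.o: main.cpp \\<newline> baz.h',
    into (target, [dependencies])."""
    text = text.replace("\\\n", " ")        # join backslash-continued lines
    target, _, deps = text.partition(":")
    return target.strip(), deps.split()

d = "main.o: main.cpp \\\n foo/bar.h \\\n baz.h\n"
print(parse_dfile(d))  # ('main.o', ['main.cpp', 'foo/bar.h', 'baz.h'])
```

A build driver could then compare the timestamps of the listed dependencies against the .d file itself to handle the off-by-one problem described above: if any dependency is newer, the recorded info is stale and the target must be rebuilt, regenerating the .d as a byproduct.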
Cool. I'll take a peek. Thanks for the tip.
I am very glad to hear that. I will be socializing on this mailing list
some ideas our team has about significant changes to JAM. I want to see,
if we go this far into changing JAM, will we be going there on our own or
will the existing JAM community rally around this stuff and cheer. And I
am hoping to hear from folks using JAM in other contexts about whether
these ideas make sense in their contexts. We are very open to being
convinced that these changes are not needed. Anyway, we have an
embarrassingly long list of changes (currently there are about 29 or 30
fairly big things) on our wish list. There are maybe 6 or 7 themes among
them that I will bring to the list separately for discussion (one of which
is the .d stuff mentioned above). Thanks for your interest!
Date: Wed, 01 Aug 2001 18:22:32 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: RE: Anyone else using Jam for BIG projects?
Thanks for your response! Your environment looks very similar to ours in
complexity (and I think also in the architecture of your JAM code).
My comments are below...
It sounds to me like you are in the same ballpark with us in terms of build complexity.
Well, from my viewpoint, the jury is still out on this question. I need to
read more about Boost.Build.
Verrrry similar to our build environment. We have a "build" component (and
many other build support components under "buildx/*", "util", "tools/*").
Again, we have very similar tooling. To create our duplicate JAM
infrastructure (which we call "BOB" -- Binary-Oriented Build) takes us just
under a minute. The resulting infrastructure knows only where binaries are
located, not what their dependencies are (i.e., in the duplicate
infrastructure, jam does *not* know how to build these entities at
all). Build times for us run around 3 minutes in a BOBbed workspace.
We also use this extensively. Building from a component directory in our
environment takes on average 39 seconds (the last time we checked). This
technique essentially clips off the entire dependency tree outside of the
component except for those headers included and static libraries linked
from outside. Most of our engineers work mostly in this way, but whole
tree builds are required for some (such as infrastructure components) who
use BOB unless their APIs change in which case they have to do a BOB-less
whole tree rebuild. So most of our folks have build times under a minute,
the rest get to build in 3 minutes or so most of the time but occasionally
they take the big hit on API changes (typically about 15-20 minutes on a
tree that has been fully built previously and has minimal changes). About
50-60% of that time is spent executing JAM source and header scanning.
Our environments differ there.
We do the same.
We do the same, and headers are only ever scanned once in any build. We
take great care to ensure all build changes preserve this, since header
scanning is such a big part of our build.
The Cisco make gurus use make recursively to achieve this. The cost is
that build rules (how to build this suffix from that suffix) must be read
at every recursive make invocation. Since their rule files are small (few
source types) this is not an issue for them. Ours are big, and so I expect
we will want a single make from the workspace root as well as makes from the
component directories. I think this can be accomplished as JAM does it, by
using context-setting code, and by "including" component makefiles inside
the root makefile.
Also, I am not so sure that JAM is easier on the end users than all of the
above tools. Cons looks pretty user-friendly. Also, our end-users (i.e.,
the engineers inside Cisco who are my customers essentially) are pretty
vocal about their dislike of Jamfile coding and most come from a make
background of course.
Any specific thoughts about weaknesses of GNU Cons?
I feel the same about that.
Same here. I assigned one of my engineers to test out JAM 2.3 shortly
after its release. We found too many bugs in it to feel safe migrating to
it. Off the top of my head I remember a couple of nasty ones:
- 4 or 5 rules in the Jambase were removed accidentally!
- new semantics introduced for the NOCARE rule that broke generation of header files!
Though some of the new features look attractive, we have decided we will
not move to a new JAM version without working up a pretty hefty regression
suite first. The risk of having several hundred engineers on several
continents sitting on their hands due to a JAM bug is just too high for us.
Well, that's where we are right now. Fixing the remaining problems in our
build infrastructure will be challenging using the existing JAM (currently
a 2.2 derivative for us). We think changing JAM will be easier than living
with its limitations.
I plan to bring forth some ideas that my team has about JAM changes over
the next few weeks, and I am interested in hearing feedback from the list
on these topics.
P.S. From your e-mail address I am guessing you work for MDSI in
Richmond? I used to have a BC.CA address until recently myself. I'm from
Victoria. Cheers...
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Anyone else using Jam for BIG projects?
Date: Wed, 1 Aug 2001 21:44:00 -0700
I just timed the header file scan portion of a jam invocation on
a local file system on HPUX: less than 1 minute.
2 minutes, 10 seconds, when everything is over NFS.
So the question would be, why would your scan times be
so much higher as to be a problem for you?
Remember, people who need to build the whole tree will be
waiting hours anyway.
To speed up dependency analysis without changing jam itself,
you could use makedepend and process the output to create
a jam.deps file in each component which you then include into
your jam files. You could use jam itself to rebuild the
dependencies, but that would definitely be a pre-invocation of
jam. This would also be error prone unless jam was invoked
from within a wrapper script.
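The makedepend-to-jam.deps conversion is a few lines of text munging. A sketch in Python (a hypothetical helper, not part of any tool mentioned here; it assumes jam's built-in Includes rule, ungristed file names, and that each .o came from a like-named .c):

```python
def makedepend_to_jam(make_output):
    """Convert makedepend output lines like 'foo.o: a.h b.h'
    into jam 'Includes' statements for a jam.deps file."""
    lines = []
    for line in make_output.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # makedepend emits comment lines
            continue
        obj, _, hdrs = line.partition(":")
        # Assumption: the .o was built from a like-named .c file.
        src = obj.strip().rsplit(".", 1)[0] + ".c"
        for hdr in hdrs.split():
            lines.append("Includes %s : %s ;" % (src, hdr))
    return "\n".join(lines)

print(makedepend_to_jam("foo.o: a.h b.h"))
```

The resulting jam.deps file can then be pulled in with an include statement, so header relationships are declared rather than re-scanned on every invocation.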
(This was one of the reasons we went to jam in the first place.
People happily live with systems like this using make, but I've
been burnt too many times by builds that did not refresh their
dependencies.)
If you want the makedepend-like step to be automatically run
within the same build of jam, then you would be looking at
more extensive changes to jam.
In make0, before headers is called, call a new function,
say dependencies. In dependencies you would have to
a) determine if the makedepend step needs to be called. This
is the worst part, but could be done easily with a perl
script.
b) invoke makedepend with all the include "stuff" required.
c) load the dependencies into jam (easy)
d) CLEAR the HDRRULE on all the files for which dependencies
were found (this will stop jam's traditional scanning)
I never did any of this because I'm not willing to force the
jamfiles to impose that all files in a directory get built with
the same C++/C/yacc/lex (whatever) options and using the same
include path. This might be the way that 99% of the sources
are built, but it's always that 1% that kills you.
As to users not liking writing Jamfiles -- except for the semicolon
thing, the jamfiles are so easy to build that most of ours are
generated automatically from our Rose model. A Jamfile
is rarely more complicated than:
SubDir TOP x y z ;
Component y ;
ExportHdrs header.h header.h header.h ;
ExportIDL file.idl ;
Library libY : file.idl file.c file.cpp ;
Library localY : file.c file.cpp ;
ExportLib libY ;
ExportBin YApp ;
Server YApp : YMain.cpp ;
CommonLibraries YApp ;
LinkLocal YApp : localY ;
LinkLibraries YApp : libA libB libC libD ;
(just more lines and more files :)
If your users need to do anything more than declare what goes
into the library or executable, and what is local and what is exported,
then your jamrules need to be reworked.
(I have a component called "Fake" which fakes out all of our project's
build tools. This allows me to simulate a build in order to debug
a new set of jamfiles without having to actually build the system.
I also have a jam.exe which I have doctored to tell me why a target
is getting rebuilt. It produces too much output to be generally
usable, but it has helped track down bad dependency statements.)
Yes, I'm from/in/born/raised (and, unless somebody rescues me, will probably
die) in BC. I work in Richmond and live in Coquitlam. The commute is
the worst part.
I've always liked Victoria -- it seems much more idyllic than Vancouver and its suburbs.
Some days (many days :)) I wish that somebody would offer me a job
someplace warmer and DRIER. Do you know of somebody who needs a
good system/software architect, knows all the buzz-words, has bad spelling,
can out-code even 16-year-old hackers, dreams in UML, perl, java, and c++,
designs embedded software or enterprise systems, usually while stuck in traffic
(and builds Visual Basic code to turn the Rose models into highly structured
code templates for developers to code against)?
Date: Wed, 01 Aug 2001 22:23:23 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: Whitespace As Delimiter -- Yuk!
You can kill that "missing-space-before-semicolon" bug once and for all by
tweaking your JAM source as follows:
In scan.c, in yylex(), around line 300 it has:
if( ( c = yychar() ) == EOF || !inquote && ( isspace( c ) ) )
break;
}
/* Check obvious errors. */
You can tweak that to:
if( ( c = yychar() ) == EOF || !inquote && ( isspace( c ) || c == ';' ) )
break;
}
/* Check obvious errors. */
Unfortunately that's a bit like trying to soak up a flood with a single
bread crumb and you *cannot* kill all other such token merging problems
this easily. Just try adding in a similar check for ':' and you'll see
what I mean. That addition would break the code that uses the $(foo:BS)
syntax, for example. JAM (unlike most programming languages) considers
that entire expression to be a single token (i.e., one lexical atom). More
typically, the scanner/lexical analyzer would stream out 6 tokens to the
parser for this expression.
I think the white space delimitation of tokens in JAM's language is an
unfortunate weakness. It causes us at Cisco many headaches in the form of
difficult-to-track-down core dumps (bus errors or segmentation faults) most
of the time, and just plain wrong behavior without any error messages at
other times. This is unacceptable behavior for a production tool under any
circumstances, but for a tiny little error of omitting a space character it
is doubly so.
I advocate completely gutting out the home-grown scanner from JAM, and
replacing it with a nice normal extensible lex-built scanner in which
whitespace is irrelevant to token separation, as it is in most programming
languages. This change implies some significant language usage changes
though. For example, code like this:
SubDirCcFlags -DDEBUG=1 ;
would have to be changed to something like this:
SubDirCcFlags "-DDEBUG=1";
to avoid separating this single SubDirCcFlags parameter into a bunch of
tokens (which would later be parsed by jam into an expression that jam
itself would try to evaluate -- not really what is wanted here). There are
many other things that would require change too, since expressions like this:
$(foo)bar
would be indistinguishable from expressions like this:
$foo) bar
when delivered as a token stream by a typical scanner. Of course these
mean very different things to jam. So to code the former you would need to use:
"$(foo)bar"
So changing the scanner the way I am advocating would require people to use
quotes in a lot of places where they are not needed today. Would that be a
difficult change for you and your jam users to accept?
Note that a scanner change like this also implies a major rewrite to the
parser since the parser would be receiving a completely different kind of
input stream.
I would be very interested in hearing people's thoughts on this.
P.S. I don't accept the argument made in the JAM Language document:
Jam/MR requires whitespace (blanks, tabs, or newlines) to surround
all tokens, including the colon (:) and semicolon (;) tokens. This is
because jam runs on many platforms and no characters, save
whitespace, are uncommon in the file names on all of those platforms.
In my experience, UNIX, the Mac, Windows (all flavors), and even very old
PC-DOS and MS-DOS filenames (to choose a few of JAM's many platforms) could
*all* contain whitespace, and frequently they did. Let's delimit names with
quotes, not white space. Personally, I am used to using quotes even when
not required in some contexts in some languages, just to be explicit. And
of course, if one wants to embed a quote in one of these strings (e.g., to
have a filename contain one of these characters) one can use the \" escape mechanism.
Date: Thu, 2 Aug 2001 12:17:37 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Unnecessary recompilation
I understand the logic that makes it happen, but fail to see any argument
for this behaviour.
That sounds like the author realized what was happening... but I still
don't see any reason for it. Shouldn't the action execution code think
"Oh, I've done this exact action, no need to do it again"?
Date: Thu, 2 Aug 2001 12:34:51 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Anyone else using Jam for BIG projects?
What irritates me is when a build that involves ten seconds of compiler
time takes much more than ten seconds on the wall clock. Yes, I'm serious.
I also use a hacked jam that tries to compile the "right" file first, to
deliver any error messages quickly :)
Before the make, jam should look for and perhaps read a cache file. Don't
read it if it's older than a day or two, maybe.
headers() should, for each file, either take information from that file or
do the scanning, depending on whether the header file is newer than the
cache file or not.
At the end, jam should write a new cache file.
What I describe should not cause any such change... or am I wrong?
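A rough Python sketch of the cache scheme described above (the file name .jamdepcache, the "source: headers..." line format, and the two-day cutoff are all invented for illustration):

```python
# All names here (.jamdepcache, the "source: headers..." line format,
# the two-day cutoff) are invented for illustration.
import os
import time

CACHE = ".jamdepcache"
MAX_AGE = 2 * 24 * 3600  # distrust a cache older than a day or two

def load_cache():
    """Read the cache file, unless it is missing or too old."""
    deps = {}
    try:
        if time.time() - os.path.getmtime(CACHE) > MAX_AGE:
            return {}
        with open(CACHE) as f:
            for line in f:
                src, _, hdrs = line.partition(":")
                deps[src.strip()] = hdrs.split()
    except OSError:
        return {}
    return deps

def headers(src, cache, scan):
    """Take include info from the cache, or rescan the file, depending
    on whether the source is newer than the cache file."""
    try:
        cache_mtime = os.path.getmtime(CACHE)
    except OSError:
        cache_mtime = 0
    if src in cache and os.path.getmtime(src) <= cache_mtime:
        return cache[src]
    return scan(src)  # fall back to a real #include scan

def save_cache(deps):
    """At the end of the run, write a fresh cache."""
    with open(CACHE, "w") as f:
        for src, hdrs in sorted(deps.items()):
            f.write("%s: %s\n" % (src, " ".join(hdrs)))
```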
Date: Thu, 2 Aug 2001 12:41:11 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Anyone else using Jam for BIG projects?
I am wrong. Changing some variable in a Jamfile might not cause the
include information to be updated.
From: "Roger Lipscombe" <rlipscombe@riohome.com>
Subject: Re: Unnecessary recompilation
Date: Thu, 2 Aug 2001 12:01:57 +0100
Surely it's a little more complicated than that -- you could have another
rule that added a "C++FLAGS on" to one of the targets, and then invoked
MainFromObjects. This makes it difficult to tell whether the C++ rule is
exactly the same.
I admit, however, that:
a) It's not impossible to deal with this case, and...
b) You'd have to be totally insane to do this without specifying a different
output file/directory, to avoid confusing the two resulting .o/.obj files.
Date: Thu, 2 Aug 2001 13:09:43 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Whitespace As Delimiter -- Yuk!
Jam is at risk of forking. This is the sort of change that makes the fork
certain. Unless David, David and the new perforce hire all agree... IIRC
perforce hired someone who has jam as a large part of his job description
as of August 1.
Date: Thu, 2 Aug 2001 13:23:25 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Unnecessary recompilation
Rules are hard. Actions should be easier. By the time jam is about to run
an action, it can tell _exactly_ what the action is and omit it if the
same action has been run in the exact same circumstances.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Anyone else using Jam for BIG projects?
Date: Thu, 2 Aug 2001 10:24:17 -0400
A more recent version is available via anonymous CVS from sourceforge under
the boost/tools/build module.
I wouldn't suggest it, at least not at this point. You have a large and
stable system. Boost.Build is just getting up to speed.
If you think it holds real promise for your work, your feedback (and
especially code contributions) would be much appreciated.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Anyone else using Jam for BIG projects?
Date: Thu, 2 Aug 2001 10:30:03 -0400
Boost doesn't work that way. We thrive on peer review feedback.
Actually, yes, I would appreciate it.
Hmm, if anything it's the default-BUILD section of a target specification
that has been superfluous for me in practice. The requirements section has
been really useful and easy (for me, of course!), though I'd love to hear a
simpler interface proposal.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Re: Whitespace As Delimiter -- Yuk!
Date: Thu, 2 Aug 2001 10:37:58 -0400
My view, in brief, is that whitespace delimiting is great for top-level
Jamfiles but lousy for people writing Jam rules. The underlying Jam language
is rather weak for the sort of large build-system construction jobs that I'm
trying to throw at it. [It probably would also help everyone to have ";" be
a delimiter].
Date: Thu, 02 Aug 2001 07:45:20 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: Re: Anyone else using Jam for BIG projects?
Exactly what I was thinking. I still plan to absorb as much as possible
from Boost.Build though. It looks very innovative.
Me too. I think my contributions to this list will probably have to occur
before 8 or after 5 (Pacific time) since daytime is usually pretty
intense. But I wouldn't want it any other way. :-)
Date: Thu, 02 Aug 2001 09:14:38 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: Re: Anyone else using Jam for BIG projects?
Apparently there is (in Solaris anyway). Some of our developers here have
suggested a similar strategy.
Date: Thu, 02 Aug 2001 09:52:51 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: RE: Anyone else using Jam for BIG projects?
Hmmm... I'm not sure. I just counted and we have 8,995 ".h" files
(comprising 1,863,096 lines) and 9,414 ".c" files (comprising 4,663,950
lines), plus many other file types are scanned in our system (we have
several other source types, with various include syntaxes). We also have
around 2000 lines of code in our HeaderRules.jam code that manages this
(i.e., the code that runs during the Binding Phase; there is additional
header code that runs in the parsing phase to set up for that code). Maybe
the combination of our heavy JAM language processing, lots of
interconnection between these files and large file volume is causing the
time difference? Do you have these metrics for your code? I did the
following to get the LoC numbers:
cd <source tree root>
find . -name '*.h' | xargs wc -l | grep -v ' total$' | awk '{ s += $1 } END { print s }'
find . -name '*.c' | xargs wc -l | grep -v ' total$' | awk '{ s += $1 } END { print s }'
In our environment, a whole tree build from scratch takes around 45
minutes, never hours.
And I think you also take a big time hit for that unless you use the
"off-by-one" technique I mentioned previously and handle it very carefully.
I think there is a better way.
For first build, build everything (or everything required for this
developer's context). Generate .d files as a byproduct of the build almost
for free.
In subsequent builds, determine if any .d files are out-of-date with
respect to their corresponding source files -- or *any* of the included
files as found in the ".d" file; if so, rebuild all those sources, in addition
to anything that depends on them, as required of course. Because the only
way to alter the header include file dependencies is to touch either the
source file or one of the things it includes, you are guaranteed to catch
and update whenever necessary. I am very interested in making jam able to
use this technique.
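A sketch of that staleness test, assuming makefile-style .d files as produced by gcc -MD (the function names are invented):

```python
# Function names are invented; the .d format assumed is the makefile
# fragment emitted by "gcc -MD" ("target: dep dep \" continuations).
import os

def parse_dfile(path):
    """Return the dependency list recorded in a .d file."""
    with open(path) as f:
        text = f.read().replace("\\\n", " ")
    _, _, deps = text.partition(":")
    return deps.split()

def dfile_stale(dfile, source):
    """A .d file must be regenerated (and its source recompiled) if the
    source, or *anything* the .d file lists, is newer than the .d file."""
    if not os.path.exists(dfile):
        return True
    dtime = os.path.getmtime(dfile)
    for dep in [source] + parse_dfile(dfile):
        if not os.path.exists(dep) or os.path.getmtime(dep) > dtime:
            return True
    return False
```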
We independently set compiler options, compile flags, include file search
paths, etc. for each source file. We would need to continue this practice.
Our users typically code even less complexity than that. We limit them to
using about 8 or 10 rules to build router executables (e.g., server
processes, client processes), static libraries, DLLs, and a few other
goodies. They are all simply coded like the above -- in their src/Jamfile
files. Our Jamrules are another thing. Much more complex.
I don't quite get the purpose of this. We use some test shells to unit
test/debug parts of our jam infrastructure, which includes a number of Perl
scripts and C-programs invoked during the build. There is also the "-n"
option to jam of course.
I have hacked something similar from an opposite perspective but for the
same purpose. Given a fully gristed target, it will display the complete
Depends (or optionally both Depends and INCLUDES) graph for that
target. This is very useful for understanding what's going on "inside" jam
when something goes wrong. Each target is only displayed once so the tree
doesn't get too gory (multiple references to directories, for example, were
ugly before I changed that).
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Anyone else using Jam for BIG projects?
Date: Thu, 2 Aug 2001 13:22:33 -0400
I have some elisp which lets me "step" through the -d+5 Jam debugging
output, go to the beginning or end of a rule invocation, etc. I just
redirect Jam's output into a file:
jam -n -d+5 > c.jerr
open the file in emacs, and start walking through the results. If anybody
wants the elisp stuff, let me know and I'll post it.
Date: Thu, 02 Aug 2001 15:29:06 +0200
From: Patrick Frants <patrick@quintiq.com>
Subject: Re: Anyone else using Jam for BIG projects?
The worst problem with all make-like tools is that they forget dependencies
between separate invocations of the tool. At least under Win32 it would not
be too difficult to write a process that maintains a dependency graph and
keeps updating it with the help of the FindFirstChangeNotification call.
I don't know unix too well, but surely there must exist a similar call?
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Anyone else using Jam for BIG projects?
Date: Thu, 2 Aug 2001 21:53:00 -0700
You seem to have a very big tree of code but very
small results, if you can build that much stuff in 45 minutes.
We have links which take more than 30 minutes and produce executables
which are larger than 30 Megs.
You would create a lot of .d files, which is probably OK.
You would end up timestamping a lot of .cpp and .h files
(or .d files), which, over NFS, is relatively expensive.
Of course you would not. A single C++ file can take 10 minutes to
compile on Solaris (if it's 2,000 lines long) and an executable can take
two hours to link (on AIX).
My problem is always -- why did jam build X. If X depends on A, B, and C,
I want to know which caused X to recompile. If X includes, directly and
indirectly, 400 header files, then simply printing the dependency tree
just gives you too much information.
I've always tried for something like:
Building X
because A is newer
because B is newer
includes newer C
includes newer D
because C does not exist
so now I can see that D (at the very least) was touched.
This has helped me find stuff where a node depended on itself.
Also consider this
Depends a b : x ;
Depends x : y ;
TEMPORARY x ;
Depends c : a ;
Depends c : b ;
A bug in the handling of temporaries means that during dependency
processing, only the first dependency analyzed (a,b to x) will
propagate the timestamp of y to (the missing) x; the other target
(a/b) will always be considered out of date.
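For what it's worth, the propagation that bug breaks can be sketched in a few lines (a toy model, not jam's internals): recomputing the missing temporary's stand-in timestamp for every parent, instead of caching it on the first query, keeps both a and b consistent.

```python
# Toy dependency model, not jam's internals. The fix: a missing TEMPORARY
# answers with its dependencies' timestamp for *every* parent that asks,
# rather than only the first dependency analyzed.

class Target:
    def __init__(self, name, mtime=None, temporary=False):
        self.name, self.mtime, self.temporary = name, mtime, temporary
        self.deps = []

def effective_mtime(t):
    """Recompute (rather than cache on first use) the stand-in timestamp
    of a missing temporary, so all parents see the same answer."""
    if t.mtime is None and t.temporary:
        times = [effective_mtime(d) for d in t.deps]
        times = [tm for tm in times if tm is not None]
        return max(times) if times else None
    return t.mtime

# Depends a b : x ;  Depends x : y ;  TEMPORARY x ;
y = Target("y", mtime=100)
x = Target("x", temporary=True); x.deps = [y]
a = Target("a", mtime=150); a.deps = [x]
b = Target("b", mtime=150); b.deps = [x]

# Both a and b see the missing x as "built at 100", so neither
# is spuriously out of date:
assert effective_mtime(x) == 100
```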
Date: Thu, 02 Aug 2001 22:53:04 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: RE: Re: Whitespace As Delimiter -- Yuk!
I think that parsing C++ with lex is easier than parsing jam with
lex. Whitespace is irrelevant to token separation in C++.
Also, consider the following additional example which makes normal scanning
of jam awkward:
rule foo {
...
}
actions foo {
...
}
To a scanner, these character sequences look essentially the same, but they
must be tokenized very differently. The {} after "rule" are just two
tokens delimiting a block. Everything inside the block has jam syntax and
must be scanned/parsed. The {} after "actions" mean something very
different to the scanner. They essentially delimit a character string,
which cannot be parsed by jam (in fact the grammar it uses in there is not
even known at this time, it could be Bourne Shell syntax or C-Shell syntax
or anything). This requires the scanner to be modal -- doable, but
yuk. There is nothing like this nastiness in C++.
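A grossly simplified sketch of what that modality forces on the scanner (Python; assumes whitespace-separated input and no nested braces -- just the shape of the problem, not jam's code):

```python
# A grossly simplified modal scanner (assumes whitespace-separated input
# and no nested braces) -- just the shape of the problem, not jam's code.

def scan(text):
    words = text.split()
    tokens, i = [], 0
    while i < len(words):
        w = words[i]
        tokens.append(w)
        if w == "actions":
            tokens.append(words[i + 1])          # the action name
            j = words.index("{", i) + 1          # body starts after the {
            k = words.index("}", j)              # ...and runs to the }
            tokens.append("{")
            tokens.append(" ".join(words[j:k]))  # one opaque string token
            tokens.append("}")
            i = k                                # resume after the body
        i += 1
    return tokens

# The rule body is tokenized normally; the actions body comes out
# as a single string the parser never looks inside:
print(scan("rule foo { Echo hi ; } actions foo { $(CC) -c $(>) }"))
```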
They are essentially the same issue for us. They cause spurious parameters
to get attached onto the end of the rule whose termination has been
compromised. We manage this kind of stuff by a lot of parameter checking
code. E.g., if $(4) { EXIT "too many parameters to rule foo." ; }
But when the last parameter is a list of arbitrary length as it often is,
this doesn't help.
Yup we do this now.
All of the above would be nice. (b) and (c) are in ftjam already though,
aren't they?
This one is ABSOLUTELY ESSENTIAL!!!! It is so painful not having this.
Excellent. C-like file scope would be sufficient for my purposes.
Okay, would be nice.
I don't understand the above.
I don't think we have encountered this problem. Maybe because our use of
static libraries is minimal.
I have been following the current thread on this, but it is not an issue
for us. We force developers to either:
- spin out a library for the duplicate code, or
- use a CopyFile rule to copy the source, to separate the builds.
We also try to discourage the use of either technique and when this comes
up we look for alternatives.
I don't understand this one. Can't you just use this:
if ! $(x:G) { x = $(x:G=foo) ; }
to set grist if not set.
I guess you could say we use grist "everywhere". That is all targets, and
all headers use grist to allow us to unambiguously match them up when necessary.
Yup. This would be necessary.
I used to really want this. Now I look at this as another "would be nice"
feature. I was thinking of something like a shell or Perl backtick
thing. E.g.,
x = `ls *.c` ;
which essentially gives you your globbing too.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Anyone else using Jam for BIG projects?
Date: Thu, 2 Aug 2001 14:27:44 -0400
voila. Note that the latest release of FTJam has a nice feature that keeps
Jam from "wrapping" its call nesting indicator when the nesting levels get
deep (the wrapping ends up confusing my elisp).
;; Jam
(defun my-jam-debug-nesting ()
(let ((where (point)) (result nil))
(beginning-of-line)
(let ((start (point)))
(search-forward " ")
(setq result (length (buffer-substring start (- (point) 1)))))
(goto-char where)
result))
(defun my-jam-debug-move (nesting line-function)
;; Abusing mark/point here for highlighting purposes. No time to figure out
;; how to do it "right"
(deactivate-mark)
(let ((line-form (list line-function 1)))
(eval line-form)
(while (> (my-jam-debug-nesting) nesting)
(eval line-form)))
(end-of-line)
(set-mark (point))
(beginning-of-line)
(my-activate-mark)
)
(defun my-jam-debug-out (line-function)
(my-jam-debug-move (- (my-jam-debug-nesting) 2) line-function))
(defun my-jam-debug-next ()
"go to next line in evaluation of current rule, or calling rule if no such line exists"
(interactive)
(my-jam-debug-move (my-jam-debug-nesting) 'next-line))
(defun my-jam-debug-prev ()
"go to prev line in evaluation of current rule, or calling rule if no such line exists"
(interactive)
(my-jam-debug-move (my-jam-debug-nesting) 'previous-line))
(defun my-jam-debug-finish ()
"go to next line in caller of current rule, or its caller and so on if no such line exists"
(interactive)
(my-jam-debug-out 'next-line))
(defun my-jam-debug-caller ()
"go to previous line in caller of current rule, or its caller and so on if no such line exists"
(interactive)
(my-jam-debug-out 'previous-line))
(defun my-jam-debug-mode ()
(interactive)
(local-set-key [\C-f10] 'my-jam-debug-prev)
(local-set-key [f10] 'my-jam-debug-next)
(local-set-key [\S-f11] 'my-jam-debug-finish)
(local-set-key [\C-\S-f11] 'my-jam-debug-caller)
(local-set-key [f11] 'next-line))
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Date: Thu, 2 Aug 2001 20:45:59 -0700
Subject: RE: Whitespace As Delimiter -- Yuk!
This is the best thing that ever happened to jam (since it was
originally written). Not to criticize Perforce or those of us
who contribute once in a while, but Jam has really suffered from
having no real "architect" to move it (and the user community)
forward. I have seen MANY great ideas either lost or simply
die because nobody has been taking a long-term view of Jam.
Those who might have had the time to take on this role have
probably shied away somewhat because it's never been clear
whether the original authors wanted to pass the torch or not.
Now it seems clear!
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Re: Whitespace As Delimiter -- Yuk!
Date: Thu, 2 Aug 2001 21:14:51 -0700
I would have to agree with the comments that Jam's lexer
and parser could be better. We had to increase our
Yacc stack to some 50000 nodes to parse our jam rules.
That's because the current grammar is right-recursive
instead of left. Another "project" that I never got to :)
I don't agree that whitespace as a delimiter is a bad
thing. Careful design can build a lexer and parser that
can handle the Jam language and give reasonable feedback
about errors in parsing. Look at Doc++, which basically
parses C++ in lex. If it can do that, we can surely
handle little things like $(whatever) stuff.
Making ";" into a special character, at least in certain
contexts would be good. But missing ";" are more of an
issue than forgetting to place a whitespace before the ";".
To make parsing (or execution) more fool-proof, it would be
nice if rules/actions could specify the number of arguments
they expect (the number of ":").
You might do something yourself with ...
if $(3) { EXIT ; }
as a type of assertion that the rule was called with 1 or 2
arguments.
Things that would have helped me are ...
a) globbing
b) functions (the current [] notation)
c) substitution (like ksh's $(var#) $(var~) or perls =~ )
d) ability to read variables attached to objects
e) hiding rules (so they could not be called outside
of a lexical context)
f) "simple" what's out of date compared to X (make's -d, I believe)
(ie, why am I building X)
g) rule dependencies (like Sun's make, it remembers what
command(s) it ran last time for a target, and if the list
changes, it assumes that target is out of date)
h) knowledge that objects that are in an archive are
in the archive (right now, you can have a different
TARGET on a library and a library member, and if so,
the behaviour is weird)
i) Collapse multiple, identical "actions" on the same target.
(same targets, same sources) (but I can think of several
problems with this, not the least of which: jam might
end up spinning during rule invocation if a lot of rule
writers were not careful)
j) Simple way to grist a value only if it did not already
have grist. (This would allow gristing to be used
everywhere, for example in the C++ rule and the Objects
rule. Whichever rule added gristing first would win
(unless the grist was reset with $(x:G=y))
I would not object to jam saving dependency information
someplace; (g) above would basically force this. But jam
would need to timestamp the dependency information so that
it could invalidate it if a file was modified.
I used to wish for access to the shell during rule invocation,
and that might still be useful. The fact that somebody could
invoke side effects during the rule phase is something that
could be lived with.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Unnecessary recompilation
Date: Thu, 2 Aug 2001 21:31:23 -0700
Let's say that we made this change ... perhaps we
break something ...
So now I write:
x = <funny_grist>X ;
y = <other_grist>X ;
LOCATE on $(x) = $(cwd)/objs ;
LOCATE on $(y) = $(cwd)/objs ;
Build $(x) ;
Build $(y) ;
Build $y ;
Several places in Jam I use the fact that many jam nodes
(targets) can refer to the same file at binding time.
For example
Depends exports : <export>x.h ;
LOCATE on <export>x.h = $(TOP)/include/cmp ;
Depends cmp/x.h : <export>x.h ;
Depends <export>x.h : x.h ;
SEARCH on x.h = $(SUBDIR) ;
Ln <export>x.h : x.h ;
Now, when header file scanning finds #include <cmp/x.h> jam will know
that it's really the same file as <export>x.h.
also
Depends x.o : <grist>x.o ;
(now jam x.o builds ALL x.o's, but rarely is there more than
one x in the build tree).
Date: Thu, 02 Aug 2001 23:35:01 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: RE: Anyone else using Jam for BIG projects?
I guess we have some fast build servers. :-) But we think that it's still
pretty darn slow. Jam still spends a lot of time goofing around before it
gets down to business and fires off the first shell commands. Also we
don't have the kind of linking pains that many similar sized projects have.
Our code is designed for modular delivery and linking is primarily dynamic
in our environment. We package large multimegabyte entities (though I do
not think any have hit 30M yet, they are growing in that direction rapidly)
but they are not all cross-linked so they build very fast. That is, our
packages bundle up DLLs and small executables into complete images or
smaller modules destined for Cisco routers.
We normally build locally on our build servers (we have build servers
locally wherever we have engineers). Tools are mounted over nfs though (on
a small handful of sites), but there are only small time penalties for
grabbing the tools once each.
We haven't noticed that kind of thing (in compilation of our C++ code), and
as noted above I think our linking requirements are simpler.
We usually resort to the -d+3 (for non-header dependencies) or -d+6 (for
headers) output for this, or we use our locally hacked jam (which I alluded
to above in the previous e-mail) to dump the dependency tree and pore over it.
Our hacked jam shows the dependency tree just like the output above, but
currently it does not annotate stat info. It could be extended to do that
easily I think. Currently when we need that info we use the -d+6 output
(i.e., make, time, make* newer, made+ old, made+ update, give that info).
I didn't know about this. Thanks.
Date: Fri, 3 Aug 2001 09:57:48 +0100
From: Paul Haffenden <pjh@unisoft.com>
Subject: Re: Re: Whitespace As Delimiter -- Yuk!
Someone posted a fix for this a long time ago, to make it left recursive,
and it seems to work for jam2.3.
I have done this using a :E modifier in the variable handling,
and the code was sent to Perforce. I've since changed the
syntax, anyone is welcome to the code (expand.c).
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Re: Whitespace As Delimiter -- Yuk!
Date: Fri, 3 Aug 2001 09:22:32 -0400
$(x:G?=grist)
There is quite a lively discussion going on on the scons development list at
sourceforge about this very issue. Cons (and scons) uses an innovative
scheme for cacheing dependency information that I find quite compelling. I
am beginning to think we should just build this mechanism into the
underlying Jam engine. I need to look at it a bit more carefully, but it
seems like it could solve lots of problems.
Date: Fri, 3 Aug 2001 15:50:06 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Re: Whitespace As Delimiter -- Yuk!
While reading mail makes for a nice change from forcing a huge chunk of
broken source code to compile on a similarly broken beta-quality and
overhyped CPU (<- letting off steam), I don't particularly want to
subscribe to another mailing list.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Re: Whitespace As Delimiter -- Yuk!
Date: Fri, 3 Aug 2001 10:04:03 -0400
Here is some information.
http://www.dsmit.com/cons/dev/cons.html#signatures
Yes, I had read the section, but it wasn't enough to bring
understanding. The crucial piece of information I was missing was how Cons
computes signatures for non-derived files. Let me just restate the
algorithm to make sure I got it:
1) derived file - its signature is computed from all the signatures of
its source files and the command line used to build it. This signature
is stored in a .consign file next to the derived file.
2) non-derived file in the source hierarchy - its signature is computed
from the contents of the file and nothing else. This signature is stored
in a .consign file next to the derived file.
3) non-derived file not in the source hierarchy - its signature is
computed from the file's name (the absolute path right?) and
timestamp. This signature is not stored explicitly, but is stored
implicitly in the signatures of derived files that depend on it.
I noticed that the timestamp is also stored in the .consign file. I assume
this is stored so that the non-derived file's signature will not be
recalculated unless its timestamp differs. This is only an optimization
right? Is the timestamp for derived files in the .consign file used for anything?
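Restated as code, the three cases might look like this (a hedged sketch using MD5; Cons's actual hashing and .consign storage details may differ):

```python
# A hedged sketch using MD5; Cons's actual hashing and .consign storage
# details may differ.
import hashlib
import os

def sig_source(path):
    """Case 2: non-derived file in the source hierarchy --
    signature of the contents and nothing else."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def sig_external(path):
    """Case 3: non-derived file outside the hierarchy --
    signature of the (absolute) name and timestamp."""
    data = "%s %s" % (os.path.abspath(path), os.path.getmtime(path))
    return hashlib.md5(data.encode()).hexdigest()

def sig_derived(dep_sigs, command_line):
    """Case 1: derived file -- signatures of all its sources
    plus the command line used to build it."""
    data = " ".join(dep_sigs) + " " + command_line
    return hashlib.md5(data.encode()).hexdigest()
```

Changing either a source file's contents or the build command line changes the derived signature, which is exactly what triggers a rebuild.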
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Re: Whitespace As Delimiter -- Yuk!
Date: Fri, 3 Aug 2001 11:35:10 -0700
You should try to read the Doc++ lex grammar ... it's a full
state machine in itself. It does not always ignore white space.
It counts quotes, "{", "(", and switches state as it steps into
and out of structs, classes, functions, etc.
Sorry ... A Rule a b c : d e f : g h i ;
Has 3 parameters ... any parameter can have any number of elements.
Do you have rules like
rule X {
local i ; {
if $($(i)) { }
}
}
Sun's make did two things I really like.
1) they are "integrated" with the compilers using a normally
undocumented (or it used to be undocumented) command line argument.
Each time a file was compiled, a file, .make.dependencies I believe,
is updated by the compiler (or by make looking at special compiler output).
The file lists, for each target (.o file), which files were read during
the compilation.
2) make had a file called .make.state (I believe) which was also maintained.
Each time a target was built, the make state would be updated. The next
time you called make, make compared the exact command lines with the
previous run, and if they differed, would assume the target needs to be
rebuilt. Something like the following would be stored in the make state:
a.o: CPP -o a.o a.cpp
Now you want to turn debugging on: export CXXFLAGS=-g; make a.o
Make says "last time I did "CPP -o a.o a.cpp", but this time the macro
expands to "CPP -g -o a.o a.cpp"; that's different, therefore a.o needs
to be rebuilt."
Notice, the user does not need to touch or remove anything to get this to happen.
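The .make.state check reduces to something like this (the state format and function name are invented for illustration; this is only the comparison step, not a whole make):

```python
# State file format and function names are invented; this is only the
# comparison step, not a whole make.

def needs_rebuild(target, command, state):
    """True if the target was never built, or was last built with a
    different (fully expanded) command line."""
    return state.get(target) != command

# What .make.state remembered from the last run:
state = {"a.o": "CPP -o a.o a.cpp"}

# Same expansion as last time: up to date.
assert not needs_rebuild("a.o", "CPP -o a.o a.cpp", state)

# export CXXFLAGS=-g changes the expansion, so a.o must be rebuilt --
# without the user touching or removing anything:
assert needs_rebuild("a.o", "CPP -g -o a.o a.cpp", state)
```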
This is also not an issue for us. But you must admit that something like
Main A : a.cpp c.cpp ;
Main B : b.cpp c.cpp ;
would be nice if c.cpp was compiled to c.o only once. The above style
is more deductive than forcing the user to build a library simply to
avoid a second compile. In this case, .o files would work just as well.
In fact, there are cases when you need to force an object file to be linked
into an application. Now I have to do things like ..
Main A : a.cpp ;
Object c.cpp ;
ExtraObjects A : c.o ;
but with gristing, it's a bit more tricky.
You're correct, a rule could be written to grist stuff that's not already
gristed. But usage of the $(X:G=g) notation is so nice and easy that
you often don't build a temporary variable to get gristing sorted out.
A form like $(X:G?=g) (where :G?= adds grist only if no grist is present)
would be convenient.
I'm not sure if everything should be/needs to be gristed. But gristing
should be consistent.
For example, in the rules distributed with Jam, the C++ rule does not
grist its inputs. The Object rule does not grist its inputs; the Objects rule does.
So, mixing C++ rules and Objects rules in the same Jamfile is risky.
C++ a.o : a.cpp ;
MainFromObjects a : a.o ;
may (very possibly) refer to different a.o files.
From: "Jerry Nettleton" <nett@mail.com>
Date: Fri, 03 Aug 2001 12:47:31 -0600
Subject: Re: Anyone else using Jam for BIG projects?
Since I'm new to Jam, I can't really offer much but you raise some interesting
Jam issues and limitations for large projects (which leads to even more questions).
I recently started looking for a better solution to improve our build process.
We have around 30,000 C/Java source files and 33,000 generated files with
about 15 million lines of code on 6 platforms (UNIX and Windows NT/2000).
I thought Jam could help manage all of the dependencies but since it doesn't
support Java I'll probably have to improvise a little.
I like what Randy's message described with lots of Jamfiles so that individual
directories can be compiled separately. You might be able to improve build time
but it would probably increase developer compile times.
I would be interested in looking at other possibilities, especially if they
take Java component dependencies into consideration.
Do you have some good advice for rookie jammers with large projects?
What are the most important things to learn first to fully understand Jamfiles?
Do you have any good examples or tutorials that you could share?
How should I organize Jamfiles to manage the build process (one or many Jamfiles)?
What kind of issues do I need to consider to support multiple platforms?
Do you think enhancing Jam to parse Java dependencies is feasible?
Date: Fri, 3 Aug 2001 14:42:28 -0700 (PDT)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: The Jam Annoyance FAQ
1. Why is white space the only delimiter in jam?
It was a _mistake_. At the time ('93) whitespace was the only
character you could count on not appearing in file names.
You know, VMS: $sys$device:[user.jam]parse.c;3.
I still think sparse quoting makes Jamfiles easy to read, but
requiring whitespace around the ; is very error prone.
If there is a compatible way of changing this without a "mode
bit," I'm all ears.
2. Why not cache header file scans to improve speed?
a. Because it is not that much speed (YMMV).
b. Because caching is more complex and less reliable.
c. Because I hated build tool turds.
Those were the reasons when Jam was written, and they still
stand fairly well now. Caching is more important for Make,
which must re-evaluate header dependencies for every subdirectory,
rather than just once for the whole tree with Jam.
3. Why no shell invocations during parsing? I want "x = `ls` ; "
Jam was born in a dirty environment, where "mkmf" scripts
scrounged up all sorts of garbage and built, well, whatever came
out. In reaction to this squalor, Jam insisted that how to
build the source is part of the source and not part of the build.
The goal was to ensure that a second "jam" invocation would do
a deterministic _nothing_.
This asceticism could probably be relaxed now, but I doubt
portability madmen like us would use it: native OS tools vary
widely ("think different"), and if you have to build a tool to
drive Jam you've lost a little sanity. What builds the build tools?
mailing list?
We are trying. Jam has been a bit of a stepchild here at
Perforce, but (as has been mentioned) we have hired an open
source engineer to act as Jam curator.
That doesn't mean it will take on all changes. One of Jam's
virtues is leanness, both in code and in concepts, and we will
continue to guard that. There are, however, egregious gaps in
functionality (like regexp handling and access to target-specific
variables) and porting coverage that are certain to be filled.
From: "Christopher Seiwald" <seiwald@perforce.com>
Sent: Friday, August 03, 2001 5:42 PM
Subject: The Jam Annoyance FAQ
I don't have any suggestions yet, I'm afraid, but I think the problem goes
beyond the whitespace requirement. It is also very easy to leave out a
semicolon and get silent acceptance of an unintended Jam program. I like the
suggestion to warn when a list of tokens is split across lines and lines >=
2 start with a rule name.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Re: Whitespace As Delimiter -- Yuk!
Date: Fri, 3 Aug 2001 19:24:42 -0400
True, but it's a special case. It is so common in my experience that A and B
actually have different requirements (e.g. one is actually a DLL and so
uses -DBUILDING_B_DLL which changes the actual content of the corresponding
.o files), that I chose in Boost.Build to have the user explicitly make a
library to get the commonality you desire.
Date: Mon, 6 Aug 2001 17:15:37 -0700 (PDT)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Anyone else using Jam for BIG projects?
We've been using a minor variant of jam 2.2 on a project roughly this large,
perhaps a bit larger, depending on which variant you build. Millions of
lines of code, >20,000 source files, >4 different platforms, many variants
of our builds, etc. Build times vary by platform; the newest hardware
can build it in about 3 hours using 2 jobs. Products of the build are all
placed into a build tree separate from the src tree; it's about 1.5 GB
after a full build.
Our Jambase is about 5,000 lines, our Jamrules file is about the same.
Almost all of our platform differences are expressed in the Jamrules file.
The Jambase creates rules which allow us to express the build in terms of
applications, shared objects, libraries, compilation units. Dependencies
between compilation units determine include rules. Depending on platform,
linking a library to a shared object may generate an intermediate archive
or link with the .o files directly, automatically. Jam's ability to let us
create these abstractions has really saved us over the years, although
there was an initial cost in creating the jambase/jamrules files.
Jam has really worked out well for us. You mention you use ClearCase. We
had that once - our experience moving to Perforce has been extremely
positive...but that's a different discussion.
Yes. One of the mods to Jam I've done is to implement a header cache.
I've just recently received permission to return our changes back to the
public domain - I'll need to wait for a rainy weekend now to do it, as I
need to work outside of our corporate firewall...
One good test of a build system is the length of time it takes to do a
nothing-to-do build. If we do a full clean build, and all the targets
are built, initially a nothing-to-do build was taking about 3 minutes. This
was considered too long. After implementing the header cache, this time
was reduced to 1 minute. On newer build platforms it is less than 20 seconds.
As a side note, our experience with Make based systems compares very poorly
on this metric. Do-nothing builds often end up doing quite a lot when you use make.
The approach in the header cache is to create a file which records the
results of the previous jam header scan. For every file jam scans for headers,
the file contains:
@filename@ timestamp @header1@ @header2@ @header3@ ...
the filename is the name jam opened the file with - either absolute or
relative depending on how you use Jam. Since the cache is written into the
same directory Jam is invoked in, the filenames remain consistent between
invocations. The timestamp is the last modified time of the file when it
was last scanned. The list of headers is the list returned by your regex
for finding headers.
When you invoke Jam the next time, this cache is read in, put into a hash
table (which Jam has good internal support for) and whenever the routine
is called to scan a file for headers, the filename is looked up in the cache
and if the timestamps agree the cache results are returned. Whenever jam
exits, the cache is written.
If you alter the header regex you need to delete the cache file, as the
pattern isn't recorded in it anywhere and Jam will use the old results
which would be incorrect. Oh well. Other than that, the cache is
always valid - when a file is modified the cache line for it becomes invalid
and a new one is generated.
The cache always grows - if you delete files, they remain in the cache. Oh well.
The modification ended up being quite nice I think - although it does violate
one of Christopher's goals for Jam, which is to not have Jam generate turds.
For large builds, I think the modification is worth it. YMMV.
I'll try to get the mods back into my branch in the public depot. As
we have a number of mods, I'm not sure how to return them all - but it
will take some time, sorry.
I don't see any design weaknesses of Jam itself, there are some stylistic
things you may like or dislike (whitespace, etc.) I like it the way it is.
We've modified it to handle things like serialized output from multiple
jobs, adding a '@' token to the start of each action line output. These
two combine to allow us to implement an automated test facility, so each of
our returns is test compiled/linked on all our platforms automatically. If
any errors are found, the logfile can be reliably parsed to generate
meaningful mail for the developers. Without the speed of Jam/Perforce,
this wouldn't have worked... There are some other minor mods we've made,
nothing you would call fixing a design flaw though.
The default Jambase is a good example, but for large projects you'll end up
customizing it heavily. It's too bad we can't share our Jambase
customizations more easily, but that's life.
We looked around. Jam seemed the most promising, and with customization
that has proven to be true I think. I don't see anything else that would
be able to replace what we have. We have developers who can build tiny
portions of the product with <1 second startup time. Change what you
want built slightly and your startup time can go up to 1 minute; at that
point the entire product suite is being dependency checked. It's really
remarkable, I think.
Date: Mon, 06 Aug 2001 19:01:47 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: RE: Re: Whitespace As Delimiter -- Yuk!
I'm not knowledgeable about Doc++, but C++ does not require whitespace to
separate tokens. I believe all lex scanners are state machines though.
Also it is not all that difficult to make a scanner handle jam's
weirdness. It just requires more trickiness than lex was designed to
handle. The relevant lex code can be written like this:
\{ {
    if ( gScanMode == kScanModeActions ) {
        yylval = CollectStringUpTo( "}" );
        gScanMode = kScanModeNormal;
        return( T_Actions ); /* an "actions" token, a character string */
    } else {
        return( T_OpenC ); /* a simple { token */
    }
}
I.e., if you are scanning normally and see a { just return that
token. Otherwise, if you are in the kScanModeActions scanning mode, then
gobble up the entire character string delimited by the next close curly
bracket (export that complexity to the routine called here). And later in
the lex code when it detects the keyword "actions" token, toggle the
gScanMode switch to kScanModeActions.
Yeah, but we enforce limits on some parameters. For example, some rules
expect a single element in the nth parameter, so we check that the second
element in that parameter is empty (a weak check, but better than none at
all). An example of this is below (look for where it checks $(_includer[2])).
Kind of. To be really specific, we code all of our rules as follows:
# standard documentation header goes here
rule foo {
    # first we set this variable; used in the Error routine and
    # elsewhere to identify which rule is generating the error
    local _rulename = foo ;

    # then we gather parameters... define local vars to give names
    # to all parameters, e.g.:
    local _includer = $(1) ;
    local _inclusions = $(2) ;
    # ...

    # then we do parameter verification... check numbers and types
    # of parameters, e.g.:

    # Make sure that scanned file is indeed a Bag generated file.
    switch $(_includer) {
    case *.[ch] :
        # Expected file pattern -- nothing more to do.
    case * :
        # Unexpected file pattern.
        Error "Scanned file '" $(_includer) "' is not a .c or .h generated file." ;
    } # end switch $(_includer)

    if $(_includer[2]) {
        Error "More than one includer '" $(_includer) "'." ;
    } # end if $(_includer[2])

    # Make sure that there is at least one inclusion
    if $(_inclusions) = "" {
        Error "No included files for" $(_includer) ;
    } # end if $(_inclusions) = ""

    # Make sure that there are no extra parameters
    if $(3) {
        Error "Extra parameter '" $(3) "' for scanned file '" $(_includer) "'." ;
    } # end if $(3)
    # ...

    # then we process the rule (do whatever the rule is supposed to do)
    # ...
}
We use gcc and it can create ".d" files containing this info when passed
"-MD" or "-MMD". I am hoping to restructure things to use this info in jam.
Interesting. We don't track any kind of dependency for this now. We have
considered making all targets being built from a user Jamfile (i.e.,
component owner Jamfile, which can contain SubDirCcFlags, and other
build-altering settings) depend upon that Jamfile to partially handle
this. We wouldn't catch environment variable changes or command line "-s"
parameters to jam this way though.
We don't permit the above. Instead we give them two options (CopyFile or
make a library). But we push the library option (more precisely, we
usually push the DLL option).
One syntax change I would like to suggest for jam's future is something to
alter the way the ":" operator behaves in these cases. Targets can have
the following attributes using this mechanism: B,S,M,D,P,G, and various
other things like U,L,R can be applied to provide conversions, and you can
also do the X=x stuff too. I would like to expand this mechanism to
support arbitrary attributes, and arbitrary attribute name lengths (not
just single characters). I am thinking something like this might work:
$(foo:Suffix)
$(foo:Directory,Base,Suffix,Uppercase)
$(foo:Grist=$(bar))
$(foo:MyAttribute)
$(foo:MyAttribute="whatever")
And I suppose for "almost" backward compatibility we could have "B" being a
synonym for "Base" and so on. That is, this would be backwardly compatible
with single character items but things like ":BS" would have to be re-coded
as ":B,S", or better yet as ":Base,Suffix"). Of course a translator could
be provided to parse existing files and rewrite them to help people convert
over legacy code.
Date: Mon, 06 Aug 2001 17:15:28 -0700
From: Glen Darling <gdarling@cisco.com>
Subject: Re: Anyone else using Jam for BIG projects?
I am a bad one to answer your "Java in Jam" questions because I have never
tried that, but I have some answers to some of your issues below since
nobody seems to be chiming in on this.
Have you looked at Ant (part of the Apache Jakarta project)?
http://jakarta.apache.org/
I saw it mentioned a few times in the archived discussions from
I don't follow you there. Build time versus compile time? Using local
well and gives fast local builds at virtually no additional cost to the
global builds (unlike recursive make where make is re-invoked in each
subdirectory, one jam is invoked from the root only, and it reads/executes
the subdirectory Jamfiles).
We have an elaborate dependency structure with explicitly exported public
APIs and API version number tracking (API versions used are compiled into
all objects so they can later be checked at load time for compatibility
with API versions being exported on a running box). By policy Java is
explicitly prohibited in our code base so we haven't looked at that at
all. Handling Java has been discussed on this list a few times and some
code examples are in there so you could pull the archive and search for
Java in the text.
Play with small Jamfiles first. Understand the basic tooling before
getting started on anything big. Make predictions about how to do
something, and test the predictions with small code fragments in a little
Jamfile. Code with lots of ECHOs so you can see the flow. When your
predictions all start being correct then you know you understand. :-)
I suggest this: understand the phases (parsing, binding, update) -- i.e.,
what happens in each phase, and how the Depends and INCLUDES built-in rules
work, understand the debugging tools (especially "-d+3", and what every
possible output line in that output means).
Unfortunately no. We produced about 8 hours of Video on Demand tutorials
and lots of different slide sets, and internal docs for our own use, but
they contain proprietary stuff about our code base, and mostly they tell
people how to interact with our locally written jam rules. So
unfortunately it wouldn't be much help even if I could share it.
I suggest one at the top that reads one from each subdirectory, and a
Jamrules file at the top containing any customized rule and action
definitions. If you want the build to behave differently when at the top
or in a sub dir, you can test the value of the TOP variable to detect
this. We have a few different Jamfiles at the top, some with component
info, packaging info, and a few in the components to handle API exports,
shared build contexts, and a normal Jamfile (with code to build
executables, DLLs, static libraries for that component).
Different compilers, linkers, etc, and/or different command line parameters
to them. We have a file per target host that defines variables and the
infrastructure picks up the appropriate files when necessary to cause the
right values to be set for the platform you are building for. In some
cases you may want some component Jamfiles executed more than once in
different contexts (which is easy to do with a loop around the include statement).
I have only read the last few months of this list but it looks like people
are still struggling with some issues on this. I have not seen any
comprehensive posting of how to handle Java in jam.
Date: Tue, 7 Aug 2001 12:13:09 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Re: Whitespace As Delimiter -- Yuk!
Excellent idea. The (few) times I've dealt with that I've always forgotten
what the letters mean.
Your action item for today is to write code that makes jam accept :B,S and
emit a warning (with file name and line number) if it sees :BS.
I don't think anyone's going to write a converter, and a quick hack to
provide "," now and synonyms next year makes sense to me at least.
From: "Ian Mellor" <mellorian@hotmail.com>
Date: Tue, 07 Aug 2001 11:56:02 -0700
Regarding 2.a, one particularly bad situation is when multiple header
files have the same name. For example, MSVC supports precompiled header
files, and common convention is for each directory to contain a
"precomp.h" file, which specifies what will be included in the
precompiled header file. Every .cpp file must then include the precomp.h file.
If HDRGRIST is blank, then the header scan goes quickly, but Jam can't
tell the difference between the various precomp.h files, and is quite
confused during incremental builds. If HDRGRIST is $(SUBDIR) then
header scans are glacial because Jam marks all headers it finds with the
HDRGRIST. Therefore, if there are N precomp.h files, Jam scans all the
base headers N times, and there are N different gristed names for each
of the base headers.
One way to solve this problem without introducing build tool turds (I
hate 'em too), would be to grist header files by the full pathname where
they were found. But this could complicate specifying dependencies for
generated header files.
I wrote a workaround that introduces PRECOMPHDRS and PRECOMPGRIST
(similar to HDRGRIST). PRECOMPHDRS lists which header files should be
gristed with PRECOMPGRIST; all other header files are gristed with
HDRGRIST. This allows the precompiled headers to be gristed, while at
the same time avoiding gristing system headers or common headers. This
brings the header scan back down to a reasonable time, but there are
some holes in this workaround and it's still not as good as it could be.
(These 2 new variables could be more aptly named, since their
application can be more general than precompiled headers).
From: Christopher Seiwald [mailto:seiwald@perforce.com]
Sent: Friday, August 03, 2001 2:42 PM
Subject: The Jam Annoyance FAQ
1. Why is whitespace the only delimiter in jam?
It was a _mistake_. At the time ('93) whitespace in file names
You know, $sys$device:[user.jam]parse.c;3 VMS.
I still think sparse quoting makes Jamfiles easy to read, but
requiring whitespace around the ; is very error prone.
If there is a compatible way of changing this without a "mode
bit," I'm all ears.
2. Why not cache header file scans to improve speed?
a. Because it is not that much speed (YMMV).
b. Because caching is more complex and less reliable.
c. Because I hated build tool turds.
Those were the reasons when Jam was written, and they still
stand fairly well now. Caching is more important for Make,
which must re-evaluate header dependencies for every subdirectory,
rather than just once for the whole tree with Jam.
3. Why no shell invocations during parsing? I want "x = `ls` ; "
Jam was born in a dirty environment, where "mkmf" scripts
scrounged up all sorts of garbage and built, well, whatever came
out. In reaction to this squalor, Jam insisted that how to
build the source is part of the source and not part of the build.
The goal was to ensure that a second "jam" invocation would do
a deterministic _nothing_.
This asceticism could probably be relaxed now, but I doubt
portability madmen like us would use it: native OS tools vary
widely ("think different"), and if you have to build a tool to
drive Jam you've lost a little sanity. What builds the build tools?
mailing list?
We are trying. Jam has been a bit of a stepchild here at
Perforce, but (as has been mentioned) we have hired an open
source engineer to act as Jam curator.
That doesn't mean it will take on all changes. One of Jam's
virtues is leanness, both in code and in concepts, and we will
continue to guard that. There are, however, egregious gaps in
functionality (like regexp handling and access to target-specific
variables) and porting coverage that are certain to be filled.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: The Jam Annoyance FAQ
Date: Fri, 10 Aug 2001 20:03:47 -0400
The problem is that the files that can be #included from a single header
file can depend on lots of things. At the very least, they depend on the
#include path(s) (if you distinguish <> from "" as many compilers do), which
can vary from source file to source file. Then there is the compiler's
search algorithm... for example, MSVC will search the directories of the
chain of files that resulted in including the given header before it moves
on to looking at the #include path. Altogether nightmarish if you want to
get it precisely right.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: The Jam Annoyance FAQ
Date: Sat, 11 Aug 2001 09:44:51 -0400
Are you certain that's the case? If so, a big improvement could be made if
Jam would only scan each unique header file once, but that it would run
HdrRule with the results of the scan on each target. In other words, given:
<foo!bar>x.h the scan proceeds once, producing the list of #included files,
and is cached. When it comes time to scan <baz>x.h which is found to be
bound to the same file as <foo!bar>x.h, the cached scan results are used to
invoke the HdrRule. Of course, this only works if HDRSCAN is set the same on
both targets.
FWIW, Boost.Build is gristing each header with the directory of the
#including file and the entire include path ($(HDRS)) concatenated with '#'
characters -- which still isn't perfect but stands a better chance of being
correct than what you've proposed. It doesn't seem to be causing serious
slowness on the moderate builds I've been doing. It /does/ make the build
system more complicated and hard-to-understand than I'd like.
What's the difference between a build tool turd (e.g. header cache) and an
object file, other than the pejorative name? They serve similar purposes:
they represent an intermediate state of the total build process which is
used to reduce the amount of work required at subsequent build invocations.
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Sat, 11 Aug 2001 15:47:17 -0400
Subject: Minimal Jambase
I started to try to do something like that myself, but it seems there's a
lot I can't live without. For example, the SubDir rule at least needs the
definitions of $(DOTDOT), FSubDir, FDirName, FGrist, and (indirectly)
$(DOT). What's your secret?
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Sat, 11 Aug 2001 16:07:07 -0400
Subject: FSubDir documentation fix
The comments on FSubDir appear to be completely wrong:
# If $(>) is the path to the current directory, compute the
# path (using ../../ etc) back to that root directory.
# Sets result in $(<)
I suggest the following replacement:
# Given $(<), the tokens comprising a relative path from D1 to
# a subdirectory D2, return the relative path from D2 to D1,
# using ../../ etc.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Date: Mon, 13 Aug 2001 10:29:15 -0700
Subject: RE: Minimal Jambase
# Bare Minimum Jambase for Advantax R7 Development
if $(UNIX) {
    DOT default = . ;
    DOTDOT default = .. ;
    SLASH default = / ;
} else if $(NT) {
    DOT default = . ;
    DOTDOT default = .. ;
    SLASH default = \\ ;
}
JAMFILE default = Jamfile ;
JAMRULES default = Jamrules ;
# Include TOP/Jamrules.
include $(TOP)$(SLASH)$(JAMRULES) ;
# Include Local Jamfile
ruleUp dummy ;
include $(JAMFILE) ;
dummyUp dummy ;
# Include top level Jamfile
# (and all sub-Jamfile)
SubInclude TOP ;
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Mon, 13 Aug 2001 14:04:16 -0400
Subject: Re: recursive "" includes not tracked?
Good question! I don't see any, either. That's a surprising oversight
AFAICT.
It looks like a simple Jam modification to set the bound path of each
scanned header into a target-specific variable (e.g. BINDING) might do the
trick.
Sounds like you don't have FTJam. SUBST is a built-in rule in FTJam.
http://freetype.sourceforge.net/jam/index.html#where
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Mon, 13 Aug 2001 15:38:24 -0400
Subject: Re: Minimal Jambase
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Date: Mon, 13 Aug 2001 14:21:48 -0700
Subject: RE: Minimal Jambase
The developer has to set TOP. This was a requirement
of the Jambase that I started with (for multiple directory
projects).
Subject: RE: The Jam Annoyance FAQ
Date: Mon, 13 Aug 2001 15:49:00 -0700
From: "Chris Antos" <chrisant@windows.microsoft.com>
I spent hours tracking down the problem, so unless you've done likewise
and reached a different conclusion, then yes, I'm quite certain. :-)
That's another way of phrasing the solution I proposed. The part you've
quoted above is merely my temporary hack around the problem, without
changing any Jam code.
That's "more correct" yes, but it causes exactly the problem I described
as needing to be solved (but it's even more aggressive about uniquely
gristing headers, so it may exacerbate the problem). If you're happy
with the speed, more power to you. I'm not happy with the speed, and
saw the header scan time go from <1 second to >10 seconds simply from
gristing all headers. My hack described above brought it back down to
<1 second.
My intuitive definition of "build tool turds" is any file produced by
the build tool, versus by the compiler/linker/etc. These files often go
in special locations, and generally are not removed by the build tool's
"clean" (or equivalent) command.
Date: Wed, 15 Aug 2001 17:07:29 -0400
Subject: calling an executable prior to building the dependencies??
I would need some help to figure out whether this problem can be
solved with Jam.
A tool takes xml documents as input and generates a binary file.
To ensure the compilation process will not generate any error,
I want the target to depend on all the files that are referred to
in the xml document. First, the relevant information can spread
across multiple lines; is the Jam regexp engine capable of matching
across more than one line?
The same tool used to compile the xml document can be invoked to
output, as a text file, a list of dependencies.
Is there any way to make jam call this tool first, retrieve
those dependencies, and later use them with the Depends rule?
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: JAM
Date: Tue, 21 Aug 2001 14:05:47 -0400
Download prebuilt binaries from:
http://prdownloads.sourceforge.net/freetype/ftjam-2.3.5-win32.zip
Sources available, also:
https://sourceforge.net/project/showfiles.php?group_id=3157&release_id=45917
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Sun, 26 Aug 2001 13:09:03 -0400
Subject: Possible bug in execnt.c?
[I admit I'm actually looking at FTJam sources here, but I doubt it has been changed]
In execnt.c I spy the following:
/* Trim leading, ending white space */
while( isspace( *string ) )
    ++string;
p = strchr( string, '\n' );
while( p && isspace( *p ) )
    ++p;
It doesn't look like it's trimming the trailing whitespace as claimed by the comment.
From: Alain KOCELNIAK <alain@corys.fr>
Subject: Debugging level and return status
Date: Thu, 30 Aug 2001 10:32:22 +0200
When I run jam with options -a -d+2 (to force the build and to show action text),
actions are printed and executed,
but jam exits with EXITBAD status even though all the actions succeed.
Without the -a -d+2 options jam exits with EXITOK status.
( -a is used to force the build; the problem comes from -d+2 )
Is this normal behavior ( debugging level => EXITBAD )?
I tried searching the source; for the moment I'm in make1.c :
- This function returns : counts->total != counts->made
- With the -d+2 option : counts->total is always 0 ( counts->made is not zero )
- Without the -d+2 option : counts->total is equal to counts->made
Date: Fri, 7 Sep 2001 21:31:21 -0700 (PDT)
Subject: /usr/bin/ld: cannot find -lqt
I'm getting the subject line error while trying to compile
a program under RedHat7.0. The Jamfile refers to a jamdefs
file which has the following OS switch in it:
case QT :
if $(UNIX) {
SubDirHdrs /usr/local/qt/include/ ;
extras += -lqt ;
...etc
My ld.so.conf file includes the path /usr/local/qt/lib in
it. I'm a novice with Jam so am struggling a bit with
this. What library is the ld looking for here? I'd
appreciate any advice on how to get around this linking
issue, if that's what it is.
Date: Sun, 9 Sep 2001 20:57:10 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: /usr/bin/ld: cannot find -lqt
It's referring to Qt, http://www.trolltech.com/qt/.
May I ask what program that is?
From: "Jeremy Furtek" <jeremyf@believe.com>
Date: Mon, 10 Sep 2001 10:30:12 -0700
Subject: Question on jam dependencies and the Clean rule
The Jambase file defines actions for the Clean rule, yet no procedure. Many
default Jambase rules use the Clean rule:
Clean clean : somefile ;
When invoking jam with the clean target:
jam clean
'somefile' gets removed.
My question is: How does this happen?
My first guess was that jam would have some default interpretation of the
Clean rule to establish a dependency between the clean target and
'somefile'. In other words, all targets before the colon (:) are made to be
dependent on all files after the colon.
Yet, in looking at the default rules in Jambase, there are a number that
have the following statement:
Depends $(<) : $(>) ;
indicating that the dependency must be manually specified. My own (limited)
experience in writing rules seems to confirm that there isn't an automatic
dependency.
So how exactly does "jam clean" work? Any hints or corrections to my
assumptions would be greatly appreciated.
Date: Mon, 10 Sep 2001 11:32:29 -0700 (PDT)
Subject: Re: Question on jam dependencies and the Clean rule
"clean" is specified as a NOTFILE, so its (nonexistent) timestamp is never
checked, so its dependencies are always "newer". So:
Clean clean : somefile ;
says: Do "Clean somefile" if "somefile" is newer than "clean", which it
will always be.
From: "Jeremy Furtek" <jeremyf@believe.com>
Subject: RE: Question on jam dependencies and the Clean rule
Date: Mon, 10 Sep 2001 11:56:34 -0700
I understand the NOTFILE modifier and the basic dependency mechanism (I
think ;-)). My question is:
Why is 'somefile' considered a dependency of 'clean'?
There is no procedure in Jambase for the 'Clean' rule that establishes the
dependency - only an action (at least in Perforce Jam 2.3.1). I can think of
two possible sources for the dependency.
1.) The statement "Clean clean : somefile" forces 'somefile' to be a
dependency of 'clean.' In the more general case, all of $(>) become
dependencies of $(<). If this is true, then why do other procedures in
Jambase explicitly do this:
Depends $(<) : $(>) ;
2.) The lack of a procedure for 'Clean' indicates some other default case
that establishes the dependency.
Since neither one of these is entirely satisfactory, I am hoping that there
is something that I am missing.
Date: Mon, 10 Sep 2001 12:52:09 -0700 (PDT)
Subject: RE: Question on jam dependencies and the Clean rule
I think maybe you're thinking of dependencies a little wrong. As the doc
puts it:
Jamfiles contain rule invocations, which usually look like:
RuleName targets : targets ;
The target(s) to the left of the colon usually indicate what gets
built, and the target(s) to the right of the colon usually indicate
what it is built from.
If there were a rule for Clean that associated a *dependency* between the
target-to-build (ie., the pseudo-target "clean") and the targets to build
it from (ie., the files to be removed), then those files would need to
exist before (pseudo-)target "clean" could get "built", since you would
have told Jam "clean depends on somefile (existing and being up-to-date),
so go check that and, if need be, build it first, then build clean" (which
would be rather pointless, building something just so you could remove it).
From: "Jeremy Furtek" <jeremyf@believe.com>
Subject: RE: Question on jam dependencies and the Clean rule
Date: Mon, 10 Sep 2001 13:29:38 -0700
My thinking was that the dependency of 'clean' on 'somefile' could be one in
which the file would not be created if it did not already exist, and the
update action would be one that removes the file. (In this case I mean
"dependency" in a very general way. I'm not sure if that sort of
"dependency" exists in jam.)
In any case, I was definitely not interpreting things correctly. I'll have
to go back and reread all of my Jamfiles again. Thanks for clearing this up.
From: "Jeremy Furtek" <jeremyf@believe.com>
Date: Wed, 12 Sep 2001 16:14:45 -0700
Subject: Setting a variable "on" a target
I have the following minimal test case:
# Jamfile
LINKFLAGS = /DEBUG ;
# CASE 1
LINKFLAGS on test.exe += /NONSENSE ;
# CASE 2
#LINKFLAGS on test.exe = $(LINKFLAGS) /NONSENSE ;
Main test : main.cpp ;
(I commented out case 2 when running case 1 and vice versa)
The output of Case 1 is:
link /nologo /NONSENSE /out:test.exe main.obj advapi32.lib libc.lib
oldnames.lib kernel32.lib
The output of Case 2 is:
link /nologo /DEBUG /NONSENSE /out:test.exe main.obj advapi32.lib
libc.lib oldnames.lib kernel32.lib
I would expect the two to be identical. The "+=" operator, in conjunction
with the "on" restriction, seems to be overwriting the value.
I am using Jam 2.3.1 on NT, downloaded from the Perforce FTP site. I found a
thread on this in the mailing list archives from October of 1999 that went unanswered.
Is this a bug in Jam or a bug in my thought process? ;-)
From: "Jeremy Furtek" <jeremyf@believe.com>
Sent: Wednesday, September 12, 2001 7:14 PM
Subject: Setting a variable "on" a target
The first line sets the /global/ LINKFLAGS variable.
The second line appends /NONSENSE to LINKFLAGS on test.exe, which is
currently empty.
The third line sets LINKFLAGS on test.exe to the global LINKFLAGS followed by /NONSENSE.
Date: Thu, 13 Sep 2001 12:17:16 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Setting a variable "on" a target
The third line sets LINKFLAGS on test.exe; should _reading_ LINKFLAGS in
that context read the global variable, or the old 'on test.exe' variable?
If the latter, then I guess the old local value is a copy of the global
one, so the example jamfile doesn't illustrate the difference.
Changing case 2 to
# CASE 2
LINKFLAGS on test.exe = /TEST ;
LINKFLAGS on test.exe = $(LINKFLAGS) /NONSENSE ;
makes the jamfile show the difference.
Put another way, is the right-hand side of the expression evaluated in the
context "on test.exe" or in the global context?
It's a tricky question. I don't know what jam does now, and I also don't
know which is the more desirable behaviour. Opinions?
Subject: RE: Setting a variable "on" a target
Date: Thu, 13 Sep 2001 12:40:38 -0700
From: "Chris Antos" <chrisant@windows.microsoft.com>
There's a bug, such that "x on y += z;" does exactly the same thing as
if you used "=" instead of "+=". Same problem with "x on y ?= z;". I
fixed that in my copy of Jam, but can't find anything in my Sent Items
suggesting that I ever posted the fix. The fix requires code changes to
Jam, and also changes to Jambase, and likely any Jamrules or Jamfile
that uses the "on .. +=" or "on .. ?=" syntax.
The code changes need to happen in rules.c, addsettings(). Depending on
how you fix it, you'll need to update the callers. For example, I also
updated the debug output code in compile.c, compile_settings().
The main thing I did was just change the "int append" parameter to "int
flag" to use the VAR_SET/VAR_DEFAULT/VAR_APPEND flags, rather than being a boolean.
Sorry I don't have time to package up a nice set of diffs for you to
apply. The changes aren't hard, though.
From: David Abrahams [mailto:david.abrahams@rcn.com]
Sent: Wednesday, September 12, 2001 6:10 PM
Subject: Re: Setting a variable "on" a target
The first line sets the /global/ LINKFLAGS variable.
The second line appends /NONSENSE to LINKFLAGS on test.exe, which is
currently empty.
The third line sets LINKFLAGS on test.exe to the global LINKFLAGS
followed by /NONSENSE.
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Mon, 17 Sep 2001 22:52:04 -0400
Subject: Useful (?) Jam modifications
I have recently completed a series of modifications to the (already
modified) FTJam source code. Please let me know if they are of any use to
you and I will post them.
Modifications:
1. A fix for the Windows NT line-length limitation. This fix works under the
following limited circumstances:
a. The build action is a single line.
b. JAMSHELL on the target is set to "%"
Though not completely general, it should be enough to handle link commands
for those compilers which can't be made to use command files (e.g. GCC).
2. A hook which allows you to find out what path a target is bound to. If
you set BINDRULE on the target, the rule named by $(BINDRULE) will be called
with the target name and the path to which it was bound. This is needed for
accurate binding of header files, since the header search algorithm for many
compilers depends on the directory of the #including file (and sometimes the
file which #included that, etc.)
3. Argument list support. I find that Jam is simply too error-prone for
building systems of substantial size without it. You can now write:
rule foo ( x y : z * ) { }
This will check that foo is invoked with 1-2 arguments, the first of which
has 2 elements. The "*" is a modifier which indicates that the second
argument, if present, can be any length. Within the body of foo, x, y, and z
will be bound as local variables to $(1[1]) $(1[2]) and $(2), respectively.
Other allowed modifiers:
"+", which is like "*" except that at least one element is required.
"?", which indicates that the argument is optional.
If the arguments don't match the argument list, Jam will exit with an
appropriate error. You can still leave out the argument list, in which case
Jam operates in the usual permissive way.
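For illustration, a sketch of how a rule declared with such an argument list might behave (the rule foo and its invocations are hypothetical):

```jam
# x and y are the two elements of the first argument; z is the
# second argument and, thanks to "*", may be any length.
rule foo ( x y : z * )
{
    ECHO x= $(x) y= $(y) z= $(z) ;
}

foo a b : c d e ;   # ok: x=a, y=b, z=c d e
foo a b ;           # ok: the second argument is omitted
foo a ;             # error: the first argument needs 2 elements
```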
Date: Tue, 18 Sep 2001 08:07:41 -0700 (PDT)
Subject: Re: Useful (?) Jam modifications
Is this like the change I offered at:
If it is, did you do it in some other way? (I'd be interested in seeing
how you did it.)
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Useful (?) Jam modifications
Date: Tue, 18 Sep 2001 14:28:29 -0400
What I did is a lot like that, but my approach is a bit more flexible
because it allows a rule to be invoked, which can do anything you want
(including setting that variable, or any other variable you choose).
Since the patch is almost as big as search.c, which is small, I'll just give
you my search.c:
/*
* Copyright 2001 David Abrahams
*
* This file is part of Jam - see jam.c for Copyright information.
*/
# include "jam.h"
# include "lists.h"
# include "search.h"
# include "timestamp.h"
# include "filesys.h"
# include "variable.h"
# include "newstr.h"
static void call_bind_rule(
char* target_,
char* boundname_ ) {
LIST* bindrule = var_get( "BINDRULE" );
if( bindrule ) {
/* No guarantee that target is an allocated string, so be on the
 * safe side */
char* target = copystr( target_ );
/* Likewise, don't rely on implementation details of newstr.c: allocate
* a copy of boundname */
char* boundname = copystr( boundname_ );
if( boundname && target ) {
/* Prepare the argument list */
LOL args;
lol_init( &args );
/* First argument is the target name */
lol_add( &args, list_new( L0, target ) );
lol_add( &args, list_new( L0, boundname ) );
if( lol_get( &args, 1 ) )
evaluate_rule( bindrule->string, &args );
/* Clean up */
lol_free( &args );
} else {
if( boundname ) freestr( boundname );
if( target ) freestr( target );
}
}
}
/*
* search.c - find a target along $(SEARCH) or $(LOCATE)
*/
char *
search(
char *target,
time_t *time ) {
FILENAME f[1];
LIST *varlist;
char buf[ MAXJPATH ];
int found = 0;
char *boundname = 0;
/* Parse the filename */
file_parse( target, f );
f->f_grist.ptr = 0;
f->f_grist.len = 0;
if( varlist = var_get( "LOCATE" ) ) {
f->f_root.ptr = varlist->string;
f->f_root.len = strlen( varlist->string );
file_build( f, buf, 1 );
if( DEBUG_SEARCH ) printf( "locate %s: %s\n", target, buf );
timestamp( buf, time );
} else if( varlist = var_get( "SEARCH" ) ) {
while( varlist ) {
f->f_root.ptr = varlist->string;
f->f_root.len = strlen( varlist->string );
file_build( f, buf, 1 );
if( DEBUG_SEARCH ) printf( "search %s: %s\n", target, buf );
timestamp( buf, time );
if( *time ) {
found = 1;
break;
}
varlist = list_next( varlist );
}
}
if (!found) {
/* Look for the obvious */
/* This is a questionable move. Should we look in the */
/* obvious place if SEARCH is set? */
f->f_root.ptr = 0;
f->f_root.len = 0;
file_build( f, buf, 1 );
if( DEBUG_SEARCH ) printf( "search %s: %s\n", target, buf );
timestamp( buf, time );
}
boundname = newstr( buf );
/* prepare a call to BINDRULE if the variable is set */
call_bind_rule( target, boundname );
return boundname;
}
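For context, a sketch of how this hook might be invoked from a Jamfile (the RecordBinding rule and LOCATION variable are illustrative, not part of Jam; only BINDRULE comes from the patch above):

```jam
# Called by jam when the target is bound: $(1) is the target name,
# $(2) is the filesystem path it was bound to.
rule RecordBinding
{
    LOCATION on $(1) = $(2) ;
}

# Ask for the callback on a particular header:
BINDRULE on foo.h = RecordBinding ;
```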
From: Michael Linehan <mlinehan@baltimore.com>
Date: Fri, 5 Oct 2001 13:46:20 +0100
Subject: Can I specify a full library name in a Jamfile
I am trying to build an executable which links a library libXXX.so.1.2.3.4.
However, when I put this in the external libraries section of the jamfile, I
get the error "Unable to find library libXXX.so.1.2.3".
The trailing .4 has been removed. The relevant section in my Jamfile looks like:
ExecutableLinksWithExternal MyTest :
libXXX.so.1.2.3.4
;
From: "Jeremy Furtek" <jeremyf@believe.com>
Date: Mon, 8 Oct 2001 17:16:45 -0700
Subject: Header file peculiarity in default rules?
Upon testing my Jam build system on multiple platforms, I came across the
following difference in Jambase default rules:
The Object rule sets the HDRS variable on a target object file as follows:
HDRS on $(<) = $(SEARCH_SOURCE) $(HDRS) $(SUBDIRHDRS) ;
The HDRSEARCH variable, set a few lines below the above statement, adds the
$(STDHDRS) value to the list that is searched for header file dependencies:
HDRSEARCH on $(>) = $(HDRS) $(SUBDIRHDRS) $(h) $(STDHDRS) ;
The default actions for Cc/C++ look like this:
$(C++) -c $(C++FLAGS) $(OPTIM) -I$(HDRS) -o $(<) $(>)
For Windows NT platforms, however, there is an override that adds the
STDHDRS path to the include path list:
$(C++) /c $(C++FLAGS) $(OPTIM) /Fo$(<) /I$(HDRS) /I$(STDHDRS) /Tp$(>)
The net result of this is that the STDHDRS variable is included on the
command line for NT, yet not on other platforms.
Is there a reason for this? I realize this is nitpicking, and I
can certainly come up with a workaround. If it is intentional, though,
knowing the reason might save someone else the time that I spent tracking it down.
(using Perforce Jam 2.3)
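If consistent behavior across platforms is wanted, one option (a sketch against the stock Jambase, not an official fix) is to override the Unix-style action in Jamrules so it also passes STDHDRS, mirroring the NT variant:

```jam
# Override: add -I$(STDHDRS) so the standard header path is
# searched on all platforms, as it already is on NT.
actions C++
{
    $(C++) -c $(C++FLAGS) $(OPTIM) -I$(HDRS) -I$(STDHDRS) -o $(<) $(>)
}
```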
From: Vladimir Prus <ghost@cs.msu.su>
Date: Tue, 9 Oct 2001 11:57:51 +0400
Subject: RmTemps problem with independent targets
Suppose I want Jam to convert a tex file to pdf using pdflatex. pdflatex
creates a number of auxiliary files that I want to clean up. Here is what I do:
rule pdflatex {
local file = $(1) ;
Depends all : $(file).pdf ;
Depends $(file).pdf : $(file).tex ;
RmTemps $(file).pdf : $(file).log $(file).aux ;
pdflatex-actions $(file).pdf : $(file).tex ;
}
actions pdflatex-actions { pdflatex $(>) }
pdflatex file ;
Jam does not clean the *.aux and *.log files. The problem is that $(file).pdf has
two actions associated with it: the first creates the *.aux and *.log files, the
second removes them. But target binding for both actions occurs at the same time,
before any actions are executed. So when binding targets for the RmTemps action,
which is defined as:
actions ..... existing RmTemps { $(RM) $(>) }
the aux and log files are not found, and due to the "existing" modifier they
are not passed to the rm command.
Workarounds exist (e.g. Depends $(file).pdf : $(file).aux $(file).log ;) but
are very non-intuitive. It took me considerable time, and a look through the
code, to understand what's going on. I would be very interested in suggestions
for a real fix.
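For reference, the Depends workaround spelled out as a sketch (the NOCARE line is my assumption, added so jam does not complain when the auxiliary files are missing on a clean build):

```jam
rule pdflatex {
    local file = $(1) ;
    Depends all : $(file).pdf ;
    Depends $(file).pdf : $(file).tex ;
    # Declaring the auxiliary files as dependencies makes them real
    # targets, so RmTemps can find them once pdflatex creates them.
    Depends $(file).pdf : $(file).aux $(file).log ;
    NOCARE $(file).aux $(file).log ;
    RmTemps $(file).pdf : $(file).log $(file).aux ;
    pdflatex-actions $(file).pdf : $(file).tex ;
}
```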
Date: Wed, 17 Oct 2001 15:18:31 -0400
From: Michael Gentry <mgentry@sharemedia.com>
Subject: Working IDL Rule?
Does anyone have a working IDL rule?
I actually have a working IDL rule, but only for "simple" structures.
I'm building a static library (no nested source directories) in a
directory which contains both C/C++ files and some IDL files. This
works just fine. However, I also have a "shared" subdirectory for
creating the shared objects/library from the same C/C++ and IDL files
(only the .o's and .so's go in it). This works just fine, too, but it
regenerates the C++ files from the IDL files in the TOP directory (it
doesn't need to do this ...), which forces unneeded re-compiles when
running Jam again (as the static library is now out of date).
I'm convinced it is either a gristing or dependency problem, but I'm
pretty stumped as to where at the moment (I've spent a few days trying
to get it all working). If anyone has a working IDL rule for this type
of setup they could share, I'd appreciate it.
/dev/mrg
PS. I could post a lot more details if needed, but thought I'd try a
more sparse approach first.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Working IDL Rule?
Date: Wed, 17 Oct 2001 13:21:29 -0700
Different IDL compilers work differently. We use BEA Weblogic Enterprise,
and here are the parts of our Jamrules which drive IDL ...
idl.pl is a wrapper around the idl compiler which "fixes" its output.
It also does a cd so that the idl compiler is invoked in the correct
directory. BEA's idl compiler always produces output in the cwd, which
is not what we want (there is no way to specify the output file names).
Any existing outputs have to be removed first in case they are read-only.
We also build a control file which controls transaction defaults on the
IDL interface.
The Object rule has been modified so that it understands the IDL outputs.
Object file_c.o : file.idl ;
Object file_s.o : file.idl ;
and this explains the gate on rule Idl.
Also
Object any.o : file.skel ;
Object any.o : file.stub ;
Allows the control of what portion of the idl output goes into a library.
For example
Library client : source1.cpp source2.cpp file.stub ;
Library server : source3.cpp file.skel ;
Notice the objectFiles rule, which knows how to map
.idl, .skel, and .stub to .o files.
# Idl file.idl ;
# Builds 4 output files file_c.h file_s.h file_c.cpp file_s.cpp
# _c files are for the client ONLY, _c, _s files are for the server
rule Idl {
local g s c n ;
if ! $($(<:G=)-idl) {
# Cheesy gate to prevent multiple invocations
$(<:G=)-idl = true ;
makeGristedName g : $(<:G=) ;
n = $(<:G=) ;
s = $(n:S=_s.h) $(g:S=_s.cpp) ;
c = $(n:S=_c.h) $(g:S=_c.cpp) ;
IdlRm $(c) $(s) : $(g) ;
IdlDo $(c) $(s) : $(g) ;
IdlMv $(c) $(s) : $(g) ;
}
}
rule IdlDo {
local h i ;
# special case because of how idl.pl works
MakeLocate $(<[1]) $(<[3]) : $(LOCATE_COMPONENT) ;
MakeLocate $(<[2]) $(<[4]) : $(LOCATE_SOURCE) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
Depends $(IDLS) : $(<) ;
Clean clean : $(<) ;
# headers often need to be pre-generated
# for dependency analysis to work
Depends $(HEADERS) : $(<[1]) $(<[3]) ;
# alias to non gristed form
for i in $(<) {
if $(i) != $(i:G=) { Depends $(i:G=) : $(i) ; }
}
HDRS on $(<) = $(SEARCH_SOURCE) $(HDRS) $(SUBDIR_HDRS) ;
# Build a "default" ICF file
Depends $(<) : $(>:S=.xx) ;
Depends $(>:S=.xx) : $(>) $(ICFTMPLT) ;
MakeLocate $(>:S=.xx) : $(LOCATE_SOURCE) ;
RmIfLink $(>:S=.xx) ;
IdlIcf $(>:S=.xx) : $(>) $(ICFTMPLT) ;
ICFFILE on $(<) += $(>:S=.xx) ;
# tell jam that _c.cpp file includes DbugDebug.h
# (added by IdlDo action (idl.pl))
INCLUDES $(<[2]) : Dbug/DbugDebug.h ;
# in case the Idl rule is not called from the Object rule ...
# (or $(>) in the Object rule is not the .idl file, but rather
# one of the aliases.)
ScanFile $(>) ;
}
rule Library {
local o ;
objectFiles o : $(>) ;
LibraryFromObjects $(<) : $(o) ;
Objects $(>) ;
}
rule Object {
local h ;
# locate object and search for source, if wanted
Clean clean : $(<) ;
MakeLocate $(<) : $(LOCATE_TARGET) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
# Save HDRS for -I$(HDRS) on compile.
# We shouldn't need -I$(SEARCH_SOURCE) as cc can find headers
# in the .c file's directory, but generated .c files (from
# yacc, lex, etc) are located in $(LOCATE_TARGET), possibly
# different from $(SEARCH_SOURCE).
HDRS on $(<) = $(SEARCH_SOURCE) $(HDRS) $(SUBDIR_HDRS) ;
# handle #includes for source: Jam scans for headers with
# the regexp pattern $(HDRSCAN) and then invokes $(HDRRULE)
# with the scanned file as the target and the found headers
# as the sources. HDRSEARCH is the value of SEARCH used for
# the found header files. Finally, if jam must deal with
# header files of the same name in different directories,
# they can be distinguished with HDRGRIST.
# $(h) is where cc first looks for #include "foo.h" files.
# If the source file is in a distant directory, look there.
# Else, look in "" (the current directory).
ScanFile $(>) ;
RmIfLink $(<) ;
switch $(>:S) {
case .asm : As $(<) : $(>) ;
case .c : Cc $(<) : $(>) ;
case .C : C++ $(<) : $(>) ;
case .cc : C++ $(<) : $(>) ;
case .cpp : C++ $(<) : $(>) ;
case .pc : Cc $(<) : $(>:S=.c) ;
ProC $(<:S=.c) : $(>) ;
case .f : Fortran $(<) : $(>) ;
case .idl :
switch $(<:S=) {
case *_c : C++ $(<) : $(>:S=_c.cpp) ; Idl $(>) ;
case *_s : C++ $(<) : $(>:S=_s.cpp) ; Idl $(>) ;
}
case .skel : C++ $(<) : $(>:S=_s.cpp) ; Idl $(>:S=.idl) ;
case .stub : C++ $(<) : $(>:S=_c.cpp) ; Idl $(>:S=.idl) ;
case .l : C++ $(<) : $(<:S=.cpp) ;
Lex $(<:S=.cpp) : $(>) ;
case .s : As $(<) : $(>) ;
case .y : C++ $(<) : $(<:S=.cpp) ;
Yacc $(<:S=.cpp) : $(>) ;
case * : UserObject $(<) : $(>) ;
}
}
rule Objects {
local i j s x ;
makeGristedName s : $(<) ;
for i in $(s) {
objectFiles x : $(i) ;
for j in $(x) {
Object $(j) : $(i) ;
Depends $(OBJ) : $(j) ;
# Alias gristed name as ungristed
Depends $(j:G=) : $(j) ;
NOTFILE $(j:G=) ;
}
}
}
rule objectFiles {
local _i ;
$(<) = ;
for _i in $(>) {
switch $(_i:S) {
case .idl : $(<) += $(_i:S=_c$(SUFOBJ)) $(_i:S=_s$(SUFOBJ)) ;
case .skel : $(<) += $(_i:S=_s$(SUFOBJ)) ;
case .stub : $(<) += $(_i:S=_c$(SUFOBJ)) ;
case * : $(<) += $(_i:S=$(SUFOBJ)) ;
}
}
}
actions IdlDo bind ICFFILE { $(IDL) $(IDLFLAGS) -I$(HDRS) $(>) $(ICFFILE[1]) }
actions quietly IdlMv {
test $(<[1]:D=$(>:D)) = $(<[1]) || $(MV) $(<[1]:D=$(>:D)) $(<[1])
test $(<[2]:D=$(>:D)) = $(<[2]) || $(MV) $(<[2]:D=$(>:D)) $(<[2])
test $(<[3]:D=$(>:D)) = $(<[3]) || $(MV) $(<[3]:D=$(>:D)) $(<[3])
test $(<[4]:D=$(>:D)) = $(<[4]) || $(MV) $(<[4]:D=$(>:D)) $(<[4])
}
actions ignore quietly IdlRm {
$(ISLINK) $(<[1]) && $(RM) $(<[1])
$(ISLINK) $(<[2]) && $(RM) $(<[2])
$(ISLINK) $(<[3]) && $(RM) $(<[3])
$(ISLINK) $(<[4]) && $(RM) $(<[4])
}
actions quietly IdlIcf {
$(TOP)/$(OS)/tools/icf.pl $(>[2]) < $(>[1]) > $(<)
}
Date: Fri, 19 Oct 2001 15:09:03 -0400
From: Michael Gentry <mgentry@sharemedia.com>
Subject: Re: Working IDL Rule?
Thanks to the examples from Randy Roesler, I was finally able to get my
IDL rules working for omniORB on Linux while building shared libraries.
The gating in the IDL rule really helped. I had tried that before, but
I guess I didn't get it quite right. It now only runs the IDL compiler
once (generating the C++ header/source) even though I build a shared
library in a subdirectory off the same source.
Here are the rules I'm using in case they'll help anyone else:
rule UserObject {
switch $(>:S) {
case .idl : Idl $(<) : $(>) ;
case * : EXIT "Unknown suffix on" $(>) "- see UserObject rule in Jamfile(5)." ;
}
}
rule Idl {
local i = $(>:G=) ;
local h = $(i:S=.hh) ;
local s = $(i:S=.cc) ;
Depends $(<) : $(i) $(h) $(s) ;
C++ $(<) : $(s) ;
if ! $($(<:G=)-idl) {
local g ;
local n ;
$(<:G=)-idl = true ;
makeGristedName g : $(<:G=) ;
n = $(<:G=) ;
Idl1 $(h) $(s) : $(>) ;
Clean clean : $(s) $(h) ;
}
}
actions Idl1 {
$(IDL) $(IDLFLAGS) $(>)
$(MV) $(>:B)SK.cc $(>:S=.cc)
}
Then in TOP/Jamfile to create the static library:
Library libum.a : $(LIBUM_IDLS) $(LIBUM_SRCS) ;
And in TOP/shared/Jamfile to create the shared library:
SharedLibrary libum.so : $(LIBUM_IDLS) $(LIBUM_SHARED_SRCS) ;
(The SharedLibrary rule is very similar to the standard Library rule.)
A note about the Idl1 action: omniORB's idl compiler creates both the
skeleton and the stub in a single file (foo.idl => fooSK.cc, foo.hh), so
after it creates the file, I move it back to the original filename minus
the SK suffix on the basename.
/dev/mrg
Date: Wed, 24 Oct 2001 09:26:37 +0200
From: Toon Knapen <toon@si-lab.org>
Subject: rule targets & directory tree
I'm a newbie to Jam programming but want to define my own rules for
generating PDF files from latex files.
If I'm in the subdirectory containing the latex file, `jam doc`
generates the PDF. If I'm in one of it's parents, `jam doc` does not
generate them.
Can anyone point me in the righ direction (I'm sure it has something to
do with grist and being able to locate the target and sources but .. sigh ;-(
So the Jamfile in my TOP directory reads :
# Jamfile in TOP
SubInclude TOP main doc ;
The Jamrules in my TOP directory read :
#Jamrules in TOP
Depends doc : femtown_$(MODULE)_module.pdf ;
rule latex2pdf-doc-gen {
local _s ;
_s = [ FGristFiles $(>) ] ;
LOCATE on $(<) = $(LOCATE_TARGET) ;
Depends $(<) : $(_s) ;
Clean clean : $(<) $(<:B).aux $(<:B).toc $(<:B).log ;
}
actions latex2pdf-doc-gen {
echo --- action --- $(>)
pdfelatex $(>)
}
and finally the Jamfile in TOP/main/doc reads :
# Jamfile in TOP/main/doc
SubDir TOP main doc ;
LATEXSOURCE = femtown_main_module.tex ;
latex2pdf-doc-gen femtown_main_module.pdf : $(LATEXSOURCE) ;
From: "eschner" <eschner@sic-consult.de>
Date: Thu, 25 Oct 2001 14:00:43 +0200
Subject: [newbie] generated files
As the subject says, I am new to jam. Unfortunately, neither the
documentation nor the mailing list archive could help me with the following problem:
During our build process, a couple of C sources are generated from some text
file. The number and names of the generated files vary unpredictably.
Corresponding objects should enter and leave the library depending on the
presence of the source.
So far I have been unable to put the names of all those files into any
rule, because there seems to be no globbing and no way to read variables from files.
What did I miss? What can I do?
Date: Thu, 25 Oct 2001 14:07:05 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: [newbie] generated files
I'm not sure whether it'll work, but I think a few extra Depends calls might help.
Somewhere, there's a jam rule that decides that mumble.c will have to be
generated. Right? That rule could also say "Depends gargle : mumble.c ;"
and then anyone building executable gargle should compile mumble.c and
link mumble.o into gargle.
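A sketch of that idea, keeping the hypothetical names from above (GenerateSource and GenerateAction are illustrative, not stock Jambase rules):

```jam
# The rule that decides mumble.c must be generated can also hook the
# file into the build of the executable that needs it:
rule GenerateSource
{
    Depends $(1) : $(2) ;        # mumble.c depends on the text file
    GenerateAction $(1) : $(2) ; # hypothetical action producing $(1)
    Depends gargle : $(1) ;      # building gargle now makes mumble.c
}
```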
From: "Arnaldur Gylfason" <arnaldur@decode.is>
Date: Thu, 25 Oct 2001 13:47:34 +0000
Subject: Re: rule targets & directory tree
I assume you define MODULE correctly somewhere.
Don't you need SubDir TOP ;
at the head of the TOP Jamfile?
Apart from this my guess is you need gristing on the doc source :
femtown_$(MODULE)_module.pdf
Try putting the Depends clause within the rule latex2pdf-doc-gen after
gristing:
rule latex2pdf-doc-gen {
local _s ;
_s = [ FGristFiles $(>) ] ;
LOCATE on $(<) = $(LOCATE_TARGET) ;
Depends doc : $(_s) ;
Depends $(<) : $(_s) ;
Clean clean : $(<) $(<:B).aux $(<:B).toc $(<:B).log ;
}
You could also specify
NOTFILE doc ;
ALWAYS doc ;
in Jamrules.
From: "Achim Domma" <achim.domma@syynx.de>
Date: Thu, 25 Oct 2001 22:11:17 +0200
Subject: [newbie] setup sql database with jam
I have about 50 SQL scripts which I use to set up a database, and I want to
do this with jam. I thought about putting different groups of files under
different targets, so that it would be possible to rebuild different parts of
the database (for example, rebuild only the stored procedures but leave the
tables unchanged). I tried to write a UserObject rule and an Sql rule, but jam does nothing.
Could somebody give me a hint? I have no experience with make and can't
find my starting point.
Date: Mon, 5 Nov 2001 15:51:51 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: A few extensions to Jam
Some of the development groups at Alias|Wavefront have been using a minor
variant of Jam for a couple years with great success. We are using Jam
to build large products on multiple platforms. Like most users of Jam,
we have customized a local Jambase quite heavily. Along with the Jambase
customizations, we have extended Jam itself in a variety of ways.
The extensions to Jam may be useful for other groups working with large
projects; this mail announces their availability.
I have a branch on the Perforce public depot at:
//guest/craig_mcpheeters/jam/src/...
That branch now contains all of the changes we have made to Jam, in a form
that is hopefully usable by a variety of people. There are 12 independent
extensions, and a few simple fixes.
There is a file in the branch called Jamfile.config which lists the extensions
in some detail. Briefly, they are:
* a header cache. Jam normally scans all source files for headers at each
run. This can be time consuming on large source trees. The header cache
saves the results of the current header scan, and it is re-used the next
time jam is run. This can save several minutes of startup time on large projects
* the output from a run of Jam using several jobs is now optionally serialized
* enable command buffers to grow dynamically. Some platforms are able to
accept multi-megabyte command buffers. With this extension, jam can generate them
* a new command line option (-p) to disable dependency checking on headers.
This is useful if you modify a header which would trigger a large build
which you want to defer
* a new debug level (-d+10) which outputs the dependency graph in a slightly
more readable format than the other debug levels offer
* a few extensions to tune Jam's output for working with large builds
* a 'NOCARE file' that works on intermediate files
* slightly improved jam errors and warnings. The Jam file and line number
are appended to the error message. These may not always be completely
accurate, but they are better than nothing when trying to find an error
somewhere in several hundred files
* there are two new modifiers for variable expansion: $(file:/) and $(file:\\)
which make all slashes uniformly forward or backward
A few of these are really useful for large projects. In particular, the
header cache, the serialization of Jam's output from multiple jobs,
enabling large command buffers, and being able to disable header
dependencies are all essential for our work with large projects. By
'large' I mean several tens of thousands of source files. Some of the
other extensions are useful but not as critical.
Any of the extensions can be built by specifying on the command line when
you build Jam that you want it. For example, to build in the header
cache, do:
jam -sHeaderCache=1
to build in all of the extensions:
jam -sAllOptions=1
See the file Jamfile.config in my branch for more details.
I don't consider this to be a new version of Jam. It is based entirely on
the Jam mainline, and contains extensions which are compatible with Jam.
If you are working on smallish projects, you may not find much in here that
is attractive. If you are working on medium to large projects, some of
this may be worth checking out.
Some of these extensions may end up in the Jam mainline. Depending on how
badly you want the extensions you are free to grab the files from my branch
directly or to wait and let the dust settle over any mainline merging that
goes on.
If you have any suggestions for improvements, please forward them to me.
Note however that I work on this project as a background task...at home.
So I may not reply as quickly as I would like. All feedback is welcome.
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Mon, 5 Nov 2001 19:28:57 -0500
Subject: Re: A few extensions to Jam
I don't think much is happening with the Jam mainline.
I, too have been making a bunch of Jam core language changes, and have fixed
a number of bugs as well. The core language changes are described here:
http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/~checkout~/boost/boost/tools/
build/build_system.htm#core_extensions
It would be good if those of us working on Jam enhancements would get
together, instead of working in isolation.
From: "Craig McPheeters" <cmcpheeters@aw.sgi.com>
Sent: Monday, November 05, 2001 6:51 PM
Subject: A few extensions to Jam
Are these changes based on Jam or FTJam? I'm quite interested in trying out Jam, but
having gone through the mail aliases I'm a little worried about the apparently
stalled mainline, and the various different versions that seem to be popping up. I
have a large project to work with (*.h = 10k files / 3m LOC, *.c = 23k files, 24.5m
LOC, plus other goop on top), so I think that in my case the header caching would be
vital. However, some of the stuff done for Boost also looks useful, but it doesn't
appear that I can have both at once.
p.s. How about 'Compote' as a posher alternative to 'Marmalade'?
[From Old French composte, mixture, from Latin composita, feminine past participle of
componere, to put together]
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Tue, 6 Nov 2001 11:30:46 -0500
Subject: Re: Your Jam changes
I'm interested in most of Craig's enhancements, so if you want to merge them
into our source base I'd be happy to look at your patches.
The latest stuff is in the boost cvs repository at sourceforge in the
boost/tools/build module.
Date: Tue, 06 Nov 2001 16:37:30 +0000
From: Alan Burlison <Alan.Burlison@sun.com>
Subject: Re: Your Jam changes
I wouldn't mind doing this if I was fairly confident that the result would actually
work for us, and ascertaining that that was the case will itself take some
considerable time. I'm not desperate for this, as I can easily test using the
existing versions, I'm more concerned by the apparent (?) lack of coordination
between the 3 (?) versions of Jam that seem to be floating around.
From: "Achim Domma" <achim.domma@syynx.de>
Date: Tue, 6 Nov 2001 17:43:06 +0100
Subject: introduction for Jam
I want to learn how to use Jam, but can not find much documentation. The
short docu I found is rather for people switching from 'make', but I don't
know much about 'make'. Could somebody point me to a tutorial or send my
some simple (but not trivial ;-) ) examples ?
Date: Tue, 6 Nov 2001 10:15:55 -0800 (PST)
From: Laura Wingerd <laura@perforce.com>
Subject: Re: introduction for Jam
I conducted a Jam tutorial session at this year's conference; you may find
the slides useful. See the links at:
http://www.perforce.com/perforce/conf2001/index.html#jam
Date: Tue, 6 Nov 2001 11:26:17 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: A few extensions to Jam
I've taken a look at your core extensions, thanks for providing the URL.
It seems that you are taking a more aggressive approach to changes than I
am - which is ok; we just have different design goals. I've been trying to
restrict myself to changes which are under the covers, without changing the
language itself. A lot of the stuff you've got is interesting: while
loops, modules, etc. In Linux terms, perhaps I'm working in a stable
series and you're working in the next experimental/more aggressive series,
i.e., 2.4 vs. 2.5 (if that makes sense to you...).
At the moment, the time I have to work on this is constrained in a variety
of ways. What I have is working for us.
I would prefer to not find myself maintaining a branch of jam years down
the road. If there are changes made in the Jam mainline, I should be able
to easily incorporate them, and will do so. In an ideal world, many of the
changes I have would be accepted into the mainline and the delta between
the two would decrease. In reality, some of the changes I have may be
controversial, or unnecessary for smaller projects.
Along this line, one of the nice things (among many) that I liked about jam
was its design purity. It's a tiny program which does a complex thing very
elegantly and simply. I would like to see this original quality maintained,
and one of the difficult problems for us is to decide when it's 'done'.
I believe that for the internal Perforce development community, its probably
already considered done. Their products are known to be small and efficient;
I'm not aware of them working on multi-million line, 100Mbyte products. What
they have works for smallish projects. Note that the Perforce server is
'only' a few Mbytes large. Also note that in this world, small capable
programs are a good thing :-)
Are modules necessary? No, probably not. Would some people find them useful?
Yeah, I think so. Where do you draw the line?
Which is a long-winded way of saying, I prefer to minimize the amount of work
I'm doing in my branch, and would like to keep the delta between it and the
mainline as small as possible, in order to encourage its incorporation into
the mainline.
You are of course welcome to all of the changes I have made. This may make
sense in the context of a stable and experimental/more aggressive series of branches.
In a subsequent message you add some of the smaller changes you've made
in your branch:
Some of this sounds good, and it would belong in the stable series (I may be
pushing this analogy too far...) Earlier I mentioned my time is constrained
however, that's true and while I would like to incorporate some of your
changes, in reality this work will likely be pushed many months down the
road. Sorry about that.
Date: Tue, 6 Nov 2001 11:36:01 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Your Jam changes
My changes are based on Jam, I did most of this work almost 2 years ago.
My branch is a direct branch from the jam mainline, with integration
history to make it easy for the jam folks to see the differences.
Yeah, your project sounds large enough to benefit from some of my changes.
It also sounds like my branch and Boost may be compatible - although there
would be some merging required. I also mentioned earlier that I don't want
to spend a great deal of time on this - sorry about that. You can probably
have both at once, but not for free, there is some work required.
Date: Tue, 6 Nov 2001 11:57:03 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Your Jam changes
I wouldn't be overly concerned about the different versions. It's a good thing.
The different versions each seem to have different policies for accepting
changes. The policy in my branch is that I would accept changes (from myself)
which were as minimal as possible and enabled the use of Jam on large
internal projects. My policy was to avoid changes to the language where
possible, with a goal of minimizing the differences in order to ease future
integrations from the mainline, and the reverse.
The policy in the FTjam branch seems more open to changes in the language,
and other non-critical-but-useful changes. I like some of the extensions
David has created, although they don't match the policy I have established
for my branch. Different design goals; it's all perfectly natural.
I'm not sure what the policy is in the mainline yet. Without an established
policy it's hard to know what types of changes to propose. Give it time
though, I am. :-)
This is one of the benefits of open source, although I agree there is an
associated complexity.
Date: Tue, 06 Nov 2001 23:11:05 +0000
From: Alan Burlison <Alan.Burlison@sun.com>
Subject: Re: Your Jam changes
I've no objection to doing the work, but I don't want to end up with it being yet
another version of Jam. However, I've looked through the mail archives, and I can't
recollect seeing any activity on the baseline version for the last 6 months, which
makes me kinda nervous.
$ head -1 /dev/bollocks
subjectively pursue shrink-wrapped best-of-breed quality vectors, going forwards
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Tue, 6 Nov 2001 19:10:17 -0500
Subject: Re: A few extensions to Jam
Actually, these are just some of the bug fixes. #2 is actually a fairly
large change.
It doesn't matter too much to me; I think software lives where it is
maintained and improved. From that point of view Jam seems to be dying at
Boost. Accordingly, I hope to steal most of your work ;-)
Date: Wed, 7 Nov 2001 13:38:58 -0800
From: Richard Geiger <opensource@perforce.com>
Subject: Jam at Perforce
I can understand how somebody could draw that conclusion. By most
appearances, work on Jam development at Perforce has been slow over
the past few years.
Nonetheless, Perforce does care about Jam and the future of Jam, as
evidenced in part by hiring me, as an Open Source Engineer.
In this role, I'll be concerned with all things Open Source at Perforce,
including Jam.
Over the next month or two, I'll be reviewing the state of Jam in
http://public.perforce.com/public/jam/index.html, along with changes
submitted to //guest/.../jam/... branches, reviewing and integrating
such changes (and possibly changes from internal-use versions at Perforce).
Of course Perforce (and, ultimately, Christopher) reserves the right
to guide what gets into the jam "mainline" at Perforce, which will
probably not be all things to all people. There's certainly nothing
wrong with having other variants in the world; after all, that's how
Open Source is supposed to work.
I've already been in touch with some of the contributors to
//guest/../jam/... branches in the Public depot, and will be
contacting others in the coming weeks to coordinate the integration of
new functionality and bug fixes into a new //public/jam release.
If you are working on Jam development outside of the Perforce Public
Depot, and are interested in having your changes make it into the new
release, please contact me at "opensource@perforce.com", with the word
"Jam" in the Subject line.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Jam at Perforce
Date: Wed, 7 Nov 2001 18:42:12 -0500
I'm happy to hear from you! I had heard that you were being hired, but as
there has been no noise since, you can see how I drew that conclusion.
In some ways, I am glad that it has turned out this way: if I'd thought that
there was a good chance of conservative changes being rolled back into Jam,
I don't think I would have implemented many of the improvements I've come up
with. I'm convinced that these improvements will be instrumental in
implementing a robust, capable build system on the scale of Boost.Build.
Not being a P4 expert, I had a tough time understanding what might be the
"right" way to branch the Jam source. It seemed to me as though most people
had simply checked in a copy of the Jam source without creating a true
branch. Guidance would be appreciated; I'd like to try to keep a copy of
Boost Jam where you can get at it.
That's true, but a community has a better chance to thrive if momentum and
development effort is concentrated behind a single version of the code. I'd
love to convince you that all of my changes are important, judicious, and
appropriate, but I'm ... realistic about it.
Date: Thu, 8 Nov 2001 12:18:06 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Jam at Perforce
Jam's badly enough documented as it is. In my experience, multiple mostly
compatible versions mean that parts of the documentation will be wrong for
any given version.
When some parts are wrong, no part is really trustworthy.
Date: Thu, 08 Nov 2001 12:41:26 +0000
From: Alan Burlison <Alan.Burlison@sun.com>
Subject: Some Jam questions and observations
I've now had a chance to look through the documentation for the various
versions of Jam. I'd love to hear the thoughts of people who have used Jam
for real and on large projects - some of my questions/observations below
are I'm sure based on my ignorance of how a certain thing can be done in
Jam - please make allowances!
First, a bit of context. I'm at the very, very early stages of looking for
a replacement for make, and Jam seemed to be one of the best candidates.
The system I'm looking at is very big - I've read the descriptions of other
large users of Jam, and it is bigger than any I've seen so far. I'm
therefore concerned primarily by scalability and manageability issues.
The weakest part of Jam seems to be the header file scanning. Scanning
many millions lines of code every time it is run, and scanning some of it
many, many times on each run is just not feasible on such a large source
tree - simply running a 'find . -type f > /dev/null' on my source base
takes 2 minutes, and a 'find . -type f | xargs cat > /dev/null' takes 9:30
minutes, and that's on a filesystem striped & mirrored across 24 disks.
I'm not exactly clear on how the header file scanning works in detail - if a.h
includes b.h, and then x.c and y.c both include a.h, does a.h alone get
scanned, or does a.h + b.h get scanned? In either case, do the header
files get scanned twice, or is Jam clever enough to realise that when it
processes y.c it has already scanned them when processing x.c, so it can
just reuse the scan results?
Why doesn't Jam make use of the ability of many compilers to list the
header files as they are read? I appreciate that on the 'first pass' this
information won't be there, but in that case you are going to have to
recompile everything anyway. If this information was collected and used to
fill the header cache stuff added by Craig, couldn't the header scanning be
avoided? I'm thinking along the lines of a rule that turns this behaviour
on in the C compiler, and that can then parse the resulting file and squirt
it into the dependency graph. With our compilers, if you set the
environment variable SUNPRO_DEPENDENCIES to a filename the compiler will
write the dependency info in the familiar <target> : <dependencies> form.
Where compiler support for this is missing, the existing regexp scan mechanism
could be used instead.
I also can't find any way of setting variables based on the output of a
shell command - is this not possible? In our case, the variables
automatically set by Jam from the machine environment (OS, OSPLAT etc.)
aren't sufficient, and I can see no way of doing this. The sort of things
that we need are the user's uid, the date/time etc. Is this a deliberate
omission?
Another feature that seems to be missing is command dependency checking.
With the make that we currently use, the command-line used to build each
target is recorded along with the file dependencies of the target, and if
the command-line changes (e.g. because some external environment variable
has been modified), the target is rebuilt. This seems like it would be a
useful addition, and would not be difficult to slot into Craig's existing
caching mechanism.
I'm also concerned about migration - the existing Makefiles consist of 5k
files and a total of 260k LOC. Obviously migrating that in one fell swoop
is not practical. Has anyone had any experiences with this, for example
running a mixed make/jam environment during the transition?
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Jam at Perforce
Date: Thu, 8 Nov 2001 08:35:24 -0500
I agree. I'm trying to correct some of that by supplying supplemental
documentation, but it would be much better to have everything in one place.
Date: Thu, 8 Nov 2001 10:53:03 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Some Jam questions and observations
I believe that the original Jam scans each target. If a header is represented
in the dependency graph more than once via grist, it's scanned more than once.
In your example, if there is nothing fancy going on with grist, the header is
scanned only once. In a system the size you are mentioning, you'll probably
need grist to make target names unique, in which case each combination of
grist+file would be scanned.
With the cache in place, each physical file is only scanned once. The cache
uses the boundname of the header.
I can't speak for why Christopher made this decision, but I'm glad he did!
I would mention: portability, simplicity as being the small obvious reasons.
The real reason would be that the results of the scan are needed before the
dependency graph can be run - it's needed before any of the compiles start
up. The header dependencies determine the order of the graph which is
critical to the way jam works.
I don't know of a way to set a Jam variable to the result of a command
you run. We needed a similar thing though. What we do is combine some
perl/sh in our build, and create a series of files with the info we
need. On compile lines, if you need that info, either use something like:
CC -o foo -set_version `cat .version`
or create a header and include it. There are usually ways around it.
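A minimal Jamfile sketch of the workaround Craig describes; the target name .version, the rule names, the `exe` target, and the use of `date` are illustrative assumptions, not part of his actual build:

```jam
# Regenerate .version on every run; compile actions read it back with `cat`.
actions GenVersion {
    date +%Y%m%d > $(<)
}
GenVersion .version ;
ALWAYS .version ;
Depends exe : .version ;    # whatever needs the value depends on the file

# The value is picked up when the action executes, not when Jam parses:
actions VersionedCc {
    $(CC) -c -DBUILD_VERSION=\"`cat .version`\" -o $(<) $(>)
}
```

The alternative Craig mentions - generating a header and #include-ing it - trades the backquote for an ordinary header dependency.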
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Some Jam questions and observations
Date: Thu, 8 Nov 2001 14:15:01 -0500
Right. And in a system that is really keeping track of #include paths the
way it should, that could result in a huge explosion of scanning. Why? It's ugly:
Take the Microsoft compiler. The rules for finding a file included as
#include "name"
are that you first look in the directory of the file in which the #include
appears, then you look in the directory of the file which included that
file, and so forth until you get to the source file being compiled. THEN you
go on to look at everything in the #include path given with -I.
The upshot is that SEARCH must potentially be set differently on a header
file for each combination of -I #include paths and each directory chain of
files which include it. The only way to accomplish that is by gristing the
header differently for each of these situations, resulting in different
logical targets.
With the cache, technically, a header would still be scanned twice if it was
bound in two different ways:
foo/baz.h vs. foo/bar/../baz.h
I'm sure it's still an enormous improvement.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Some Jam questions and observations
Date: Thu, 8 Nov 2001 11:56:54 -0800
May I make a suggestion for those who wish to use
compiler-enabled dependency analysis.
Have each Jamfile include a Jam.deps file.
Wrap the compiler command in a ksh or perl command,
that script invokes the compiler and then transforms
the dependency output from the compiler into
DEPEND directives for Jam. Update the Jam.deps file.
Now, in your Jamrules, disable jam's header file scanning.
The old dependencies (in Jam.deps) will always be
acceptable because either
a) source and all headers have not changed, dependencies are accurate
b) source has changed, dependencies don't matter
c) one or more headers have changed, but the dependencies
from the source to at least one of these headers will
be accurate, because those files have not changed.
(this is why sun's make integration with the compiler actually works).
The biggest problem with this approach is that the
DEPEND directives will cause Jam to try to build
a dependency even if the dependency is no longer relevant.
For example, you remove a header file and modify the
source to not require that file. The
DEPEND object : header will cause jam to complain that
it can not find the header. A NOCARE directive will fix that,
but this will (might) break any generated header files.
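A rough sketch of the machinery Randy describes; the wrapper name ccwrap, the per-source HDRSCAN reset, and the Jam.deps layout are assumptions for illustration:

```jam
# Jamrules sketch: use generated dependencies, skip regexp scanning.
include Jam.deps ;          # contains lines like:  Depends foo.o : foo.h bar.h ;
HDRSCAN on foo.c = ;        # an empty scan pattern stops jam scanning this source

# Compile through a wrapper script that runs the real compiler, converts its
# dependency output (e.g. via SUNPRO_DEPENDENCIES) into Depends statements,
# and rewrites this directory's Jam.deps for the next run.
actions Cc {
    ccwrap $(CCFLAGS) -c -o $(<) $(>)
}
```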
Date: Thu, 08 Nov 2001 23:10:42 +0000
From: Alan Burlison <Alan.Burlison@sun.com>
Subject: Re: Some Jam questions and observations
If you rebuilt the Jam.deps file every time you built the file, wouldn't
that remove the problem? I accept that the first time you rebuilt you
would have an unnecessary dependency, but the current header scanning is
prone to this anyway, as the documentation points out.
Date: Thu, 08 Nov 2001 23:24:45 +0000
From: Alan Burlison <Alan.Burlison@sun.com>
Subject: Re: Your Jam changes
Any chance of integrating it back into your perforce depot?
Date: Thu, 08 Nov 2001 23:40:39 +0000
From: Alan Burlison <Alan.Burlison@sun.com>
Subject: Re: Re: Your Jam changes
http://freetype.sourceforge.net/jam/index.html#where-ftjam:
o Through the Perforce public depot
The FreeType Jam sources are located in the directory named
//guest/david_turner/jam/src. They've been submitted to the Jam development
team, but will stay there until the changes are integrated back into the
main Jam sources (hopefully).
Date: Fri, 09 Nov 2001 01:20:43 +0000
From: Alan Burlison <Alan.Burlison@sun.com>
Subject: Re: Your Jam changes
As David set the challenge above I thought I'd rise to it ;-)
I've thrown together a merged version of:
1. FTJam 2.3.5 from //guest/david_turner/jam/src/...
2 Craig's mods from //guest/craig_mcpheeters/jam/src/...
and put the resulting hairball in
//guest/alan_burlison/jam/src
in the perforce repository at http://www.perforce.com. I do mean thrown -
I've made no attempt to pick and choose what goes in (mainly because I'm
not qualified to do so), and by-and-large I have just let Perforce merge as
it sees fit, so for example there is now a :T modifier and a :\ modifier
which both probably do the same thing.
However, the resulting wad does compile and appears to run, although I have
no way of really testing it properly, so I'm sure there will be a whole
crop of juicy bugs in it. I'm hoping that if nothing else it might hasten
the discussion of what the *real* merged version might include, and
demonstrate that it is not too daunting a task.
If nothing else I've learned a bit more about Perforce.
BTW, has anyone ever run Purify on Jam?
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Re: Your Jam changes
Date: Thu, 8 Nov 2001 21:01:24 -0500
I am a CVS clown, but a P4 idiot.
If it's a limitation for you that I'm not integrated at the depot, I can
figure out how to do it, but it would take a little investment.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Re: Your Jam changes
Date: Thu, 8 Nov 2001 21:08:04 -0500
Those aren't my changes. Boost Jam includes MANY enhancements beyond FTJam.
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Thu, 8 Nov 2001 22:16:41 -0500
Subject: Re: Your Jam changes
Rats! I have obviously not been clear enough. Here is how Richard Geiger
summarized the situation:
Boost.Build has heretofore worked with FTJam, but is evolving
and will, going forward, require the use of a still newer
Jam variant, "Boost Jam", which is based on FTJam.
The new features and bug fixes I've discussed are all only in the Boost Jam source base.
Thanks for taking the time to do this. Let me know if you'd like to try
again with my sources ;-)
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Fri, 9 Nov 2001 00:26:44 -0500
Subject: Re: Your Jam changes
Okay, to reduce confusion I've submitted Boost Jam back into the Perforce
Public Depot at //guest/david_abrahams/jam/src.
Documentation for new features is still not available at the Depot, but can be viewed at:
From: "Alan Burlison" <Alan.Burlison@sun.com>
Sent: Thursday, November 08, 2001 6:24 PM
Subject: Re: Your Jam changes
Is that the version you have just put back into the depot?
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Fri, 9 Nov 2001 07:35:53 -0500
Subject: Re: Your Jam changes
Somebody found a bug this morning; I've checked in a fix and regression
tests, both in Boost's SourceForge CVS and in the public depot at
//guest/david_abrahams/jam/src.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Some Jam questions and observations
Date: Tue, 13 Nov 2001 11:20:16 -0800
Because Jam currently scans (from scratch) on invocation,
it will never see an "old" include dependency.
Yes, rebuilding Jam.deps on each invocation would be required,
but this should be cheap enough, as only directories with
actual recompilations need to have their Jam.deps updated.
I have not been able to think of a way to do Sun Make's
command "dependency" stuff without actual support from the jam engine.
I would suppose that Jam.deps type of solution might work
for Jam and Java as well !
Date: Tue, 13 Nov 2001 22:40:35 +0000
From: Alan Burlison <Alan.Burlison@sun.com>
Subject: Re: Some Jam questions and observations
Nothing is 'cheap' when it involves scanning several million LOC.
Date: Tue, 13 Nov 2001 21:46:13 -0500
From: Alex Nicolaou <alex@freedomintelligence.com>
Subject: Re: Some Jam questions and observations
For Java the compiler is supposed to do dependencies, but they
unfortunately discarded this feature around the time of 1.2. I recommend
jikes ++ $(find . -name '*.java')
or some variation. Do you have a project where this approach doesn't
work well? This should produce instantaneous compiles for almost
any project, after the first which may take several seconds. Jam should
only be needed by whoever builds the release product, and even then we
have our jam rule invoke jikes for 100 java files at a time.
Date: Fri, 16 Nov 2001 11:17:34 -0500
From: "Thatcher Ulrich" <tulrich@oddworld.com>
Subject: workaround for embedded space problem on Windows
So I've been evaluating Jam as a build tool for my company, and
basically I like it. However, our project is Windows-based, and I ran
into the problem of jam not being able to spawn the compiler & tools
if the path name contains an embedded space character. Unfortunately
this is the default for MSVC, and I noticed from the archives of this
mailing list that I'm not the first to butt heads with this problem.
Anyway, the consensus workarounds from the list archives were:
1. don't use spaces in the paths of tools.
2. use the 8.3 aliases of the paths (e.g. "c:/Progra~1/" instead of
"c:/Program Files/").
Neither workaround is acceptable to me. The first for legitimate
political reasons; this is a Windows shop and I only have so much
leeway to inconvenience people, and the second because the 8.3 alias
is not reliable -- e.g. on my workstation, "c:/Progra~1/Micros~1"
aliases to "c:/Program Files/microsoft frontpage", which is not going
to find the compiler! It's a marginally OK, but cheesy, workaround
for a single workstation.
Anyway, the point of all this is that after perusing the Jam source
code, I discovered a very simple workaround that seems to actually
work, on my Win2K box at least. Put the following line in your Jamrules:
JAMSHELL = cmd /C % ;
and make sure your tools path has quotes around it when Jam spawns the
command line, for example these are the definitions from my sample
Xbox project (which uses a special version of the MSVC tools):
JAMSHELL = cmd /C % ;
XDK = "c:/Program Files/Microsoft Xbox SDK" ;
VISUALC = "$(XDK)/xbox/bin/vc7" ;
C++ = \"$(VISUALC)/cl.exe\" ;
LINK = \"$(XDK)/xbox/bin/vc7/link.exe\" ;
AR = \"$(XDK)/xbox/bin/vc7/lib.exe\" ;
Note the \" around the tool .exe's. I also found I needed to put \"
's around various include and lib paths in the command-line switches.
Hope this helps; I didn't come across this workaround in the list
archives. Apologies if it's in the FAQ or something. If it's not, it should be!
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: workaround for embedded space problem on Windows
Date: Fri, 16 Nov 2001 12:00:02 -0500
Ooh, very sneaky! You're getting Jam to delay expansion of the path until
it's already past the phase of reading environment and command-line
variables.
The version of Jam I'm working with for Boost.Build contains an extension
which interprets these variables differently if they are surrounded by
double quotes. The problem and solution are described here:
http://www.boost.org/tools/build/build_system.htm#variable_splitting
and here:
http://www.boost.org/tools/build/build_system.htm#variable_quoting
Date: Fri, 16 Nov 2001 12:36:04 -0500
From: "Thatcher Ulrich" <tulrich@oddworld.com>
Subject: Re: workaround for embedded space problem on Windows
I'm not actually clever enough to have done that on purpose... I was
just trying to exercise the other code path to see if it worked better.
Date: Sat, 17 Nov 2001 16:19:55 -0800 (PST)
From: cmcpheeters@aw.sgi.com (Craig McPheeters)
Subject: Fix applied to Jam in my guest area
There is an update to my guest area on the Perforce server. I found a
defect in my branch of Jam which I've now fixed.
The problem was in the dynamic command size extension. It wasn't properly
dealing with targets identified as piecemeal; now it does.
I've also incorporated a change which reverts the execcmd() function in
execunix.c to its 2.2 behaviour on NT. The 2.3 change was to create .bat
files for all targets. The problem I was running into is that there seem to
be limitations on how long lines can be in .bat files, and we were exceeding
it when invoking rc.exe via a .bat file. The 2.2 logic would create fewer
.bat files and works for us. This change may or may not make it into the
mainline, as the original change must have been made for a reason, and I've
reverted to the prior logic.
The list of extensions is again:
* header cache
* serialized output
* dynamic command size
* slightly improved warnings
* slash modifiers $(foo:/) $(foo:\\)
* optional headers dependencies (jam -p)
* dependency graph debug dump (jam -d+10)
* no care internal nodes
* NT batch file fix when running more than one jam on a machine
For details, see:
//guest/craig_mcpheeters/jam/...
From: "Axelsson, Andreas" <Andreas.Axelsson@dice.se>
Date: Thu, 22 Nov 2001 15:01:51 +0100
Subject: Wildcards?
As a beginner in Jam, I'd like to know if there are any ways of parsing
wildcard file specifiers and setting up the kind of more generic
pattern-rules that one can do in GNU make or similar? I need to perform
various tasks on an unknown amount of files and adding them to an explicit
target list just isn't a good option for me right now. I need something
like the setup below:
####################
rule BuildTile { Depends $(1) : $(FOLDER)/*.tile ; }
rule ConvertTile { Depends $(1) : $(FOLDER)/*.tga ; ConvertTile $(1) ; }
actions ConvertTile { MyConverter $(1) $(1:S=tile) }
ConvertTile $(FOLDER)/*.tile ;
BuildTile Tiles ;
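One way to get this effect in stock Jam is the built-in GLOB rule, which expands wildcards at parse time. A sketch reusing the names from the question (MyConverter and FOLDER are the poster's; the rest is illustrative):

```jam
# Expand *.tga in $(FOLDER) at parse time with the built-in GLOB rule.
local TGAS = [ GLOB $(FOLDER) : *.tga ] ;

rule ConvertTile {
    Depends $(<) : $(>) ;
    Depends Tiles : $(<) ;
}
actions ConvertTile {
    MyConverter $(>) $(<)
}

# One converted .tile target per source file found:
for FILE in $(TGAS) {
    ConvertTile $(FILE:S=.tile) : $(FILE) ;
}
NOTFILE Tiles ;
```

Running `jam Tiles` would then convert whatever .tga files exist when the Jamfile is read.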
From: "Khidkikar, Sanket" <skhidkikar@atsautomation.com>
Date: Mon, 26 Nov 2001 09:26:47 -0500
Subject: bootstrapping woes
I have problems bootstrapping jam on my QNX 6 system (with no Yacc
installed)
Here is the story:
-I run make
-When I get to the point where the jamgram.y is to be parsed to create
jamgram.c and jamgram.h (I already have these when I untar the sources)
-Complains that yacc is not found, and removes the jamgram.c and jamgram.h files
-Complains that there is no jamgram.c (or .h) to create jamgram.o
....skipping
-Cannot make the archive since jamgram.o is missing
-bootstrap is unsuccessful.
I tried to bypass this problem by tinkering make1.c
That allowed me to make the jam binary. But then, when I try to use it to
build boost libraries I get "memory fault (core dumped) "
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: bootstrapping woes
Date: Mon, 26 Nov 2001 09:58:08 -0500
If you are building Boost.Jam, you can keep it from trying to run yacc by
setting YACC to "" in your environment. This might be tricky (i.e., you
might need to escape the quotes). On my Win32 system:
make YACC=\"\"
I'm not sure why jam might be dumping core, though.
You might be able to give me enough information to help me help you, though:
why not try running jam with -d+5 and sending me the output of that?
If you build jam with CCFLAGS and LINKFLAGS set appropriately for debug
symbols you could send me a stack backtrace, which would be /really/ helpful.
P.S. We are starting to post pre-built Jam executables at boost. Once we
figure out how to build Jam for your system, would you consider contributing an executable?
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: bootstrapping woes
Date: Mon, 26 Nov 2001 11:11:08 -0500
Note that if it already tried to run yacc and even marginally succeeded, you
might need to restore your source tree to a pristine state first, since yacc
will overwrite jamgram.* for most values of * ;-)
From: Mike Chen <Mike.Chen@corp.palm.com>
Date: Wed, 28 Nov 2001 13:23:07 -0800
Subject: colorizing output
I'm new to the list and I have a question that I hope one of you
experts can answer.
I'd like to colorize the console output in Win2000 so that errors and warnings
are more easily distinguishable from other status output. Ideally, I'd like
different colors for errors, warnings, and status (with maybe other types
in the future).
I was able to get part of the way there by changing execunix.c to
use SetConsoleTextAttribute() before spawning the command and then
restoring the old color after the command runs. However, this makes
all output from the executed program the same color. In order to
differentiate between errors and warnings, I guess I might have to
parse the output before it goes on screen so I can choose the right
color, though I'm not sure how I'd do that when the output is going to stderr.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: colorizing output
Date: Wed, 28 Nov 2001 18:18:06 -0500
I use emacs on Win2K which handles all of that for me. Also, the Jam
extensions I've implemented at Boost include file and line number output
from -d+5, so that you can use emacs as a kind of post-mortem debugger, to
step through Jam execution.
From: Ramon Lim <Ramon_Lim@Allegis.com>
Date: Tue, 27 Nov 2001 17:12:45 -0800
Subject: Does perforce stop execution chain if it encounters an error??
I have two targets that are dependent on one another. If the action behind
one of the rules fails, I want execution to stop. I noticed that in the
code below, although I have an error in the actions schemaValidation, I
still proceed to rulesValidation. Is there a way around this?
I run the command >> jam -t schemaValidation
NOTFILE schemaValidation ;
NOTFILE rulesValidation ;
NOTFILE build ;
# schema Validation
# validate schema and generate output file
actions schemaValidation {
//Error occurs here
testwork.js ;
}
rule schemaValidation {
Depends schemaValidation : $(1) ;
Depends $(1) : $(2) ;
}
# rulesValidation
# Compile Rules and generate output file
actions rulesValidation { }
rule rulesValidation {
Depends rulesValidation : $(1) ;
Depends $(1) : $(2) ;
Depends $(2) : $(3) ;
}
# executer by jam -t tester
rule build {
Depends all : schemaValidation ;
Depends all : rulesValidation ;
Depends $(rulesValidation) : $(schemaValidation) ;
schemaValidation $(1) : $(2) ;
rulesValidation $(3) : $(4) ;
}
build $(appPath)\\output\\schemaEntity.xml : $(appPath)\\schemaEntity.xml :
$(appPath)\\output\\rules.txt : $(appPath)\\rules.txt ;
From: "Wang, Jason N (USRL)" <Jason.N.Wang@am.sony.com>
Date: Tue, 27 Nov 2001 17:36:51 -0800
Subject: Looking for Jam to Makefile conversion utility
I guess I am repeating Kevin's four-year-old question: does there exist a
utility which can convert a Jamfile into a Makefile? What should I do if I
need to port a jam project to a platform which cannot host a jam environment?
From: Ramon Lim <Ramon_Lim@Allegis.com>
Date: Wed, 28 Nov 2001 12:02:39 -0800
Subject: Question about stopping dependency tree flow if an error occurs
NOTFILE schemaValidation ;
NOTFILE schemaCheck ;
NOTFILE rulesValidation ;
NOTFILE build ;
Depends rulesValidation : schemaCheck ;
Depends schemaValidation : rulesValidation ;
Depends all : schemaValidation ;
Depends all : rulesValidation ;
Depends all : styleSheetBuild ;
# SCHEMA VALIDATION
actions schemaCheck {
# An error occurs here
testwork.js ;
}
rule schemaCheck {
# Only run if the output version of file is less current than real file
Depends schemaCheck : $(1) ;
Depends $(1) : $(2) ;
}
actions schemaValidation { ECHO "ACTION: In the schema validation" ; }
rule schemaValidation { ECHO "RULE: Schema Validation " ; }
# RULES VALIDATION
actions rulesValidation { }
rule rulesValidation {
# Only run if the output version of file is less current than real file
Depends rulesValidation : $(1) ;
Depends $(1) : $(2) ;
}
rule build {
# make rules validation dependent on schemaValidation
schemaCheck $(1) : $(2) ;
rulesValidation $(3) : $(4) ;
}
build output\\schemaEntity.xml : schemaEntity.xml : output\\rules.txt : rules.txt ;
I am fairly new to jam and had a question. In the code above, I have the
rulesValidation target dependent on the schemaValidation target (each of
these targets only runs a certain action if a certain file is more current
than its output version). So basically, the correct flow should be
schemaValidation -> rulesValidation.
ISSUE 1: I am finding that if an error occurs in the action
schemaValidation, execution continues to the target rulesValidation. Is
there a way to stop the dependency flow in this case?
Subject: RE: colorizing output
Date: Wed, 28 Nov 2001 18:16:16 -0800
From: "Chris Antos" <chrisant@windows.microsoft.com>
I did this a while ago. You're right -- it's more complicated than just
using SetConsoleTextAttribute.
I'll try to package up the change soon and post it here.
From: Mike Chen [mailto:Mike.Chen@corp.palm.com]
Sent: Wednesday, November 28, 2001 1:23 PM
Subject: colorizing output
I'm new to the list and I have a question that I hope one of you
experts can answer.
Date: Thu, 29 Nov 2001 23:45:50 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Does perforce stop execution chain if it encounters an error??
You can't have that. The dependency graph is an acyclic graph, and trying
to go against the basic design has very bad karma.
What you can do is make one of the things dependent on the other. That'll
achieve your goal, I guess... but it's a hack.
Depends rulesValidation : schemaValidation ;
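A minimal sketch of that suggestion, with placeholder commands: because jam prunes the dependents of a target whose action fails, chaining the two pseudo-targets stops rulesValidation when schemaValidation's action exits non-zero.

```jam
NOTFILE schemaValidation rulesValidation ;

Depends rulesValidation : schemaValidation ;
Depends all : rulesValidation ;

actions schemaValidation
{
    validate-schema schema.xml     # hypothetical command; if it fails ...
}

actions rulesValidation
{
    validate-rules rules.txt       # ... jam never runs this
}
```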
Date: Thu, 29 Nov 2001 23:48:32 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Looking for Jam to Makefile conversion utility
You cannot port a jamfile to a makefile, in general.
What you can do is write a simple tool that runs "jam clean", runs
"jam whatever" and captures command execution, and finally writes a makefile:
whatever: <all files that weren't deleted by jam clean go here>
<the captured commands go here>
It's not much, but I guess it's sufficient for bootstrapping.
From: Bryan Branstetter <BBranstetter@s8.com>
Date: Thu, 29 Nov 2001 17:36:30 -0800
Subject: jam clean
I've got a question regarding 'jam clean': it looks to be defined
'piecemeal', as my Jambase shows:
actions piecemeal together existing Clean { $(RM) $(>) }
However, when I compile my entire tree and run 'jam clean' on my Win2k box, I get:
C:\src>jam clean
Compiler is Intel C/C++
...found 1 target...
...updating 1 target...
Clean clean
The following character string is too long:
-- cut --
I have searched through all of our Jam extensions, and we don't redefine the
actions on Clean anywhere, so I'm stumped. I thought the idea of piecemeal
was to break the commands into chunks which the OS could handle, correct?
Any guidance would be greatly appreciated.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: jam clean
Date: Thu, 29 Nov 2001 21:20:41 -0500
Probably MAXLINE is not set right for your machine. My experiments show that
Win2K has a correct MAXLINE of 2047.
From: Ramon Lim <Ramon_Lim@Allegis.com>
Subject: RE: Does perforce stop execution chain if it encounters
Date: Fri, 30 Nov 2001 15:40:05 -0800
I am running the following code with the command : >> jam build
This should run schema validation and then rules validation. If schema
validation fails and/or outschema.txt is more up to date than
schema.txt, I don't want the rules validation to run. I only want the rules
validation to run if the schema validation executed successfully AND if
rule.txt is more up to date than outrule.txt. I am running into trouble
adding these two conditions for the rules validation to run.
Any help would be very useful.
# SCHEMA VALIDATION
actions schemaValidation {
# AN ERROR OCCURRS HERE
}
rule schemaValidation {
Depends schemaValidation : $(<) ;
Depends $(<) : $(>[1]) ;
Depends $(>[1]) : $(>[2]) ;
}
# RULES VALIDATION
actions rulesValidation {
}
rule rulesValidation {
ECHO "RULE: In the rule validation" ;
# If I execute with the code statement below, rule v. indeed does not run
# if schema v. fails but it does not check the timestamp for outrule.txt and rule.txt
Depends rulesValidation : $(<) ;
# If I execute the code below instead, timestamp for outrule.txt and rule.txt is checked properly and rule v. called
# but the status of the success of schema v. is ignored.
Depends rulesValidation : $(<[2]) ;
Depends $(<[2]) : $(>) ;
#IS THERE ANY WAY FOR ME TO MERGE THE TWO????
}
local schemaValidation ruleValidation ;
schemaValidation = "schemaValidation" ;
ruleValidation = "ruleValidation" ;
NOTFILE schemaValidation ;
NOTFILE ruleValidation ;
Depends ruleValidation : schemaValidation ;
schemaValidation $(schemaValidation) : outschema.txt schema.txt ;
rulesValidation $(ruleValidation) outrule.txt : rule.txt ;
Depends build : schemaValidation rulesValidation ;
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Does perforce stop execution chain if it encounters
Date: Mon, 3 Dec 2001 12:49:09 -0800
Try something like ...
You forgot to depend the rule validation on
the schema validation. Instead, you were depending
the set of all rule validations on the set of
all schema validations. The difference is that
individual rule validations are independent of
the schema validations.
Jam will build all targets that can be built, pruning
a branch from the tree only when a dependency fails.
This means that jam could (or thinks it can) validate
all the rules. The fact that "validateRules" will
fail because "validateSchema" failed has no bearing on
the chances of any particular validate-rule invocation failing.
It did not help that you were not using $(<) as your
output or result file, and that you were giving one
of your NOTFILE infrastructure targets the same name
as one of your specific targets.
rule validateSchema {
local schema = $(<[1]) ;
local source = $(>) ;
Depends validateSchema : $(schema) ;
Depends $(schema) : $(source) ;
}
rule validateRule {
local rule = $(<[1]) ;
local source = $(>) ;
local schema = $(3) ;
Depends validateRules : $(rule) ;
Depends $(rule) : $(schema) ;
Depends $(rule) : $(source) ;
}
actions validateSchema {
# use command false to force a failure
echo "validated" > $(<[1])
# false
}
actions validateRule { echo "validated" > $(<[1]) }
Depends all : validateSchema validateRules ;
NOTFILE validateRules validateSchema ;
validateSchema SchemaA.val : SchemaA.src ;
validateRule RuleB.val : RuleB.src : SchemaA.val ;
validateRule RuleC.val : RuleC.src : SchemaA.val ;
From: "Goral, Jack" <Jack_Goral@NAI.com>
Date: Mon, 10 Dec 2001 12:22:41 -0600
Subject: If you had to start from scratch...
If you had to start from scratch your big(?) project, would you go with
'jam' again instead of makefiles?
Subject: Re: If you had to start from scratch...
From: Jose Vasconcellos <jvasco@bellatlantic.net>
Date: 11 Dec 2001 11:30:07 -0500
Yes, jam provides a simple and concise way to describe the dependencies
of your project. It has a clear and clean mechanism for separating the
dependency description from the construction rules and actions. And it's
cross platform and fast. What more do you want?
Date: Tue, 11 Dec 2001 11:59:03 -0500
From: "Thatcher Ulrich" <tulrich@oddworld.com>
Subject: Re: If you had to start from scratch...
Well, those are the advantages. It has drawbacks too. The question
is, do the advantages outweigh the drawbacks, vs other solutions? (I
won't volunteer an answer because I haven't used it enough yet.)
From: "Kimpton, Andrew" <awk@pulse3d.com>
Date: Tue, 11 Dec 2001 12:58:12 -0800
Subject: Building different objects from different sources with the same name
We use Jam to build our product(s) from a single hierarchical source tree.
We place the Binary output files into a separate hierarchy using LOCATE_TARGET.
Many of our 'components' have dependencies on each other so to minimise
build times we use SubDir and SubInclude - for example :
Component A (an executable) uses Component B (a shared library) which in
turn is built from some static libraries (C & D) and other sources.
Our Jamfiles have
A/Jamfile :
SubDir TOP A ;
if ! $(s) {
SubInclude TOP B ;
SubInclude TOP C ;
SubInclude TOP D ;
}
<Blah-blah>
LinkLibraries A$(SUFEXE) : B$(SUFLIB) ;
---
B/Jamfile :
SubDir TOP B ;
if ! $(s) {
SubInclude TOP C ;
SubInclude TOP D ;
}
<blah-blah>
LinkLibraries B$(SUFSHR) : C$(SUFLIB) D$(SUFLIB) ;
---
We use the ! $(s) test so that individual pieces can be built independently.
A problem has arisen, however, since components A & B both have a source file
xyz.c: one in A/xyz.c, the other in B/xyz.c.
If only B is built, things work fine; however, if we build A and rely on
dependencies to trigger the build of B, building B$(SUFDLL) uses A/xyz.c, NOT
B/xyz.c. I've tried working with SEARCH_SOURCE, LOCATE_* et al, all to no
avail. Things are perhaps made even more complicated by the fact that 90% of
the time the builds are running on Windows (the other 10% is Unix - and I
don't want to break those builds 8-)
Subject: Re: Building different objects from different sources with the same name
Date: Tue, 11 Dec 2001 15:40:34 -0700
Usually grist takes care of things like this. A/xyz.c will be known
as <A>xyz.c and B/xyz.c will be known as <B>xyz.c.
For the runs that do not work, I'd just run jam with a high debugging
level and redirect its output to a file. jam -d7 spits out more than
enough. Then just search for xyz.c within jam's output to see what is
happening with it. The two xyz.c files should be differentiated with
grist, as I describe above.
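As a hedged illustration of the grist mechanism described above (directory names per the earlier Jamfiles): SubDir sets SOURCE_GRIST from the directory tokens, and the Objects machinery applies it to every source and object name.

```jam
# After "SubDir TOP A ;", SOURCE_GRIST is "A", so A's copy is tracked as
#   <A>xyz.c  ->  <A>xyz$(SUFOBJ)
# while after "SubDir TOP B ;" the same file name becomes
#   <B>xyz.c  ->  <B>xyz$(SUFOBJ)
# FGristSourceFiles (from the stock Jambase) shows the gristed form:
local g = [ FGristSourceFiles xyz.c ] ;
ECHO xyz.c is gristed as $(g) ;
```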
Subject: Re: If you had to start from scratch...
Date: Tue, 11 Dec 2001 15:18:22 -0700
From: "Matt Armstrong" <matt+dated+1008541109.ba82d0@lickey.com>
The only criterion for the project you gave was that it was "big", so
it is a little hard to make the decision based solely on that.
Examples of where jam may be weak are:
- compiling Java source -- I have no direct knowledge here, but I
don't think Java dependencies can be expressed easily in Jam.
- bigger projects (>5000 files). Jam scans all source files every
time you run it looking for header files, and automatically adds
them to the dependency tree. This is nice for smaller projects,
but the scan time is prohibitive for larger ones. There are
patches that allow jam to cache this dependency information, and
I really wish one would be incorporated into the upstream jam distribution.
- projects where you distribute source to a lot of folks -- they
may not want to deal with an obscure build tool.
Examples of where I think jam is strong are:
- non-huge projects -- it is very fast
- projects requiring support for many different compilers on
different platforms -- jam's ability to separate describing the
dependency tree (rules) from how to build targets (actions)
really shines here.
- projects that need to be built on multiple platforms -- jam's
platform independence shines here.
But in general, jam is my first choice, even for larger projects.
Date: Wed, 12 Dec 2001 12:25:09 -0800 (PST)
Subject: Re: If you had to start from scratch...
Jam and Java aren't a great match, but it is doable. The main problem is
if you feed the Java source files through one at a time, because then
you're loading the silly JVM every time (read: sloooow).
The first build-process Jam was ever used for (literally -- it was the
build-process that Jam was used in conjunction with during its
development) was >12000 files, and it was still faster than the Make-based
process it was replacing.
If I have a choice, I always choose Jam over Make.
Subject: Re: If you had to start from scratch...
Date: Wed, 12 Dec 2001 14:53:36 -0600
From: "Gregg G. Wonderly" <gregg@skymaster.c2-tech.com>
The ultimate issue is that with Java there are no header files. The class
file replaces them, so if you have to rebuild a class file, you have to feed
the source file to the compiler. Thus from a clean slate, you compile everything
at once. From a partially dirty slate, you have to compile the things that
matter. Determining what matters is the issue. Change an 'interface', and you
need to recompile all implementers, and their subclasses (how do you know who
the subclasses are, without foreknowledge?)!
From my perspective, you should always recompile all files for your production
build of anything that goes into a single jar, and then you need to recompile
anything that uses that jar next. Thus, there is a dependency tree based on
jar files, but never based on .class files...
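The jar-level dependency tree suggested above might be sketched in Jam like this; the rule name, source names, and "classes" staging directory are all invented, and the javac/jar invocations assume a standard JDK on the PATH.

```jam
rule BuildJar
{
    # $(<) is the jar, $(>) the full set of .java sources behind it:
    # any source change rebuilds the whole jar in one javac run.
    Depends $(<) : $(>) ;
    Depends jars : $(<) ;
    Clean clean : $(<) ;
}

actions BuildJar
{
    javac -d classes $(>)
    jar cf $(<) -C classes .
}

NOTFILE jars ;
Depends all : jars ;
BuildJar mylib.jar : Foo.java Bar.java ;   # hypothetical sources
```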
From: Jose Vasconcellos <jvasco@bellatlantic.net>
Date: 12 Dec 2001 16:54:46 -0500
Subject: Jam with Java (Was: If you had to start from scratch...)
Java users interested in using jam may want to check out the
wonka project at http://wonka.acunia.com/ They have modified
jam to support java. Here's a link to the source:
http://wonka.acunia.com/acunia-jam.tar.gz
Subject: Re: If you had to start from scratch...
Date: Wed, 12 Dec 2001 16:38:22 -0700
Faster than make doesn't necessarily define fast enough, especially
with such an obvious improvement like header caching sitting right
there screaming "implement me! implement me!"
I just patched Jam for cached header dependencies on a project (using
Craig McPheeters' patches, with improvements that I'll post here eventually).
Our project is small by comparison to jam's original Sybase project --
the cached headers only number 3600. But people are reporting
speedups of the "jam finds nothing to compile" case on the order of
6-10x. Depending on hardware, the time drops from 100 seconds to
11 at one end of the range, and from 32 seconds to 6 at the other. Considering
that some people are saving over a minute per compile, and they might
compile 10-20 times a day, this adds up to real $$$ over the course of
a project. And it saves a few impatient nerves as well.
Date: Thu, 13 Dec 2001 10:25:24 +0100
From: Chris Gray <chris.gray@acunia.com>
Subject: Re: If you had to start from scratch...
This is somewhat mitigated if you use Jikes. (BTW your OS shouldn't be
physically loading the JVM each time - but it still needs to re-initialise it.)
I'm not sure you need to go that far, but better safe than sorry. Certainly some
very puzzling things can happen if you don't recompile everything you should:
most notoriously, compile-time constants imported from some interface are
compiled into the byte code, so if you change them you need to recompile
every class which implements that interface.
As was already mentioned, Acunia has modified Jam for use with Java class
libraries. I'm not saying that our version already solves all these problems,
but it's probably a good place to start.
Date: Thu, 13 Dec 2001 11:17:42 +0100
From: Chris Gray <chris.gray@acunia.com>
Subject: Re: If you had to start from scratch...
Actually C doesn't have header files either. All C has is a
mechanism (#include) for incorporating arbitrary text from
another file. *By convention* we group related declarations
etc. into files which *by convention* have the extension .h,
and *by convention* we #include all the ones we need near
the start of each .c file. Many say that .h files should not
#include other .h files, and/or that .h files should only contain
declarations not definitions, but even these are not universally
agreed upon. Languages such as Java, Modula, Ada, these
days probably even COBOL, have much more explicit ways
to describe dependencies than C.
From: "Smith, Stephen" <stsmith@hrblock.com>
Date: Fri, 14 Dec 2001 14:56:16 -0600
Subject: Jam on OpenVMS
I just learned about Jam through Boost and have been
experimenting. I ran into a few issues using it on OpenVMS:
1) Objects compiled from C++ source must be linked with
CXXLINK. I hacked some code into the Main rule's
procedure to get around this:
if $(VMS) {
for suffix in $(>:S) {
switch $(suffix) {
case .cpp : LINK on $(<)$(SUFEXE) = cxxlink ;
case .cxx : LINK on $(<)$(SUFEXE) = cxxlink ;
}
}
}
I think it would make more sense for the default Jambase
to use variables C++LINK and C++LINKFLAGS, but I wasn't
that ambitious given that development of Jam seems to have stalled.
2) .cxx is the extension the OpenVMS C++ compiler (CXX) assumes
if passed a file with no extension. I imagine this has led
some OpenVMS users to use the .cxx extension on all their C++
files. It also seems like a common enough extension that Jam
should support it. I added another case statement to the
switch statement in the Object rule's procedure:
case .cxx : C++ $(<) : $(>) ;
3) GenFile doesn't work very well. On OpenVMS the actions
associated with GenFile are:
actions GenFile1 { mcr $(>[1]) $(<) $(>[2-]) }
Unfortunately, this only works if $(>[1]) contains a directory
specification. Otherwise mcr assumes a default file specification
of SYS$SYSTEM:.EXE.
I solved this issue by replacing the above actions with the
following:
actions GenFile1 {
image = "$" + f$parse( "$(>[1])" )
image $(<) $(>[2-])
}
4) I don't like the fact that GenFile passes the target name as the
first parameter. Doing that buys nothing significant and makes
the rule less useful in general. Is there another rule similar
to GenFile that I am not aware of?
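One possible substitute, sketched rather than tested: a generator rule that leaves the target name out of the argument list and redirects stdout into the target instead (the names below are made up, and the redirection form is for Unix/NT shells, not DCL).

```jam
rule GenFileStdout
{
    Depends $(<) : $(>) ;
    Clean clean : $(<) ;
}

actions GenFileStdout
{
    # $(>[1]) is the generator, the rest are its arguments; the target
    # never appears on the generator's own command line.
    $(>[1]) $(>[2-]) > $(<)
}

GenFileStdout grammar.c : makegram grammar.y ;   # hypothetical names
```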
5) Finally, I have a comment specific to the Boost version of Jam.
Boost Jam uses alloca(), which is not portable.
Date: Fri, 14 Dec 2001 16:35:58 -0500 (EST)
From: Janos Murvai <murvai@ncbi.nlm.nih.gov>
Subject: question about installation..
I tried to install the jam program using make.
It started to compile, but then:
ld: fatal: Symbol referencing errors. No output written to jam0
make: *** [jam0] Error 1
Exit 2
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Jam on OpenVMS
Date: Fri, 14 Dec 2001 17:04:48 -0500
The alloca call comes from bison, which is the only parser generator I have
on my machine. The best I can suggest is to regenerate the parser yourself
using yacc. The build process does this, but only after it has bootstrapped Jam0.
Hmm, I wonder if I can ship the perforce version of the parser files for
bootstrapping purposes...
I don't always pay attention to this list. Please post messages regarding
Date: Fri, 14 Dec 2001 17:21:04 -0500 (EST)
From: Janos Murvai <murvai@ncbi.nlm.nih.gov>
Subject: Re: Jam on OpenVMS
I have never used yacc. How can I do that?
From: "Richard Goodwin" <richardg@imageworks.com>
Date: Fri, 14 Dec 2001 15:01:03 -0800
Subject: Building variants of the same target
I am working on a system where I need to build variants of the same target.
EX: If I am building a libgraphics.a I need to build both debug and
optimized versions of this lib. Is there an easy way I can modify/extend
Jambase or Jamrules to do this without too much work.
Note: I would like to stay with stock jam (not use boost or another jam variant).
Subject: Re: Building variants of the same target
Date: Mon, 17 Dec 2001 09:07:41 -0700
If you don't mind running jam once to build the debug version and once
to build the release version it is pretty simple. See the Jamfile
that builds jam itself for examples of how it sets the LOCATE_TARGET
variable based on what os is being built. You'd do that and similar
things with CCFLAGS, etc.
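For reference, a sketch of that approach in Jamrules; the BUILD variable and directory names here are invented, and "jam -sBUILD=debug" versus plain "jam" would select the variant. (With a SubDir-based tree you would likely set ALL_LOCATE_TARGET instead of LOCATE_TARGET.)

```jam
# Default to a release build unless -sBUILD=debug was given.
BUILD ?= release ;

if $(BUILD) = debug
{
    CCFLAGS += -g ;         # debug symbols
    OPTIM = ;               # no optimization
    LOCATE_TARGET = debug ;
}
else
{
    OPTIM = -O ;
    LOCATE_TARGET = release ;
}
```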
Date: Tue, 18 Dec 2001 12:47:49 -0800 (PST)
Subject: Re: Building variants of the same target
Perusing the mailing-list archive is a good way to look for answers to
this sort of question. For example:
Date: Tue, 18 Dec 2001 21:35:42 -0700
Subject: Jam fix for "on target" variables during header scan
I just submitted a bug fix for jam to the perforce public depot. See
the change provided below if you'd like to grab the fix before the
next release of jam.
In addition to what is outlined in the change description, I'll
describe how I came across this bug and why it has real world
implications.
In implementing Craig McPheeters' header caching in our copy of Jam, I
noticed a large number of object files, directories, and other "non
source" files in the header cache. Investigating this, I found that
somehow the global values of HDRSCAN and HDRRULE got set.
It turns out that a few of the source files in our tree #include
themselves. When this happens, the target is present in both $(<) and
$(>), so when HdrRule propagates HDRSCAN and HDRRULE from
$(<) to $(>), it sets these "on target" values on $(<).
During the execution of HdrRule, the "on target" settings on $(<)
actually serve as a temporary storage area for the *global* values of
the settings (the "on target" and global values are swapped in
pushsettings() before the HdrRule is called). So any change on $(<)'s
"on target" settings will actually be swapped out into the global
values after the HdrRule finishes.
This is how HDRSCAN and HDRRULE got set globally, and every subsequent
target that jam considered was scanned for C-style #include lines, be
they object files, directories, or other binary files.
Fixing this bug reduced the number of files in the header cache from
1500 to 1100. This means that roughly 400 object files were not
header scanned (as is appropriate), so things go a little faster.
Change 1179 by matt_armstrong@... on 2001/12/18 20:07:50
Description of the bug this change fixes:
If a HDRRULE does this:
FOO on $(<) = a b c ;
Then after the HDRRULE finishes, the *global* variable FOO
will be set, and the "on target" variable FOO on $(<) won't be
changed at all.
This can occur with the default Jambase's HdrRule if a file
#includes itself.
Affected files ...
... //guest/matt_armstrong/jam/hdrscan_on_target_fix/make.c#2 edit
... //guest/matt_armstrong/jam/hdrscan_on_target_fix/rules.c#2 edit
... //guest/matt_armstrong/jam/hdrscan_on_target_fix/rules.h#2 edit
Differences ...
==== //guest/matt_armstrong/jam/hdrscan_on_target_fix/make.c#2 (text) ====
158a159
189c190,194
< pushsettings( t->settings );
---
214c219,220
< popsettings( t->settings );
---
==== //guest/matt_armstrong/jam/hdrscan_on_target_fix/rules.c#2 (text) ====
27d26
< * usesettings() - set all target specific variables
29a29
239a240,254
==== //guest/matt_armstrong/jam/hdrscan_on_target_fix/rules.h#2 (text) ====
170a171
Date: Tue, 18 Dec 2001 23:16:11 -0700
Subject: Improved Header Scanning for Jam
Inspired by David Abrahams' BINDRULE extension in Boost Jam and Diane
Holt's SCANFILE extension in
I came up with a synthesis of the two that I think is the most
"jamlike" (whatever that means).
90% of the credit goes to Diane's 1999 implementation.
The only change this has over it is the communication of the header's
bound name in a new 3rd argument to HDRRULE (instead of through a
global variable).
As Diane's tests suggested, this finds a few stray headers that
the stock jam header-search algorithm misses. I wonder if this
could make it into the stock distribution? It cannot produce
incorrect results, as it simply finds *more* headers than before.
The change is in two parts:
The HDRRULE is called with a new 3rd argument -- the bound name of $(<):
==== headers.c ====
if( lol_get( &lol, 1 ) )
+ {
+ /* The third argument to HDRRULE is the bound name of
+ * $(<) */
+ lol_add( &lol, list_new( L0, t->boundname ) );
evaluate_rule( hdrrule->string, &lol );
+ }
The default HdrRule is changed to add the directory the header was
found in to HDRSEARCH if it is not already there:
==== Jambase ====
rule HdrRule
{
- # HdrRule source : headers ;
+ # HdrRule source : headers : bound name of $(<) ;
# N.B. This rule is called during binding, potentially after
# the fate of many targets has been determined, and must be
...
INCLUDES $(<) : $(s) ;
+
+ # If the directory holding this header isn't in HDRSEARCH,
+ # add it.
+ if ! $(3:D) in $(HDRSEARCH)
+ {
+ HDRSEARCH += $(3:D) ;
+ }
+
SEARCH on $(s) = $(HDRSEARCH) ;
NOCARE $(s) ;
Sent: Wednesday, December 19, 2001 1:16 AM
Subject: Improved Header Scanning for Jam
FWIW, this approach is somewhat more limited in its flexibility. For one
thing, I use BINDRULE to detect the location of Jam files brought in with
"include". Since included files use SEARCH, I can, for example, keep a stock
Jamrules file at the end of the SEARCH path for includes, so there will be
no error if a user doesn't supply Jamrules... and I can detect whether it
was their Jamrules file or mine that was found.
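For readers unfamiliar with the extension, here is a rough sketch of that BINDRULE technique, assuming the Boost Jam semantics described above (the nominated rule is called with the target's name and the path it was bound to); the rule name and search-path variables are invented.

```jam
rule NoteBinding
{
    # $(1) is the target name, $(2) the file it was bound to.
    if $(1) = Jamrules
    {
        ECHO using Jamrules from $(2) ;
    }
}

BINDRULE = NoteBinding ;

# A stock fallback Jamrules sits at the end of the search path, so the
# include never errors out even if the user supplied none.
SEARCH on Jamrules = $(TOP) $(FALLBACK_DIR) ;   # invented variables
include Jamrules ;
```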
Date: Thu, 20 Dec 2001 13:30:36 -0700
Subject: Replacement command shell for Win32?
WinNT and Win2k can pass command lines up to 10240 bytes, but
there are cases where cmd.exe barfs with a buffer that long. So
bumping the MAXLINE setting in jam.h can lead to problems.
E.g. cmd.exe's own del and echo commands can't handle anything longer
than about 1k.
Does anybody know of a nice little command shell for Win32 that could
do the job? Preferably it'd be free and come with source. I'm
thinking of a native port of one of the free Unix Korn or Bourne
shells, or maybe something more Windows-centric.
From: "Jerjiss, Allen" <Allen_Jerjiss@intuit.com>
Subject: RE: Replacement command shell for Win32?
Date: Thu, 20 Dec 2001 12:46:14 -0800
Give Cygwin a try at www.cygwin.com.
Build & Release Engineer, Quicken.com
Subject: RE: Replacement command shell for Win32?
Date: Thu, 20 Dec 2001 13:51:24 -0700
From: "Mike Steed" <msteed@altiris.com>
I use the Win32 port of zsh:
ftp://ftp.blarg.net/users/amol/zsh
Amol no longer works on zsh, but he does maintain the Win32 port of tcsh:
ftp://ftp.blarg.net/users/amol/tcsh
According to the docs, these shells can handle command lines up to 64
KBytes, but I haven't verified this.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Replacement command shell for Win32?
Date: Fri, 21 Dec 2001 19:48:55 -0500
Note also that in many cases (e.g. when your command is multiple lines), the
command goes through a .bat file, which on Win2K has a maximum line length
of 2047 characters.
Date: Sat, 22 Dec 2001 17:19:44 +1100
From: Darrin Smart <darrin@suresoftware.com>
Subject: Multiple Jambase files
I've just started using Jam in an attempt to replace a very large
set of recursive makefiles.
Our project contains lots of deeply nested directories, basically
in two branches. One branch is called "tools" and is a bunch of
development tools built on the local machine. The other is called
"src" which is cross-compiled using some of the programs in tools
and some system-installed programs as well.
project/tools/...
project/src/...
The tools build with gcc and that is all working just fine with the
default Jambase file.
I am trying to get the src tree to build with a cross compiler
called xcc. xcc has a completely different set of flags from gcc.
gcc -c -o project/tools/eg/eg.o -O3 -I/some/hdr/dir
project/tools/eg/eg.c
but project/src/eg2/eg2.c needs:
xcc project/src/eg2/eg2.c -eas -v=/another/hdr/dir
=fd=project/src/eg2/eg2.r
I think what I want to do is have a new Jambase file that defines
rules and actions that only apply within the src subtree. However I
still want the build system to be rooted at the top-level
("project") so a single jam command can build the tools and then
the sources.
Is it possible to do this? I tried making project/tools/Jambase and
project/src/Jambase and using SubDir to set it up but it seems that
the last one parsed is used globally.
On a related note, it would be really cool if more example
Jambase/Jamfile setups were published as part of the documentation.
Date: Sat, 22 Dec 2001 17:53:11 +1100
Subject: Re: Multiple Jambase files
From: Darrin Smart <darrin@suresoftware.com>
Sorry, substitute Jamrules for Jambase in the text below.
From: "Andreas Haberstroh" <andreas@ibusy.com>
Date: Sun, 23 Dec 2001 16:07:41 -0800
Subject: MSVC Libraries
I stumbled upon JAM by accident and thought, this is just what I
needed. But after a few days of working with it, I've discovered a
little annoyance with Microsoft's LIB.EXE program:
it does not have a response file format.
Now, the trick is, how do you feed individual files to it via JAM?
Has anyone attempted this?
I'm currently trying to create little libraries and LIB them together
as a temporary workaround, but that is proving difficult as well,
since I haven't mastered the JAM syntax yet.
Subject: Re: Multiple Jambase files
Date: Mon, 24 Dec 2001 21:10:40 -0700
You can't do that with the default Jambase's Cc action, since it
hard-codes some of the command-line flags.
Yes, but it's non-trivial.
Yes: for any rule named X there is one and only one "actions X" that
will work. In this case, you're probably calling the rule Cc for both
the gcc and xcc trees. Call them different names and you can use them
both in the same Jam run. Then the problem becomes getting the built-in
Objects rule to call xcc_Cc when appropriate. I don't have any
direct experience getting Jam to build with two different compilers in
the same run, so I won't be much help.
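A hypothetical sketch of the renamed-rule idea, reusing the xcc flags quoted earlier in the thread (the rule name and the XCC_HDRS variable are invented):

```jam
rule XccObject
{
    Depends $(<) : $(>) ;
    Clean clean : $(<) ;
}

actions XccObject
{
    xcc $(>) -eas -v=$(XCC_HDRS) =fd=$(<)
}

# In Jamfiles under project/src, build with the cross compiler:
XccObject eg2.r : eg2.c ;
```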
Subject: Re: MSVC Libraries
Date: Mon, 24 Dec 2001 21:16:06 -0700
What exactly is the problem you are having? I.e. why do you want to
feed it files one at a time?
Anyway, you can try sticking this in your Jamrules file:
if $(NT) && $(MSVCNT) {
actions updated Archive {
if exist $(<) set _$(<:B)_=$(<)
$(AR) /out:$(<) %_$(<:B)_% $(>)
}
}
This removes the "together piecemeal" portion of the Archive actions
found in the default Jambase.
Date: Thu, 27 Dec 2001 17:17:34 -0800 (PST)
From: cmcpheeters@aw.sgi.com (Craig McPheeters)
Subject: Changes in my Jam guest branch
I've made several changes to the branch of jam in my guest branch. If you have
a copy, you may want to update it. There are no changes to the header
cache code. I've added a few new extensions and modified some of the earlier
ones slightly.
An earlier change in my branch was to revert a 2.3 change in execunix() back
to its 2.2 behaviour. The 2.3 change was, on NT, to always invoke an action
with a special shell through a temporary file. I had reverted this change, but
have now un-reverted it, and in fact the change now applies to Unix as well as NT.
One of the other changes in the branch is to allow action blocks of arbitrary
size to be generated on Unix or NT, including blocks larger than
could be executed via execvp() on some flavours of Unix. With this new change,
since all actions which have special shells are invoked through an intermediate
file, arbitrarily large blocks work on Unix in the same manner as on NT.
Additional changes are:
* support for % complete output as jam executes
* a new debug mode, -d+11, which outputs information on a node's fate changes.
There were cases when a node was being rebuilt, but the reason wasn't
obvious. With this debug level, you can almost always figure out why
nodes are being updated
* created some new jam variables to describe job (-jn) information. The
variables are:
JAM_NUM_JOBS - the integer given in -jn
JAM_MULTI_JOBS - unset if JAM_NUM_JOBS < 2, set otherwise
JAM_JOB_LIST - a list of the job slot values (i.e., for -j4, it is: 0 1 2 3)
These can be used in a variety of ways in supporting multi-job builds
* added job slot expansion to action blocks: the sequence !! in an action
block is replaced by the job slot the action is running in. This can be
used, for example, in the generation of unique temporary file or
directory names. Actions such as yacc could use this, as they may have
fixed temporary file names (which could go into a job-slot-unique directory)
* improved the -d+10 debug level, the dependency graph. It now shows the
flags on the node as well (NOUPDATE, NOTFILE, etc.)
* only issue the '...skipped' messages for nodes which would have been built.
This fixes a problem where the percent done could go beyond 100% if
many targets are skipped
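As a hedged illustration of the job-slot expansion (the !! sequence exists only in this guest branch, and the rule and directory names here are made up), a yacc-style action could use the slot number to get a scratch directory unique to its job:

```jam
# Sketch only: relies on the guest-branch !! job-slot expansion.
# The /tmp path and MyYacc rule name are hypothetical.
actions MyYacc {
    mkdir -p /tmp/yacc-slot-!!
    cd /tmp/yacc-slot-!! && yacc -d $(>)
    mv /tmp/yacc-slot-!!/y.tab.c $(<)
}
```

With -j4, four such actions can run concurrently without their fixed y.tab.c file names colliding, since each job writes into its own slot directory.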
Date: Sat, 29 Dec 2001 15:35:20 -0800 (PST)
From: cmcpheeters@aw.sgi.com (Craig McPheeters)
Subject: Re: Replacement command shell for Win32?
Sorry to revive an old thread.
Bat files on NT can grow to be really large without problem, as long as
each of the lines in it are not longer than the line length limitation.
2047 characters or so sounds about right.
I hate to give the wrong attribution to it... I think Diane explained it
first? Look through the archives for the original posting.
The trick is to create two jam variables:
SPACE = " " ;
NEWLINE = "
" ;
Because of the way Jam does its variable expansion, you can use the
expansion of these variables in creative ways. If you need an action to
record all of the $(>) files into another file, the easy way is:
echo $(>) >> $(<)
but that can violate the line length limitation on NT. The trick is to
do this instead:
echo$(SPACE)$(>)$(SPACE)>>$(SPACE)$(<)$(NEWLINE)
It took me a little while to understand its beauty, but now whenever I
use that trick, I get a little smile on my face. Thanks to the original
poster for showing it to me.
There is an extension in my guest branch which removes the limitation on
the size of an action block. With that extension, and the trick above, it's
possible to safely create really large action blocks on NT (or Unix) and
have them always work. Often that trick is enough to get around the
limitations of cmd.exe.
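To make the mechanics concrete: because Jam forms the product of adjacent expansions, an action line written with no literal whitespace expands once per element of $(>), each copy ending in its own newline. A minimal sketch (the ResponseFile rule name is illustrative):

```jam
SPACE = " " ;
NEWLINE = "
" ;

# Each element of $(>) yields its own complete
#     echo <file> >> <target>
# command line, so no single line ever approaches the
# cmd.exe line length limit, no matter how many sources there are.
actions ResponseFile {
    echo$(SPACE)$(>)$(SPACE)>>$(SPACE)$(<)$(NEWLINE)
}
```

Contrast this with the plain `echo $(>) >> $(<)`, which expands to one giant line containing every source file.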
Date: Sat, 29 Dec 2001 18:47:46 -0500
Subject: Re: Replacement command shell for Win32?
From: David Abrahams <david.abrahams@rcn.com>
I don't think you need to resort to tricks. I do the same thing with a
piecemeal action that builds the response file. Actually, there are two
actions: the first erases the response file if it already exists, and
the second one appends all the filenames. The relevant code is below. It
makes use of two of my Jam extensions (named arguments and indirect rule
invocation), but they are irrelevant to the basic technique. I know the
code looks bigger than what Craig posted, but keep in mind that he's
only summarizing.
# build TARGETS from SOURCES using a command-file, where RULE-NAME is
# used to generate the build instructions from the command-file to
# TARGETS
rule with-command-file ( rule-name targets * : sources * ) {
# create a command-file target and place it where the first target
# will be built
local command-file = $(<[2]:S=.CMD) ;
LOCATE on $(command-file) = $(gLOCATE($(targets[1]))) ;
build-command-file $(command-file) : $(sources) ;
# Build the targets from the command-file instead of the sources
Depends $(targets) : $(command-file) ;
local result = [ $(rule-name) $(targets) : $(command-file) ] ;
# clean up afterwards
remove-command-file $(targets) : $(command-file) ;
return $(result) ;
}
# Used to build command files from a list of sources.
rule build-command-file ( command : sources * ) {
Depends $(command) : $(sources) ;
# First empty the file
command-file-clear $(command) : $(sources) ;
# Then fill it up piecemeal
command-file-dump $(command) : $(sources) ;
}
# command-file-clear: silently remove the target if it exists
if $(NT) {
# NT needs special handling because DEL always barks if the file isn't found
actions quietly command-file-clear {
IF EXIST "$(<)" $(RM) "$(<)"
}
} else { actions quietly command-file-clear { $(RM) "$(<)" } }
# command-file-dump: dump the source paths into the target
actions quietly piecemeal command-file-dump { echo "$(>)" >> "$(<)" }
# Clean up the temporary COMMAND-FILE used to build TARGETS.
rule remove-command-file ( targets + : command-file ) {
TEMPORARY $(command-file) ;
Clean clean : $(command-file) ; # Mark the file for removal via clean
}
actions quietly piecemeal together remove-command-file { $(RM) $(>) }
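For context, a call might look like the following (hedged: Link12 is a made-up rule name; this also assumes the named-argument and indirect-invocation extensions mentioned above):

```jam
# Hypothetical use: build app.exe from objects via a command file,
# where Link12 is some rule taking targets : sources and reading
# the command file in its actions.
with-command-file Link12 app.exe : a.obj b.obj c.obj ;
```

The point of the indirection is that any link-style rule can be passed in by name, so the command-file plumbing lives in one place.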
Date: Sun, 30 Dec 2001 14:54:11 +1100
Subject: Re: Multiple Jambase files
From: Darrin Smart <darrin@suresoftware.com>
It seems like a bit of a shortcoming in Jam, particularly for very
large and complex projects.
One solution might be to make the file name matching be based on
regular expressions instead of file suffixes. Then I could override
the Object rule to select CC or xcc_Cc as you said.
Another example of the same problem I encountered is that some of
our .y files only work with yacc and some only work with bison (I
know, thats not good, but the point of the exercise is to replace
make, not fix up all the code!)
Date: Mon, 31 Dec 2001 09:25:57 +0100
From: Toon Knapen <toon.knapen@si-lab.org>
Subject: Re: Multiple Jambase files
Boost.Jam is able to use different compilers at the same time.
Actually, the 'trick' is identical to what one would do in 'make':
create a small jamfile for every specific compiler, then include one of
these in a jam run according to some (global) variable which identifies
the compiler that should be used. (Correct me if I'm wrong, David.)
Actually, Boost.Jam is able to use multiple compilers at once, so the
trick is a little more subtle.
Take a look at the Boost.Jam code as it is used in the
Boost(www.boost.org) project :
cvs -d:pserver:anonymous@cvs.boost.sourceforge.net:/cvsroot/boost login
cvs -z3 -d:pserver:anonymous@cvs.boost.sourceforge.net:/cvsroot/boost
checkout boost
cvs -d:pserver:anonymous@cvs.boost.sourceforge.net:/cvsroot/boost logout
more specifically, look in the tools/build subdir to see all compiler
specific jamfiles, look in tools/build/jam_src for the source code of boost.jam.
Date: Tue, 1 Jan 2002 13:33:05 -0800 (PST)
Subject: Re: Multiple Jambase files
Is it just the compiling that needs to be done using 'xcc', or do you need
to use that for the link as well? If the former, it's pretty
straightforward to do what you want -- just add a switch in the case for
.c in the Object rule (in Jambase) to use the Cc rule/actions for .c files
under tools and an Xcc rule/actions (defined in your Jamrules) for all others.
If you also need to use 'xcc' to do links, then you'll need to add more
stuff (obviously :) -- but it should still be doable.
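A sketch of that suggestion (hedged: the directory pattern and the Xcc rule are illustrative; the stock Jambase Object rule has no such per-directory dispatch):

```jam
# Fragment for a copied Object rule: replace the plain ".c" case
# with a dispatch on the current subdirectory. The *tools* pattern
# and the Xcc rule (defined in your Jamrules) are hypothetical.
case .c :
    switch $(SUBDIR) {
        case *tools* : Cc $(<) : $(>) ;
        case * : Xcc $(<) : $(>) ;
    }
```

Xcc would be defined alongside its own "actions Xcc" in Jamrules, mirroring the stock Cc rule but invoking the cross compiler.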
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Changes in my Jam guest branch
Date: Wed, 2 Jan 2002 11:48:18 -0500
Looking at this nefarious technique again, I see that it has certain
advantages over what I've been doing. For one thing, in my scheme, if the
link fails the response file is never removed. If I've forgotten a library,
the response file is wrong and needs to be rebuilt, but according to the
dates, it appears to be up-to-date. The downside of your scheme is that it's
difficult to factor out the common logic for creating the response files
from my many toolset definitions, but I don't think that's serious enough to
avoid using it. In fact, I expect that it won't be an issue in an upcoming
revision of the build system.
A question: why the explicit $(TOUCH)? Won't the redirected echo update the
modification date? And why does the mod. date matter, anyway? The response
file never appears in the dependency graph.
Date: Thu, 3 Jan 2002 13:46:22 +0100
From: "BROSSIER Florent" <F.BROSSIER@csee-transport.fr>
Subject: Dependency with files with the same name in different directory
Let's suppose I have the following files and directories.
- At root:
Jamrules
HDRS = $(TOPDIR)$(SLASH)Other ;
Jamfile
SubDir TOPDIR ;
SubInclude TOPDIR Test ;
- In the directory Other
foo.hpp
#error
- in the directory Test
Jamfile
SubDir TOPDIR Test ;
Main Test.exe : main.cpp ;
main.cpp
#include "foo.hpp"
int main(int argc, char** argv) { return 0; }
foo.hpp
// Empty
Now the problem:
When I start Jam, the compilation is OK (Test/foo.hpp was included!).
If I modify Test/foo.hpp and start Jam again, nothing is done.
If I modify Other/foo.hpp and start Jam again, the executable is rebuilt.
Is this a bug in Jam?
What can I do to solve this problem?
It seems that the problem comes from the HDRS variable which is used by
Jam to scan for include files.
The current path is not included in the HDRS variable but is
automatically added in the list of include path for the compilation
before any others.
PS: I use Jam version 2.3 with no modifications.
Subject: Re: Dependency with files with the same name in different
Date: Thu, 03 Jan 2002 09:38:07 -0700
Yes, I think there is a bug in Jambase here.
The Object rule in Jambase sets HDRS on targets to:
$(SEARCH_SOURCE) $(HDRS) $(SUBDIRHDRS)
But it sets HDRSEARCH to:
$(HDRS) $(SUBDIRHDRS) $(SEARCH_SOURCE) $(STDHDRS)
I think the bug is that the two do not specify the same order. So the
compiler will search with one order, and Jam another.
HDRSEARCH should probably be:
$(SEARCH_SOURCE) $(HDRS) $(SUBDIRHDRS) $(STDHDRS)
If you change the Object rule in Jambase to set HDRSEARCH the same way
it sets HDRS, Jam finds the correct foo.hpp file. (put the Object
rule at the end of this message in your Jambase).
But please take notice: having header files with the same name in
multiple directories is also dangerous. In this case, you should use
something called "header grist". The safest way to do this is to put
this after every call to SubDir:
HDRGRIST = $(SOURCE_GRIST) ;
This way, each sub directory can find a different foo.hpp. Without
this change, Jam will find one foo.hpp and stop there. You will also
want to put this rule in your Jamrules:
rule FGristSourceFiles { return [ FGristFiles $(<) ] ; }
rule Object {
local h ;
# locate object and search for source, if wanted
Clean clean : $(<) ;
MakeLocate $(<) : $(LOCATE_TARGET) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
# Save HDRS for -I$(HDRS) on compile.
# We shouldn't need -I$(SEARCH_SOURCE) as cc can find headers
# in the .c file's directory, but generated .c files (from
# yacc, lex, etc) are located in $(LOCATE_TARGET), possibly
# different from $(SEARCH_SOURCE).
HDRS on $(<) = $(SEARCH_SOURCE) $(HDRS) $(SUBDIRHDRS) ;
# handle #includes for source: Jam scans for headers with
# the regexp pattern $(HDRSCAN) and then invokes $(HDRRULE)
# with the scanned file as the target and the found headers
# as the sources. HDRSEARCH is the value of SEARCH used for
# the found header files. Finally, if jam must deal with
# header files of the same name in different directories,
# they can be distinguished with HDRGRIST.
# $(h) is where cc first looks for #include "foo.h" files.
# If the source file is in a distant directory, look there.
# Else, look in "" (the current directory).
if $(SEARCH_SOURCE) { h = $(SEARCH_SOURCE) ; }
else { h = "" ; }
HDRRULE on $(>) = HdrRule ;
HDRSCAN on $(>) = $(HDRPATTERN) ;
#HDRSEARCH on $(>) = $(HDRS) $(SUBDIRHDRS) $(h) $(STDHDRS) ;
HDRSEARCH on $(>) = $(h) $(HDRS) $(SUBDIRHDRS) $(STDHDRS) ;
HDRGRIST on $(>) = $(HDRGRIST) ;
# if source is not .c, generate .c with specific rule
switch $(>:S) {
case .asm : As $(<) : $(>) ;
case .c : Cc $(<) : $(>) ;
case .C : C++ $(<) : $(>) ;
case .cc : C++ $(<) : $(>) ;
case .cpp : C++ $(<) : $(>) ;
case .f : Fortran $(<) : $(>) ;
case .l : Cc $(<) : $(<:S=.c) ;
Lex $(<:S=.c) : $(>) ;
case .s : As $(<) : $(>) ;
case .y : Cc $(<) : $(<:S=.c) ;
Yacc $(<:S=.c) : $(>) ;
case * : UserObject $(<) : $(>) ;
}
}
Date: Thu, 03 Jan 2002 12:01:55 -0700
Subject: Improved Header Scan Cache for Jam
I just submitted code to //guest/matt_armstrong/jam/hdrscan_cache that
implements a header scan cache for Jam.
This code is an incremental improvement over Craig McPheeters'
original version in //guest/craig_mcpheeters/jam/src/. I've talked
with Craig and he plans to roll most or all of my changes into his version.
I have even higher hopes -- I'd like it to make it into stock jam.
Rationale:
- A header scan cache can improve things when HDRGRIST is in use.
For example, with stock Jam if you always set HDRGRIST to
$(SOURCE_GRIST), standard headers such as /usr/include/stdio.h
will now get scanned once for each SubDir. With the header scan
cache, common headers will be scanned only once.
This makes it practical to always use HDRGRIST. This means that
the stock Jambase can support multiple header files of the same
name. I think this rectifies a frequently encountered weakness
in Jam.
It is important to point out that you get this benefit
regardless of whether the cache is saved to disk.
- The header scan cache is persistent across runs of Jam only if
the user wants it (controlled via the HCACHEFILE variable). So
by default Jam will not sprinkle cache files all over the source
tree, and it is possible to use LOCATE to put the persistent
copy of the cache in, e.g., a build output directory.
Storing the header cache on disk can bring real benefits. On
the medium sized project I use jam for, it seems to speed jam
startup (time to first build action) by a factor of 6. People
are happy to wait 15 seconds instead of 90.
It is important to point out that about half of this speedup
occurs even if the cache is not persistent, since our project
makes heavy use of HDRGRIST to correctly find all the header
files in the project.
- The cache is implemented in such a way that it can never change
the semantics of what Jam does. The call to a target's HDRRULE
will be identical with or without the cache code.
Here is the text of the README.header_scan_cache that is part of the submit.
This change implements a header scan cache in a form that
(cross fingers) can be incorporated into the stock version of Jam.
This code is taken from //guest/craig_mcpheeters/jam/src/ on
the Perforce public depot. Many thanks to Craig McPheeters.
The cache code is guarded by the OPT_HEADER_CACHE_EXT #define within the code.
Jam has a facility to scan source files for other files they
might include. This code implements a cache of these scans,
so the entire source tree need not be scanned each time jam is
run. This brings the following benefits:
- If a file would otherwise be scanned multiple times in a
single jam run (because the same file is represented by
multiple targets, perhaps each with a different grist),
it will now be scanned only once. In this way, things
are faster even if the cache file is not present when
Jam is run.
- If a cache entry is present in the cache file when Jam
starts, and the file has not changed since the last time
it was scanned, Jam will not bother to re-scan it. This
markedly reduces Jam startup time for large projects.
This code has improvements over Craig McPheeters' original
version. I've described all of these changes to Craig and he
intends to incorporate them back into his version. The
changes are:
- The actual name of the cache file is controlled by the
HCACHEFILE Jam variable. If HCACHEFILE is left unset
(the default), reading and writing of a cache file is
not performed. The cache is always used internally
regardless of HCACHEFILE, which helps when HDRGRIST
causes the same file to be scanned multiple times.
Setting LOCATE and SEARCH on the HCACHEFILE works as
well, so you can place it anywhere on disk you like, or even
search for it in several directories. You may also set
it in your environment to share it amongst all your projects.
- The .jamdeps file is in a new format that allows binary
data to be in any of the fields, in particular the file
names. The original code would break if a file name
contained the '@' or '\n' characters. The format is
also versioned, allowing upgrades to automatically
ignore old .jamdeps files. The format remains human
readable. In addition, care has been taken to not add
the entry into the header cache until the entire record
has been successfully read from the file.
- The cache stores the value of HDRPATTERN with each cache
entry, and it is compared along with the file's date to
determine if there is a cache hit. If the HDRPATTERN
does not match, it is treated as a cache miss. This
allows HDRPATTERN to change without worrying about stale
cache entries. It also allows the same file to be
scanned multiple times with different HDRPATTERN values.
- Each cache entry is given an "age" which is the maximum
number of times a given header cache entry can go unused
before it is purged from the cache. This helps clean up
old entries in the .jamdeps file when files move around
or are removed from your project.
You control the maximum age with the HCACHEMAXAGE
variable. If set to 0, no cache aging is performed.
Otherwise it is the number of times a jam must be run
before an unused cache entry is purged. The default for
HCACHEMAXAGE if left unset is 100.
- Jambase itself is changed.
SubDir now always sets HDRGRIST to $(SOURCE_GRIST) so
header scanning can deal with multiple header files of
the same name in different directories. With the header
cache, this no longer incurs a performance penalty
-- a given file will still only be scanned once.
The FGristSourceFiles rule is now just an alias for
FGristFiles. Header files do not necessarily have
global visibility, and the header cache eliminates any
performance penalty this might otherwise incur.
Because of all these improvements, the following claims can be
made about this header cache implementation that cannot be
made about Craig McPheeters' original version.
- The semantics of a Jam run will never be different
because of the header cache (the HDRPATTERN check ensures this).
- It will never be necessary to delete .jamdeps to fix
obscure jam problems or purge old entries.
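Putting the knobs together, a Jamrules fragment enabling the persistent cache might look like this (hedged sketch: the cache file name and output location are illustrative; HCACHEFILE, HCACHEMAXAGE, and the LOCATE-on behaviour are as described above):

```jam
# Enable the on-disk header scan cache for this project.
HCACHEFILE = .jamdeps ;
# Keep the cache out of the source tree, in the build output area.
LOCATE on $(HCACHEFILE) = $(ALL_LOCATE_TARGET) ;
# Purge cache entries that go unused for 100 jam runs.
HCACHEMAXAGE = 100 ;
```

Leaving HCACHEFILE unset keeps the default behaviour: the cache is still used within a single run (so HDRGRIST costs nothing extra), but nothing is written to disk.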
Date: Thu, 03 Jan 2002 11:57:40 -0800
From: rmg@perforce.com
Subject: Jam release plan
I thought that this might be a good opening for me to let people know where
we're at with work on a new release of Jam. (Even though I'll defer
talking about header scanning just now.)
I hope soon to have (I'm aiming at next week, really!) an update to
//public/jam/... comprising the changes to Jam made internally at Perforce,
integrated into //public/jam from a branch in //guest/richard_geiger/
where the individual changes from the Perforce internal version will
be imported. These changes will *not* be packaged into a new release
at this point... But you'll thence be able to do integrations of
these changes from //public/jam/... into your //guest/.../jam/...
branches as desired, per the plan I outline here:
Christopher and I have done a triage to consider most other Jam
changes I'm aware of for inclusion in the new release (presumably this
will be Jam 2.4). Christopher reserves the right to be the final
arbiter on these decisions. In some cases, we'll take contributed
changes "as is"; in others, Christopher likes the intent of the
change, but wants to consider alternate implementations; in others, we
may decline to pick up the change altogether, at least for now.
I'll be contacting individual contributors of the changes we've
decided to take "wholesale". I'm hoping that, in most cases, these
individuals will be able to help by integrating the changes from
//public/jam/ into their own branches, which should make it easiest
for me to then integrate the individual features we want for Jam 2.4
into the //public/jam mainline.
Of course, anybody with a //guest Jam branch will be welcome and
encouraged to also integrate these changes.
At some point - hopefully before too many more weeks pass by - we'll
have a //public/jam/ that is ready to be packaged as Jam 2.4.
Beyond that, we can start a planning process for Jam 2.5, in hopes
that with a notch more of planning and coordination on my part, we can
do the best job of improving Jam, both for the "stock" and "custom" versions.
Date: Thu, 3 Jan 2002 13:07:11 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Changes in my Jam guest branch
It's not really my scheme; it's something I picked up from the jam archives.
I sure like it, though.
Also note that it's a general technique. It can be used to create response
files as well as many other types of action blocks. Once you start using
the newline expansion trick, you'll find other uses for it.
The touch may not be required. I know with some Unix shells, if you do:
echo hi >> foo
and 'foo' doesn't exist, the '>>' fails as there is no file to append to.
Adding a touch there ensures the file exists, and that was its only intent.
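In action form, the guard Craig describes looks something like this (hedged: the DumpNames rule name is illustrative):

```jam
# touch first so that ">>" has a file to append to, for shells
# where appending to a nonexistent file fails; the echo then
# appends the source names to the target.
actions DumpNames {
    touch $(<)
    echo $(>) >> $(<)
}
```

On shells where `>>` creates the file itself, the touch is harmless; it only matters that the file exists before the append, and it also updates the target's modification date.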
Date: Thu, 03 Jan 2002 17:37:49 -0700
Subject: Expressing "lazy always update" intermediate files
Let's say file c depends on b which depends on a
a -> b -> c
File b is an intermediate file that, for various reasons, is best not
marked TEMPORARY (to make it concrete, it is a script that some jam
actions create, and sometimes people like to re-run b to create c
without running jam).
When I run jam, I want b to always be re-built, but only in the cases
where c is being built.
The dependency graph for b is complex (it depends on some of the
jamfiles in the project, the contents of various environment
variables, etc.) so it is simpler for it to always be rebuilt.
However, if I mark it with ALWAYS, then c is always rebuilt as well.
Date: Thu, 3 Jan 2002 18:50:00 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Expressing "lazy always update" intermediate files
(A depends on B = A -> B. I use dependency arrows, not data flow.)
One approach to try is to introduce a new node or two. Can you create a
new node, which has all the dependencies that 'c' normally would have, but
isn't actually a file?
Something like:
<real>c -> b -> c -> a
with:
NOTFILE c ;
or perhaps make the earlier 'c' a gristed NOTFILE node, and the eventual
'c' ungristed. It's hard to know without knowing the details, and I don't
really want to know the details :-)
A different approach is to leave the original order, but assign two actions
with the b and c nodes. The actions are processed in the order they are
called. This is something like how a yacc action can produce two files, a
.c and a .h. The graph for that is:
yacc.c -> yacc.h -> <stuff>
If the .c doesn't depend on the .h, a multi-job build may try to use one
of the files while a different job is processing the other. Jam can't
scan the .c for headers until after it's created. (This is why the yacc
rule has the .c file include the .h explicitly.)
Assuming you want the script to be created by a separate action, I'll call
two rules. If one action can create the script and the c file, only call
myCreateFile and have the action do both.
Something like:
rule myCreateScript { Depends $(<[2]) : $(<[1]) ; }
actions myCreateScript {
rm -f $(<[1])
echo echo a new script > $(<[1])
chmod 0755 $(<[1])
}
rule myCreateFile {
Depends $(<[2]) : $(<[1]) ;
Depends $(<) : $(>) ;
}
actions myCreateFile {
rm -f $(<[2])
$(<[1]) > $(<[2])
}
myCreateScript b c ;
myCreateFile b c : a ;
Depends all : c ;
Depends c : d ;
I think that works. Of course the syntax can be improved, depending on
which of the nodes names can be automatically generated. Call a higher
level rule which creates names and calls lower level rules, etc.
From: "Roesler, Randy" <rroesler@mdsi.bc.ca>
Subject: RE: Expressing "lazy always update" intermediate files
Date: Thu, 3 Jan 2002 19:04:47 -0800
How about ...
rule Newest {
# do nothing .. just introduce a node to JAM
}
actions Newest {
# do nothing .. just tell jam how it can build a Newest
}
Newest N ;
# And then your a -> b -> c becomes
DEPEND b : a ;
DEPEND c : b ;
DEPEND b : N ;
I have not tried this, but it is a "trick" used in make all
of the time.
Since N never exists, it always forces the build of anything
that depends on it.
But jam only builds those things related to the top-level target
(normally 'all'), so if c is not required by this target, b never
gets built.
(An ALWAYS applied to N couldn't hurt!)
Date: Fri, 4 Jan 2002 13:39:57 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Expressing "lazy always update" intermediate files
That's the same as saying you want b to be built as part of the action for
c. Go ahead.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Expressing "lazy always update" intermediate files
Date: Fri, 4 Jan 2002 17:37:11 -0500
I've tried to solve this problem, too, without success. Did any of the other
replies you received shed light on it? So far, my conclusion is that the
only way to do this is to hide construction of b by incorporating it as part
of c's build actions, much the same as you have suggested doing for response files:
a -> (b->c)
Sent: Thursday, January 03, 2002 7:37 PM
Subject: Expressing "lazy always update" intermediate files
Subject: Re: Expressing "lazy always update" intermediate files
Date: Fri, 04 Jan 2002 15:27:43 -0700
Doing it in a single action is impractical due to the limitations of
my shell (Win NT cmd.exe).
I tried this idea with Jamfile:
File <real>c : b ;
File b : a ;
Depends b : <fake>c ;
Depends <fake>c : a ;
NOTFILE <fake>c ;
Then this:
touch a
jam "<real>c"
rm c
jam "<real>c"
And b is not re-created on the second run of jam. So it doesn't do
exactly what I want.
Yes, this works. I'm not particularly eager to implement it in my
situation though -- there are 3-4 of these temporary files (scripts,
linker definition files, various response files, etc.) and passing all
of them along as the targets to the various rules that create each one
could be a maintenance problem.
It'd be nicest to be able to write rules "normally" but have the right
thing happen. Hmm...
Date: Fri, 4 Jan 2002 14:59:58 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Expressing "lazy always update" intermediate files
If you can automatically determine the script/file combination from a
basename of some sort, you can avoid the maintenance problem. ie,
something like:
rule myGenerateFile {
local script file scriptDir fileDir ;
script = $(<:S=.script) ;
file = $(<) ;
...
scriptDir = ? ;
...
MakeLocate $(script) : $(scriptDir) ;
MakeLocate $(file) : $(fileDir) ;
myCreateScript $(script) $(file) ;
myCreateFile $(script) $(file) : $(>) ;
}
called something like:
myGenerateFile foo.cpp ;
Of course, add extra arguments to it as necessary to fully specify the
script and file.
myGenerateFile baseName : extra args for script : extra args for file ;
or other variations as needed.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Expressing "lazy always update" intermediate files
Date: Fri, 4 Jan 2002 18:08:39 -0500
I think that with the use of the rule indirection feature which allows rules
to be invoked through variable expansions,
$(rule-name) x y : z ;
you can wrap up all the logic in one place and avoid the maintenance
problems. I did a similar thing for response-file support in Boost.Build.
The code to implement the feature in Boost.Jam is pretty straightforward.
Let me know if you want to see it.
Date: Sat, 05 Jan 2002 00:48:38 -0700
Subject: New LAZY builtin
Yes, in a nutshell. Or Craig's idea of using more than one target in
$(1) (his post basically works out of the box). I also think using
TEMPORARY and removing the files after they're used is an alternative.
I don't find any of them very satisfactory.
I've implemented a new LAZY rule that behaves a bit like a mixture
between ALWAYS and TEMPORARY. (I was originally calling it
LAZY_ALWAYS). After the fate of a target is determined, new code
loops through its immediate dependents. If they are marked "lazy" and
their fate is not to be updated, their fate is changed to "touch" and
their dependents are similarly checked recursively. So it is "lazy"
in the sense that the targets are always rebuilt, but only when their
direct dependents are.
Here is my documentation for the feature. It explains why I don't
like many of the alternatives, then describes the feature, then an
artifact of the implementation that could be called a bug. The patch
follows. The code will show up in //guest/matt_armstrong/jam
eventually, but I won't make another announcement here.
Because the Windows NT shell (cmd.exe) sucks, it is often best to
break up complex operations into many actions. Examples include
creating various response files and linker definition files for
the link step of a compile.
The problem with this is that these files may not always be
rebuilt when necessary. It is difficult to construct a
straightforward chain of actions that guarantees that all the
response files are rebuilt whenever the final link makes use of
them.
Stock Jam provides two main ways to accomplish this:
- Mark the response files TEMPORARY and remove them with
RmTemps after the link. This is problematic since removing
them just adds mystery to the final link process for the
typical engineer. People often want to look at the files to
see exactly how the link occurs.
- Perform the final link with several actions that take a list
of the final image and all the response files in $(<). Each
action would build one of the elements in $(<). This is an
obtuse hack that is difficult to explain and maintain.
The solution presented here is a new built in rule LAZY. When
called like this:
LAZY target ;
"target" is marked "lazy".
When Jam decides that a given target is to be built, it now checks
all direct dependents to see if they are marked lazy. If they
are, the lazy dependents are also marked for rebuilding, and their
direct dependents are similarly considered, and so on.
This affords the benefits of marking targets TEMPORARY (that they
will be rebuilt whenever the targets they depend on are rebuilt)
without the negatives (that they get deleted after the build).
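A minimal sketch of how the new builtin might be used for one
response file feeding a link step (all names here are hypothetical;
LAZY itself is this patch's builtin, not stock Jam):

```jam
# foo.rsp is regenerated whenever foo.exe is relinked, but is left
# on disk afterwards so engineers can inspect the link inputs.
actions MakeResponseFile
{
    echo $(OBJECTS) > $(<)
}
actions LinkFromResponseFile
{
    $(LINK) /out:$(<) @$(>)
}

OBJECTS on foo.rsp = a.obj b.obj c.obj ;   # on-target variable
MakeResponseFile foo.rsp ;
LinkFromResponseFile foo.exe : foo.rsp ;
DEPENDS foo.exe : foo.rsp ;
LAZY foo.rsp ;    # rebuild foo.rsp only when foo.exe rebuilds
```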
BUGS:
There is a bug in this implementation that I do not believe will
lead to practical problems. Consider the following set of dependencies.
d -> b* (d depends on b*)
c -> b*
b* -> a
Consider b* to be marked "lazy". The current implementation will
correctly rebuild b whenever either d or c is rebuilt. However,
it does not guarantee that BOTH d and c get built whenever b* is
updated. If b* is updated only because it is lazy, some of its
dependents may not be updated. For example, if c is updated and
b* is marked for update because it is lazy, then d may not be
updated. If d is marked for update and b* is marked for update
because it is lazy, c may not be marked for update. I call this a
bug since it shouldn't be necessary to run Jam twice to satisfy
all dependencies.
A simple way to work around this is to mark b* with NOTFILE. This
will cause b*'s time stamp to no longer be considered. This is
arguably a reasonable thing to do, since these files are rarely
edited by hand and whenever they are used they are rebuilt.
Another workaround is to mark the final linked image with LEAVES,
which usually has a similar effect of removing b*'s time
stamp from consideration. Another workaround is to avoid having a
LAZY file with more than one dependent target (this is usually the
case anyway, which is the major reason I don't consider this
problem serious).
This patch is against my local copy of jam, which has been patched a
bit from stock jam. You can spot some features I took from Craig
McPheeters in there, and some of my own. But the diffs give plenty of
context and the changes are fairly minor, so hand patching should go smoothly.
--- rules.h Wed Dec 19 23:06:47 2001
+++ c:/ext1/sc/jam/main/rules.h Fri Jan 4 23:54:58 2002
@@ -100,20 +100,23 @@
# define T_FLAG_TEMP 0x01 /* TEMPORARY applied */
# define T_FLAG_NOCARE 0x02 /* NOCARE applied */
# define T_FLAG_NOTFILE 0x04 /* NOTFILE applied */
# define T_FLAG_TOUCHED 0x08 /* ALWAYS applied or -t target */
# define T_FLAG_LEAVES 0x10 /* LEAVES applied */
# define T_FLAG_NOUPDATE 0x20 /* NOUPDATE applied */
#ifdef OPT_GRAPH_DEBUG_EXT
# define T_FLAG_VISITED 0x40 /* Used in dependency graph output */
#endif
+#ifdef OPT_BUILTIN_LAZY_EXT
+# define T_FLAG_LAZY 0x80 /* LAZY applied */
+#endif
char binding; /* how target relates to real file */
# define T_BIND_UNBOUND 0 /* a disembodied name */
# define T_BIND_MISSING 1 /* couldn't find real file */
# define T_BIND_PARENTS 2 /* using parent's timestamp */
# define T_BIND_EXISTS 3 /* real file, timestamp valid */
TARGETS *deps[2]; /* dependencies */
--- compile.c Wed Dec 26 12:04:28 2001
+++ c:/ext1/sc/jam/main/compile.c Fri Jan 4 23:50:23 2002
@@ -158,20 +158,25 @@
parse_make( builtin_flags, P0, P0, P0, C0, C0, T_FLAG_NOTFILE );
bindrule( "NoUpdate" )->procedure =
bindrule( "NOUPDATE" )->procedure =
parse_make( builtin_flags, P0, P0, P0, C0, C0, T_FLAG_NOUPDATE );
bindrule( "Temporary" )->procedure =
bindrule( "TEMPORARY" )->procedure =
parse_make( builtin_flags, P0, P0, P0, C0, C0, T_FLAG_TEMP );
+#ifdef OPT_BUILTIN_LAZY_EXT
+ bindrule( "LAZY" )->procedure =
+ parse_make( builtin_flags, P0, P0, P0, C0, C0, T_FLAG_LAZY );
+#endif
+
#ifdef OPT_BUILTIN_MATCH_EXT
bindrule( "MATCH" )->procedure =
parse_make( builtin_match, P0, P0, P0, C0, C0, 0 );
#endif
#ifdef NT
#ifdef OPT_BUILTIN_W32_GETREG_EXT
bindrule( "W32_GETREG" )->procedure =
parse_make( builtin_w32_getreg, P0, P0, P0, C0, C0, 0 );
#endif
--- make.c Fri Jan 4 10:42:48 2002
+++ c:/ext1/sc/jam/main/make.c Fri Jan 4 23:54:07 2002
@@ -59,20 +59,25 @@
int updating;
int cantfind;
int cantmake;
int targets;
int made;
} COUNTS ;
static void make0( TARGET *t, int pbinding, time_t ptime,
int depth, COUNTS *counts, int anyhow );
+#ifdef OPT_BUILTIN_LAZY_EXT
+static void makelazy0( TARGET *t, int depth );
+static void makelazy( TARGET *t, int depth );
+#endif
+
#ifdef OPT_GRAPH_DEBUG_EXT
static void dependGraphOutput( TARGET *t, int depth );
#endif
static char *target_fate[] = {
"init", /* T_FATE_INIT */
"making", /* T_FATE_MAKING */
"stable", /* T_FATE_STABLE */
"newer", /* T_FATE_NEWER */
@@ -152,20 +157,100 @@
#endif
status = counts->cantfind || counts->cantmake;
for( i = 0; i < n_targets; i++ )
status |= make1( bindtarget( targets[i] ) );
return status;
}
+#ifdef OPT_BUILTIN_LAZY_EXT
+/*
+ * makelazy0() - checks if this target is not being built but marked
+ * lazy and if so, touches the target so it does
+ * get built.
+ */
+
+static void
+makelazy0( TARGET *t, int depth )
+{
+ TARGETS *c;
+ int i;
+
+ /*
+ * Step 1: don't bother if we are already being processed
+ */
+ if (t->fate <= T_FATE_MAKING)
+ return;
+
+ /*
+ * Step 2: don't bother if we're already being built
+ */
+ if (t->fate >= T_FATE_SPOIL)
+ return;
+
+ /*
+ * Step 3: don't bother if we're not lazy
+ */
+ if ( !(t->flags & T_FLAG_LAZY) )
+ return;
+
+ /*
+ * Step 4: change our fate to "touched"
+ */
+#ifdef OPT_GRAPH_DEBUG_EXT
+ if (DEBUG_FATE)
+ printf("fate change %s from %s to %s by lazy touch\n",
+ t->name,
+ target_fate[t->fate], target_fate[T_FATE_TOUCHED]);
+#endif
+ t->fate = T_FATE_TOUCHED;
+
+ /*
+ * Step 5: check our dependents for laziness.
+ */
+ for (c = t->deps[0]; c; c = c->next)
+ makelazy0(c->target, depth + 1);
+}
+
+/*
+ * makelazy() - make the dependents of this target lazily
+ *
+ * makelazy() checks if this target is being built and if so
+ * changes the fate of any lazy dependents so that they
+ * are built as well.
+ */
+
+static void
+makelazy( TARGET *t, int depth )
+{
+ TARGETS *c;
+ int i;
+
+ /*
+ * Step 1: don't bother if we're not being rebuilt
+ */
+ if (t->fate < T_FATE_SPOIL)
+ return;
+
+ /*
+ * Step 2: check our dependents for laziness.
+ */
+ for (c = t->deps[0]; c; c = c->next)
+ makelazy0(c->target, depth);
+}
+#endif
+
/*
* make0() - bind and scan everything to make a TARGET
*
* Make0() recursively binds a target, searches for #included headers,
* calls itself on those headers, and calls itself on any dependents.
*/
static void
make0(
TARGET *t,
@@ -507,20 +592,26 @@
c->target->name);
#endif
hfate = max( hfate, c->target->hfate );
}
/* Step 4b: propagate dependents' time & fate. */
t->htime = hlast;
t->hleaf = hleaf ? hleaf : t->htime;
t->hfate = hfate;
+
+#ifdef OPT_BUILTIN_LAZY_EXT
+ /* Step 4c: if we're being rebuilt, rebuild any of our lazy
+ dependents. */
+ makelazy( t, depth );
+#endif
/*
* Step 5: a little harmless tabulating for tracing purposes
*/
#ifdef OPT_IMPROVED_PATIENCE_EXT
++counts->targets;
#else
if( !( ++counts->targets % 1000 ) && DEBUG_MAKE )
printf( "...patience...\n" );
Subject: Re: Expressing "lazy always update" intermediate files
Date: Sat, 05 Jan 2002 01:21:13 -0700
By "maintenance problem" I meant the maintenance and comprehension of
the rule itself, not the users of the rule (which I agree can be
adequately shielded from the implementation).
For some background, this basic problem has existed in our jam rules
for over four years with lots of smart engineers not thinking of the
above solution. From that I conclude that it is not obvious why the
above solution actually works, and I like to avoid techniques that
appear in any way "magical" (Jam is magical enough to the
uninitiated). Sure, I have dabbled in jam for years and steeped my
brain in it for about a month and I now understand how this technique
works. I'm not at all confident it'll be clear to me after an
extended period away from the code.
In my case, there are 3 auxiliary files, and the thought of passing
four separate targets in $(<) with various rules and actions
referencing them by $(<[2]) and the like makes my head spin. There is
a reason the original engineers wrote the rules and actions the
straightforward way.
And so the LAZY rule, which I just posted, is born. ;-) I figure if
Jam has built in functionality to deal with intermediate files that
are deleted (TEMPORARY) it is reasonable for Jam to have built in
functionality to deal with intermediate files that aren't (LAZY).
I am hopefully stopping short of being overzealous here. :-) I'm not
suggesting that "LAZY" be part of stock jam.
Date: Mon, 7 Jan 2002 11:33:46 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Expressing "lazy always update" intermediate files
If you don't mind a bit of topic drift, I'm curious about that. Could you
describe what would happen?
Subject: Re: Expressing "lazy always update" intermediate files
Date: Mon, 07 Jan 2002 10:17:19 -0700
Actually I don't think it is NT specific -- these "response files" are
huge, so they must be built with piecemeal actions. So, it isn't
possible to build all the response files with a single actions block.
Craig showed me how to build multiple targets by putting all of them
in $(<), but I think that gets too complex.
multiple-targets targetA targetB targetC : ... ;
Date: Mon, 7 Jan 2002 13:29:17 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: New LAZY builtin
That looks like it could be useful, nice one.
Just a minor point - the name lazy doesn't suggest to me what the new
feature is doing.
Here are some alternative names for the new keyword:
COUPLE a : b ;
RELATE a : b ;
UPDATE a : b ; # jam has NOUPDATE, this is kinda the opposite of that one
IFUPDATE a : b ;
Date: Mon, 7 Jan 2002 13:57:42 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Expressing "lazy always update" intermediate files
The limitations on the size of an action block can be removed with one
of the extensions in my branch. I find with unlimited action sizes,
and the newline expansion trick, the standard NT shell can be used
for most of my actions. Where more complex logic is needed, I use perl.
The way to make this more useful is to set a bunch of variables on some
target, each of which can grow to large lists.
For example, if you're building a shared object, you may want the list
of object files and the list of archives in separate lists. I do this
by setting different variables on the target. ie, something like:
OBJS on libfoo.dll += foo.o ;
OBJS on libfoo.dll += bar.o ;
ARCHIVES on libfoo.dll += liba.lib ;
ARCHIVES on libfoo.dll += libb.lib ;
ARCHIVES on libfoo.dll += libc.lib ;
where the above is done internally by my other rules. Then in the
action block for building the shared object:
actions myBuildShared bind OBJS ARCHIVES {
if exist $(<).objs $(RM) $(<).objs
$(TOUCH) $(<).objs
echo$(SPACE)$(OBJS)$(SPACE)>>$(<).objs$(NEWLINE)
if exist $(<).archives $(RM) $(<).archives
$(TOUCH) $(<).archives
echo$(SPACE)$(ARCHIVES)$(SPACE)>>$(<).archives$(NEWLINE)
$(LINK) ... @$(<).objs @$(<).archives ...
}
It is sometimes better to pre-process files via separate actions, but the
number of times I do that is much less now than it was when action blocks
had a fixed size limit.
Date: Wed, 9 Jan 2002 12:01:29 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Improved Header Scan Cache for Jam
fyi, this is done now. The integration in my guest branch is a
lightly modified version of Matt's modifications to my original.
Subject: Re: Re: Improved Header Scan Cache for Jam
Date: Thu, 10 Jan 2002 09:18:57 -0700
And, for the record, please consider Craig's version to be the
canonical one. I'll be rolling Craig's stuff into what I use and
deleting //guest/matt_armstrong/jam/hdrscan_cache.
Date: Thu, 10 Jan 2002 11:02:35 -0700
Subject: "My" version of jam now available
I've published yet another custom version of Jam in
//guest/matt_armstrong/jam/patched_version/...
This represents the sum total of changes to stock jam that we've made
at our company. Some of these have been in use for 3-4 years, but
most of the code has been re-worked by me in the past month (e.g. some
of the rules were implemented in C++, I converted them to C, etc.)
//guest/matt_armstrong/jam/patched_version/LOCAL_DIFFERENCES.txt contains
a detailed description of each change. Each change should be totally
independent of the other, each selected by its own #define, so it
should be easy to pick and choose stuff you might like (thanks to
Craig McPheeters for this idea).
Here is the contents of the LOCAL_DIFFERENCES.txt.
This file details the differences between Geoworks' copy of Jam and
the stock upstream version as distributed by Perforce.
* Conventions Used for Jam Patches
All changes we've made to Jam C source are surrounded by an
#ifdef. The #ifdefs are constructed such that it is possible to
remove each feature independent of the other. This greatly eases
maintenance costs, since the next time an upstream version of jam
is merged in it'll be very easy to see why each change was made.
It also makes it easy to assess how big a tweak to jam each change
actually is.
The naming convention for the #defines is:
OPT_BUILTIN_..._EXT -- a new builtin rule
OPT_IMPROVE_..._EXT -- a new general improvement, but no real
change in functionality
OPT_FIX_..._EXT -- a bug fix to Jam (suitable for including in
the upstream Jam).
OPT_..._EXT -- anything that doesn't fall neatly in the above.
All changes made to Jamfiles or Jamrules are surrounded by obvious
comments of the form:
### LOCAL CHANGE
#
stuff
#
### LOCAL CHANGE
* The builtin Jambase is slightly changed.
The builtin Jambase has a few tweaks to make it nicer under NT.
It sets the MSVCNT var from MSVCDIR if MSVCDIR is set. MSVCDIR is
the variable Visual C++ 6.0's version of vcvars32.bat sets, while
MSVCNT seems to be a Visual C++ 5.0 thing. This change has been
sent upstream.
If MSVCNT is still unset, it uses W32_GETREG and W32_SHORTNAME to
grab the installation location of Visual C++, and sets MSVCNT
appropriately. This is merely a matter of convenience for people.
It doesn't complain if it can't find a compiler under NT.
It doesn't announce that it is using Visual C++.
In all other ways (variables set, rules and actions defined) the
built in Jambase is identical to stock Jam.
* New Builtin Rules
** PWD
A new rule PWD returns the current working directory. Used like so:
pwd = [ PWD ] ;
This, together with some Jam logic, can be used to generate a
fully qualified path name. Currently it is only used to fully
qualify the tools/bin directory before changing the PATH.
This option is controlled by the OPT_BUILTIN_PWD_EXT #define.
** MATCH
A new rule MATCH does regexp matching on a string, returning the
result as a list of matches. Used like so:
matches = [ MATCH string : pattern ] ;
matches[1] is the portion of 'string' that 'pattern' matched.
matches[2], matches[3], etc. hold the portions of 'string' matched
by the pattern's parenthesized subexpressions, in order.
The syntax of the pattern regexp is identical to that of the
HDRSCAN variable, since this rule uses Jam's internal regexp
engine.
The initial purpose of this rule is to allow the implementation of
a Split rule within Jam, so things like path names can be easily
decomposed.
This option is controlled by the OPT_BUILTIN_MATCH_EXT #define.
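A sketch of the path-decomposition use mentioned above, assuming the
patched MATCH builtin (the pattern and result indices follow the
description, not tested output):

```jam
# Pattern syntax is that of HDRSCAN, i.e. Jam's internal regexp
# engine.
parts = [ MATCH "src/lib/foo.c" : "([^/]*)/(.*)" ] ;
# parts[1] should be the matched portion of the string, with
# parts[2] ("src") and parts[3] ("lib/foo.c") coming from the
# parenthesized subexpressions.
```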
** W32_GETREG
Available only under WinNT (Win2k as well). Gets a value from the
registry, like so:
value = [ W32_GETREG list ] ;
This is primarily so Jam can find the location of the Visual C++
installation from the registry, which makes it a bit easier to get
a build environment up and running. Otherwise, they would have to
set the MSVCDIR environment variable, either at Visual C++ install
time or by running the vcvars32.bat file that comes with Visual
C++.
This option is controlled by the OPT_BUILTIN_W32_GETREG_EXT
#define.
** W32_SHORTNAME
Available only under WinNT (Win2k as well). Takes a string
holding a file name and returns its short name. E.g. "Program
Files" -> "PROGRA~1" etc. Used like so:
short = [ W32_SHORTNAME longname ] ;
This is primarily useful for shortening the long path name
supplied by W32_GETREG, which often contains things like "Program
Files" in it, etc., which confuses Jam later on.
This option is controlled by the OPT_BUILTIN_W32_SHORTNAME_EXT
#define.
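The two rules might be combined like this; note that the registry key
and the exact argument layout W32_GETREG expects are assumptions on my
part, not taken from the patch:

```jam
# Sketch only: hypothetical registry key and argument layout.
if $(NT)
{
    local vcdir ;
    vcdir = [ W32_GETREG
        "HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\VisualStudio\\6.0\\Setup\\Microsoft Visual C++\\ProductDir" ] ;
    # Shorten components like "Program Files" that confuse Jam:
    MSVCNT ?= [ W32_SHORTNAME $(vcdir) ] ;
}
```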
* New Features
** Header Caching
This code is taken from //guest/craig_mcpheeters/jam/src/ on the
Perforce public depot. Many thanks to Craig McPheeters for making his
code available. It is delimited by the OPT_HEADER_CACHE_EXT #define
within the code.
Jam has a facility to scan source files for other files they might
include. This code implements a cache of these scans, so the entire
source tree need not be scanned each time jam is run. This brings the
following benefits:
- If a file would otherwise be scanned multiple times in a
single jam run (because the same file is represented by
multiple targets, perhaps each with a different grist), it
will now be scanned only once. In this way, things are
faster even if the cache file is not present when Jam is run.
- If a cache entry is present in the cache file when Jam
starts, and the file has not changed since the last time it
was scanned, Jam will not bother to re-scan it. This
markedly speeds up Jam startup for large projects.
This code has improvements over Craig McPheeters' original
version. I've described all of these changes to Craig and he
intends to incorporate them back into his version. The changes are:
- The actual name of the cache file is controlled by the
HCACHEFILE Jam variable. If HCACHEFILE is left unset (the
default), reading and writing of a cache file is not
performed. The cache is always used internally regardless
of HCACHEFILE, which helps when HDRGRIST causes the same
file to be scanned multiple times.
Setting LOCATE and SEARCH on the HCACHEFILE works as
well, so you can place it anywhere on disk you like or even
search for it in several directories. You may also set it
in your environment to share it amongst all your projects.
- The .jamdeps file is in a new format that allows binary data
to be in any of the fields, in particular the file names.
The original code would break if a file name contained the
'@' or '\n' characters. The format is also versioned,
allowing upgrades to automatically ignore old .jamdeps
files. The format remains human readable. In addition,
care has been taken to not add the entry into the header
cache until the entire record has been successfully read from
the file.
- The cache stores the value of HDRPATTERN with each cache
entry, and it is compared along with the file's date to
determine if there is a cache hit. If the HDRPATTERN does
not match, it is treated as a cache miss. This allows
HDRPATTERN to change without worrying about stale cache
entries. It also allows the same file to be scanned
multiple times with different HDRPATTERN values.
- Each cache entry is given an "age" which is the maximum
number of times a given header cache entry can go unused
before it is purged from the cache. This helps clean up old
entries in the .jamdeps file when files move around or are
removed from your project.
You control the maximum age with the HCACHEMAXAGE variable.
If set to 0, no cache aging is performed. Otherwise it is
the number of times a jam must be run before an unused cache
entry is purged. The default for HCACHEMAXAGE if left unset is 100.
- Jambase itself is changed.
SubDir now always sets HDRGRIST to $(SOURCE_GRIST) so header
scanning can deal with multiple header files of the same
name in different directories. With the header cache, this
no longer incurs a performance penalty -- a given file
will still only be scanned once.
The FGristSourceFiles rule is now just an alias for
FGristFiles. Header files do not necessarily have global
visibility, and the header cache eliminates any performance
penalty this might otherwise incur.
Because of all these improvements, the following claims can be
made about this header cache implementation that cannot be made
about Craig McPheeters' original version.
- The semantics of a Jam run will never be different because of
the header cache (the HDRPATTERN check ensures this).
- It will never be necessary to delete .jamdeps to fix obscure
jam problems or purge old entries.
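The cache-control variables described above might be set from a
Jamrules like this (a sketch; the file name and values are
assumptions, and $(TOP) stands for the project root):

```jam
# Enable reading/writing of the header-scan cache.
HCACHEFILE = .jamdeps ;              # left unset, no cache file I/O
LOCATE on $(HCACHEFILE) = $(TOP) ;   # keep the cache at the tree root
HCACHEMAXAGE = 50 ;                  # purge entries unused for 50 runs
```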
** Exporting Jam variables to the environment using ENVEXPORT.
This change causes the global value of the ENVEXPORT variable to
take on special meaning. It becomes a list of Jam variables that
are to be exported into the environment. E.g. after
ENVEXPORT = FOO BAR BAZ ;
the environment variables FOO, BAR, and BAZ will be set to
whatever values the Jam global variables of the same name were set to.
If a Jam global variable holds a list, the entire list is exported
to the environment. When the variable's name ends with "PATH",
"Path" or "path", then the list elements are concatenated together
with the SPLITPATH character separating elements (SPLITPATH is ';'
under Windows and ':' under Unix), otherwise the list elements are
concatenated with a single space.
No environment variables are exported by default.
This option is controlled by the OPT_ENVIRONMENT_EXPORT_EXT
#define.
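A short sketch of the feature as described (variable names and values
are hypothetical):

```jam
# Export two Jam globals into the environment seen by actions.
CC = gcc ;
HDRPATH = $(TOP)/include $(TOP)/contrib/include ;
ENVEXPORT = CC HDRPATH ;
# CC exports as "gcc". HDRPATH ends in "PATH", so its list elements
# are joined with the SPLITPATH character (';' on NT, ':' on Unix).
```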
** The :X variable expansion
Expanding a variable with :X will change all \ chars in the
variable to / chars.
E.g.
foo = "a\\b\\c" ;
bar = $(foo:X) ;
# bar is now "a/b/c"
This is useful when dealing with cygwin tools that expect path
elements to be unix style, I guess.
FIXME: is this truly necessary? Or can it be solved in Jam?
E.g. we might be able to use the Split rule to get around this.
This feature is enabled with the OPT_EXPAND_UNIXPATH_EXT #define.
** Human Readable Dependency Output
This code is taken from //guest/craig_mcpheeters/jam/src/ on the
Perforce public depot. Many thanks to Craig McPheeters for making his
code available. It is delimited by the OPT_GRAPH_DEBUG_EXT #define
within the code.
With this option, debug level 10 will print out the entire dependency
tree in a form that is more easily understood than jam's debug level 6.
** Target Fate Change Debugging
This code is taken from //guest/craig_mcpheeters/jam/src/ on the
Perforce public depot. Many thanks to Craig McPheeters for making his
code available. It is delimited by the OPT_GRAPH_DEBUG_EXT #define
within the code.
With this option, debug level 11 prints out target fate changes as
they occur (and why they occur). This helps debug mysterious "why
is THAT file getting rebuilt" problems.
** Improved ...patience...
This changes the ...patience... lines to be printed out after the
first 100 and every subsequent 1000 files have been header scanned.
Previously, ...patience... was printed out for every 1000 targets.
This change both reduces the number of ...patience... lines printed,
and makes them more accurately reflect the work being done.
This change is enabled with the OPT_IMPROVED_PATIENCE_EXT #define.
** Improved debug level help
This change is delimited by the OPT_IMPROVE_DEBUG_LEVEL_HELP_EXT
#define within the code.
The -h option to jam now prints out what each of the debug levels do.
** Print Total Time
This change is delimited by the OPT_PRINT_TOTAL_TIME_EXT #define
within the code.
If the total time jam runs is greater than 10 seconds, the time is
printed when jam exits. This helps people back up claims that the
build is too slow and they need a faster machine. ;-)
** Improved HdrRule treatment
A new 3rd argument to HdrRule is the bound name of the 1st
argument to HdrRule. This allows HdrRule to extend the search
path for headers to include all directories headers have been
found in so far.
E.g. if a source file does "#include <foo/bar/baz.h>" and the
baz.h header is found in $(TOP)/include, this change allows
HdrRule to add $(TOP)/include/foo/bar to the HDRSEARCH path. This
way, if baz.h does #include "goo.h", any goo.h in
$(TOP)/include/foo/bar will be found.
The default Jambase makes use of this new argument to extend
HDRSEARCH on the header files.
This feature is enabled with the OPT_HDRRULE_BOUNDNAME_ARG_EXT
#define.
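One possible shape for an HdrRule using the new argument, inferred
from the description above (the rule body is an assumption, not the
patched Jambase's actual code):

```jam
# Sketch: $(3) is the bound (on-disk) name of the scanned file $(1).
rule HdrRule
{
    local found ;
    found = $(3:D) ;
    # Let headers found in this directory pull in siblings via
    # "" includes, per the baz.h / goo.h example above.
    HDRSEARCH on $(>) += $(found) ;
    Includes $(<) : $(>) ;
}
```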
** Improved "compile" debug output.
With level 5 jam debugging, a jam rule execution trace is
printed. This extends that debugging output to include:
- when a new rule is defined (with a special note when the new
rule re-defines a pre-existing rule).
- when a new actions is defined (with a special note when the
new actions re-defines a pre-existing actions).
- when an included Jamfile ends.
This makes it possible to write scripts that process Jam debugging
output that look for potential errors, such as re-defining a rule
or action that is part of Jambase.
This feature is enabled with OPT_IMPROVE_DEBUG_COMPILE_EXT.
** "IFUSED" targets
Because the Windows NT shell (cmd.exe) sucks, it is often best to
break up complex operations into many actions. Examples include
creating various response files and linker definition files for
the link step of a compile.
The problem with this is that these files may not always be
rebuilt when necessary. It is difficult to construct a
straightforward chain of actions that guarantees that all the
response files are rebuilt whenever the final link makes use of
them.
Stock Jam provides two main ways to accomplish this:
- Mark the response files TEMPORARY and remove them with
RmTemps after the link. This is problematic since removing
them just adds mystery to the final link process for the
typical engineer. People often want to look at the files to
see exactly how the link occurs.
- Perform the final link with several actions that take a list
of the final image and all the response files in $(<). Each
action would build one of the elements in $(<). This is an
obtuse hack that is difficult to explain and maintain.
The solution presented here is a new built in rule IFUSED. When
called like this:
IFUSED target ;
"target" is marked "ifused".
When Jam decides that a given target is to be built, it now checks
all direct dependents to see if they are marked ifused. If they
are, the ifused dependents are also marked for rebuilding, and
their direct dependents are similarly considered, and so on.
This affords the benefits of marking targets TEMPORARY (that they
will be rebuilt whenever the targets they depend on are rebuilt)
without the negatives (that they get deleted after the build).
BUGS:
There is a bug in this implementation that I do not believe will
lead to practical problems. Consider the following set of dependencies.
d -> b* (d depends on b*)
c -> b*
b* -> a
Consider b* to be marked "ifused". The current implementation
will correctly rebuild b whenever either d or c is rebuilt.
However, it does not guarantee that BOTH d and c get built
whenever b* is updated. If b* is updated only because it is
ifused, some of its dependents may not be updated. For example,
if c is updated and b* is marked for update because it is
ifused, then d may not be updated. If d is marked for update
and b* is marked for update because it is ifused, c may not be
marked for update. I call this a bug since it shouldn't be
necessary to run Jam twice to satisfy all dependencies.
A simple way to work around this is to mark b* with NOTFILE. This
will cause b*'s time stamp to no longer be considered. This is
arguably a reasonable thing to do, since these files are rarely
edited by hand and whenever they are used they are rebuilt.
Another workaround is to mark the final linked image with LEAVES,
which usually has a similar effect of removing b*'s time
stamp from consideration. Another workaround is to avoid having
an IFUSED file with more than one dependent target (this is usually
the case anyway, which is the major reason I don't consider this
problem serious).
* Operational Changes
** Versioning
We add a PATCHED_VERSION variable that indicates that the local,
patched version of jam is in use.
The variable is a list. PATCHED_VERSION[1] is the major version,
PATCHED_VERSION[2] is the minor version.
As you might expect, major version increments indicate
non-backwards compatible changes (elimination of builtin rules or
other features, changing features in an incompatible way, etc.).
Minor version increments indicate the addition of backwards
compatible features and bug fixes.
It is expected that a project's Jamrules will check the
PATCHED_VERSION variable and check for a major version mismatch,
and ensure the minor version is not too low.
This option is enabled with the OPT_PATCHED_VERSION_VAR_EXT
#define.
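Such a Jamrules guard might look like this (the required version
numbers are hypothetical):

```jam
# Refuse to build with an incompatible jam.
if $(PATCHED_VERSION[1]) != 2
{
    EXIT "Jamrules: patched jam major version 2 required, got"
         $(PATCHED_VERSION[1]) ;
}
if $(PATCHED_VERSION[2]) < 4
{
    # Note: jam's < compares strings lexically, so this assumes
    # single-digit minor versions.
    EXIT "Jamrules: patched jam minor version too old." ;
}
```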
** Maximum Command Length for NT
Jam ships with a maximum command line length of 996 for Windows
NT. Windows NT 4.0 and greater can handle command lines of
at least 10240 characters (perhaps longer; no tests have been
done).
This change increases the maximum command line length to 10240 for
Windows NT 4.0 and greater.
Caveat: the default Windows NT 4.0 command shell can only handle
commands up to 1-2k bytes long for many of its own internal
commands, such as del and echo. So this feature has spurred the
implementation of jamshell.c, a simple shell that lives in
tools/jamshell.
This option is enabled with the OPT_FIX_NT_BATCH_EXT #define.
* Bug Fixes
** Windows NT Batch File Naming Bug
This code is taken from //guest/craig_mcpheeters/jam/src/ on
the Perforce public depot. Many thanks to Craig McPheeters for
making his code available. It is delimited by the
OPT_FIX_NT_BATCH_EXT #define within the code.
Running jam multiple times on the same machine could break because
jam's temp batch file names were of the form jamtmpXX.bat, where
XX begins at 00 and increases numerically.
This fix adds the jam process's own PID to the temp batch file
name, allowing multiple copies of jam.exe to run simultaneously
without interfering with each other.
** Improper handling of "on target" values during header scanning
Setting any "on target" variables for $(<) within a HdrRule will
actually set the global values for those variables and the "on
target" values will remain unchanged.
Why? Jam implements "on target" variables by swapping the current
global values with the target specific values (see pushsettings()
in rules.c) and then unswapping them when the target is no longer
in scope (see popsettings() in rules.c).
This works fine if the target variables are not changed between
calls to pushsettings() and popsettings(). But, when scanning for
header file dependencies, the HDRRULE is run, and so the "on
target" variables of $(<) can be set.
Doing so will actually cause the global value of the variable to
be set. Why? Because the target's value will be swapped with the
global value in the popsettings() call after the HdrRule is
called. The value set on $(<) will either not change (if the same
variable was previously set on the target), or be taken from the
global setting (if the variable had never been set on the target before).
This problem occurs with the default Jambase's HdrRule when any
file includes itself. In this case, $(<) will also be present in $(>).
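For illustration, a HdrRule along the lines of the stock Jambase one
(simplified here) shows how the trigger arises: the rule sets an "on
target" variable on every included file in $(>), and when a file
includes itself, that assignment hits $(<) while its settings are
pushed.

```jam
# Simplified sketch of a Jambase-style HdrRule; not a fix, just the
# trigger.  When $(<) appears in $(>) (a file that includes itself),
# this "on target" assignment leaks into the global SEARCH for the
# reasons described above.
rule HdrRule
{
    Includes $(<) : $(>) ;
    SEARCH on $(>) = $(HDRSEARCH) ;
}
```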
This fix makes a copy of the target's "on target" variables and
uses the copy with pushsettings() and popsettings() in make.c's
make0() function. An alternate fix would be to freeze the "on
target" variables of $(<) within a HdrRule, disallowing any
modifications.
This code is enabled with the OPT_FIX_TARGET_VARIABLES_EXT #define.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: New LAZY builtin
Date: Fri, 11 Jan 2002 18:19:21 -0500
Why not just make this the meaning of ALWAYS + TEMPORARY? That was the first
thing I tried when I wanted this behavior.
That avoids introducing a new rule, too.
Subject: Re: New LAZY builtin
Date: Fri, 11 Jan 2002 17:16:17 -0700
Yeah, that's certainly a possibility (I tried that combination too).
Do the current semantics of ALWAYS + TEMPORARY have any useful
purpose? Let's see:
ALWAYS - mark the target for update regardless of its age on disk
TEMPORARY - if the target isn't on disk, take its age from its
oldest dependency, otherwise treat it as normal.
So it seems like right now, with a target marked ALWAYS and TEMPORARY,
ALWAYS "wins", so any current use of the combination is probably
accidental. I don't think giving new meaning to ALWAYS + TEMPORARY
would be so bad.
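Under that proposal, requesting the lazy behavior would just be the
obvious combination (the target name gen.h here is hypothetical):

```jam
# Proposed "lazy" semantics: mark gen.h for update regardless of age,
# but let it inherit its timestamp from its dependencies when missing.
ALWAYS gen.h ;
TEMPORARY gen.h ;
```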
From: "Matt Armstrong" <matt+dated+200201161718.912441@lickey.com>
Sent: Friday, January 11, 2002 7:16 PM
Subject: Re: New LAZY builtin
BTW, TEMPORARY is broken in stock Jam - it doesn't work right if the target
has multiple parents. My patch is:
***************
*** 175,180 ****
printf( "warning: %s depends on itself\n", t->name );
return;
+ /* Deal with TEMPORARY targets with multiple parents. When a missing
+ * TEMPORARY target is determined to be stable, it inherits the
+ * timestamp of the parent being checked, and is given a binding of
+ * T_BIND_PARENTS. To avoid outdating parents with earlier modification
+ * times, we set the target's time to the minimum time of all parents.
+ */
+ case T_FATE_STABLE:
+ if ( t->binding == T_BIND_PARENTS && t->time > ptime &&
+ t->flags & T_FLAG_TEMP )
+ t->time = ptime;
+ return;
+
default:
return;
Date: Fri, 11 Jan 2002 17:04:00 -0800
From: rmg@perforce.com
Subject: Perforce internal jam changes to //public/jam/...
FYI: (to all who are working on your own Jam changes):
I have just submitted the following change to the Jam sources in
the Public Depot (//public/jam/...):
| Change 1319 by rmg@rmg:pdjam:chinacat on 2002/01/11 16:38:34
|
| This change is a drop of the Perforce internal Jam changes
| since the 2.3 public release. The individual changes
| represented herein are preserved in the
| //guest/richard_geiger/intjam/ branch.
|
| The intent of this drop is to provide a base, from which other
| contributors' Jam branches may be integrated into. It is not
| intended to become a packaged release in this state. We will
| be integrating changes from other users prior to creating the
| next packaged release.
|
| Please refer to the src/RELNOTES file for an overview of the
| changes present in this integration.
|
My next step (toward the next Jam release), will be to contact
individual contributors about changes we have decided to take "as-is"
for the new release, to coordinate those integrations back into the
mainline. That will start happening next week, but please do NOT draw
any conclusions based on not having heard from me in that time frame.
I'll try to contact *everybody* who has Jam changes since 2.3.1 in the
Public Depot (whether or not we currently plan to grab any of your
changes), before the release is finalized.
In the event that we decide not to pick up a change that's important
to you in this round, take heart: there will be other releases down the line.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Perforce internal jam changes to //public/jam/...
Date: Fri, 11 Jan 2002 20:22:09 -0500
?? This looks like the exact same RELNOTES that's been there since 2.3.1
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Perforce internal jam changes to //public/jam/...
Date: Fri, 11 Jan 2002 20:26:55 -0500
Uh, Sorry. Better learn to sync before I speak ;-)
Subject: Re: Perforce internal jam changes to //public/jam/...
Date: Fri, 11 Jan 2002 22:08:30 -0800
From: rmg@perforce.com
Ah. Had me scared for a minute there!
Glad to know it's at least minimally discernible from the previous rev! :-)
Date: Sat, 12 Jan 2002 11:48:54 -0400
Subject: Re: New LAZY builtin
From: "Lex Spoon" <lex@cc.gatech.edu>
This would be confusing to people who don't know about it. A separate
keyword, on the other hand, provides a clear indication to go look in
the documentation if it's unfamiliar.
Date: Mon, 14 Jan 2002 11:03:15 +0100
From: "Niklaus Giger" <n.giger@netstal.com>
Subject: Re: Perforce internal jam changes to //public/jam/...
Thank you very much for your work. We are happy users of Jam in our
company and a new version of Jam will be put to use. E.g. the Glob
function is very welcome.
I compiled the sources under the latest version of cygwin and noticed
that cygwin needs an additional line in the Makefile (at 46a47).
Furthermore, it took me some time to find in the documentation how to
use the new Glob function. Could you consider changing in RELNOTES
New 'Glob' builtin that returns a list of files in a list of
directories, given a list of patterns.
To
New 'Glob' builtin that returns a list of files given
two parameters (list of directories, list of filename = pattern).
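A hypothetical call, following the suggested wording (a list of
directories first, then a list of filename patterns):

```jam
# Names here are illustrative; Glob returns the matching files from
# the given directories.
local SRCS = [ Glob $(TOP)/src $(TOP)/lib : *.c *.h ] ;
```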
From: <boga@mac.com>
Date: Tue, 15 Jan 2002 09:14:52 +0100
Subject: TOGETHER targets not removed on failure
If an action fails, its targets are deleted by jam.
If the action is marked with TOGETHER, its targets are not deleted. Why?
I'd like them to be deleted.
Code from make1.c
Is !( cmd->rule->flags & RULE_TOGETHER ) necessary here?!
Date: Fri, 18 Jan 2002 10:13:39 -0800
From: Raju Subbanna X4832 <hemantharaju.t.subbanna@nsc.com>
Subject: Auto jamfile creation
How can we automatically create a Jamfile on client creation in Perforce?
Date: Fri, 18 Jan 2002 14:33:33 -0500
From: "Wolpe, Paul" <Paul.Wolpe@blackrock.com>
Subject: Using Jam to invoke a remote Jam
I am redoing my company's make system and plan on using Jam, however,
there is one issue I am not sure how to approach.
The current build system is set up such that if a user types:
%make sol
This will rlogin to a Solaris machine dedicated to compiling and execute
the equivalent of "make all."
Alternatively, if the user were to enter:
%make lin
It will do the same process on a Linux build machine, regardless of the
platform the user is currently using.
Is there a way that in my rules I can specify a shell command to rlogin
to the appropriate machine, and then call Jam on that machine with the
appropriate arguments?
Date: Sat, 19 Jan 2002 01:24:11 -0500
Subject: external dependency scanner
I am working with a language whose dependencies cannot be analyzed by a
regular expression search (Objective Caml), but it comes with a utility for
doing such scans. Being a new Jam user, I have had to try to bootstrap my
head into the Jam mindset and figure out how to do something new at the
same time. But alas, I seem to have just come to the conclusion that my
problem cannot be solved without adding a primitive to the language.
The normal "make" thing to do is to use an include that depends on the
dependencies scanned from the files:
SRC := $(wildcard *.c)
OBJ := $(SRC:.c=.o)
test: $(OBJ)
$(CC) -o $@ $(OBJ)
include $(OBJ:.o=.d)
%.d: %.c
depend.sh $(CFLAGS) $< > $@
where depend.sh is a two-line script which runs gcc -M and regexps the .d
file to depend on the same things as the .o file. A more thorough
explanation is in Section 5.4 of:
http://www.canb.auug.org.au/~millerp/rmch/recu-make-cons-harm.html
So of course the only reason this works is that make promises to reboot
itself if it finds any includes of out-of-date files. I'm not sure if it
does this after updating only one .d file or a bunch, but it reboots
itself. Apparently jam does not:
Now this would seem fair, since it would also be okay to just read the .d
in and DEPEND it. But I can't seem to find any file reading functions. No
way to bind a variable to the contents of a file.
One thing I thought of would be to use a sh backquote and
then get the file back, but again:
read command or something?
I'd like to say other than that the Jam paradigm looks really nice.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: external dependency scanner
Date: Sat, 19 Jan 2002 07:54:33 -0500
I'm sure you already realize this, but the included file has to be a Jam
language input file, so its contents are treated the same way as what you
write in a Jamfile.
It isn't perfectly clear what your problem is, but if I understand you correctly...
Within the current Jam core language I can think of only one basic way: you
need to extend the action which builds your .d file with some postprocessing
so that either:
a. It is Jam language source code and can be handled by include
or
b. It can be scanned with the regular expression scanner to get what
you want out of it using a customized HDRRULE and HDRSCAN.
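For option (a), one untested sketch (the rule and file names are invented
here, and note it still runs into the bootstrap problem described above,
since jam reads the include at parse time):

```jam
# Postprocess ocamldep's "foo.cmo: bar.cmi ..." lines into Jam syntax,
# then pull the result in as Jam language source.
actions OCamlDep
{
    ocamldep $(>) | sed -e 's/\(.*\):\(.*\)/DEPENDS \1 : \2 ;/' > $(<)
}
OCamlDep deps.jam : foo.ml ;
include deps.jam ;
```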
If you are willing to use one of the many extended versions of Jam out there
which has exposed a regular expression substitution facility (ours calls the
rule SUBST - binaries at www.boost.org/tools/build; documentation at
www.boost.org/tools/build/build_system.htm), you can set HDRSCAN on the
OCaml file to .* (or some other appropriately general regexp) and HDRRULE to
your own dependency-managing rule, and process all the lines yourself until
you get what you need out of it. I'm certain the next official release
(coming soon) will have this feature or something like it, since all the
code is basically already there and everybody who hacks jam adds it themselves.
Date: Sat, 19 Jan 2002 14:09:45 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Auto jamfile creation
I don't know whether it's possible, but I suspect that if you succeed,
your success will be a failure in drag.
Two newly-created clients will have the same jamfile, but two months later
one or both may have been edited a little bit. And then, one engineer's
going to write some code, test it in his client, submit it, and then it
doesn't work for another engineer, whose jamfile is different, and a day
or so is wasted trying to find the problem.
The conventional approach, keeping the jamfile/makefile in perforce along
with the source code, has its merits :)
Date: Sun, 20 Jan 2002 11:12:59 +0100
From: "Erling D. Andersen" <e.d.andersen@mosek.com>
Subject: Newbee Jam ? and comments
I'm moving my make system to Jam. It looks quite promising.
I have a question regarding the -I setting. I do something like
SubInclude TOP path1 ;
SubInclude TOP path2 ;
in my $(TOP)/Jamfile. I hoped when building the objects in the Jamfiles of the
two directories that they would use
-I $(TOP)/path1 -I $(TOP)/path2
so that *.h files from both directories are visible. That seems not to be the case.
Is there an easy way of doing this?
I have had a couple of frustrations with Jam which are:
1. When I do
TOP = c:\\mosekdev ;
Then it seems to be essential that the c: is there, I think.
It took me an hour to figure that out.
2. If I do
VAR1 = $(VAR2) $(VR3) ;
and VR3 is misspelled for VAR3, then the assignment seems to fail without any warning.
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Mon, 21 Jan 2002 11:50:10 -0500
Subject: Boost.Build news
I am pleased to make the following announcements:
1. Most of the really difficult design problems of the Boost.Build
rewrite have now been solved, and we're ready to divide up the work and move
forward with implementation! I will post notice of a design document later
today for your reading enjoyment.
2. Rene Rivera has been pushing the current Boost.Build codebase
forward with the following features:
1. Fixed a problem with the Jamfiles, Jamrules, and other files
getting included multiple times. Sometimes as many as 15 times
in my project :-(
2. Support for <libflags> and <sysinclude> in all the current
toolsets.
3. "stage" rule that I mentioned some time in the past, to collect
files from the various subdirectories into a single one, with
file renaming depending on the subvariant spec.
4. Ability to specify <dll> as a source dependency. This has the
effect that the DLL is linked in by the use of LIBPATH and
FINDLIBS instead of directly with NEEDLIBS, which is the preferred
way for shared libraries.
3. Rene has generously agreed to take over maintenance of the current
Boost.Build codebase so that I can devote more energy to the new one. Rene
will also be participating in that project. Rene has proven his
understanding of the current system and I'm confident in his judgement.
Rene will coordinate enhancements like the testing work being done by Joerg
Walter and Brad King. Needless to say, I really appreciate Rene's help.
Date: Mon, 21 Jan 2002 17:53:37 -0700
From: Ray Caruso <Ray.Caruso@Netvion.com>
Subject: Problems Setting OPTIM on a target
I have one or two .cpp files in my project for which I would like to turn
off optimization.
I have set the global version of OPTIM and then am trying to set OPTIM on
the specific .cpp file.
OPTIM = -O3 ; # Using full opt on g++
OPTIM on foo = -O0 ; # turn off opt for foo.cpp
I've tried all kinds of versions of this such as
OPTIM on foo.cpp = -O0 ;
or
OPTIM on foo$(SUFOBJ) = -O0 ;
regardless, running jam -d 2 shows it compiling everything, including
foo.cpp with -O3.
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Mon, 21 Jan 2002 22:54:58 -0500
Subject: Boost.Build design document
A document describing key parts of the new Boost.Build architecture (to be
implemented) is now at tools/build/architecture.html in the Boost CVS tree,
and can also be viewed at:
http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/boost/boost/tools/build/architecture.html
Rene, I'm awaiting a proposed merging of the Preamble and Initialization
sections from you.
Date: Thu, 24 Jan 2002 11:06:46 -0700
From: Ray Caruso <Ray.Caruso@Netvion.com>
Subject: Re: Problems Setting OPTIM on a target
Just wondering if anyone has any input on this. I am kinda in a jam (pun
not intended)
on this. I have used the "on" form of variable assignment before without
a problem. Please, any clues?
Date: Thu, 24 Jan 2002 13:09:27 -0700
From: Ray Caruso <Ray.Caruso@Netvion.com>
Subject: Re: Problems Setting OPTIM on a target
Thanks to Diane and everyone that replied to my plea for help.
From: Ian Godin <ian@sgrail.com>
Date: Fri, 25 Jan 2002 08:32:21 -0800
Subject: Compiling too many times
I haven't been using jam for long, and I ran into this little
problem for which I haven't been able to find a solution:
Jamfile:
Main a : a.c ;
Main b : a.c ;
I'm just playing around, so a.c is:
int main( void ) {
return 0;
}
When I run jam on this, it compiles a.c twice.
I'm not quite sure why:
...found 11 target(s)...
...updating 3 target(s)...
Cc a.o
Link a
Chmod1 a
Link b
Chmod1 b
...updated 3 target(s)...
This happens between libraries as well. I'm converting
a large project from make to jam, and compiling every
file twice is a huge waste of time :) We basically build
shared and static libraries from the same C files.
Date: Fri, 25 Jan 2002 19:41:40 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Compiling too many times
It's a weakness. I don't know how to fix it properly, but a workaround
should be easyish: Use the LibraryFromObjects rule twice and the Objects
rule once instead of using Library twice.
Btw, do you use the same compiler options for shared and static libraries?
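For the Main a / Main b example, the workaround might look like this
(untested sketch; MainFromObjects plays the role that LibraryFromObjects
does for libraries):

```jam
# Compile a.c once, then link the one object into both programs.
Objects a.c ;
MainFromObjects a : a$(SUFOBJ) ;
MainFromObjects b : a$(SUFOBJ) ;
```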
From: Ian Godin <ian@sgrail.com>
Subject: Re: Compiling too many times
Date: Fri, 25 Jan 2002 10:48:28 -0800
Yes I do use the same flags. Thanks, that was indeed the solution.
Basically jam seems to accumulate all actions for a given target together
(went hunting through the source code). I can see a use for this behavior,
but it's a little weird. Perhaps adding a "once" modifier to actions... but
that requires a little more thought...
Anyways, that fixes my problem and it's pretty easy to work around.
So far I'm very impressed with jam... very much better than make.
Date: Fri, 25 Jan 2002 20:18:12 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Compiling too many times
I thought about it some more now, and I think jam's in the wrong. I've
heard an explanation for it, but now I think it's wrong.
If we make the basic assumption that the build process can fail (due to
compile errors, lack of disk space or other reasons) at any point and a
rebuild from that point should work, then I don't think jam has a
legitimate reason to build a target twice.
If jam builds a target twice in the exact same way, then one of the action
invocations is obviously superfluous. If jam builds a target twice in two
different ways, then it cannot know which one is left on disk in case a
build is interrupted, and that breaks the assumption above.
Date: Fri, 25 Jan 2002 14:26:30 -0500
From: Michael Gentry <mgentry@sharemedia.com>
Subject: Why does "jam install" build differently than "jam"
Does anyone know why Jam would build things in a different order if
"install" is added to the command line?
I have several custom rules that generate files which then get used in
the build process (such as .idl -> .hh and .cc). If I run the following
commands, it works fine:
jam clean
jam
jam install
But this sequence will not work:
jam clean
jam install
For some reason, a regular "jam" will build the .hh/.cc files before
compiling the things that depend upon them, but "jam install" doesn't
build them until later in the process.
Date: Fri, 25 Jan 2002 20:00:50 -0800
From: Ian Godin <ian@sgrail.com>
Subject: Re: Compiling too many times
Sorry, I'm sending this from home and I don't have your original message.
But here are my thoughts after playing with this for a while. BTW I got
everything working to my satisfaction (by calling Objects separately).
Jam does not suffer from the problem you mentioned, because it seems
to delete files that didn't complete successfully. And it always runs all
the commands, so it should rebuild the same way every time.
Going back to my example:
Main a : a.c ;
Main b : a.c ;
I think this behavior is wrong because when I write Jamfiles, I think about
dependencies. In the above example, I'm saying "a" depends on "a.c", and "b" depends
on "a.c". I am NOT saying build a.o twice because two things depend on a.c. The details
of how to "best" build those should be handled by jam, Jambase, and Jamrules. Now
granted, sometimes best might mean build a.o twice, but I don't think
that is the common case. So I believe the default behavior is incorrect.
But anyways, that's my thinking/philosophy on the subject. I also noticed the "together"
modifier on actions. From the docs, it seems to do pretty much exactly what is needed
(multiple identical actions on a target are treated as a single action). I haven't
done any experiments yet with it.
Thanks to everyone who responded and helped. Seems I've hit a common
problem for newbies of jam :)
Date: Sat, 26 Jan 2002 17:54:24 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Why does "jam install" build differently than "jam"
The build order is in principle determined only by the dependencies jam
can see. On a single-CPU system, jam will use the same build order every
time, and there's a fairly simple rule that mostly tells you the order.
However, if you use -j (usual on SMP or netbuild systems), the build order
may be different. FWIW, I also have a hacked jam that (much handwaving
here) makes it build a lot faster, and it changes the build order even
more than -j does.
The basic rule has to be: Make your dependencies explicit.
By accident :)
Make sure there's a Depends invocation that tells jam about those
dependencies.
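For instance, if a generated header feeds a compile, spell the dependency
out (the file names here are hypothetical):

```jam
# foo.o cannot be compiled before thing.hh has been generated from
# thing.idl, so say so explicitly:
Depends foo$(SUFOBJ) : thing.hh ;
```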
Date: Thu, 31 Jan 2002 02:36:16 -0800 (PST)
Subject: Problems in connecting CEPC to host through Ethernet
I am not able to connect a CEPC to a host through an ethernet card.
I am using a 3Com EtherLink 10/100 PCI (3c905C-TC) card.
I have given the IRQ and base address by editing Autoexec.bat.
The IRQ I gave was 11. That was what I found in the BIOS SETUP.
Date: Thu, 31 Jan 2002 09:26:24 -0800
From: rmg@perforce.com
Subject: Jam 2.4 release status
[Everything below assumes you are familiar with the Perforce Public
Depot, and its role in the ongoing evolution of Jam. If you care about
that, please see
http://public.perforce.com/public/index.html
and
http://public.perforce.com/public/jam/index.html ]
I would dearly love to finalize Jam 2.4 by the end of February.
Presently:
- I've integrated all of the Perforce-internal changes into
//public/jam/... in the Public Depot.
- I've integrated the handful of the "slam-dunk" changes from various
//guest/.../jam/... branches.
- I've also done a bunch of "null-change" integrations, aimed solely
at updating the revision history in Perforce, to ease the process
of looking for new contributed changes in the future.
- There's been at least one field-contributed bugfix in the last week
that's now in the mainline.
- I think I've individually contacted everybody who made changes to
//guest/.../jam/... branches, to let you know about the status of
your changes WRT 2.4. If anybody out there is thinking "What? why
wasn't *I* contacted?", please send me mail at
"opensource@perforce.com"
- There are several outstanding changes from contributors that we
know we want to integrate, though we may want to implement them
differently, and I'm not sure whether they will make it into 2.4.
I'll probably be looking closest at things labeled as "bugfixes" at
this point.
- There are some contributed changes yet to be considered; some of
these are "major features" (in terms of functionality, at least, if
not perhaps code impact): for example the header scan caching
stuff. I'm guessing that these won't make it into the 2.4 release,
but are still open for consideration in the next release.
(By the way I'd hope that the next release after 2.4 will be able
to happen much quicker than 2.4 has followed 2.3).
At this point, I'd like to encourage everybody to grab the current
head revisions of //public/jam/..., and give it a go. My own resources
for testing across diverse platforms are thin, so I'm hoping that we
can shake out major portability problems (or other bugs) by way of
your efforts. I.e., I'll do what I can, but if there's a platform you
really care about, you can help by trying the current "release
candidate" there.
I will try to at least make a statement of the platforms I -expect- it
to work on, by next week. I'll also shift to a mode of posting news and
status at
http://public.perforce.com/public/jam/index.html
instead of long messages (like this one) to this list.
In short: the mainline has some significant improvements, which I am
aiming to make into the 2.4 release within the month. It's a great
time for you to try the head of main and report bugs (or bug me about
fixes you've already submitted, that I've missed integrating). The
process of getting 2.4 out will have also served as a learning curve
for me, which should allow work on 2.5 (3.0?) to progress that much faster.
Finally, thanks to all Jam users and developers out there. I've
enjoyed meeting you, and starting to work with you. Your attitude rocks.
Date: Mon, 04 Feb 2002 16:03:43 +0100
From: "Niklaus Giger" <n.giger@netstal.com>
Subject: Jam 2.4 release status
I never include "." in my path. Therefore I suggest that you
change in Makefile
all: jam0
jam0
to
all: jam0
./jam0
I had no problem compiling it under cygwin. It seems to work as
expected with my old Jamfiles.
When I aborted it with Ctrl-C I got the following output:
STATUS_ACCESS_VIOLATION
jam.exe.stackdump
This is different compared with Jam 2.3, which sometimes needed more
than one Ctrl-C to stop, but never gave more output than "interrupted".
From: <boga@mac.com>
Date: Tue, 5 Feb 2002 08:39:01 +0100
Subject: Jam 2.4 vs. 2.3
The syntax of 'in' was changed in jam 2.4:
I used to write something like:
Now I had to replace it with:
I don't mind this change; however, it should be documented.
The change seems to be caused by the rewrite of jamgram.yy
< | arg `in` list
to
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Tue, 5 Feb 2002 15:49:25 -0500
Subject: Re: jamming digest, Vol 1 #312 - 1 msg
For the record, I mind it.
From: "Andrew Reynolds" <andrew@syndeocorp.com>
Date: Wed, 6 Feb 2002 10:09:27 -0800
Subject: building jam for the first time - newbee help please
I am trying to build jam 2.3 itself from the source files I downloaded from
the perforce web site. I am building on Solaris 2.6. The make file
generates jam0 but I keep getting this error when running jam0:
Archive bin.unix/libjam.a
ld: fatal: file bin.unix/libjam.a: cannot open file: No such file or directory
ld: fatal: File processing errors. No output written to a.out
I am frustrated with the documentation on this - it is very vague, and there
are few references on how to build jam the first time or what environment
needs to be set. I just can't seem to figure out what needs to be set to
fix this. Or maybe I have just spent too much time writing perl scripts and
have forgotten all my C.
So could someone take pity on me and point me the right way on this? (either
tell me which doc covers this or give me some instructions here).
Date: Wed, 06 Feb 2002 14:18:04 -0800
From: rmg@perforce.com
Subject: Re: jamming digest, Vol 1 #313 - 2 msgs
Ah, finally, controversy! Well, closest thing we've seen to it :-)
No final ruling yet, but it does sound like the change breaks existing
Jamfiles, and unless there's strong a reason to do so, I suspect we'll
want to put it back the way it was.
BTW, I'll likely be distracted (for the most part) from jam work
until the last week of the month, at which point I hope to be able to
complete my checklist of reviews needed before I can finalize a
packaged 2.4 release.
One item on the checklist will be to review all of the traffic on this
list since, say Jan 1, to make sure I haven't missed picking up any
important bug reports or fixes. So, please do keep them coming.
Date: Wed, 06 Feb 2002 16:16:16 -0700
From: Ray Caruso <Ray.Caruso@Netvion.com>
Subject: "in" syntax change in 2.4
Oh man, don't modify syntax, please. Add new stuff, but don't break
existing syntax.
That would be one very big reason to not move from 2.3 to 2.4.
From: Markus Scherschanski <MScherschanski@dspace.de>
Date: Thu, 7 Feb 2002 13:33:09 +0100
Subject: Single Pseudo-Target
I've got a bit of a problem: I'm trying to create a target that is only built
if the command line says so, I mean something like:
jam -f myjamfile cclean
It should be a substitution for the normal clean.
I tried several ways; here is what I mean:
rule CClean {
local _i ;
if $(UNLOCK_ONLY) != TRUE { ECHO removing object files and lib ; }
for _i in $(>)
{
UnlockIt $(_i) ;
if $(UNLOCK_ONLY) != TRUE { CleanIt $(_i) ; }
}
}
actions existing CleanIt {
echo deleting $(<)
del $(<)
}
actions existing UnlockIt {
echo unlocking $(<)
attrib -r $(<)
}
CClean cclean : $(OBJ_FILES) $(LIB) ;
I also tried:
NOTFILE cclean ;
ALWAYS cclean ;
Depends ...
The phenomenon is that it runs the rule every time but never executes the
actions' commands!!!
What can I do? What the rule does, should be clear, if not: I'll explain.
Many thanks. Hopefully,
Date: Thu, 7 Feb 2002 12:40:37 -0800 (PST)
Subject: Re: Single Pseudo-Target
You were really really close... :)
Try:
NOTFILE cclean ;
ALWAYS cclean ;
rule CClean {
local _i ;
if ! $(UNLOCK_ONLY) {
CCleanMessage cclean ;
}
for _i in $(>) { UnlockIt cclean : $(_i) ;
if ! $(UNLOCK_ONLY) {
CleanIt cclean : $(_i) ;
}
}
}
actions CCleanMessage {
echo Removing object files and lib...
}
actions existing CleanIt {
echo deleting $(>)
del $(>)
}
actions existing UnlockIt {
echo unlocking $(>)
attrib -r $(>)
}
CClean cclean : $(OBJ_FILES) $(LIB) ;
$ jam -d0 cclean
Removing object files and lib...
unlocking src/lib/a/a.o
deleting src/lib/a/a.o
unlocking src/lib/a/liba.a
deleting src/lib/a/liba.a
$ UNLOCK_ONLY=true jam -d0 cclean
unlocking src/lib/a/a.o
unlocking src/lib/a/liba.a
From: Jack_Goral@NAI.com
Subject: RE: Single Pseudo-Target
Date: Thu, 7 Feb 2002 07:10:27 -0600
I think you can take the idea from my code below:
rule Msdev  # projectname
{
local _t = $(1:S=.dsp) ; # make it : projectname.dsp
local _wt = \"$(1) - Win32 $(BUILD)\" ;
local _clean = [ FGristFiles clean ] ;
#
# find the .dsp file
#
SEARCH on $(_t) = $(SEARCH_SOURCE) ;
#LOCATE on $(_t) = $(SEARCH_SOURCE) ;
#
# make all depend on the target (.dsp)
#
Depends build : $(_t) ;
Depends all : build ;
Depends clean : $(_clean) ;
#
# always remake the target so 'msdev' decides what to rebuild
#
ALWAYS $(_t) ;
#
# msdev build target, for example: "NGExpertSvr - Win32 Debug"
#
if $(SUB_BUILD) {
_wt = \"$(1) - Win32 $(SUB_BUILD) $(BUILD)\" ;
}
MSDEV_BUILD_TARGET on $(_t) = $(_wt) ;
MSDEV_BUILD_TARGET on $(_clean) = $(_wt) ;
#
# run the 'msdev' project build
#
RunMsdev $(_t) ;
#
# use 'msdev' when target is 'clean'
#
RunMsdevClean $(_clean) : $(_t) ;
}
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Sun, 10 Feb 2002 19:53:18 -0500
Subject: Nasty Jam behavior
My copy of Jam has the following behavior on Windows:
C:\>jam -f-
x = foo/bar ;
ECHO $(x:G=) ;
^Z
foo\bar
I guess it's mostly harmless when slashes and grist have their usual meaning
(though backslashes can cause trouble for some cygwin tools), but if you're
trying to do anything else, this behavior is at least surprising, and
potentially problematic. Slashes seem to automatically get reversed during
binding anyway, so is there any reason to keep this quirk in Jam?
Subject: Re: Nasty Jam behavior
From: Matt Armstrong <matt@lickey.com>
Date: Sun, 10 Feb 2002 19:42:04 -0700
The upcoming 2.4 release of stock jam's RELNOTES mentions something
related to this. It says that using any of GDBSM will result in the
variable being parsed and rebuilt as a filename, while all other
modifiers do not have this behavior (previously, they all did). I
wonder why :G is considered a filename operation though -- strictly
speaking it really has nothing to do with files.
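To see the distinction concretely (the first output is from David's
transcript above; the :U line is my assumption, since :U is not one of
the GDBSM modifiers):

```jam
x = foo/bar ;
ECHO $(x:G=) ;  # GDBSM modifier: value rebuilt as a filename, foo\bar on NT
ECHO $(x:U) ;   # not a GDBSM modifier: presumably FOO/BAR, slash intact
```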
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Nasty Jam behavior
Date: Sun, 10 Feb 2002 21:54:10 -0500
Hmm. I can't see any reason for it to happen with any of the modifiers
regardless of their application as filename modifiers, since, as I said, it
happens anyway at binding time.
Subject: Re: Nasty Jam behavior
From: Matt Armstrong <matt@lickey.com>
Date: Sun, 10 Feb 2002 20:00:19 -0700
It makes some sense for things like :P and :D -- the filename
operations are OS dependent. Perhaps the win32 path functions are
smart enough to recognize '/' as a path separator, but they use '\'
when rebuilding the filename.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Nasty Jam behavior
Date: Sun, 10 Feb 2002 22:14:17 -0500
Of course I understand that. But who is this behavior helping? Surely any
system that depends on it is a fragile one. What's the scenario? To justify
the current behavior, it seems to me that all three of these conditions must
be true:
* The system supports user-specified paths with forward slashes
* The system happens to use these modifiers on all such paths
* The system depends somehow on the lack of forward-slashes in
paths that have been modified.
Subject: Re: Nasty Jam behavior
From: amaury.forgeotdarc@ubitrade.com
Date: Mon, 11 Feb 2002 09:49:57 +0100
Because of this, we chose to slightly modify Jam,
to make it more consistent:
Jam only builds filenames with '/' separators,
except when executing actions, where
$(<), $(>) and "bind" variables are rewritten with '\'.
With this change, we always write our Jamfiles with '/'
in path, even for NT-only rules. We even made our
developers think that backslashes are forbidden in
Jamfiles (which is not true. Jam still recognizes both).
The advantages are consistency and platform independence.
The drawback is that every variable containing a file name
must be declared as "bind", otherwise it will appear with '/'.
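As a hypothetical illustration of that caveat (MyCopy and CFG are made-up names): a variable holding a file name has to be listed after `bind` in the actions header for the modified jam to rewrite it with backslashes:

```
# CFG holds a file name, so it must be a bind variable;
# otherwise it would reach the action with '/' separators.
CFG on mytarget = config/build.cfg ;

actions MyCopy bind CFG
{
    copy $(CFG) $(<)
}
```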
From: "Vianney Lecroart" <lecroart@nevrax.com>
Date: Tue, 12 Feb 2002 15:57:41 +0100
Subject: jambase & keepobjs
I'm a newbie with Jam and I'm trying to port my big project to Jam (on the
Windows platform to start).
I want to modify the default Jambase file, so I put a Jambase file in the same
directory as the Jamfile and execute "jam" in that dir, but it doesn't use
my Jambase file, only a default one. I have to execute "jam -fJambase", and
in that case it uses mine, but I don't think that's great. Is there another
way to have my Jambase picked up automatically?
Other question: I create a (big) Library. Strangely, it creates all the objs,
creates the lib, and erases all the objs, but the default Jambase has this:
if ! ( $(NOARSCAN) || $(KEEPOBJS) ) { RmTemps $(_l) : $(_s) ; }
and NOARSCAN is set to "true" on NT, so the if should not be entered and the
objs should not be erased, right?
Another strange thing is that when I set KEEPOBJS to true, the objs are not
deleted, but the lib is not created. I think it's because:
Depends lib : $(_l) ;
is not called if KEEPOBJS is true, and so jam doesn't check whether the lib
is there or not (I'm not sure about that).
From: "Vianney Lecroart" <lecroart@nevrax.com>
Subject: Re: jambase & keepobjs
Date: Tue, 12 Feb 2002 17:08:15 +0100
Exactly, so now I understand the behavior.
Ok, but my project is a library, so I want the lib :) Is there a trick to
generate the lib even if KEEPOBJS is true?
Now, another question: how do I manage the directory separator in a portable
way? Putting / doesn't work when creating a directory with MkDir on
Windows, so I use $(SLASH) like this:
TOP = r:$(SLASH)code$(SLASH)nel ;
But it's not very readable, and the user could change the path, so it's not a
very user-friendly format. Is there another way to do this?
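One alternative, assuming the stock Jambase is in use, is its FDirName helper, which joins path elements with the platform's separator (whether the drive-letter element comes out as intended would need checking):

```
TOP = [ FDirName r: code nel ] ;
```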
From: Patrick Frants <patrick@quintiq.com>
Date: Wed, 13 Feb 2002 08:08:57 +0100
Subject: How do I *not* build a file
I have a file b.cpp which is #included by a.cpp.
When using Glob, all .cpp files turn up, including b.cpp.
Adding b.o to the depot and specifying both NOTFILE and NOUPDATE for b.o works,
but I think it's very ugly. I would rather have one of these solutions
(in this order):
1. Somehow exclude b.cpp from the list of source files returned by GLOB.
I can't find any operator for excluding an item from a list. Maybe I could
write a rule to exclude an item from a list with a for loop.
2. Somehow tell jam that b.cpp is very special and should not be compiled.
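The for-loop rule sketched in option 1 could look like this (ExcludeFrom is a made-up name; note that GLOB returns names with a directory prefix, so the excluded items may need the same prefix):

```
rule ExcludeFrom
{
    # ExcludeFrom list : items ;  returns list minus items
    local i result ;
    for i in $(<)
    {
        if ! ( $(i) in $(>) ) { result += $(i) ; }
    }
    return $(result) ;
}
```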
Date: Wed, 13 Feb 2002 18:46:32 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: How do I *not* build a file
Name it b.inc instead of *.cpp.
Yes, you can do that.
Isn't that what you're doing with NOUPDATE?
From: "Achim Domma" <achim.domma@syynx.de>
Date: Thu, 14 Feb 2002 15:36:17 +0100
Subject: First tries with boost-jam
I'm just doing my first steps with jam, but have no success. I downloaded
jam binaries for windows from the boost page and extracted them to a folder.
Then I set the following environment variables :
BOOST_BUILD_INSTALLATION=J:\boost-build
INTELC="D:\Program Files\IntelC++\compiler50\ia32"
VISUALC="D:\Program Files\Microsoft Visual Studio\VC98"
JAM_TOOLSET=INTELC
then I write the following simple Jamfile :
project-root ;
exe MyTestExe : first.cpp
second.cpp
some_more.cpp
;
Executing Jam without parameters I get :
Compiler is Intel C/C++
warning: unknown rule project-root
warning: unknown rule exe
...found 7 targets...
As far as I understand this means that jam does not know what to do with
'project-root' and 'exe'. I think jam should find the required files via
BOOST_BUILD_INSTALLATION !?
Could somebody give me a hint in the right direction?
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: First tries with boost-jam
Date: Thu, 14 Feb 2002 10:27:22 -0500
Try first setting BOOST_ROOT to the root directory of your boost
installation. BOOST_BUILD_INSTALLATION is becoming obsolete and that part of
the documentation may have gotten out-of-synch.
From: Markus Scherschanski <MScherschanski@dspace.de>
Date: Thu, 14 Feb 2002 16:51:44 +0100
Subject: Release what?!
First of all, many thanks to those who helped me with my little problem of
building a single object (once!!!).
I was just testing Jam to see whether it would fit the needs of my firm, so
on that point I'm also interested in the future of Jam.
Everybody is talking about a 2.4 release, but when will it arrive and what
will its features be? Will it be Win95-compatible, and will it have all the
functions ftjam has?
How about regular expressions and some support for e.g. testing the
existence of files, not just via actions and bloody shell programming?
My last wish would be A BETTER DOCUMENTATION!!!
Nevertheless, I must say Jam is the best make tool I ever saw, and I'll try
to make my bosses like it too. ;)
Date: Thu, 14 Feb 2002 17:08:33 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Release what?!
I'm using that right now, and you can too. Just check it out from the
perforce public depot. 2.4 is at //public/jam/...
From: "Achim Domma" <achim.domma@syynx.de>
Subject: RE: First tries with boost-jam
Date: Thu, 14 Feb 2002 18:00:02 +0100
It works so far, thanks! Now I have the 'Command-line and Environment
Variable Quoting' problem. As mentioned in the documentation, I put double
quotes around the path, but it does not work. Here are the top rows of the output:
Jamrules: No such file or directory
...found 43 targets...
...updating 7 targets...
intel-win32-C++-action
bin\PythonISAPI\intel-win32\debug\runtime-link-dynamic\Request.obj
'D:\Program' is not recognized as an internal or external command,
operable program or batch file.
"D:\Program Files\IntelC++\compiler50\ia32"\bin\icl /Zm400 -nologo -GX -c
/Zi /Od /Ob0 /GX /GR Dd -I"."
-Fo"bin\PythonISAPI\intel-win32\debug\runtime-link-dynamic\Request.obj"
-Tp"Request.cpp"
...failed intel-win32-C++-action
bin\PythonISAPI\intel-win32\debug\runtime-link-dynamic\Request.obj ...
intel-win32-C++-action
bin\PythonISAPI\intel-win32\debug\runtime-link-dynamic\Response.obj
From: "Achim Domma" <achim.domma@syynx.de>
Date: Mon, 18 Feb 2002 13:55:13 +0100
Subject: Building COM Objects with boost jam
I have successfully built my first dlls with the boost version of jam. To
use jam as our build system I must be able to build COM objects, so I have
to implement rules and actions for compiling *.idl files. I read the
documentation of boost.build, but the included jamfiles are quite large.
Could somebody give me a starting point for where to hook idl processing
into the boost build process?
PS.: Is it ok to post in this list even if it's boost.build specific ? I
thought this list fits better than the boost list.
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Building COM Objects with boost jam
Date: Mon, 18 Feb 2002 08:50:17 -0500
We have our own mailing list for Boost.Build-related discussions:
Date: Wed, 20 Feb 2002 18:06:58 +0000 (GMT)
From: Richard Smith <richard@ex-parrot.com>
Subject: Auto generated files
I'm trying to write a Jamfile for a project that has some automatically
generated .cc and .h files, and the automatically generated .cc files can
include the automatically generated .h files. I would like to use Jam's
ability to generate files on demand to generate the .h files; however, I've
been unable to get the dependencies quite right. I wonder whether someone can help.
I have the following test Jamfile
rule AutoSource {
Clean clean : $(1) ;
Depends $(1) : $(2) ;
}
rule AutoHeader { Clean clean : $(1) ; }
actions AutoHeader { touch $(<) }
actions AutoSource { cp $(>) $(<) }
AutoHeader foo.h ;
AutoSource foo.cc : foo.src ;
Main foo : foo.cc ;
# End file
... in a directory containing just a file, foo.src:
#include "foo.h"
int main() {}
When I run Jam the first time it builds foo.cc correctly but it doesn't
know that it needs to build foo.h before compiling it, and so fails. If I
re-run Jam, it is able to analyse the included files in foo.cc and it then
builds foo.h and compiles and links foo happily.
Is there a way of getting this to work? ( I do not want to just add a
"Depends $(1) : first ;" line to the AutoHeader rule. )
Subject: Re: Auto generated files
From: Matt Armstrong <matt@lickey.com>
Date: Wed, 20 Feb 2002 12:36:21 -0700
Once actions start running (and, say, build your foo.cc) there is no
way to modify the dependency information. For jam to build something
in one run, it needs the complete dependency tree before it starts
building anything. There is no way to scan foo.cc for headers after
Jam has just built it.
I wouldn't do that either.
Since foo.cc is generated, presumably the header files it will depend
upon are known ahead of time. You could add "Depends foo.cc : foo.h ;"
in there yourself. You might be able to wrap the messy details in the
rules that generate foo.cc. This is kind of yucky, since you're
inserting more knowledge about how foo.cc is generated into the Jam
rules than you might like, but doing stuff like this is the only way
to get it to work with Jam.
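Following that suggestion, the AutoSource rule from the original Jamfile could take the known headers as a third argument (a sketch; the actions still see only the second list as $(>), so the copy command is unaffected):

```
rule AutoSource
{
    # AutoSource target : source : headers-the-generated-file-includes ;
    Clean clean : $(1) ;
    Depends $(1) : $(2) $(3) ;
}

AutoSource foo.cc : foo.src : foo.h ;
```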
From: Badari Kakumani <badari@cisco.com>
Date: Wed, 20 Feb 2002 12:15:02 -0800
Subject: Re: emacs editing mode for Jam?
i am new to the list. i could browse the above changes using the
cgi script. but how do i download the actual source files?
do i need some perforce client running on my unix box to download these?
Date: Wed, 20 Feb 2002 17:21:26 -0800 (PST)
Subject: Re: emacs editing mode for Jam?
You could do that, or you could use my version of P4DB to get to the page
that has a "Download file" link (in the legend at the top):
http://www.tsoft.com/~dianeh/cgi-bin/p4db/fv.cgi?FSPC=//guest/eric%5fscouten/jam%2dmode/jam%2dmode.el&REV=1
Date: Wed, 20 Feb 2002 17:59:34 -0800 (PST)
Subject: Re: Auto generated files
If foo.h only needs to be generated when it doesn't exist (i.e., its
generation isn't dependent on whether it's newer than foo.src), just
change it to:
rule AutoHeader {
Depends files : $(1) ;
Clean clean : $(1) ;
}
But if foo.h should be re-gen'd when it's out-of-date with foo.src, then
you'd need to make foo.h depend on foo.src:
rule AutoHeader {
Depends $(1) : $(2) ;
Clean clean : $(1) ;
}
and change:
AutoHeader foo.h : foo.src ;
Date: Wed, 20 Feb 2002 19:46:03 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Another compatibility problem in Jam2.4
In early Feb, Miklos (boga@mac.com) reported that the syntax for 'in'
changed in the current development version of Jam 2.4. His reported
fix is to alter the jamgram.yy file:
A second problem is with expressions like the following:
list = 1 2 3 4 ;
element = 1 ;
if ! $(element) in $(list) {
Echo Wrong. ;
}
With Jam 2.3 the echo is not reached. With 2.4-dev, the echo is executed.
If you add brackets to the example, jam 2.4 will do the right thing. The
change breaks a lot of my code.
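That is, the parenthesized form behaves the same under both versions:

```
list = 1 2 3 4 ;
element = 1 ;
if ! ( $(element) in $(list) )
{
    Echo Wrong. ;
}
```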
One solution is to revert that section of jamgram.yy back to an earlier
style, with the extensions that 2.4 needs. The new section which is working
for me is:
expr : arg
{ $$.parse = peval( EXPR_EXISTS, $1.parse, pnull() ); }
| arg `=` arg
{ $$.parse = peval( EXPR_EQUALS, $1.parse, $3.parse ); }
| arg `!=` arg
{ $$.parse = peval( EXPR_NOTEQ, $1.parse, $3.parse ); }
| arg `<` arg
{ $$.parse = peval( EXPR_LESS, $1.parse, $3.parse ); }
| arg `<=` arg
{ $$.parse = peval( EXPR_LESSEQ, $1.parse, $3.parse ); }
| arg `>` arg
{ $$.parse = peval( EXPR_MORE, $1.parse, $3.parse ); }
| arg `>=` arg
{ $$.parse = peval( EXPR_MOREEQ, $1.parse, $3.parse ); }
| expr `&` expr
{ $$.parse = peval( EXPR_AND, $1.parse, $3.parse ); }
| expr `&&` expr
{ $$.parse = peval( EXPR_AND, $1.parse, $3.parse ); }
| expr `|` expr
{ $$.parse = peval( EXPR_OR, $1.parse, $3.parse ); }
| expr `||` expr
{ $$.parse = peval( EXPR_OR, $1.parse, $3.parse ); }
| arg `in` list
{ $$.parse = peval( EXPR_IN, $1.parse, $3.parse ); }
| `!` expr
{ $$.parse = peval( EXPR_NOT, $2.parse, pnull() ); }
| `(` expr `)`
{ $$.parse = $2.parse; }
;
From: sam th <sam@uchicago.edu>
Date: 22 Feb 2002 16:11:44 -0600
Subject: Newbie question
I'm just learning Jam, but I like it quite a lot. However, there's one
thing that I just can't seem to figure out how to get jam to
understand. I have a project with a couple subdirectories, call them
proj/lib and proj/bin. I compile a library in proj/lib, and that works
fine. But then I want to compile a program in proj/bin, and I can't get
Jam to find the library. I've tried lots of things. What's the best
way to do this?
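The usual pattern uses the stock Jambase SubDir/SubInclude and LinkLibraries rules. A sketch with assumed file names (if jam then fails to match the library target because of SubDir grist, the target name may need explicit grist):

```
# proj/Jamfile
SubDir TOP ;
SubInclude TOP lib ;
SubInclude TOP bin ;

# proj/lib/Jamfile
SubDir TOP lib ;
Library libmylib.a : one.c two.c ;

# proj/bin/Jamfile
SubDir TOP bin ;
Main prog : main.c ;
LinkLibraries prog : libmylib.a ;
```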
Date: Tue, 26 Feb 2002 10:10:29 -0800
From: rmg@perforce.com
Subject: jam jobs in the Public Depot
(I will at some point mark the "must-fix" ones severity 'A', but have
not done so yet - I'm just collecting the list so far).
This further(*) introduces the use of Perforce jobs as a feature of
the public depot, a somewhat uncharted territory, but one that will
be useful as the Public Depot revs up. In particular, expect the
jobspec (think of it as the bug tracking database schema) to change as
we learn more about this. I've tried to keep it very, very simple for
now. There may also be some Perforce access control issues with jobs
in a public repository to work out, but that's just part of the game.
If you are a Jam *and* Perforce user who feels comfortable with jobs,
and would like to submit Jam bug reports (or feature requests) using
jobs in the Public Depot - go for it!
Once I have what I think is the list of "must-fix"es for 2.4, I'll
post to this list, so you'll have a final shot at arguing for other ones.
* Well, actually, Sam Wise of Perforce was the real ground breaker,
having started tracking p4hl issues with "jobs". Thanks Sam!
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: jam jobs in the Public Depot
Date: Tue, 26 Feb 2002 13:36:45 -0500
I think you should seriously consider fixing the buffer overrun problems
also, especially those in expand.c.
We have done that in Boost.Jam by implementing a string "class" in 'C'.
We're currently rolling the Perforce 2.4 changes into our source (compile.c
and expand.c are causing some significant trouble), but should have that
done in the next coupla weeks.
Subject: Re: jam jobs in the Public Depot
Date: Tue, 26 Feb 2002 10:52:33 -0800
From: rmg@perforce.com
favor delaying the "finalization" of 2.4 for this?
My _intent_ of the moment is just to fix things that would inhibit a
happy "stock" 2.3.1 user from wanting to upgrade to 2.4; my _hope_ of
the moment is that this could happen very soon. (Also, we'll benefit
from my simply having gone through a first iteration of the release
packaging (etc.) process.)
Larger changes (including ones like the buffer overrun problems, to which
your fixes are, I presume, extensive) would be addressed in a later release.
From: "Stephen Smith" <khadrin@hotmail.com>
Date: Thu, 28 Feb 2002 11:44:58 -0500
Subject: Locating Includes in OpenVMS
This message regards the practice of including a relative path when
specifying include files:
// foo.cxx
#include "foo/bar.h"
int main() {}
The Compaq C++ compiler for VMS can get along with this fine. For
example, assuming a project laid out as follows
/root/foo/src:
foo.cxx
/root/foo:
bar.h
compile it like this
cxx/include=("/root") foo.cxx
and all is well.
What I cannot figure out is how to write the jamfile so that the
compiler is happy and jam is able to correctly scan header dependencies.
For example, if
HDRS = \"/root\" ;
the compiler is happy, but jam cannot locate "foo/bar.h" to check its
timestamp.
We could also try specifying the directory VMS style...
HDRS = root:[000000] ;
but then both the compiler and jam are unhappy.
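One avenue to try (purely an untested sketch; depending on the local Jambase, HDRS may still be emitted on the command line as well): pass the compiler its quoted form via the flags variable and leave HDRS a plain path that jam can stat when scanning headers:

```
# The /include= spelling is from the cxx command line above.
C++FLAGS += "/include=(\"/root\")" ;
HDRS = /root ;
```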
Date: Thu, 28 Feb 2002 11:23:30 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: re: TOGETHER targets not removed on failure
The answer is no, it is a mistake. The intention is to keep library
archives from being deleted if, for example, a single file can't be
added to the archive. But I'm thinking the check should be:
!( cmd->rule->flags & RULE_UPDATED )
Namely, if the target only sees 'updated' sources on its action list,
it presumably must maintain state of its own and therefore shouldn't be
deleted on failure to update.
The original confusion came about because the 'updated' and 'together' modifiers
are always used together in the stock Jambase, and so the incorrect test didn't hurt.
I just wanted to check with you: would switching the test to the 'updated'
instead of 'together' actions modifier address your problem?
From: <boga@mac.com>
Date: Fri, 1 Mar 2002 11:39:12 +0100
Subject: Re: TOGETHER targets not removed on failure
Yes, I think that the output of actions marked with UPDATED must not be deleted.
(But this should be documented!)
From: Markus Scherschanski <MScherschanski@dspace.de>
Date: Fri, 1 Mar 2002 16:11:39 +0100
Subject: How many jams are there?
Does anyone have an overview of how many Jam versions there are and which
features they provide? How about a global merge?
E.g. Matt's version has a MATCH function, ftjam has SUBST, ftjam can do
Win9x but Matt's version can't - so what now?
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Tue, 5 Mar 2002 12:28:19 -0500
Subject: short-circuit evaluation
We recently discovered a difference in behavior between Jam 2.3.2 and
2.4: Aside from the fact that Jam now accepts '&' and '|' in addition to
'&&' and '||' as conditional operators, the behavior of the old
operators has been changed to match that of the new ones: "short-circuit
evaluation" has been disabled. To see this, throw the following at
'jam -f-':
if $(FALSE) && [ ECHO 'this is Jam 2.4' ] {}
It doesn't make much sense to me to have added '&' and '|' to the
language if they're not going to operate differently from '&&' and '||'.
The fix is pretty easy. This is the one we use for our merged version of
Jam. I think the only difference for stock Jam is that you need to
replace the use of "frame" in compile.c with "lol", but don't hold me to it:
===================================================================
RCS file: /cvsroot/boost/boost/tools/build/jam_src/compile.c,v
retrieving revision 1.8.4.3
diff -r1.8.4.3 compile.c
160c160
< LIST *lr = parse_evaluate( parse->right, frame );
---
175c175,179
< if( ll && lr ) status = 1;
---
arse_evaluate( parse->third, frame );
179c183,191
< if( ll || lr ) status = 1;
---
arse_evaluate( parse->third, frame );
Index: jamgram.yy
===================================================================
RCS file: /cvsroot/boost/boost/tools/build/jam_src/jamgram.yy,v
retrieving revision 1.5.4.3
diff -r1.5.4.3 jamgram.yy
69a70
arse_make( compile_eval,l,P0,r,S0,S0,c )
216c217
< { $$.parse = peval( EXPR_AND, $1.parse, $3.parse ); }
220c221
< { $$.parse = peval( EXPR_OR, $1.parse, $3.parse ); }
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Tue, 5 Mar 2002 15:21:57 -0500
Subject: Re: jamming digest, Vol 1 #327 - 1 msg
I /swear/ I didn't actually write "arse_make" or "arse_evaluate"!
There's a missing initial 'p' in these cases, in case it isn't obvious.
Date: Wed, 6 Mar 2002 18:20:54 +0100 (CET)
From: Jan Langer <jan@langernetz.de>
Subject: LOCATE_TARGET on
i'm new to jam and this list; i've read most of the docs i found, but some
simple things i just can't find.
i have three directories 'bin', 'obj' and 'src'. the sources reside in
'src', the object files shall be compiled to 'obj' and the executable
shall go to 'bin'.
this is my Jamfile, but it says that 'c4p.o depends on itself' and the
executable goes to obj and not bin.
SEARCH_SOURCE = src ;
LOCATE_TARGET = obj ;
LOCATE_TARGET on c4p = bin ; # this seems to be ignored
Main c4p : c4p.cc ;
i left C++, C++FLAGS and HDRS out, because i think they're not important
for this example.
what is wrong with my Jamfile?
jan langer ... jan@langernetz.de
"pi ist genau drei"
From: Craig Allsop <callsop@auran.com>
Date: Fri, 8 Mar 2002 10:16:43 +1000
Subject: compile errors?
If jamgram.c and jamgram.y are supplied with the jam source, shouldn't they
be up to date with jamgram.yy? The source at public.perforce.com/jam/src
compiles as so:
jamgram.c
jamgram.y(311) : error C2065: 'EXEC_UPDATED' : undeclared identifier
jamgram.y(313) : error C2065: 'EXEC_TOGETHER' : undeclared identifier
jamgram.y(315) : error C2065: 'EXEC_IGNORE' : undeclared identifier
jamgram.y(317) : error C2065: 'EXEC_QUIETLY' : undeclared identifier
jamgram.y(319) : error C2065: 'EXEC_PIECEMEAL' : undeclared identifier
jamgram.y(321) : error C2065: 'EXEC_EXISTING' : undeclared identifier
I've run yyacc over jamgram.yy and used bison to generate jamgram.c, however
jamgram.y includes rules.h which has the following typedef:
typedef struct _rule RULE;
This doesn't compile as RULE is a token used by the grammar.
Date: Sun, 10 Mar 2002 16:41:18 +0100 (CET)
From: Jan Langer <jan@langernetz.de>
Subject: SubDirHdrs
i just wondered why the rule SubDirHdrs in the builtin Jambase file is
rule SubDirHdrs { SUBDIRHDRS += $(<) ; }
and not
rule SubDirHdrs { SUBDIRHDRS += [ FDirName $(<) ] ; }
the documentation (Jambase Manpage) says:
"SubDirHdrs d1 ... dn ;
Adds the path d1/.../dn/ to the header search paths for source
files in SubDir's directory. d1 through dn are elements of a directory path."
i think this means that d1 to dn are composed together into one path.
what is wrong?
ps: is there a collection of user-defined jam rules on the net? i have
written some rules to handle PCCTS (a well-known parser generator)
files. although i'm not sure if i did it correctly (it just works quite
well in my case), i would like to share it with others who need it.
Date: Sat, 9 Mar 2002 03:10:14 -0800
From: David Lindes <user-perforce-jam@daveltd.com>
Subject: SoftLink rule?
I recently discovered jam, and I've been playing with it a bit,
trying to learn my way around... (I hope you don't mind a
non-list-member posting... that doesn't seem to be discouraged on the web site)
In my experiments with it, I came upon a desire to have a
Jamfile of mine create a symbolic link to a file in a different
directory, which would then be compiled, and which I wanted 'jam
clean' to get rid of for me...
I didn't see an obvious way of doing that (easily) with the
existing Jambase, but I figured this might be a common enough
thing that perhaps an addition to Jambase would be warranted...
So, I tried making a new Jambase file with the following changes...:
--- /home/lindes/src/otherware/devel/jam/jam-2.3/Jambase Thu Jan 4 07:53:08 2001
+++ /usr/tmp/Jambase.SoftLink Sat Mar 9 11:03:15 2002
@@ -681,6 +681,14 @@
SEARCH on $(>) = $(SEARCH_SOURCE) ;
}
+rule SoftLink
+{
+ Depends files : $(<) ;
+ Depends $(<) : $(>) ;
+ SEARCH on $(>) = $(SEARCH_SOURCE) ;
+ Clean clean : $(<) ;
+}
+
rule HdrRule {
# HdrRule source : headers ;
@@ -1539,6 +1547,11 @@
actions HardLink
{
$(RM) $(<) && $(LN) $(>) $(<)
+}
+
+actions SoftLink
+{
+ $(RM) $(<) && $(LN) -s $(>) $(<)
}
actions Install
... and that seems to work just fine for me. I don't know my
way around jam well enough to know if there's something I might
be doing that would be generally problematic and/or naive, and I
certainly don't know if this would create problems (and if so,
what sorts of problems, and/or what a good fix would be) on
platforms that aren't particularly similar to my own... So do
with this what you will, but I think this change (or one
comparable to it) would be a nifty feature...
David Lindes, possible future-jam-addict ;-)
P.S. I also thought about just adding the Clean line to the
HardLink rule, but I can see reasons why that might be bad
in some situations. In mine it would have been fine, but
I figured I'd create a SoftLink rule instead so as not to
be suggesting something that might cause problems for people. :-)
Date: Mon, 11 Mar 2002 17:01:04 -0800
From: rmg@perforce.com
Subject: Re: SubDirHdrs
What revision of the builtin Jambase are you using?
//public/jam/src/Jambase#10, which will be in the upcoming 2.4
release, seems to have the correct definitions (which is the one you
apparently expected):
rmg $ p4 print //public/jam/src/Jambase#10 | egrep SUBDIRHDRS Jambase | grep +=
SUBDIRHDRS += [ FDirName $(<) ] ;
I suspect that the Jambase you are looking at is out of date
WRT the documentation.
None that I'm aware of; for now, you could register as a Perforce
Public Depot user, and at least post your useful jam rules there.
I hope in upcoming months to add infrastructure to the Public Depot to
make it easier to post such things in ways that will make it easy for
interested persons to find them.
Date: Mon, 11 Mar 2002 21:20:12 -0800 (PST)
Subject: Re: LOCATE_TARGET on
You can do either:
SEARCH_SOURCE = src ;
LOCATE_TARGET = objects ;
Main c4p$(SUFEXE) : c4p.c ;
LOCATE on c4p$(SUFEXE) = bin ;
or:
SEARCH_SOURCE = src ;
LOCATE_TARGET = objects ;
Main c4p$(SUFEXE) : c4p.c ;
MakeLocate c4p$(SUFEXE) : bin ;
Note that I changed your output dir to "objects" -- that's to get rid of the warning.
(I also added $(SUFEXE) because I'm on an NT :) (Or :(, depending on how
you want to look at it :)
Date: Tue, 12 Mar 2002 11:37:52 +0100 (CET)
From: Jan Langer <jan@langernetz.de>
Subject: Re: Re: SubDirHdrs
2.3 from the boost version of jam.
i already changed it in my Jambase file and compiled jam again. so i'm
quite comfortable with it, if the error occurs only in this version.
yes, i will do that. but they do not work the way i want them to. the
problem is the following:
i have a grammar file. from this file the actual cpp and h files are
created. now i want to search the grammar file for include directives
and make them dependencies of the cpp file. currently i use this
construct:
HDRRULE on $(_grm) = HdrRule ;
HDRSCAN on $(_grm) = $(HDRPATTERN) ;
HDRSEARCH on $(_grm) = $(HDRS) $(SUBDIRHDRS) $(SEARCH_SOURCE) ;
HDRGRIST on $(_grm) = $(HDRGRIST) ;
Date: Tue, 12 Mar 2002 13:44:37 -0000
From: "Joolz " <Joolz@rsd.tv>
Subject: Help With a Small Problem
I have a build working with JAM, god its easier than using makefiles but
I have a small problem that I would like some help if possible
My code in JAMRULES
actions Counter {
ECHO actions Counter $(<) ;
"tools\Counter\Counter" $(<) ;
}
rule IncRevision {
ECHO Rule IncRevision $(<) ;
Depends revision : $(<) ;
Counter $(<) ;
}
My Code in JAMFILE
Depends revision.c : $(MyLibraries) ;
IncRevision revision.c ;
Library revision : revision.c ;
What I am trying to do is this: when any of the libraries is modified by
a change in the source, the file revision.c (which contains a counter and
date information) gets passed to a tool called Counter, which updates it
to reflect a change in the project.
This then builds the library revision.a, which is then linked into the final image.
From: Craig Allsop <callsop@auran.com>
Date: Wed, 13 Mar 2002 08:46:36 +1000
Subject: file_dirscan
I'm interested to know if other NT jam users had problems with Clean when
using Glob? The del command under cmd shell does not like forward slash
characters in the filenames. I looked through the archives but could not
find any issues on this. The file_dirscan routine is the only place in jam
that is different. The Glob function uses this routine for collecting files
and this is where this character is introduced. If I use Glob to scan for
source files it causes problems with Clean. To solve this issue, I've made
the following change to file_dirscan to use the define for this character.
#ifdef __USE_NT_PATH_DELIM
sprintf( filespec, "%s%c*", dir, PATH_DELIM );
#else // !__USE_NT_PATH_DELIM
sprintf( filespec, "%s/*", dir );
#endif // !__USE_NT_PATH_DELIM
Should this correction be made to the original jam?
Subject: Re: file_dirscan
From: Matt Armstrong <matt@lickey.com>
Date: Tue, 12 Mar 2002 16:14:20 -0700
I think it should, but since PATH_DELIM is around for both NT and Unix
builds, jam can just use your __USE_NT_PATH_DELIM code for both.
From: Craig Allsop <callsop@auran.com>
Subject: RE: file_dirscan
Date: Wed, 13 Mar 2002 12:17:48 +1000
I agree, the #ifdef is only to show what was changed (an internal policy of ours).
Date: Tue, 12 Mar 2002 21:45:48 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: file_dirscan
I'm a little confused: filespec (with the / instead of \) isn't used to
construct full pathnames - path_build() is. path_build() is given the
original directory name and the file within that directory. It should
do the right thing on Windows and put in a \.
Admittedly, passing dir/* to Windows' findfirst/findnext isn't quite
right, but I don't see how that / shows up in the results of Glob.
Which jam are you using? (jam -v)
Date: Wed, 13 Mar 2002 08:54:27 -0800
From: rmg@perforce.com
Subject: Whence Jam 2.4?
As usual, good news and bad news.
The Bad:
As you may have noticed, the 2.4 release hasn't happened yet, two
weeks now beyond my stated target of 3/1/2002.
The Good:
Christopher has been spending time on Jam work, and this will
result in the presence in 2.4 of some features that wouldn't have
been present had we held to the 3/1 target.
Having learned better, I will not venture to predict a revised target
date, but will let slip that Christopher signed a recent email on the
topic with
Date: Tue, 19 Mar 2002 16:09:26 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: jam's attitude to libraries...
I just converted a project from make.
Preamble:
Once I had removed about 9,000 lines of makefile generation from the
makefiles, I was left with something like this:
app: main.c this/libthis.a that/libthat.a other/libother.a ...
$(CC) -o app main.c -lthis/this -lthat/that -lother/other ...
That's not syntactically correct, but you get the idea.
Jam doesn't quite like that approach. It builds all the libraries, and it
builds the application and links in the libraries, but the dependencies
aren't quite what I want.
Question:
I cannot find a pretty way to say that the application depends on
$(MYLIBS), so that the libraries must be built first. Hints?
Subject: Re: jam's attitude to libraries...
From: Matt Armstrong <matt@lickey.com>
Date: Tue, 19 Mar 2002 08:55:23 -0700
Does the LinkLibraries rule that comes with the standard Jambase do
what you want?
Example usage is in Jam's own Jamfile.
Date: Tue, 19 Mar 2002 17:06:35 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: jam's attitude to libraries...
It's meant to. But the project has libmumble.a in directory .../mumble,
and that confuses jam. It doesn't realize that the libmumble.a that's
being built is the mumble/libmumble.a that's needed by the application in
the mumble's parent directory.
My options appear to be:
- use my evil hack
- use MakeLocate in each and every Jamfile to put the libraries in one
central location
Subject: Re: jam's attitude to libraries...
From: Matt Armstrong <matt@lickey.com>
Date: Tue, 19 Mar 2002 09:45:38 -0700
Once you start using the SubDir rule, Jam starts using grist (by
default). The library target will have grist stuck onto it, and the
gristed form is the one Jam knows about.
So you have to tell Jam which libmumble.a you want with explicit grist:
LinkLibraries myapp : <mumbledir1!mumbledir2>libmumble.a
Doing "jam -d5 | grep libmumble.a" might clue you into what Jam calls
the .a file.
You can probably make this nicer with a rule like this (untested):
rule MyLinkLibrary {
    local lib_grist = [ FGrist $(3) ] ;
    local lib = $(>:G=$(lib_grist:E)) ;
    LinkLibraries $(<) : $(lib) ;
}
And used like this:
MyLinkLibrary myapp : libmumble : mumbledir1 mumbledir2 ;
It basically takes care of putting the proper grist on libmumble, and
then calls LinkLibraries.
OR, you can eliminate all grist by doing this:
SOURCE_GRIST = ;
after every call to SubDir in your Jamfiles. This will cause problems
if you ever have two files or libraries of the same name in different
dirs, but will make things simpler otherwise.
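The gristed-name mechanics above can be sketched outside jam. This Python fragment (the `fgrist` function and directory tokens are illustrative, modeled on Jambase's FGrist) shows how SubDir-style grist keeps two same-named libraries distinct:

```python
def fgrist(tokens):
    """Build a grist string the way Jambase's FGrist does:
    join the SubDir tokens with '!' and wrap them in <...>."""
    return "<%s>" % "!".join(tokens)

# Two libraries named libmumble.a in different directories get
# distinct gristed target names:
a = fgrist(["mumbledir1", "mumbledir2"]) + "libmumble.a"
b = fgrist(["otherdir"]) + "libmumble.a"
print(a)  # <mumbledir1!mumbledir2>libmumble.a
print(b)  # <otherdir>libmumble.a
```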
Date: Tue, 19 Mar 2002 09:08:39 -0800 (PST)
Subject: Re: jam's attitude to libraries...
Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Rather than dealing with all grist stuff, can't you just simply put an
"include" in your Jamfile that references libmumble?
Date: Tue, 19 Mar 2002 18:26:08 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: jam's attitude to libraries...
The project has lots of files by the same name, unfortunately.
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Date: Tue, 19 Mar 2002 18:29:30 +0100
Subject: Re: jam's attitude to libraries...
That's effectively what I do now, not so prettily. (I just stuck a for
loop in there, making the Mail target depend on the gristed form of each
library. Grisly hack.)
The SubDir stuff seems to be the weakest part of Jam{,base} to me.
Date: Tue, 19 Mar 2002 09:52:28 -0800 (PST)
Subject: Re: jam's attitude to libraries...
Sorry, but I don't know what this means, in the sense of why it would
prevent you from using the "include" statement.
Date: Tue, 19 Mar 2002 19:12:33 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: jam's attitude to libraries...
As I understand it, I need gristing to differentiate between files of the
same name. To get gristing, I need to use SubDir, not include.
Date: Tue, 19 Mar 2002 10:31:58 -0800 (PST)
Subject: Re: jam's attitude to libraries...
Using SubDir/SubInclude doesn't prevent you from also using "include".
For example, in your Jamfile that has the Main for foo:
SubDir TOP src ;
Main foo : foo.c ;
LinkLibraries foo$(SUFEXE) : libbumble libmumble ;
include $(TOP)/bumble/Jamfile ;
include $(TOP)/mumble/Jamfile ;
If there's an a.c in both bumble and mumble, SubDir takes care of keeping
them uniquely named (by "gristing" them), so you don't have to deal with
any of that yourself.
From: "EXT-Goodson, Stephen" <stephen.goodson@boeing.com>
Subject: RE: jam's attitude to libraries...
Date: Tue, 19 Mar 2002 13:02:21 -0800
Our project has a similar layout to what you describe, and
LinkLibraries works fine for us. It gets the dependencies exactly
right, without any evil hacks, and without libraries in a central
location. Libraries don't get grist (at least in stock jam 2.3) so as
long as you don't have 2 libraries with the same name, there shouldn't
be any problem with jam determining what library you mean. It's hard
to guess what the problem might be without more details on what you're
doing, but if you're giving the path to libmumble in the LinkLibraries
rule, that might be what is confusing jam. You don't need to give the
path or suffix when you mention libmumble in the LinkLibraries rule.
Subject: Re: jam's attitude to libraries...
From: Matt Armstrong <matt@lickey.com>
Date: Wed, 20 Mar 2002 11:50:03 -0700
I was wrong and confused myself and Arnt. The libraries themselves
are not gristed, so Diane's idea looks like it'll work. In fact, I
just assumed Arnt was already doing this. :-)
From: Vladimir Prus <ghost@cs.msu.su>
Date: Fri, 22 Mar 2002 12:58:50 +0300
Subject: "if" behaviour change from 2.3 to 2.4
the following code:
l = "" a b ;
if $(l) { ECHO "Okay" ; }
Behaves differently in 2.3 and the most recent version from the public
depot. Should this be considered a bug?
The problem is in compile.c:
LIST *
compile_eval(
PARSE *parse,
LOL *args )
{
...........................
switch( parse->num ) {
case EXPR_EXISTS:
if( ll && ll->string[0] ) status = 1;
^^^^^^^^ here's the problem
should check all the elements of the list.
It appears to be trivial to fix.
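The intended semantics can be sketched in Python (function names are illustrative): a list should test true if any element is a non-empty string, whereas the code above examines only the first element:

```python
def truth_all_elements(l):
    # Intended behaviour: scan the whole list for a non-empty string.
    return any(s != "" for s in l)

def truth_first_only(l):
    # The buggy EXPR_EXISTS check: only the head element's string[0].
    return bool(l) and l[0] != ""

l = ["", "a", "b"]
print(truth_all_elements(l))  # True  -> jam 2.3 echoes "Okay"
print(truth_first_only(l))    # False -> the reported regression
```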
From: Craig Allsop <callsop@auran.com>
Date: Mon, 25 Mar 2002 14:55:13 +1000
Subject: Assert?
I'd be interested to know what other people have done to aid debugging
jamfiles. I'm considering adding an Assert rule that can be enabled via the
command line but is otherwise ignored. Anyone have any suggestions? E.g.
Assert expr : msg ;
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Assert?
Date: Mon, 25 Mar 2002 02:17:26 -0500
Boost.Jam contains a BACKTRACE builtin which is used to implement
assertions that show a trace of rule invocations and line numbers,
enabling you to pinpoint the source of an error.
Date: Tue, 26 Mar 2002 10:38:13 -0800
From: rmg@perforce.com
Subject: jam2.4, release candidate 1, now available
We've just finished rolling what changes we can into jam to make up
the 2.4 release. It is available at:
http://public.perforce.com/public/jam/jam-2.4.tar
http://public.perforce.com/public/jam/jam-2.4.zip
(Release notes only)
http://public.perforce.com/public/jam/src/RELNOTES
We've decided to give it the "rc1" (release candidate 1) tag. If nothing
compellingly broken is reported back in two weeks, we'll quietly remove
the "rc1" designation.
The changes between Jam 2.3[.2] and 2.4 are faithfully noted in the
RELEASE notes, but there were a few bugs introduced and fixed during
the 2.4-dev stage. These are only mentioned here:
- Change 1587 by rmg@rmg:pdjam:chinacat on 2002/03/25 11:32:53
if ( "" a b ) once again returns true.
Caught by Vladimir Prus <ghost@cs.msu.su>
- Change 1539 by seiwald@golly-seiwald on 2002/03/13 15:00:39
Fix definitions of FIncludes/FDefines for OS2 and NT, mistakes
caught by Craig McPheeters.
- Change 1537 by seiwald@golly-seiwald on 2002/03/12 16:29:31
Fix to 1319: make jam's &&, &, |, and || operators short circuit
as they did before. 'in' now short-circuits as well.
- Change 1489 by seiwald@thin-seiwald on 2002/02/27 23:29:36
Revert syntax of "expr : expr `in` expr" to jam 2.3's
"expr : arg `in` list", because:
a) It broke the precedence of `in` so that it was looser than
!, parsing "! xxx in yyy" as "( ! xxx ) in yyy".
b) It didn't allow providing an inline list as "$(f) in a b c".
Note that this release includes only the smallest of outside contributions.
Now is the time to examine more closely the major forks of jam to see
how much of them can and should be folded back.
Finally: Thanks again to everybody who's contributed to Jam.
We are very appreciative. Keep it up!
Date: Wed, 27 Mar 2002 10:43:17 +0100
From: "Niklaus Giger" <n.giger@netstal.com>
Subject: re: jam2.4, release candidate 1, now available
I compiled and installed jam 2.4rc1 without any problem on my WindowsNT
cygwin system.
The new function Glob does not work as expected:
I added the following two lines to the Jamfile:
echo Glob1 [ Glob . : jam*.c ] ;
echo Glob2 [ Glob . : *.c ] ;
Then I called ./jam0 and got the following results
$ ./jam0
Glob1
Glob2 ./builtins.c ./command.c ./compile.c ./execmac.c ./execunix.c ./execvms.c
./expand.c ./filemac.c ./filent.c ./fileos2.c ./fileunix.c ./filevms.c ./glob.c
./hash.c ./headers.c ./jam.c ./jambase.c ./jamgram.c ./lists.c ./make.c ./make1.c
./mkjambase.c ./newstr.c ./option.c ./parse.c ./pathmac.c ./pathunix.c ./pathvms.c
./regexp.c ./rules.c ./scan.c ./search.c ./timestamp.c ./variable.c
...found 161 target(s)...
ng@WS1092 /cygdrive/e/jam/2_4_rc1
I expected the following output for Glob1
Glob1 ./jam.c ./jambase.c ./jamgram.c
Is this a bug or an intended behaviour?
My workaround is:
echo Glob3 [ Glob . : ./jam*.c ] ;
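The behaviour can be reproduced with Python's fnmatch (an illustrative comparison, not jam code): applying the pattern to the whole path fails, while prefixing the pattern with "./", or applying it to the directory-less basename, succeeds:

```python
import fnmatch
import os.path

path = "./jam.c"
# Pattern tested against the whole path: no match.
print(fnmatch.fnmatch(path, "jam*.c"))                    # False
# The reported workaround: prefix the pattern with "./".
print(fnmatch.fnmatch(path, "./jam*.c"))                  # True
# Matching against the basename instead: works without the prefix.
print(fnmatch.fnmatch(os.path.basename(path), "jam*.c"))  # True
```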
Date: Fri, 29 Mar 2002 15:33:17 -0500 (EST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Proposed change to MATCH
I just ran across a situation where a variation of the new match rule
would be really useful. In a header rule, I need to parse a string which
contains a list of white space delimited words into a jam list. In
parsing a template file, it describes includes as:
INCLUDE foo.h bar.h jaz.h
the template is processed to contain real #include's. I need to generate
these dependencies for the template file.
I can create a regex to scan the file to grab the list of headers, but
they are returned as a single string. Decomposing the string into a
real jam list is proving difficult - does anybody know a way to go
from "foo.h bar.h jaz.h" -> "foo.h" "bar.h" "jaz.h" ?
The new match rule looks like my best bet, except that it only performs
a single match against the string. What I want is for the following
to work:
local list = [ MATCH "[ ]*([a-z]+)" : "foo.h bar.h jaz.h" ] ;
With the mainline version, this returns a list with one item, "foo.h".
I've modified a local version to apply the regex against the string
until it runs out of matches. This does change how you use MATCH, but
I think it provides a capability that is otherwise missing from Jam?
It's also useful for quickly decomposing a path into its parts, rather than
the jam looping you need now:
local tokens = [ MATCH "([^/\\]+)" : "/this/is/a/filename.c" ] ;
-> tokens = "this" "is" "a" "filename.c" ;
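The repeated-match behaviour being proposed is what re.findall gives you in Python (patterns here are illustrative, slightly widened to accept dots in filenames):

```python
import re

# Split a whitespace-delimited word list, one match at a time.
headers = re.findall(r"[^ ]+", "foo.h bar.h jaz.h")
print(headers)  # ['foo.h', 'bar.h', 'jaz.h']

# Decompose a path into its components the same way.
tokens = re.findall(r"[^/\\]+", "/this/is/a/filename.c")
print(tokens)   # ['this', 'is', 'a', 'filename.c']
```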
The change to builtins.c is the modified function below. Basically,
rather than a single 'if' testing regexec() there is now a 'while' and
a little logic to find the end of the previous match. It's a simple change.
---
LIST *
builtin_match(
    PARSE *parse,
    LOL *args )
{
    LIST *l, *r;
    LIST *result = 0;

    /* For each pattern */
    for( l = lol_get( args, 0 ); l; l = l->next )
    {
        regexp *re = regcomp( l->string );

        /* For each string to match against */
        for( r = lol_get( args, 1 ); r; r = r->next )
        {
            char *string = r->string;

            while( string && regexec( re, string ) )
            {
                int i, top;

                /* Find highest parameter, set new string to its end */
                string = 0;

                for( top = NSUBEXP; top-- > 1; )
                {
                    if( re->startp[top] != re->endp[top] )
                    {
                        string = re->endp[top];
                        break;
                    }
                }

                /* And add all parameters up to highest onto list. */
                /* Must have parameters to have results! */
                for( i = 1; i <= top; i++ )
                {
                    char buf[ MAXSYM ];
                    int len = re->endp[i] - re->startp[i];
                    memcpy( buf, re->startp[i], len );
                    buf[ len ] = 0;
                    result = list_new( result, newstr( buf ) );
                }
            }
        }

        free( (char *)re );
    }

    return result;
}
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Proposed change to MATCH
Date: Fri, 29 Mar 2002 15:42:14 -0500
Boost.Build is doing this (with our SUBST rule inherited from FTJam, but
I think it can easily be made to work with MATCH):
# Returns a list of the following substrings:
# 1) from the beginning till the first occurrence of 'separator', or till the end,
# 2) between each occurrence of 'separator' and the next occurrence,
# 3) from the last occurrence of 'separator' till the end.
# If no separator is present, the result will contain only one element.
#
rule split ( string separator ) {
    local result ;
    local s = $(string) ;
    # Break pieces off 's' until it has no separators left.
    local match = 1 ;
    while $(match) {
        match = [ SUBST $(s) ^(.*)($(separator))(.*) $1 $2 $3 ] ;
        if $(match) {
            result = $(match[3]) $(result) ;
            s = $(match[1]) ;
        }
    }
    # Combine the remaining part at the beginning, which does not have
    # separators, with the pieces broken off.
    # Note that the rule's signature does not allow the initial s to be empty.
    return $(s) $(result) ;
}
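The same peel-from-the-end loop reads naturally in Python (an illustrative port; as in the jam rule, the separator is treated as a regular expression):

```python
import re

def split(string, separator):
    """Break pieces off the end of `string` until no separator is left,
    mirroring the Boost.Build `split` rule above."""
    result = []
    m = True
    while m:
        m = re.match("^(.*)(%s)(.*)$" % separator, string)
        if m:
            result.insert(0, m.group(3))  # piece after the last separator
            string = m.group(1)           # keep the part before it
    return [string] + result

print(split("foo.h bar.h jaz.h", " "))  # ['foo.h', 'bar.h', 'jaz.h']
```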
Subject: Re: Proposed change to MATCH
From: Matt Armstrong <matt@lickey.com>
Date: Fri, 29 Mar 2002 13:58:04 -0700
You can do it in jam pretty easily. I called the rule Split, and
you'd use it like:
list = [ Split "foo.h bar.h jaz.h" : "[ ]+" ] ;
This rule is written for my MATCH rule, which I think is different
from the one now in stock jam (someday I'll have time to sync up).
This isn't to say that there is no argument for similar functionality
written in C.
Also, recently, Chris installed some new behavior to stock Jam's MATCH
rule that I haven't had time to look at. It might be relevant.
#
# Return a list consisting of a string split where a regexp matches
#
# Usage: var = [ Split string : regexp ] ;
#
# This rule requires the builtin function MATCH which is not part
# of the stock jam (as of this writing).
#
rule Split {
    local match = [ MATCH $(1) : "^(.*)("$(2)")(.*)" ] ;
    if $(match) && $(match[2]) != $(1) {
        local last = $(match[3]) ;
        return [ Split $(match[2]) : $(2) ] $(last) ;
    } else { return $(1) ; }
}
Date: Fri, 29 Mar 2002 16:37:38 -0500 (EST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Proposed change to MATCH
Oops. It turns out this is easily accomplished with the new builtin
MATCH rule, thanks to David and Matt for pointing this out. The rule
I have now which works with the jam mainline is:
rule Split {
    local pat = ([^$(2:E=$(SLASH))]+)(.*) ;
    local match = [ MATCH $(pat) : $(1) ] ;
    local result = $(match[1]) ;
    local string = $(match[2]) ;
    while $(string) {
        match = [ MATCH $(pat) : $(string) ] ;
        result += $(match[1]) ;
        string = $(match[2]) ;
    }
    return $(result) ;
}
---
Running it with:
a = [ Split " foo.h bar.h jaz.h " : " " ] ; # last string is space-tab
ECHO files are .$(a). ;
a = [ Split "foo.h" : " " ] ;
ECHO file is .$(a). ;
a = [ Split "/this/is/a/filename.c" ] ;
ECHO components are .$(a). ;
a = [ Split "this" ] ;
ECHO component is .$(a). ;
yields:
files are .foo.h. .bar.h. .jaz.h.
file is .foo.h.
components are .this. .is. .a. .filename.c.
component is .this.
Hey, I like this new version of Jam!
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Sat, 30 Mar 2002 08:05:36 -0500
Subject: GLOB behavior on NT
The previously-noted behavior that GLOB prepends the entire directory
name to its result has some unintended consequences on NT. In order to
match *.jam in the directory c:\foo\bar\baz, you need the following invocation:
GLOB c:\\foo\\bar\\baz : c:\\\\foo\\\\bar\\\\baz\\\\*.jam
I am using the following to work around the issues:
# A fix for the broken behavior of built-in glob
rule glob ( dirs * : patterns * ) {
return [ GLOB $(dirs:T) : $(dirs:T)\\$(SLASH)$(>) ] ;
}
Where :T changes backslashes to forward slashes (I encourage Perforce to
adopt :/ and :\ as primitives instead). A workaround that uses no
language extensions would be much more complicated, using MATCH to split
the path and :J=/ to re-join it.
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Sun, 31 Mar 2002 21:46:48 -0500
Subject: Odd behavior of MATCH
input:
x = [ MATCH (foo)(.*) : foo ] ;
ECHO -$(x)+ ;
output:
-foo+
Shouldn't this print:
-foo+ -+
instead? The 2nd pattern was matched, after all.
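For reference, this is what most regex libraries do: Python's re reports the second group as having matched the empty string rather than dropping it (an illustrative comparison, not jam code):

```python
import re

m = re.match("(foo)(.*)", "foo")
print(m.groups())  # ('foo', '') -- group 2 matched, but matched emptily
```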
From: Badari Kakumani <badari@cisco.com>
Date: Sun, 31 Mar 2002 21:20:58 -0800
Subject: bug in Archive rule?
i am trying to debug a situation which is dropping one of our object
files (G0_distrib_lib_msg_gen.o) from
the corresponding library (libens.g0.a).
i ran jam with -d9 and have included the relevant output from jam.
jam is simply skipping
<infra!distrib!lib!obj-4k>libens.g0.a(G0_distrib_lib_msg_gen.o).
which should have been part of $(>) for the Archive rule.
anyone have ideas on what could cause jam to drop items
from the $(>) list? is this some known bug?
relevant portions of 'jam -d9' output:
=======================================
get AR = ar.98r1-v0.mips64 -csr
expanded to ar.98r1-v0.mips64 -csr
expand '$(<)'
expand '$(>)'
Archive /vws/vpw/badari/jam_cleanup/infra/distrib/lib/obj-4k/libens.g0.a
From: Badari Kakumani <badari@cisco.com>
Date: Mon, 1 Apr 2002 07:55:51 -0800
Subject: Re: bug in Archive rule?
this file, G0_distrib_lib_msg_gen.o is getting archived into THREE libraries.
so it was already present when libens.g0.a was needed to be built.
they had KEEPOBJS enabled for this segment of Jamfile and that kept all
the object files around after they are built and archived.
what if KEEPOBJS is set and all the objects are present AND the library
was accidentally removed? would that cause the library to be NOT generated
at all (since none of the objects themselves got built)?
i think 'updated' should mean that:
a) if the library already had the object AND
b) the object was updated.
In this particular case, since libens.g0.a was non-existent, a) above
is NOT satisfied. so 'updated' clause should NOT have kicked in and jam should
have proceeded to archive G0_distrib_lib_msg_gen.o
that might be the bug in jam.
to work around this bug, i tried putting in 'actions' for 'Archive' WITHOUT
the 'updated' modifier. when i put the modified Archive in Jamrules, jam
still had the buggy behaviour.
Only when i over-rode jambase with -fJambase.txt AND my Jambase.txt did NOT
have 'updated' modifier, it worked properly.
so we may have to over-ride the Jambase to fix this problem.
Date: Mon, 1 Apr 2002 10:23:47 -0800 (PST)
Subject: Re: bug in Archive rule?
It shouldn't matter that the .o files are kept around -- what matters is
whether the object file inside the archive is older than the .o outside of
the archive, in which case it should be included in the ar update.
If your Jambase is compiled into your 'jam' executable, changes you make
to Jambase won't take effect (unless you explicitly point to it) -- you'd
need to make the changes in jambase.c and rebuild 'jam'.
From: "David Hoogvorst" <dc.hoogvorst@inter.nl.net>
Date: Mon, 1 Apr 2002 20:48:35 +0200
Subject: Jam and static source checks. Any advice?
I'm quite new to jam, but quite enthusiastic I should say, and am trying to
change the build system at my work to jam. Now, we have a home-made system
merely based on jam, but with a lot of Perl and Ruby scripting around it.
We have been developing software since 1982, so parts of the system are in Fortran,
parts in C and parts in C++. Furthermore, a lot of code is generated with
Pascal tools and Perl and Ruby scripts, due to (inter)nationalization needs
in the Fortran bit. The whole codebase is about 10,000 files I guess.
What we do, is that while developing (on NT or W2000), one has a local copy
of the module in a working directory. (We don't use the MS Visual
environment or anything like that). The rest of the sources remain in the
repository. Whenever one tries to build, his sources are put through a
number of static checks (line length, diacritic symbols, lint), to avoid as
many problems as possible when porting the software to the platforms we do:
OpenVMS, AS/400, IBM Mainframe, most Unixes (Unices?), and NT/W2000.
I've been struggling getting my jambase right. What I want to do first, is
to get the sources through these static checks. These checks are all command
line tools that exit with nonzero if not successful. Furthermore, they yield
report files.
What I want is the following: If the static check is successful, remove all
the report files. If the static check fails, leave the report files and stop
the build (or leave a clear message where the build started to go wrong).
Jam cleans up when things go wrong, and leaves no trace, so that's just the
opposite of what I want.
Furthermore I wonder how I should go about with this local work directory.
Thanks a lot in advance. About 30 developers are straining at the leash to
get going with jam...
From: "David Hoogvorst" <dc.hoogvorst@inter.nl.net>
Date: Mon, 1 Apr 2002 20:51:08 +0200
Subject: Jam static checks, correction...
Sorry, the current system is not based on jam, but on make...
From: Vladimir Prus <ghost@cs.msu.su>
Date: Wed, 3 Apr 2002 11:41:08 +0400
Subject: "echo" behaviour
I've noted that the following:
echo "hi" ;
outputs the character sequence 'h', 'i', ' ', '\n'.
Note the trailing whitespace. Why would I care? The reason is that when writing tests
for Boost.Build I will surely need to compare jam output with expected
output, and having trailing spaces in tests can lead to many problems. Would
it be reasonable to apply the following trivial patch?
--- lists.c Sun Mar 31 17:18:56 2002
+++ ../../../../boost-cvs/tools/build/jam_src/lists.c Sun Mar 31 17:16:32 2002
@@ -172,8 +172,12 @@
void
list_print( LIST *l ) {
- for( ; l; l = list_next( l ) )
- printf( "%s ", l->string );
+ LIST *p = 0;
+ for( ; l; p = l, l = list_next( l ) )
+ if ( p )
+ printf( "%s ", p->string );
+ if ( p )
+ printf( "%s", p->string );
}
/*
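The effect of the patch is easy to state in Python terms (an illustrative analogue): the old loop emits a separator after every element, the patched code emits separators only between elements:

```python
def list_print_old(l):
    # Old behaviour: printf("%s ", ...) for every element -> trailing space.
    return "".join("%s " % s for s in l)

def list_print_new(l):
    # Patched behaviour: separators only between elements.
    return " ".join(l)

print(repr(list_print_old(["hi"])))  # 'hi '
print(repr(list_print_new(["hi"])))  # 'hi'
```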
Date: Wed, 03 Apr 2002 11:21:52 -0800
From: rmg@perforce.com
Subject: jam2.4, release candidate 2, now available
Based on feedback from jam2.4rc1 use, we have made some changes, which
become jam2.4rc2. These are now available for download as
http://public.perforce.com/public/jam/jam-2.4.tar
http://public.perforce.com/public/jam/jam-2.4.zip
ftp://ftp.perforce.com/jam/jam-2.4.tar
ftp://ftp.perforce.com/jam/jam-2.4.zip
From the RELNOTES:
| 0. Changes between 2.4rc1 and 2.4rc2:
|
| THESE NOTES WILL BE REMOVED WITH THE FINAL 2.4 RELEASE, SINCE THEY
| REFER EXCLUSIVELY TO ADJUSTMENTS IN BEHAVIORS NEW BETWEEN 2.3 and
| 2.4:
|
| Make MATCH generate empty strings for () subexpressions that
| match nothing, rather than generating nothing at all.
| Thanks to David Abrahams.
|
| GLOB now applies the pattern to the directory-less filename,
| rather than the whole path. Thanks to Niklaus Giger.
|
| Make Match rule do productized results, rather than
| using just $(1[1]) as pattern and $(2[1]) as the string.
(For more detail on the effect of these changes, please refer to the
change descriptions of changes 1601, 1612, 1616 and 1614 in the Public
Depot, and to the updated Jam.html included in the release.)
If no additional compelling bugs are reported back in two weeks, we'll
quietly remove the "rc2" designation.
Date: Thu, 4 Apr 2002 15:40:10 -0800 (PST)
Subject: MS VC++ and Perforce
We are looking to move to Perforce and have a major
stumbling block. There are many people at my company
who are familiar with Visual Studio and hope to keep
using it for development as well as debugging. I think
debugging is the biggest hurdle.
Please excuse my Microsoft ignorance, but if we use
the 'cl' command line compiler then we seem forced to
debug using VC++. However to take full advantage of
the MS VC++ debugger we need the binary to be part of
a VC++ project.
How are others handling this? Can you offer some
advice? I'm hoping to avoid checking the .dsp and .dsw
files into source control along with the Jamfile.
Subject: RE: MS VC++ and Perforce
Date: Thu, 4 Apr 2002 17:00:20 -0700
From: "Mike Steed" <msteed@altiris.com>
We use jam with the Microsoft compiler (among others).
To debug an exe built without a Visual Studio project, start the IDE,
then choose File->Open Workspace, change the file type to "Executable Files",
and open your exe. You can set command line parameters for your app with
Project->Settings->Debug.
From: Roger Lipscombe <RLipscombe@sonicblue.com>
Subject: RE: MS VC++ and Perforce
Date: Fri, 5 Apr 2002 01:41:48 -0800
I wrote some (simple) notes after trying to use jam for building Win32 apps.
Start here: http://www.differentpla.net/~roger/devel/jam/
In particular, look here:
http://www.differentpla.net/~roger/devel/jam/tutorial/mfc_app/building_in_devstudio.html
Date: Sun, 7 Apr 2002 18:15:25 -0700
From: "Steve Johnson" <steve_johnson@Equilibrium.com>
Subject: A newbie question
I'm just starting to play with Jam. I'm having trouble figuring out
just how the locations of sources and targets can/should be specified.
I've read the 3 main documents that explain abstractly what SubDir,
SEARCH, LOCATE, SEARCH_SOURCE, LOCATE_TARGET are for, but I can't seem
to make them work quite right. I've had better luck with these if I set
them globally rather than per target, but I know I'm not really supposed
to do that. Unless I'm missing something, no examples are provided in
the main documentation to describe how these should really be used.
Can anyone point me at any additional documentation or examples that can
show me how to use these variables correctly? Are there any actual
public domain projects out there that use Jam that I could download and
look at?
My particular situation is this...I'm building a fairly standard tree
structure using the SubDir rule at the top of each of my Jamfiles. The
tree is populated with various subdirectories, some of which contain
sources for library modules, and others that contain executable sources.
For each module directory, I have source files in a "Sources" subdir,
and headers in an "Includes" subdir. I want to put my object files
(both intermediates and final targets) in a separate tree from my
sources, hopefully in a hierarchy that mirrors the source hierarchy.
Date: Mon, 8 Apr 2002 02:18:07 -0700 (PDT)
Subject: Re: A newbie question
If you don't set $(TOP), all the subdirs are relative instead of full
paths. So set ALL_LOCATE_TARGET in a Jamrules file to point to the
build-tree path (up to where you want the source-tree hierarchy to begin),
and set $JAMRULES in your env to point to that file.
For example:
$ cat $JAMRULES
BUILD_DIR = $(HOME)/build ;
ALL_LOCATE_TARGET = $(BUILD_DIR) ;
$ cd $HOME/work/jam
$ cat Jamfile
SubDir TOP ;
SubInclude TOP src ;
$ cat src/Jamfile
SubDir TOP src ;
Main foo : foo.c ;
$ jam -n
...updating 2 target(s)...
Cc /home/me/build/foo.o
/usr/bin/gcc -c -D__cygwin__ -O -Isrc -o /home/me/build/foo.o src/foo.c
Link /home/me/build/foo.exe
/usr/bin/gcc -D__cygwin__ -o /home/me/build/foo.exe /home/me/build/foo.o
Chmod1 /home/me/build/foo.exe
chmod 711 /home/me/build/foo.exe
...updated 2 target(s)...
Date: Mon, 08 Apr 2002 13:47:36 +0100
From: Derek Burgess <derek.burgess@cursor-system.com>
Subject: Why does changing a variable locally affect a global value?
I have an IDL rule I borrowed from the archive. I want to modify the
LOCAL_IDLFLAGS in Idl1 actions with a line in a local Jamfile like this:
LOCAL_IDLFLAGS += -m $(TOP)/foo/bar/message_ids.decl ;
I find that this value is then fixed. So if I have a number of Jamfiles
that I want to set a local value for LOCAL_IDLFLAGS I get just one of
the values for all invocations of the rule.
Have I missed something ? Any suggestions appreciated.
rule Idl {
# based on the Yacc rule
local h ;
h = $(<:BS=.h) ;
MakeLocate $(<) $(h) : $(LOCATE_SOURCE) ;
# Some places don't have an Idl.
if $(IDL) {
Depends files : $(<) $(h) ;
Depends $(<) $(h) : $(>) ;
Idl1 $(<) $(h) : $(>) ;
Clean clean : $(<) $(h) ;
}
INCLUDES $(<) : $(h) ;
}
actions Idl1 {
$(IDL) $(LOCAL_IDLFLAGS) $(IDLFLAGS) -c -i $(>) -o $(LOCATE_SOURCE)/$(>:B)
}
From: Roger Lipscombe <RLipscombe@sonicblue.com>
Subject: RE: Why does changing a variable locally affect a global value?
Date: Mon, 8 Apr 2002 06:02:43 -0700
There is no such thing as a local variable (on a per-Jamfile basis). All of
the Jamfiles are wedged together into one big (conceptual) Jamfile. I think
that the closest way to get what you want is either:
1. LOCAL_IDLFLAGS on foo += whatever ;
2. Reset LOCAL_IDLFLAGS in SubInclude or SubDir.
Date: Mon, 08 Apr 2002 14:55:07 +0100
From: Derek Burgess <derek.burgess@cursor-system.com>
Subject: Re: Why does changing a variable locally affect a global value?
Ah thanks. I think that's put me on the right track but I still have a problem
- the attempt to set LOCAL_IDLFLAGS has seemingly no effect. In my (sub)Jamfile
I have the following:
name = simulated ;
lib = $(PREFLIB)$(name) ;
dll = $(PREFDLL)$(name:S=$(SUFDLL)) ;
LOCAL_IDLFLAGS on $(lib) += -m $(SEARCH_SOURCE)/message_ids.decl ;
Library $(lib) : simulated.idl xxxsimulated.cpp ;
MainFromObjects $(dll) ;
( Jamrules as before )
But I get only this generated:
./bin/idl -m -c -i foo/bar/simulated.idl -o ./foo/bar/simulated
^^^ nothing here
Date: Mon, 8 Apr 2002 07:49:01 -0700 (PDT)
Subject: Re: A newbie question
Never mind. I must've been on drugs when I wrote that -- for some reason,
I saw it including the subdirectory name as well, but of course, it
wasn't, and would just end up putting everything into the same directory.
Once I have several cups of coffee this morning, I'll get back to this
(unless someone more coherent gets to it first :)
From: "EXT-Goodson, Stephen" <stephen.goodson@boeing.com>
Subject: RE: A newbie question
Date: Mon, 8 Apr 2002 13:45:48 -0700
If you are using the SubDir rule, it will set SEARCH_SOURCE, LOCATE_SOURCE,
and LOCATE_TARGET for you in each directory. The other jam rules then
use these values to set target specific SEARCH and LOCATE, which are the
variables that actually control where stuff is searched for and located.
Unfortunately, the SubDir rule won't set them quite the way you want.
You'll either need to modify SubDir, or after every time you use the
SubDir rule, put something like:
LOCATE_TARGET = [ FDirName $(ALL_LOCATE_TARGET) $(SUBDIR_TOKENS) ] ;
LOCATE_SOURCE = [ FDirName $(ALL_LOCATE_TARGET) $(SUBDIR_TOKENS) ] ;
where you have set ALL_LOCATE_TARGET in your Jamrules file to the top of the
tree you want generated files to appear in. [As an aside, wouldn't this
be a more sensible way for the built-in SubDir rule to use ALL_LOCATE_TARGET?]
Then, as long as sources appear in the directory with the Jamfile, and you
want generated files to go to the corresponding place under
ALL_LOCATE_TARGET, you shouldn't have to do anything.
If there are special cases that require setting target specific SEARCH or
LOCATE, don't forget to use the gristed name of the target.
Date: Mon, 8 Apr 2002 16:02:46 -0700 (PDT)
Subject: RE: A newbie question
Just a couple of notes...
If you have executables under your various modules that have the same
name, Stephen's approach won't work right unless you qualify the names in
the Jamfile, because otherwise the "LOCATE on" will be the last one set.
In other words, if you have something like:
/home/me/work/src/mod1/Sources/foo.c
/home/me/work/src/mod2/Sources/foo.c
And in your Jamfiles you have:
Main foo : foo.c ;
then 'foo(.exe)' will end up getting linked, first against the mod1 foo.o,
but put into the mod2 build output dir, then linked again against the mod2
foo.o and, again, put into the mod2 build output dir. So you'd need to do
something like:
Main $(LOCATE_TARGET)/foo : foo.c ;
or
Main <mod1>foo : foo.c ;
in order to keep them separate. (Of course, if you never have exe's with
the same name, then there's no problem.)
Also, with Stephen's way of doing it, you'll get the entire subdir
structure recreated in the build output tree, including not only the
module subdirs but the Sources subdir under them as well. If that's not
what you want (I doubt I would :), then you can instead use:
LOCATE_TARGET = [ FDirName $(ALL_LOCATE_TARGET) $(SUBDIR_TOKENS[1]) ] ;
LOCATE_SOURCE = [ FDirName $(ALL_LOCATE_TARGET) $(SUBDIR_TOKENS[1]) ] ;
which will put the targets into /path/to/build/tree/{mod1,mod2,modN...}
rather than /path/to/build/tree/{mod1,mod2,modN...}/Sources.
Also, you can put the above lines in a file, then just use the "include"
directive in your individual Jamfiles, so if you ever need to change them,
you can do it in just one place (and it helps keep the Jamfiles a bit less
cluttered, as well).
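Concretely, that shared file might look like this (a sketch; the file name locate.jam is made up):

```
# locate.jam -- included by every module Jamfile (hypothetical name)
LOCATE_TARGET = [ FDirName $(ALL_LOCATE_TARGET) $(SUBDIR_TOKENS[1]) ] ;
LOCATE_SOURCE = [ FDirName $(ALL_LOCATE_TARGET) $(SUBDIR_TOKENS[1]) ] ;
```

and then each Jamfile, right after its SubDir line, would just do
include $(TOP)$(SLASH)locate.jam ;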
Guess that's about it (hope this one's a bit more correct :)
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Mon, 8 Apr 2002 21:11:48 -0500
Subject: Force a target's removal in case of failure?
Does anyone have a trick for this?
I want to arrange that if any of a particular target's direct or
indirect dependencies fails to build, that target is removed.
I've tried about 100 different combinations of approaches to get this to
happen, but haven't come up with anything. I'm about to resort to a Jam
patch, but I'd rather not.
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Mon, 8 Apr 2002 22:17:15 -0500
Subject: LEAVES
The LEAVES rule doesn't seem to have the effect I'd expect it to.
Namely, given a dependency graph like this:
Depends A : B ;
Depends B : C ;
LEAVES A ;
with A and C up-to-date, a request to build A will still cause B to be built.
Does anyone know a way to prevent intermediate targets from being built
if the root target is up-to-date?
Date: Tue, 09 Apr 2002 17:29:44 +0100
From: Derek Burgess <derek.burgess@cursor-system.com>
Subject: How do you describe a target when using the 'on' rule?
I have a Jamfile (below) in which I want to set LOCAL_IDLFLAGS.
I had this working briefly once (without understanding why) when ...
s/simulated.cxx/simulated.h/ in the Jamfile
I have had to change the idl rule slightly and can no longer get
LOCAL_IDLFLAGS to have any effect.
I suppose my question is what is the right way to construct the TARGET
in a line like:
THING on TARGET = whatever ;
{Here is the output from ./jam -- see the target of the Idl1 rule is
'simulated.cxx' - so why does the ON rule not have any effect?
Idl1 ./gen/QNXNTO/parts/lmu3/comms/tests/pseudomodem/simulated.cxx
./bin/QNX/idl -s ./aux/idl/symbols.decl -m
./aux/idl/message_ids.decl -c -i
parts/lmu3/comms/tests/pseudomodem/simulated.idl -o
./gen/QNXNTO/parts/lmu3/comms/tests/pseudomodem/simulated ;
}
Jamfile:
name = simulated ;
lib = $(PREFLIB)$(name) ;
dll = $(PREFDLL)$(name:S=$(SUFDLL)) ;
LOCAL_IDLFLAGS on simulated.cxx = -m $(SEARCH_SOURCE)/message_ids.decl ;
Library $(lib) : simulated.idl xxxsimulated.cpp ;
There is a rule in Jamrules for idl files:
rule UserObject {
switch $(>) {
case *.idl : C++ $(<) : $(<:S=.cxx) ;
Idl $(<:S=.cxx) : $(>) ;
case * : EXIT "Unknown suffix on" $(>) "- see UserObject rule in Jamfile(5)." ;
}
}
rule Idl {
# based on the Yacc rule
local h ;
h = $(<:BS=.h) ;
MakeLocate $(<) $(h) : $(LOCATE_SOURCE) ;
# Some places don't have an Idl.
if $(IDL) {
Depends files : $(<) $(h) ;
Depends $(<:B) $(h) : $(>) ;
Idl1 $(<) : $(>) ;
Clean clean : $(<) $(h) ;
}
INCLUDES $(<) : $(h) ;
}
actions Idl1 {
$(IDL) $(LOCAL_IDLFLAGS) $(IDLFLAGS) -c -i $(>) -o $(<:DB) ;
}
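One likely culprit here (an educated guess, not a verified fix) is grist: SubDir-based builds usually grist their targets, so a plain 'simulated.cxx' in the 'on' statement names a different target than the one Idl1 updates. The standard Jambase rule FGristFiles applies the current grist:

```
# Sketch: set the variable on the gristed name so it matches the
# target that Idl1 actually runs on.
LOCAL_IDLFLAGS on [ FGristFiles simulated.cxx ] =
    -m $(SEARCH_SOURCE)/message_ids.decl ;
```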
From: "Kai Wang" <wangk@rpi.edu>
Subject: A newbie question - how to delete object file after a DLL building
Date: Thu, 11 Apr 2002 16:29:21 -0400
When building DLL from both sources and built libraries, I use the
following:
LINKLIBS = .... (external libs) ;
SRCS = foo1.cpp foo2.cpp... (source files) ;
LOCATE_TARGET = $(BINDIR) ;
Main myproj$(SUFSHR) : $(SRCS) ;
LinkLibraries myproj$(SUFSHR) : $(LINKLIBS) ;
The DLL myproj.dll is successfully built and put in $(BINDIR), but foo1.obj,
foo2.obj ... are not deleted automatically and messing up the $(BINDIR)
directory. I played with the RmTemps rule but it doesn't work for me.
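Two low-tech alternatives (a sketch, not the RmTemps mechanism): hang the objects off the 'clean' pseudo-target so 'jam clean' removes them, or mark them with the built-in TEMPORARY so removing them later doesn't force a rebuild of the DLL:

```
SRCS = foo1.cpp foo2.cpp ;
Main myproj$(SUFSHR) : $(SRCS) ;

# let "jam clean" delete the intermediate objects
Clean clean : [ FGristFiles $(SRCS:S=$(SUFOBJ)) ] ;

# or mark the objects TEMPORARY, so deleting them by hand (or from a
# script) won't make jam consider the DLL out of date
TEMPORARY [ FGristFiles $(SRCS:S=$(SUFOBJ)) ] ;
```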
From: Toon Knapen <toon.knapen@si-lab.org>
Date: Wed, 17 Apr 2002 15:11:14 +0200
Subject: shell-commands
Since my jam-build also needs to call `make` for some small subsystem, I
defined following in my Jamfile
The echo in my rule is executed, but not the echo (nor the call to make) in
the actions. What am I doing wrong?
rule call_make_all {
Depends $(<) : $(>) ;
ECHO calling make ;
}
actions call_make_all {
echo executing make all ;
make all ;
}
call_make_all subsystem$(SUFEXE) : subsystem$(SUFOBJ) ;
Date: Wed, 17 Apr 2002 08:26:13 -0700 (PDT)
Subject: Re: shell-commands
If you do 'jam subsystem[.exe]', you'll see your actions get run. In other
words, you don't have a dependency in your rule on anything other than the
target itself. Try adding:
Depends exe : $(<) ;
From: Toon Knapen <toon.knapen@si-lab.org>
Date: Wed, 17 Apr 2002 22:19:01 -0400
Subject: MkDir
If I want jam to just create a directory for me, shouldn't a Jamfile
containing only the following line:
MkDir mydirectory ;
be sufficient? There's no way to add a dependency AFAICT?
Date: Wed, 17 Apr 2002 13:42:04 -0700 (PDT)
Subject: Re: MkDir
MkDir is a dependency of the pseudo-target "dirs", so you'd need to run
'jam dirs'. If you don't want to have to do that step, you can have a rule that does:
rule MakeDir {
Depends all : dirs ;
MkDir $(<) ;
}
then use MakeDir instead of MkDir.
Date: Wed, 17 Apr 2002 15:41:23 -0500 (CDT)
Subject: Re: MkDir
Jam is dependency driven: if nothing depends upon that directory
being there, then there is no reason to create it.
If something does depend upon it, then you'll need to express that:
Depends somethingelse : mydirectory ;
There are some other rules that conveniently create dirs
as needed to place files in them (MakeLocate, I believe).
I'm a little rusty.
From: Derek Burgess <derek.burgess@cursor-system.com>
Date: Mon, 22 Apr 2002 10:39:12 +0100
Subject: Dependency on autogenerated headers
Any idea how do I deal with the following :
2 initial files: foo.idl bar.cpp
An Idl compiler reads foo.idl and creates 2 new files: foo.cpp and foo.h
bar.cpp #includes foo.h
bar.cpp is therefore always older than foo.h and so gets recompiled every time...
Date: Mon, 22 Apr 2002 16:26:38 -0400 (EDT)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Another jam extension in my personal branch
I've added another extension to Jam into my personal branch in the perforce
public depot. Here it is; if you need something like this in the future, you
know where to look.
The extension is a way to serialize actions in parts of the graph so only
one is run at a time.
This gets a little technical, if you're not interested in jam source
extensions you may want to ignore this.
The problem I had been trying to solve has to do with how you compile
files on NT which use PDB debug information. There are two debug formats
on NT, C7 and PDB. C7 debug info gets placed into the .obj files, which
works really well with Jam. PDB debug info is stored in an external file,
which the compiler must open for writing when it is compiling a file. The
problem had to do with running Jam with multiple jobs, turning on pre-compiled
headers and using PDB debug info. With that combination, you need to ensure
only one compile job is running at a time which references the common .pdb
file.
When you use pre-compiled headers, you compile some source file to generate
the pre-compiled header info. Assume pch.cpp is compiled and produces a
pch.pch pre-compiled header file. That .pch file is provided to all of the
remaining related compiles to provide the pre-compiled header info. When
PDB debugging is being used, the compile which produces pch.pch will also
produce pch.pdb - the debug info. Any compile which is provided pch.pch
must also provide the pch.pdb file. Specifying a different .pdb file
results in a compile error. This is where the compiles are forced to
become serial.
C7 debug info has a hard limit of 64K symbols, which wasn't enough for us.
PCH is a significant compile win for us. We have many collections of files
which use different PCH files, so there is a lot of available work to do
but each of the pools of files must only have one job active at a time.
The approach I used to solve this was to allow you to specify a semaphore
node for targets. This is done with:
JAM_SEMAPHORE on $(target) = nodeName ;
the semaphore node can be placed on a related set of targets, and only one
of those targets will have an active job running on it at a time. The
semaphore node shouldn't be part of the graph; nothing should depend on it.
It's treated specially by jam with this extension, which changes the normal
binding/execution treatment of nodes.
With this facility, my problem can be solved by setting common semaphore
nodes on all .obj files which reference the same .pch file with PDB debug
info is enabled. If no semaphore is set on a node, there is no change from
the old jam behaviour.
Jam semaphores can also be used to ensure only one I/O-intensive operation
is active at a time. For example, we have an action which links a shared
object. This action is often very I/O intensive, consuming vast amounts of
memory. By setting a common semaphore on this target, only one
shared object link happens at a time.
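With the extension, usage might look like this (a sketch; the semaphore node names and target names are arbitrary):

```
# serialize every compile that shares one .pch/.pdb pair
local objs = [ FGristFiles a$(SUFOBJ) b$(SUFOBJ) c$(SUFOBJ) ] ;
JAM_SEMAPHORE on $(objs) = pch-pdb-lock ;

# allow only one heavyweight shared-object link at a time
JAM_SEMAPHORE on libbig$(SUFSHR) = shlib-link-lock ;
```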
These semaphores are used to solve a problem the normal dependency graph
can't express: a dependency over time rather than through files or
relationships. In my experience to date, semaphores have a very limited
scope where they are useful; most problems can be fixed using other
techniques. Yacc rules which use common temporary files can be fixed
by creating unique sub-directories, for example. So while this is available,
it should be used with a little restraint.
The extension turned out to be relatively minimal, I think; much less bad
than I expected. It's available in:
//guest/craig_mcpheeters/jam/src/...
That area is up-to-date with respect to the jam mainline (which is now
the official 2.4 release - way to go!)
Date: Mon, 22 Apr 2002 18:31:05 -0700 (PDT)
Subject: Re: Dependency on autogenerated headers
Don't foo.{cpp,h} only get re-gen'd when foo.idl is newer -- and then
wouldn't you want bar.cpp to be recompiled?
Date: Tue, 23 Apr 2002 16:23:06 -0700 (PDT)
Subject: Builds don't stop on error
I've seen a few posts about this subject before, but cannot find a clear
answer. When JAM compiles a source file, and the compile fails, JAM happily
continues to attempt to the link the file into a .lib, and then link the
.libs into an EXE. Of course, these actions all fail since the original
source file failed to compile in the first place.
Why doesn't JAM stop on the first error? I've tried the -q option, but it
has no effect. Here are my rules:
Main $(TARGET) : ; # Target source files
LinkLibraries $(TARGET) : data.lib # Target libraries
common.lib
... etc. more lib files ;
Library data.lib : tester.c
... etc. more source files ;
Library common.lib : ... etc. more source files ;
From: "Toqir Khalid" <toqir.khalid@openwave.com>
Date: Thu, 25 Apr 2002 10:37:22 -0700
Subject: Setting profile information with JAM
Is it possible to set any flags/options so that you can get profile
information for the binary that you are trying to build, using JAM. For
example with CC you can set -gp to get gprof information. Is there anything
within JAM that you can set to get this information.
From: michael.allard@acterna.com
Date: Thu, 25 Apr 2002 13:31:15 -0500
Subject: Re: Setting profile information with JAM
Here is how I handled this.
1. I created a "GP" variable to hold the profiling flags (architecture/compile
specific - Solaris wants -xpg, AIX wants -pg, etc.)
2. I use an "ARCH" variable for object files - we store our object files in an
architecture-specific subdirectory underneath the source directory. For
example, if the source is in "foo/source", the objects end up in
"foo/source/$(ARCH)".
3. I added support for a "PROF" flag to be set on the command line, e.g. "jam
-sPROF=Y".
4. If $(PROF) is Y, then I add a ".p" to the ARCH variable (e.g., it becomes
"AIX.p" or "SunOS.p"). This makes profiled objects end up in separate object
directories from their non-profiled counterparts (which IMO is only sane :-). I
also then set GP to its proper value; otherwise it's blank.
Snippets from my Jamrules:
if $(OS) = SOLARIS {
ARCH ?= SunOS ;
...
if $(PROF) = Y {
GP ?= -xpg ;
}
}
else if $(OS) = AIX {
ARCH ?= AIX ;
...
if $(PROF) = Y {
GP ?= -pg ;
}
}
if $(PROF) = Y { ARCH = $(ARCH).p ; }
CCFLAGS = $(O) $(DBG) $(GP) ... ... ;
Then, in foo/source/Jamfile (or any Jamfile that compiles C code), I have:
LOCATE_TARGET = $(SEARCH_SOURCE)/$(ARCH) ;
This places the objects in "AIX" or "SunOS" for non-profiled output, and places
profiled object in "AIX.p" or "SunOS.p".
Just typing "jam" makes non profiled output; typing "jam -sPROF=Y" makes
profiled outputs.
Caveat: I haven't played with this in a while, and it's just for testing in the
hopes of improving our build system someday, but it worked the last time I tried it!
Date: Fri, 26 Apr 2002 14:59:49 +0100
From: Derek Burgess <derek.burgess@cursor-system.com>
Subject: User Object rule not being invoked
I am trying to add a rule to generate C++ code from an IDL file and then compile it.
The sequence I am trying to create is:
But UserObject seems to expect that overall
And so loses interest in all my idl files. Is there anything I can do to make this work?
rule UserObject {
switch $(>) {
case *.idl : C++ $(<:S=_idl.o) : $(>:S=_idl.cpp) ;
Idl $(>:S=_idl.cpp) : $(>) ;
case * : EXIT "Unknown suffix on" $(>) "- see UserObject rule in Jamfile(5)." ;
}
}
From: Markus Scherschanski <MScherschanski@dspace.de>
Date: Tue, 30 Apr 2002 15:06:50 +0100
Subject: Dynamic Include-Files?
is there any way to read out other files and use their input in Jam?
I thought about this way, for example:
#### In Jambase
rule FileExists {
Depends first : $(<) ;
ALWAYS $(<) ;
NOTFILE $(<) ;
}
actions FileExists {
echo $(<)_EXIST = > _tmp_exist_$(<)
if exist $(>) echo TRUE >> _tmp_exist_$(<)
type $(SEMICOLON) >> _tmp_exist_$(<)
}
###### In Jamfile
FileExists TESTFILE : test.fil ;
include _tmp_exist_TESTFILE ;
if $(TESTFILE_EXIST) = TRUE { ECHO WOW, IT EXISTS ; }
It's just one case (sad enough that Jam cannot check whether a file exists); I
also want to read in filelists and so on.
It surprisingly works too, but only the second time I call the Makefile.
You know why? Right! Actions are performed after the file is read.
Here's my question:
Can I force Jam to execute some actions before rules and are the changes
included or can I include data from files in another way?
From: "David Abrahams" <david.abrahams@rcn.com>
Subject: Re: Another jam extension in my personal branch
Date: Tue, 30 Apr 2002 17:56:47 -0500
I haven't yet tried to use Jam's parallel build facilities, but Scons
(another build tool) lays claim to an interesting feature which most
build tools do not implement: it keeps all of the available build
processors busy at all times. Recursive dependency-tree satisfaction
(like most of what I've seen in Jam's make process) tends to limit the
number of simultaneous build processes to something related to the
branching factor of the dependency graph.
From: Vladimir Prus <ghost@cs.msu.su>
Subject: Re: Dynamic Include-Files?
Date: Fri, 3 May 2002 16:24:37 +0400
It is possible to check if file exists using the GLOB rule in 2.4
if [ GLOB /home/ghost : a.html ] {
# do something
}
I don't know of any way. Maybe invoking jam recursively would work, but I
never tried it myself.
Date: Fri, 3 May 2002 14:54:14 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Dynamic Include-Files?
It's not the Jam Way...
In jam, the jamfiles are a complete, self-contained description of how to
build. Trying to put part of the build description somewhere else goes
against jam's design.
Date: Fri, 3 May 2002 18:08:12 -0700 (PDT)
From: "John D. Mitchell" <johnm-jam@non.net>
Subject: Current Java support?
Base question: What's the current state of Jam's support for Java?
Background: I've hunted through the archives of this mailing list and there
seems to be a real dearth of Java support coverage. Is anyone really using
Jam with Java or has everybody switched to Ant? I really dislike Ant.
FWIW, I've used Jam on some small C/C++ projects but I'm certainly not a Jam guru. :-)
I've got Ames' stuff from the old days but I'm hoping that there's a newer,
better place to start.
Date: Fri, 3 May 2002 18:26:55 -0700 (PDT)
Subject: Re: Current Java support?
I reworked a Jam-based (on "Ames' stuff") Java build process into an Ant
one and took it down from 40 minutes to 4, so I feel just a tad
differently about Ant than you do :) The problem with the Jam-based one
was no wildcarding, so you had these long lists of source files in the
Jamfiles that you had to maintain, and feeding the Java files to the
compiler one at a time, which is clearly gonna make the build incredibly slow.
Just out of curiosity, what is it about Ant that makes you dislike it?
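(For what it's worth, jam 2.4's GLOB builtin can now stand in for the missing wildcarding; a sketch, noting that GLOB returns full pathnames and $(SUBDIR) is set by the SubDir rule:)

```
# collect every .java file in the current source directory
SRCS = [ GLOB $(SUBDIR) : *.java ] ;
```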
Date: Fri, 3 May 2002 18:38:17 -0700 (PDT)
From: "John D. Mitchell" <johnm-jam@non.net>
Subject: Re: Current Java support?
Ah, I was hoping that someone had overcome the single .java file per
compiler invocation issue. I.e., feeding the java compiler all of the
.java files that it needs to process in each directory. :-(
XML stupidity. See: Humans should not have to grok XML:
http://www-106.ibm.com/developerworks/xml/library/x-sbxml.html
Date: Fri, 3 May 2002 18:53:59 -0700 (PDT)
Subject: Re: Current Java support?
Well, don't get your hopes down just yet -- someone may well turn up to
say they've done just that (the rework I did was about a year and a half ago).
In that case, you can look forward to Ant2:
Ant2, like Ant1, uses build files written in XML as its main input,
but it will not be restricted to it.
But, really, if your build process is relatively straightforward (which,
unfortunately, mine wasn't -- but going with Ant was still a vast
improvement), you can get away with a pretty small, top-level build file
(albeit, still in XML, but at least there wouldn't be all that much of it to look at :)
Date: Sat, 4 May 2002 06:28:41 -0700 (PDT)
From: "John D. Mitchell" <johnm-jam@non.net>
Subject: Re: Current Java support?
Yeah, I've got a lot of Ant stuff -- it's pretty much impossible to not
have these days if you use any open-source Java stuff.
Subject: Re: [ "John D. Mitchell" ] Current Java support?
Date: Sat, 04 May 2002 09:40:12 -0500
From: "Gregg G. Wonderly" <gregg@skymaster.c2-tech.com>
Except for people using Ant as a replacement for CPP (this is what property
files, JAR file meta-info, etc. are for), I generally just do
javac -d . `find ${SRC} -name '*.java' -print`
and be done with it.
I have 700+ class file projects that this works fine on. Now, if you are
going beyond a few thousand files, this might be a problem. But, if Ant can
do your builds for you, then plain old javac can do it with some simple care
to packaging and parameterization of your code.
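For the jam side of this thread: an actions block receives all of $(>) in one invocation, so batching every .java file into a single javac run is natural (a sketch; JAVAC, CLASSDIR, and the rule name are assumptions, not standard Jambase):

```
rule JavaBatch {
    # pseudo-target; its actions re-run when any listed source changes
    NOTFILE $(<) ;
    Depends $(<) : $(>) ;
    Depends all : $(<) ;
}
actions JavaBatch {
    $(JAVAC) -d $(CLASSDIR) $(>)
}

JavaBatch classes : [ GLOB $(SUBDIR) : *.java ] ;
```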
My number one argument about Ant is that it demands that the source tree
mirror the class packaging structure, which is just not realistic for a number
of reasons. So, I don't use Ant, and you know what, I am still able to build
and distribute code with no problems.
Ant is a solution for a non-problem. It seems that technologies that are in
fact useful for some things (I use XML in lots of places) have been made into
something that is really more of a hindrance. Lots of people complain about
Ant's shortcomings. Yet it still continues to get used... It's kinda like MS Windows...
From: "Jan Mikkelsen" <janm@transactionware.com>
Subject: RE: Current Java support?
Date: Mon, 6 May 2002 08:59:53 +1000
I'm actually working on adding the following things to Jam at the moment:
- Parsing .class files to extract dependencies and put them into the
dependency graph.
- Modifying the Jambase to invoke the compiler once for the objects
associated with a particular target (jar, whatever). This is necessary
for correct builds as well as performance, because dependencies from
.class files only give the set of files to be built, not the order. I
know the compiler can do some of this, but I don't always trust it and I
don't want to depend on having source files in directories following
package name conventions.
- Adding support for multiple compilers (Javac, jikes, gcj).
- Adding support for native code generation using gcj from the same
Jamfile as a conventional Java build.
Acunia-Jam has some interesting stuff (particularly the :P modifier).
Acunia-JamJar (or something just like it) will be necessary.
I want to get my first version running in the next week or so, but
that's what I said at the beginning of last week and other stuff
intervened. I'll post here when it is ready.
Date: Mon, 6 May 2002 13:48:49 +0200 (CEST)
From: "chris.gray" <gray@acunia.com>
Subject: Re: Current Java support?
We made our own version to solve the problems we had with Java:
http://wonka.acunia.com/download.html#tools
So far we have failed to submit this to the Perforce depot: more
specifically, I have failed to do so. 8-0 I'll try to do something about
that RSN (I've just downloaded the command-line client).
Hi, we build our software in layers, with lower layers providing services to
upper layers. Lower layers must not depend on upper layers. To help enforce
this we build and _test_ lower layers before building upper layers. That
means that many executables should be built before libraries from upper
layers, and it's this last bit that is tripping me up. Jam seems to prefer
to build libraries and then executables.
To add to the mix one of our layers builds an executable that will then be
used in subsequent layers to generate source code. So we have,
layer 1 shlib + tests
layer 2 shlib + tests
layer 3 codegen + tests
layer 4 shlib + tests
layer 5 shlib + tests
and so on. We really do want layer 1 to build completely before layer 2 is
started. And in any case it is essential that layer 3 is built before layer
4 which would otherwise fail pretty quickly.
I realise that I could probably get what I want by expressing all of the
dependencies but there is something illogical about layer2 shlib depending
on layer 1 unit tests, and in any case the whole thing will become
unmanageable. So I'm hoping for a more elegant solution :-)
Date: Thu, 9 May 2002 16:02:50 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: layering and code generation
Oh. JFYI, I hereby notify you that I ignore your legal nonsense
completely. Please do sue me. ;)
This doesn't follow... I don't see why you should order that strictly. As
long as every build produces an error message for every error, the build
order shouldn't matter.
There are advantages to ordering more sloppily. For one thing, it allows
jam to use all the CPUs all the time.
The executable is no problem.
In jam, you need to define some rule that runs your executable, and use
that rule for various targets in layers 4 and 5. That rule needs to
contain one extra line, saying that $(<) (the target to be built) depends
on the executable.
Jam will then make sure that the actions to build the executable are run
before the actions that use the executable.
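In jam terms that extra line might look like this (a sketch; the rule name and the generator name "codegen" are made up):

```
rule GenSource {
    # the generated source depends on its input *and* on the
    # generator executable, so the tool is always built first
    Depends $(<) : $(>) codegen$(SUFEXE) ;
    MakeLocate $(<) : $(LOCATE_SOURCE) ;
    GenSource1 $(<) : $(>) ;
}
actions GenSource1 {
    codegen -o $(<) $(>)
}
```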
On an SMP machine, I'd think that some parts of layer 2 can be built while
layer 1 is being tested, etc.
You could have a "layer1" target, on which all layer2 parts depend... it's
not pretty.
Date: Thu, 9 May 2002 18:10:40 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: layering and code generation
Maybe I do see. You want to be very, very sure that there are no bad
dependencies, right?
Here's one way you might do that. It's rather experimental and I don't
know exactly how to do it. I have a secret hope that Diane Holt will step
in with the exact syntax you need ;)
Jam generally figures out source code dependencies by itself, by looking
at source files. That's a good idea, IMO. Eliminates a failure mode. It
also means that jam builds an up-to-the-second map of all dependencies.
You can use that map to spot bad dependencies.
You need a rule for the layer-n tests that depends on all those tests and
that has the LEAVES modifier set. Its action should get a list of
everything on which those tests depend. If that list contains something
from the wrong layers, you can give an error message.
This is safe, because if jam misses a dependency for a test, then either
the compile or the execution of that test won't succeed anyway.
Date: Thu, 9 May 2002 19:37:16 -0700 (PDT)
Subject: Re: layering and code generation
I think what you'd need to do is to not use any of the SubDir and
SubInclude stuff in your source-tree top-level Jamfile (which I'm assuming
you're currently doing), but instead chain your layers together via the
'include' directive at the bottom of each of your layers' top-level
Jamfiles, with your source-tree top-level Jamfile just doing a:
SubInclude TOP layer1 ; # the first link in the chain
And layer1's Jamfile having:
SubDir TOP layer1 ;
SubInclude TOP layer1 subdir1 ; #etc., for all subdirs of layer1
# Any targets that live at this level (usually none)
include $(TOP)$(SLASH)layer2$(SLASH)Jamfile
Do that (with layer2 doing an 'include' of layer3's Jamfile, etc.) for
each but the final layer, which doesn't need an 'include', since you're at
the end of the chain.
P.S. Shameless plug: Anyone know of a company in the bay area looking for
a Supremo Build&Release Engineer (that'd be me :)
From: Chris Higgins <chris.higgins@cursor-system.com>
Subject: RE: layering and code generation
Date: Fri, 10 May 2002 13:37:59 +0100
Please keep the abuse coming, it might help me get the nonsense removed :-)
Right, no upward dependencies.
I tried this but had no success and to be honest I don't see what you are
getting at. Why would avoiding SubInclude make much difference? The standard
Jambase SubInclude doesn't look much different than the above to me.
From: Patrick Frants <patrick@quintiq.com>
Date: Fri, 10 May 2002 14:19:52 +0200
Subject: How do I define a rule for .cpp -> .i (preprocessor output)
I would like to define a rule to convert .cpp to .i files.
I cannot do it with the UserObject rule because it looks at the suffix of the
source file, not the target file.
Date: Fri, 10 May 2002 12:02:52 -0700 (PDT)
Subject: RE: layering and code generation
Ack! -- I've been battling one of the nastiest colds I've had in years,
and clearly my proffered "solution" came out of a fever dream that made me
think using the include directive at the end would get all of "layer1"
built before anything in "layer2" did. Sorry about that (hangs head in shame).
There's only two ways I can think of to do what you're asking for, and one
way was getting to be too much work to do for free :) -- so I went with
the easy way, which is to run 'jam' in each of your layers' top-level
Jamfiles (via pseudo-targets -- one to build, one to clean) from your
source-tree's top-level Jamfile.
Here's the Jamrules:
NOTFILE layer cleanall ;
ALWAYS layer cleanall ;
rule Build {
Depends all : $(<) ;
Depends $(<) : $(>) ;
local dir ;
for dir in $(>) {
BuildLayer layer : $(dir) ;
}
}
actions BuildLayer {
echo "Building $(>)..."
cd $(>) && jam
}
rule CleanAll {
Depends cleanall : $(>) ;
local dir ;
for dir in $(>) {
CleanLayer cleanall : $(dir) ;
}
}
actions CleanLayer {
echo "Cleaning $(>)..."
cd $(>) && jam clean
}
The top-level Jamfile:
SubDir TOP ;
Build layer : layer1 layer2 layer3 ;
CleanAll cleanall : layer1 layer2 layer3 ;
From: Chris Higgins <chris.higgins@cursor-system.com>
Subject: RE: layering and code generation
Date: Mon, 13 May 2002 15:13:54 +0100
Thanks for your approach, appreciate it. I think it gives away too much of
why I want to use Jam though (no recursive invocations, complete dependency
graph) so I probably won't use it.
I've come to the conclusion that what I want is outwith Jam's scope (at
least it can't be captured by a dependency) and needs to be handled in some
other manner. I'm happy with that for the moment.
Now on to other problems, java looms :-), automated testing :-)
From: "Tim Baker" <dbaker@direct.ca>
Subject: Re: layering and code generation
Date: Wed, 15 May 2002 14:55:39 -0700
Maybe this will work:
Depends layer4 : layer3 ;
Depends layer3 : layer2 ;
Depends layer2 : layer1 ;
NOTFILE layer1 layer2 layer3 layer4 ;
rule L1Main {
Main $(<) : $(>) ;
Depends layer1 : $(<) ;
}
rule L1Library {
Library $(<) : $(>) ;
Depends layer1 : $(<) ;
}
rule L2Main {
Main $(<) : $(>) ;
Depends layer2 : $(<) ;
}
rule L2Library {
Library $(<) : $(>) ;
Depends layer2 : $(<) ;
}
etc etc.
Then from the command line invoke "jam layer4" to build all layers or "jam
layer2" to build layer2 and layer1.
Date: Wed, 15 May 2002 16:17:19 -0700 (PDT)
Subject: Re: layering and code generation
Yeah, that was the other way I thought of that I ended up not going down,
since it involves more work than just what you've shown here -- because
once you hit the Main and Library rules, you're back to hitting
dependencies on the "lib" and "exe" pseudo-targets, which are dependencies
of "all", so you're back to Jam building all the libraries (in all the
layers) then all the exes. IOW, you'd need to go a lot further down the
road of creating new rules and pseudo-targets to distinguish the targets
being built for each layer (and, like I said, that was more than I felt
like doing for free :)
Date: Wed, 15 May 2002 20:19:26 -0700 (PDT)
Subject: Re: layering and code generation
Aaargh... this thread's making me crazy (shouldn't have worked on it at
all that day with that nasty cold, I suppose). Having looked at it again,
yes, this approach does work, so long as you include the "layer"
pseudo-target in your 'jam' command, which I wasn't doing, since I always
(do and aim to) just run 'jam'. So if you don't mind running 'jam layerN'
whenever you're at the top level, this works fine.
Date: Thu, 16 May 2002 09:40:31 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: layering and code generation
That approach may seem to work, but it's begging for problems:
- running "jam" runs a "successful" build, normally satisfying the
invoking user
- running "jam layer5" runs the correct build.
If running "jam" is faster than "jam layer5", I can easily imagine people
running "jam" more and more often...
From: "David Abrahams" <david.abrahams@rcn.com>
Date: Thu, 16 May 2002 09:02:13 -0500
Subject: undefined behavior
My OSF compiler pointed out to me that code in fileunix.c invokes undefined
behavior. The enclosed patch fixes that.
*** fileunix.c.~1.4.~ Mon May 6 15:22:27 2002
--- fileunix.c Thu May 16 07:00:24 2002
***************
*** 218,224 ****
while( read( fd, &ar_hdr, SARHDR ) == SARHDR &&
!memcmp( ar_hdr.ar_fmag, ARFMAG, SARFMAG ) )
{
! char lar_name[256];
long lar_date;
long lar_size;
long lar_offset;
while( read( fd, &ar_hdr, SARHDR ) == SARHDR &&
!memcmp( ar_hdr.ar_fmag, ARFMAG, SARFMAG ) )
{
! char lar_name_[257];
! char* lar_name = lar_name_ + 1;
long lar_date;
long lar_size;
long lar_offset;
Date: Fri, 24 May 2002 13:03:55 +0200
From: Michael Voucko <voucko@fillmore-labs.com>
Subject: Problems finding object files for linking
I'm new to Jam and not able to figure out how to do the following:
Consider a directory layout like this:
... -- UtilLib --- Util1 -- util1.c
                |
                -- Util2 -- util2.c
                |
                ...
                -- Utiln -- utiln.c
What I try to accomplish is to compile all the utilx.c files and link a
library from the resulting objects.
All the Jamfiles in the 'Utilx' subdirectories look like this:
# Jamfile in TOP ... UtilLib Utilx
SubDir TOP ... UtilLib Utilx ;
Objects utilx.c ;
The Jamfile in UtilLib should do the linking and looks like this:
# Jamfile in TOP ... UtilLib
SubDir TOP ... UtilLib ;
LibraryFromObjects UtilLib : Util1$(SUFOBJ) .... Utiln$(SUFOBJ) ;
SubInclude TOP ... UtilLib Util1
SubInclude TOP ... UtilLib Util2
...
SubInclude TOP ... UtilLib Utiln
Problem is that the LibraryFromObjects rule tries to locate
Utilx$(SUFOBJ) in <...!UtilLib> instead of <...!UtilLib!Utilx>
since in the LibraryFromObjects rule any path prefix of the object files
is discarded (at least from what I understand - and as I said I'm a
newbie;-). That is, changing the above rule to
LibraryFromObjects UtilLib : <...!UtilLib!Util1>Util1$(SUFOBJ)
....
<...!UtilLib!Utiln>Utiln$(SUFOBJ) ;
changes nothing at all.
Is it correct that when doing the link step a library is the Jam
'target' and the object files are the 'source' for it?
If so, can I modify the search path for the objects by setting
SEARCH_TARGET ? (I tried this one as well - same result, but I'm not
sure whether another modification might have influenced it).
Any explanation, pointer to documentation, or sample Jamfile is appreciated.
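For what it's worth, the idiomatic way to build one library from sources spread over subdirectories is to invoke the Library rule on the same (ungristed) library target from each subdirectory's Jamfile: the objects get their per-directory grist, but the archive accumulates all of them. A sketch, untested and assuming the stock Jambase rules:

```
# Jamfile in TOP ... UtilLib Utilx  (one per subdirectory)
SubDir TOP ... UtilLib Utilx ;
Library libUtil : utilx.c ;

# Jamfile in TOP ... UtilLib
SubDir TOP ... UtilLib ;
SubInclude TOP ... UtilLib Util1 ;
SubInclude TOP ... UtilLib Utiln ;
```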
Date: Sat, 25 May 2002 21:46:14 -0700
From: Rich Young <wratchdog@cox.net>
Subject: Mac OS X/Darwin Jambase
Does anyone happen to have a Jambase (other than what Apple uses) file
for Mac OS X/Darwin?
Mac OS X uses Jam for building, but its Jambase is very specific to Apple's
tools and has a lot of extra crap that I don't want to parse through to figure
out how to use it with my jam files.
What I would like is the equivalent of the Jambase that comes with the Jam package
but with the appropriate changes for OS X. I was tweaking the default file (from
I'm a newbie to using Jam but I was hoping it would be better than
GnuMake (which is the make utility I currently use with OS X).
Date: Sat, 25 May 2002 22:47:28 -0700
Subject: Re: Mac OS X/Darwin Jambase
Rich -- be aware that the /usr/bin/jam that is on Mac OS X has also been
customized for use by Project Builder. We do intend to change things such
that jam behaves by default like the standard jam does;
(I don't think we've yet fixed that, but we do intend to).
So, if you want standard jam behavior, you'll probably want to build it from
the standard sources.
It's possible that may fix some of the other problems you've been having with
Perforce's default Jambase.
From: "Radke, Kevin" <Kevin.Radke@nexiq.com>
Date: Tue, 28 May 2002 09:44:01 -0500
Subject: Treat MSVCNT special?
Having been bit by the "feature" that causes Jam to split
environment variables at spaces, I was wondering why variables
like MSVCNT, LIB, and INCLUDE are not treated "special" like
PATH is in variable.c and split at SPLITPATH instead of space.
In fact, I'm curious why the split variable isn't ';' on windows
instead of ' ', since it is already different (a comma) on OS_MAC.
When I made this mod, I still needed to double quote MSVCNT, but
only once, and not the multiple times other "workarounds" mentioned
on the list have discussed.
Does anyone who has used Jam a lot more than I have see any downside
in this change? Any way to avoid double quoting MSVCNT at all?
Date: Tue, 28 May 2002 10:29:34 -0500 (CDT)
Subject: Re: Treat MSVCNT special?
don't install it on a path with blanks in it.
From: "Radke, Kevin" <Kevin.Radke@nexiq.com>
Subject: RE: Treat MSVCNT special?
Date: Tue, 28 May 2002 10:52:05 -0500
While this workaround may be ok for some, it isn't for us. Most
people unfortunately take install defaults, and MS likes spaces...
(And yes, I know you can use the short 8.3 name as well, but
with NTFS, short name support is optional.)
I've seen many people frustrated by tools that don't "just work"
out of the box. If some simple changes allow Jam to work with
the default Visual Studio install path, more people will
be less resistant to using Jam. This is a big win in
my opinion. The odds are that most people looking at Jam
will have already installed MSVC, and not want to re-install.
I guess the real question is:
"On Windows we already know MSVCNT is special, why not treat it that way?"
Dave Abrahams sent this by private email, but I haven't explored
its use yet:
Date: Wed, 29 May 2002 09:48:13 +0200
From: Michael Voucko <voucko@fillmore-labs.com>
Subject: linking with shared libraries
I have a problem with linking shared libraries.
On Windows everything works fine when using the LinkLibraries rule, since
it is ok to replace the .dll suffix with .lib.
But on Unix, replacing .so with .a is not what I want.
Is there a different way to link shared libraries, or do I have to write a
new rule that can figure out which suffix has to be appended
when adding a shared library using LinkLibraries?
From: Eyal Soha <eyal@procket.com>
Date: Tue, 28 May 2002 19:01:50 -0700
Subject: emacs mode for jam?
Is there an emacs mode for editing jam files?
From: Markus Scherschanski <MScherschanski@dspace.de>
Date: Wed, 5 Jun 2002 16:03:04 +0100
Subject: -q Option
Is it intended that this quick-quit option only works when debugging is
enabled (i.e. not -d0)?
Or are there any other reasons for this?
Date: Mon, 10 Jun 2002 14:24:24 -0400 (EDT)
From: David Berton <db@research.nj.nec.com>
Subject: jam and qt
Using jam 2.4, I am able to compile a Qt-based project (which uses moc)
using the rules outlined here (hi, Arnt):
However, I am having trouble getting this working when the executable I am
trying to build is a sub directory of a larger project (I keep getting
"don't know how to make <path!to!subdir>.moc.cpp").
Does anyone have rules that will handle moc generated files for Qt-based
projects, yet which also work if that project is a subdir?
Date: Tue, 11 Jun 2002 09:59:02 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: jam and qt
Yeah, I ran into that myself later last year, and made it work. I'm rather
unhappy about jam's SubDir/SubInclude stuff... I'm not sure whether this
is the best way, but it's the way I used.
rule Moc {
# o is the .o file, t is the target .cpp
local t = [ FGristFiles $(<) ] ;
local o = $(t:S=$(SUFOBJ)) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
LOCATE on $(t) = $(LOCATE_TARGET) ;
Clean $(t) ;
RmTemps $(o) : $(t) ;
LEAVES $(o) ;
Depends $(o) : $(t) ;
Depends $(t) : $(>) ;
Moc2 $(t) : $(>) ;
}
actions Moc2 {
$(RM) $(<)
echo $(>) | xargs -n1 -r $(QTDIR)/bin/moc > $(<)
}
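A hypothetical invocation of the rule above, in a subdirectory Jamfile (the file names are made up):

```
# Generate mywidget.moc.cpp from mywidget.h and compile it
# into the surrounding library as usual:
Moc mywidget.moc.cpp : mywidget.h ;
Library libgui : gui.cpp mywidget.moc.cpp ;
```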
Date: Tue, 11 Jun 2002 09:19:30 -0400 (EDT)
Subject: Re: jam and qt
Yes. Left Oslo almost a year ago now, wound up back in the States.
Excellent, thanks. For some workarounds for SubInclude that I think are
useful, see:
Date: Wed, 12 Jun 2002 11:37:24 +0300
From: Yurii Rashkovskii <yrashk@openeas.org>
Subject: OCaml
Has anybody made OCaml support for Jam/MR?
From: "Pitha, Robert" <rpitha@arqule.com>
Date: Mon, 17 Jun 2002 12:12:53 -0400
Subject: Help with "grist"
OK, newbie question... I have looked through a reasonable amount of the
archives, without finding any help there, but I can't help but feel I'm
overlooking an obvious source of information...
Anyway, my question is, what is "grist", or more importantly, where can I
find out how to use it effectively? I have looked through all the documents
linked by the main Jam page, and while they mention "grist", none of them
define or discuss it. As far as I can tell, it's a way to "uniquefy"
filenames, but there has to be more to it than that, and even so, it's
causing me problems and I need to know how to coexist peacefully with it.
Specifically, I have a directory hierarchy, say:
$(BUILD_ROOT), containing (so far):
  /afb, containing (so far):
    /afb
    /arg
I have a Jamrules in $(BUILD_ROOT), and Jamfiles in each directory with
appropriate SubDir and SubInclude lines. If I go to one of the two
lowest-level directories (say, $(BUILD_ROOT)/afb/afb) and run jam, that
directory builds just fine. If, however, I go one level up
($(BUILD_ROOT)/afb) and run jam, I get a lot of complaints like "don't know
how to make <afb!afb>afbAKMCluster.cpp" (that's one of the file names in the
afb/afb directory - although it's actually in afb/afb/gen, the Jamfile is
set up to deal with that). This <afb!afb> looks like what I've seen hinted
at in the documentation as being this grist thing, but without any actual
documentation about "grist" I can't be sure, nor can I really figure out
what it's trying to tell me. Why does this Jamfile work when invoked in one
location and not another? Where is better documentation available?
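For what it's worth, grist is only a disambiguating prefix in angle brackets: SubDir builds SOURCE_GRIST from the path elements (joined with '!'), and rules like Objects apply it to file names via FGristFiles, so two same-named files in different directories become distinct targets. A tiny sketch of what that looks like (untested):

```
SubDir TOP afb afb ;
# SOURCE_GRIST is now afb!afb, so FGristFiles prefixes it:
Echo [ FGristFiles afbAKMCluster.cpp ] ;
# should print: <afb!afb>afbAKMCluster.cpp
```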
Date: Tue, 18 Jun 2002 15:42:29 -0700
From: Jos Backus <jos@catnook.com>
Subject: Jam 2.4 Makefile buglet
Not everybody has . in her $PATH, so
all: jam0
jam0
in the Makefile should really be changed to
all: jam0
./jam0
From: Chris Rumpf <Chris.Rumpf@calix.com>
Date: Wed, 19 Jun 2002 17:49:36 -0700
Subject: Bug in Jam 2.4 - Naming targets the same as directories in the path to the src code
I can't seem to have an executable w/ the same name as part of the
path.
Has anyone else run into this? See below for my example.
This is from freshly unzipped jam 2.4 and a test directory I set up
with the following structure:
/
/jam
/foo/src
Sitting in /
:-> echo $TOP
.
:-> cat Jamfile
SubInclude TOP foo ;
:->
:-> cat foo/Jamfile
SubInclude TOP foo src ;
:-> cat foo/src/Jamfile
SubDir TOP foo src ;
Main foo : a.c ;
:-> jam/bin.solaris/jam -d2
warning: foo depends on itself
...found 18 target(s)...
...updating 2 target(s)...
Cc foo/src/a.o
gcc -c -o foo/src/a.o foo/src/a.c
MkDir1 foo/src/foo
mkdir foo/src/foo
Link foo/src/foo
gcc -o foo/src/foo foo/src/a.o
ld: cannot open output file foo/src/foo: Is a directory
collect2: ld returned 1 exit status
...failed Link foo/src/foo ...
...failed updating 1 target(s)...
...updated 1 target(s)...
:->
But if I change the foo/src/Jamfile to look like this:
:-> cat foo/src/Jamfile
SubDir TOP foo src ;
Main foobar : a.c ;
:->
It works fine!
:-> jam/bin.solaris/jam -d2
...found 19 target(s)...
...updating 2 target(s)...
Cc foo/src/a.o
gcc -c -o foo/src/a.o foo/src/a.c
Link foo/src/foobar
gcc -o foo/src/foobar foo/src/a.o
Chmod1 foo/src/foobar
chmod 711 foo/src/foobar
...updated 2 target(s)...
:->
Anyone care to explain why Jam has this fundamental problem?
From: Tim Docker <timd@macquarie.com.au>
Date: Sat, 22 Jun 2002 02:44:13 +1000
Subject: Confusion on subdirectories
I'm presently examining some alternative build systems, including jam. I'm trying to
build a library, containing object files derived from c++ files, derived from a
grammar file, in a subdirectory.
Forgetting at first the subdirectory bit, the following (all in one) jamfile [1] appears
to do what I need, giving output [2] (Should I be concerned about the independent target
errors here?)
When I try to put this into a directory hierarchy and factor out the rules,
everything goes awry. I have the toplevel Jamrules [3] and Jamfile [4],
and the subdirectory Jamfile
[5]. If I build in the top level directory, I get [6]. If I build in the
subdirectory, I get [7].
I'm probably missing many subtleties here (I've only been playing with this for
an hour or two). A hint as to what I'm doing wrong would be much appreciated.
C++ = /opt/gcc/gcc-2.95.3/bin/g++ ;
CC = /opt/gcc/gcc-2.95.3/bin/gcc ;
LINK = $(C++) ;
C++FLAGS = -I/vobs/mts/include -I/usr/prod/mts/platform/i386/sybase-11.9.2/include ;
LIBDIR = /tmp ;
rule Antlr {
Depends $(<) : $(>) ;
MakeLocate $(<) : $(LOCATE_SOURCE) ;
Clean clean : $(<) ;
}
actions Antlr {
/vobs/other/antlr/antlr_tool $(>) ;
}
Antlr DateExprParser.cpp DateExprLexer.cpp
DateExprLexer.hpp DateExprParser.hpp DateExprParserTokenTypes.hpp : DateExpr.g ;
Library $(LIBDIR)/libTools :
DateExprParser.cpp DateExprLexer.cpp ;
[timd@AA800315 tools2]$ jam -d2 lib
...found 10 target(s)...
...updating 5 target(s)...
warning: using independent target DateExprLexer.hpp
warning: using independent target DateExprParser.hpp
warning: using independent target DateExprParserTokenTypes.hpp
Antlr DateExprParser.cpp DateExprLexer.cpp DateExprLexer.hpp DateExprParser.hpp
DateExprParserTokenTypes.hpp
/vobs/other/antlr/antlr_tool DateExpr.g ;
C++ DateExprParser.o
/opt/gcc/gcc-2.95.3/bin/g++ -c -o DateExprParser.o -I/vobs/mts/include
-I/usr/prod/mts/platform/i386/sybase-11.9.2/include -O
DateExprParser.cpp
C++ DateExprLexer.o
/opt/gcc/gcc-2.95.3/bin/g++ -c -o DateExprLexer.o -I/vobs/mts/include
-I/usr/prod/mts/platform/i386/sybase-11.9.2/include -O
DateExprLexer.cpp
Archive /tmp/libTools.a
ar ru /tmp/libTools.a DateExprParser.o DateExprLexer.o
Ranlib /tmp/libTools.a
ranlib /tmp/libTools.a
RmTemps /tmp/libTools.a
rm -f DateExprParser.o DateExprLexer.o
...updated 5 target(s)...
[timd@AA800315 tools2]$ jam -d2 clean
...found 1 target(s)...
...updating 1 target(s)...
Clean clean
rm -f DateExprParser.cpp DateExprLexer.cpp DateExprLexer.hpp DateExprParser.hpp
DateExprParserTokenTypes.hpp /tmp/libTools.a
...updated 1 target(s)...
[timd@AA800315 tools2]$
C++ = /opt/gcc/gcc-2.95.3/bin/g++ ;
CC = /opt/gcc/gcc-2.95.3/bin/gcc ;
LINK = $(C++) ;
C++FLAGS = -I/vobs/mts/include -I/usr/prod/mts/platform/i386/sybase-11.9.2/include ;
LIBDIR = /tmp ;
rule Antlr {
Depends $(<) : $(>) ;
MakeLocate $(<) : $(LOCATE_SOURCE) ;
Clean clean : $(<) ;
}
actions Antlr {
/vobs/other/antlr/antlr_tool $(>) ;
}
SubInclude TOP tools ;
SubDir TOP tools ;
Antlr DateExprParser.cpp DateExprLexer.cpp
DateExprLexer.hpp DateExprParser.hpp DateExprParserTokenTypes.hpp : DateExpr.g ;
Library $(LIBDIR)/libTools :
DateExprParser.cpp DateExprLexer.cpp ;
[timd@AA800315 jamtest]$ TOP=/tmp/jamtest jam lib
don't know how to make <tools>DateExprParser.cpp
don't know how to make <tools>DateExprLexer.cpp
...found 13 target(s)...
...can't find 2 target(s)...
...can't make 3 target(s)...
...skipped <tools>DateExprParser.o for lack of <tools>DateExprParser.cpp...
...skipped <tools>DateExprLexer.o for lack of <tools>DateExprLexer.cpp...
...skipped /tmp/libTools.a for lack of /tmp/libTools.a(DateExprParser.o)...
...skipped 3 target(s)...
[timd@AA800315 jamtest]$
[timd@AA800315 tools]$ TOP=/tmp/jamtest jam lib
don't know how to make <tools>DateExprParser.cpp
don't know how to make <tools>DateExprLexer.cpp
...found 13 target(s)...
...can't find 2 target(s)...
...can't make 3 target(s)...
...skipped <tools>DateExprParser.o for lack of <tools>DateExprParser.cpp...
...skipped <tools>DateExprLexer.o for lack of <tools>DateExprLexer.cpp...
...skipped /tmp/libTools.a for lack of /tmp/libTools.a(DateExprParser.o)...
...skipped 3 target(s)...
[timd@AA800315 tools]$
Date: Fri, 21 Jun 2002 16:21:37 -0700 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: You could argue that this is a bug in Jam...
I just ran across something that seems to have been in Jam for a while, but
was surprising to me.
On Windows, try this. In some .bat file, create these two lines:
set fred=this
^^space,space
jam
and have some Jamfile with:
ECHO fred is :$(fred): ;
It prints out:
fred is :this: :: ::
The two trailing spaces from the 'set' line in the .bat file become part of
the variable on Windows. The equivalent Unix line using setenv drops the
trailing spaces. The code in Jam which converts the environment variables
into Jam variables uses ' ' to split the strings, and it results in
a variable with three elements where the two trailing spaces become empty
list elements.
This was surprising to me and caught me out for a while. I ended up deciding I
didn't like that behaviour and have modified my source to change it.
In variable.c, var_defines(), there is some code which now looks like:
/* Do the split */
for( pp = val + 1; p = strchr( pp, split ); pp = p + 1 )
{
strncpy( buf, pp, p - pp );
buf[ p - pp ] = '\0';
#ifdef OPT_ENVIRONMENT_FIX
if( strlen( buf ) > 0 )
#endif
l = list_new( l, newstr( buf ) );
}
#ifdef OPT_ENVIRONMENT_FIX
if( strlen( pp ) > 0 )
#endif
l = list_new( l, newstr( pp ) );
/* Get name */
I've added the #ifdef's and the strlen checks.
It's something to watch out for, anyway.
Date: Mon, 24 Jun 2002 00:36:50 -0400
Subject: doing source code generation before dependency scan
I've built up a jambase for Objective Caml. I had to modify jam to
generalize dependency scanning because it's too complicated to simply
match a regular expression to figure it out. The language provides its
own dependency scanner ocamldep which takes a source file and prints
makedepend style output, eg:
bash2.05 jehenrik@localhost % ocamldep cparser.ml
cparser.cmo: cabs.cmi clexer.cmo cparser.cmi
cparser.cmx: cabs.cmx clexer.cmx cparser.cmi
My modification is only a few C source lines (on unix anyway) and
creates a new variable, HDRPIPE which, if set, causes jam to execute the
command with the source file as a parameter and look at the output,
rather than look at the file itself. Then in Jamrules when I find a .ml
extension, I set it as follows:
switch $(>:S) {
case .ml : {
HDRRULE on $(>) = OcamlHdrRule ;
HDRSCAN on $(>) = $(OcamldepPATTERN) ;
HDRPIPE on $(>) = "ocamldep" ;
Ocamlc $(<) : $(>) ;
}
}
This looks like it would work great for the java users, just plug in a
java dependency scanner like javad:
http://code.werken.com/javad/
I'll put a patch for jam 2.4 to make this extension at the end of this email.
So now I have a problem. ML standing for "meta language", Caml people
often have files which generate others. One of the simpler cases is
the lex and yacc analogs ocamllex and ocamlyacc. The problem is that
ocamldep will omit dependencies for source files which do not yet
exist. So automatic generation of any kind must run as a preprocessing
step _before_ dependency scanning. I know this is not the best
constraint for the jam model of the world. But jam being otherwise so
elegant, I really am happy to deal with a workaround.
It seems like it's probably reasonable to expect the user to hand-code
the dependencies for the autogeneration. But I just need to get stuff
to run early. I've been trying to figure out how to do this with some
sort of scheme of pseudotargets and recursive jam calling, but I haven't
thought of anything elegant. Does anyone else have any ideas?
PS- here's the patch. popen is a unix call, I don't remember if Windows
has one called that, but they definitely have one called something
else. It just runs a command and captures the output to a file handle.
diff -Nau jam-2.4.pristine/headers.c jam-2.4/headers.c
--- jam-2.4.pristine/headers.c Sat Mar 2 23:28:48 2002
+++ jam-2.4/headers.c Tue May 21 14:36:17 2002
@@ -13,7 +13,6 @@
# include "regexp.h"
# include "headers.h"
# include "newstr.h"
-
/*
* headers.c - handle #includes in source files
*
@@ -86,6 +85,8 @@
free( (char *)re[--rec] );
}
+
/*
* headers1() - using regexp, scan a file and build include LIST
*/
@@ -100,13 +101,25 @@
FILE *f;
char buf[ 1024 ];
int i;
+ LIST *hdrpipe;
- if( !( f = fopen( file, "r" ) ) )
- return l;
-
- while( fgets( buf, sizeof( buf ), f ) )
- {
- for( i = 0; i < rec; i++ )
+ if( hdrpipe = var_get( "HDRPIPE" ) )
+ {
+ strncpy(buf,hdrpipe->string,1024);
+ strncat(buf," ",1024);
+ strncat(buf,file,1024);
+ strncat(buf,"\n",1024);
+ //printf("shelling %s\n",buf);
+ f = popen((const char*)buf, "r");
+ }
+ else
+ f = fopen( file, "r" );
+
+ if(!f) return l;
+
+ while( fgets( buf, sizeof( buf ), f ))
+ {
+ for( i = 0; i < rec; i++ )
if( regexec( re[i], buf ) && re[i]->startp[1] )
{
re[i]->endp[1][0] = '\0';
@@ -118,7 +131,10 @@
}
}
- fclose( f );
+ if(hdrpipe)
+ pclose(f);
+ else
+ fclose( f );
return l;
}
From: Chris Higgins <chris.higgins@cursor-system.com>
Subject: RE: layering and code generation
Date: Thu, 4 Jul 2002 14:33:35 +0100
I gave up on this for a while but came back to it after an idea that is
similar to what Diane is saying at the end here but not much work at all.
The key is being prepared to override jambase rules but the changes are
trivial and (I believe) backwards compatable.
To my jamrules I add
shell_t ?= shell ;
files_t ?= files ;
lib_t ?= lib ;
exe_t ?= exe ;
obj_t ?= obj ;
first_t ?= first ;
Now I replace any uses of these standard pseudotargets with their variable
counterparts. So for example Objects becomes
rule Objects {
local _i ;
for _i in [ FGristFiles $(<) ]
{
Object $(_i:S=$(SUFOBJ)) : $(_i) ;
Depends $(obj_t) : $(_i:S=$(SUFOBJ)) ;
}
}
(apologies for any typos, consult your jambase).
Now string the layers together:
rule MkLayers {
if $(<) {
Layer $(<[1]) ;
Layers $(<[1]) : $(<[2-]) ;
}
}
rule Layer {
first_t = $(<)first_t ;
shell_t = $(<)shell_t ;
obj_t = $(<)obj_t ;
files_t = $(<)files_t ;
lib_t = $(<)lib_t ;
exe_t = $(<)exe_t ;
Depends all : $(first_t) $(shell_t) $(obj_t) $(files_t) $(lib_t) $(exe_t) ;
Depends all $(shell_t) $(obj_t) $(files_t) $(lib_t) $(exe_t) : $(first_t) ;
NotFile $(first_t) $(shell_t) $(obj_t) $(files_t) $(lib_t) $(exe_t) ;
SubInclude LAYER_TOP $(<) ;
}
rule Layers {
if $(>) {
Depends $(>[1])first_t : $(<)exe_t ;
Layer $(>[1]) ;
Layers $(>[1]) : $(>[2-]) ;
}
}
In your topmost Jamfile write "MkLayers a b c ;". It only works at the
top level because of that SubInclude; it would take a bit more work to
generalise that.
So, the idea is that each layer dances to its own set of pseudo-targets and
layers are placed on top of each other last-of-one to first-of-next.
This seems to do what I want, but it's not well tested yet. Can any more
experienced jammers spot gotchas?
From: Chris Rumpf <Chris.Rumpf@calix.com>
Date: Mon, 8 Jul 2002 17:14:17 -0700
Subject: LOCATE bug in Jam 2.4?
Does the LOCATE variable leak - or perhaps it's the "on" operator?
In the below example why does Jam add the binding of c.yy to a.x?
Let me explain:
I simply want to set the LOCATE variable on c.yy ONLY.
I can't figure out how to do that. Jam is also putting the
binding on a.x (see below)
I use the "LOCATE on" operator - then print out the values of
the LOCATE variable for each target. It looks great UNTIL
the action is executed when magically the binding has happened
on a.x.
Can anyone explain to me what is going on here?
Am I doing something wrong here?
:-> cat Jamfile
SubDir TOP a src ;
rule see { return $(1) ; }
rule r1 { LOCATE on c.yy = /home1/crumpf/abc ; }
actions r1 { /bin/cp $(2) $(1) }
a1 = [ on c.yy see $(LOCATE) ] ;
a2 = [ on a.x see $(LOCATE) ] ;
Echo ====== a1 $(a1) ;
Echo ====== a2 $(a2) ;
r1 c.yy : a.x ;
:->
:-> jam c.yy
Environment: gcc - solaris2
====== a1 /home1/crumpf/abc
====== a2
...found 1 target(s)...
...updating 1 target(s)...
warning: using independent target a.x
r1 /home1/crumpf/abc/c.yy
cp: cannot access /home1/crumpf/abc/a.x
/bin/cp /home1/crumpf/abc/a.x /home1/crumpf/abc/c.yy
...failed r1 /home1/crumpf/abc/c.yy ...
...failed updating 1 target(s)...
:->
I expect this action line to look like
/bin/cp a.x /home1/crumpf/abc/c.yy
Subject: Re: LOCATE bug in Jam 2.4?
From: "Ingo Weinhold" <bonefish@cs.tu-berlin.de>
Date: Tue, 09 Jul 2002 12:40:54 CEST (+0200)
That is strange. You must have set LOCATE on c.yy somewhere else,
because r1 is invoked _after_ you look up the value.
When in doubt, consider this a feature, not a bug. ;-)
I didn't have a look into the source code, but it seems that the LOCATE
variable of a target is used for independent source targets with an
empty LOCATE as well. Just add the missing dependency (jam warns you
about it) in rule r1:
rule r1 {
LOCATE on $(1) = /home1/crumpf/abc ;
Depends $(1) : $(2) ;
}
From: Roger Lipscombe <RLipscombe@sonicblue.com>
Date: Mon, 22 Jul 2002 07:51:49 -0700
Subject: Problem: Directory with same name as target creates directory with same name as target
I've got a directory with the same name as the target to be generated in
that directory. This doesn't seem to work.
What am I doing wrong?
$ ls
Jamfile foo/
$ ls foo
Jamfile foo1.cpp foo1.h foo2.cpp foo2.h
$ cat Jamfile
SubDir TOP ;
SubInclude TOP foo ;
$ cat foo/Jamfile
SubDir TOP foo ;
Main foo : foo1.cpp foo2.cpp ;
I get:
$ jam
Jamrules: No such file or directory
warning: foo depends on itself
...found 14 target(s)...
...updating 3 target(s)...
C++ foo/foo1.o
C++ foo/foo2.o
MkDir1 foo/foo
Link foo/foo
/usr/bin/ld: cannot open output file foo/foo: Is a directory
collect2: ld returned 1 exit status
cc -o foo/foo foo/foo1.o foo/foo2.o
...failed Link foo/foo ...
...failed updating 1 target(s)...
...updated 2 target(s)...
$ jam clean
Jamrules: No such file or directory
...found 1 target(s)...
...updating 1 target(s)...
Clean clean
rm: `foo/foo' is a directory
rm -f foo/foo foo/foo1.o foo/foo2.o
...failed Clean clean ...
...failed updating 1 target(s)...
The Jamrules line is harmless, I know. The question is: why the MkDir1
foo/foo? Am I allowed to do this kind of thing?
Date: Mon, 22 Jul 2002 10:12:44 -0500 (CDT)
Subject: Re: Problem: Directory with same name as target creates
directory with same name as target
the easy fix is to not name them the same. there is a bug
in the mkdir target (perhaps a feature) that makes the
directory to be made a target name, thus it conflicts with
the other target of the same name.
From: Roger Lipscombe <RLipscombe@sonicblue.com>
Subject: RE: Problem: Directory with same name as target creates
Date: Mon, 22 Jul 2002 08:43:22 -0700
OK. So what's the hard fix? I want them to have the same name. Moreover,
I'm not calling MkDir anywhere, so what's going on?
Date: Mon, 22 Jul 2002 17:43:25 +0200 (MET DST)
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: Re: Problem: Directory with same name as target creates
To avoid the name clash explicitly add grist:
Main <$(SOURCE_GRIST)>foo : foo1.cpp foo2.cpp ;
The reason for that is the line
MakeLocate $(_t) : $(LOCATE_TARGET) ;
in the MainFromObjects rule. After variable expansion it reads
MakeLocate foo : foo ;
Which in theory is absolutely correct as the executable `foo' should be
located in directory `foo', but the name clash causes trouble:
MakeLocate in turn sets LOCATE on `foo' to `foo', makes `foo' depend on
`foo' and does a `MkDir foo'. Due to the LOCATE value the directory
target `foo' is bound to `foo/foo' (as is the executable `foo'). It
doesn't exist yet and therefore MkDir1 is invoked to make it.
Adding grist to the identifier of the executable target makes the two
targets syntactically different and thus avoids the cyclic dependency as
well as the setting of LOCATE on the wrong target.
Date: Mon, 22 Jul 2002 08:51:50 -0700 (PDT)
Subject: Re: Problem: Directory with same name as target creates directory with same name as target
You'll be happy to know this looks to be getting addressed in Jam 2.5. The
change description for change 1904:
Internal Perforce jam changes. See RELNOTES for details,
but they're:
1. MkDir grists dir names so as not to conflict
with other targets.
2. SubDir reworked to allow multiple, cooperating trees.
3. Makefile now uses $(EXENAME) so that . doesn't
need to be in the user's path.
These changes are bound for jam 2.5.
See the change details at:
http://www.rawbw.com/~dianeh/cgi-bin/p4db/chv.cgi?CH=1904
Date: Mon, 22 Jul 2002 18:25:20 +0200 (MET DST)
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: Re: Problem: Directory with same name as target creates
I wonder what the right procedure for suggesting changes is. I have
some patches ready -- ranging from simple adjustments to Jambase to set
sane variable values for BeOS up to the addition of a rule that executes
commands at parsing time (supplying their return values) and changes
that make Jam always read the whole Jamfile tree regardless of in which
subdirectory it is executed (nevertheless only the subtree is built) --
but I didn't dare to propose anything yet. ;-)
Date: Tue, 23 Jul 2002 14:22:40 -0700 (PDT)
Subject: rules that generate layered dependencies.
Hi, I spent some time playing with Jamrules and
reading a few hundred K of the email archive, but
maybe you can help me...
I'm working on a project that I'm attempting to
convert completely over to Jamfiles from Makefiles.
Most of it is fairly straightforward, with the
exception of a subproject that uses 'noweb' to
generate multiple .c files from a single .nw file.
You can think of this problem as a tape archive (.tar)
containing multiple c files, basically I want to have
program (depends on) tape archive
and two things to take place:
(some variable) = `tar -tvf` (tape archive)
program (depends on) (some variable)
(some variable) being a listing of .c files
each c file in (some variable) needs a custom action
for expansion with 'tar -xvf'
once everything is listed/expanded, it can be
handled implicitly by Jambase as usual.
From: <boga@mac.com>
Date: Mon, 29 Jul 2002 17:28:53 +0200
Subject: BUG: Recursive includes & JAM
If one has a file that includes itself, Jam goes crazy.
For example if you have a file called "Test.h" that contains the line:
#include "Test.h"
what happens in C code level is that:
- make0("Test.h")
pushsettings ("Test.h"->settings); *
headers ("Test.h"):
headers detects the #include "Test.h" line and calls the HdrRule rule:
It executes:
HDRRULE on "Test.h" = "HdrRule" ;
But since we executed (pushsettings on "Test.h"*). That swapped
the value tables, and now the "Test.h"->settings will write the global settings
...
popsettings ("Test.h"->settins);
*** Here the global symbol HDRRULE will be defined.
Possible fixes:
- In pushsettings mark the pushed settings so that we don't execute a second
pushsettings on the same settings list again.
actions createTestFile {
ECHO #include "$(<:G=:D=)" > $(<)
ECHO RunAgainToTest
}
rule TestHdrRule {
ECHO "-" $(<) "-" $(>) ;
HDRRULE on $(>) = $(HDRRULE) ;
HDRSCAN on $(>) = $(HDRSCAN) ;
}
rule SetHeaderScan {
HDRSCAN on $(<) = "^[ ]*#[ ]*include[ ]*[<\"](.*)[\">].*$" ;
HDRRULE on $(<) = TestHdrRule ;
}
createTestFile "Test.h" ;
createTestFile "Test2.h" ;
Depends all : "Test.h" "Test2.h" ;
SetHeaderScan "Test.h" ;
NOTFILE all ;
From: Lev Assinovsky <LAssinovsky@algorithm.aelita.com>
Date: Wed, 31 Jul 2002 16:16:58 +0400
Subject: Linking problems
I would like to do a very simple thing:
link my main tst.cpp with the third party library $(MYDIR)/libfoo.a.
The final command line should be:
g++ -o tst tst.cpp -L$(MYDIR) -lfoo
Currently I have in my Jamfile:
exe tst : tst.cpp
: <include>$(ADC_ROOT) <include>$(BOOST_ROOT)
<include>"$(HOME)/inc"
;
What must be added to provide linking with libfoo.a?
Date: Wed, 31 Jul 2002 15:07:54 +0200 (MET DST)
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: Re: Linking problems
The LINKLIBS variable is used for external libraries:
LINKLIBS on tst += $(MYDIR)/libfoo.a ;
The command line will look a bit different, but, I guess, you don't care
that much. If you do, you will have to fiddle with the link flag:
LINKFLAGS on tst += -L$(MYDIR) ;
LINKLIBS on tst += -lfoo ;
PS: If LINKLIBS/LINKFLAGS already have meaningful contents, a bug or feature (?)
forces you to adjust the line a bit to keep that value. For the first
version that would read:
LINKLIBS on tst = [ on tst return $(LINKLIBS) ] $(MYDIR)/libfoo.a ;
From: "Ehab Teima" <ehab_teima@hotmail.com>
Date: Wed, 31 Jul 2002 13:50:47 -0400
Subject: Including different Jam in another one
I'm just getting started with Jam. I have several jam files that build
different pieces of a product. I'm trying to write another jam (wrapper)
that includes all of those to build the whole product in one shot. My
question may be simple, but I'm new to Jam. How do I tell jam to
cd to a directory before trying to build some pieces? I ask because
all the pieces that have to be built from a different directory failed, while
the ones in the same directory as the wrapper succeeded.
Date: Thu, 1 Aug 2002 11:57:26 +0200 (MET DST)
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: Re: Including different Jam in another one
I'm not sure I understand your problem correctly. You have several
Jamfile trees building different parts of your project, but completely
independent from each other. And now you want to create a common root
Jamfile that joins the parts.
The problem with the current Jam is that each Jamfile needs to know its
exact location in the tree, that is, the path from the root to itself. If
you have different trees with different roots and want to create a single
root, you'll have to adjust *all* Jamfiles. What you can do with make -
cd into a subdir and invoke another make for it - works for Jam too, but
can't be considered very nice. You can define pseudo targets that are
built by cd'ing into a subdirectory and invoking Jam therein. This would
look like:
(Untested!)
rule SubDirBuild {
# SubDirBuild <pseudo target> : <directory> ;
Always $(1) ;
NotFile $(1) ;
Depends $(1) : $(2) ; # to avoid warnings
Depends all : $(1) ;
}
actions SubDirBuild {
cd $(2) ;
jam
}
SubDirBuild sub1 : subdir1 ;
SubDirBuild sub2 : subdir2 ;
SubDirBuild sub3 : subdir3 ;
As with make, using this strategy you lose some features, e.g. the
ability to specify targets on the command line, and job control. So in
general the unified Jamfile tree approach is to be preferred.
To analyze the actual source of your problem: it's the stupidity of the
SubDir and SubInclude rules in the current Jam version. Even now, with
only a few changes, SubInclude could take a dir argument relative to the
location of the Jamfile. With a bit more work SubDir could be made clever
enough to abandon the path argument. Then Jamfiles wouldn't contain any
hardcoded locations any more, which would increase maintainability a lot.
It would be painless to move a complete subtree, as changes would only
have to be made to the Jamfiles in the two concerned superdirectories.
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Date: Thu, 1 Aug 2002 12:47:12 +0200
Subject: JAM 2.4 & Digitalmars C++
Hi, I'm trying to compile the 2.4 release with the DigitalMars DMC++
compiler (http://www.digitalmars.com) and got some problems in the
directory scanning code. The findfirst etc. functions don't seem to be
very portable between different compilers.
1. Has someone successfully built 2.4 with DMC?
2. At least on Windows the FindFirst etc. functions are provided by the
Win32 API. Wouldn't it be better to rely on these functions than on the
runtime lib of the compiler?
Date: Fri, 9 Aug 2002 18:32:07 +0200
From: Matthias Braun <matzebraun@web.de>
Subject: Bug with SubDir rule and ALL_LOCATE_TARGET? Jambase improvement for mingw.
a) I'm using the SubDir and SubInclude commands in my jamfile for including other
Jamfiles in subdirectories. Now I want my generated object and exec files to go
to another folder. So I was setting:
ALL_LOCATE_TARGET = out
However, this doesn't actually work as expected: the file plugin/test.cpp should
go to out/plugin/test.o, but instead goes to out/test.o. After looking into the
Jambase, this line in the SubDir rule seems like a bug to me:
LOCATE_SOURCE = $(ALL_LOCATE_TARGET) $(SUBDIR)
Shouldn't this be:
LOCATE_SOURCE = [ FDirName $(ALL_LOCATE_TARGET) $(SUBDIR_TOKENS) ] ;
? Or am I missing something here?
b) I was using jam on mingw; it seems to me that Jambase has another bug here,
as it isn't defining the MV, CP, RM and SLASH variables even though they should
differ from the unix defaults. I'd suggest including this in Jambase at line 244:
MV ?= move /y ;
CP ?= copy ;
RM ?= del /f/q ;
SLASH ?= \\ ;
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Date: Tue, 13 Aug 2002 13:48:28 +0530
Subject: Using $(3) in actions
Hi, I need to access an extra argument in an action, and can't. I
simplified the Jamfile down to this, which IMO should print a c on the
third line but doesn't.
rule T { Depends exe : $(<) ; }
actions T {
echo a works: $(<) again: $(1)
echo b works: $(>) again: $(2)
echo c does not work: $(3)
}
T a : b : c ;
When I run it, I see this:
$ jam
...found 8 target(s)...
...updating 1 target(s)...
warning: using independent target b
T a
a works: a again: a
b works: b again: b
c does not work:
...updated 1 target(s)...
$
I'm using a recent jam from the public repository.
Subject: Re: Using $(3) in actions
From: Matt Armstrong <matt@lickey.com>
Date: Tue, 13 Aug 2002 07:11:07 -0600
When an action runs, only $(<) $(>) and any variables set "on" $(<)
are available. So...
rule T {
Depends exe : $(<) ;
FOO on $(<) = $(3) ;
}
actions T {
echo a works: $(<) again: $(1)
echo b works: $(>) again: $(2)
echo c does not work: $(3)
echo getting c via FOO works: $(FOO)
}
Date: Tue, 13 Aug 2002 19:11:37 +0530
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Using $(3) in actions
I think it's a bug, though. $(1) is available, $(2) is, and I don't see why
there should be such a big difference between $(1), $(2) and $(3).
Date: Mon, 2 Sep 2002 17:30:02 +0200
From: Matthias Braun <MatzeBraun@web.de>
Subject: How to get jam from public depot?
I'm using jam 2.4 but am dealing with some bugs (all mentioned on this list already).
Now I wanted to look at the latest version in the public depot and see
if they're fixed. But I got totally confused by this system...
Can someone please give me a sequence of unix commands that downloads the latest version of jam.
I'd also like to know if someone can tell me if there's a chance that FTJam or BoostJam
will be integrated into the main jam.
From: Craig Allsop <callsop@auran.com>
Date: Wed, 11 Sep 2002 17:39:14 +1000
Subject: duplicate headers & sufobj
Given a library that has a sub directory containing its sources and headers,
which are supplied to the Library rule, for example:
Library ui : ui\window.cpp ;
This produces the target in the sub directory ui. However, if LOCATE_TARGET
is used to redirect the target, for example:
LOCATE_TARGET = [ FDirName $(SUBDIR) bin ] ;
this will pass the target $(SUBDIR)\bin\ui\window.obj to the compiler, which
fails because the ui directory under the bin directory doesn't exist.
I was expecting it to pass the target $(SUBDIR)\bin\window.obj to the
compiler instead; however, there are a number of places in the Jambase which
convert the source file using $(source:S=$(SUFOBJ)). Should this be
$(source:GBS=$(SUFOBJ)) instead?
The real problem is that I have two libraries containing a window.h, so when
included from program that uses these libraries I use:
#include <ui/window.h>
From: Badari Kakumani <badari@cisco.com>
Date: Wed, 11 Sep 2002 11:35:53 -0700
Subject: existence of TEMPORARY targets
In our environment we have situations where a given object file x.o
needs to be part of multiple archive files x.a.
Given that, we are having a strange situation: some targets that are
marked TEMPORARY exist even after the completion of jam. In such a
situation, re-running jam considers targets based on those temporary
targets to be out of date and causes rebuilds (even though a rebuild
was NOT necessary). Has anyone encountered a similar problem?
I traced the rebuilds to the following code segment in make.c in the jam sources:
else if( anyhow && !( t->flags & T_FLAG_NOUPDATE ) ) {
fate = T_FATE_TOUCHED;
}
/* else if( t->binding == T_BIND_EXISTS && t->flags & T_FLAG_TEMP )
{
fate = T_FATE_ISTMP;
} */
else if( t->binding == T_BIND_EXISTS && pbinding == T_BIND_MISSING ) {
fate = T_FATE_NEWER;
}
Note that I commented out the else-if clause for temporary targets with
existing bindings.
The questions I have in this regard are:
- what is that special else-if clause about?
- what special treatment needs to be given to temporary targets that exist?
- would I incur any inconsistencies if I comment out the above else-if clause?
From: Craig Allsop <callsop@auran.com>
Date: Fri, 13 Sep 2002 16:13:30 +1000
Subject: rule File?
Is it intended the File rule would set the current time on the destination file?
I'm using jam on NT and the File rule to copy header files. If a dependency
is newer than the header being copied, jam will assume the newest time for
this header and will always perform the copy because the copy (for NT) uses
the same time as the source file (as opposed to the one jam knows).
I added a 'touch' into the File action to correct this, but is it right?
Date: Fri, 13 Sep 2002 07:19:09 -0600
Subject: Re: rule File?
From: matt@lickey.com (Matt Armstrong)
I think if you reset the modification time of the destination to be that
of the source everything will be fine.
From: "Hoff, Todd" <Todd.Hoff@ciena.com>
Date: Fri, 13 Sep 2002 11:36:44 -0700
Subject: faster build times
Currently we use a build farm of win2k machines for builds. We need to produce windows
images and cross compile for vxworks on PPC. Adding more machines won't help
much because the network dominates the build times. We are looking into SANs.
I know some people are using multi-processor machines for builds. I was wondering
whether you'd recommend this approach? What configuration are you using? What
are your build times?
Has anyone cross-compiled on a unix machine to target windows?
From: "Craig Allsop" <callsop@sceptre.net>
Subject: Re: rule File?
Date: Mon, 16 Sep 2002 09:16:08 +1000
I thought so too at first, but I traced the fate of one of my problem files and
discovered that jam compares the time of the target against the newest
dependency. The time of the source is not enough, so I'm using the current
time, as if the file was "built", not just copied.
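A minimal sketch of that idea (untested; assumes a Unix-style shell with cp and touch, and replaces the stock File actions):

```jam
# Untested sketch: copy the file, then stamp the copy with the
# current time, so jam sees it as "built" rather than copied.
actions File
{
    cp $(>) $(<)
    touch $(<)
}
```

On NT the copy would use `copy` plus a `touch` port, or any command that rewrites the destination's timestamp.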
From: Craig Allsop <callsop@auran.com>
Subject: Re: rule File?
Date: Mon, 16 Sep 2002 10:28:56 +1000
I've just tried to create a small test case:
--- a.cpp
#include <a.h>
void main(){}
--- a.h
#include <b.h>
--- b.h
--- jamfile
SubDirHdrs . ;
Main a : a.cpp ;
File c.h : a.h ;
I was trying to recreate the problem described initially (which this doesn't),
however it shows another problem that I'm having trouble with.
The File rule here causes jam to evaluate the a.h node (as it is
dependent on 'files') before the a.cpp node (which adds HDRRULE on it). No
headers are scanned on a.h as a result. Take the File rule out of my
jamfile and dependencies on a.h are processed as expected.
The problem is that variables are set during the bind pass, sometimes after the
fate is already determined. Perhaps the bind pass should comprise two
passes, one to determine header dependencies/variables and another to
determine fate? What implications would this have?
Provided that I can overcome this issue, I'd like to compile/link files with
c.h as I have one project that generates libraries and headers to a common
location in my source tree and several other projects would be
compiled/linked against the common ones.
Date: Sun, 15 Sep 2002 18:57:02 +0100
From: Barnaby Gray <bee@pickle.me.uk>
Subject: Getting CORBA to play nicely with Jam
I've looked back through the archives, but never found a solution for
this problem that actually works for me.
I'm using the MICO CORBA implementation and have the Idl compiler
compiling the .idl files into corresponding .cc and .h files. These
are then compiled in a .o - the problem is the header scan on the
intermediate .h files doesn't appear to be performed, as jam doesn't
seem to see the cross-dependancies between the generated headers from
multiple .idl files that include each other.
ie.
A.idl -> A.h A.cc
A.cc -> A.o
B.idl -> B.h B.cc
B.cc -> B.o
but B.idl has a #include "A.idl" in it, so the generated B.h has a
#include "A.h" in it. However, despite my best efforts I can't get jam
to realise there's this dependency.
Here's my Jamrules:
IDL = idl ;
IDLFLAGS = ;
rule UserObject {
switch $(>) {
case *.idl : C++ $(<) : $(<:S=.cc) ;
IdlObject $(>) ;
case * : ECHO "unknown suffix on" $(>) ;
}
}
NOTFILE idlfiles ;
rule Idl {
MakeLocate $(<) : $(LOCATE_TARGET) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
# handle #includes for generated .h and .cc files
HDRRULE on $(<) = HdrRule ;
HDRSCAN on $(<) = $(HDRPATTERN) ;
HDRSEARCH on $(<) = $(HDRS) $(SUBDIRHDRS) $(STDHDRS) ;
HDRGRIST on $(<) = $(HDRGRIST) ;
Depends files : $(<) ;
Depends idlfiles : $(<) ;
Depends $(<) : $(>) ;
Clean clean : $(<) ;
}
actions Idl {
$(IDL) $(IDLFLAGS) $(>)
}
rule IdlObject {
Idl $(<:S=.h) $(<:S=.cc) : $(<) ;
}
PS. I'm not subscribed to the list, so if you could cc me in the replies.
Date: Tue, 24 Sep 2002 18:15:08 +0400
From: Vladimir Prus <ghost@cs.msu.su>
Subject: gettext support?
has anybody implemented jam support for the gettext tools? I'm trying
to do that now, but am having some problems.
Date: Thu, 26 Sep 2002 16:42:54 +0400
From: Vladimir Prus <ghost@cs.msu.su>
Subject: possible bug with SEARCH/LOCATE
I've noticed that if I do something like:
SEARCH on J1 = /tmp/X/2/.. ;
include J1 ;
Then J1 won't be found even if it's present in the specified dir. It
appears that jam just can't handle ".." in pathname. Should this be
considered a bug?
Date: Fri, 27 Sep 2002 08:42:53 +0200
From: boga@mac.com
Subject: Re: possible bug with SEARCH/LOCATE
This is by design. The "include" executed in the "Parsing" state, while
the file binding based on SEARCH variable is executed at "Bind" time.
See http://public.perforce.com/public/jam/src/Jam.html for more details.
However you could try something like this:
J1 = $(J1:D="/tmp/X/2/..") ;
include $(J1) ;
Or if you want to search in multiple directories you can use the GLOB rule.
rule MyPreBind {
local grist = $(<:G) ;
local result = [ on $(<) GLOB $(SEARCH) : $(<:D="":G="") ] ;
result = $(result:G=$(grist)) ;
}
SEARCH on J1 = "/tmp/X/2/.." ;
J1 = [ MyPreBind $(J1) ] ;
include $(J1) ;
Date: Fri, 27 Sep 2002 10:56:57 +0400
From: Vladimir Prus <ghost@cs.msu.su>
Subject: Re: possible bug with SEARCH/LOCATE
>> SEARCH on J1 = /tmp/X/2/.. ;
>> include J1 ;
>>
>> Then J1 won't be found even if it's present in the specified dir. It
>> appears that jam just can't handle ".." in pathname. Should this be
>> considered a bug?
I'm afraid it's not so. If that were true then SEARCH settings would not
affect "include" at all. However, it does, and we use that all the time
in Boost.Build.
Going back to my example, there was my error. Jam handles ".." in SEARCH
values correctly. Sorry for the noise.
Date: Fri, 27 Sep 2002 09:19:18 +0200
From: boga@mac.com
Subject: Re: possible bug with SEARCH/LOCATE
You're right. I was not aware of that feature.
http://public.perforce.com/public/jam/src/Jam.html:
> include: The include file is bound like a regular
> target (see Binding above) but unlike a regular target the include
> file cannot be built.
From: Badari Kakumani <badari@cisco.com>
Date: Tue, 1 Oct 2002 11:34:22 -0700
Subject: modified NOUPDATE ??
Is it possible for a target T1 to depend on a target xxx.dll.a
and yet NOT be affected by the time-stamp on the target xxx.dll.a?
We are in a situation where the updating actions of the target xxx.dll.a
may or may NOT really update the target xxx.dll.a.
If it turned out that the actions associated with xxx.dll.a did NOT
actually change the contents of xxx.dll.a, is there a way
to NOT rebuild the target T1 that depends upon xxx.dll.a?
The NOUPDATE builtin rule provides some of this functionality,
but the problem with NOUPDATE is that once the target xxx.dll.a gets
built, the actions associated with it never get executed; what we wanted
is for the actions to run and then decide whether the target needs an update or not.
I would appreciate any help that you could provide on this.
Date: Wed, 2 Oct 2002 10:53:18 +0200 (MET DST)
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: Re: modified NOUPDATE ??
You certainly mean the actions building xxx.dll.a, not the ones for T1?!
AFAIK there is no jam feature allowing for that. You can perhaps work
around it by introducing another target (an actual file) that is touched by
the xxx.dll.a actions when something has changed, but left alone when
everything is still the same. You would make T1 depend on this helper
target (not on xxx.dll.a) and thus cause updates only when needed.
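An untested sketch of that idea; the names xxx.stamp and UpdateArchive are hypothetical, and the change check assumes a Unix shell with cmp:

```jam
# Hypothetical sketch: the archive's action touches xxx.stamp only
# when the archive contents actually changed; T1 depends on the
# stamp instead of on the archive itself.
actions UpdateArchive
{
    cp $(<) $(<).prev 2>/dev/null || true
    ar ru $(<) $(>)
    cmp -s $(<) $(<).prev || touch xxx.stamp
}
Depends xxx.dll.a : x.o y.o ;
UpdateArchive xxx.dll.a : x.o y.o ;
Depends T1 : xxx.stamp ;
```

One caveat: ar may rewrite member timestamps inside the archive even when nothing changed, so the comparison may need a deterministic-archive mode or a member-level check to be reliable.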
From: Badari Kakumani <badari@cisco.com>
Date: Wed, 2 Oct 2002 08:53:07 -0700
Subject: Re: modified NOUPDATE ??
The only caveat with this is that when you invoke:
'jam -j8 T1'
it will NOT check to see if xxx.dll.a is outdated and rebuild it.
I wanted the actions associated with building xxx.dll.a to be run
whenever xxx.dll.a is outdated - only that
a) the actions may or may NOT really update xxx.dll.a, AND
b) T1 should NOT be considered outdated because of xxx.dll.a (even though
jam figured xxx.dll.a was outdated and invoked the
actions associated with xxx.dll.a)
Subject: Re: modified NOUPDATE ??
From: "Ingo Weinhold" <bonefish@cs.tu-berlin.de>
Date: Thu, 03 Oct 2002 15:43:37 CEST (+0200)
It's worse, as it wouldn't even work if you don't specify a target (or
build `all'). First jam analyzes the dependencies and determines which
targets need to be updated; thereafter it actually updates the targets. So
no feedback is possible at all: your actions can't influence what
is updated.
Sorry, I shouldn't reply when I'm not completely awake. ;-)
Well, unless I'm missing something, there shouldn't be any way to do
that.
From: Badari Kakumani <badari@cisco.com>
Date: Thu, 3 Oct 2002 08:41:52 -0700
Subject: Re: modified NOUPDATE ??
I am NOT saying actions should influence what is updated.
All I need is a way to tell jam that the target xxx.dll.a will NOT outdate
its parents, by using a built-in rule similar to NOUPDATE...
I guess all I need is a rule called NOPARUPDATE, and when I invoke:
NOPARUPDATE xxx.dll.a ;
jam should understand that it should NOT rebuild the parents because
of xxx.dll.a becoming out-of-date.
Any ideas on implementing such a builtin rule? All I am asking for is a
way to tell jam to rebuild the child target, BUT for the rebuild of this
particular child target to NOT cause the parent target to be out-of-date.
Subject: Re: modified NOUPDATE ??
From: "Ingo Weinhold" <bonefish@cs.tu-berlin.de>
Seems that I misunderstood you. You can approximate this behavior
pretty closely by introducing an intermediate target, say T0. T1
depends on T0, T0 on xxx.dll.a. T0 is a real file target with NoUpdate
property. That could look like:
rule Touch { Depends $(1) : $(2) ; Clean clean : $(1) ; }
actions Touch { touch $(1) }
NoUpdate T0 ;
Touch T0 : xxx.dll.a ;
RuleToBuildT1 T1 : T0 : xxx.dll.a ...
RuleToBuildT1 could of course just as well include the two T0 lines.
Adding the rule is simple -- add a flag T_FLAG_NOPARUPDATE in rules.h,
and duplicate the entry for NoUpdate in load_builtins() in builtins.c
replacing the name and the flag with yours -- making it work is much
harder. ;-)
make0() in make.c would be your playground...
From: Badari Kakumani <badari@cisco.com>
Date: Fri, 4 Oct 2002 15:56:39 -0700
Subject: Re: modified NOUPDATE ??
Yep, precisely - this was the reason. I incorrectly concluded
based on the behaviour of the older jam.
We are still using jam 2.2 (from 1997?) and that seems
NOT to behave correctly.
When I used the latest jam 2.4, it works as you noted.
This solution would certainly give us the sort of behaviour we want.
From: Matthew Bloch <matthew@no51.com>
Date: Thu, 10 Oct 2002 23:46:35 +0100
Subject: Handling of whitespace in Windows / directory separator hack
I've been setting up a series of Jamfiles for porting a Win32-only game from a
Visual Studio build, using the latest Jam checked out from the Public Source
Depot.
My persistent stumbling block is Jam's handling of white space in directory
names. Because I've got VC++ & the Platform SDK in their default locations
(as do the other programmers on this project), I've got to have Jam handle
directories with whitespace properly, and I'm not sure whether it can, or
whether I'm just not doing it correctly. Either way the docs don't really
address this.
The most pressing problem I'm facing is that the call to the linker ends up
looking like this:
link program.exe ... tons of object files ...
"c:/program\lib\advapi32.lib files/microsoft\lib\advapi32.lib
visual\lib\advapi32.lib studio/vc98/"\lib\advapi32.lib
"c:/program\lib\libc.lib files/microsoft\lib\libc.lib visual\lib\libc.lib
studio/vc98/"\lib\libc.lib "c:/program\lib\oldnames.lib
files/microsoft\lib\oldnames.lib visual\lib\oldnames.lib
studio/vc98/"\lib\oldnames.lib "c:/program\lib\kernel32.lib
files/microsoft\lib\kernel32.lib visual\lib\kernel32.lib
studio/vc98/"\lib\kernel32.lib
Now all I did was to specify MSVCRT="c:\program files\microsoft visual
studio\vc98" and let Jam handle the call to the linker. It seems as if Jam's
"product" variable expansion is causing this problem, but is there any way to
avoid it? Can someone at least talk me through what's going on?
Also, the only way I've found to add include directories containing spaces
looks like this; is it the only way? And should I risk allowing the user to
specify these directories as -s settings (e.g. -sDIRECTX_HEADERS="c:\directX
SDK" but I suspect that won't work):
HDRS = $(HDRS) "\"c:\\program\ files\\microsoft\ visual\
studio\\vc98\\stlport-4.5.3\\stlport\"" ;
This doesn't work, but I thought it ought to have been equivalent:
HDRS += "\"c:\\program\ files\\microsoft\ visual\
studio\\vc98\\stlport-4.5.3\\stlport\"" ;
I don't know exactly why it works - it just does, because the C compiler appears
to get it onto the command-line correctly - but I'm not sure such a pathname is
of any use to Jam for scanning for headers. That's not so important for system
headers, but for libraries within the build directory it is.
There seems to be quite a lot of customisation of Jam out there; certainly ftjam
appears to try to handle this, but I've used "classic" Jam/MR because of a few rules
that are missing from the ftjam build (well, FDefines; it seems to lag behind
on a few points, making it only backwards compatible with an old version).
Not only this there's Boost.jam and Jam for building Wonka... will any of
these new features be integrated into the Perforce version? Would any
particular version suit my needs a little better? Or is somebody out there
rolling all these new features together into an uber-Jam? So far I've found:
http://www.freetype.org/jam/
http://www.boost.org/tools/build/build_system.htm
ftp://wonka.acunia.com/pub/wonka/acunia-jam.tar.gz
Ironically I found win32 Jam couldn't build itself from the depot with an
older version of itself (because of problems with spaces again), I had to use
ftjam and use its JAM_TOOLSET variable instead (but as I said, I couldn't use
that because I was using FDefines).
Finally, just a point of style: in order to have Jam build
architecture-specific subdirectories of object files, I've got this rule to
replace SubDir in the top of my per-directory Jamfiles:
rule SubDirObjs {
if $(TARGET_OS) = $(OS) {
SubDir TOP Builds native-$(TARGET_OS) $(<[2-]) ;
} else {
SubDir TOP Builds cross-$(OS)-$(TARGET_OS) $(<[2-]) ;
}
local _objdir = $(LOCATE_TARGET) ;
SubDir $(<) ;
LOCATE_TARGET = $(_objdir) ;
}
Is this the most sensible way of going about things? I'd like to just say:
SubDir TOP Source Game
LOCATE_TARGET = $(TOP)/Builds/cross-$(OS)-$(TARGET_OS)/Game
but of course those need to be backslashes, because we want backslashes for
NT, so I've used the SubDir rule to generate pathnames.
I'm excited by Jam, and I can see it working for my project already (at least
Linux & Win32 builds) but I'm a bit confused by its variable handling and the
multitude of versions out there. Any comments would be appreciated.
From: johan.nilsson@esrange.ssc.se
Date: Mon, 14 Oct 2002 12:50:27 +0200
Subject: GLOBbing and OpenVMS
I'm having trouble using GLOB under OpenVMS Alpha 7.1 (Jam 2.4). Anything
wrong with the following usage?
SOURCE_FILES = [ GLOB [.UNIT] : *.CPP ] ;
Echo $(SOURCE_FILES) ;
From: johan.nilsson@esrange.ssc.se
Date: Thu, 17 Oct 2002 11:41:18 +0200
Subject: VMS, dependencies, relative headers
it seems to be impossible to get both jam (2.4) and the DEC C++ 6.5 compiler
(cxx) to locate the header files correctly when using relative includes
under OpenVMS, e.g.:
--- snip from whoever.cpp ---
#include <Whatever/Whoever.h>
To get cxx to find the headers, I created a logical root for the include root, e.g.:
def/job/trans=conc INC_ROOT "DISK$USER:[MYDIR]"
Then I can use this in a unix-style path to cxx, e.g.:
cxx /INCLUDE_DIR=("/inc_root") => jam "-sHDRS=/inc_root"
And then I can merrily compile the stuff - but it seems like jam can't
locate the header; if I change anything in
"DISK$USER:[MYDIR.WHATEVER]WHOEVER.H" it doesn't result in a recompile of
whoever.cpp!
I also tried to include the VMS style directory specification in combination
with the unix-style stuff, e.g.:
jam "-sHDRS=/inc_root,DISK$USER:[MYDIR]"
Could somebody help me out here, please. Is there a way out or does anyone
have a patch available for jam?
Oh, and on a related question (i.e. jam+OpenVMS+cxx): When creating a
library (olb) using c++ templates, has anyone been able to insert the
template instantiations from the repository directory using jam? I tried the
following (which did not work) :
Library MyLib : file1.cpp file2.cpp ;
LibraryFromObjects MyLib : [ glob [.cxx_repository] : *.obj ] ;
Date: Thu, 24 Oct 2002 12:08:43 +0000 (GMT)
From: ian.macarthur@baesystems.com
Subject: Query about UserObject rule?
This is probably a really simple thing, so I apologise in advance....
I have a test harness that was last built about 4 years ago.
Today, I had to make a "tweak" to it, so I pulled the source from archive,
stuck it onto a host, installed jam, made my changes...
This is on NT4 with mingw as the compiler.
Invoking jam, I get:
# jam
Compiler is GCC with Mingw
Unknown suffix on utils.o - see UserObject rule in Jamfile(5).
Hmmm, this used to work, albeit with some older version of jam, and probably an
older version of gcc too...
It *seems* to have understood $(SUFOBJ), though, 'cos it complains about utils.o.
The jamfile looks like this:
Objects utils.c ;
# Main DAP : DAP_end.c utils$(SUFOBJ) ;
# Main PC_menu : PC_menu.c utils$(SUFOBJ) ;
Main Flashit : Flash_gui.cxx client.cxx utils$(SUFOBJ) ;
I'm only trying to build the last Main item, so I commented out the first two.
IIRC, I had originally added the Objects rule for utils.c to stop it getting
built again by each target...
I tried adding another rule:
Object utils$(SUFOBJ) : utils.c ;
In case that might help. It didn't.
So in the end I did:
Main Flashit : Flash_gui.cxx client.cxx utils.c ;
Which worked, of course.
I'm *almost* certain this jamfile used to work - where have I gone wrong here?
This was with jam-2.4:
Jam 2.4. OS=MINGW. Copyright 1993-2002 Christopher Seiwald.
And I also tried ft-jam, just in case it was different:
Jam/MR Version 2.3.2. Copyright 1993, 2000 Christopher Seiwald. OS=NT.
Date: Thu, 24 Oct 2002 09:29:12 -0400
From: Oliver Schoenborn <oliver.schoenborn@utoronto.ca>
Subject: building several execs from common sources
Hello, I am new to Jam and hoping that it will continue to be as
intuitive as it first seemed :) I have a simple problem: building two
different targets (executables) that share common sources. I have done it thus:
# Jamfile contents
COMMON_SRCS =
DeformableObject.cc
HySPEngine.cc ;
Main hysp : $(COMMON_SRCS) main.cc ;
Main ghysp : $(COMMON_SRCS) ParticleWindow.cc main.cc ;
This works, except that all COMMON_SRCS files get built twice because
the variable appears in more than one Main rule. So clearly I'm not
understanding something fundamental about Jam, what is it? How should
I do this, instead of the above?
In case it matters: this is the latest jam release (2.4) on SGI IRIX 6.5.
Date: Thu, 24 Oct 2002 16:05:19 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: building several execs from common sources
This can be viewed as a feature. Jam doesn't trust that the "two" .o files
are built using the same flags, and so on.
Jam could become smarter, and find out that the two actions are the same.
I don't think anyone's going to do that work, because most people use a
different way of building, namely:
Library common.a : DeformableObject.cc HySPEngine.cc ;
Main hysp : main.cc common.a ;
Main ghysp : ParticleWindow.cc main.cc common.a ;
(Something looks wrong with that code... but I suppose you get the idea,
so I'm not going to hunt the problem down.)
Date: Thu, 24 Oct 2002 16:04:46 +0200 (MET DST)
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: Re: building several execs from common sources
I think the reason is that the C++ rule is not designed to be invoked
more than once. You can work around this by using MainFromObjects for one of
the targets. If you add `main.cc' to COMMON_SRCS as well, you can e.g. use:
MainFromObjects hysp : $(COMMON_SRCS:S=$(SUFOBJ)) ;
In case you plan to add non-common sources, you need to apply the Objects
rule to compile them, of course.
Date: Thu, 24 Oct 2002 11:58:18 -0400
From: Oliver Schoenborn <oliver.schoenborn@utoronto.ca>
Subject: Re: building several execs from common sources
Ah yes of course, good idea. This works well. However, the Library rule
deletes the object files after creating the library. This is weird
default behavior since it means that if even one source for the library
changes, all the source files have to be recompiled.
So until I find a way to fix that, I use a different solution that
extends what Ingo suggested: I use the Objects rule to compile the
common files to .o, and I add a rule that allows building an executable
from those objects plus extra sources:
COMMON_SRCS =
DeformableObject.cc
HySPEngine.cc ;
rule MainWCommon {
Objects $(2) ;
MainFromObjects $(1) : $(2:S=.o) $(COMMON_SRCS:S=.o) ;
}
MainWCommon hysp : main.cc ;
MainWCommon ghysp : ParticleWindow.cc grmain.cc ;
Date: Thu, 24 Oct 2002 18:11:57 +0200
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: building several execs from common sources
The .o files are stored inside the library. At recompile time, only those
files in the library that need rebuilding are rebuilt. Works for me.
Date: Thu, 24 Oct 2002 14:45:45 -0400
From: Oliver Schoenborn <oliver.schoenborn@utoronto.ca>
Subject: Shared libs
What is the proper way of creating a shared library with jam? I've tried:
SUFLIB = .so ;
Objects $(COMMON_SRCS) ;
MainFromObjects libhysp$(SUFLIB) : $(COMMON_SRCS:S=.o) ;
Depends hysp : libhysp ;
Main hysp : main.cc ;
LinkLibraries hysp : libhysp ;
where I use Objects rule because I'll eventually want to build both the
static and shared libraries (which can be done from the same code with
SGI's CC compiler). The shared library gets properly built, but jam
tells the compiler to link against the static library, so linking fails. I've
looked at the archives and all I can find are posts where people have
implemented (or mention having implemented) a rule for shared libs; if that's
the case, how come there's so much reinventing of the wheel?
Date: Fri, 25 Oct 2002 16:37:15 -0400
From: Oliver Schoenborn <oliver.schoenborn@utoronto.ca>
Subject: Re: Shared libs
Since this is a faq: for the benefit of others, here is one simple way
of using shared libs built with jam:
SUFLIB = .so ;
LINKFLAGS on libYourLib$(SUFLIB) += -shared ;
Main libYourLib$(SUFLIB) : file1.cc file2.cc file3.cc ;
LINKLIBS on YourTarget += -L. -lYourLib ;
Depends YourTarget : libYourLib$(SUFLIB) ;
Main YourTarget : main.cc other.cc etc.cc ;
I.e. you first build the shared library as an executable (which it is,
from the point of view of linkage, to the compiler), add the -l flag for
link stage, and you're in business.
From: "Oliver Schoenborn" <oliver.schoenborn@utoronto.ca>
Date: Sun, 27 Oct 2002 14:25:27 -0500
Subject: how to make jam run a built test program
Hello, I'm trying to create a test action that will run a bunch of test
programs built by jam. I have
rule RunMany {
    # not sure yet what next three
    # lines are for
    NotFile dummy ;
    Depends dummy : $(<) ;
    Depends $(<) : $(>) ;
    # run each test
    local _i ;
    for _i in [ FGristFiles $(2) ] { Run $(_i) ; }
}
actions Run { $(1) }
RunMany tests : testFoo1 testFoo2 testBar1 testBar2 ;
When I type "jam tests", nothing happens. If I type "jam -d+5 tests |
grep Run" I can see
The four test files, when run from the command line, produce text
output. Since I don't see any with "jam tests" it doesn't look like they
are being run. Any ideas?
Date: Sun, 27 Oct 2002 17:03:36 +0100
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: how to make jam run a built test program
I'm afraid I don't really have time to think about this problem, but at
first glance it looks as if:
a) you could use an ALWAYS invocation somewhere, so that the tests are run
always, not just when they've been built.
b) you're trying to use a dirty hack that works with make.
Date: Sun, 27 Oct 2002 14:11:29 -0800 (PST)
Subject: Re: how to make jam run a built test program
If you always want all the tests to run whenever you ask for them, then
you need to specify that with "Always". If you want them to only run when
either whatever they're testing is newer than their output or they are
themselves newer, then you'd need to specify that association, and you
wouldn't use "Always".
The NotFile line specifies a "pseudo-target" (ie., a target that never
actually exists as a file) called "dummy". The Depends line says the
"dummy" pseudo-target depends on the target specified in a RunMany (in
your example, "tests"). The second Depends line (DEPENDS, the old-style
casing, is the same rule) says that the target of a RunMany depends on its
"source". I suppose the idea was to have a generic rule for "running many
anythings specified to the right of the first colon and collectively
called that which is specified to the left of it". But the "dummy" is,
frankly, a bit silly, since a) the comment says "run each test", which
sort of undermines the whole "generic" thing, and b) you could just do
"NotFile $(<) ;" instead of having "dummy" (or just not have a NotFile at
all -- although it's probably better to, since it makes things a bit
clearer).
Nothing happens because the only things that "tests" depends on are the tests named as its
"source", and they exist and are up-to-date with respect to "tests", which
is itself missing, but doesn't have any updating actions associated with
it, since you're passing the test to Run, not "tests" (so it's regarded as
a NotFile anyway, so Jam shines it on).
Also, where you're gristing will make it so that in order to run something
individually, from anywhere other than TOP, you'd need to include the
grist as part of the thing's name, which is pretty gruesome.
To have a generic run-many-anythings rule, do:
rule runMany {
    NotFile $(<) ; # not required, but probably clearer
    for thing in $(>) {
        Depends $(<) : $(thing) ;
        local t = [ FGristFiles $(thing) ] ;
        if $(t) = $(thing) { t = $(t:G=<.>) ; }
        Depends $(thing) : $(t) ;
        SEARCH on $(t) = $(SEARCH_SOURCE) ;
        Always $(t) ;
        runOne $(t) ;
    }
}
actions runOne { $(<) }
And in your Jamfile, do:
runMany tests : test1 test2 ;
runMany scripts : script1 script2 ;
runMany doodahs : doodah1 ;
To run all the tests, do 'jam tests'. To run a single test, 'jam test1'.
Ditto for any runMany target.
If you wanted just a dedicated run-tests rule, it'd be essentially the
same, except you'd hard-code the pseudo-target name in the rule and not
include it in the Jamfile target, and $(<) references would be referring
to the tests (ie., there wouldn't be any $(>)'s):
rule runTests {
    for test in $(<) {
        Depends tests : $(test) ;
        local t = [ FGristFiles $(test) ] ;
        if $(t) = $(test) { t = $(t:G=<.>) ; }
        Depends $(test) : $(t) ;
        SEARCH on $(t) = $(SEARCH_SOURCE) ;
        Always $(t) ;
        runTest $(t) ;
    }
}
actions runTest { $(<) }
And in your Jamfile:
runTests test1 test2 ;
Since "tests" doesn't appear in the Jamfile itself, though, you'd need to
know that was the pseudo-target's name (of course that's true for most
pseudo-targets [eg., lib, exe, etc.]).
From: "Oliver Schoenborn" <oliver.schoenborn@utoronto.ca>
Subject: Re: how to make jam run a built test program
Date: Sat, 2 Nov 2002 21:13:00 -0500
Thank you Diane and Arnt for your replies. I also found in the tutorial a
whole section on this issue. I haven't had a chance to try it out or compare
with your suggestions but given that it differs from the ones you proposed,
it may be of interest to you too. It is at
http://www.perforce.com/perforce/conf2001/wingerd/WPLaura.pdf. Best,
Date: Mon, 18 Nov 2002 16:18:34 +0100 (MET)
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: compile.c: compile_on(): Why search()?
I'm currently doing a bit of profiling to optimize a rather large build
system. And while doing so, I stumbled over the following line in
compile_on():
t->boundname = search( t->name, &t->time );
Why would one do that? I somehow suspect that it is an unneeded leftover
from copying and pasting compile_include(). At least the unchanged comment
for `parse->left' (preceding the function definition) seems to support
that suspicion.
Since search() causes disk access (one or more timestamp() calls),
excessive use of on-target rule execution can significantly degrade
build system performance.
Date: Mon, 18 Nov 2002 12:28:22 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: compile.c: compile_on(): Why search()?
| I'm currently doing a bit of profiling to optimize a rather large build
| system. And while doing so, I stumbled over the following line in
| compile_on():
|
| t->boundname = search( t->name, &t->time );
Sure looks bogus to me. I'll tidy it up for jam 2.5.
I'd be interested in your profiling results. IIRC, the overwhelming
time is spent compiling :-), next header file scanning, and then the
hash code in hashitem().
Subject: Re: compile.c: compile_on(): Why search()?
From: Matt Armstrong <matt@lickey.com>
Date: Mon, 18 Nov 2002 14:05:07 -0700
Header file scanning far dominates any other task in my tests, hence
the existence of the various header file cache patches to jam. The
cache reduces jam's "start up time" by a factor of 4-14 under
Windows, depending on hard disk speed.
Date: Mon, 18 Nov 2002 13:09:24 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: compile.c: compile_on(): Why search()?
Note that if you are working on a large enough project for the header
scanning code to consume a significant amount of time, you will benefit from
the header scanning extension in my public branch. That extension caches the
results of the header scan and reuses the cache for the next invocation of
jam if the source files haven't been modified. Look in:
//guest/craig_mcpheeters/jam/src/...
The profiles I do of a large project show jam spending all its time in the
hash code, as Christopher said above. Oh, that and compiling and linking :-)
Subject: Re: compile.c: compile_on(): Why search()?
From: "Ingo Weinhold" <bonefish@cs.tu-berlin.de>
Date: Tue, 19 Nov 2002 01:40:23 CET (+0100)
I kinda did already. I went with Matt Armstrong's version of the header
caching, which is based on your code. So thanks to both of you.
Actually the overhead jam introduces is insignificant compared to the
compile and link times when building a fresh copy of the sources. But
when working on code, usually only a few files are changed and need to
be recompiled/linked. And then jam's overhead becomes significant. For
the project I did some testing on, running jam on a completely up-to-date
tree, header scanning and up-to-date checks turn out to be the big
runtime hogs (75%). Somewhat surprisingly, reading the jamfiles (disk
access only, not including parsing) follows at 15%. I guess the
measurements heavily depend on the underlying file system.
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 20 Nov 2002 14:42:13 +0100
Subject: Compiling objects from .vsh shader files
I'm still a bit new to jam, and I have a problem compiling nvidia .vsh
shader files into .c (just a C source file with a big array in it)
with the nvasm assembler.
I suppose this is dead-simple, but can't make it happen. Not that I
haven't tried.
What I want is either:
1) Compile the .vsh file into a .h file which may be included by a .c
file, and make jam understand that the .h file needs to be compiled
before the .c file.
or
2) Compile the .vsh file into a .c file and then into an object file,
which may be linked with, just by specifying the .vsh file in the source
files list.
Anyone have an example of how to do something like this? Would make me
so very happy.
Date: Wed, 20 Nov 2002 15:44:03 +0100 (MET)
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: Re: Compiling objects from .vsh shader files
The basic thing you have to do is to write a rule that does the
compilation. Something like:
rule NvAsm {
    MakeLocate $(1) : $(LOCATE_SOURCE) ;
    Depends $(1) : $(2) ;
    Clean clean : $(1) ;
}
actions NvAsm {
    nvasm ... whatever the command line looks like, in: $(2) out: $(1)
}
Then override the UserObject rule (or extend yours, if you already have one):
rule UserObject {
    switch $(2) {
        case *.vsh :
            Cc $(1) : $(1:S=.c) ;
            NvAsm $(1:S=.c) : $(2) ;
        case * :
            ECHO "unknown suffix on" $(2) ;
    }
}
All of this preferably goes into your Jamrules file. The UserObject rule
is a fallback for the Object rule. Overriding it to add the `.vsh'
extension makes it possible to simply use the Object rule to compile your
`.vsh' files to objects. Thus all rules that use the Object rule (Objects,
Main, Library) will eat these files willingly.
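[Editor's note: with that override in Jamrules, a Jamfile needs nothing special; the file names below are made up.]

```
# phong.vsh goes through Object -> UserObject -> NvAsm -> Cc,
# and the resulting object is linked like any other.
Main demo : main.c phong.vsh ;
```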
Subject: Re: Compiling objects from .vsh shader files
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 20 Nov 2002 18:06:08 +0100
Thanks, both to you and Arnt, I just ended up with something almost like
this, derived from the Yacc rules myself. The only problem with this
approach is that the stupid NVAsm uses a DWORD type which it expects to
be defined beforehand, so its output cannot compile standalone. Lame. In
the end I had to modify the nvasm.exe binary by hand to fix this brain damage.
Date: Wed, 20 Nov 2002 14:48:51 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: compile.c: compile_on(): Why search()?
| But
| when working on code usually only a few files are changed and need to
| be recompiled/linked. And then jam's overhead becomes significant.
One trick I did regularly back in the early days of Jam was to scope it
to just certain directories: my part of the code and the directories
where the Main (linking) rules were. I just had an otherwise empty
directory with a Jamfile that included (via SubInclude) directly the
Jamfiles of the interesting subdirectories.
That way I could go to that directory for quickie rebuilds and to the
root for a full scan.
I admit, it is having the developer do some of what Jam should do (figure
out how extensive the code changes are), but to my mind it is no more
unsettling than the developer being mindful of Jam keeping state.
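[Editor's note: a sketch of that layout, with made-up directory names, assuming the usual TOP/SubDir conventions.]

```
# quickie/Jamfile - an otherwise empty directory used for scoped rebuilds.
SubDir TOP quickie ;

# Pull in only the interesting Jamfiles: my code plus the linking dirs.
SubInclude TOP src mystuff ;
SubInclude TOP src apps ;
```

Running jam in quickie/ then reads and builds only those subtrees; running it at the root still does the full scan.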
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: compile.c: compile_on(): Why search()?
Date: Thu, 21 Nov 2002 10:28:14 +0100
I can't see any guarantee that a rebuild in that directory will cause
exactly the same actions as a rebuild in the root directory. Any
difference there would be a Bad Thing. Any undetected difference would
be a Very Bad Thing.
It also seems likely to need O(>1) human actions; the quickie rebuilds
would sometimes require changes to the quickie Jamfile.
IMNSHO it's philosophically worse. The developer knowing that jam keeps
a file in /tmp/jam is one thing; the developer regularly spending brain
cells on doing jam's work is quite another.
Subject: Re: compile.c: compile_on(): Why search()?
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 21 Nov 2002 10:53:08 +0100
Just to chip in. I'm running Jam on both Linux and Windows here, and it
seems to me the problem (as always) really lies in NTFS' horrible
performance and bad caching of metadata. On Linux with ext3, there is not
much of a problem.
I know it's not possible to have MS fix NTFS, but to me it seems like the
only right cure. If the alternative is to introduce an element of
uncertainty into the build process, I would recommend against doing it.
People should move to a decent file system instead.
Date: Thu, 21 Nov 2002 11:14:32 +0100 (MET)
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: Re: compile.c: compile_on(): Why search()?
I have to agree. I actually modified my jam copy to always read the whole
jamfile tree -- even when invoking it in a subdirectory -- to ensure
consistency (nevertheless building only the targets in the concerned
subtree and whatever they depend on). This increases the turnaround times
for the developers, but, after having experienced some unwanted effects, I
think it's worth it.
Exactly. What I like jam for particularly, is that the developer doesn't
need to meddle with build system issues other than extending or adding a
jamfile from time to time. A simple `jam [target]' does the trick for
them. No need to worry about dependencies or to fix things manually every so often.
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 21 Nov 2002 14:22:24 +0100
Subject: link.exe input line too long
under visual C 6:
Archive ..\..\engine\zstdlib\zstdlib.lib
z:\head\code\engine\zstdlib>lib /out:..\..\engine\zstdlib\zstdlib.lib
_zstdlib_.\..\engine\zstdlib\binarytrees.obj
..\..\engine\zstdlib\cchkpntlst.obj ..\..\engine\zstdlib\ccom.obj
..\..\engine\zstdlib\chunk.obj
..\..\engine\zstdlib\collisionresponse.obj ....
The input line is too long.
Any ideas of how to deal with this? I know it's not Jam's fault, but the
problem lies in all the absolute paths used. Is it possible to modify
the rules to CD to the source directory before calling link.exe?
From: Roger Lipscombe <RLipscombe@sonicblue.com>
Subject: RE: link.exe input line too long
Date: Thu, 21 Nov 2002 05:54:10 -0800
You can change the command line length from 996 to 10240 (change MAXLINE in
jam.h before building). This only works on NT/2K/XP -- Win95/98/ME really
do have a short command line length.
Alternatively, you can break your link command into multiple steps and use a
response file. See (for example) my page on the subject:
http://www.differentpla.net/~roger/devel/jam/misc/linker_command_length.html
Note that this code is against the 2.2 Jambase, but I think it'll work
cleanly on a 2.4 Jambase. You'll need a 'touch' command (e.g. the one from
Cygwin) for this to work properly.
Date: Thu, 21 Nov 2002 15:03:30 +0100 (MET)
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: RE: link.exe input line too long
An easier way would be to use an intermediate static library (Library
rule). The Archive actions use the piecemeal modifier.
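[Editor's note: concretely, the suggestion is to route most objects through Library and keep only a short final link; because the Archive actions are marked piecemeal, jam splits an over-long archive command into several invocations automatically. A sketch, with source names adapted from Jacob's transcript.]

```
# Archive is piecemeal, so lib.exe is invoked as many times as needed
# to keep each command line under MAXLINE.
Library zstdlib : binarytrees.c cchkpntlst.c ccom.c chunk.c ;

Main engine : main.c ;
LinkLibraries engine : zstdlib ;
```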
Subject: RE: link.exe input line too long
Date: Thu, 21 Nov 2002 15:44:54 +0100
From: "BROSSIER Florent" <F.BROSSIER@csee-transport.fr>
An idea is to put the lines into a temporary file and to use this file during the link.
In your Jamrules put the following lines:
NEWLINE = "
" ; # Used to break up long lines for echo to a file
actions Link bind NEEDLIBS {
    $(CP) nul: $(<:S=.tmp) > nul:
    echo.$(LINKFLAGS)>>$(<:S=.tmp)$(NEWLINE)
    echo.$(>)>>$(<:S=.tmp)$(NEWLINE)
    echo.$(NEEDLIBS)>>$(<:S=.tmp)$(NEWLINE)
    echo.$(LINKLIBS)>>$(<:S=.tmp)$(NEWLINE)
    $(LINK) /out:$(<) $(UNDEFS) @$(<:S=.tmp)
    IF x%KEEP_TMP%==x $(RM) $(<:S=.tmp)
}
Subject: RE: link.exe input line too long
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 21 Nov 2002 16:14:24 +0100
Cool. I just had to recompile Jam.exe back to a MAXLINE of 996.
Link.exe sucks. Jam.exe rules ;-)
Subject: RE: link.exe input line too long
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 21 Nov 2002 16:57:50 +0100
OK, not that cool anyway. Now my top-level link, which cannot be made
piecemeal, is too large to fit in the new 996 char limit.
So, I modified cmd_new to check for 996 when the rule allows piecemeal.
New version:
/* Bail if the result won't fit in MAXLINE */
/* We don't free targets/sources/shell if bailing. */
if( var_string( rule->actions, cmd->buf, (rule->flags & RULE_PIECEMEAL)? 996 : MAXLINE, &cmd->args ) < 0 )
{
cmd_free( cmd );
return 0;
}
I suggest this as a change to future releases, as it gives you the best of both worlds.
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 21 Nov 2002 18:19:48 +0100
Subject: why does jam not delete libs before rebuild?
it seems to me that the effect of
jam -a
and
jam clean ; jam
in a library subdir should be the same. But on NT, lib.exe seems to
append to the existing lib-file, resulting in lots of warnings and
probably a bad result in the end.
Why does jam not delete the .lib-file initially? How can I make it do so?
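[Editor's note: one way to get this behaviour is to attach a second action to the library target that deletes the stale file first; jam runs a target's actions in the order their rules were invoked, so the delete lands before Archive's append. A rough, NT-flavoured sketch: ZapLibrary and ZapOld are invented names, and this assumes the stock Jambase LibraryFromObjects.]

```
rule ZapLibrary         # ZapLibrary lib : objects ;
{
    local _l = $(<:S=$(SUFLIB)) ;
    ZapOld $(_l) ;                      # fires before Archive on the same target
    LibraryFromObjects $(<) : $(>) ;
}

actions quietly ZapOld
{
    if exist $(<) del $(<)
}
```

Since ZapOld only runs when the library is out of date, an up-to-date library is left untouched.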
Date: Thu, 21 Nov 2002 22:30:30 -0800 (PST)
Subject: Re: compile.c: compile_on(): Why search()?
From: "Christopher Seiwald" <seiwald@perforce.com>
Looks like I'm getting hammered by y'all on the header scan caching.
I'll take a closer look.
From: "Robert Cowham" <robert@vaccaperna.co.uk>
Date: Fri, 22 Nov 2002 15:27:50 -0000
Subject: Header scanning problem
I've been playing about with header scanning but have hit a problem.
I want to auto extract "include" files from a .dsp (VC++ project file) and
if any have changed then call MSDEV to rebuild the whole project.
Here are excerpts from my jam file.
VC_HDRPATTERN = "^SOURCE=\\.\\\\(.*)$" ;
# As an aside: it would be helpful to document the need for \\ to get it through to the regexp as \
rule MyHdrRule {
    local s = $(>) ;
    Depends $(<) : $(>) ;
    SEARCH on $(>) = $(<:D) ;
    # NoCare $(s) ;
}
rule VCMake {
    # VC++ DSP Project - copied from
    local dll ;
    MakeLocate $(<) : $(LOCATE_TARGET) ;
    dll = [ FDirName $(>:D) ReleaseMinDependency $(>:B) ] ;
    dll = $(dll:S=.dll) ;
    Clean clean : $(<) $(dll) ;
    HDRRULE on $(>) = MyHdrRule ;
    HDRSCAN on $(>) = $(VC_HDRPATTERN) ;
    HDRSEARCH on $(>) = $(<:D) . ;
    HDRGRIST on $(>) = $(HDRGRIST) ;
    Depends $(<) : $(dll) ;
    Depends $(dll) : $(>) ;
    VCMake1 $(dll) : $(>) $(dll) ;
}
rule MakeInstaller {
    # Copied from MainFromObjects
    local _s _t i ;
    # Add grist to file names
    # Add suffix to exe
    _s = [ FGristFiles $(>) ] ;
    _t = [ FAppendSuffix $(<) : $(SUFEXE) ] ;
    if $(_t) != $(<) {
        Depends $(<) : $(_t) ;
        NotFile $(<) ;
    }
    # make compiled sources a dependency of target
    Depends exe : $(_t) ;
    Depends $(_t) : $(_s) ;
    MakeLocate $(_t) : $(LOCATE_TARGET) ;
    Clean clean : $(_t) ;
    for i in $(_s) {
        switch $(i:S) {
            case .dsp :
                VCMake $(_t) : $(i) ;
        }
    }
    MakeInstaller1 $(_t) : $(_s[1]) ;
}
actions MakeInstaller1 { $(INSTALL_COMPILE) $(>) }
MakeInstaller FRED.exe : [ FDirName install FRED.iss ] [ FDirName src FRED.dsp ] ;
********************
So I am doing everything at top level (no SubDirs):
Jamfile
/install
FRED.iss
/src
safearray.H
*********************
.dsp structure includes lines like:
SOURCE=.\safearray.H
# End Source File
# Begin Source File
The VC_HDRPATTERN works fine and extracts out the included files, e.g.
safearray.H
The problem is that it can't find safearray.H which is sitting in the src
directory (same dir as .dsp).
I get the debug info below
Excerpt from -d8 output:
<<<<<<<<<<<<<<<<<<<<<<<<<<
make -- safearray.H
set SEARCH = src
get LOCATE
get SEARCH = src
search safearray.H
: src\safearray.H
search safearray.H
: safearray.H
set SEARCH = <============= Why this???
time -- safearray.H <============= Why not done a couple of lines above, where it looks like it found the file??
: missing
don't know how to make safearray.H
made+ nofind safearray.H
From: Steve Goodson <steve.goodson@mscsoftware.com>
Date: Fri, 22 Nov 2002 22:35:07 -0700
Subject: INCLUDES bug?
From my reading of the documentation, the statement
INCLUDES b : c ;
should mean that b itself does not depend on c, but anything that depends on
b also depends on c.
With the following Jamfile
actions Make { touch $(1) }
Make a ;
Make b ;
Depends a : b ;
INCLUDES b : c ;
My expectation is that jam would be able to make 'b', but would refuse to
make 'a' for lack of 'c'. Instead, jam refuses to make 'b' for lack of 'c'.
Is this a bug?
From: david.abrahams@rcn.com
Subject: Re: INCLUDES bug?
Date: Sat, 23 Nov 2002 15:03:47 -0500
The description of INCLUDES and LEAVES in the documentation seems to
have little or nothing to do with their implementation or actual
behavior AFAICT. In fact, I have never been able to get LEAVES to do
anything that makes sense. INCLUDES b : c, if you look at the
implementation, actually makes b depend on c in a weaker way than
Depends b : c, and it seems to be useful for suppressing warning
messages about independent targets when a rule builds two targets and
only one may be a dependency of what has to be built, for example when
you are building a DLL and a corresponding import library on Windows.
I would really like to see some clarification of both the intention
and actual behavior of these rules.
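[Editor's note: that DLL/import-library use might look like the sketch below; the rule and action names are invented, and the flags are the usual MSVC ones. The point is only that INCLUDES ties the two outputs of one action together without making one a buildable dependency of the other.]

```
rule LinkDll            # LinkDll foo.dll foo.lib : objects ;
{
    Depends $(<) : $(>) ;
    INCLUDES $(<[1]) : $(<[2]) ;   # one action emits both targets
    LinkDll1 $(<) : $(>) ;
}

actions LinkDll1
{
    $(LINK) /dll /out:$(<[1]) /implib:$(<[2]) $(>)
}
```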
Subject: Re: compile.c: compile_on(): Why search()?
From: Matt Armstrong <matt@lickey.com>
Date: Sat, 23 Nov 2002 21:44:28 -0700
It is worth mentioning that the header scan patches that both Craig
and I have advertised do not introduce any risk of a "bad" header
cache entry causing Jam to behave differently.
- The value of the target's HDRSCAN variable is stored in the
cache, so if it changes for a target the cache entry is
invalidated and the file rescanned.
- The cache entry is not used if the modification time of the
file is different from when last scanned.
- The cache entries are aged, so unused cache entries eventually
disappear (happens when files are moved or removed from a project).
- The cache is not used at all unless HCACHEFILE (a jam variable) is
set. Small projects that don't need it don't get it.
These all combine to make it painless. The header cache file need
never be deleted to "correct" a build problem or remove stale entries,
and the project's Jamrules can stick the file someplace where it won't
bother anybody (the build output dir, etc.).
P.S. I officially defer to Craig McPheeters' version of the patch,
as he has incorporated all the improvements I made to his code (most
of them dealt with the issues described above, but I forget the exact details).
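[Editor's note: given the last point above, turning the cache on should amount to a single Jamrules line; HCACHEFILE is the patch's variable, not stock jam's, and the file name here is arbitrary.]

```
# Consulted only by the patched jam; stock jam ignores it entirely.
HCACHEFILE = .jam-header-cache ;
```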
Subject: Re: link.exe input line too long
From: Matt Armstrong <matt@lickey.com>
Date: Sun, 24 Nov 2002 09:37:15 -0700
Not really, as 10240 is too big even under Win2k if the actions have
more than one line. I've found CMD.EXE randomly crashing if any given
line in the actions is > 1-2k, even though NT itself can handle
individual command line lengths of 10240 bytes.
I've been thinking about an extension to the actions syntax to allow
for the automatic creation of response files. E.g.
actions Cc {
$(CC) @{ -c -o $(<) $(CCFLAGS) $(CCDEFS) $(CCHDRS) $(>) }
}
would put everything within the @{ } into a response file
automatically. Jam takes care of naming, creating and deleting the
response file, so the Jam rules themselves can remain simple and
straightforward.
From: "Jan Mikkelsen" <janm@transactionware.com>
Subject: RE: link.exe input line too long
Date: Mon, 25 Nov 2002 07:22:17 +1100
I have added a similar extension. My syntax is slightly different;
I have special prefixes on expressions. @^ sends a single expression
to a file, or you can have numbered files if you need multiple files in
your action: @#0 -> @#9. I picked the sequences because they seemed
unlikely to appear anywhere else.
To have the linker read a response file, as in "link @C:/path/to/file", you would have
actions Link { $(LINK) ... @@^$(>) }
With numbered files, you can build the file from multiple expressions:
actions DoStuff {
$(DOSTUFF) @#0First$(SPACE)line$(SPACE)goes$(here)
@#0Second-file-line
@#0Other-stuff:$(RANDOM_VARIABLE)
}
This gets turned into an action like
dostuff /path/to/file
Where /path/to/file contains the result of the expressions in order.
I use multiple Jam generated response files for Java with manifest files
dynamically generated by Jam.
I've been meaning to get these changes (and others) into the public
repository for too long. I will submit them in the next day or so.
From: david.abrahams@rcn.com
Subject: Re: link.exe input line too long
Date: Sun, 24 Nov 2002 16:05:33 -0500
Why use a Jam language extension when you can build response-file
creation in Jam code? It seems as though we ought to leave language
extensions to things which you just can't do in Jam, or which are
massively less-expressive or less-efficient to do that way.
Subject: Re: INCLUDES bug?
From: Matt Armstrong <matt@lickey.com>
Date: Sun, 24 Nov 2002 16:35:44 -0700
This may in part be due to the fact that they aren't used in the stock Jambase.
From: david.abrahams@rcn.com
Subject: Re: INCLUDES bug?
Date: Sun, 24 Nov 2002 18:41:26 -0500
Yup, that was my guess, too. I bet they don't get much testing for
that reason, either ;-)
Date: Thu, 28 Nov 2002 09:48:12 +0100 (MET)
From: bruno.schulze@gmx.de
Subject: Simple question on list combination
This is my first contribution to this mailing list, so let me first say Hi
to everybody here.
My problem is as follows: I have two lists X and Y which are supposed to be
parallel (i.e. same number of elements). First of all, how can I make sure
that X and Y have the same number of elements? My second question is probably
not so simple: From the two lists I want to generate another list in the
following way
$(X[1]) $(Y[1]) $(X[2]) $(Y[2]) ... $(X[n]) $(Y[n])
Unfortunately, $(X) $(Y) does not do the trick.
Date: Thu, 28 Nov 2002 12:10:24 +0300
From: Vladimir Prus <ghost@cs.msu.su>
Subject: Re: Simple question on list combination
Probably, steal the "sequence.length" algorithm from Boost.Build?
http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/*checkout*/boost/boost/tools/build/new/sequence.jam?rev=HEAD&content-type=text/plain
I don't know of any way to get this behaviour using expansion. You'd better
write a rule for this purpose.
rule interleave ( list1 * : list2 * ) {
    local result ;
    while $(list1) {
        result += $(list1[1]) $(list2[1]) ;
        list1 = $(list1[2-]) ;
        list2 = $(list2[2-]) ;
    }
    return $(result) ;
}
This uses the rule-arguments style of Boost.Jam; you'd have to
remove the argument list for Perforce Jam. BTW, you probably won't
need to check the sequence length separately, as this rule can
do the check.
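[Editor's note: a call site under Boost.Jam would look like the following; this assumes the rule is finished off with a `return $(result) ;`, since without a return the caller receives an empty list.]

```
X = a b c ;
Y = 1 2 3 ;
# With interleave as defined above, Z becomes: a 1 b 2 c 3
Z = [ interleave $(X) : $(Y) ] ;
ECHO $(Z) ;
```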
Date: Thu, 28 Nov 2002 11:49:17 +0100 (MET)
From: bruno.schulze@gmx.de
Subject: Re: Simple question on list combination
Thanks a lot for your great help. I really like your interleave rule, makes
a lot of sense to me. It's also good to know that there are knowledgeable
people on this list.
From: "Robert Cowham" <robert@vaccaperna.co.uk>
Subject: RE: Header scanning problem
Date: Sat, 30 Nov 2002 14:54:04 -0000
Just to say that I found the problem.
My HDRPATTERN was also matching the "\n" in the file.
As a suggestion, perhaps the debug lines should put quotes around the
filenames when found?
Date: Mon, 2 Dec 2002 09:11:39 -0800 (PST)
Subject: Re: INCLUDES bug?
From: "Christopher Seiwald" <seiwald@perforce.com>
This INCLUDES problem is indeed a bug. It properly handles the relationship
(as INCLUDES) during the dependency analysis, but treats the files as direct
dependencies (like Depends) during the actual build. So if an included file
fails to build, it thinks it can't build the including file.
This is rarely a problem with normal source files including others, but still
deserves to be fixed. I'm toying with two possible fixes, one that is a fairly
light band-aid and another that is a deeper reworking of INCLUDES handling.
If I'm happy with either of them, I'll put the attempt in my path in the
public depot.
http://public.perforce.com:8080/@md=d@//guest/christopher_seiwald/?ac=83
As to the later comment about the LEAVES rule, I'll admit it's a mystery.
Unfortunately, it antedates Perforce, leaving me without a clue as to why it was added.
Date: Tue, 03 Dec 2002 14:35:15 +0300
From: Vladimir Prus <ghost@cs.msu.su>
Subject: Re: INCLUDES bug?
I've a related issue.
Suppose that there are two generated files a.cpp and b.h, and a.cpp includes
b.h. Naturally, I'd like b.h to be generated before a.cpp is compiled.
Jambase accomplishes this using the "first" target:
Depends all : first ... ;
Depends first : b.h ;
But this is not 100% satisfactory. If you add the "-j" option to the command
line, the correct build order is not guaranteed, and this is indeed a problem for me.
An optimal solution would be to run 'headers' on generated targets *after*
they are generated. This will not change the fate of any target, only adjust the
build order. I even started implementing this for Boost.Jam, but realized that
separate handling of INCLUDES and Depends is needed, as you say above.
Is there any approximate date when you'll make the change?
P.S. In fact, while we can run 'headers' after a file is generated, we can't
easily avoid running 'headers' in make0 on files that would be generated.
So, there will be two invocations. But since it's for generated cpp files
only, it's not very important.
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: INCLUDES bug?
Date: Tue, 3 Dec 2002 13:06:25 +0100
If I understand that correctly, the problem is not really the build
order. The build order merely makes the problem show up. Even if jam
builds a.cpp "after" b.h, there isn't any guarantee that building of
b.h has been finished by the time use of a.cpp starts.
You really need to express, explicitly, that a.cpp depends on b.h: that
b.h must exist by the time a.cpp is used. Build order doesn't do that;
only dependencies do.
In that case, jam can't run anything else until it has built any
generated headers, because it does not know when actions are runnable. Right?
What, then, if a.cpp does not exist yet, and is (will be) the only file
that includes b.h? a.cpp cannot be generated before b.h is generated
because jam doesn't know the appropriate action order, but there's no
reason to generate b.h at all (yet).
I don't think jam's header scanning can handle cases like this. You
simply must express such dependencies in the Jamfile.
What jam could practically do is detect the error: If a file has been
header-scanned and is subsequently generated, jam can detect that and
give an error. Or, I suppose, jam could simply redo the entire build... yuck.
I believe the example I just gave shows a change of fate for b.h.
Date: Tue, 03 Dec 2002 15:30:03 +0300
From: Vladimir Prus <ghost@cs.msu.su>
Subject: Re: INCLUDES bug?
That's what I meant. I'd like to have explicit dependencies on b.h.
I don't follow you. If "a.cpp" is to be updated, then dependents will
wait for it. I suggest that after it's physically updated (i.e. the
action is run), you run headers(), which adds some includes. So
dependents will have to wait a little longer. But other targets are not affected.
a.cpp *can* be generated before b.h! It cannot be compiled before b.h is generated.
I'm not fond of this idea. Boost.Build has a rather high-level
interface and explicit dependencies like that would look wrong.
Does it mean that you cannot generate cpp files?
I still don't understand. Can you elaborate?
Date: Tue, 03 Dec 2002 15:49:25 +0300
From: Vladimir Prus <ghost@cs.msu.su>
Subject: Re: INCLUDES bug?
Agh.. you mean that scanning of the generated a.cpp may result in a need
to update b.h -- i.e. changing the fate of a dependency of a.cpp?
That's right, but:
1. This cannot happen in Boost.Build. So, the change I'm planning
would still be a win for us.
2. I think it's possible to support this case even in Jam. You'd
have to artificially run make0 on all generated headers. It will
determine whether they should be updated.
During actual building, if you need to use the generated header,
you'll use it. Fate (in the sense of _target.fate variable) is
already computed. If the generated header is not needed, you won't
build it. The effort spent on determining fate (stat, etc.) will be
wasted, but again, there are few generated headers.
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: INCLUDES bug?
Date: Tue, 3 Dec 2002 15:27:20 +0100
Good. We agree on that. I presume that we also agree that -j is required.
I see... this doesn't change build order, IMO. It adds to the dependency
graph (which may indirectly affect the build order).
You're right. My bad.
I'm not fond of it either. On the other hand I _really_ like the simple design of jam:
1. Decide what to do.
2. Do it.
Generated files that include or depend on other generated files don't
fit well into that model.
No, but I think it would mean that you can't generate .h files, which
is almost as bad. Reductio ad absurdum, I suppose.
Perhaps a sweeping generalization is better: A function in jam that
changes fate on its first invocation should not be trusted to never do
that on a subsequent invocation.
Date: Tue, 03 Dec 2002 20:56:43 +0300
From: Vladimir Prus <ghost@cs.msu.su>
Subject: Re: INCLUDES bug?
You're right. But since I'd like to have generated headers work transparently,
I'm very likely to break that simple design.
In fact, there's another problem. The generated header target (on which the
action is called) and the target created during the header scanning are
different. One might have
<src/bin>b.h
and
<src#include_path1#include_path2>b.h
For jam, the fact that they are bound to the same location is irrelevant.
So, dependencies on generated headers really don't work. I've worked
around that in Boost.Jam, and would certainly like to solve it; it's the
last problem I know of.
OK, I'll think about this more when implementing generated targets scanning.
Date: Tue, 3 Dec 2002 10:47:09 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: INCLUDES bug?
| Suppose that there are two generated files a.cpp and b.h, and a.cpp includes
| b.h. Naturally, I'd like b.h to be generated before a.cpp is compiled.
We do this all the time, simply by having the rule that generates a.cpp
explicitly state the INCLUDES of b.h. If a.cpp is generated from some other
source file, scan _that_ file for includes and propagate them down (using
the special variables HDRRULE and HDRSCAN).
I'm rather averse to interleaving dependency generation and build execution.
What I acknowledge is weak is that the header scanning only really works with
the include syntax of a few languages. I also acknowledge that writing
rules that set and use HDRRULE and HDRSCAN can be intricate. We hope to have
publishable examples at some point.
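A minimal Jamfile sketch of the first approach described above, stating the
INCLUDES of the generated header explicitly. The GenSource/GenHeader rules
are hypothetical generator rules, not part of any shipped Jambase; only the
INCLUDES line is the point here:

```
# Both files are generated; stating the include relationship up front
# lets jam build b.h before anything that compiles a.cpp.
GenSource a.cpp : a.cpp.in ;     # hypothetical generator rule
GenHeader b.h : b.h.in ;         # hypothetical generator rule

INCLUDES a.cpp : b.h ;           # a.cpp includes the generated b.h
Main myprog : a.cpp ;
```

Since INCLUDES makes everything that depends on a.cpp also depend on b.h,
the object compiled from a.cpp will wait for b.h to be generated.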
Date: Tue, 03 Dec 2002 22:24:10 +0300
From: Vladimir Prus <ghost@cs.msu.su>
Subject: Re: INCLUDES bug?
This seems non-trivial to implement in a generic way, but I'll think about it.
I'm rather interested in your opinion on the other issue I've raised:
the fact that the generated header target that is created is different
from the target created during dependency scanning. (In particular,
the latter contains include paths in its grist for Boost.Build.)
This results in a missed dependency. Do you think it's a problem?
From: david.abrahams@rcn.com
Subject: Re: INCLUDES bug?
Date: Tue, 03 Dec 2002 14:53:27 -0500
You might also consider just how far we'd need to go to make the
design general. Suppose you have a chain of executables, each of which
is produced by compiling the output of the previous one? How many
separate scanning passes are needed?
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 04 Dec 2002 14:29:32 +0100
Subject: Linking a library from libraries
I have this steep directory tree, and would like some of the small
libraries generated by Jam to be linked into larger ones.
Can I do this in a way similar to LinkLibraries?
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 06 Dec 2002 15:52:13 +0100
Subject: ALL_LOCATE_TARGET
is it possible to make jam place all final targets (libs & exes) in the
'out' directory, but leave temporary objects in the source folders?
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: ALL_LOCATE_TARGET
Date: Fri, 6 Dec 2002 16:17:20 +0100
In jam, you do such things by setting variables on targets. For example,
you can write a rule that calls Library and then sets LOCATE_TARGET to
the 'out' directory. Any library generated using that rule behaves the
way you want.
Or you could modify the rules in Jambase. Or you could set the variable
on your library/executable in Jamfile, just next to the Main/Library
call. Whatever makes most sense in your particular situation.
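A rough sketch of the wrapper-rule idea, assuming a stock Jambase. The
OutLibrary name and the 'out' location are made up, and whether to use
MakeLocate on the archive (as here) or set the variable directly depends on
your Jambase:

```
# Build the library as usual, then relocate only the final archive
# into $(TOP)/out; objects stay wherever Object put them.
rule OutLibrary
{
    Library $(<) : $(>) ;
    MakeLocate $(<:S=$(SUFLIB)) : [ FDirName $(TOP) out ] ;
}

OutLibrary mylib : a.c b.c ;
```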
From: "Ingo Weinhold" <bonefish@cs.tu-berlin.de>
Date: Sat, 07 Dec 2002 14:26:33 EDT (-0200)
Subject: ObjectHdrs and HDRSEARCH
apparently the ObjectHdrs rule doesn't adjust HDRSEARCH, so header
search directories added this way won't be searched when binding
header files. If the actual source file is passed to the rule, this can
easily be fixed. Jambase.html explicitly states that the target's
suffix is ignored, though -- not that it would do much harm to set
HDRSEARCH on an object file, for instance... :-P
Subject: Re: Linking a library from libraries
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 09 Dec 2002 19:32:36 +0100
My problem is with libs such as freetype, which I am linking
statically, with a directory layout like this:
SubDir TOP libraries freetype ;
HDRS += $(TOP)/libraries/freetype/include ;
Library freetype :
src/autohint/autohint.c
src/cff/cff.c
src/base/ftbase.c
src/base/ftdebug.c
src/base/ftglyph.c
src/base/ftinit.c
src/base/ftmm.c
src/base/ftsystem.c
src/raster/raster.c
src/sfnt/sfnt.c
src/smooth/smooth.c
src/truetype/truetype.c
src/cid/type1cid.c
src/type1/type1.c
src/winfonts/winfnt.c
src/psnames/psnames.c
;
This works fine, except when using ALL_LOCATE_TARGET. I would not mind
putting Jamfiles in each subdir, but I would prefer the end result to be
just one static library.
Any ideas on how to achieve that?
Date: Tue, 10 Dec 2002 00:59:25 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: INCLUDES bug?
| With the following Jamfile
|
| actions Make
| {
| touch $(1)
| }
|
| Make a ;
| Make b ;
|
| Depends a : b ;
| INCLUDES b : c ;
|
| My expectation is that jam would be able to make 'b', but would refuse to
| make 'a' for lack of 'c'. Instead, jam refuses to make 'b' for lack of 'c'.
| Is this a bug?
I did the deeper reworking; from the RELNOTES:
Fix 'includes' support so that included files aren't treated as
direct dependencies during the command execution phase. If an
included file failed to build, make1() would bypass the including
file. Now make0() appends each child's 'includes' onto its own
'depends' list, eliminating 'includes'-specific code in make0()
and make1().
It's bundled with a bunch of other changes since 2.4 and can be seen at:
http://public.perforce.com:8080/@md=d@//guest/christopher_seiwald/?ac=83
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Date: Wed, 11 Dec 2002 11:45:20 +0100
Subject: Jam & DigitalMars compiler
Hi, I have a stupid question on using Jam. I'm using the DigitalMars C++
compiler (formerly known as Symantec C++) on W2K, but even getting Jam
to work with it for a "hello world" causes some problems.
1. If I just run Jam I get an error message: On NT set BCCROOT, MSVC or
MSVCNT... Well, I don't use any of these compilers. So what now? I tried
setting MSVCNT to the root of the DMC compiler.
2. My Jamfile looks like this:
C++ = sc ;
Main test : test.cpp ;
3. This is the output I get when running jam -n -d3
make -- all
time -- all: unbound
make -- shell
time -- shell: unbound
make -- first
time -- first: unbound
made stable first
made stable shell
make -- files
time -- files: unbound
make -- first
made stable files
make -- lib
time -- lib: unbound
make -- first
made stable lib
make -- exe
time -- exe: unbound
make -- first
make -- test.exe
time -- test.exe: missing
make -- test.obj
time -- test.obj: missing
make -- test.cpp
time -- test.cpp: missing
don't know how to make test.cpp
made+ nofind test.cpp
made+ nomake test.obj
made+ nomake test.exe
made nomake exe
make -- obj
time -- obj: unbound
make -- first
make -- test.obj
made nomake obj
make -- first
made nomake all
...found 10 target(s)...
...can't find 1 target(s)...
...can't make 2 target(s)...
...skipped test.obj for lack of test.cpp...
...skipped test.exe for lack of test.obj...
...skipped 2 target(s)...
So any idea where the problem is? Robert
Date: Wed, 11 Dec 2002 17:23:44 -0800
From: rmg@perforce.com
Subject: Jam 2.5 Release Plan and Call For Bug Fixes
This is to announce that we intend to do a new Jam release - 2.5 -
prior to the end of this month.
Why a release now, and why without any prior fanfare? Jam is now used to
build Perforce on all platforms for which we ship Perforce client
software. The first such release will be Perforce 2002.2 (currently in
beta, and hopefully
to be declared fit for general availability this month). Since the
"internal" version of Jam used to build Perforce 2002.2 includes some
necessary changes since Jam 2.4 was released, we've decided to:
- Bring the Public Depot sources up-to-date with the "internal"
Jam sources. If you're interested in what these are, keep an eye
on the RELNOTES file in the Public Depot:
ftp://public.perforce.com/public/jam/src/RELNOTES
- Make this call for additional bug fixes. (In my first quick survey of Jam
changes in the //guest/... branches, I didn't spot any such fixes;
I'll try to have one more careful look before finalizing the release.)
If you know of any fixes that are important to you, please bring
them to my attention as soon as possible.
- Release the result as Jam 2.5 (source code) in the Public Depot,
to match the Jam 2.5 binaries which will become available for
download from the Perforce web site.
From: xguo@attbi.com
Date: Sun, 15 Dec 2002 06:36:43 +0000
Subject: spaces in path
This is my first post, sorry if this has been discussed many times before.
I am trying to use jam on Windows XP. My Visual Studio is installed under
"c:\program files\microsoft Visual Studio\VC7", but if I set VISUALC (I use
ftjam) to the path above, jam will look for libraries in c:\program\lib;
obviously the part after the first space has been omitted.
Is there any way to get around this? I mean without modifying the jambase
file, since this is really a BASIC requirement and I believe Jam is designed for
better portability.
From: david.abrahams@rcn.com
Date: Mon, 16 Dec 2002 08:58:44 -0500
Subject: Documentation bug?
The docs say:
:E=value Assign value to the variable if it is unset.
That can't be right, can it? Shouldn't it be
:E=value Use value instead if the variable is unset.
Subject: Re: Jam 2.5 Release Plan and Call For Bug Fixes
From: Matt Armstrong <matt@lickey.com>
Date: Mon, 16 Dec 2002 15:46:25 -0700
At the risk of repeating myself:
This bug causes Jam to attempt to header scan object files, library
archives, directories, and other random stuff.
To ease the potential integration into the mainline, I just integrated
the recent mainline to
//guest/matt_armstrong/jam/hdrscan_on_target_fix/...
Date: Wed, 18 Dec 2002 16:23:16 +0100
From: boga@mac.com
Subject: Re: Jam 2.5 Release Plan and Call For Bug Fixes
I've posted two perforce jobs into perforce public depot:
revursiveontarget:
This is a generalization of matt_armstrong's bug:
001481.html
IMHO this is a very ugly bug (Matt's case is the worst).
I have a fix that's in some ways more general than Matt's fix (but not
100% complete); I'll check that in tomorrow.
extraspacesinactions
This is a minor problem, see the bug description for sample file to reproduce/fix.
From: david.abrahams@rcn.com
Date: Wed, 18 Dec 2002 16:27:04 -0500
Subject: exec.nt multiple process patch
This patch (to the Boost.Jam mainline) is fully described below. It
fixes a problem which is almost certainly present in Perforce Jam as
well. Please let me know if you'd like the full source to our execnt.c.
From: "Anichini, Steve" <Sanichini@midwaygames.com>
Subject: exec.nt multiple process patch
Date: Wed, 18 Dec 2002 13:57:07 -0600
The basic gist of the solution is to tag each temp file with the process ID
of the process; that way, if two simultaneous processes are running they will
not collide. There is also code to convert the values in TEMP and TMP to
short paths with no spaces (I can't remember exactly what the problem was,
but it was causing things to blow up on some people's machines).
I last sync'd up with the jam in boost 1.29
BTW, ignore the modifications I made to maxline() - I know the values it
returns are incorrect. We prefer to let the command interpreter catch a line
that is too long, as it gives us better feedback as to where the problem is
than jam did. Maybe that's changed in the last few releases of bjam, but I
haven't had time to revisit it.
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Subject: RE: Jam & DigitalMars compiler
Date: Thu, 19 Dec 2002 09:20:13 +0100
Sorry, looks like I forgot to send it to the list. Robert
Hi, OK, then there shouldn't be a problem, as all my files are in the current directory.
I haven't used this rule.
Ok, I have done this. Here is the result of running JAM as is:
or Microsoft directories.
Hmm... As I said, I'm not using any of these compilers. So I set MSVCNT to
the root of the DigitalMars compiler and tried again. Here is the quite long output:
d:\develop\dm\lib\oldnames.lib d:\develop\dm\lib\kernel32.lib
]*[<"]([^">]*)[">].*$
]*[<"]([^">]*)[">].*$
make -- all
time -- all: unbound
make -- shell
time -- shell: unbound
make -- first
time -- first: unbound
made stable first
made stable shell
make -- files
time -- files: unbound
make -- first
made stable files
make -- lib
time -- lib: unbound
make -- first
made stable lib
make -- exe
time -- exe: unbound
make -- first
make -- test.exe
time -- test.exe: missing
make -- test.obj
time -- test.obj: missing
make -- test.cpp
time -- test.cpp: missing
don't know how to make test.cpp
made+ nofind test.cpp
made+ nomake test.obj
made+ nomake test.exe
made nomake exe
make -- obj
time -- obj: unbound
make -- first
make -- test.obj
made nomake obj
make -- first
made nomake all
...found 10 target(s)...
...can't find 1 target(s)...
...can't make 2 target(s)...
...skipped test.obj for lack of test.cpp...
...skipped test.exe for lack of test.obj...
...skipped 2 target(s)...
As you can see it's empty. Does this indicate the local directory?
Hopefully the output helps the JAM gurus here to solve my "little"
problem... Robert
From: "Baruch Sterin" <baruch_sterin@hotmail.com>
Subject: Re: Jam 2.5 Release Plan and Call For Bug Fixes
Date: Thu, 19 Dec 2002 12:06:11 +0200
There is a problem with Jam and AIX version 4.3 and above. It is an old
issue reported by Randy Roesler about two and a half years ago.
From: "Robert M. Muench" <robert.muench@robertmuench.de>
Subject: RE: Jam & DigitalMars compiler
Date: Thu, 19 Dec 2002 14:27:00 +0100
Hi, hmm... on Windows 2000? IIRC it's not case-sensitive about filenames.
Is JAM case-sensitive? What's really strange here is that I can remember
getting JAM to run mostly out-of-the-box... I really don't know where the
problem comes from. Robert
Subject: Re: Jam 2.5 Release Plan and Call For Bug Fixes
Date: Thu, 19 Dec 2002 08:23:21 -0800
From: rmg@perforce.com
Thanks for bringing this to our attention. I'll make sure this gets
recorded as a job in the Public Depot.
From: Badari Kakumani <badari@cisco.com>
Date: Thu, 19 Dec 2002 19:36:35 -0800
Subject: Re: Interested in "incremental" builds....
curious if anyone has got something like the below working...
not sure what is blocking jam from allowing this.
Though jam binds the targets at the beginning to certain
filesystem locations, can it not rebind targets on the fly at the
time of update (to `local' directories as specified by vpath)?
Date: Fri, 20 Dec 2002 11:03:10 +0100
From: <boga@mac.com>
Subject: New lexical scanner in Jam 2.5?!
I'd like to see a new (optional) lexical scanner in 2.5.
It should accept the
X = "test.c";
MyRule $(X);
and maybe:
MyRule $(X):$(X);
if ($(X)==""){
}
This shouldn't be the default, because it would break existing jamfiles.
I've implemented such a scanner in //guest/miklos_fazekas/jamnewlex/.
Currently it only gives a warning when it detects anything that's not
compatible with the old lexical scanner. Adding an option to
enable/disable it would be trivial.
From: Robert Love <Robert.Love@WallStreetSystems.com>
Date: Fri, 20 Dec 2002 06:35:23 -0500
Subject: Using Jam with Visual Basic
We have recently purchased user licenses for Perforce and are now in a
position to begin looking at improving our current processes. At present our
Visual Basic developers build all of their own code, but as the Software
Management Team we need to take ownership of the build process. We are
currently trying to find the best tool for the job and want to see if there
are other tools that we should be looking at.
The important thing is that we don't want to just use Jam if there is a more
suitable tool.
Subject: Re: Re: Jam 2.5 Release Plan and Call For Bug Fixes
From: Matt Armstrong <matt@lickey.com>
Date: Fri, 20 Dec 2002 09:44:35 -0700
This has been fixed in //public/jam/src recently (change 2529).
From: Matt Armstrong <matt@lickey.com>
Date: Fri, 20 Dec 2002 09:48:10 -0700
Subject: two more jam fixes for consideration
//guest/matt_armstrong/jam/nt_msvcdir_fix/...
Years ago, Microsoft stopped using the environment variable MSVCNT
and switched to MSVCDIR (and MSVCDir). This branch updates
Jambase and some docs to reflect that. The vcvars32.bat file is
still provided with Visual C++ (at least as of 6.0) and sets
MSVCDir to the Windows short name of that dir.
There are multiple p4 changes, as this has been in the depot a
while and I've integrated from the mainline periodically.
//guest/matt_armstrong/jam/fix/no_dot_in_path/...@2545
Jam's Jamfile still requires . to be in PATH to build. This
change fixes that. It is simple:
==== //guest/matt_armstrong/jam/fix/no_dot_in_path/Jamfile#2 (text) ====
@@ -62,7 +62,7 @@
if $(YACC) && $(SUFEXE) = ""
{
- GenFile jamgram.y jamgramtab.h : yyacc jamgram.yy ;
+ GenFile jamgram.y jamgramtab.h : $(DOT)$(SLASH)yyacc jamgram.yy ;
}
From: "Robert Cowham" <robert@vaccaperna.co.uk>
Subject: RE: Using Jam with Visual Basic
Date: Thu, 26 Dec 2002 12:50:03 -0000
I have implemented a jamfile for building a system which includes VB and
VC++ DLLs.
For the VB bit, it just calls VB.exe (with appropriate command flags) to do
the building, but the jam rules read the .vbp to find dependencies, so jam
knows if vb.exe needs to be called or not...
Date: Mon, 30 Dec 2002 11:14:39 -0800
From: rmg@perforce.com
Subject: jam2.5, release candidate 1, now available
Jam 2.5rc1 (release candidate 1) is now available.
We've just finished rolling what changes we can into jam to make up
the 2.5 release. It is available at:
ftp://public.perforce.com/public/jam/jam-2.5.tar
ftp://public.perforce.com/public/jam/jam-2.5.zip
Please see the RELNOTES file in the release for the list of changes
since Jam 2.4:
ftp://public.perforce.com/public/jam/src/RELNOTES
As always: Thanks again to everybody who's contributed to Jam.
We are very appreciative. Keep it up!
From: Matt Armstrong <matt@lickey.com>
Date: Mon, 30 Dec 2002 12:49:53 -0700
Subject: [bug fixes] some problems Purify turned up
There are 3 separate issues that turned up when I ran my lightly patched
version of the current release candidate of jam under Purify. These
are all present in "stock" jam as well.
1) var_expand() in expand.c references uninitialized memory. The
function is passed an 'in' pointer and an 'end' pointer but will
reference memory past 'end'. The fix is to test the length of the
string before referencing the memory.
@@ -84,7 +84,11 @@
/* This gets alot of cases: $(<) and $(>) */
+#ifdef OPT_FIX_EXPAND_UMR
+ if( end - in == 4 && in[0] == '$' && in[1] == '(' && in[3] == ')' )
+#else
if( in[0] == '$' && in[1] == '(' && in[3] == ')' && !in[4] )
+#endif
{
switch( in[2] )
{
2) file_archscan() in fileunix.c passes a non-terminated string to
sscanf(). The ar_hdr.ar_date and ar_hdr.ar_size strings are not
'\0' terminated (the whole struct is a printable character string,
but the struct itself is not terminated). It seems that Solaris'
sscanf() does a strlen() on the passed string, and so strlen()
references uninitialized memory as it runs past the end of the ar_hdr
looking for 0. The fix is to look for the first space in
ar_hdr.ar_date and ar_hdr.ar_size and set that to 0 before calling sscanf().
@@ -228,9 +228,21 @@
long lar_size;
/* Get date & size */
+#ifdef OPT_FIX_AR_UMR
+ {
+ char *p;
+ if ( (p = strchr(ar_hdr.ar_date, ' ')) )
+ *p = 0;
+ sscanf(ar_hdr.ar_date, "%ld", &lar_date);
+ if ( (p = strchr(ar_hdr.ar_size, ' ')) )
+ *p = 0;
+ sscanf( ar_hdr.ar_size, "%ld", &lar_size );
+ }
+#else
sscanf( ar_hdr.ar_date, "%ld", &lar_date );
sscanf( ar_hdr.ar_size, "%ld", &lar_size );
+#endif
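The same pattern in isolation, as a standalone sketch; the field contents
and the helper name below are illustrative, not jam's actual code:

```c
#include <stdio.h>
#include <string.h>

/* ar header fields such as ar_date are fixed-width, space-padded, and
 * not NUL-terminated; terminating at the first space makes them safe
 * to hand to sscanf(). This helper is illustrative, not from jam. */
static long ar_field_to_long(char *field)
{
    char *p = strchr(field, ' ');   /* fields are space-padded */
    long v = 0;

    if (p)
        *p = 0;                     /* NUL-terminate in place */
    sscanf(field, "%ld", &v);
    return v;
}
```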
3) file_archscan() in fileunix.c can pass uninitialized garbage to the
callback function. The code does not guarantee that the 'lar_name'
is initialized, and happily passes it on. This shows up in Solaris
where it seems the first archive entry always has the name "/".
The fix is to always at least initialize lar_name[0] = 0 and to skip
calling the callback when the name is empty.
@@ -278,6 +290,12 @@
*dst++ = *src++;
*dst = 0;
}
+#ifdef OPT_FIX_AR_UMR
+ else
+ {
+ lar_name[0] = 0;
+ }
+#endif
/* Modern (BSD4.4) long names: if the name is "#1/nnnn",
** then the actual name is the nnnn bytes after the header.
@@ -293,12 +311,18 @@
/* Build name and pass it on. */
+#ifdef OPT_FIX_AR_UMR
+ if (lar_name[0]) {
+#endif
if ( DEBUG_BINDSCAN )
printf( "archive name %s found\n", lar_name );
sprintf( buf, "%s(%s)", archive, lar_name );
(*func)( closure, buf, 1 /* time valid */, (time_t)lar_date );
+#ifdef OPT_FIX_AR_UMR
+ }
+#endif
offset += SARHDR + ( ( lar_size + 1 ) & ~1 );
lseek( fd, offset, 0 );
From: Matt Armstrong <matt@lickey.com>
Date: Mon, 30 Dec 2002 14:24:21 -0700
Subject: [bug] deep INCLUDES are broken
The current release candidate of jam seems to have a bug in INCLUDES.
The following snippet demonstrates the problem (it is derived from a
real build system that generates some header files on the fly). When
run, there are two problems:
(1) D is touched before C, despite D including C.
(2) A and B are not touched at all, despite C including B and B including A.
rule Touch {
Clean clean : $(<) ;
}
actions Touch { touch $(1) }
Touch A ;
Touch B ;
Touch C ;
Touch D ;
INCLUDES B : A ;
INCLUDES C : B ;
INCLUDES D : C ;
Depends all : D ;
% jam -v
Jam 2.5rc1. OS=SOLARIS. Copyright 1993-2002 Christopher Seiwald.
% jam
...found 11 target(s)...
...updating 4 target(s)...
Touch D
Touch C
...updated 2 target(s)...
squeaker% jam -v
Jam 2.4. OS=LINUX. Copyright 1993-2002 Christopher Seiwald.
squeaker% jam
...found 11 target(s)...
...updating 4 target(s)...
Touch A
Touch B
Touch C
Touch D
...updated 4 target(s)...
Subject: Re: [bug] deep INCLUDES are broken
From: Matt Armstrong <matt@lickey.com>
Date: Tue, 31 Dec 2002 12:24:32 -0700
I realize now that (1) isn't a problem, since D does not depend on C.
(2) is a problem, though. I've submitted a candidate fix for it in change
2573 in the public depot:
Change 2573 by matt_armstrong@squeaker-perforce-guest on 2002/12/31 11:16:37
Fix INCLUDES by copying the include list of each target's
included file to itself. So this:
INCLUDE B : A ;
INCLUDE C : B ;
implies:
INCLUDE C : A ;
So then, if you have:
Depends D : C ;
Both these will be implied:
Depends D : B ;
Depends D : A ;
(previously D would not depend on A).
Affected files ...
... //guest/matt_armstrong/jam/fix/includes/make.c#2 edit
Differences ...
==== //guest/matt_armstrong/jam/fix/includes/make.c#2 (text) ====
@@ -270,6 +270,8 @@
/* Ignore circular deps: headers include themselves a lot. */
+ incs = 0;
+
for( c = t->includes; c; c = c->next )
{
if( DEBUG_Depends )
@@ -278,8 +280,17 @@
if( c->target->fate == T_FATE_INIT )
make0( c->target, p, depth + 1, counts, anyhow );
+
+ for( d = c->target->includes; d; d = d ->next ) {
+ incs = targetentry( incs, bindtarget( d->target->name ) );
+ }
}
+ /* Add all the includes of our includes to our direct
+ * includes. */
+
+ t->includes = targetchain( t->includes, incs );
+
/* Step 3c: add dependents' includes to our direct dependencies */
incs = 0;
Subject: Re: [bug] deep INCLUDES are broken
From: Matt Armstrong <matt@lickey.com>
Date: Tue, 31 Dec 2002 15:13:01 -0700
To continue this conversation with myself... :-)
This fix is not suitable. When building jam itself, it causes well
over 5 million calls to 'targetentry', each consuming memory. Jam
consumes nearly 200 megs to build itself given the /usr/include
structure on Solaris, and slows from 0.164 secs running time (for
jam 2.4) to 10 seconds.
I attempted a fix where make0 would not append a new target if it were
already in the target's dependency or includes list. This reduces the
memory consumption, but still makes jam take 2.5 seconds to figure out
how to build itself (16 times slower than jam 2.4).
From: Matt Armstrong <matt@lickey.com>
Date: Wed, 01 Jan 2003 22:11:41 -0700
Subject: [bug] "updated" actions
I've found and isolated another bug in the current release candidate of jam 2.5.
Actions flagged with "updated" won't run if their target is missing
but their sources are already built.
The following Jamfile demonstrates the problem:
rule Update {
Depends $(<) : $(>) ;
}
actions updated Update { echo "$(<)" >> $(<) }
actions Touch { touch $(<) }
Touch A ;
Update B : A ;
Depends all : B ;
In the session quoted below, you can see that jam 2.5rc1 fails to
build B if A is already present. Jam 2.4 builds it fine.
% jam -v
Jam 2.5rc1. OS=LINUX. Copyright 1993-2002 Christopher Seiwald.
% ls
Jamfile
% jam
...found 9 target(s)...
...updating 2 target(s)...
Touch A
Update B
...updated 2 target(s)...
% rm B
% jam
...found 9 target(s)...
...updating 1 target(s)...
...updated 1 target(s)...
% /usr/bin/jam -v
Jam 2.4. OS=LINUX. Copyright 1993-2002 Christopher Seiwald.
% /usr/bin/jam
...found 9 target(s)...
...updating 1 target(s)...
Update B
...updated 1 target(s)...
Subject: Re: [bug] "updated" actions
From: Matt Armstrong <matt@lickey.com>
Date: Thu, 02 Jan 2003 10:13:53 -0700
This bug was caused by the change to make0() that stopped marking a
target newer when its parent was missing.
This broke `actions updated`, since even if the target were missing
not all of its dependencies would be marked newer.
A proposed fix below...
Change 2582 by matt_armstrong@squeaker-perforce-guest on 2003/01/02 09:07:17
make1cmds() now ignores the `updated` actions modifier if the
target is missing. This way stable dependants get included
when the target is being built from scratch.
This is necessary because make0() no longer marks a target
newer if the parent is missing.
Affected files ...
... //guest/matt_armstrong/jam/fix/updated_actions/make1.c#2 edit
Differences ...
==== //guest/matt_armstrong/jam/fix/updated_actions/make1.c#2 (text) ====
@@ -67,7 +67,7 @@
static void make1c( TARGET *t );
static void make1d( void *closure, int status );
-static CMD *make1cmds( ACTIONS *a0 );
+static CMD *make1cmds( TARGET *t );
static LIST *make1list( LIST *l, TARGETS *targets, int flags );
static SETTINGS *make1settings( LIST *vars );
static void make1bind( TARGET *t, int warn );
@@ -241,7 +241,7 @@
printf( "...on %dth target...\n", counts->total );
pushsettings( t->settings );
- t->cmds = (char *)make1cmds( t->actions );
+ t->cmds = (char *)make1cmds( t );
popsettings( t->settings );
t->progress = T_MAKE_RUNNING;
@@ -401,7 +401,7 @@
}
/*
- * make1cmds() - turn ACTIONS into CMDs, grouping, splitting, etc
+ * make1cmds() - turn TARGET's ACTIONS into CMDs, grouping, splitting, etc
*
* Essentially copies a chain of ACTIONs to a chain of CMDs,
* grouping RULE_TOGETHER actions, splitting RULE_PIECEMEAL actions,
@@ -411,8 +411,9 @@
*/
static CMD *
-make1cmds( ACTIONS *a0 )
+make1cmds( TARGET *t )
{
+ ACTIONS *a0 = t->actions;
CMD *cmds = 0;
LIST *shell = var_get( "JAMSHELL" ); /* shell is per-target */
@@ -428,6 +429,7 @@
ACTIONS *a1;
CMD *cmd;
int start, chunk, length;
+ int flags;
/* Only do rules with commands to execute. */
/* If this action has already been executed, use saved status */
@@ -438,17 +440,23 @@
a0->action->running = 1;
/* Make LISTS of targets and sources */
+ /* If the `updated` actions modifier has been specified */
+ /* for this rule, ignore it if the target is missing. */
/* If `execute together` has been specified for this rule, tack */
/* on sources from each instance of this rule for this target. */
+ flags = rule->flags;
+ if ( t->binding == T_BIND_MISSING )
+ flags &= ~RULE_UPDATED;
+
nt = make1list( L0, a0->action->targets, 0 );
- ns = make1list( L0, a0->action->sources, rule->flags );
+ ns = make1list( L0, a0->action->sources, flags );
if( rule->flags & RULE_TOGETHER )
for( a1 = a0->next; a1; a1 = a1->next )
if( a1->action->rule == rule && !a1->action->running )
{
- ns = make1list( ns, a1->action->sources, rule->flags );
+ ns = make1list( ns, a1->action->sources, flags );
a1->action->running = 1;
}
Date: Thu, 2 Jan 2003 14:40:44 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: [bug fixes] some problems Purify turned up
These three memory issues are not at all new to jam 2.5, correct?
Still, since it looks like 2.5 will be taking on a few other
corrections (to say the least), I'll pile these in as well.
There's another case for #3 below -- when the solaris "string table"
is read. As with the solo "/" entry, the solaris "//xxx" entry
must bypass the call to the callback function.
| From: Matt Armstrong <matt@lickey.com>
| Subject: [bug fixes] some problems Purify turned up
|
| There are 3 separate issues that turned up when I ran my lightly patched
| version of the current release candidate of jam under Purify. These
| are all present in "stock" jam as well.
|
| 1) var_expand() in expand.c references uninitialized memory. The
| function is passed an 'in' pointer and an 'end' pointer but will
| reference memory past 'end'. The fix is to test the length of the
| string before referencing the memory.
|
| @@ -84,7 +84,11 @@
|
| /* This gets alot of cases: $(<) and $(>) */
|
| +#ifdef OPT_FIX_EXPAND_UMR
| + if( end - in == 4 && in[0] == '$' && in[1] == '(' && in[3] == ')' )
| +#else
| if( in[0] == '$' && in[1] == '(' && in[3] == ')' && !in[4] )
| +#endif
| {
| switch( in[2] )
| {
|
| 2) file_archscan() in fileunix.c passes a non-terminated string to
| sscanf(). The ar_hdr.ar_date and ar_hdr.ar_size strings are not
| '\0' terminated (the whole struct is a printable character string,
| but the struct itself is not terminated). It seems that Solaris'
| sscanf() does a strlen() on the passed string, and so strlen()
| references uninitialized memory as it runs past the end of the ar_hdr
| looking for 0. The fix is to look for the first space in
| ar_hdr.ar_date and ar_hdr.ar_size and set that to 0 before calling
| sscanf().
|
| @@ -228,9 +228,21 @@
| long lar_size;
|
| /* Get date & size */
| +#ifdef OPT_FIX_AR_UMR
| + {
| + char *p;
|
| + if ( (p = strchr(ar_hdr.ar_date, ' ')) )
| + *p = 0;
| + sscanf(ar_hdr.ar_date, "%ld", &lar_date);
| + if ( (p = strchr(ar_hdr.ar_size, ' ')) )
| + *p = 0;
| + sscanf( ar_hdr.ar_size, "%ld", &lar_size );
| + }
| +#else
| sscanf( ar_hdr.ar_date, "%ld", &lar_date );
| sscanf( ar_hdr.ar_size, "%ld", &lar_size );
| +#endif
|
| 3) file_archscan() in fileunix.c can pass uninitialized garbage to the
| callback function. The code does not guarantee that the 'lar_name'
| is initialized, and happily passes it on. This shows up in Solaris
| where it seems the first archive entry always has the name "/".
| The fix is to always at least initialize lar_name[0] = 0 and avoid
| calling the callback if so.
|
| @@ -278,6 +290,12 @@
| *dst++ = *src++;
| *dst = 0;
| }
| +#ifdef OPT_FIX_AR_UMR
| + else
| + {
| + lar_name[0] = 0;
| + }
| +#endif
|
| /* Modern (BSD4.4) long names: if the name is "#1/nnnn",
| ** then the actual name is the nnnn bytes after the header.
| @@ -293,12 +311,18 @@
|
| /* Build name and pass it on. */
|
| +#ifdef OPT_FIX_AR_UMR
| + if (lar_name[0]) {
| +#endif
| if ( DEBUG_BINDSCAN )
| printf( "archive name %s found\n", lar_name );
|
| sprintf( buf, "%s(%s)", archive, lar_name );
|
| (*func)( closure, buf, 1 /* time valid */, (time_t)lar_date );
| +#ifdef OPT_FIX_AR_UMR
| + }
| +#endif
|
| offset += SARHDR + ( ( lar_size + 1 ) & ~1 );
| lseek( fd, offset, 0 );
Subject: Re: [bug fixes] some problems Purify turned up
From: Matt Armstrong <matt@lickey.com>
Date: Thu, 02 Jan 2003 16:50:37 -0700
Correct, I think only #3 is new.
Date: Fri, 3 Jan 2003 14:51:19 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: [bug] "updated" actions
|
| This bug was caused by the change to make0() that stopped marking a
| target newer when its parent was missing.
|
| This broke `actions updated`, since even if the target were missing
| not all of its dependencies would be marked newer.
|
| A proposed fix below...
The original change was intended to affect reporting only.
The idea was that dependents of NOTFILE targets (like "all") shouldn't
be T_FATE_NEWER (newer than their parents), but instead left as
T_FATE_STABLE. That way the new 'jam -dc' (display "causes") option
didn't report spurious "newer" targets. NOTFILE targets have a 0
timestamp, making their children always look newer, and the change was
to suppress this.
But instead of checking for NOTFILE parents, the code checked for missing
parents. And that broke 'actions updated', because it relied on
T_FATE_NEWER even if the parent was missing.
Knowing all this, it is possible to make a smaller fix:
==== //depot/main/jam/make.c#40 - /usr/big/seiwald/jam/make.c ====
***************
*** 403,409 ****
fate = T_FATE_ISTMP;
}
else if( t->binding == T_BIND_EXISTS && p &&
! p->binding == T_BIND_EXISTS && t->time > p->time )
{
fate = T_FATE_NEWER;
}
--- 403,409 ----
fate = T_FATE_ISTMP;
}
else if( t->binding == T_BIND_EXISTS && p &&
! p->binding != T_BIND_UNBOUND && t->time > p->time )
{
fate = T_FATE_NEWER;
}
I'll patch the 2.5 line shortly.
From: Steve Goodson <steve.goodson@mscsoftware.com>
Subject: Re: [bug] deep INCLUDES are broken
Date: Fri, 3 Jan 2003 20:05:42 -0800
This was probably caused by the fix to the INCLUDES bug that I reported a few
weeks ago. In that bug a target was incorrectly being skipped when a target
that it included couldn't be built. Here's a patch (to jam 2.4) that fixes
that bug without introducing the problem Matt has discovered here. I don't
know if this is similar to the simpler of the two fixes that Christopher
originally mentioned he was considering. The idea is to keep track of the
status of included targets separately.
diff -ur jam-2.4-original/make1.c jam-2.4/make1.c
--- jam-2.4-original/make1.c Thu Feb 28 11:33:50 2002
+++ jam-2.4/make1.c Fri Jan 3 18:03:48 2003
@@ -179,15 +179,35 @@
/* Now ready to build target 't'... if dependents built ok. */
- /* Collect status from dependents */
+ /* Collect status from dependents (and their INCLUDES) */
+ for( c = t->deps[T_DEPS_Depends] ; c ; c = c->next )
+ {
+ if ( c->target->status > t->status )
+ {
+ failed = c->target->name;
+ t->status = c->target->status;
+ }
+ if ( c->target->hstatus > t->status )
+ {
+ failed = c->target->hfailed;
+ t->status = c->target->hstatus;
+ }
+ }
- for( i = T_DEPS_Depends; i <= T_DEPS_INCLUDES; i++ )
- for( c = t->deps[i]; c; c = c->next )
- if( c->target->status > t->status )
- {
- failed = c->target->name;
- t->status = c->target->status;
- }
+ /* Collect status from our INCLUDES */
+ for( c = t->deps[T_DEPS_INCLUDES] ; c ; c = c->next )
+ {
+ if ( c->target->status > t->hstatus )
+ {
+ t->hfailed = c->target->name;
+ t->hstatus = c->target->status;
+ }
+ if ( c->target->hstatus > t->hstatus )
+ {
+ t->hfailed = c->target->hfailed;
+ t->hstatus = c->target->hstatus;
+ }
+ }
/* If actions on deps have failed, bail. */
/* Otherwise, execute all actions to make target */
diff -ur jam-2.4-original/rules.h jam-2.4/rules.h
--- jam-2.4-original/rules.h Thu Feb 28 10:53:16 2002
+++ jam-2.4/rules.h Fri Jan 3 15:10:38 2003
@@ -153,6 +153,8 @@
# define T_MAKE_DONE 4 /* make1(target) done */
char status; /* execcmd() result */
+ char hstatus; /* collected status for INCLUDES */
+ char *hfailed; /* name of failed INCLUDE */
int asynccnt; /* child deps outstanding */
TARGETS *parents; /* used by make1() for completion */
From: Badari Kakumani <badari@cisco.com>
Date: Wed, 8 Jan 2003 10:37:05 -0800
Subject: negation (!) operator in 2.4
We recently switched from an older version of jam (1.3)
to a newer version (2.4).
As the enclosed example notes, it looks like the behaviour of the
unary negation operator '!' has changed between these releases.
in the if clause:
if ! $(var_x) = y
it looks like the new version of jam does NOT evaluate '$(var_x) = y' first
and then negate the result (which was the behaviour of earlier jam).
Does anyone else see the behaviour I am seeing?
% cat Jamfile
var_x = n ;
if ! $(var_x) = y {
ECHO var_x value is NOT y ;
} else {
ECHO var_x value is y ;
}
*** old version of jam correctly identifies the value of var_x is NOT y
*** (as was the intention in the if clause)
% /sw/packages/jam/C1.3/solaris2bin/jam
var_x value is NOT y
...found 7 target(s)...
*** new version of jam incorrectly identifies the value of var_x IS y
% /sw/packages/jam/2.4/solaris2bin/jam
...starting...Wed Jan 8 10:24:07 2003
var_x value is y
...binding...Wed Jan 8 10:24:07 2003
...found 7 target(s)...
...finished...Wed Jan 8 10:24:07 2003
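For anyone hitting the same change, here are two sketches of rewrites that sidestep the precedence question. These are untested against 2.4 here, but both forms are part of jam's documented condition grammar (`!=` comparison and parenthesized conditions):

```jam
# Sketch: make the intended grouping explicit instead of relying
# on the precedence of '!'.
var_x = n ;

# Option 1: invert the comparison itself.
if $(var_x) != y {
    ECHO var_x value is NOT y ;
}

# Option 2: parenthesize so '!' applies to the whole comparison.
if ! ( $(var_x) = y ) {
    ECHO var_x value is NOT y ;
}
```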
From: "William Trenker" <wtrenker@hotmail.com>
Date: Sun, 12 Jan 2003 13:42:23 -0800
Subject: A searchable mirror for this list
Would you consider having the Jamming list mirrored on gmane.org? One major
benefit is gmane's search feature; as a jam newbie I can try and find
answers from previous posts. There's no cost so if you're interested the
subscription page is here: http://gmane.org/subscribe.php.
Date: Tue, 14 Jan 2003 18:18:12 +0100
From: boga@mac.com
Subject: Re: [bug] deep INCLUDES are broken
Is there any fix for the Jam 2.5 deep include bug?!
Is it possible to fix this bug efficiently?!
Or shall we roll back the fix in changelist 2499?!
Matt, was your optimization to the bugfix successful?!
Subject: Re: Re: [bug] deep INCLUDES are broken
From: Matt Armstrong <matt@lickey.com>
Date: Tue, 14 Jan 2003 10:50:44 -0700
Christopher mailed me privately with an implementation idea that will
still fix INCLUDE correctly but be more efficient than the fix I
posted. Patience is a virtue. ;-)
Date: Tue, 14 Jan 2003 10:17:22 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: [bug] deep INCLUDES are broken
I've got the guts of make0() on my lap, and have a solution that is both
efficient and correct (I believe, but you know what that's worth), but
it's hung up on a relatively minor detail: getting debug output to be
the same. I hope to have it all put back together by the weekend.
Ugly details of the bug follow. Probably only worth reading if you
have the bits rolling around in your head:
The challenge is to make included includes appear as direct
includes, so that they get considered. This used to work by
recursion, because each TARGET node computed the summary of its
includes -- the hfate and htime -- along with the summary of
its dependents -- the fate and time. Alas, that previous
arrangement confused make1() into treating headers as direct
dependencies during the build phase.
A.o
|
A.c -- A.h
(Read depends down, includes across)
(Failed build of A.h aborts build of A.c)
The fix to the confused make1() problem was to consolidate the
special handling of includes by having make0() tack onto a
target's list of dependencies any of the target's dependents'
includes. Unfortunately, this fix did not recurse: if the
target's dependents' includes included other files, those files
were not added to the target's dependencies.
A.o
|
A.c -- A.h -- B.h -- C.h
is rewritten to:
A.o
| \
A.c A.h -- B.h -- C.h
(A.o depends on A.h, but not B.h or C.h. This is
the current, broken state of jam 2.5rc1.)
Matt's bugfix added some recursion at this point, by transitively
appending includes' includes onto the includes chain. But, as
he found out (and I did before), this can slow make0() down
considerably, as typically header files all include each other
and you wind up with lots of really long chains.
A.o
|
A.c -- A.h -- B.h -- C.h
is rewritten to:
A.o
|
A.c -- A.h -- B.h -- C.h
- B.h - C.h
- C.h
(Matt's fix: if the .h files include each other, the
includes chains get very long.)
The final(?) fix I have is relatively simple, but is an extra
step: to have make0() replace a target's includes chain with
a single pseudo-target whose dependencies are the original
target's includes. That pseudo-target gets passed to make0(),
which then recursively consolidates its fate and time. This
then makes a target's includes fate and time available in a
single target hanging off the original target.
A.o
|
A.c -- A.h -- B.h -- C.h
is rewritten to:
A.o
| \
A.c A.c-includes
| \
A.h A.h-includes
| \
B.h B.h-includes
|
C.h
(New pseudo-target xxx-includes recursively consolidates
fate and time of all included targets.)
While this new scheme does add a node for every include file,
it is linear, rather than exponential, and the time cost is pretty much negligible.
Real soon now. "I was delayed."
| Subject: Re: [bug] deep INCLUDES are broken
| From: boga@mac.com
|
| Is there any fix for the Jam 2.5 deep include bug?!
| Is it possible to fix this bug efficiently?!
|
| Or shall we roll back the fix in changelist 2499?!
|
| Matt, was your optimization to the bugfix successful?!
From: Craig Allsop <callsop@auran.com>
Date: Wed, 15 Jan 2003 11:11:52 +1000
Subject: quoting?
Should the jambase out-of-the-box handle filenames that contain the "&"
character? e.g.
DIR = [ FDirName a b&b c ] ;
Depends all : $(DIR) ;
MkDir $(DIR) ;
It would appear to me the best place to deal with this is in the actions by
adding double quotes around filenames. This seems to work ok for
unix/windows for us so far.
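A sketch of what that change looks like in practice, using the stock Jambase's MkDir1 action as the example (the quoting is the only change):

```jam
# Sketch: quote the expansion inside the action so paths containing
# "&" (or spaces) survive the shell. MkDir1 mirrors the stock
# Jambase action of the same name.
actions MkDir1
{
    mkdir "$(<)"
}
```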
Subject: RE: quoting?
Date: Tue, 14 Jan 2003 18:11:09 -0800
From: "Chris Antos" <chrisant@windows.microsoft.com>
I know of some tools on Windows at least that do not accept quoted
paths, for whatever reason. The reasons don't really matter, since the
tools exist and are used. :-) Doesn't seem like a good idea to hard code it that way.
Date: Wed, 15 Jan 2003 18:01:34 +1100
From: Russell <rjshaw@iprimus.com.au>
Subject: Jam examples
I just read all the jam html docs. Are there any smallish jam
examples for a C program? It's hard to learn a new language and
apply it at the same time. It seems to be a much better thing to
use than 'make'. Make is a pain. I'm using debian.
Date: Wed, 15 Jan 2003 18:59:21 +1100
From: Russell <rjshaw@iprimus.com.au>
Subject: Re: Jam examples
An example for a multi-directory project would be useful:
From: Craig Allsop <callsop@auran.com>
Subject: Re: quoting?
Date: Wed, 15 Jan 2003 18:33:13 +1000
As far as I'm aware commands will never see the quotes because the shell
would remove them. If someone doesn't use this character in filenames the
execution of jam will be the same as it is now.
From: "Chris Antos" <chrisant@windows.microsoft.com>
Sent: Wednesday, January 15, 2003 12:11 PM
Subject: RE: quoting?
Subject: Re: Jam examples
From: Matt Armstrong <matt@lickey.com>
Date: Wed, 15 Jan 2003 08:53:47 -0700
I'm not aware of one. All of my jam knowledge comes from using a
heavily customized Jambase (which seems to be fairly common).
But multi-directory projects introduce 3 new things:
Using the SubInclude rule.
Using the SubDir rule.
"grist"
If you understand those three things, you have it. (and "grist" can
usually be ignored until you want to do something non-standard).
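A minimal multi-directory skeleton illustrating the two rules, assuming the stock Jambase; the directory and target names (lib, app, greet.c) are made up for the example:

```jam
# Top-level Jamfile: declare this dir, then pull in the subdirs.
SubDir TOP ;
SubInclude TOP lib ;
SubInclude TOP app ;

# lib/Jamfile: build a library from sources in this subdir.
SubDir TOP lib ;
Library libgreet : greet.c ;

# app/Jamfile: build the executable and link against the library.
SubDir TOP app ;
Main app : main.c ;
LinkLibraries app : libgreet ;
```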
From: "Robert Cowham" <robert@vaccaperna.co.uk>
Subject: RE: Jam examples
Date: Wed, 15 Jan 2003 17:30:22 -0000
One example which may help a little (just shows SubDir commands):
http://public.perforce.com/guest/robert_cowham/jam/jam-example.html
I remember a post of some other examples on a user's web site but can't remember where!
Subject: RE: quoting?
Date: Wed, 15 Jan 2003 12:18:03 -0800
From: "Chris Antos" <chrisant@windows.microsoft.com>
That's not how it works.
CMD.exe does not remove quotes from command lines.
The reason your C apps don't see the quotes is because the C startup
code parses the raw command line into the argc and argv variables,
eliding quotes where appropriate. But of course the C startup code
hasn't always contained logic for handling quotes. So older apps (or
maybe even relatively new apps compiled with old compilers) do not remove the quotes.
From: Craig Allsop [mailto:callsop@auran.com]
Sent: Wednesday, January 15, 2003 12:33 AM
Subject: Re: quoting?
As far as I'm aware commands will never see the quotes because the shell
would remove them. If someone doesn't use this character in filenames
the execution of jam will be the same as it is now.
From: "Chris Antos" <chrisant@windows.microsoft.com>
Sent: Wednesday, January 15, 2003 12:11 PM
Subject: RE: quoting?
From: Craig Allsop <callsop@auran.com>
Subject: Re: quoting?
Date: Thu, 16 Jan 2003 08:47:48 +1000
Thanks for pointing that out. I suppose the startup code was updated to
handle spaces when long filenames were introduced? Would this apply to any
utility used by a stock jambase?
Do you know any other way to have the shell pass the command line unchanged?
I do agree that it probably shouldn't be automatic tho.
From: "Chris Antos" <chrisant@windows.microsoft.com>
Sent: Thursday, January 16, 2003 6:18 AM
Subject: RE: quoting?
Subject: RE: quoting?
Date: Wed, 15 Jan 2003 15:53:36 -0800
From: "Chris Antos" <chrisant@windows.microsoft.com>
Quoting is the right solution. I'm just saying the quoting should not
happen automatically inside Jam. Just add the quotes to the Jambase in
the stock rules. That'll fix the stock Jambase while still allowing
other usage to not use quotes. Or if some platforms need to not have
the quotes but still need to share a single rule/action implementation,
then add a modifier such as $(VAR:Q) to do platform-specific quoting.
(Or a different letter if :Q is already taken, I don't remember).
My thoughts leading up to that:
I think it's the wrong question to ask, how to make the shell pass the
command line unchanged. That basically means you want to disable all
parsing that the shell does (well, it means you want a new version of
the shell that has options to control what level of parsing it does --
actually 4DOS and 4NT from www.jpsoft.com do have that kind of control).
But assuming stock tools (e.g. cmd.exe) the only answer to the question
then is "don't use the shell". But we need to use it, because we do
want at least some of the shell's parsing (such as interpreting newlines
as delimiting multiple commands, and redirection operators, etc).
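To make the suggestion concrete, here is a sketch of how such a modifier might read in a Jambase action. Note that `:Q` is the hypothetical spelling proposed above, not an existing jam expansion modifier:

```jam
# Hypothetical: $(VAR:Q) would expand with platform-specific quoting,
# so one action body could serve both tools that accept quoted paths
# and platforms that must not quote.
actions Cc
{
    cc -c -o $(<:Q) $(>:Q)
}
```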
From: Craig Allsop [mailto:callsop@auran.com]
Sent: Wednesday, January 15, 2003 2:48 PM
Subject: Re: quoting?
handle spaces when long filenames were introduced? Would this apply to
any utility used by a stock jambase?
Do you know any other way to have the shell pass the command line unchanged?
I do agree that it probably shouldn't be automatic tho.
From: "Chris Antos" <chrisant@windows.microsoft.com>
Sent: Thursday, January 16, 2003 6:18 AM
Subject: RE: quoting?
Date: Thu, 16 Jan 2003 15:39:30 +1100
From: Russell <rjshaw@iprimus.com.au>
Subject: Re: Jam examples
I've been doing a C micro-controller project that is arranged as a top
directory, and a number of sub-directories:
/myproj
Jamfile
Jamrules
/analog
Jamfile
analog.h
analog.c
analog.o /* when generated */
/serial
Jamfile
serial.h
serial.c
serial.o /* when generated */
/main
Jamfile
main.h
main.c
main.o /* when generated */
/objs
Jamfile
myproj.exe /* generated by linking analog.o, serial.o, main.o, libm */
/hex
myproj.hex /* generated by a utility that reads myproj.exe */
I can't figure out what to put in the subdirectory Jamfiles.
In /myproj/analog/Jamfile, i put:
SubDir TOP analog ;
Main analog : analog.c ;
However, running jam -d 5
shows that it compiles and links. I want it to compile to analog.o
only. However, i can't figure out which is the right command for that
(Cc, Object, Objects, etc).
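For what it's worth, a sketch of the compile-only case using the stock Jambase's Objects rule (which wraps Object and handles grist and suffixes), rather than Main:

```jam
# analog/Jamfile -- compile analog.c to analog.o without linking.
SubDir TOP analog ;
Objects analog.c ;
```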
Date: Thu, 16 Jan 2003 17:41:33 +1100
From: Russell <rjshaw@iprimus.com.au>
Subject: Re: Jam examples
Now, i got another question. This is the tree:
/myproj
Jamfile
Jamrules
/analog
Jamfile
analog.h
analog.c
analog.o /* when generated */
/serial
Jamfile
serial.h
serial.c
serial.o /* when generated */
/main
Jamfile
main.h
main.c
main.o /* when generated */
/objs
Jamfile
myproj.exe /* generated by linking analog.o, serial.o, main.o, libm */
/hex
myproj.hex /* generated by a utility that reads myproj.exe */
In /myproj/Jamfile, i have:
****************
SubDir TOP ;
SubInclude TOP analog ;
SubInclude TOP serial ;
SubInclude TOP main ;
Link myproj.exe : analog serial main ;
LinkLibraries myproj : libm ;
****************
Now, i want to run another utility that takes myproj.exe and
gives myproj.hex. There seems to be more than one way of doing
this, so i'm wondering what the 'usual' way is. This is what i
tried in /myproj/Jamfile:
****************
SubDir TOP ;
SubInclude TOP analog ;
SubInclude TOP serial ;
SubInclude TOP main ;
Link myproj.exe : analog serial main ;
LinkLibraries myproj : libm ;
MyObjCopyRule myproj.hex : myproj.exe ;
****************
In /myproj/Jamrules, i have:
****************
actions MyObjCopy {
objcopy -O ihex $(2) $(1) ;
mv myproj.exe objs ;
mv myproj.hex objs/hex ;
}
rule MyObjCopyRule { MyObjCopy $(1) : $(2) ; }
It seems very ugly to me. What is the 'right' way? Is there a more elegant way
of setting up dependencies such as myproj.hex depends on analog.c, serial.c,
and main.c, and then getting the build system to handle everything based on the
suffixes of various intermediate files?
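One possible cleaner shape, as a sketch: let jam track the .exe-to-.hex step as an ordinary target with its own dependency and location, instead of mv-ing files inside the action. The rule and the output directory layout here are illustrative, not the canonical way:

```jam
# Sketch: a self-contained ObjCopy rule. MakeLocate tells jam where
# the .hex lives, so no mv is needed in the action.
rule ObjCopy
{
    Depends $(<) : $(>) ;
    MakeLocate $(<) : [ FDirName $(TOP) objs hex ] ;
    Clean clean : $(<) ;
}
actions ObjCopy
{
    objcopy -O ihex $(>) $(<)
}

ObjCopy myproj.hex : myproj.exe ;
```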
From: xguo@attbi.com
Date: Thu, 16 Jan 2003 08:41:04 +0000
Subject: Please help me correct these 2 PCCTS rules
I am trying to use pccts(something like yacc/lex) in my project and I just can't
get my jam rules to work.
basically, I only have one single file in the project:
cdbc.g
By running the antlr command "antlr -CC cdbc.g" I get
cdbc.cpp, Parser.cpp, Parser.h, tokens.h and parser.dlg.
By running the dlg command "dlg -CC -C2 parser.dlg" I get
DLGLexer.cpp and DLGLexer.h.
Compiling and linking these cpp files together, I get my executable.
here is my jamfile:
=================== jamfile =====================
SubDir TOP src cdbc ;
LINKLIBS on cdbc.exe += pccts_debug.lib ;
Main cdbc : cdbc.cpp
Parser.cpp
DLGLexer.cpp
;
Antlr cdbc.cpp parser.dlg : cdbc.g ;
Dlg DLGLexer.cpp : parser.dlg ;
==================== End of jamfile ==============
and here are the 2 rules/actions I defined in my Jamrules file:
================ part of jamrule =================
# parser class must be named "Parser" in order to use Antlr rule
rule Antlr {
local _cpp, _pcpp, _ph, _dlg, _th ;
_cpp = [ FGristFiles $(<) ] ;
_pcpp = [ FGristFiles Parser.cpp ] ;
_ph = [ FGristFiles Parser.h ] ;
_dlg = [ FGristFiles parser.dlg ] ;
_th = [ FGristFiles tokens.h ] ;
Depends $(_cpp) $(_pcpp) $(_ph) $(_dlg) $(_th) : $(>) ;
Clean clean : $(_cpp) $(_pcpp) $(_ph) $(_dlg) $(_th) ;
MakeLocate $(_cpp) $(_pcpp) $(_ph) $(_dlg) $(_th) : $(LOCATE_SOURCE) ;
Antlr1 $(_cpp) $(_pcpp) $(_ph) $(_dlg) $(_th) : $(>) ;
INCLUDES $(<) : $(_ph) $(_th) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
}
rule Dlg {
local _cpp, _h ;
_cpp = [ FGristFiles DLGLexer.cpp ] ;
_h = [ FGristFiles DLGLexer.h ] ;
Depends $(_cpp) $(_h) : $(>) ;
Clean clean : $(_cpp) $(_h) ;
MakeLocate $(_cpp) $(_h) : $(LOCATE_SOURCE) ;
Dlg1 $(_cpp) $(_h) : $(>) ;
INCLUDES $(<) : $(_h) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
}
actions Antlr1 { antlr -CC $(>) }
actions Dlg1 { dlg -CC -C2 $(>) }
================== end of jamrule ==============
The first time I run jam to compile, it complains:
don't know how to make parser.dlg and
warning: using independent target <src!cdbc>parser.dlg
warning: using independent target <src!cdbc>DLGLexer.h
If I run jam again, the first error disappears because the file was
already generated in the first run, but the second warning stays.
On the third run it finally compiles.
I am confused by the dependencies in my Jamrules/Jamfile:
1. Why does it complain "don't know how to make parser.dlg" and warn
"using independent target <src!cdbc>parser.dlg"? Isn't
Antlr cdbc.cpp parser.dlg : cdbc.g ; supposed to set the dependency and make
parser.dlg part of the dependency graph?
2. Aren't "INCLUDES $(<) : $(_h) ;" and
"Dlg DLGLexer.cpp : parser.dlg ;" supposed to make DLGLexer.h part of the
dependency graph?
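A guess at the cause, sketched below: the Antlr/Dlg rules grist their targets with FGristFiles, but the Jamfile passes ungristed names (parser.dlg, DLGLexer.cpp), so jam sees two unrelated targets for the same file, which is what the "independent target" warnings suggest. Gristing at the call site keeps both sides talking about the same target:

```jam
# Sketch: grist the generated-file names in the Jamfile so they match
# the gristed targets the rules declare internally.
Antlr [ FGristFiles cdbc.cpp parser.dlg ] : cdbc.g ;
Dlg [ FGristFiles DLGLexer.cpp ] : [ FGristFiles parser.dlg ] ;
```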
Date: Fri, 17 Jan 2003 13:05:48 +1100
From: Russell <rjshaw@iprimus.com.au>
Subject: Unknown suffix
In a Jamfile, i have:
**********************************************************
*
MainFromObjects myprog : analog serial main ;
ObjCopy myprog : myprog.exe ;
*
**********************************************************
However, i got an error:
Don't know how to make myprog.exe
In Jamrules, i have:
**********************************************************
*
rule ObjCopy { Depends $(1) : $(2) ; }
actions ObjCopy { objcopy -O ihex $(2) $(1) ; }
*
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 17 Jan 2003 17:21:05 +0100
Subject: Problems cross-compiling with GCC on Windows
I am trying to hack Jam into compiling PlayStation2 binaries under
Windows, using Unix-like tools (gcc, ar, ranlib).
However, there seems to be a problem with the code that determines if a
file is already in the .a library file (in this case FFGui.a). The first
run (from clean) is fine, but the subsequent one falsely recompiles a
large subset of the files. See output of two runs below.
Any ideas? Where in the Jambase do I look for the code checking files in
archives? Can't seem to find it.
[ The NTHOST flag triggers this condition in my Jamrules:
if $(NTHOST) {
ALL_LOCATE_TARGET = $(TOP)\\jamout\\_ps2\\$(SUBTARGET) ;
SLASH = \\ ;
}
]
$ jam -sNT= -sUNIX=1 -sNTHOST=1
..\..\jamout\_ps2\release
...found 370 target(s)...
...updating 34 target(s)...
C++ ..\..\jamout\_ps2\release\BaseMap.o
C++ ..\..\jamout\_ps2\release\BaseWindow.o
C++ ..\..\jamout\_ps2\release\BootMenu.o
C++ ..\..\jamout\_ps2\release\Briefing.o
C++ ..\..\jamout\_ps2\release\Controller.o
C++ ..\..\jamout\_ps2\release\ControllerA.o
C++ ..\..\jamout\_ps2\release\CreateServerPC.o
C++ ..\..\jamout\_ps2\release\CutSequenceMenu.o
C++ ..\..\jamout\_ps2\release\DeathMenu.o
C++ ..\..\jamout\_ps2\release\DemoStart.o
C++ ..\..\jamout\_ps2\release\Diary.o
C++ ..\..\jamout\_ps2\release\FileWindow.o
C++ ..\..\jamout\_ps2\release\InventorySelector.o
C++ ..\..\jamout\_ps2\release\JoinServerPC.o
C++ ..\..\jamout\_ps2\release\LanguageMenu.o
C++ ..\..\jamout\_ps2\release\LevelMenuSinglePlayer.o
C++ ..\..\jamout\_ps2\release\LinkMenu.o
C++ ..\..\jamout\_ps2\release\LoadMenu.o
C++ ..\..\jamout\_ps2\release\Loader.o
C++ ..\..\jamout\_ps2\release\Message2.o
C++ ..\..\jamout\_ps2\release\MultiplayerAAS.o
C++ ..\..\jamout\_ps2\release\MultiplayerKOTH.o
C++ ..\..\jamout\_ps2\release\MultiplayerStat.o
C++ ..\..\jamout\_ps2\release\NewMenu.o
C++ ..\..\jamout\_ps2\release\Objectives.o
C++ ..\..\jamout\_ps2\release\SewerMenu.o
C++ ..\..\jamout\_ps2\release\SoundMenu.o
C++ ..\..\jamout\_ps2\release\SubZoneMap.o
C++ ..\..\jamout\_ps2\release\Subtitle.o
C++ ..\..\jamout\_ps2\release\VideoSelector.o
C++ ..\..\jamout\_ps2\release\WaterMark.o
C++ ..\..\jamout\_ps2\release\ZoneMap.o
C++ ..\..\jamout\_ps2\release\ZoneMenu.o
Archive ..\..\jamout\_ps2\release\FFGui.a
Ranlib ..\..\jamout\_ps2\release\FFGui.a
...updated 34 target(s)...
JacobG@CHRISTIANC /cygdrive/z/head/code/freedomfighter/FFGui
$ jam -sNT= -sUNIX=1 -sNTHOST=1
..\..\jamout\_ps2\release
...found 370 target(s)...
...updating 8 target(s)...
C++ ..\..\jamout\_ps2\release\CreateServerPC.o
C++ ..\..\jamout\_ps2\release\CutSequenceMenu.o
C++ ..\..\jamout\_ps2\release\InventorySelector.o
C++ ..\..\jamout\_ps2\release\LevelMenuSinglePlayer.o
C++ ..\..\jamout\_ps2\release\MultiplayerAAS.o
C++ ..\..\jamout\_ps2\release\MultiplayerKOTH.o
C++ ..\..\jamout\_ps2\release\MultiplayerStat.o
Archive ..\..\jamout\_ps2\release\FFGui.a
Ranlib ..\..\jamout\_ps2\release\FFGui.a
...updated 8 target(s)...
Subject: Re: Problems cross-compiling with GCC on Windows
From: Matt Armstrong <matt@lickey.com>
You want something like this at the top of your Jamfile:
NOARSCAN = true ;
From: Steve Goodson <steve.goodson@mscsoftware.com>
Subject: Re: Re: [bug] deep INCLUDES are broken
Date: Fri, 17 Jan 2003 17:42:43 -0800
I didn't see any response to the message I posted a couple weeks ago about
the INCLUDES bug,
so I was wondering if people just missed it, or if there are some problems
with it that I didn't see. I believe that it, too, is a correct and
efficient fix to this problem, and it is quite a bit simpler than the
proposed solution. Is there something I'm missing?
Can't this problem be fixed by simply keeping track of the header's status?
That is, adding an hstatus to the already existing hfate and htime.
So hfate and htime have been removed from the target, and make1 has been
simplified, but at the expense of adding another step to make0 and increasing
(almost doubling?) the number of targets and the number of dependencies.
I can see the appeal of trying to get rid of the 'includes'-specific code in
make1, but after looking at the code for a while I started to appreciate how
jam handles INCLUDES and Depends. Having both INCLUDES and Depends allows a
very natural way of expressing dependencies in Jam's language, and it also
seems to be a good way to handle dependencies internally in jam. Of course,
this was really my first look at jam's internals so others are probably in a
much better position to make that kind of assessment than I am.
I do see one advantage to adding the pseudo-targets:
Currently, (and with my proposed fix) jam will not make a target until after
it has tried to make that target's 'includes'. There is no reason for Jam to
impose this ordering, and consolidating the includes into their own
full-fledged target should remove this restriction.
Date: Mon, 20 Jan 2003 11:03:37 +0100
From: boga@mac.com
Subject: Re: [bug] deep INCLUDES are broken
I think that your fix is working fine. I've rolled back the changes
that caused the bug (2499) in jam 2.5 rc1 and applied your changes on top.
See: //guest/miklos_fazekas/jamdeepincfix.
But I think that Jam 2.5 should implement the pseudo-target based
include handling. I think it's more elegant than the old solution,
which basically merged the pseudo-target's attributes
(hfate, htime, hstatus) into the target itself. IMHO moving that
information to a separate target is a much cleaner and more flexible solution.
Date: Mon, 20 Jan 2003 23:32:56 +1100
From: Russell <rjshaw@iprimus.com.au>
Subject: Problem making a new rule work
In a Jamfile, i have:
**********************************************************
*
MainFromObjects myprog : analog serial main ;
ObjCopy myprog : myprog.exe ;
*
**********************************************************
However, i got an error:
Don't know how to make myprog.exe
In Jamrules, i have:
**********************************************************
*
rule ObjCopy { Depends $(1) : $(2) ; }
actions ObjCopy { objcopy -O ihex $(2) $(1) ; }
*
Date: Mon, 20 Jan 2003 14:25:47 +0100 (MET)
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: Re: Problem making a new rule work
Actually it should work, if the SUFEXE variable is indeed set to `.exe',
i.e. if you're trying this under Windows. Otherwise `myprog.exe' is not known.
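Following that observation, a sketch of making the Jamfile portable across platforms by using the Jambase's platform suffix variable instead of hard-coding .exe:

```jam
# Sketch: SUFEXE expands to ".exe" on NT and "" on Unix, so the same
# line names the right linked target everywhere.
ObjCopy myprog.hex : myprog$(SUFEXE) ;
```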
Date: Mon, 20 Jan 2003 12:32:25 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: Re: [bug] deep INCLUDES are broken
| So hfate and htime have been removed from the target, and make1 has been
| simplified, but at the expense of adding another step to make0 and increasing
| (almost doubling?) the number of targets and the number of dependencies.
I think the best answer is the pseudo-target approach which we're about
to drop into 2.5rc2.
Using hfate/htime is good for consolidating the fate/time of includes
for make0(), but not for consolidating the list of includes themselves,
which make1() now needs for actual building.
Previously make1() would build a target's includes and then the target.
That worked correctly when everything built, but it meant a failure to
build an include would suppress the build of the target, which was wrong
(and which we wished to fix). If make1() doesn't build a target's
includes with the target, when does it? It has the same problem make0()
does: it needs to find a target's dependents' includes. When you get
into includes including includes, it gets combinatorially messy.
So the pseudo node not only subsumes hfate/htime, but it also gives
make1() a handle for building a target's dependents' includes, which must
succeed before the target gets built.
The actual performance cost of using the pseudo-node is insignificant.
And once the debugging output was brought into line, I think it became a winner.
From: "Craig Allsop" <callsop@sceptre.net>
Date: Tue, 21 Jan 2003 08:38:59 +1000
Subject: jam2.5 & TOP?
In Jam 2.5rc1 (OS=NT). If TOP is set, then jam does not include
jamrules. Is this the intention?
My test is:
d:\jamtest\Jamrules:
Echo Jamrules ;
d:\jamtest\test\Jamfile:
SubDir TOP test ;
Echo Jamfile ;
(without TOP set)
d:\jamtest\test > jam
Jamrules
Jamfile
...found 7 target(s)...
(TOP = d:\jamtest)
d:\jamtest\test > jam
Jamfile
...found 7 target(s)...
The include for Jamrules is contained within this if?
jambase(1202): if ! $($(_top))
Subject: Re: jam2.5 & TOP?
From: Matt Armstrong <matt@lickey.com>
Date: Mon, 20 Jan 2003 21:04:57 -0700
This is the behavior I see too. It looks like setting TOP explicitly
is now broken -- or at least Jamrules will no longer be included if
this is the case. The "offending" p4 change that introduced this problem is:
Change 2480 by rmg@rmg:pdjam:chinacat on 2002/12/15 11:53:12
Rewrite jam's SubDir rule to allow multiple roots.
Infrastructure change.
=== computer:1666: Change 33320 by seiwald@thin on 2002/05/13 10:10:50
Subject: Re: Re: [bug] deep INCLUDES are broken
From: Matt Armstrong <matt@lickey.com>
Date: Tue, 21 Jan 2003 10:40:24 -0700
I've tried out the new -rc2 version and the behavioral bugs are fixed,
which is great! However, -dc output is broken -- it seems to miss newer INCLUDES.
This is verified by building jam (transcript below). Notice "newer
compile.h" is missing from -rc2 output.
squeaker% jam-2.5rc1 -v
Jam 2.5rc1. OS=LINUX. Copyright 1993-2002 Christopher Seiwald.
squeaker% jam-2.5rc2 -v
Jam 2.5rc2. OS=LINUX. Copyright 1993-2002 Christopher Seiwald.
squeaker% jam-2.5rc1
...found 201 target(s)...
squeaker% touch compile.h
squeaker% jam-2.5rc1 -da -dc
newer compile.h
...found 201 target(s)...
...updating 9 target(s)...
Cc bin.linuxx86/compile.o
Yacc1 jamgram.c jamgram.h
YaccMv jamgram.c jamgram.h
Cc bin.linuxx86/jamgram.o
Cc bin.linuxx86/headers.o
Cc bin.linuxx86/scan.o
Archive bin.linuxx86/libjam.a
Ranlib bin.linuxx86/libjam.a
RmTemps bin.linuxx86/libjam.a
Cc bin.linuxx86/jam.o
Link bin.linuxx86/jam
Chmod1 bin.linuxx86/jam
...updated 9 target(s)...
squeaker% touch compile.h
squeaker% jam-2.5rc2 -da -dc
...found 201 target(s)...
...updating 9 target(s)...
Cc bin.linuxx86/compile.o
Yacc1 jamgram.c jamgram.h
YaccMv jamgram.c jamgram.h
Cc bin.linuxx86/jamgram.o
Cc bin.linuxx86/headers.o
Cc bin.linuxx86/scan.o
Archive bin.linuxx86/libjam.a
Ranlib bin.linuxx86/libjam.a
RmTemps bin.linuxx86/libjam.a
Cc bin.linuxx86/jam.o
Link bin.linuxx86/jam
Chmod1 bin.linuxx86/jam
...updated 9 target(s)...
Date: Tue, 21 Jan 2003 22:36:37 -0800 (PST)
Subject: Re: Re: [bug] deep INCLUDES are broken
From: "Christopher Seiwald" <seiwald@perforce.com>
I've got a one-liner to fix this -- basically propagating the parent's timestamp
to the new internal nodes.
I'll push it out tomorrow and you can give it a try.
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 22 Jan 2003 12:18:53 +0100
Subject: Jam2.5 and precompiled headers
are there any plans of applying the MSVC precompiled header patches from
Chris Antos from MS, for jam2.5?
Or is there an alternative version in the works?
Subject: Re: Jam2.5 and precompiled headers
From: Matt Armstrong <matt@lickey.com>
If you are talking about:
then I have an alternate solution: the header scan caching patch
mentioned in the above post.
Namely, Jam by default doesn't grist header files. The comment in
Jambase's FGristSourceFiles says:
# Leave header files alone, because they have a global visibility.
but I, like others, have found that to be an erroneous assumption in
large projects. The post linked to above is a good example: a
precomp.h header file present in every source dir. The bottom line is
that the C compiler does not view the entire source tree as a flat
namespace for header files, so Jam shouldn't either.
If you modify FGristSourceFiles as I have to do exactly what
FGristFiles does, then every header file is properly scanned, but you
do often wind up with the same header file being named with many
different targets all with different source grist.
But the header caching patch solves that, because it is indexed by the
_bound_ name of the file. So if there are multiple targets that bind
to the same file, the file will be header scanned only once.
It turns out that this is a big win even if Jam never writes the
header cache to disk (i.e. no "build turds" required).
Date: Wed, 22 Jan 2003 11:18:47 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Jam2.5 and precompiled headers
For what its worth, my plan is to integrate the 2.5 changes into my
branch once 2.5 is released. There are changes in my branch which didn't
find their way into 2.5, some of them never will. I'll try to keep my
branch as a proper superset of the jam mainline.
From: david.abrahams@rcn.com
Date: Wed, 22 Jan 2003 15:51:33 -0500
Subject: [Matthias Troyer <troyer@itp.phys.ethz.ch>] [jamboost] Patch for
FYI, this bug was found in the hash implementation of the Boost.Jam
sources; you might want to check the Perforce Jam sources to see if
it's there as well.
From: Matthias Troyer <troyer@itp.phys.ethz.ch>
Date: Wed, 22 Jan 2003 19:56:48 +0100
Subject: [jamboost] Patch for jam bug on Cray
Our local Cray administrators (Bruno Loepfe from ETH and Olivier Byrde
from Cray) found a bug in the boost jam sources. It is in the file hash.c:
char *b = (*data)->key;
int keyval;
keyval = *b;
while( *b ) keyval = keyval * 2147059363 + *b++;
keyval &= 0x7FFFFFFF;
This assumes that overflows are benign, and works only if int is 32
bit. It will not work on the Cray, where int is 64 bit, and with
optimization turned on actually 46 bit (on the Cray SV1 and some other
Crays integer operations will be performed in 46 bit arithmetic by
employing the floating point units). There are two proposed bug fixes:
a) change keyval to a short on the Crays where shorts are 32 bit
b) (the recommended and clean solution) change
keyval = keyval * 2147059363 + *b++;
to
keyval = ((keyval * 2147059363) & 0xFFFFFFFF) + *b++;
On a 32 bit machine the & 0xFFFFFFFF should be optimized away by the
compiler and on the Cray it will ensure correct compilation.
Can somebody patch the boost source code to fix this bug?
From: Craig Allsop <callsop@auran.com>
Date: Thu, 23 Jan 2003 16:32:59 +1000
Subject: jam2.5 & GenFile1
I don't currently use this action, but I noticed that the recently
added PATH line is not compatible with NT.
actions GenFile1 {
PATH="$PATH:."
$(>[1]) $(<) $(>[2-])
}
In what case is the current directory on the path required? In a project
with many directories jam could be launched from several places, and the
current directory won't be consistent, so I'm a little confused by this change.
Subject: Re: Jam2.5 and precompiled headers
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 23 Jan 2003 10:13:42 +0100
Still, the rules in Chris' Jambase would be nice to have as a part of
standard Jam, no?
Subject: Re: Jam2.5 and precompiled headers
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 23 Jan 2003 10:37:07 +0100
So that lowly beginners such as myself could migrate to Jam from M$
Visual Studio with relative ease, without having to invent a Jambase
from scratch. The rules (especially the later version which I am
currently using) are quite non-trivial I should say.
PCH speeds up compilation quite a lot, and I doubt anyone would move
from VC++ to Jam were it not supported.
Subject: Re: Jam2.5 and precompiled headers
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 23 Jan 2003 14:04:26 +0100
Where can I get the latest version of this patch? I'm having some
trouble dealing with P4Win, could someone send me a plain-old patch?
Subject: Re: Jam2.5 and precompiled headers
From: Matt Armstrong <matt@lickey.com>
Date: Thu, 23 Jan 2003 11:34:24 -0700
In my ideal world, no. Instead Jam would grist header files just like
source files and be smart about going to disk to scan for headers only
once per file. Chris rubber stamps this idea here:
Subject: RE: Jam2.5 and precompiled headers
Date: Thu, 23 Jan 2003 14:37:28 -0800
From: "Chris Antos" <chrisant@windows.microsoft.com>
You're confusing two separate topics. Improving the header scans and
using precompiled headers are separate things. However, if you try to
use precompiled headers without making _some_ kind of improvement to how
Jam scans headers (e.g. gristing them or caching the scan results) then
you hit a major perf obstacle. It's significant to note that the
_reason_ I ran into that obstacle and needed to grist the headers was
because I had already determined it would present a significant benefit
to use precompiled headers. So they're separate, but using precompiled
headers has a dependency on fixing header scans.
Precompiled headers offer a big time savings during compilation time, by
allowing the compiler to avoid reading, parsing, etc a significant set
of headers. When you #include "windows.h" you're actually including a
few hundred KB of headers either directly or indirectly. Include one of
the OLE headers and you include another several hundred KB of headers.
Unless your .C[pp] files average around a hundred KB each, the compiler
is spending more time on the headers than it spends on your code. A
precompiled header is a state dump of the header-related work the
compiler did. It does that work once, and then simply reloads the state
dump for subsequent files. Very worthwhile.
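For concreteness, the MSVC mechanics being described look roughly like this (a sketch only; precomp.h and precomp.cpp are hypothetical file names, while /Yc, /Yu and /Fp are the real cl.exe flags for creating, using and naming a .pch):

```
rem Do the header work once, dumping compiler state to precomp.pch:
cl /c /Ycprecomp.h /Fpprecomp.pch precomp.cpp

rem Subsequent compiles reload the dump instead of re-parsing windows.h:
cl /c /Yuprecomp.h /Fpprecomp.pch file1.cpp file2.cpp
```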
And we'd get an even bigger perf win if we could batch up multiple
.c[pp] compilations in a single action. I.e. pass a list of several
sources (and object target names) to the compiler, and let it do them
all at once. Not only do we get to avoid respawning and reinitializing
the compiler, but when using precompiled headers the compiler doesn't
even have to reread the big pch file (often ~8MB) multiple times.
So absolutely Jam should contain rules for using precompiled headers
using MSVC. Don't knock it until you've tried it (you should test the
pch perf wins outside of Jam as a baseline for what Jam should be able
to _beat_, because Jam adds perf wins of its own).
Subject: RE: Jam2.5 and precompiled headers
Date: Thu, 23 Jan 2003 14:56:48 -0800
From: "Chris Antos" <chrisant@windows.microsoft.com>
Additionally, I realized my comment "And we'd get an even bigger perf
win if we could batch up multiple .c[pp] compilations in a single
action" is ambiguous.
MSVC supports batching, and gets a big perf win from it. I'm not aware
of anyone who has written Jam rules to take advantage of the MSVC
batching feature. I vaguely recall discussing batching a few times
previously on this list, but I don't remember if it was hard to do in
Jam, or impossible, or just inelegant.
In any case, support for both precompiled headers and batch compilation
would be extremely beneficial to have in Jam for people using MSVC (or
maybe for multiple compilers running under Jam and compiling for Windows).
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Jam2.5 and precompiled headers
Date: Fri, 24 Jan 2003 11:30:53 +0100
It ought to be possible, at least in some common cases.
FWIW, gcc can do the same thing, but as near as I can tell it doesn't
help much. I tested just now and saw less than 1% speed improvement
(with about 40 C++ files). Disappointing.
Subject: Re: Jam2.5 and precompiled headers
From: Matt Armstrong <matt@lickey.com>
Date: Fri, 24 Jan 2003 09:59:21 -0700
Can we see your precompiled header patches? I am going by this post:
where you talk about gristing some headers and not others to avoid
performance problems with jam's header scanning.
I'm talking about gristing all headers uniformly and fixing the jam
performance problem. With jam fixed, your patch as I understand it becomes unnecessary.
I'm very interested in this too.
There are two problems I see with doing this in Jam:
(1) Each source file can have different compile flags. Currently
Jam tracks this with target variables CCFLAGS (or C++FLAGS),
CCHDRS and CCDEFS. You don't want to group compilation when
these differ.
(2) The -o flag can't be passed to the compiler to locate the
object file in its output location (/Fo for Visual C++).
An idea I have for (1) is something like 'actions piecemeal' but with
smarts about grouping together stuff that has identical values for
various target variables. E.g.
actions grouped ( CCFLAGS CCDEFS CCHDRS ) Cc
{
    $(CC) -c $(CCFLAGS) $(CCDEFS) $(CCHDRS) $(>)
}
would compile sources together that had the same values for CCFLAGS, CCDEFS, and CCHDRS.
Then there is the problem of copying the resulting object files to the
right location afterwards (2). Jambase has a RELOCATE flag that copes
with broken -o flags on compilers and that might be a start.
Date: Fri, 24 Jan 2003 14:08:31 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: jam2.5 & TOP?
| In Jam 2.5rc1 (OS=NT). If TOP is set, then jam does not include
| jamrules. Is this the intention?
I have a simple two-line fix for this, but because the behavior
of SubDir with a pre-set TOP is so oddball, I'd either like to see
your real-world example or have you try this out and see if it works for you.
The patch is below.
==== //depot/main/jam/Jambase#209 - /usr/big/seiwald/jam/Jambase ====
***************
*** 1203,1217 ****
Exit SubDir syntax error ;
}
! if ! $($(_top)) {
# Get path from SUBDIR (or CWD) to $(TOP)
SUBDIR_UP += $(_tokens) ;
$(_top)-UP = $(SUBDIR_UP) ;
$(_top)-DOWN = $(SUBDIR_DOWN) ;
! $(_top) = [ FSubDirPath $(_top) ] ;
# File is $(TOPRULES) or $(TOP)/Jamrules.
# Include $(TOPRULES) if set.
--- 1203,1218 ----
Exit SubDir syntax error ;
}
! if ! $($(_top)-UP) {
# Get path from SUBDIR (or CWD) to $(TOP)
+ # $(TOP) may be set externally.
SUBDIR_UP += $(_tokens) ;
$(_top)-UP = $(SUBDIR_UP) ;
$(_top)-DOWN = $(SUBDIR_DOWN) ;
! $(_top) ?= [ FSubDirPath $(_top) ] ;
# File is $(TOPRULES) or $(TOP)/Jamrules.
# Include $(TOPRULES) if set.
Date: Fri, 24 Jan 2003 16:43:39 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: [Matthias Troyer <troyer@itp.phys.ethz.ch>] [jamboost] Patch for jam bug on Cray
I may have slept through this class in college, but I'm not getting
a clear picture of what the problem is. Is it that the cray will
generate a trap on integer overflow? Other than that, we're only
relying on keyval producing the same value (the hash) for a given
input string. We don't really care how it gets there.
Date: Fri, 24 Jan 2003 16:45:47 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: jam2.5 & GenFile1
| I've don't currently use this action but I noticed the recently added path
| line to this is not compatible with NT.
|
| actions GenFile1
| {
| PATH="$PATH:."
| $(>[1]) $(<) $(>[2-])
| }
|
| In what case is the current directory on the path required? In a project
| with many directories jam could be launched from several places and the
| current directory won't be consistant, so I'm a little confused over this change.
It's a little mistake. It allows jam to build itself on UNIX without
having . in the path. It is the rare case in jam where a file in the
local directory (the name of the executable that generates the output
file) can't be referred to directly, because it is the only time a
target is subject to the shell's PATH.
I think I'm going to make that a UNIX specific variation of GenFile1,
unless someone has a better idea.
Date: Fri, 24 Jan 2003 16:50:27 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: Jam2.5 and precompiled headers
| FWIW, gcc can do the same thing, but as near as I can tell it doesn't
| help much. I tested just now and saw less than 1% speed improvement
| (with about 40 C++ files). Disappointing.
That's largely why jam never did any such batching: it didn't
matter much at the time most of it was written. It's worth looking into
the pch and batch optimizations, as header file mania has imposed a big
penalty for a little "#include <windows.h>".
From: david.abrahams@rcn.com
Subject: Re: [Matthias Troyer <troyer@itp.phys.ethz.ch>]
Date: Fri, 24 Jan 2003 20:23:05 -0500
He didn't spell it out for me, but I'm guessing that's the case; it's
certainly conforming behavior. Actually, looking at it closely, it
seems to be conforming behavior for any platform regardless of word
size. I don't think I like the suggested fix because it's highly
platform-specific. Instead I would just use unsigned integer types.
According to 6.2.5/9 in the 'C' standard,
"A computation involving unsigned operands can never overflow,
because a result that cannot be represented by the resulting
unsigned integer type is reduced modulo the number that is one
greater than the largest value that can be represented by the
resulting type."
Then I'd go with:
int /* if you insist, but should be unsigned */
hashitem(
register struct hash *hp,
HASHDATA **data,
int enter ) {
ITEM **base;
register ITEM *i;
char *b = (*data)->key;
unsigned keyval;
if( enter && !hp->items.more ) hashrehash( hp );
if( !enter && !hp->items.nel ) return 0;
keyval = (unsigned char)*b;
while( *b ) keyval = keyval * 2147059363u + (unsigned char)*b++;
/* keyval &= 0x7FFFFFFF; */
base = hp->tab.base + ( keyval % (unsigned)hp->tab.nel );
for( i = *base; i; i = i->hdr.next )
if( keyval == i->hdr.keyval &&
!strcmp( i->data.key, (*data)->key ) ) {
*data = &i->data;
return !0;
}
if( enter ) {
i = (ITEM *)hp->items.next;
hp->items.next += hp->items.size;
hp->items.more--;
memcpy( (char *)&i->data, (char *)*data, hp->items.datalen );
i->hdr.keyval = keyval;
i->hdr.next = *base;
*base = i;
*data = &i->data;
}
return 0;
}
Subject: Re: jam2.5 & GenFile1
From: Matt Armstrong <matt@lickey.com>
Date: Fri, 24 Jan 2003 19:48:04 -0700
Wouldn't prepending $(>[1]) with $(DOT)$(SLASH) make more sense? If
the intent is to run the executable in the specific dir, it's better to
be explicit than to depend on the path (e.g. what if somebody has
yyacc somewhere in their path?).
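Concretely, that suggestion would turn the action into something like the following (a sketch only; it assumes DOT and SLASH carry the platform's "." and directory separator, as Jambase sets them):

```
actions GenFile1
{
        $(DOT)$(SLASH)$(>[1]) $(<) $(>[2-])
}
```

Since $(>[1]) is already bound to the generator's location, the explicit ./ (or .\) prefix lets the shell run it without consulting PATH at all.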
From: david.abrahams@rcn.com
Subject: Re: [Matthias Troyer <troyer@itp.phys.ethz.ch>]
Date: Sat, 25 Jan 2003 14:25:15 -0500
Both of these approaches still make non-portable assumptions about word size, AFAICT.
[it's very unclear what could have been causing the crash, in that
case. 64 or 48-bit integers will still end up representing a positive
value when only the bottom bits you get via &= 0x7fffffff are kept]
Would you please test the code that I posted, since it is not supposed
to be making any assumptions about word size?
Date: Sat, 25 Jan 2003 14:44:18 -0400
Subject: Re: [Matthias Troyer <troyer@itp.phys.ethz.ch>] [jamboost] Patch for jam bug on Cray
From: "Lex Spoon" <lex@cc.gatech.edu>
Incidentally, unsigned ints are *guaranteed* to have benign overflows
in C. So this caveat could be eliminated by switching to unsigned ints.
From: david.abrahams@rcn.com
Subject: Re: [Matthias Troyer <troyer@itp.phys.ethz.ch>]
Date: Sat, 25 Jan 2003 16:32:29 -0500
Which, as you'll notice, is exactly what I proposed. If the magic 32
bits have any significance in the code, I'd switch to unsigned long,
but it doesn't look as though 32 bits count. Probably size_t is the right
type in the end.
From: "Peter Klotz" <peter.klotz@aon.at>
Date: Sun, 26 Jan 2003 19:38:02 +0100
Subject: jam 2.4: Variable STDLIBPATH obsolete
The variable STDLIBPATH set for the Borland Compiler in Jambase is not
used anywhere else in Jambase.
Shouldn't it be removed?
From: "Jan Mikkelsen" <janm@transactionware.com>
Date: Mon, 27 Jan 2003 23:45:53 +1100
Subject: Temporary files in actions & Java :C modifier
I have just submitted some of my changes into the public depot at
//guest/jan_mikkelsen/jam/src/...
At this point, there are two additions:
- The way strings are expanded in actions has been extended. Temporary
files can be created from expressions, allowing long lists of file names
(for example) to compilers or for configuration files for build tools
(eg: manifest files for Java jars) to be created by Jam on the fly.
- A :C modifier has been added to allow a package name to be extracted
from a file based on the PACKAGESCAN variable.
I've used these changes to compile reasonably sized Java projects to
good effect. Using temporary files to batch large numbers of source
files into a single compiler run gives a significant win with Java
compilers vs one invocation per source file. Automatic classpath
management and manifest generation also helps a lot.
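The compiler-side mechanism this leverages is javac's standard argument-file syntax, roughly (the temporary file name here is hypothetical; the @file form is real javac syntax):

```
javac -classpath build/classes -d build/classes @/tmp/jam-java-sources
```

where the argument file lists one .java path per line, so one compiler invocation (and one JVM startup) covers the whole batch.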
The Jambase that drives this for Java builds is slightly customised.
I'm generalising it at the moment, and I will try to get it submitted quickly.
I'm interested to know whether this is useful and whether improvements can be made.
Subject: Re: Temporary files in actions & Java :C modifier
From: Matt Armstrong <matt@lickey.com>
Date: Mon, 27 Jan 2003 10:46:21 -0700
Great idea! -- I've been meaning to implement this for a while.
I'm curious -- why did you find a need for the numbered form "#n"?
In the Windows world this is commonly known as a response file. It
may make sense to name it that way for jam too.
I had in mind a more friendly syntax, turned on optionally with
another arg to actions. E.g.
actions response Foo
{
ls @(what goes in the temporary file)
}
Without the 'response' flag on actions, the special meaning of @( is
not recognized.
From: "Jan Mikkelsen" <janm@transactionware.com>
Subject: RE: Temporary files in actions & Java :C modifier
Date: Tue, 28 Jan 2003 08:54:44 +1100
There are cases where I have wanted two or three separate response files
in a single action. For example, building a Java jar file and having a
Manifest file created from configuration information in Jamfiles.
Another example might be (stretching slightly, because there are
alternatives) a DEF file for the Microsoft linker.
For example, something like:
actions together DoJavaLib {
"$(JAR)" Ufm "$(<)" @#0Created-By:$(SPACE)Jam @@^"$(>)"
@#0Manifest-version:$(SPACE)1.0
@#0Class-Path:$(SPACE)$(JAR_CLASSPATH:J=$(SPACE))
@#0$(JAR_MANIFEST)
}
Which creates a single command:
"jamjar" Ufm "/path/to/target.jar" /tmp/jar-manifest @/tmp/jar-sources
With the two /tmp/jar-* files managed by Jam, and with the numbered file
containing the four numbered expressions concatenated.
I like the idea of a "response" style flag to avoid breaking existing
actions which happen to have the same character sequence as the lead-in
for temporary file generation. My approach was to select sequences so
ugly that no one else would use them!
Using whitespace as a token delimiter fits neatly into the existing
command parsing syntax, and hopefully avoids surprising programmers used
to whitespace being the delimiter in action string expansion.
From: "Craig Allsop" <callsop@sceptre.net>
Subject: Re: jam2.5 & TOP?
Date: Tue, 28 Jan 2003 18:26:57 +1000
Sorry, it looks like I opened a can of worms.
It now includes Jamrules correctly in both my test case and the real deal;
however, should SUBDIR also reflect the value of TOP? In 2.4 it represented
TOP plus the local path. If you add 'Echo SUBDIR "=" $(SUBDIR) ;' to
jamtest\test\jamfile you'll see. Here is the difference:
[d:\jamtest\test]jam1 -v
Jam 2.4. OS=NT. Copyright 1993-2002 Christopher Seiwald.
[d:\jamtest\test]jam2 -v
Jam 2.5rc1. OS=NT. Copyright 1993-2002 Christopher Seiwald.
[d:\jamtest\test]set TOP=d:\jamtest
[d:\jamtest\test]jam1
Jamrules
Jamfile
SUBDIR = d:\jamtest\test
...found 7 target(s)...
[d:\jamtest\test]jam2
Jamrules
Jamfile
SUBDIR = .
...found 7 target(s)...
Another issue: Is it legal for the root to contain a Jamfile with this?
SubDir TOP ;
SubInclude TOP test ;
If so, the new code includes Jamrules twice (because the first time
through, TOP-UP is set to nothing), and it then hangs including .\jamfile
over and over (with or without TOP set). Jam 2.4 is OK. Here is exactly what I have:
[d:\jamtest]cat jamrules
Echo Jamrules ;
[d:\jamtest]cat jamfile
SubDir TOP ;
SubInclude TOP test ;
[d:\jamtest]cat test\jamfile
SubDir TOP test ;
Echo Jamfile ;
Echo SUBDIR "=" $(SUBDIR) ;
[d:\jamtest]test\jam1
Jamrules
Jamfile
SUBDIR = d:\jamtest\test
...found 7 target(s)...
[d:\jamtest]test\jam2
Jamrules
...hangs
The reason we've been using TOP is that we have trouble with Visual
Studio: it cannot locate debugging information when you go to a
library, build it manually, and then go to an executable that depends on
this library and build it. I believe Visual Studio thinks the symbols for
this library are local to the executable. I haven't looked real closely at
the issue, since setting TOP results in fully qualified paths for all targets
no matter where jam is launched, which solves the problem.
Subject: Re: jam2.5 & TOP?
From: Matt Armstrong <matt@lickey.com>
Date: Tue, 28 Jan 2003 09:24:45 -0700
There is also another problem with relative paths under Windows: if
you build within a subdir the default Jam rules will feed paths like
"../../what/ever/foo.obj" to the Visual C++ lib.exe but
"what/ever/foo.obj" when built from the root of the project. lib.exe
treats these two as _different_ objects, unlike Unix archivers.
This means you cannot build from anywhere but the root of the project.
I solved this by adding a PWD rule that, together with support rules
written in Jam, can fully qualify TOP regardless of where jam is run from.
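That approach presumably looks something like the following (purely a sketch; PWD here is a hypothetical built-in added to jam's C source that returns the current working directory, and FDirName is the standard Jambase path-joining rule):

```
# In Jamrules, before the first SubDir: anchor TOP to an absolute
# path so every target binds to the same fully qualified name,
# no matter which subdirectory jam is launched from.
CWD = [ PWD ] ;
TOP = [ FDirName $(CWD) $(TOP) ] ;
```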
From: David Abrahams <david.abrahams@rcn.com>
Subject: Re: Patch for jam bug on Cray
Date: Fri, 31 Jan 2003 08:59:07 -0500
OK, I think I understand what you are saying.
Let me just make sure I'm not crazy, though: AFAICT this is not a bug
in the Jam code I suggested, but in the C implementation. The C
standard gives no leeway for the implementation to do anything which
generates a signal in response to unsigned integer arithmetic. Correct?
Mostly, except for the conformance issue.
From: David Abrahams <david.abrahams@rcn.com>
Subject: Re: Patch for jam bug on Cray
Date: Fri, 31 Jan 2003 12:39:04 -0500
No need to "admit"; it was an appropriate response on your part since
you only have to support the Cray. As somebody working on the jam
code for many platforms, I wanted to find a fix that was guaranteed
portable to conforming 'C' implementations so that this wouldn't
happen with any other platforms. It was just bad luck that the Cray
'C' compiler is non-conforming in this regard ;-)
That was my suggestion ;-)
What I would like to do now is to recover my last suggested
implementation (the one using unsigned integers that should work on
all conforming implementations and which I don't seem to have
anymore), and add a patch for Cray which masks bits in the right place
based on a preprocessor switch. The most expedient way to get there
would be for Matthias to make the modification and test that
implementation before posting it here. Can we do that?
Subject: Re: Jam2.5 and precompiled headers
From: Matt Armstrong <matt@lickey.com>
Date: Fri, 31 Jan 2003 16:05:13 -0700
By "batching" do you mean throwing multiple .c files onto the command
line, instead of running cl.exe multiple times? I did a test here and
the performance gain was measurable but not impressive.
In a project at work, one library consists of 78 .c files. Compiling
each .c file individually takes 65 seconds. Compiling all the .c
files together on the command line takes 60 seconds. Neither test
made any use of pre-compiled headers.
So it appears that batch compilation isn't a big win after all. Does
this match the experience of others?
Subject: RE: Jam2.5 and precompiled headers
Date: Fri, 31 Jan 2003 15:34:53 -0800
From: "Chris Antos" <chrisant@windows.microsoft.com>
I said you get maximum win from batch compilation WHEN YOU USE
PRECOMPILED HEADERS. :-) (caps to add visibility to key point)
Between batch compilation and precompiled headers, the latter gives a
bigger win. Combining the two gains some additional improvements (or at
least it did in the big project I worked on a few years ago -- right now
I'm working on a very small project only a couple thousand files, and I
haven't checked to make sure the compiler still has those additional
improvements, but it's hard to imagine they'd have been removed :-).
But 65 versus 60 seconds -- based on your timings, that's nearly a 10%
boost even without precompiled headers. If your build took 1:48:00
(~1.8 hours) then you could expect about 10 minutes improvement (~1.65
hours). Build labs frequently care about how long it takes to build.
Consider a build that takes 20 hours (oh yes such things exist :-), it
would cut the build time by about 1.5 hours!!
I'm not saying batch compilation is useful for small projects -- I'm
only saying that the larger your project, the more you will benefit from it.
From: Elliot Murphy <elliot.murphy@veritas.com>
Subject: RE: Jam2.5 and precompiled headers
Date: Fri, 31 Jan 2003 21:58:54 -0500
My experience with a large project that takes 1-2 hours to build
(using build.exe from the Windows DDK on a 2-proc system) is that using
batch compilation gave an average 10-15% speedup. Like Chris says, the
larger your project, the more important these incremental performance
improvements become.
Subject: Re: Jam2.5 and precompiled headers
From: Matt Armstrong <matt@lickey.com>
Date: Sat, 01 Feb 2003 09:56:07 -0700
Interestingly, I see performance gains of 70-80% when a whole library
is turned into a single .c file (e.g. add one .c file that #includes
all the others and compile that).
Unfortunately this doesn't always work, given the semantics of a C
compilation unit.
I've also looked into trying to get visual c++ to support parallel
builds (-j 2). Unfortunately, the compiler barfs because it wants to
build a .pdb file. :-(
From: Elliot Murphy <elliot.murphy@veritas.com>
Subject: RE: Jam2.5 and precompiled headers
Date: Sat, 1 Feb 2003 12:37:39 -0500
Without knowing exactly what error you're getting from visual c++,
we had trouble on parallel builds with vc++ until we added the /Fd
switch, which allows you to specify which name should be used
for the pdb file. (I haven't done this with jam yet or I would post
more details).
|From: Matt Armstrong [mailto:matt@lickey.com]
|Sent: Saturday, February 01, 2003 11:56 AM
|Subject: Re: Jam2.5 and precompiled headers
|
|> I said you get maximum win from batch compilation WHEN YOU USE
|> PRECOMPILED HEADERS. :-) (caps to add visibility to key point)
|
|Interestingly, I see performance gains of 70-80% when a whole library
|is turned into a single .c file (e.g. add one .c file that #includes
|all the others and compile that).
|
|Unfortunately this doesn't always work, given the semantics of a C
|compilation unit.
|
|> Between batch compilation and precompiled headers, the latter gives
|> a bigger win. Combining the two gains some additional improvements
|
|I've also looked into trying to get visual c++ to support parallel
|builds (-j 2). Unfortunately, the compiler barfs because it wants to
|build a .pdb file. :-(
Subject: RE: Jam2.5 and precompiled headers
Date: Sat, 1 Feb 2003 12:02:29 -0800
From: "Chris Antos" <chrisant@windows.microsoft.com>
I got roughly 50% performance gain simply from using precompiled
headers; i.e. I cut my build time basically in half. You can
potentially get less or more improvement depending on what your
precompiled header actually includes; i.e. how much redundant processing
is offloaded into the pch. Take an additional 10% for batch
compilation, and that's in the neighborhood of the gain you're seeing
from turning the whole library into a single .c file. You're right,
it's not always possible. Some other drawbacks are that it can make
several things very difficult and you may get very poor linker
optimization results, and may have a lot of dead code left in the
executable, bogus DLL dependencies, and so on.
By the way, to solve the multiple compiler processes blocking each other
from accessing the pdb file, use the /Z7 compiler flag instead of /Zi.
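As a minimal sketch of that suggestion (assuming the stock Jambase Cc
and C++ actions pick up these flag variables):

```
# Sketch: /Z7 embeds debug info in each .obj instead of writing a
# shared .pdb, so "jam -j2" can run several compiles concurrently
# without the compilers blocking each other on the pdb file.
if $(NT) {
    CCFLAGS  += /Z7 ;
    C++FLAGS += /Z7 ;
}
```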
From: Matt Armstrong [mailto:matt@lickey.com]
Sent: Saturday, February 01, 2003 8:56 AM
Interestingly, I see performance gains of 70-80% when a whole library
is turned into a single .c file (e.g. add one .c file that #includes
all the others and compile that).
Unfortunately this doesn't always work, given the semantics of a C
compilation unit.
I've also looked into trying to get visual c++ to support parallel
builds (-j 2). Unfortunately, the compiler barfs because it wants to
build a .pdb file. :-(
Date: Tue, 04 Feb 2003 01:38:53 +1100
From: Russell <rjshaw@iprimus.com.au>
Subject: Confused by SubDir
When i put: SubDir TOP d1 ... dn ;
into a jamfile, is the directory: TOP d1 ... dn
always supposed to be the current directory of that jamfile,
or is it designed so that the following commands can be made to
work in a directory other than the current one?
http://public.perforce.com/public/jam/src/Jambase
rule SubDir {
#
# SubDir TOP d1 ... ;
#
# Support for a project tree spanning multiple directories.
#
# SubDir introduces a Jamfile that is part of a project tree whose
# root is $(TOP). TOP is a user-selected variable name for the
# tree; d1 ... are the directory elements that lead from the root
# of the tree to the directory of the Jamfile.
#
# When jam reads the Jamfile in the current working directory
# (CWD), the first SubDir call sets $(TOP) to the back path to
# the project root for use by subsequent SubDir calls. The path
# contains one ../ for each directory from the root.
#
# Each SubDir call sets the (fixed) variable $(SUBDIR) to the path
# from the CWD to the named directory. SubDir also sets other
# Jambase variables (SEARCH_SOURCE, LOCATE_TARGET) to $(SUBDIR),
# so that file names within the Jamfile refer to $(SUBDIR).
#
# The first invocation of SubDir for TOP includes the
# project-specific rules files $(TOPRULES). If $(TOPRULES) is
# not set SubDir looks for a Jamrules in $(TOP) and includes that
# if present.
#
# SubDir supports different TOPs for separate project trees; it
# simply uses the last value of $(SUBDIR) instead of the CWD when
# computing the path to $(TOP).
Subject: Re: Confused by SubDir
From: Matt Armstrong <matt@lickey.com>
Date: Mon, 03 Feb 2003 10:08:05 -0700
The first -- the directory components "d1 ... dn" are supposed to name
the directory that the Jamfile is in, relative to the top of the source tree.
Date: Mon, 3 Feb 2003 10:36:09 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Jam2.5 and precompiled headers
I needed to solve this problem for a build I was working on at the time.
We are using pre-compiled headers and .pdb debugging - I don't think it's
possible to use that combination with the standard version of jam.
There is /Z7 and /Zi debugging available. /Z7 places the debug info into
the .obj file; /Zi places the debug info into an external file which you can name.
With pre-compiled headers, you'll compile a single file with a switch to
generate a pre-compiled header file (.pch) and then use that file when compiling
some set of files (all the files for a library, all the files for a .dll, etc.)
Using multiple jobs with .pch files is easy; just set it up. Using multiple
jobs with .pdb debugging is possible, if you specify a different database
for every .obj file. But this makes the build area quite a lot larger than
normal. If two compile actions are running at the same time with the same .pdb
file, one of them will fail as they both want to open the file to write into,
and they both can't do that.
When you use .pch files with .pdb files, the compiler demands that the .pch
file provided was compiled with the same .pdb file that the current .obj file
is using. This means that for some set of .obj files, they need to be
compiled with the same .pch and .pdb file, which means that only one of those
compiles can be executing at a time due to the file lock problem.
I solved this problem in my version of jam by introducing a semaphore node.
You assign a semaphore to a node by setting its JAM_SEMAPHORE variable.
Something like:
JAM_SEMAPHORE on $(target) = $(dll-semaphore) ;
The same semaphore node is assigned to all of the files which would share the
same .pdb file. Jam then ensures that only one action is launched at a time
for each semaphore. I can then use multiple jobs, and as long as there is
work for the build (if you're building multiple .dll's or libraries) then
multiple jobs will be launched.
The code for this is available in my public branch.
//guest/craig_mcpheeters/jam/src/
In the Jamfile.config file in that branch, look for documentation on the
OPT_SEMAPHORE define, which is how you turn it on.
Date: Wed, 05 Feb 2003 21:58:22 +1100
From: Russell <rjshaw@iprimus.com.au>
In the top level Jamfile, i've got:
SubDir TOP ;
Main myprog : main.c analog.c serial.c ;
SubInclude TOP main ;
SubInclude TOP analog ;
SubInclude TOP serial ;
In the 'main' subdirectory jamfile, i have:
SubDir TOP main ;
Objects main.c ;
In the 'analog' subdirectory jamfile, i have:
SubDir TOP analog ;
Objects analog.c ;
In the 'serial' subdirectory jamfile, i have:
SubDir TOP serial ;
Objects serial.c ;
When i run jam from the top directory, i get errors:
don't know how to make main.c
don't know how to make analog.c
don't know how to make serial.c
How do you jam with subdirectories? Everything i try gives errors.
Date: Wed, 5 Feb 2003 13:47:06 +0100 (MET)
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Supposing that your source files reside in the respective subdirectories,
this is not surprising.
As a general remark: The Main rule is intended to compile the source
files and link the objects. The Objects rule compiles source files to
objects. So in the way you apply these rules, the source files would be
compiled twice. So you should either drop the Objects rules or use
MainFromObjects instead of Main.
1) Dropping the Objects rules (and, if you like, also the then-empty
Jamfiles in the subdirectories): The errors jam reports are caused by the
fact that it doesn't know where the listed source files are located. By
default jam searches only the directory of the Jamfile, so you have to
tell it explicitly, e.g. by:
SEARCH_SOURCE += [ FDirName $(TOP) main ] [ FDirName $(TOP) analog ]
[ FDirName $(TOP) serial ] ;
This must be inserted before the invocation of the Main rule.
2) Using MainFromObjects instead of Main:
MainFromObjects myprog : <main>main.o <analog>analog.o
<serial>serial.o ;
The grist (the `<...>' annotation) is needed to make jam find the object
files. The concept can be looked up in the documentation.
Date: Thu, 6 Feb 2003 16:23:19 +0100 (CET)
From: Matze Braun <matze@braunis.de>
Subject: Problems with SubDir rule in jam-2.5rc1
I just tried updating to jam-2.5 and I'm having problems with the SubDir
rule now. It is impossible to set a custom TOP path now. With jam 2.4 I
did something like this:
Directory Layout:
source/Jamfile
source/Jamrules
source/...several subdirs with Jamfiles...
build/Jamfile
Jamfile in source looks like this:
SubDir TOP ;
SubInclude ... ;
The build/Jamfile looked like this:
TOP = ../source ;
include $(TOP)/Jamfile ;
so I was able to build source into another directory easily. However in
the jam-2.5 rules, my Jamrules file isn't included when I'm setting the
TOP variable :-/
Date: Thu, 6 Feb 2003 16:24:49 +0100 (CET)
From: Matze Braun <matze@braunis.de>
Subject: smallbugfix for SubDir rule (fwd)
-Just sent this again as my other mail didn't make it through to the list
it seems... Sorry if it arrives twice-
I just found a bug in jam 2.5rc1. I'm building a project and I'm using
SubDir /SubInclude rules. This all works nicely until I set ALL_LOCATE_TARGET.
All object files then go to the ALL_LOCATE_TARGET, but jam isn't creating
subdirectories in the ALL_LOCATE_TARGET directory. For example
mysubdir/foo.cpp
with ALL_LOCATE_TARGET = out/linux ;
will be compiled into:
out/linux/foo.o
but should be compiled into
out/linux/mysubdir/foo.o
Attached to this mail is a 2 line fix for the problem.
Date: Fri, 07 Feb 2003 02:37:53 +1100
From: Russell <rjshaw@iprimus.com.au>
I decided to do (2) above. I'm nearly there, but still have a problem. The
top jamfile has:
SubDir TOP ;
MainFromObjects myprog : <main>main.o <serial>serial.o ;
SubInclude TOP main ;
SubInclude TOP serial ;
The jamfile in the 'main' subdirectory has:
SubDir TOP main ;
Objects main.c ;
The jamfile in the 'serial' subdirectory has:
SubDir TOP serial ;
Objects serial.c ;
Running 'jam' at the top level gives errors:
don't know how to make main.o
don't know how to make serial.o
The .o files get created in the subdirectories, but MainFromObjects
doesn't see those. I've tried setting SEARCH_SOURCE += [ FDirName $(TOP) main ],
but i think that only works for finding .c source files and not .o files.
I couldn't find how to set a search path for intermediate files such as .o.
Date: Thu, 6 Feb 2003 17:49:23 +0100 (CET)
From: Matze Braun <matze@braunis.de>
Subject: MakeLocate bugfix
I'm having a Jamfile like this:
LOCATE = out ;
Main foo : source/bar.cpp ;
This fails at the moment because jam doesn't create the out/src directory.
Date: Thu, 6 Feb 2003 17:51:21 +0100 (CET)
From: Matze Braun <matze@braunis.de>
Subject: Re: MakeLocate bugfix
ups, of course I meant LOCATE_TARGET.
Date: Thu, 6 Feb 2003 18:13:01 +0100 (CET)
From: Matze Braun <matze@braunis.de>
Subject: Re: MakeLocate bugfix
Sorry to spam this list... But my first patch has been highly buggy. This
one should be okay now.
Subject: Re: MakeLocate bugfix
From: Matt Armstrong <matt@lickey.com>
Date: Thu, 06 Feb 2003 11:53:43 -0700
Here you are using a target name of "source/bar.cpp" but you will find
that you are fighting the spirit of Jam doing this. The best way to
do what you want is this:
LOCATE_TARGET = out ;
SEARCH_SOURCE += source ;
Main foo : bar.cpp ;
or, if you know that just bar.cpp is in the source dir:
LOCATE_TARGET = out ;
LOCATE on bar.cpp = source ;
Main foo : bar.cpp ;
The idea is to use SEARCH and LOCATE to attach directory names to
targets.
I've also made it so that myprog.exe is put into the objs subdirectory:
SubDir TOP ;
LOCATE_TARGET = objs ;
MainFromObjects myprog : main.o serial.o ;
LOCATE on main.o = main ;
LOCATE on serial.o = serial ;
In Jamrules, i've made gcc generate myprog.exe as well as a myprog.map
file. However, despite trying a few different ways, i can't get myprog.map
to go into the 'objs' directory; it just ends up in the top directory. There
seems to be dozens of possible ways to do that. What would be the right way
that runs with the philosophy of jam?
Date: Sat, 08 Feb 2003 18:31:42 +1100
From: Russell <rjshaw@iprimus.com.au>
After experimenting and reading http://public.perforce.com/public/jam/src/Jambase
dozens of times, i came up with a new rule that looks kool, and i thought it might
be a good one to add to jambase, if there isn't an obvious alternative that i've missed.
I call it MvAfter (move after). You use it when an action generates an output
file that you're primarily interested in (such as the .exe output of a linker),
and another secondary file that is a side-effect from the same action (such as
a map file from a linker). MvAfter makes it easy to move the secondary file to
any directory, *after* the primary action has been done.
This is an example top-level Jamfile:
SubDir TOP ;
LOCATE_TARGET = objs ;
# generate myprog.exe and myprog.map
MainFromObjects myprog : main.o serial.o ;
# move $(TOP)/myprog.map to $(TOP)/objs/myprog.map after myprog.exe is generated.
LOCATE_TARGET = objs ;
MvAfter myprog.map : myprog.exe ;
LOCATE on main.o = main ;
LOCATE on serial.o = serial ;
SubInclude TOP main ;
SubInclude TOP serial ;
In Jamrules:
rule MvAfter {
# MvAfter secondary_file_name : file_name_resulting_from_primary_action
MakeLocate $(<) : $(LOCATE_TARGET) ;
Depends $(<) : $(>) ;
Depends all : $(<) ;
Clean clean : $(<) ;
}
actions MvAfter { $(MV) $(<:BS) $(<) ; }
I still haven't really got the hang of where and when to use grists.
Can this code be refined? I've only tested it a few times so far.
Date: Sun, 09 Feb 2003 03:16:54 +1100
From: Russell <rjshaw@iprimus.com.au>
Subject: Implicit action won't work
When i run jam, the rule HEX executes, but the action HEX
doesn't execute. Why?
In Jamrules:
rule HEX { echo "Doing rule" ; }
actions HEX { echo "Doing action" }
Date: Sun, 09 Feb 2003 12:22:07 +1100
From: Russell <rjshaw@iprimus.com.au>
Subject: Re: Implicit action won't work
Ok, i got that sorted;)
Now, i'm trying to understand SEARCH behaviour. In the Jamfile, i have:
SEARCH_SOURCE = objs ;
LOCATE_TARGET = objs ;
HEX myprog : myprog.exe ;
In Jamrules, i have:
rule HEX {
LOCATE on $(<) = $(LOCATE_TARGET) ;
SEARCH on $(>) = $(SEARCH_SOURCE) ;
Depends $(<) : $(>) ;
Depends all : $(<) ;
echo "Rule HEX:" $(<) $(>) ;
}
actions HEX { echo "Action HEX:" $(<) $(>) }
Running jam gives:
Rule HEX: myprog myprog.exe
Action HEX: myprog objs/myprog.exe
I don't understand why the second line doesn't say:
Action HEX: objs/myprog objs/myprog.exe
Subject: Re: Implicit action won't work
From: "Ingo Weinhold" <bonefish@cs.tu-berlin.de>
Date: Sun, 09 Feb 2003 16:23:38 CET (+0100)
I don't know either. I threw your lines into a Jamfile and got:
$ jam-2.4
Rule HEX: myprog myprog.exe
..found 9 target(s)...
..updating 1 target(s)...
HEX objs/myprog
Action HEX: objs/myprog objs/myprog.exe
..updated 1 target(s)...
Tested with plain Jam 2.4.
Date: Mon, 10 Feb 2003 12:32:06 +1100
From: Russell <rjshaw@iprimus.com.au>
Subject: Re: Implicit action won't work
Hi, i managed to get that result too (working too late;)
I think i don't fully understand the jam process, so i made
a small example that causes various problems.
In Jamrules, i have:
TOP = /home/russell/test ;
SUFEXE = .exe ;
rule HEX {
local _h ;
_h = $(<:S=.hex) ;
Depends $(_h) : $(>) ;
Depends all : $(_h) ;
echo "Rule Hex:" $(<) $(_h) $(>) ;
}
actions HEX { echo "Action hex:" $(<) $(>) }
In Jamfile, i have:
SubDir TOP ;
HEX myprog : myprog.exe ;
I created /home/russell/test/myprog.exe with "touch myprog.exe".
When i run jam, i get:
Rule hex: myprog myprog.hex myprog.exe
...found 9 target(s)...
which shows action HEX didn't execute. The last few lines of jam -d 3
shows:
made newer myprog.exe
made+ missing myprog.hex
made update all
...found 9 target(s)...
I don't know why "actions HEX" doesn't work. There's no jam options
to put out a simple dependency tree.
Date: Mon, 10 Feb 2003 11:39:38 +0100 (MET)
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: Re: Implicit action won't work
Mmh, what is wrong with `-d3'?
Your problem here is that HEX is invoked on myprog.exe, not on myprog.hex.
Thus for jam no rule is invoked on myprog.hex and hence it can't execute
any actions.
You can work around by renaming the actions to HEX1 and add
`HEX1 $(_h) : $(>) ;' to your HEX rule. This way HEX1 is called for your
target and the respective actions will be executed.
Date: Mon, 10 Feb 2003 23:27:47 +1100
From: Russell <rjshaw@iprimus.com.au>
Subject: Re: Implicit action won't work
I just saw that a space was used after -d at
http://public.perforce.com/public/jam/src/Jam.html
This how i thought it worked: rule HEX is invoked with "HEX myprog : myprog.exe ;",
causing action HEX to be invoked with arguments "myprog : myprog.exe"
I thought if rule HEX was executed, then action HEX would *always* be
executed too. It says here:
http://www.perforce.com/perforce/conf2001/wingerd/WPLaura.pdf
3.7 Implicitly Invoked Actions
When an action and a rule have the same name, Jam implicitly invokes
the action with the same arguments that were used in the rule invocation.
Here's an example of an implicitly invoked action:
rule MyRule { Depends all : $(1) ; }
actions MyRule { p4 info > $(1) }
MyRule info.output ;
A single statement invokes "MyRule". The MyRule rule will be run
during the parsing phase, and the MyRule action will be run during
the updating phase. Both are passed the same argument, info.output.
Date: Mon, 10 Feb 2003 15:31:10 +0100 (MET)
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: Re: Implicit action won't work
That's not what I was driving at -- the space doesn't matter -- I was
merely saying that this option does print a dependency tree.
Obviously it's not right.
From: "Peter Klotz" <peter.klotz@aon.at>
Date: Mon, 10 Feb 2003 21:33:39 +0100
Subject: Problem with header file scanning and updates
I have the following problem with jam 2.4:
base.ui --> (base.cpp, base.hpp) --> (derived.cpp, derived.hpp)
The pair base.cpp/base.hpp is generated from base.ui.
The contents of derived.cpp and derived.hpp are as follows (simplified):
derived.cpp:
#include "derived.hpp"
derived.hpp
#include "base.hpp"
If I change base.ui, jam detects that it has to (re)generate
base.cpp/base.hpp.
Then header file scanning finds that derived.hpp includes base.hpp and that
derived.cpp includes derived.hpp.
Unfortunately derived.cpp is not rebuilt for some reason, although it should be.
Subject: Re: Problem with header file scanning and updates
From: Matt Armstrong <matt@lickey.com>
Date: Mon, 10 Feb 2003 13:51:09 -0700
Is it rebuilt the second time you run Jam?
Does Jam know that base.hpp depends on base.ui?
From: "Peter Klotz" <peter.klotz@aon.at>
Subject: Re: Problem with header file scanning and updates
Date: Mon, 10 Feb 2003 21:59:33 +0100
Yes, my rule looks as follows:
rule UicR {
# UicR somefile.cpp : somefile.ui ;
local _ui = [ FGristFiles $(>) ] ; # .ui file
local _hpp = [ FGristFiles $(<:S=.hpp) ] ; # .hpp file
local _cpp = [ FGristFiles $(<) ] ; # .cpp file
SEARCH on $(_ui) = $(SEARCH_SOURCE) ;
MakeLocate $(_cpp) $(_hpp) : $(LOCATE_SOURCE) ;
Depends uic : $(_cpp) $(_hpp) ;
Depends $(_cpp) $(_hpp) : $(_ui) ;
Includes $(_cpp) : $(_hpp) ;
Clean clean : $(_cpp) $(_hpp) ;
Uic $(_cpp) $(_hpp) : $(_ui) ; # call corresponding action
}
And the corresponding action is:
actions Uic {
cd $(>:D)
$(UIC) -o $(<[2]:B)$(<[2]:S) $(>:B)$(>:S)
$(UIC) -o $(<[1]:B)$(<[1]:S) -impl $(<[2]:B)$(<[2]:S) $(>:B)$(>:S)
}
Date: Tue, 11 Feb 2003 14:49:10 +1100
From: Russell <rjshaw@iprimus.com.au>
Subject: Re: Implicit action won't work
Ok, i downloaded the jam source and did:
cd ~/jam-2.4
make
cd bin.linuxx86
ddd ./jam
but i found there's no debug info generated. What option
do i pass to 'make' to have the -g debug info created?
I read the makefiles, but i'm still not sure.
Subject: RE: Implicit action won't work
Date: Mon, 10 Feb 2003 20:13:23 -0800
From: "Chris Antos" <chrisant@windows.microsoft.com>
It's true, if you take into account that the whole point of Jam or Make
or Nmake, etc., is to define dependencies so that only the actions that
*need* to happen are actually executed. It seems you interpreted the
section as saying "actions are always executed if a rule is executed",
but (a) I can't find where it says that [good, because it's not true
:-)] and (b) the point that section is trying to make is that "when an
action is executed, its arguments are the same as those used in the rule
invocation".
In your case, the problem is simple. You never said that $(<) depends
on $(_h). So de facto it does not. :-)
Depends all : $(_h) ;
You did say that "all" depends on $(_h), so at some point during Jam it
should in fact build the .hex file, but possibly not in time for it to
serve as an input to whatever actions are using. "all" is a
pseudo-target; note that the statement above does not mean "make each
individual target depend on $(_h)". Did you add that line just to
suppress the "independent target" warning that you'd otherwise get?
Here's the line you should be using instead:
Depends $(<) : $(_h) ;
Subject: Re: Problem with header file scanning and updates
From: Matt Armstrong <matt@lickey.com>
Date: Mon, 10 Feb 2003 21:38:39 -0700
Then you have a problem with dependencies.
My theory is that the problem is caused by "grist."
I see that you are using grist (FGristFiles) in your UicR rule. If
you are using SubDir, then FGristFiles will cause each directory in
your project to have different grist added to the targets.
So your UicR rule will create a target something like <a!b!c>base.hpp.
However, Jambase by default does not grist header files found during
the header scan phase. So Jam will scan derived.cpp and find
#include "base.hpp"
and create a base.hpp target.
So you end up with <a!b!c>base.hpp and base.hpp -- two Jam targets
that reference the same file.
The first time you run Jam, base.hpp is not modified on disk yet, so
derived.cpp is not out of date (Jam does not realize base.hpp and
<a!b!c>base.hpp are the same file). The second time, Jam notices that
base.hpp is newer than derived.cpp and recompiles derived.cpp.
This is one place where jam is weak -- I wish there were an elegant
solution to this problem.
You can check if this is happening by running
jam -d5 | egrep '(base.hpp|derived.cpp)'
You'll see calls to Depends and Includes that set up the dependency
tree. You'll probably find that the base.hpp included by derived.cpp
has no grist, but the base.hpp generated by your custom rule has grist.
There are three solutions (probably more):
1) Explicitly set up the dependency in your Jamfile:
Includes base.hpp : <a!b!c>base.hpp ;
This second one tricks Jam into thinking base.hpp (the one
discovered at header scan time) "includes" <a!b!c>base.hpp (the one
generated by your rule), so any file including base.hpp (namely,
derived.cpp) will end up depending on <a!b!c>base.hpp.
2) Turn off grist. Do this by setting the SOURCE_GRIST variable to
nothing after every SubDir call.
SubDir TOP foo ;
SOURCE_GRIST = ;
This assumes that no two directories in your project will ever have
files with the same name.
3) Use FGristSourceFiles in your UicR rule. Add a modified version of
Jambase' FGristSourceFiles rule to your Jamrules. The modified
version will not add grist to .hpp files, just as the stock Jambase
version does not add grist to .h files.
This assumes that your project will never have two .hpp files with
the same name in different directories.
I wish there were an option 4)
4) Modify jam itself such that all targets that are bound to the same
file on disk automatically have an Includes dependency amongst
themselves (since anybody depending on one of them will want to
depend on all of them).
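A sketch of option 3, assuming a Jambase where FGristSourceFiles can be
overridden from Jamrules (the suffix switch is the only change from the
stock one-line rule):

```
# Sketch: like FGristFiles, but leave .h and .hpp files ungristed so
# the target found by header scanning and the target generated by a
# custom rule like UicR end up being the same jam target.
rule FGristSourceFiles {
    local _i _result ;
    for _i in $(<) {
        switch $(_i) {
        case *.h   : _result += $(_i) ;
        case *.hpp : _result += $(_i) ;
        case *     : _result += $(_i:G=$(SOURCE_GRIST:E)) ;
        }
    }
    return $(_result) ;
}
```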
Date: Tue, 11 Feb 2003 16:01:27 +1100
From: Russell <rjshaw@iprimus.com.au>
Subject: Re: Implicit action won't work
I don't seem to get it yet;)
The original code is at the end of this message for clarity.
In Jamfile, i have:
HEX myprog : myprog.exe ;
In Jamrules:
_h = $(<:S=.hex) ; # _h is myprog.hex
Depends $(_h) : $(>) ; # myprog.hex depends on myprog.exe
Depends all : $(_h) ; # 'all' depends on myprog.hex
So, jam sees that 'all' depends on myprog.hex, which in
turn depends on myprog.exe. Therefore, the actions for
building myprog.exe and myprog.hex should execute. However,
i still don't get why the "actions HEX" for myprog.hex doesn't happen.
This means:
Depends myprog : myprog.hex ;
That's what i don't get. If 'all' depends on myprog.hex,
then why is 'myprog' needed in a Depends statement?
Original post:
In Jamfile, i have:
SubDir TOP ;
HEX myprog : myprog.exe ;
In Jamrules, i have:
TOP = /home/russell/test ;
SUFEXE = .exe ;
rule HEX {
local _h ;
_h = $(<:S=.hex) ;
Depends $(_h) : $(>) ;
Depends all : $(_h) ;
echo "Rule Hex:" $(<) $(_h) $(>) ;
}
actions HEX { echo "Action hex:" $(<) $(>) }
I created /home/russell/test/myprog.exe with "touch myprog.exe".
When i run jam, i get:
Rule hex: myprog myprog.hex myprog.exe
...found 9 target(s)...
which shows action HEX didn't execute. The last few lines of jam -d 3
shows:
made newer myprog.exe
made+ missing myprog.hex
made update all
...found 9 target(s)...
I don't know why "actions HEX" doesn't work.
Date: Mon, 10 Feb 2003 21:52:28 -0800 (PST)
Subject: RE: Implicit action won't work
Rules aren't exactly what you'd call "executed", but let's leave that for
now... I believe your confusion regarding actions always getting run
probably arises from the difference between this:
$ cat Jamfile
rule HEX { Echo "Doing rule" ; }
actions HEX { echo "Doing action" }
HEX somefoo ;
$ jam
Doing rule
...found 7 target(s)...
And this:
$ jam somefoo
Doing rule
...found 1 target(s)...
...updating 1 target(s)...
HEX somefoo
Doing action
...updated 1 target(s)...
In the second case I've specifically asked Jam to build "somefoo", and so
it does. In order for just 'jam' to do the actions, however, you need to
have something that you've asked for depend on the target that your
actions would build. Which I assume is why you added the dependency of
"all" (the default "asked for without having to explicitly do so" Jam
target) to your rule:
rule HEX { Echo "Doing rule" ; Depends all : $(<) ; }
$ jam
Doing rule
...found 8 target(s)...
...updating 1 target(s)...
HEX somefoo
Doing action
...updated 1 target(s)...
Since the actions at this stage of your experimenting don't actually
create "somefoo", the target will always be out-of-date, so the actions
will always run.
Now let's look at where you currently really are with your rule, actions and target:
rule HEX {
local _h ;
_h = $(<:S=.hex) ;
Depends $(_h) : $(>) ;
Depends all : $(_h) ;
echo "Rule Hex:" $(<) $(_h) $(>) ;
}
actions HEX { echo "Action hex:" $(<) $(>) }
HEX myprog : myprog.exe ;
$ jam
Rule Hex: myprog myprog.hex myprog.exe
...found 9 target(s)...
$ jam myprog
Rule Hex: myprog myprog.hex myprog.exe
...found 1 target(s)...
...updating 1 target(s)...
warning: using independent target myprog.exe
HEX myprog
Action hex: myprog myprog.exe
...updated 1 target(s)...
Note two things -- one, you still need to ask for your target in order for
Jam to build it (because nothing depends on it), and two, you've got an
"independent target".
What you've not actually done yet is told Jam that just saying 'jam'
should build HEX targets -- nor have you got your dependency on myprog.exe
set up correctly, which is why it's an "independent target" -- no target
depends on it, because "myprog.hex" isn't a target, and "myprog", which
is, doesn't depend on "myprog.exe".
The simplest solution -- change your target to be:
HEX myprog.hex : myprog.exe ;
Or, if you want to be a bit more generic:
HEX myprog$(SUFHEX) : myprog$(SUFEXE) ;
And change your rule to be:
rule HEX {
Depends all : $(<) ;
Depends $(<) : $(>) ;
Echo "Rule Hex:" $(<) $(>) ;
}
Now just running 'jam' will do what you want:
$ jam
Rule Hex: myprog.hex myprog.exe
...found 9 target(s)...
...updating 1 target(s)...
HEX myprog.hex
Action hex: myprog.hex myprog.exe
...updated 1 target(s)...
Because you (implicitly) asked for the (pseudo) target "all" (by just
running 'jam') and everything it depends on to be built.
If you didn't want to include the $(SUFHEX) in your target's name, then
you can keep your rule as it is, but you'd need to pass $(_h) to, eg., a
HEX1 actions:
rule HEX {
[...]
HEX1 $(_h) : $(>) ;
}
actions HEX1 {
echo "Action hex:" $(<) $(>)
}
Note that you can't have your actions just be "HEX", because you need to
pass $(_h) to an actions in order for it to get made, because that's not
your target in your Jamfile -- myprog is -- and nothing depends on myprog.
Date: Tue, 11 Feb 2003 08:56:00 +0300
From: Vladimir Prus <ghost@cs.msu.su>
Subject: Re: Problem with header file scanning and updates
In boost.jam we solve the same problem by introducing SEARCH_FOR_TARGET
builtin. You give a list of paths and a target name to it, and if there's
target already bound to one of paths and with the same name, it returns that
target.
(details are at the bottom of
http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/*checkout*/boost/boost/tools/build/boost_build_v2.html?rev=HEAD&content-type=text/html
)
Your idea looks way simpler! On the other hand, I'm not sure it will work as
is. Assume you're looking for file "parser.h" in directories "a", "b" and "c".
There's <b>parser.h, bound to "b/parser.h" via LOCATE. The standard file
search won't care about that other target, and will declare "parser.h" as
unfound. I think the search mechanism needs to look for other targets
bound to the same location and stop there (adding the include dependencies
as you describe).
Date: Mon, 10 Feb 2003 21:54:50 -0800 (PST)
Subject: RE: Implicit action won't work
Ratz! -- wrong captured line for who "wrote" -- sorry about that. That
should have read ""Russell" <rjshaw@iprimus.com.au> wrote:".
From: "Peter Klotz" <peter.klotz@aon.at>
Subject: Re: Problem with header file scanning and updates
Date: Tue, 11 Feb 2003 07:52:52 +0100
I agree.
That is exactly what I meant to do. There could be several
base.ui/derived.cpp/derived.hpp files in different directories.
A bad situation.
Yes, I find the following:
<PROJECTS!Applications!JamTest>base.hpp
Sounds very good. Since all my Jamfiles are generated automatically this is
not a problem to do.
It already works in my test case.
This is impossible, since the project is spread across several directories
(each containing a static library) and I extensively use grist.
I am not able to guarantee that. Different developers tend to produce files
with identical names.
From: "Peter Klotz" <peter.klotz@aon.at>
Subject: Re: Problem with header file scanning and updates
Date: Tue, 11 Feb 2003 07:53:31 +0100
My Jamfile looks like this.
Main MyProject : derived.cpp base.ui ;
The UserObject rule takes care of base.ui.
The real (large) project is spread across several directories, so I
extensively use the SubDir rule and grist.
Date: Tue, 11 Feb 2003 19:05:01 +1100
From: Russell <rjshaw@iprimus.com.au>
Subject: Re: Implicit action won't work
Hmm. That seems to be the solution. Implicit actions *aren't* always called
when the corresponding rule is.
HEX myprog : myprog.exe ;
An implicit rule/action should be thought of as:
HEX(the hex rule) myprog : myprog.exe ;
HEX(the hex action) myprog : myprog.exe ;
From what i see, "actions HEX" should be invoked if
"rule HEX" sets up a dependency of 'myprog' on 'myprog.exe'.
I see now that an implicit rule is unsuitable for what i'm
doing. Thanks for the long answer; i think i get it now;)
I'll attempt to submit a multi-subdirectory jam example some time.
Date: Tue, 11 Feb 2003 19:33:37 +1100
From: Russell <rjshaw@iprimus.com.au>
Subject: Re: Implicit action won't work
yes, i thought *implicit* actions were always executed, but i know
that's wrong now;)
The .hex file was to be the end result anyway (for an eprom).
rule HEX {
local _h ;
_h = $(<:S=.hex) ;
MakeLocate $(_h) : $(LOCATE_TARGET) ;
Depends $(_h) : $(>) ;
Depends all : $(_h) ;
HEX1 $(_h) : $(>) ;
Clean clean : $(_h) ;
NOTFILE $(<) ;
}
actions HEX1 {
avr-objcopy -O ihex -R .eeprom -g $(>) $(<)
}
LOCATE_TARGET = objs ;
HEX $(PROJ) : $(PROJ).exe ;
Now all i need to do is figure out grist;)
Subject: Re: Jam2.5 and precompiled headers
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 11 Feb 2003 11:46:32 +0100
This sounds good. Do you have an example Jamfile/Jambase for using MSVC
precompiled headers and semaphores?
From: "Chris Haarmeijer" <c.haarmeijer@keepitsimple.nl>
Date: Tue, 11 Feb 2003 12:04:00 +0100
Subject: precompiled headers
We're switching from our MSVC workspaces and makefiles to jamfiles. The
only problem is that we used to have precompiled header and would like
that functionality also using jam. Has anybody used this combination before?
Subject: RE: Jam2.5 and precompiled headers
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 11 Feb 2003 13:56:40 +0100
Naturally, I agree. You mentioned in a private post that there is a bug
in Jam with regards to the 'on' keyword not correctly restricting its
scope. Can you elaborate on this bug and how to fix it?
From: "Chris Haarmeijer" <c.haarmeijer@keepitsimple.nl>
Date: Tue, 11 Feb 2003 15:06:02 +0100
Subject: precompiled headers
Ok, doh, hit myself. I probably should have read the archives first
about the subject...
Subject: Re: Problem with header file scanning and updates
From: Matt Armstrong <matt@lickey.com>
Date: Tue, 11 Feb 2003 08:55:14 -0700
And there you have it! :-)
Note that solution #1 will sometimes result in too many things being
compiled. If you have multiple base.hpp files in the project, Jam
will see a series of statements like this:
Includes base.hpp : <a!b!c>base.hpp ;
Includes base.hpp : <d!e!f>base.hpp ;
Includes base.hpp : <h!i!j>base.hpp ;
And any file including a "base.hpp" will depend on all of the
generated base.hpp files.
But at least compiling too much is not as bad as what you had before.
Subject: Re: Problem with header file scanning and updates
From: Matt Armstrong <matt@lickey.com>
Date: Tue, 11 Feb 2003 09:01:17 -0700
Yes we are thinking along the same lines. I haven't thought this idea
out, but it sure sounds like it could make dealing with generated
header files and grist more friendly.
Date: Tue, 11 Feb 2003 10:29:43 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Jam2.5 and precompiled headers
I do, but unfortunately it's tied into our custom/internal Jambase, which
I'm unable to publish.
First solve the problem of precompiled headers with /Z7 debug info.
While solving that problem, you will likely need to create the precompiled
header and then use the .pch in the other files. When you associate the
.pch file, and the flags to use it, with the other .obj files, that's a good
time to also associate the semaphore. The semaphore is a small piece of the problem.
Subject: RE: Jam2.5 and precompiled headers
Date: Tue, 11 Feb 2003 11:14:08 -0800
From: "Chris Antos" <chrisant@windows.microsoft.com>
Richard, Christopher, and I discussed the "on" thing in private emails a
couple weeks ago. They agreed on the central point, and I think 2.5 is
supposed to include the central fix.
We disagreed on secondary design points, and both our positions are
defensible, although I'm biased and think mine is "more" defensible. ;-)
Anyway, the central fix is enough to solve the root of the problem,
although you won't be able to use my Jambase as-is. It will need an
extra line before anyplace that I used the "VAR on $(x) += $(y)" syntax:
VAR on $(x) ?= $(VAR) ;
That will copy the global value of $(VAR) into the "VAR on $(x)" first,
if "VAR on $(x)" has not yet been populated. Otherwise the initial "+="
will merely set the new target-specific value and obscure the global
value, thus not doing what the rule writer intuitively expects. Without
the central fix, the "?=" line above won't work properly.
My private version's "+=" automatically copies the global value first if
appropriate, at least until someone can show me an example where it
would not be the Right Thing to do.
From: Jacob Gorm Hansen [mailto:jg@ioi.dk]
Sent: Tuesday, February 11, 2003 4:57 AM
Subject: RE: Jam2.5 and precompiled headers
Naturally, I agree. You mentioned in a private post that there is a bug
in Jam with regards to the 'on' keyword not correctly restricting its
scope. Can you elaborate on this bug and how to fix it?
Subject: Re: Jam2.5 and precompiled headers
From: Matt Armstrong <matt@lickey.com>
Date: Tue, 11 Feb 2003 23:45:53 -0700
That's an interesting point. I never thought about this because I've never
written a rule that would depend on the behavior of "on <target> +="
either way. I usually keep the names used for target variables
distinct from global variables.
After thinking about it though, the behavior of your jam does not
conform to my personal principle of least surprise. I think of the
"on target" variables as living in a separate namespace, and would
consider automatic "pollution" of this namespace an unexpected event.
Subject: RE: Jam2.5 and precompiled headers
Date: Tue, 11 Feb 2003 23:27:59 -0800
From: "Chris Antos" <chrisant@windows.microsoft.com>
That's interesting. The original Jambase doesn't conform to your
approach of using distinct variable names. Look at CCFLAGS, C++FLAGS,
HDRS, STDHDRS, for example. Each of these are used both as globals and
also as "on target" variables.
You basically just illustrated my point for me. Taking what you said to
its logical conclusion, Jam should disallow intersection of the global
versus target variable namespaces. That would be fine with me, though
it would entail tweaking a lot of rules.
I simply took the path of least resistance to fix the problem, based on
precedence that already existed in the stock Jambase.
The trouble is that today Jam explicitly allows namespace overloading,
and the stock Jambase even takes advantage of it. So I don't buy the
argument that Jam is designed for the global and target variable
namespaces to be independent or separate. Or, put another way, I don't
buy that the argument is still valid in the current state of Jam's
evolution.
Besides, you might find it illuminating to take a closer look at the
exact values of for example $(HDRS) on targets. Part of what originally
sent me down this road was subtle cases where when I changed certain
headers Jam didn't rebuild the right things. After much debugging (and
sifting through -d9 output) I finally realized it was due to the problem
I've described. HDRS on the target was not quite the right value, due
to the ?= and += issue with the "on" keyword.
You might have cases lurking where "HDRS on target" is not set quite
right, and just not have noticed them yet. The only way I ever noticed was
through debugging; the "?=" line above is one tactic to fix the problem.
Or the stock Jambase may have solved this another way. I still haven't
upgraded to Jam 2.4, much less 2.5.
Subject: Re: Jam2.5 and precompiled headers
From: Matt Armstrong <matt@lickey.com>
Date: Wed, 12 Feb 2003 10:46:25 -0700
Here is one argument in favor of your idea.
"a += b" is often taken as syntactic sugar for "a = a + b".
In the case of "on target" variables, we can expand += explicitly to
its "a = $(a) b ;" equivalent:
rule FetchTargetVar { on $(1) return $($(2)) ; }
VAR = global-value ;
# Semantic equivalent to "VAR on t += value ;"
VAR on t = [ FetchTargetVar t : VAR ] target-value ;
ECHO "global VAR =" $(VAR) ;
ECHO "VAR on t =" [ FetchTargetVar t : VAR ] ;
What does this print out?
global VAR = global-value
VAR on t = global-value target-value
The crux is that before variable a is set on a target, the value of "a
on target" is a's global value. So there is a real argument for +=
behaving the way you want.
Yes, I submitted a bug fix for this last December, and Christopher
based his fix on mine. I think the fix is in jam 2.5.
From: "Narayanan Balakrishnan" <narayanan_b@hotmail.com>
Date: Wed, 12 Feb 2003 19:35:19 +0000
Subject: Problem with SubInclude Rule
If the SubInclude rule is called in a Jamfile enclosed in an if (..)
condition, the following error is generated:
Top level of source tree has not been set with SubDir
Consider the following example:
Dir ~/myroot contains Jamrules
File ~/myroot/Jamfile contains the following entries:
# begin
SubDir ROOT ;
SubInclude ROOT folder1 ;
SubInclude ROOT folder2 ;
# end
File ~/myroot/folder1/Jamfile has the following entries:
# begin
SubDir ROOT folder1 ;
# end
File ~/myroot/folder2/Jamfile has the following entries:
# begin
SubDir ROOT folder2 ;
# end
Assuming the environment variable XYZ is set, if I modify ~/myroot/Jamfile
# begin
SubDir ROOT ;
if $(XYZ) = "USEFOLDER1" { SubInclude ROOT folder1 ; }
SubInclude ROOT folder2 ;
# end
then the error message "Top level of source tree has not been set with
SubDir" is seen. Is this a known issue? I am using version 2.0.5 of jam
on Solaris 2.7.
Subject: Re: Problem with SubInclude Rule
From: Matt Armstrong <matt@lickey.com>
Date: Wed, 12 Feb 2003 16:57:40 -0700
Did you try the exact example you posted? I did, and did not run into
a problem. Also, 2.0.5 is a very old release; you might try 2.4.
Subject: Re: Problem with SubInclude Rule
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 13 Feb 2003 10:33:55 +0100
I am doing similar stuff with no problems, with both jam 2.3 and 2.4.
Date: Thu, 13 Feb 2003 16:40:06 -0800 (PST)
Subject: Dealing with spaces in directory and file names
Does anyone know how to deal with directory and files names that have
spaces in them when they're File targets? I've tried adding quotes (and
every combination/variation I can think of) around the $(<) and $(>) in
the File actions, and I can get them to show up in -n output -- but when I
actually let it run for real, they've been eaten somewhere along the line,
so the copy command fails. Any help muchly appreciated.
Date: Fri, 14 Feb 2003 10:39:43 -0800 (PST)
Subject: Re: Dealing with spaces in directory and file names
Mystery solved. I forgot that I'd already added an override actions for
File in my rules file (so I could add a $(CPFLAGS) to the command line),
so when I wanted to add the quotes around the args, since I'd forgotten
there already was an actions File in my Jamrules, I added one again, and,
trying to be organized about it, ended up putting it further up in the
file than the one I'd forgotten was even there -- so of course, that one
(which didn't have any quotes around the args) was the last one in, so
that was the one getting run. Doh! (Note to self: Always search your rules
file first for whatever rule or actions you're thinking of tweaking.)
From: "Peter Klotz" <peter.klotz@aon.at>
Date: Sun, 16 Feb 2003 19:48:18 +0100
Subject: Parallel jam invocations corrupt batch files under Windows
Jam 2.4 seems to write batch files under Windows that have a simple
naming convention and are never deleted, just overwritten.
This leads to trouble when using several command shells, each executing
jam (e.g. compiling two different programs at the same time on a dual processor machine).
The attached patch changes the naming convention by adding a random
number and deleting the batch file when it is no longer needed.
Subject: RE: Parallel jam invocations corrupt batch files under Windows
Date: Mon, 17 Feb 2003 05:04:06 -0800
From: "Chris Antos" <chrisant@windows.microsoft.com>
A random number doesn't solve the problem, it just reduces it
(also making it harder to track down the real problem later).
Why not go for a complete solution -- include the Jam process ID
and a monotonically increasing number. That's a solution that's
been posted here previously, and which my private version of Jam uses.
I thought the fix had been included in the official Jam already --
maybe it's in the 2.5 relnotes; I haven't checked.
Btw, I don't have problems with leftover temporary files from Jam
on Windows (but maybe I made a fix and forgot).
From: Peter Klotz [mailto:peter.klotz@aon.at]
Sent: Sunday, February 16, 2003 10:48 AM
Jam 2.4 seems to write batch files under Windows that have a simple naming
convention and are never deleted, just overwritten.
This leads to trouble when using several command shells, each executing jam
(e.g. compiling two different programs at the same time on a dual processor machine).
The attached patch changes the naming convention by adding a random number
and deleting the batch file when it is no longer needed.
From: David Abrahams <david.abrahams@rcn.com>
Date: Mon, 17 Feb 2003 10:57:29 -0500
Subject: GLOB and case-sensitivity
Boost.Jam seems to have case-sensitive GLOB on NT. In other words, if I use
GLOB "." : *.bat
and there's a FOO.BAT in the current directory, the result will still
be empty. Is this something that's been fixed in Perforce Jam, or
should it be considered a bug?
Subject: Re: Parallel jam invocations corrupt batch files under Windows
From: Matt Armstrong <matt@lickey.com>
Date: Mon, 17 Feb 2003 10:32:47 -0700
The current release candidate for Jam 2.5 contains a fix for this. It
includes the PID in the temp file name.
From: David Abrahams <david.abrahams@rcn.com>
Date: Wed, 19 Feb 2003 04:36:33 -0500
Subject: GLOB and case-sensitivity
Boost.Jam seems to have case-sensitive GLOB on NT. In other words, if I use
GLOB "." : *.bat
and there's a FOO.BAT in the current directory, the result will still
be empty. Is this something that's been fixed in Perforce Jam, or
should it be considered a bug?
From: David Abrahams <david.abrahams@rcn.com>
Subject: Re: GLOB and case-sensitivity
Date: Wed, 19 Feb 2003 11:50:02 -0500
Now patched in Boost.Jam CVS to downcase found filenames on NT and
Cygwin before matching against the pattern. An optional 3rd argument
causes downcasing on all platforms. Case-insensitive comparison can
be achieved by lowercasing the patterns before passing them in.
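For illustration, a sketch of how a Jamfile might use this (the :L lowercase modifier is standard Jam; treating any non-empty third argument as "downcase everywhere" is an assumption based on the description above):

```jam
# Lowercase the pattern before passing it in, and supply the optional
# third argument so GLOB downcases found names before matching.
# With this, FOO.BAT in the current directory matches *.bat on NT.
local pattern = *.BAT ;
local bats = [ GLOB "." : $(pattern:L) : true ] ;
```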
Date: Thu, 20 Feb 2003 18:27:08 +0900
From: Anthony Heading <aheading@jpmorgan.com>
Subject: Spurious warning in 2.5?
I tried to send this to this list a week ago, but I think it must have been lost.
2.5 pre seems to believe there is a circular dependency from the following Jamfile:
===============================
rule CProto {
Depends $(<) : $(>) ;
INCLUDES $(>) : $(<) ;
}
actions CProto { cproto -o $(<) $(>) }
CProto file.cproto.h : file.c ;
Main file : file.c ;
===============================
The idea here is that an included header file is
autogenerated from source - for example "cproto"
is a program which autogenerates a prototype header.
All seems to work cleanly with 2.4.
2.5pre says:
warning: file.c depends on itself
which I think is entirely false. file.c is a source
file and doesn't depend on anything.
Any confirmation or rebuttal as to whether this is
indeed a bug would be much welcomed.
Date: Fri, 21 Feb 2003 09:31:08 +1100
From: Russell Shaw <rjshaw@iprimus.com.au>
Subject: Re: Spurious warning in 2.5?
Expanding it out shows:
Depends file.cproto.h : file.c
INCLUDES file.c : file.cproto.h
So, file.cproto.h depends on file.c. However, whatever
depends on file.c also depends on file.cproto.h. Therefore,
file.cproto.h depends on file.cproto.h. The error message
seems to be wrong, but it still looks like a circular dependency.
From: mahnazt@frogware.com
Date: Thu, 20 Feb 2003 17:10:55 -0500
Subject: Making jam from the sources on Linux
I am trying to build a jam binary from source on a Red Hat Linux 7.1
installation with gcc 2.96-81 and GNU make.
I downloaded the sources and what i could gather from the README was to
just issue a make command in the source directory.
Here is the error I get:
$ make
jam0
make: jam0: command not found
make: *** [all] Error 127
Subject: Re: Making jam from the sources on Linux
From: Matt Doar <matt@trpz.com>
Date: 20 Feb 2003 14:40:35 -0800
Make sure your PATH contains the directory where you ran make.
From: Vladimir Prus <ghost@cs.msu.su>
Date: Fri, 21 Feb 2003 11:05:28 +0300
Subject: Generation of two targets to the same location
jam is currently quite benign about binding several targets to the same
location. In general, this cannot be made an error. However, suppose two
targets are bound to the same location via the LOCATE variable. To me, this
looks like a 100% error, and I'd like it to be reported as such, or at least
to become a warning. In Boost.Jam, such a warning/error can be trivially added.
Is this desire reasonable? Or are there valid use cases where two targets are
bound to the same location via LOCATE?
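A minimal sketch of the collision being described (grist and names here are hypothetical):

```jam
# Two distinct targets, differently gristed, but both located in the
# same directory under the same file name ...
LOCATE on <lib1>util.o = bin ;
LOCATE on <lib2>util.o = bin ;
# ... so both bind to bin/util.o, and whichever builds last silently
# clobbers the other. Today jam accepts this without complaint.
```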
Date: Fri, 21 Feb 2003 18:02:19 +0900
From: Anthony Heading <aheading@jpmorgan.com>
Subject: Re: Spurious warning in 2.5?
Ah yes. Hmm. I guess that dependency is really an unwanted
artifact of the way that INCLUDES works.
Is there a way to express the correct relationship without
generating that dependency or warning message?
Date: Sat, 22 Feb 2003 07:49:54 +1100
From: Russell Shaw <rjshaw@iprimus.com.au>
Subject: Re: Spurious warning in 2.5?
I haven't done any jam for a while, but i'd start with this:
rule CProto {
Depends $(<) : $(>) ;
Depends exe : $(<) ;
}
actions CProto {
cproto -o $(<) $(>)
}
CProto file.cproto.h : file.c ;
Main file : file.c ;
===============================
exe is a pseudo-target used in rule MainFromObjects:
http://public.perforce.com/public/jam/src/Jambase
What file includes file.cproto.h? This information might
be needed to get the order of updating right.
Date: Fri, 21 Feb 2003 19:48:00 +0900
From: Anthony Heading <aheading@jpmorgan.com>
Subject: Re: Spurious warning in 2.5?
file.c. Hence the original INCLUDES was pretty much
appropriate. And that's why I wasn't sure the
ordering would come out right doing something like you
suggest...
Date: Sat, 22 Feb 2003 10:00:31 +1100
From: Russell Shaw <rjshaw@iprimus.com.au>
Subject: Re: Spurious warning in 2.5?
HdrRule in http://public.perforce.com/public/jam/src/Jambase
already does "INCLUDES file.c : file.cproto.h". How about
this:
rule CProto { Depends $(<) : $(>) ; }
actions CProto { cproto -o $(<) $(>) }
CProto file.c : file.cproto.h ;
Main file : file.c ;
From: David Abrahams <david.abrahams@rcn.com>
Date: Fri, 21 Feb 2003 08:50:42 -0500
Subject: Re: [jamboost] Generation of two targets to the same location
We need to keep the target from being rebuilt the next time, once the
expected failure occurs.
From: "Axelsson, Andreas" <Andreas.Axelsson@dice.se>
Date: Mon, 24 Feb 2003 09:55:20 +0100
Subject: MSVC Cl with multiple source files?
Has anyone experimented with modifying the Object and related rules to allow
Cl to take more than one source file at a time on the command line? In MS
DevStudio that's part of the compile-time optimization, as the compiler
doesn't have to reload/initialize for each file, but builds 5-10 files at a
time. I'm rather new to Jam so before I spend too much time on this I'd like
to know if anyone's tested it.
In short I wish to get the following done:
Main test : source1.cpp source2.cpp [sourceN.cpp] ;
"cl source1.cpp source2.cpp [sourceN.cpp] $(ALL_THE_FLAGS)"
If N is too high, it will split into several calls.
It looks like the Library calls do something similar when there are many
object files, so it shouldn't be too much of a problem.
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 24 Feb 2003 17:14:48 +0100
Subject: is -q gone
I noticed that in Matt Armstrong's branch of Jam (which I use for the
header scanning) the -q option is missing. Is this a deliberate move, or
will it reappear later on?
Date: Tue, 25 Feb 2003 16:45:02 +0900
From: Anthony Heading <aheading@jpmorgan.com>
Subject: Re: Spurious warning in 2.5?
So it does! Nice to get rid of a line - it's a little
cleaner then. But it gives the same warning. As I think
one would expect: the explicit INCLUDES was just a duplicate.
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Making jam from the sources on Linux
Date: Tue, 25 Feb 2003 16:10:11 +0100
Or even better, change jam so it invokes ./jam0 instead of just jam0.
Subject: Re: is -q gone
From: Matt Armstrong <matt@lickey.com>
Date: Tue, 25 Feb 2003 08:53:36 -0700
I just noticed that it does not even compile! :-)
If -q is missing, it is not intended. I'll see what I can do to fix
it, but I won't have time to do so any time soon.
From: "Axelsson, Andreas" <Andreas.Axelsson@dice.se>
Subject: RE: MSVC Cl with multiple source files?
Date: Tue, 25 Feb 2003 17:05:17 +0100
After some tests it looks like it won't matter if I use the "together" and
"piecemeal" modifiers, as each source file will result in one object file, so
there's no way of specifying a target that is actually more than one file.
Compare with Library, which makes one library out of a number of object
files, each generated from one source-file.
What I'd like to do is:
Objects a.o b.o c.o : a.c b.c c.c ;
and have the Objects rule understand that building all c-files in one go
will produce all o-files, but that the dependency is only between each
matching source and object file.
From: Robert Cowham [mailto:robert@vaccaperna.co.uk]
Sent: den 24 februari 2003 15:54
Subject: RE: MSVC Cl with multiple source files?
Check out:
actions [ modifiers ] rulename { commands }
Define a rule's updating actions, replacing any previous definition. The
first two arguments may be referenced in the action's commands as $(1) and
$(2) or $(<) and $(>).
The following action modifiers are understood:
actions piecemeal commands are repeatedly invoked with a subset of $(>)
small enough to fit in the command buffer on this OS.
See existing Jambase for use of piecemeal.
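For reference, the stock Jambase uses piecemeal roughly like this for archiving (a sketch; the exact set of modifiers varies between Jambase versions):

```jam
# "piecemeal" re-invokes the command with as many members of $(>) as
# fit in the OS command buffer; "together" merges the sources of
# multiple invocations on the same target into one $(>); "updated"
# restricts $(>) to sources newer than the target.
actions updated together piecemeal Archive
{
    $(AR) $(<) $(>)
}
```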
Subject: Re: MSVC Cl with multiple source files?
From: Matt Armstrong <matt@lickey.com>
Date: Tue, 25 Feb 2003 14:37:27 -0700
Yeah, that'd probably require messing with the guts of jam to get
working. I posted my ideas on the subject last month or so.
From: David Abrahams <david.abrahams@rcn.com>
Date: Wed, 26 Feb 2003 20:41:23 -0500
Subject: Re: [jamboost] bjam vs scons vs cmake vs ...
I think each system has its own advantages and disadvantages;
Boost.Build is only a better choice than others if your needs
correspond with its strengths. That said, I can draw some very broad
distinctions between Boost.Build, Perforce Jam, and Scons. Naturally
I'll say more about Boost.Build; I hope representatives of other
systems will correct me if I make a mistake here.
Perforce Jam is essentially a dependency-driven build engine with a
very weak interpreted language on top of it. It is designed as a
replacement for make, which it improves on in several ways, among them
by adding a more powerful and understandable imperative language, a
nice syntax for specifying targets, and eliminating the need for
recursive make invocations in large projects.
Boost.Build is designed for high-level build configuration: you
describe targets and their relationships in platform- and
toolset-independent terms, and the build system takes care of the
nitty-gritty details of selecting toolset-specific compiler flags,
adjusting for the fact that some platforms' shared libraries need to
be used with import libraries while others' do not, etc. Another
feature is that you can ask to build with multiple configurations
simultaneously (e.g. to test with several toolsets). Targets may
"propagate" properties such as #include paths or #defines which are
needed by their dependent targets. There are many more features along
these lines, designed to handle the details of build configuration for
you. The underlying build tool (bjam) builds upon Perforce Jam with
language extensions such as a module system, argument list
declaration, etc., which are valuable in constructing any large,
reliable software system. The underlying dependency analysis/build
engine is very similar to what is found in Perforce Jam.
Scons began by getting the underlying build engine right; they work
hard to make sure that in a multiprocessor parallel build system, the
absolute maximum amount of parallelism is used, and that absolutely no
files are rebuilt unless they need to be. It started out aimed at
roughly the same level of build specification as Perforce Jam, though
they have gradually been getting more high-level. I don't think
they're quite at the level of Boost.Build yet, especially not v2 which
is nearing release. Scons is built on Python, which is more powerful
and expressive than the Boost.Jam language for most jobs. However,
since the Perforce Jam language was designed to make minimal build
specification easy (e.g. no quotes needed around most filenames),
Scons is still slightly less slick for the user to write build
specifications in. If they are comfortable with Python, though, that
may be a non-issue.
From: "Craig Allsop" <callsop@sceptre.net>
Date: Fri, 28 Feb 2003 15:45:32 +1000
Subject: flags on source?
Is there an easy method of adding flags to sources? I'd like to specify
a path prefix for each source when running an archive tool, but I can only
see how to do this on the target, and my target is one file. The path would
have no relation to the path of the real file.
From: Jack_Goral@NAI.com
Date: Fri, 28 Feb 2003 07:42:53 -0800
Subject: jam-2.5rc2 : Jamrules in TOP directory not read when using SubDir/SubInclude
SubDir/SubInclude seems to be broken; the same Jamfile/Jamrules file set
was working in jam-2.4.
System:
NT 4.0 / W2K
jam build for MS Visual C++ 6.0
Date: Fri, 28 Feb 2003 17:25:31 +0100 (CET)
From: Matze Braun <matze@braunis.de>
Subject: Re: jam-2.5rc2 : Jamrules in TOP directory not read when using SubDir/SubInclude
It seems to be only broken when TOP is set before "SubDir TOP". The
problem is reported and we're still waiting for a proper fix in the
perforce depot...
From: David Abrahams <david.abrahams@rcn.com>
Date: Fri, 28 Feb 2003 14:17:33 -0500
Subject: Re: [scons-devel] scons vs cmake vs bjam ...
I totally agree with you :( which is one reason I wrote
http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/*checkout*/boost/boost/tools/build/jam_src/index.html#jam_fundamentals.
I really wish Perforce would do a more complete job of documenting
(and testing**) their code.
<mayberantthree>
And often people will assume the tutorial is supposed to tell them
everything they need to know and will completely ignore the reference
documentation when they get stuck, even if they remember the tutorial
will exist.
</mayberantthree>
**AFAICT Boost has the only regression tests for Jam.
From: mahnazt@frogware.com
Date: Fri, 28 Feb 2003 15:16:06 -0500
Subject: changing LINKFLAG in jamfile
I am compiling a large product with jam; some of the files compile with the
Cc rule and some with C++. For linking I need to change LINKFLAGS. Could you
tell me how to do it?
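A sketch of the usual approaches with the stock Jambase (the target name and flag here are illustrative, and libraries would normally go in LINKLIBS rather than LINKFLAGS):

```jam
# Globally, in Jamrules, before any Main/Library invocations:
LINKFLAGS += -s ;

# Or per executable, after "Main myapp : ... ;" -- note that the
# link target carries the platform's executable suffix, and that
# action commands see target-bound variables first:
LINKFLAGS on myapp$(SUFEXE) = $(LINKFLAGS) -s ;
```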
From: Matt Armstrong <matt@lickey.com>
Date: Mon, 03 Mar 2003 13:41:50 -0700
Subject: -dc output broken in 2.5rc2
Christopher talked about a simple fix for the broken -dc output in rc2.
Any chance that it could be pushed out to public.perforce.com or posted here?
From: Rob Walker <rob.lists@tenfoot.org.uk>
Date: Wed, 12 Mar 2003 10:13:03 +0000
Subject: Emacs mode
Attached is my extension to the Emacs mode for jam produced by Eric Scouten.
I've added support for indentation and imenu.
It works OK with GNU Emacs 21.2; it may work with other versions and with
XEmacs, but I haven't tried them. (I seem to recall having to change the
(require 'generic) to (require 'generic-mode) for XEmacs.)
It's also available at http://www.tenfoot.org.uk/emacs/jam-mode.el
;; *****************************************************************************
;; jam-mode.el
;; Font-lock support for Jam files
;; http://www.tenfoot.org.uk/emacs/
;; 04 Mar 2003 - 0.1 - Added imenu support and basic indentation
;; Copyright (C) 2000, Eric Scouten
;; *****************************************************************************
;; To add font-lock support for Jam files, simply add the line
;; (require 'jam-mode) to your .emacs file. Make sure generic-mode.el
;; is visible in your load-path as well.
;; *****************************************************************************
;; Generic-mode is a meta-mode which can be used to define small modes
;; which provide basic comment and font-lock support. Jam-mode depends on
;; this mode.
;; Might need generic-mode for xemacs??
(require 'generic)
(define-generic-mode 'jam-mode
; Jam comments always start with '#'
(list ?# )
; Jam keywords (defined later)
nil
; Extra stuff to colorize
(list
; Jam keywords
(generic-make-keywords-list
(list "actions" "bind" "case" "default" "else" "existing" "for" "if"
"ignore" "in" "include" "local" "on" "piecemeal" "quietly" "rule" "switch"
"together" "updated")
'font-lock-keyword-face)
; Jam built-in variables
(generic-make-keywords-list
(list
"JAMDATE" "JAMSHELL" "JAMUNAME" "JAMVERSION" "MAC" "NT" "OS" "OS2"
"OSPLAT" "OSVER" "UNIX" "VMS")
'font-lock-constant-face)
; Jam built-in targets
(generic-make-keywords-list
(list
"ALWAYS" "Depends" "ECHO" "INCLUDES" "LEAVES" "LOCATE" "NOCARE"
"NOTFILE" "NOUPDATE" "SEARCH" "TEMPORARY")
'font-lock-builtin-face)
; Jam built-in targets (warnings)
(generic-make-keywords-list
(list
"EXIT")
'font-lock-warning-face)
; Jambase rules
(generic-make-keywords-list
(list
"Archive" "As" "Bulk" "Cc" "CcMv" "C\\+\\+" "Chgrp" "Chmod" "Chown" "Clean" "CreLib"
"Depends" "File" "Fortran" "GenFile" "GenFile1" "HardLink"
"HdrRule" "Install" "InstallBin" "InstallFile" "InstallInto" "InstallLib" "InstallMan"
"InstallShell" "Lex" "Library" "LibraryFromObjects" "Link" "LinkLibraries"
"Main" "MainFromObjects" "MakeLocate" "MkDir" "MkDir1" "Object" "ObjectC\\+\\+Flags"
"ObjectCcFlags" "ObjectHdrs" "Objects" "Ranlib" "RmTemps" "Setuid" "SubDir"
"SubDirC\\+\\+Flags" "SubDirCcFlags" "SubDirHdrs" "SubInclude" "Shell" "Undefines"
"UserObject" "Yacc" "Yacc1" "BULK" "FILE" "HDRRULE" "INSTALL" "INSTALLBIN" "INSTALLLIB"
"INSTALLMAN" "LIBRARY" "LIBS" "LINK" "MAIN" "SETUID" "SHELL" "UNDEFINES"
"addDirName" "makeCommon" "makeDirName" "makeGrist" "makeGristedName" "makeRelPath"
"makeString" "makeSubDir" "makeSuffixed" "unmakeDir")
'font-lock-function-name-face)
; Jambase built-in targets
(generic-make-keywords-list
(list
"all" "clean" "dirs" "exe" "files" "first" "install" "lib" "obj" "shell" "uninstall")
'font-lock-builtin-face)
; Jambase built-in variables
(generic-make-keywords-list
(list
"ALL_LOCATE_TARGET" "AR" "ARFLAGS" "AS" "ASFLAGS" "AWK" "BCCROOT" "BINDIR" "CC" "CCFLAGS"
"C\+\+" "C\\+\\+FLAGS" "CHMOD" "CP" "CRELIB" "CW" "CWGUSI" "CWMAC" "CWMSL" "DOT" "DOTDOT"
"EXEMODE" "FILEMODE" "FORTRAN" "FORTRANFLAGS" "GROUP" "HDRGRIST" "HDRPATTERN" "HDRRULE"
"HDRS" "HDRSCAN" "HDRSEARCH" "INSTALL" "JAMFILE" "JAMRULES" "LEX" "LIBDIR" "LINK"
"LINKFLAGS" "LINKLIBS" "LOCATE_SOURCE" "LOCATE_TARGET" "LN" "MACINC" "MANDIR" "MKDIR"
"MODE" "MSLIB" "MSLINK" "MSIMPLIB" "MSRC" "MSVC" "MSVCNT" "MV" "NEEDLIBS" "NOARSCAN"
"OSFULL" "OPTIM" "OWNER" "RANLIB" "RCP" "RELOCATE" "RM" "RSH" "RUNVMS" "SEARCH_SOURCE"
"SED" "SHELLHEADER" "SHELLMODE" "SLASH" "SLASHINC" "SOURCE_GRIST" "STDHDRS" "STDLIBPATH"
"SUBDIR" "SUBDIRASFLAGS" "SUBDIRC\\+\\+FLAGS" "SUBDIRCCFLAGS" "SUBDIRHDRS" "SUBDIR_TOKENS"
"SUFEXE" "SUFLIB" "SUFOBJ" "UNDEFFLAG" "UNDEFS" "WATCOM" "YACC" "YACCFLAGS" "YACCFILES")
'font-lock-function-name-face)
; Jam variable references $(foo)
'("$(\\([^ :\\[()\t\r\n]+\\)[)\\[:]" 1 font-lock-variable-name-face))
; Apply this mode to all files whose names start with "Jam".
(list "\\bJam")
; Attach setup function so we can modify syntax table.
(list 'jam-mode-setup-function)
; Brief description
"Generic mode for Jam rules files")
(defun jam-mode-setup-function ()
(modify-syntax-entry ?_ "w")
(modify-syntax-entry ?. "w")
(modify-syntax-entry ?/ "w")
(modify-syntax-entry ?- "w")
(modify-syntax-entry ?+ "w")
(setq imenu-generic-expression
'(("Rules" "^rule\\s-+\\([A-Za-z0-9_]+\\)" 1)
("Actions" "^actions\\s-+\\([A-Za-z0-9_]+\\)" 1)))
(imenu-add-to-menubar "Jam")
(make-local-variable 'indent-line-function)
(setq indent-line-function 'jam-indent-line)
(run-hooks 'jam-mode-hook)
)
(defvar jam-mode-hook nil)
(defvar jam-indent-size 2
"Amount to indent by in jam-mode")
(defvar jam-case-align-to-colon t
"Whether to align case statements to the colons")
(defun jam-indent-line (&optional whole-exp)
"Indent current line"
(interactive)
(let ((indent (jam-indent-level))
(pos (- (point-max) (point))) beg)
(beginning-of-line)
(setq beg (point))
(skip-chars-forward " \t")
(if (zerop (- indent (current-column)))
nil
(delete-region beg (point))
(indent-to indent))
(if (> (- (point-max) pos) (point))
(goto-char (- (point-max) pos)))
))
(defun jam-goto-block-start ()
"Go to the start of the block containing point (or the beginning of the
buffer if not in a block)."
(let ((l 1))
(while (and (not (bobp)) (> l 0))
(skip-chars-backward "^{}")
(unless (bobp)
(backward-char)
(setq l (cond
((eq (char-after) ?{) (1- l))
((eq (char-after) ?}) (1+ l))
)))
)
(bobp))
)
(defun jam-indent-level ()
(save-excursion
(let ((p (point))
ind
(is-block-start nil)
(is-block-end nil)
(is-case nil)
(is-switch nil)
switch-ind)
;; see what's on this line
(beginning-of-line)
(setq is-block-end (looking-at "^[^{\n]*}\\s-*$"))
(setq is-block-start (looking-at ".*{\\s-*$"))
(setq is-case (looking-at "\\s-*case.*:"))
;; goto start of current block (0 if at top level)
(if (jam-goto-block-start)
(setq ind 0)
(setq ind (+ (current-indentation) jam-indent-size)))
;; increase indent in switch statements (not cases)
(setq is-switch (re-search-backward "^\\s-*switch" (max (point-min) (- (point) 100)) t))
(when (and is-switch (not (or is-block-end is-case)))
(goto-char p)
(setq ind (if (and jam-case-align-to-colon
(re-search-backward "^\\s-*case.*?\\(:\\)"))
(+ (- (match-beginning 1) (match-beginning 0))
jam-indent-size)
(+ ind jam-indent-size)))
)
;; indentation of this line is jam-indent-size more than that of the
;; previous block
(cond (is-block-start ind)
(is-block-end (- ind jam-indent-size))
(is-case ind)
(t ind)
)
)))
(provide 'jam-mode)
;; jam-mode.el ends here
From: "Gabe E. Nydick" <gnydick@transacttools.net>
Date: Thu, 13 Mar 2003 15:50:39 -0500
Subject: compiling multiple types of objects
I am trying to simply compile (on Linux) a .o, .so and executable file in
the same jamfile. Everything works as separate operations, but because the
only way to cause the linker to pick the .so is to change the SUFOBJ
variable, I can only link in either all .o or all .so. What am I doing wrong?
Date: Fri, 14 Mar 2003 12:10:48 +1100
From: Russell Shaw <rjshaw@iprimus.com.au>
Subject: Re: compiling multiple types of objects
IIRC, a shared object file is not linked into your program at compile time,
but is linked at run time. Anyway, you don't compile objects, only link them.
Date: Thu, 13 Mar 2003 22:45:18 -0800
From: rmg@foxcove.com
Subject: jam 2.5rc2
jam 2.5rc2 (release candidate 2) is now playing at the Perforce Public
Depot. Release Notes are at
http://public.perforce.com/public/jam/src/RELNOTES
Release archives (source code only):
ftp://ftp.perforce.com/jam/jam-2.5.tar
or ftp://ftp.perforce.com/jam/jam-2.5.zip
The release will bear the "rc2" suffix for a few weeks, to give some
time for any serious bugs to surface. If none do, it shall quietly be
dropped, and this will become the final jam 2.5 release.
Here are the changes between jam2.5rc1 and rc2 (from the RELNOTES):
0. Changes between 2.5rc1 and 2.5rc2:
Several uninitialized memory accesses have been corrected in
var_expand() and file_archscan(), thanks to Matt Armstrong.
Fix 'actions updated' broken by change 2487. (See the description
to change 2612 for details).
Fix "includes of includes not being considered", broken by 2499.
(See the description to change 2614 for details).
Remove the NT FQuote rule, as \" is required to pass quotes on the
command line.
Porting change: allow jam to build with BorlandC 5.5
Porting change: for WinXP IA64; set MSVCNT to the root of the SDK
and MSVCVer to Win64; change handle type to long long (too much to
include windows.h?); bury IA64 in the library path in Jambase.
Porting change: Mac classic MPW Codewarrior 7 upgrades: minor
compiling issues, new paths in Jambase for libraries and includes,
and separate out GenFile1 that sets PATH for UNIX only, as it
doesn't work under MPW (or anything other than with sh).
Porting change: Minor Cray porting: make hashitem()'s key value
unsigned so we're guaranteed no integer overflows.
SubDir's support for an externally set TOP variable was broken in
2.5rc1. It now works as it did in 2.4. Further, using SubDir to
include a subpart of a SubDir tree now works. Previously, you
could only include the root of another SubDir tree. For example,
SubDir ALL src builds ;
SubInclude ALL src server support ;
Essentially includes the ../server/support/Jamfile, without getting
confused as to the current directory.
From: "Joe Bruce" <joe.bruce@acterna.com>
Date: Fri, 14 Mar 2003 09:51:23 -0500
Subject: Bug in header scanning
There appears to be a bug in Jam's header file scanning. I confirmed the
bug in both 2.4 (which we just started using to replace GNU make +
automake/autoconf) and in 2.5-rc2. For better or worse, we have the
following directory structure that exposes the problem in Jam:
component1/
include/
component1/
headerA.h
headerB.h
src/
source1.cpp
component2/
include/
component2/
headerC.h
headerD.h
src/
source2.cpp
Given this example structure, the file source2.cpp has a directive to
#include "component1/headerA.h" - the include search path of the compiler
is set to include "component1/include". Jam (as seen with jam -d3) has no
trouble binding component1/headerA.h. However, if "headerA.h" has a directive
to #include "headerB.h" (note that "component1/" is missing), Jam reports
"headerB.h" as missing. The compiler (g++) has no trouble with this, as it
searches the current directory of "headerA.h" first and finds "headerB.h" there.
This causes a clean build to work successfully, but modifying "headerB.h" will
not cause any files to rebuild - an obvious problem while
developing. If any additional information is required, please let me
know. We have found a workaround by specifying both directories in SUBDIRHDRS
of the Jamfile. For example,
SUBDIRHDRS = $(TOP)/component1/include
$(TOP)/component1/include/component1 ;
However, this work-around has some serious maintenance implications:
1. Developers need to remember to add both paths to SUBDIRHDRS or the build
will not work right.
2. Developers could get "lazy" in other components and leave off the
"component1/" from the #include directives. We may change the structure to
eliminate the extra directory
(i.e., move the contents of component1/include/component1 to component1/include),
but we need to ensure that the header files have unique names.
Date: Fri, 14 Mar 2003 07:49:27 -0800 (PST)
Subject: Re: Bug in header scanning
I don't know that it's ever been acknowledged as an actual bug -- but for
the history of the possible solutions, see:
Subject: Re: Bug in header scanning
From: "Joe Bruce" <joe.bruce@acterna.com>
Date: Fri, 14 Mar 2003 11:52:21 -0500
Personally, I believe this IS a bug - it is an inconsistency between the
way Jam works and the way the C preprocessor works. The C preprocessor
first looks in the current directory (that of the source or header file)
if the #include'd file is in quotes. Jam does not do this - it treats
#include "file.h" as #include <file.h>, which is technically incorrect.
I plan to implement one of the listed work-arounds, but I still believe
that this is a bug that should be fixed in the main line of development due to
the inconsistency with the C preprocessor. The results of the bug are
"catastrophic" - users are unaware that their rebuild is incomplete (as we were).
Any chance that this would be fixed (even using one of the work-arounds)
in the real 2.5 release? ;-)
From: "Peter Klotz" <peter.klotz@aon.at>
Date: Sat, 15 Mar 2003 10:30:40 +0100
Subject: jam 2.5-rc2: Temporary files under Windows remain
The fix under Windows which ensures unique filenames for batch files leads
to batch files piling up in the Windows temporary directory.
The attached patch fixes that behavior by deleting the batch files when no longer needed.
name="jam-2.5-rc2.win32-tmp.patch"
filename="jam-2.5-rc2.win32-tmp.patch"
diff -urb jam-2.5/jam-2.5/execunix.c jam-2.5.patched/jam-2.5/execunix.c
--- jam-2.5/jam-2.5/execunix.c 2003-03-06 22:48:54.000000000 +0100
+++ jam-2.5.patched/jam-2.5/execunix.c 2003-03-15 10:21:44.000000000
+0100
@@ -310,6 +310,11 @@
cmdtab[ i ].pid = 0;
+#ifdef USE_EXECNT
+ if (cmdtab[i].tempfile)
+ unlink(cmdtab[i].tempfile);
+#endif
+
(*cmdtab[ i ].func)( cmdtab[ i ].closure, rstat );
return 1;
Subject: Re: compiling multiple types of objects
From: Matt Armstrong <matt@lickey.com>
Date: Sat, 15 Mar 2003 19:23:22 -0700
That's impossible to really tell without more details.
If you need to hack the value of SUFOBJ for certain runs of certain
rules, you can do that. E.g.
local save_SUFOBJ = $(SUFOBJ) ;
SomeRule foo : bar ;
SUFOBJ = $(save_SUFOBJ) ;
But, that doesn't seem like something you'd want to do.
I'm not up on what it takes to compile a .so, but the default Jambase
is set up for the more typical .c -> .o -> .lib -> executable
sequence. You may need to write some custom rules to do what you
want.
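To make that concrete, a custom rule for the .so case might look roughly like this. This is an untested sketch: the rule names SharedObjectLib and LinkSharedObjectLib are invented, the -fPIC/-shared flags are gcc-specific, and it leans on the stock Jambase rules (Objects, ObjectCcFlags, FGristFiles):

```jam
# Hypothetical: build a .so from its own object list, leaving
# SUFOBJ and the normal Main/Library flow untouched.
rule SharedObjectLib
{
    local objs = [ FGristFiles $(>:S=$(SUFOBJ)) ] ;
    Objects $(>) ;
    ObjectCcFlags $(>) : -fPIC ;
    Depends $(<) : $(objs) ;
    Depends lib : $(<) ;
    Clean clean : $(<) ;
    LinkSharedObjectLib $(<) : $(objs) ;
}

actions LinkSharedObjectLib
{
    $(CC) -shared -o $(<) $(>)
}

SharedObjectLib libfoo.so : foo.c bar.c ;
```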
Subject: Re: Bug in header scanning
From: Matt Armstrong <matt@lickey.com>
Date: Sat, 15 Mar 2003 19:42:28 -0700
Note that not all compilers implicitly prepend or append the directory
in which an #included header was found to the header's search path.
Consider this contrived example.
include/A/1.h
include/A/2.h
include/B/1.h
include/B/2.h
A/1.h does:
#include "B/1.h"
#include "2.h"
Which "2.h" file is included? Will Jam consider the same 2.h as the
compiler?
The C standard leaves the interpretation of the file names in #include
directives totally up to the implementation. The list of additional
"local" locations searched with double quote #include directives is
also left to the implementation. I've encountered compilers that do
not implicitly search the same directory as the .h file for double
quoted #include lines (instead, they search only the same directory as
the initial .c file).
At the last company I worked at, we avoided these problems by banning
double quote #includes from .h files altogether. This forced people
to fully qualify their #include paths and avoided some portability issues.
That said, most compilers do implicitly search the header's directory, and
you might notice that I wrote one of the 3 patches to jam that "fix" this problem.
Date: Tue, 18 Mar 2003 17:28:37 +0100 (CET)
From: Matze Braun <matze@braunis.de>
Subject: jam 25rc2 compiles libraries again when only the application changed
After using jam2.5rc2 for a while I have now found problems when rebuilding
parts of an application. Unfortunately I wasn't able to reproduce the problem
with simple Jamfiles (I tried several hours). So this is a more verbose report
about the problem. My system is a x86 linux/debian machine.
One of the targets in my project is the simple1 application, which is built from
one source file that is compiled and linked against several libraries. After a
complete rebuild I get the correct behaviour with jam2.5rc1:
Test1:
matze@Tom:~/projects/CSjam/CS$ touch apps/tutorial/simple1/simple1.cpp
matze@Tom:~/projects/CSjam/CS$ jam25rc1 simple1
...found 799 target(s)...
...updating 2 target(s)...
C++ ./out/linuxx86/debug/apps/tutorial/simple1/simple1.o
LinkApplication simple1
...updated 2 target(s)...
Test2:
matze@Tom:~/projects/CSjam/CS$ rm out/linuxx86/debug/apps/tutorial/simple1/simple1.o
matze@Tom:~/projects/CSjam/CS$ jam25rc1 simple1
...found 799 target(s)...
...updating 2 target(s)...
C++ ./out/linuxx86/debug/apps/tutorial/simple1/simple1.o
LinkApplication simple1
...updated 2 target(s)...
Doing the same with jam2.5rc2 results in this:
Test1:
matze@Tom:~/projects/CSjam/CS$ touch apps/tutorial/simple1/simple1.cpp
matze@Tom:~/projects/CSjam/CS$ jam25rc2 simple1
...found 799 target(s)...
...updating 2 target(s)...
C++ ./out/linuxx86/debug/apps/tutorial/simple1/simple1.o
LinkApplication simple1
...updated 2 target(s)...
Test2:
matze@Tom:~/projects/CSjam/CS$ rm out/linuxx86/debug/apps/tutorial/simple1/simple1.o
matze@Tom:~/projects/CSjam/CS$ jam25rc2 simple1
...found 799 target(s)...
...updating 140 target(s)...
C++ ./out/linuxx86/debug/apps/tutorial/simple1/simple1.o
C++ ./out/linuxx86/debug/libs/csgeom/csrect.o
C++ ./out/linuxx86/debug/libs/csgeom/vector2.o
... jam recompiles all libraries here ...
C++ ./out/linuxx86/debug/libs/cstool/mdldata.o
Archive ./out/linuxx86/debug/libs/cstool/libcstool.a
Ranlib ./out/linuxx86/debug/libs/cstool/libcstool.a
LinkApplication simple1
...updated 140 target(s)...
I've uploaded the results of a "jam25rc1 -dd simple1" and a
"jam25rc2 -dd simple1" run here:
http://crystal.sourceforge.net/jam25rc1deps (124k)
http://crystal.sourceforge.net/jam25rc2deps (123k)
Feel free to mail me if you need more information about the bug or my build rules.
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 19 Mar 2003 13:37:57 +0100
Subject: Dealing with non-unique target names
I would like my temporary object-files to end up in a structure similar
to that of my source tree, or alternatively with all files prefixed grist-like.
Is this possible to achieve from Jamrules, or do I need to modify each
and every Jamfile in my project?
Date: Wed, 19 Mar 2003 14:21:44 +0100 (MET)
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: Re: Dealing with non-unique target names
Good question. I don't know of any way that doesn't involve a significant amount of hacking.
It would be great if an empty UserSubDir rule (similar to UserObject)
could be introduced that SubDir invokes at its very end. That way one
could simply override UserSubDir in Jamrules, using it to set
directory-local state, like the LOCATE_* variables.
Date: Thu, 20 Mar 2003 13:16:50 +0100 (CET)
From: Matze Braun <matze@braunis.de>
Subject: error in jams new includefile handling Was: jam 25rc2 compiles
libraries again when only the application changed
I was finally able to create a simple example of the bug. I have the
following project structure:
Jamfile:
Main test : test.c ;
Library testlib : testlib.c ;
LinkLibraries test : testlib ;
test.c:
#include "1.h"
int main() { return 0; }
testlib.c:
#include "1.h"
1.h:
#include "2.h"
2.h:
/* empty */
If you now remove test.o after building all the files then jam is
rebuilding the library as well although nothing changed there:
matze@kiff at $ jam -v
Jam 2.5rc2. OS=LINUX. Copyright 1993-2002 Christopher Seiwald.
matze@kiff at $ jam -d4 test
make -- test
time -- test: Thu Mar 20 13:13:04 2003
make -- test.o
time -- test.o: missing
make -- test.c
time -- test.c: Thu Mar 20 13:10:31 2003
make -- test.c
time -- test.c: parents
make -- 1.h
time -- 1.h: Thu Mar 20 13:10:52 2003
make -- 1.h
time -- 1.h: parents
make -- 2.h
time -- 2.h: Thu Mar 20 13:11:14 2003
made* newer 2.h
made* newer 1.h
made* newer test.c
made+ update test.o
make -- testlib.a
time -- testlib.a: Thu Mar 20 13:13:03 2003
make -- testlib.a(testlib.o)
time -- testlib.a(testlib.o): Thu Mar 20 13:13:03 2003
make -- testlib.o
time -- testlib.o: parents
make -- testlib.c
time -- testlib.c: Thu Mar 20 13:10:37 2003
make -- testlib.c
time -- testlib.c: parents
made stable testlib.c
made+ update testlib.o
made+ update testlib.a(testlib.o)
made+ update testlib.a
made+ update test
...found 9 target(s)...
...updating 4 target(s)...
Cc test.o
Cc testlib.o
Archive testlib.a
Ranlib testlib.a
Link test
Chmod1 test
...updated 4 target(s)...
jam is marking 2.h as newer here (although it certainly is not newer),
which seems to trigger an unneeded rebuild of the library.
I just reverted the latest include changes in jam and the behaviour was
correct again. So it seems these changes introduced the bug.
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 20 Mar 2003 15:41:19 +0100
Subject: hdrscan cache as patch
does anyone have the hdrscan code available as a standalone patch?
I would like to try to apply it to 2.5rc2.
From: "Axelsson, Andreas" <Andreas.Axelsson@dice.se>
Date: Thu, 20 Mar 2003 16:33:14 +0100
Subject: Alternatives to headerscanning
I'm thinking about using jam to build data objects from source data such as
Maya object files and textures, but I can't figure out how to build
dependencies dynamically in jam.
I'd like to be able to parse the source file for references to other files,
just like the C/C++ header scanning seems to work in jam. Is it possible to
add such functionality for other filetypes without extending the jam
executable, or if not, has anyone thought on how such an extension could be
made general to allow for scanning of any type of file?
If there are other ways, like calling external tools during the parsing, by
just writing clever rules, I'd rather do that, but I haven't managed to
figure out how I'd do it.
Date: Thu, 20 Mar 2003 09:52:11 -0600
From: <Jack_Goral@NAI.com>
Subject: temp files not removed by jam-2.5rc2
Probably a bug: temporary jamXXX.bat files pollute the c:\temp directory after jam finishes. This is jam on Windows/VC++.
From: David Abrahams <david.abrahams@rcn.com>
Subject: Re: Alternatives to headerscanning
Date: Thu, 20 Mar 2003 10:49:46 -0500
It's nothing that complicated; you don't need a Jam extension. Just
set HDRSCAN on the source files to point at your own custom rule and
use your own custom parsing routines to scan the file. We do this in
Boost.Build for scanning docbook xml files for xincludes, and even
for extracting/dumping documentation embedded in our .jam modules.
Date: Thu, 20 Mar 2003 08:06:21 -0800 (PST)
Subject: Re: Alternatives to headerscanning
Yes, you can use the "header scanning" functionality for other things
besides just scanning source files for #include's -- just set your
patterns and set HDRRULE, HDRSCAN, HDRSEARCH (and possibly HDRGRIST) on
your files and provide the rule that HDRRULE is set to. I do this for XML
files that can have tags that reference other files and for IDL files for imports.
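As a concrete sketch of that wiring, here is roughly what it might look like for a hypothetical .scene file that names texture files; the file name, rule name, and pattern are invented for illustration, and the rule body mirrors the stock Jambase HdrRule:

```jam
# Invoked by jam for each scanned file: $(<) is the scanned file,
# $(>) the strings captured by the HDRSCAN patterns.
rule SceneIncludes
{
    local i = $(>:G=$(HDRGRIST:E)) ;
    Includes $(<) : $(i) ;
    SEARCH on $(i) = $(HDRSEARCH) ;
    NoCare $(i) ;
}

HDRRULE   on world.scene = SceneIncludes ;
HDRSCAN   on world.scene = "^texture=(.*)" ;
HDRSEARCH on world.scene = $(SEARCH_SOURCE) ;
```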
From: "Axelsson, Andreas" <Andreas.Axelsson@dice.se>
Subject: RE: Alternatives to headerscanning
Date: Thu, 20 Mar 2003 17:32:21 +0100
Ok, that sounds good, only in this case, some of my sources are binaries and
pattern-matching may not work very well. In this specific case I'm looking
at Maya 3D-files that include other Maya files that each include a number of
.tga texture files. I'm not very familiar with grep and all the magic that
it can do, so perhaps I'm wrong?
From: David Abrahams <david.abrahams@rcn.com>
Subject: Re: Alternatives to headerscanning
Date: Thu, 20 Mar 2003 11:52:03 -0500
Hmm, that does sound like it might need a Jam extension. Maybe you
could add a variable HDRCMD which specifies a command to run on the
file and extract some text that's processed later. That sounds like
it demands a primitive we've been wanting to add to Boost.Build for
quite some time: "run this shell command and capture its output" (aka
popen on Unix, I think). Unfortunately AFAIK writing a portable
version of that primitive is nontrivial. If anyone already has it as
a Jam extension, please let me know!
From: "Axelsson, Andreas" <Andreas.Axelsson@dice.se>
Subject: RE: Alternatives to headerscanning
Date: Fri, 21 Mar 2003 14:54:39 +0100
Yes, such an extension would definitely be useful. I'll look into how it
could be done when I understand the syntax of Jam enough not to create
anything non-standard. I suppose the GLOB command would be a good starting point.
From: "Andrew Bachmann" <shatty@myrealbox.com>
Date: Mon, 24 Mar 2003 23:47:38 -0800 PST
Subject: bug in 2.5rc2 ?
I downloaded jam-2.5.tar and built it on solaris with gcc, and beos with gcc. I ran into a bug(?) on
both platforms with the setup below. In 2.4, this works as expected and the BuildAntlr1 rule
runs, changing into the subdirectory "src/antlr" and then running the configure command
successfully. In 2.5 it doesn't work and I get an error. I also tried the files from the cvs (the
same?) and got the same results.
On a positive note, building jam 2.5 on beos did not require changing the makefile, contrary to
the README. I had to create a link from cc to gcc in order to do the build on solaris, and after
that I didn't have to change the makefile either.
warning: using independent target src/antlr
BuildAntlr1 src/antlr/lib/libantlr.a
/bin/sh: src/antlr/lib/src/antlr: does not exist
cd src/antlr/lib/src/antlr ;
./configure --prefix=`pwd` ;
make install ;
...failed BuildAntlr1 src/antlr/lib/libantlr.a ...
rule BuildAntlr {
local l = $(1:S=$(SUFLIB)) ;
Depends lib : $(l) ;
MakeLocate $(l) : [ FDirName $(LOCATE_TARGET) lib ] ;
BuildAntlr1 $(l) : $(LOCATE_TARGET) ;
Clean cleanAntlr : $(l) ;
}
actions BuildAntlr1 {
cd $(2) ;
./configure --prefix=`pwd` ;
make install ;
}
SubDir TOP ;
SubInclude TOP src ;
SubDir TOP src ;
SubInclude TOP src antlr ;
SubDir TOP src antlr ;
BuildAntlr libantlr ;
From: Mahadevan R <Mahadevan.R@sisl.co.in>
Date: Wed, 26 Mar 2003 09:29:37 +0530
Subject: MSVCDIR or MSVCDir ?
First of all, I'm not a regular user of Jam; I'm only just checking it
out on Win2k/MSVC6 and Win2k/MSVC7.
While trying to build Jam (2.5) from the sources, I discovered that Jam
looks for the environment variable "MSVCDIR" (among others) for figuring
out the path of the compiler et al. The 'vcvars32.bat' file which comes
with VC6 for setting env. vars actually sets the var "MSVCDir" (note
difference in case). Given that variables in Jam are case-sensitive, it
makes sense to look for $(MSVCDir) also apart from $(MSVCDIR) in the Jambase rules.
Attached are two small patches - for jambase.c and Jambase - which look
for this variable also.
Hope this info/patch helps someone.
[PS: I couldn't figure out how to search the mailing list archives for
this info nor could I figure out how to post patches, so I thought I'll
trouble the mailing list itself for this -- my apologies if this is not
the right way to do it.]
--- jambase.c Fri Mar 07 13:03:43 2003
+++ ..\..\jam-2.5\jambase.c Wed Mar 26 08:20:38 2003
@@ -44,9 +44,10 @@
"STDHDRS ?= $(MSVC)\\\\include ;\n",
"UNDEFFLAG ?= \"/u _\" ;\n",
"}\n",
-"else if $(MSVCNT) || $(MSVCDIR)\n",
+"else if $(MSVCNT) || $(MSVCDIR) || $(MSVCDir)\n",
"{\n",
"MSVCNT ?= $(MSVCDIR) ; \n",
+"MSVCNT ?= $(MSVCDir) ; \n",
"local I ; if $(OSPLAT) = IA64 { I = ia64\\\\ ; } else { I = \"\" ; }\n",
"AR ?= lib ;\n",
"AS ?= masm386 ;\n",
--- Jambase Fri Mar 07 13:00:04 2003
+++ ..\..\jam-2.5\Jambase Wed Mar 26 08:19:34 2003
@@ -206,11 +206,12 @@
STDHDRS ?= $(MSVC)\\include ;
UNDEFFLAG ?= "/u _" ;
}
- else if $(MSVCNT) || $(MSVCDIR)
+ else if $(MSVCNT) || $(MSVCDIR) || $(MSVCDir)
{
# Visual C++ 6.0 uses MSVCDIR
MSVCNT ?= $(MSVCDIR) ;
+ MSVCNT ?= $(MSVCDir) ;
# bury IA64 in the path for the SDK
From: "Craig Allsop" <callsop@sceptre.net>
Date: Thu, 27 Mar 2003 08:06:59 +1000
Subject: SubDir rule
It would be helpful if the SubDir rule called a user-defined rule at its
tail. This could be used to define/redefine extra variables. The SubDir rule
can be redefined by the user; however, its first invocation cannot be
redefined, since it is the bootstrap for Jamrules.
A note for those on the mailing list: Don't forget to update your SubDir
rule if you've redefined it as it will probably not be compatible with the
new one of jam 2.5.
Date: Thu, 27 Mar 2003 01:49:37 +0100 (CET)
From: Matze Braun <matze@braunis.de>
Subject: Re: SubDir rule
Don't we have the same problem with a user-defined rule? The first place
where you can define it is Jamrules, which is read after the SubDir rule
has been called.
Yes, I'm also overriding SubDir to set a more intelligent
LOCATE_TARGET path. However, having to update and sync the SubDir rules
with every jam update isn't good either.
Though it isn't an optimal solution, we could at least make this a little
nicer by having a _SubDir rule in the Jambase which does the same as the
(original) SubDir rule. We could then call that rule from our custom SubDir rules.
From: "Craig Allsop" <callsop@sceptre.net>
Subject: Re: SubDir rule
Date: Thu, 27 Mar 2003 11:28:32 +1000
By the time jam reaches the user defined rule it will have already read the
jamrules and redefined it. My suggestion is something like this at the end
of the normal SubDir rule:
UserSubDir $(<) ;
And a dummy UserSubDir defined in the jambase that does nothing, for those
who don't need this feature.
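In Jambase terms the proposal amounts to something like the following sketch; UserSubDir is not in any released Jambase, and BUILD_DIR is assumed to be set by the project:

```jam
# In Jambase, at the end of the SubDir rule, add:
#     UserSubDir $(<) ;
# together with a do-nothing default:
rule UserSubDir { }

# A project's Jamrules can then override it to set
# directory-local state, e.g. a mirrored build tree:
rule UserSubDir
{
    LOCATE_TARGET = $(BUILD_DIR)/$(SUBDIR_TOKENS:J=/) ;
}
```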
Date: Thu, 27 Mar 2003 10:43:52 +0100 (MET)
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: Re: SubDir rule
Hey, isn't that the idea I proposed not too long ago? :-P
Needless to say, I'm all for it. :-)
It wouldn't even be necessary to pass the parameters, since at that time
SUBDIR and SUBDIR_TOKENS are already set up. Although, the name of the TOP
variable may be of interest.
From: "Craig Allsop" <callsop@sceptre.net>
Subject: Re: error in jams new includefile handling Was: jam 25rc2
compiles libraries again when only the application changed
Date: Mon, 31 Mar 2003 18:49:57 +1000
I see this issue here as well. It appears that a missing target has no time,
and dependent headers are marked newer, causing many other targets to be
spoiled. As already explained, to recreate it just delete an object file. I
modified my version of jam to solve this; however, the correct solution is
probably best left to someone else.
Subject: Re: Dealing with non-unique target names
From: Matt Doar <matt@trpz.com>
Date: 31 Mar 2003 08:42:54 -0800
We ended up putting
LOCATE_TARGET = $(BUILD_DIR)/$(SUBDIR_TOKENS:J=/) ;
at the top of all our Jamfiles.
Subject: Re: Dealing with non-unique target names
From: Matt Armstrong <matt@lickey.com>
Date: Mon, 31 Mar 2003 22:36:57 -0700
I just stick a rule "ConfigureBuild" in Jamrules and call that after
every call to SubDir.
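For reference, such a rule might look like this; ConfigureBuild and BUILD_DIR are names from this thread, not standard Jambase, and this is an untested sketch:

```jam
# Jamrules: route objects into a build tree mirroring the source tree.
rule ConfigureBuild
{
    LOCATE_TARGET = $(BUILD_DIR)/$(SUBDIR_TOKENS:J=/) ;
}

# Each Jamfile then begins:
#     SubDir TOP src util ;
#     ConfigureBuild ;
```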
From: "Craig Allsop" <callsop@sceptre.net>
Date: Tue, 1 Apr 2003 21:45:18 +1000
Subject: EOL match in egrep expression?
I'm trying to scan for file names in a text file (on NT with jam 2.4)
with a HDRSCAN expression of "^Alpha=(.+)$" and it's returning the
"new line" character in the matched names. Is this correct, considering
the $ is outside the brackets? Or is the problem that the + is a maximal (greedy) match?
My workaround is quite ugly:
"^Alpha=([abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_\.\(\){}#%!~;:,=-]+)".
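The class can at least be shortened with ranges, which keeps the same behavior: the trailing newline never matches because it is not in the class. A sketch, with `data.txt` standing in for the scanned file:

```jam
# Same idea as the workaround above, written with ranges;
# the captured text stops at the first character outside the class.
HDRSCAN on data.txt = "^Alpha=([a-zA-Z0-9_.(){}#%!~;:,=-]+)" ;
```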
From: "Robert Cowham" <robert@vaccaperna.co.uk>
Subject: RE: EOL match in egrep expression?
Date: Tue, 1 Apr 2003 15:55:48 +0100
I had a similar problem. The documentation on the regexp matching is somewhat(!) lacking.
I found this worked:
VC_HDRPATTERN = "^SOURCE=\\.\\\\([a-zA-Z0-9_\\.\" ]*)" ;
Date: Tue, 01 Apr 2003 18:08:42 +0200
From: Enno Rehling <ennor@funcom.com>
Subject: different rules for textfiles
I've got a couple of files that all have a .txt extension and produce header
files, but via separate programs. I wonder how to set up the rules for this;
it seems I can't use UserObject for this.
A.txt runs through X.sh to produce A.h
B.txt runs through Y.sh to produce B.h
I'm pretty new to jam (started playing with it yesterday), and I've got all
the obvious stuff (and some not-so-obvious) from our Makefiles converted to
jam. But this one thing I haven't been able to figure out.
Date: Tue, 01 Apr 2003 18:35:04 +0200
From: Enno Rehling <ennor@funcom.com>
Subject: Re: different rules for textfiles
Okay, it's getting a bit late; I've answered my own question...
However, I'm sure there's a more elegant solution. I added the following to
my Jamrules:
actions IdCompile { perl $(TOP)/tools/create_enum_file $(>) > $(<) }
rule IdFile { Depends $(<) : $(>) ; IdCompile $(<) : $(>) ; }
which handles one type of textfiles I have. I then added this to the
Jamfile in one of my project's subdirectories:
IdFile TypeID.h : $(SUBDIR)/Setupfiles/TypeID.txt ;
IdFile RDBID.h : $(SUBDIR)/Setupfiles/RDBID.txt ;
Writing $(SUBDIR)/Setupfiles/ looks kind of ugly, but I haven't found a more
elegant-looking way (especially adding the directory to SEARCH_SOURCE didn't work).
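One way to hide the directory is to bind the sources inside the rule itself. A sketch under the assumption that setting SEARCH directly on the sources works here where SEARCH_SOURCE did not; the rule mirrors the IdFile above:

```jam
rule IdFile
{
    # Bind the .txt sources in Setupfiles without spelling
    # out the path at each call site.
    SEARCH on $(>) = [ FDirName $(SUBDIR) Setupfiles ] ;
    Depends $(<) : $(>) ;
    IdCompile $(<) : $(>) ;
    Clean clean : $(<) ;
}

# IdFile TypeID.h : TypeID.txt ;
# IdFile RDBID.h  : RDBID.txt ;
```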
Date: Wed, 02 Apr 2003 13:53:27 +0200
From: Enno Rehling <ennor@funcom.com>
Subject: recompiles
I've got some trouble with unnecessary recompiles. My Jamfile looks like this:
SOURCES =
# more.cpp
main.cpp ;
Main test : $(SOURCES) ;
The weird thing here is: when I remove the # to add one more source to
$(SOURCES), I expected jam to compile only that one source file and
re-link. However, it insists on recompiling some (not all) of the files in
$(SOURCES). It doesn't insist on it if I simply add the new
file to the *end* of the list of sources. Why is that?
All my files are essentially empty. The problem seems to be connected to the
fact that they include a common header file (which itself simply includes
stdio.h). However, I don't see why that creates any dependencies?
Here are the files:
--- more.cpp begin
#include "Utils.h"
--- more.cpp end
--- main.cpp begin
#include "Utils.h"
int main(int, char**) { return 0; }
--- main.cpp end
--- Utils.h begin
#include <stdio.h>
--- Utils.h end
Date: Wed, 02 Apr 2003 15:08:05 +0200
From: Enno Rehling <ennor@funcom.com>
Subject: Re: recompiles
I've reduced the problem to an even smaller example (removed the header). I
now have two files that both include <features.h> and nothing else. When I
add more.cpp to my sources list, I get the following output for jam -dmda -n
(and as you can see, it still wants to recompile main.cpp). I'm afraid I
can't really read the output of jam so well, but none of the header files
seem to have been touched, so why does it say:
made* newer gnu/stubs.h
Here's the full output:
make -- all
time -- all: unbound
Depends "all" : "shell" ;
make -- shell
time -- shell: unbound
Depends "shell" : "first" ;
make -- first
time -- first: unbound
made stable first
made stable shell
Depends "all" : "files" ;
make -- files
time -- files: unbound
Depends "files" : "first" ;
made stable files
Depends "all" : "lib" ;
make -- lib
time -- lib: unbound
Depends "lib" : "first" ;
made stable lib
Depends "all" : "exe" ;
make -- exe
time -- exe: unbound
Depends "exe" : "first" ;
Depends "exe" : "test" ;
make -- test
time -- test: Wed Apr 2 15:02:08 2003
Depends "test" : "more.o" ;
make -- more.o
time -- more.o: missing
Depends "more.o" : "more.cpp" ;
make -- more.cpp
time -- more.cpp: Wed Apr 2 15:00:57 2003
make -- more.cpp
time -- more.cpp: parents
Includes "more.cpp" : "features.h" ;
make -- features.h
bind -- features.h: /usr/include/features.h
time -- features.h: Tue Feb 25 14:43:59 2003
make -- features.h
time -- features.h: parents
Includes "features.h" : "sys/cdefs.h" ;
make -- sys/cdefs.h
bind -- sys/cdefs.h: /usr/include/sys/cdefs.h
time -- sys/cdefs.h: Tue Feb 25 14:45:29 2003
make -- sys/cdefs.h
time -- sys/cdefs.h: parents
Includes "sys/cdefs.h" : "features.h" ;
made* newer sys/cdefs.h
Includes "features.h" : "gnu/stubs.h" ;
make -- gnu/stubs.h
bind -- gnu/stubs.h: /usr/include/gnu/stubs.h
time -- gnu/stubs.h: Tue Feb 25 14:46:39 2003
made* newer gnu/stubs.h
made* newer features.h
made* newer more.cpp
made+ update more.o
Depends "test" : "main.o" ;
make -- main.o
time -- main.o: Wed Apr 2 15:02:08 2003
Depends "main.o" : "main.cpp" ;
make -- main.cpp
time -- main.cpp: Wed Apr 2 15:02:03 2003
make -- main.cpp
time -- main.cpp: parents
Includes "main.cpp" : "features.h" ;
made stable main.cpp
made+ update main.o
made+ update test
made update exe
Depends "all" : "obj" ;
make -- obj
time -- obj: unbound
Depends "obj" : "first" ;
Depends "obj" : "more.o" ;
Depends "obj" : "main.o" ;
made update obj
Depends "all" : "first" ;
made update all
...found 15 target(s)...
...updating 3 target(s)...
C++ more.o
C++ main.o
Link test
Chmod1 test
...updated 3 target(s)...
From: Paul_Donovan@scee.net
Date: Wed, 2 Apr 2003 16:44:04 +0100
Subject: SubDirHdrs question
I've just started using jam, version 2.4 under Linux.
I've successfully made a couple of Jamfiles that build a library in a
subdirectory and build an executable in the top directory that is linked
with the library, but I've had to use the SubDirHdrs rule in a strange
(counter-intuitive) way. Perhaps I'm just not understanding the SubDir and
SubInclude rules? This is the structure of my test project:
Jamfile
support.cpp
animtable.cpp
xmlparser.cpp
expat_config.h
xmllib/
Jamfile
xmltok.c
xmlparse.c
xmlrole.c
...
The files in xmllib/ are compiled to produce a library (libexpat.a) in that
directory, which the executable (animtable) in the top directory is linked with.
Jamfile contains this:
SubDir TOP ;
LINKLIBS = -lstdc++ -lg ;
Main animtable : support.cpp animtable.cpp xmlelement.cpp xmlparser.cpp ;
LinkLibraries animtable : libexpat ;
SubInclude TOP xmllib ;
And xmllib/Jamfile looks like this:
SubDir TOP xmllib ;
SubDirHdrs . ;
Library libexpat : xmlparse.c xmltok.c xmlrole.c ;
The .c files in xmllib/ contain #include "expat_config.h", which is actually
in the directory above. In order to get the compiler to find
expat_config.h, I've added the SubDirHdrs rule. What I don't fully
understand, or rather, what I find odd, is that I have to set it
to '.'. After all, the header being searched for isn't _in_ the directory
that the Jamfile is in (xmllib); it's in the directory above, '..'
Someone reading xmllib/Jamfile will assume I'm talking about the xmllib directory!
From examining the commands Jam invokes, I can see that it actually calls
'cc' from the top directory on the .c files in xmllib/ :
cc -c -o xmllib/xmlparse.o -O -Ixmllib -I. xmllib/xmlparse.c
hence the reason that the -I. needs to be passed to the compiler.
So, what am I misunderstanding, or what am I doing wrong? I just think this
setup is odd. I realise that I could just move expat_config.h into xmllib/,
but that's no fun :-)
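Since jam issues every compile command from the directory it was invoked in, the paths given to SubDirHdrs are interpreted relative to that directory, not to the Jamfile's own. A hedged sketch of the usual way to make the intent explicit (so it also works when jam is invoked from inside the subdirectory):

```jam
SubDir TOP xmllib ;
SubDirHdrs $(TOP) ;   # -I pointing at the top directory, where expat_config.h lives
Library libexpat : xmlparse.c xmltok.c xmlrole.c ;
```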
Subject: Re: SubDirHdrs question
From: Paul_Donovan@scee.net
Date: Wed, 2 Apr 2003 17:16:04 +0100
Thanks Enno, that works a treat! As you say, I can now call jam from either
directory. I'll leave working out how it works until tomorrow - I can only
learn so much in one day ;-)
[Ah, Funcom - same line of work as me then :-) ]
Date: Thu, 03 Apr 2003 12:51:00 +0200
From: Enno Rehling <ennor@funcom.com>
Subject: Re: recompiles
Thanks, that solved my problem. 2.5 is the version in debian testing at the
moment. Looks like 2.4 is considerably slower during the dependency
checking, but at least it gets it right :-)
Looks like I'm almost done converting our Makefiles now. jam rocks!
From: Paul_Donovan@scee.net
Date: Fri, 4 Apr 2003 11:31:23 +0100
Subject: SEARCH_SOURCE or LOCATE_TARGET - which to use?
Since my last question to this list, I've moved on to getting jam 2.4
working with a cross-compiler I use. This targets a machine that needs a
particular bit of startup code linked into an executable in order for it to
run. The source assembly file for this is located in the libraries for the
target, but is usually assembled into an object file in the source
directory of whatever project you're working on.
Now, I can cross-compile, assemble and link an executable for the target
successfully if I use the following Jamrules and Jamfile:
#Jamrules in TOP
SCE_TOP = c:/usr/local/sce ;
CC = ee-gcc2953 ;
C++ = ee-gcc2953 ;
#Jamfile in TOP
SubDir TOP ;
Main animtable : $(SCE_TOP)/ee/lib/crt0.s support.cpp animtable.cpp
xmlelement.cpp xmlparser.cpp ;
LinkLibraries animtable : libexpat ;
SubInclude TOP xmllib ;
But this produces $(SCE_TOP)/ee/lib/crt0.o, which is technically OK but, as
I said above, I'd like it to be placed in $(TOP)
So I tried this:
SubDir TOP ;
Main animtable : $(SCE_TOP)/ee/lib/crt0.s support.cpp animtable.cpp
xmlelement.cpp xmlparser.cpp ;
LOCATE_TARGET on $(SCE_TOP)/ee/lib/crt0.s = $(TOP) ;
LinkLibraries animtable : libexpat ;
But it still produces the crt0.o in $(SCE_TOP)/ee/lib. What am I doing
wrong? I've tried putting the LOCATE_TARGET line before 'Main animtable :
...' but it doesn't make any difference. Looking at the output of jam -d+7,
it just seems to be ignoring the LOCATE_TARGET line.
Not to be deterred, I tried a different approach:
SubDir TOP ;
Main animtable : crt0.s support.cpp animtable.cpp xmlelement.cpp
xmlparser.cpp ;
SEARCH_SOURCE on crt0.s = $(SCE_TOP)/ee/lib ;
LinkLibraries animtable : libexpat ;
SubInclude TOP xmllib ;
But jam just says:
don't know how to make crt0.s
These seem like very basic things, but I just can't get them to work. I've
searched the entire list archives and there has been very little mention
of either of these rules. Like my previous problem, I can work around it,
but I want to understand how to do things like this if I stand any hope of
converting our build system from make to jam.
Subject: Re: SEARCH_SOURCE or LOCATE_TARGET - which to use?
From: Paul_Donovan@scee.net
Date: Fri, 4 Apr 2003 13:09:19 +0100
For the benefit of the list, this was the solution. jam seriously needs
better examples and documentation.
Subject: Re: SEARCH_SOURCE or LOCATE_TARGET - which to use?
From: "Ingo Weinhold" <bonefish@cs.tu-berlin.de>
Date: Fri, 04 Apr 2003 16:59:42 +0200 CEST
There is a bit of truth in it, but only a bit. In fact, when binding
targets jam only uses the SEARCH and LOCATE variables, but these
variables are set by the concerned rules from SEARCH_SOURCE/LOCATE_SOURCE/
LOCATE_TARGET. Therefore you obviously need to set the latter variables
*before* invoking the rules that use them -- in your case this is Main
(which eventually invokes Object, which sets SEARCH and LOCATE).
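To make the ordering concrete, a hedged sketch using the original poster's paths (whether SEARCH_SOURCE alone suffices here depends on the local Jambase):

```jam
SubDir TOP ;
# Set these *before* Main: when Object runs, it copies SEARCH_SOURCE
# into SEARCH on each source and LOCATE_TARGET into LOCATE on each object.
SEARCH_SOURCE += $(SCE_TOP)/ee/lib ;
LOCATE_TARGET = $(TOP) ;
Main animtable : crt0.s support.cpp animtable.cpp xmlelement.cpp xmlparser.cpp ;
```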
Mmh, while I tend to agree in general, in the case of SEARCH and LOCATE it isn't that bad.
Subject: Re: bug in 2.5rc2 ?
From: "shatty" <shatty@myrealbox.com>
Date: Fri, 04 Apr 2003 11:06:38 -0800
Since I didn't get any response yet on this bug I
decided to take a look at it myself. Here's some more
information from jam -d5, this time executed from
inside the $(TOP)/src/antlr directory. I also changed
the BuildAntlr rule to use a local just for kicks.
This example shows clearly how the action is not being
performed with the arguments from the point of
execution. (relevant lines highlighted with ####)
rule BuildAntlr {
local l = $(1:S=$(SUFLIB)) ;
local t = $(LOCATE_TARGET) ;
Depends lib : $(l) ;
MakeLocate $(l) : [ FDirName $(LOCATE_TARGET) lib ] ;
BuildAntlr1 $(l) : $(t) ;
Clean cleanAntlr : $(l) ;
}
actions BuildAntlr1 {
cd $(2) ;
./configure --prefix=`pwd` ;
make install ;
}
#### >>>> BuildAntlr1 libantlr.a : .
make -- all
time -- all: unbound
make -- shell
time -- shell: unbound
make -- first
time -- first: unbound
made stable first
made stable shell
make -- files
time -- files: unbound
made stable files
make -- lib
time -- lib: unbound
make -- libantlr.a
bind -- libantlr.a: lib/libantlr.a
time -- libantlr.a: missing
make -- <dir>lib
bind -- <dir>lib: lib
time -- <dir>lib: Fri Apr 4 10:26:26 2003
made stable <dir>lib
made+ missing libantlr.a
made update lib
make -- exe
time -- exe: unbound
made stable exe
make -- obj
time -- obj: unbound
made stable obj
made update all
...found 9 target(s)...
...updating 1 target(s)...
warning: using independent target .
BuildAntlr1 lib/libantlr.a
/bin/sh: ./configure: No such file or directory
make: *** No rule to make target `install'. Stop.
#### cd lib/. ;
./configure --prefix=`pwd` ;
make install ;
...failed BuildAntlr1 lib/libantlr.a ...
...failed updating 1 target(s)...
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 07 Apr 2003 15:59:08 +0200
Subject: Jam not setting nonzero exit code on missing Jamfile
If I try to execute jam in a directory without a Jamfile, jam exits
with an error message, but without setting a non-zero exit code, which
is annoying when using jam in "jam && run"-type scenarios.
I suppose the reason for this behaviour is that Jamfile is included from
Jambase just like any other file, and that failing on missing include
files is generally not supposed to stop the build (though I am not sure I
agree that this is the correct choice).
Am I the only one thinking this is a bug, or do others agree that this
should be fixed in mainline jam?
From: "Craig Allsop" <callsop@sceptre.net>
Subject: Re: Jam not setting nonzero exit code on missing Jamfile
Date: Tue, 8 Apr 2003 11:25:02 +1000
Perhaps the thinking is that you don't need a tool higher than jam, so no
error result is required? :) I'm only joking; seriously, it would be nice.
Date: Wed, 9 Apr 2003 09:46:47 -0700 (PDT)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: temp files not removed by jam-2.5rc2
| Probably a bug: temporary jamXXX.bat files pollute the c:\temp directory
after jam finishes. This is jam on Windows/VC++.
Public depot change 3069 addresses this.
==== //public/jam/src/RELNOTES#64 (text) ====
58a59,63
==== //public/jam/src/execunix.c#14 (text) ====
Date: Wed, 9 Apr 2003 09:51:45 -0700 (PDT)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: SubDir rule
| It would be helpful if the SubDir rule called a user-defined rule at its
| tail. This could be used to define/redefine extra variables. The SubDir rule
| can be redefined by the user; however, its first invocation cannot be
| redefined, since it is the bootstrap for Jamrules.
I've submitted change 3070 to the public depot for this. I've long meant
to support it. To use it, you set SUBDIRUSER to the list of rule names
you want invoked at the end of the SubDir rule, and each one is called with
the same arguments as SubDir.
So somewhere in your Jamrules (read early on by SubDir) you can set
SUBDIRUSER so that by the end of SubDir the rule gets invoked.
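A minimal sketch of such a hook (the rule name here is arbitrary):

```jam
# In Jamrules:
rule LogSubDir
{
    # invoked with SubDir's arguments, e.g. "TOP src util"
    ECHO entering subdirectory $(1[2-]:J=/) ;
}
SUBDIRUSER += LogSubDir ;
```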
Date: Wed, 9 Apr 2003 09:55:18 -0700 (PDT)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: error in jams new includefile handling
| I see this issue here as well. It appears that a missing target has no time
| and dependant headers are marked newer causing many other targets to be
| spoiled. As already explained, to recreate it just delete a object file. I
| modified my version of jam to solve this however the correct solution is
| probably best left to someone else.
Change 3057 fixes this. The new internal "header timestamp collection"
targets were being treated as missing temp targets in need of rebuilding.
It's a simple fix to a stupid, ugly bug.
==== //public/jam/src/make.c#19 (text) ====
49a50
202a204,208
207,208c213
< p->binding != T_BIND_MISSING ||
< p && t->flags & T_FLAG_INTERNAL )
---
From: Paul_Donovan@scee.net
Date: Thu, 10 Apr 2003 17:23:11 +0100
Subject: HDRSCAN and egrep patterns
Am I right in thinking that Jam 2.4 doesn't support the | metacharacter in
egrep patterns used by HDRSCAN and HDRPATTERN? I have been unable to get
them to work for my project where I'm trying to scan for dependencies that
can be 'included' using two methods.
The source files can contain '#include' (like C/C++) and also 'using', e.g.
#include "libfsm/include/navigation.fsh"
using "fsmcode/libstatetypes.h"
I started with the HDRPATTERN in the default jambase file:
FSMHDRPATTERN = "^[ ]*#[ ]*include[ ]*[<\"]([^\">]*)[\">].*$" ;
and tried to modify it like this:
FSMHDRPATTERN = "^[ ]*#[ ]*(include|using)[ ]*[<\"]([^\">]*)[\">].*$" ;
which should match #include and #using (a first step), but it doesn't work.
I've tried various other methods, but I can't get anything to work.
I solved my problem by creating two separate patterns:
# Match #include ""
FSMHDRPATTERN1 = "^[ ]*#[ ]*include[ ]*[<\"]([^\">]*)[\">].*$" ;
# Match using ""
FSMHDRPATTERN2 = "^[ ]*using[ ]*[<\"]([^\">]*)[\">].*$" ;
HDRSCAN on $(>) = $(FSMHDRPATTERN1) $(FSMHDRPATTERN2) ;
This seems less than ideal from a performance point of view, but I've
exhausted my knowledge of regular expressions :-(
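One detail worth noting (an assumption about the surrounding setup, since the full rule isn't shown): HDRSCAN only takes effect together with HDRRULE on the same targets, naming the rule jam should invoke with each batch of matches. A sketch:

```jam
rule FsmHdrRule
{
    # jam calls this as: FsmHdrRule <scanned-file> : <matched-names> ;
    Includes $(<) : $(>) ;
    SEARCH on $(>) = $(FSMDIRS) ;   # hypothetical list of directories to search
    NOCARE $(>) ;
}
# inside the rule that processes the sources, as in the snippet above:
HDRSCAN on $(>) = $(FSMHDRPATTERN1) $(FSMHDRPATTERN2) ;
HDRRULE on $(>) = FsmHdrRule ;
```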
Date: Mon, 14 Apr 2003 18:30:17 +0200 (CEST)
From: Harri Porten <porten@kde.org>
Subject: include & locals
[reposting as my previous mails seem to have been dropped because of a
wrong sender address]
The documentation of 'include' in Jam.html reads:
"The include file is inserted into the input stream during the parsing
phase. The primary input file and all the included file(s) are treated as
a single file; that is, jam infers no scope boundaries from included files."
However, using a Jamfile
include file.inc ;
echo $(x) ;
whereas file.inc's content is
local x ;
x = foo ;
reveals an empty output. Removing the 'local' statement leads to "foo" being printed.
What's wrong here? The documentation, the code, or my understanding?
Date: Mon, 14 Apr 2003 21:04:00 +0200 (CEST)
From: Matze Braun <matze@braunis.de>
Subject: Problems with $(VARIABLE:P) and MkDir rule
There seem to be some problems with the $(VARIABLE:P) behaviour:
[BUG]:
MkDir /path/to///somedir
fails because of the 3 slashes, and
MkDir /path/to/../foo
also fails because '..' is in the path.
The problem is that :P doesn't really return the parent directory. The fix
would be either to make :P correctly return the parent directory, or to
improve the documentation (stating that it returns the directory without its
last component) and make the MkDir rule smarter.
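A minimal demonstration of what :P actually does (it chops the last path component, nothing more):

```jam
d1 = /path/to///somedir ;
d2 = /path/to/../foo ;
ECHO $(d1:P) ;   # the redundant slashes survive
ECHO $(d2:P) ;   # the ".." survives, which is what trips up MkDir's recursion
```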
Date: Tue, 15 Apr 2003 16:24:04 -0400
Subject: How to link a dll with an executable file or with another dll?
I have some question about Jam.
1) I want to know how to link a dll with an executable file or with another
dll.
Please look at what I do and let me know if there is a better or faster way
to perform such an action:
The work is done with Borland 5.5.1 compiler on a win32 platform.
I have a simple project:
root/:     jamfile, jamrules
RepMain/:  jamfile, main.cpp
RepA/:     jamfile, a.cpp, a.h
RepB/:     jamfile, b.cpp, b.h
RepC/:     jamfile, c.cpp, c.h
The dependency diagram:
    RepMain
    /     \
  RepA   RepB
    \     /
     RepC
---IN root/:---
IN jamfile:
SubDir TOP ;
SubInclude TOP RepMain ;
IN jamrules:
SUFSHR = .dll ;
rule Dll {
local _dll = $(<)$(SUFSHR) ;
local _lib = $(_dll:B).lib ;
if $(BCCROOT) {
local DLL_LINKFLAGS = -tWD ;
LINKFLAGS on $(_dll) += $(DLL_LINKFLAGS) ;
}
MainFromObjects $(_dll) : $(>:S=$(SUFOBJ)) ;
Objects $(>) ;
if $(BCCROOT) { ImportLibrary $(_lib) : $(_dll) ; }
MakeLocate $(_lib) : $(LOCATE_TARGET) ;
}
rule ImportLibrary {
local _lib = $(<) ;
local _dll = $(>) ;
Depends exe : $(_lib) ;
Depends $(_lib) : $(_dll) ;
Implib $(_dll) ;
}
actions Implib { $(BCCROOT)/implib -c $(<:S=$(SUFLIB)) $(<) ; }
rule LinkLibraries {
local _e = [ FAppendSuffix $(<) : $(SUFEXE) ] ;
Depends $(_e) : $(>:S=$(SUFLIB)) ;
NEEDLIBS on $(_e) += $(>:S=$(SUFLIB)) ;
local _d = [ FAppendSuffix $(<) : $(SUFSHR) ] ;
Depends $(_d) : $(>:S=$(SUFLIB)) ;
NEEDLIBS on $(_d) += $(>:S=$(SUFLIB)) ;
}
rule SubIncludeOnce {
local i ;
local include_marker = included ;
# value of include_marker is the concatenated directory names in the
# path to the directory being included
for i in $(<[2-])
{
include_marker = $(include_marker)_$(i) ;
}
# if the variable whose name is the value of include_marker does not
# exist, then we know we haven't included that directory yet.
if ! $($(include_marker)) {
# Do not include more than once
$(include_marker) = TRUE ;
SubInclude $(<) ;
}
}
---IN root/RepMain:---
IN jamfile:
SubDir TOP RepMain ;
Main main : main.cpp ;
LinkLibraries main : a b ;
SubIncludeOnce TOP RepA ;
SubIncludeOnce TOP RepB ;
IN Main.cpp:
Call FonctA //FonctA is in a.cpp
Call FonctB //FonctB is in b.cpp
---IN root/RepA:---
IN jamfile:
SubDir TOP RepA ;
Dll a : a.cpp ;
LinkLibraries a : c ;
SubIncludeOnce TOP RepC ;
IN a.cpp:
Call FonctC //FonctC is in c.cpp
---IN root/RepB:---
IN jamfile:
SubDir TOP RepB ;
Dll b : b.cpp ;
LinkLibraries b : c ;
SubIncludeOnce TOP RepC ;
IN b.cpp:
Call FonctC //FonctC is in c.cpp
---IN root/RepC:---
IN jamfile:
SubDir TOP RepC ;
Dll c : c.cpp ;
IN c.cpp:
2) How can I specify a different directory for my target files? For
example, I would like the files main.exe, a.dll and b.dll to go into
root/RepExe instead of the default directory (the same location as the source files).
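For 2), a hedged sketch: setting LOCATE_TARGET before the Main/Dll invocation makes MainFromObjects place its output there (the objects land there too).

```jam
SubDir TOP RepMain ;
LOCATE_TARGET = [ FDirName $(TOP) RepExe ] ;   # main.exe and its objects go here
Main main : main.cpp ;
```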
3) I want to know the difference between versions 2.3 and 2.5 regarding the
environment variable JAM_TOOLSET.
Date: Tue, 15 Apr 2003 14:35:30 -0700
From: Dietrich Epp <dietrich@zdome.net>
Subject: Generated Sources
I have a perl script that generates a .gperf file from a template and
some header files. I got this to work when it was all in the same
directory, but in a different directory it won't find the files in
E2H.SCANFILES (see below). I tried inserting "MakeLocate..." or
"LOCATE on..." into the ObjectE2HScan rule, or feeding the rule some
explicit grist, but it either feeds the grist directly to the shell or
feeds the plain file name to the shell. How do I replicate the
behavior for $(<) and $(>) with this extra parameter?
The relevant rules and actions are:
rule ObjectE2HScan { E2H.SCANFILES on [ FGristFiles $(<:S=.gperf) ] += $(>) ; }
actions E2H { $(ENUM2HASH) $(<) $(>) $(E2H.STRING) $(E2H.SCANFILES) }
The resulting action:
perl ./tools/enum2hash.pl zedc/zedc_keyword.gperf zedc/zedc_keyword.e2h
zedc_token_kw_ zedc_token.h
zedc_token.h should be zedc/zedc_token.h, with some rules munging I get
<zedc>zedc_token.h instead, which does some undesirable piping =)
Date: Wed, 16 Apr 2003 13:57:15 -0700
From: "Hemantharaju Subbanna x4832" <raju@galaxy.nsc.com>
Subject: First time user
This example from the Tutorial does not do as it says:
rule MyRule { TouchFile $(1) ; Depends all : $(1) ; }
actions TouchFile { touch ($1) }
MyRule j1 ;
MyRule j2 ;
% jam -d5 -f jam2
make -- all
time -- all: missing
make -- j1
time -- j1: Wed Apr 16 11:52:27 2003
made* newer j1
make -- j2
time -- j2: Wed Apr 16 11:52:47 2003
made* newer j2
made+ missing all
...found 3 target(s)...
I don't get
...updating 2 target(s)...
nor have the files in my workspace changed.
What am I doing wrong?
I tried other scripts with "actions"; they also fail to do the action part.
I guess there is something that is not executing the system commands,
but I don't know what that something is. I need your help.
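Two things seem to be at play in the example above: the actions body writes ($1) where jam's variable syntax is $(1), so the shell receives the literal text `touch ($1)`; and jam only runs actions for targets that are missing or out of date, so once j1 and j2 exist (they have timestamps in the trace) there is nothing to update. A hedged sketch of the corrected example:

```jam
rule MyRule { TouchFile $(1) ; Depends all : $(1) ; }
actions TouchFile { touch $(1) }   # $(1), not ($1)
MyRule j1 ;
MyRule j2 ;
# remove j1 and j2, then rerun: jam should now update both targets
```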
From: "Michael Champigny" <michael.champigny@verizon.net>
Date: Mon, 21 Apr 2003 08:28:25 -0400
Subject: Fw: Jam mailing list post
Like most folks here, I'm using jam as a cross-platform replacement for make.
It has been working well but I've run into a snag.
The following is the smallest directory structure that demonstrates my
problem (i.e. my real-world build environment is considerably more
complex, but this should get my point across).
The basic idea is that I have a single top level Jamfile, an include
directory for header files, and a source directory. The first thing Jam
needs to do is populate my staging area, called "staging". Note that
headers and source files are looked for in staging, not in the real
source tree. Although it's hard to see with this simple example, I have
a reason for populating a staging area before a build. Once the staging
area is created and populated with all files used for building, I want
to build a simple library. I can do all this from the command line with
(ie. assuming a UNIX machine):
$ jam first
$ jam
This works well. Using first as a target creates the staging directory
and populates it. The second jam is for building the library.
This leads to my problem...how can I get the same effect by just giving:
$ jam
I want one call to jam to 1) create the staging directory, 2) populate
or otherwise update the staging area with all files needed for building,
3) actually perform the build.
Note that if a source or header file changes in the source tree, the
staging area is properly updated (ie. the File rule is invoked).
I don't want to have to type "jam first" after every change. I just want
to type "jam" and have jam figure the dependencies out. Can anyone spot
what I'm missing in this very simple set up? Thanks for any help!
Here are my files:
Jamfile:
rule MakeDir { Depends first : $(<) ; MkDir $(<) ; }
rule PopulateInclude {
MakeDir staging ;
Depends first : [ FDirName staging $(<).h ] ;
File [ FDirName staging $(<).h ] : [ FDirName include $(<).h ] ;
}
rule PopulateSource {
MakeDir staging ;
Depends first : [ FDirName staging $(<).cc ] ;
File [ FDirName staging $(<).cc ] : [ FDirName source $(<).cc ] ;
}
HDRS = staging ;
SEARCH_SOURCE = staging ;
LOCATE_TARGET = lib ;
PopulateInclude foo ;
PopulateSource foo ;
Library libfoo : foo.cc ;
lib:
{ This is where the library will go once built }
include/foo.h:
{ This file is empty...it's just a stub for testing jam }
source/foo.cc:
#include "foo.h"
void foo() { }
The build tree:
+--- Jamfile
|
+--- lib/ { the file libfoo.a will go here once built }
|
+--- staging/ { jam actually creates this directory if it doesn't
exist }
|
+--- include/
| |
| +--- foo.h
|
+--- source/
|
+--- foo.cc
Subject: Re: Fw: Jam mailing list post
From: "Ingo Weinhold" <bonefish@cs.tu-berlin.de>
Date: Mon, 21 Apr 2003 16:54:01 +0200 CEST
At the end there is a fixed Jamfile that should do what you intend.
Changes:
1) Removed MakeDir. There is MakeLocate, which does pretty much the
same, additionally setting the LOCATE variable on the target.
2) Overrode the original MakeLocate. This version adds grist to the
directory target to avoid the cyclic `lib' dependency (otherwise Jam
considers the directory and the pseudo `lib' target the same).
3) Abstracted PopulateInclude and PopulateSource to a more generic
Populate rule.
4) Cleaned up the FDirName stuff in Populate*. The files are real
targets and are now treated as such (the source file with grist set to
`_orig_'), i.e. the LOCATE and SEARCH variables determine their
location.
5) The `first' dependency in Populate* is not needed.
Your original problem was caused by incorrect dependencies. The Library
rule set up the build to make the library depend on target `foo.cc',
but the File rule introduced a target `staging/foo.cc'. Note that
these are considered different targets, since their identifiers differ.
Now the File rule generates the target `foo.cc', too.
One should always avoid coding a path into a target name, as this
often has unwanted side effects. Instead, grist should be used and the
target location specified via the SEARCH and LOCATE variables.
Unfortunately, a lot of Jam rules simply set the grist of supplied
target names to the current grist (instead of setting it only when it
is not yet set), which rules out some uses that would otherwise be possible.
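That grist-preserving behaviour could look like this; a hypothetical sketch (FGristFilesKeep is not a Jambase rule):

```jam
# Like FGristFiles, but leaves names that already carry grist alone.
rule FGristFilesKeep
{
    local result f ;
    for f in $(<)
    {
        switch $(f)
        {
            case <* : result += $(f) ;                       # already gristed: keep
            case *  : result += $(f:G=$(SOURCE_GRIST:E)) ;   # add current grist
        }
    }
    return $(result) ;
}
```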
Jamfile:
rule MakeLocate {
if $(>) {
local dir = $(>:G=_make_locate_) ;
LOCATE on $(<) = $(>) ;
Depends $(<) : $(dir[1]) ;
MkDir $(dir[1]) ;
}
}
rule Populate {
# Populate <basename> : <source dir> : <suffix> ;
local s = $(1:G=_orig_:S=$(3)) ;
local t = $(1:G=:S=$(3)) ;
MakeLocate $(t) : staging ;
File $(t) : $(s) ;
SEARCH on $(s) = $(2) ;
}
rule PopulateInclude { Populate $(1) : include : .h ; }
rule PopulateSource { Populate $(1) : source : .cc ; }
HDRS = staging ;
SEARCH_SOURCE = staging ;
LOCATE_TARGET = <dir>lib ;
PopulateInclude foo ;
PopulateSource foo ;
Library libfoo : foo.cc ;
Date: Mon, 21 Apr 2003 18:35:17 +0200
From: Enno Rehling <ennor@funcom.com>
Subject: Re: Fw: Jam mailing list post
I actually don't do that - yet. Our two platforms are linux and windows
(VC7) and I don't think the benefits of using jam are all that big
considering the integration effort to get it to work with devstudio (how do
I get jam, the integrated debugger and the edit&continue features all
working in harmony?)
Date: Wed, 23 Apr 2003 09:28:19 +0200 (MEST)
From: Manuela Thygs <Thygs@gmx.net>
Subject: How can I analyze the return value of an external shell-tool?
If Jam calls an external shell-tool (compiler, linker, own script,...), how
can I analyze the return value of the tool?
Is it possible to analyze only the state success/failure or can I read out
something like an error message?
Date: Wed, 23 Apr 2003 13:35:52 +0200 (MEST)
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: Re: How can I analyze the return value of an external shell-tool?
In your rules' actions you can do anything you can do with shell scripts.
So, if you want to filter your compiler's error message, you need to
override the Cc or C++ actions implementing whatever handling you desire.
In case you're driving at something like the $(shell ...) feature of
make, there's nothing comparable in Jam (unfortunately).
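For the success/failure part: jam considers an action failed when its shell exits non-zero, and stops updating the affected targets. Capturing or filtering the tool's message has to happen inside the actions body. A hedged sketch of an overridden C++ action (UNIX shell assumed):

```jam
actions C++
{
    # stash the compiler's diagnostics, show them only on failure,
    # and propagate the non-zero exit so jam marks the target failed
    $(C++) -c $(C++FLAGS) $(OPTIM) -o $(<) $(>) 2> $(<:S=.err) || { cat $(<:S=.err) ; exit 1 ; }
}
```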
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Date: Wed, 23 Apr 2003 14:11:05 +0200
Subject: Simplicity is a valuable feature.
I find myself compelled to disagree with that last paren. It's the pedant in me. Sorry.
*climbs onto soapbox*
Jam's design says that building is a two-step process:
1. Decide what to do.
2. Do that.
All the 'doing' is in step 2, all the 'deciding' is in step 1.
Very simple. Simplicity is good.
Sometimes it would be handy to have e.g. $(shell), but the lack of
simplicity would be a bigger problem than the lack of a minor feature
such as $(shell). IMO.
*climbs off soapbox*
Date: Wed, 23 Apr 2003 16:36:32 +0200 (MEST)
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: Re: Simplicity is a valuable feature.
Then I have to answer. :-P
No doubt, simplicity is good. On the other hand, a lack of power may
increase complexity. Unfortunately, Jam's lack of power in this respect often
requires either introducing a step 0 -- a Jam-external build configuration
step -- or dirty tricks and abuse of the GLOB rule and/or messy
$(OS)/$(OSPLAT)/... case distinctions.
From: "Michael Champigny" <michael.champigny@verizon.net>
Subject: Re: Fw: Jam mailing list post
Date: Thu, 24 Apr 2003 08:29:27 -0400
Thanks! That works great. I'm having problems grasping how grist is actually
used in Jam. I know that conceptually it's just a prefix for ensuring uniqueness
among files with the same name in different directories. But I don't have an
understanding of exactly when you must use grist.
How did you know that you'd need grist in this particular case? Also, is the
syntax ":G=:" really necessary in the above rule? Its semantics seem to be
to strip the grist off a target, but the Populate rule worked fine
when I left it out. I don't believe there was grist there to begin with, so why
use that syntax? Am I missing something? For example:
"local t = $(1:S=$(3))" is equivalent to "local t = $(1:G=:S=$(3))"
Thanks for the help...after years of using Imake as my portable build system,
Jam has a STEEP learning curve. Examples would really help. :-)
From: "Michael Champigny" <michael.champigny@verizon.net>
Date: Thu, 24 Apr 2003 08:42:01 -0400
Subject: Grist 101
How does grist actually make a target name unique? For example:
dir1:
foo.c
dir2:
foo.c
I can apply grist to make these two targets unique (assuming foo.c is in $(<) ):
$(<:G=grist)
But wouldn't that expand to <grist>foo.c in both cases? It seems Jam still needs
to tack on the full directory spec to ensure uniqueness. Maybe something like:
<grist!dir1>foo.c and <grist!dir2>foo.c
I can't see any other way that Jam could differentiate the files. Is this what
Jam is doing under the hood?
Also, why not allow a "default" grist prefix so we don't have to come up with a
bogus name?
$(<:G)
Jam should make up a unique prefix string on the fly. Why must a Jam user be
bothered by what the grist string is? Obviously I'm missing something about
grist. Can anyone enlighten me? :-) Maybe I just haven't had my coffee yet.
Date: Thu, 24 Apr 2003 16:05:55 +0200 (MEST)
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: Re: Grist 101
Most Jambase rules apply FGristFiles, which sets the grist to
$(SOURCE_GRIST), the grist of the current subdirectory set by the SubDir rule.
If you only use Jambase rules you don't need to care about grist, since
those rules will set it for you. If you're writing your own rules, you
may need to deal with it; mostly you will use FGristFiles then. Only if
you're doing more `esoteric' things will you need to add a special grist.
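A small illustration (the directory names are hypothetical):

```jam
# Inside the Jamfile set up by "SubDir TOP src util", SOURCE_GRIST is
# the subdir tokens joined with "!", so FGristFiles tags plain names:
local s = [ FGristFiles foo.c ] ;
ECHO $(s) ;   # <src!util>foo.c -- distinct from any other foo.c in the tree
```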
Subject: Re: Fw: Jam mailing list post
From: "Ingo Weinhold" <bonefish@cs.tu-berlin.de>
Date: Thu, 24 Apr 2003 19:38:27 +0200 CEST
Yes, as long as you don't supply arguments with grist, these are
equivalent. More or less a copy and paste leftover in this case.
From: Craig Stout <Craig.Stout@SiliconAccess.com>
Date: Tue, 29 Apr 2003 13:09:59 -0400
Subject: bug in jam dependency checker?
I have found a scenario where Jam does not update a target when one of its
dependencies is updated. The scenario involves circular #includes. Though
it may not be a terribly useful thing to do, it did arise in my source code
and I believe Jam should handle it.
Given the following five files, jam does not cause a recompile of a.cpp when
foo.hpp is modified.
a.cpp (calls function foo)     b.cpp
      |                          |
      v                          v
    a.hpp <------------------> b.hpp
                                 |
                                 v
                    foo.hpp (defines function foo)
b.hpp #include's both a.hpp and foo.hpp
a.hpp contains an #ifndef A_HPP / #define A_HPP / #endif
My Jamfile contains the single line: 'Library test : b.cpp a.cpp ;'
Interestingly, the order in which the cpp files are listed DOES matter! (it works if
you reverse them).
Also: if the useless #include of a.hpp by b.hpp is removed, it works.
I verified this problem with jam 2.5 though until today I was using 2.3. I
haven't tried sifting through any debug output, maybe someone can suggest a
strategy here...
From: "Joshua Little" <animedemon@hotmail.com>
Date: Wed, 30 Apr 2003 01:46:34 -0500
Subject: New to Jam, trying to include headers.
I've managed to get Jam to work well with a few self-contained programs. I'm
now trying to compile a simple test program for my little XML parser and I'm
running into problems. Whenever I try to include headers from a separate
directory I get compile errors. In my case I have a library and headers in a
separate directory structure. For simplicity let's say I have my lib in the
/e/Ricin/lib dir and the headers in /e/Ricin/include. My test program is in
/e/RicinTest/src. I can do almost anything (compile libs, applications,
etc.) as long as it's all within the hierarchy of the TOP directory. Any time I
try to include a header from another directory I get this error:
g++.exe: cannot specify -o with -c or -S and multiple compilations
g++ -c -o src/RicinTest.o -DMSYS -I/e/ricin -O -Isrc src/RicinTest.cpp
I've tried directly setting the include directory with
SubDirC++Flags -I"/e/ricin" ;
in the Jamfile for the src directory and I've tried :
SubDirHdrs "e" "ricin" ;
Both seem to produce the correct -I flag, but I get that multiple
compilations error. What's the correct way to specify the header directories
for external libs, and how do I link to them correctly?
Oh and in the real directories some do have spaces in the names like
/e/programming projects/ricin but I didn't think that would make a difference.
I'm using Jam 2.5rc3 on Windows 98 MingW/Msys.
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 30 Apr 2003 12:23:18 +0200
Subject: How to merge different jams
I would like to have the semaphore and header-cache patches from the
craig_mcpheeters branch, combined with the 2.5rc2 (or rc3) branch.
I suppose this is easy if you know your way around Perforce. Could anyone
please provide me with the magical p4 command to merge these things, or
extract the changesets as a diff?
From: Jacob Gorm Hansen <jg@ioi.dk>
Date: 30 Apr 2003 14:52:51 +0200
Subject: any plans for hdrscan cache in mainline Jam?
I am trying to merge in header scan caching in my jam-2.5rc2. It seems a
bit nontrivial, and I was wondering if anyone has already done this, and
if there are any plans to incorporate this cache in jam2.5 or jam2.6?
Date: Tue, 29 Apr 2003 21:42:13 -0700
From: rmg@foxcove.com
Subject: jam 2.5rc3
jam 2.5rc3 (release candidate 3) is available in the Perforce Public
Depot. Release Notes are at
http://public.perforce.com/public/jam/src/RELNOTES
Release archives (source code only):
ftp://ftp.perforce.com/jam/jam-2.5.tar
or ftp://ftp.perforce.com/jam/jam-2.5.zip
The release will bear the "rc3" suffix for a few weeks, to give some
time for any serious bugs to surface. If none do, it shall quietly be
dropped, and this will become the final jam 2.5 release.
Here are the changes between jam2.5rc2 and rc3 (from the RELNOTES):
0.1. Changes between 2.5rc2 and 2.5rc3:
More SubDir work after rc2: if a Jamrules invoked SubDir to
establish other roots, and that Jamrules isn't in the current
directory, the roots it established were wrong.
Fix mysterious rebuild problem: in an attempt to make 'jam -dc'
output report headers updates more accurately, internal (header
collection) targets were being bound as T_BIND_PARENTS so that
they could carry the timestamp of the actual source file. But
that caused the fate of the internal node to be marked as
T_FATE_NEEDTMP if anything they included was newer, and that
easily happens among header files (something is always newer
than something else). Now internal targets carry their parents
time, but with T_BIND_UNBOUND, like other NOTFILE targets.
Remove temp .bat files created on NT. They used to all have
the same name and get reused, but with 2.5 the names were salted
with the PID and they would litter $TEMP. Now they get removed
after being used.
Undocumented support for SUBDIRRULES, user-provided rules
to invoke at the end of the SubDir rule, and SUBDIRRESET,
SUBDIR variables to reset (like SUBDIRC++FLAGS, SUBDIRHDRS, etc)
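A hedged illustration of how those two undocumented hooks might be used (the rule name, the accumulating variable, and the reset suffix here are invented for the example; the exact semantics are as described in the RELNOTES line above, not independently verified):

```jam
# Invoked automatically at the end of every SubDir invocation:
rule RememberSubDir { ALL_SUBDIRS += $(SUBDIR) ; }
SUBDIRRULES += RememberSubDir ;

# Ask SubDir to also reset SUBDIRDEFINES, alongside the stock
# SUBDIRC++FLAGS, SUBDIRHDRS, etc.:
SUBDIRRESET += DEFINES ;
```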
Date: Fri, 2 May 2003 12:26:36 -0700 (PDT)
Subject: "Stack overflow" running 'jam clean' on Windows
Has anyone had this happen? (And more importantly, if you have, did you
find a fix for it you'd be willing to share? :)
Date: Fri, 2 May 2003 12:50:45 -0700 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: "Stack overflow" running 'jam clean' on Windows
This happened to me on windows and MacOSX. Alter the Jamfile to contain
something like this:
# We have to signal jam.h for these
if $(OS) = NT { CCFLAGS += /DNT ; }
# CWM - enlarge stack sizes
if $(NT) {
LINKFLAGS += /STACK:0x400000 ;
}
else if $(OS) = MACOSX {
LINKFLAGS += -Wl,-stack_size -Wl,0x800000 ;
}
Date: Mon, 5 May 2003 20:32:31 -0700 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: fyi - my branch is updated
I've just integrated the latest Jam mainline changes into my branch:
//guest/craig_mcpheeters/jam/src/...
From: Dag Asheim <dash@linpro.no>
Date: 07 May 2003 14:38:18 +0200
Subject: SubInclude and current working directory for the compiler
I have a problem with the SubInclude command, because I (incorrectly)
thought it should behave as if it did a cd to the subdirectory in
question and then ran the actions from the local Jamfile.
I have condensed my problem down to just a few files:
~/simple/src/Jamfile:
HDRS = ../include ;
Main hello : hello.c ;
~/simple/src/hello.c:
#include <message.h>
main() { printf(STR); }
~/simple/include/message.h
#define STR "Hello, World!\n"
~/simple/src/Jamfile:
SubInclude TOP src ;
I have also defined the environment variable TOP:
export TOP=$HOME/simple
When I stand in the ~/simple/src directory and build it, everything
works as expected. But when I stand in the directory ~/simple, things
break down:
...found 12 target(s)...
...updating 2 target(s)...
Cc src/hello.o
src/hello.c:1: message.h: No such file or directory
cc -c -o src/hello.o -O -Isrc -I../include src/hello.c
...failed Cc src/hello.o ...
...skipped hello for lack of <src>hello.o...
...failed updating 1 target(s)...
...skipped 1 target(s)...
The compiler can no longer find message.h. I can see why this is
happening (the compiler apparently has a working directory of ~/simple
instead of ~/simple/src), but is there a better way to write the
~/simple/Jamfile to avoid this problem?
A work around is to abolish all relative paths from the local Jamfiles
(using HDRS = $(TOP)/include ;), but I would rather avoid that.
Btw, I have tried this with both Jam 2.5rc2 and jam-2.5rc3 with same results.
Date: Wed, 7 May 2003 16:52:32 +0200 (MEST)
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: Re: SubInclude and current working directory for the compiler
Fascinating, your filesystem seems to support overlaying of files. ;-)
The SUBDIR variable contains the path to the directory of the Jamfile. You
can use that as a base for constructing a relative path. BTW, it is also
better practice to use the SubDirHdrs rule to add include directories for
the current subdir. So, your line would read:
SubDirHdrs [ FDirName $(SUBDIR) $(DOTDOT) include ] ;
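Putting that together, the src Jamfile might read as follows. This is a sketch: the SubDir invocation assumes the standard TOP setup, which the poster's condensed example omitted:

```jam
# ~/simple/src/Jamfile (hypothetical rewrite)
SubDir TOP src ;
SubDirHdrs [ FDirName $(SUBDIR) $(DOTDOT) include ] ;
Main hello : hello.c ;
```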
Subject: Re: SubInclude and current working directory for the compiler
From: Dag Asheim <dash@linpro.no>
Date: 07 May 2003 18:24:30 +0200
Yes, it's a really nifty feature. It can be confusing, though! :-)
Thanks! This looks like a workable solution. I'll try it out tomorrow morning.
Side note: Wouldn't it be easier if actions by default were run with the
current working directory (cwd) set to the right value? There are
bound to be tools that use the cwd in some way and that can't be
overridden the way the C compiler can.
Subject: Re: SubInclude and current working directory for the compiler
From: "Ingo Weinhold" <bonefish@cs.tu-berlin.de>
Date: Wed, 07 May 2003 18:56:51 +0200 CEST
I think this would complicate things for Jam, maybe even decreasing
performance. For tools that only operate in the current working
directory it should be rather easy to adjust the actions to make it
work. E.g. like:
actions StupidTool {
cd `dirname $(1)`
stupidtool `basename $(1)` `basename $(2)`
}
From: "Michael Ashworth" <aisu@muf.biglobe.ne.jp>
Date: Thu, 8 May 2003 23:32:54 +0900
Subject: problems on window
I am trying to set up a simple jam file on Windows XP Home Edition
(Japanese version). A simple Main rule works, but trying to create
actions to run in the CMD shell doesn't seem to work. Looking at the
debug output, the action is being called but it doesn't expand to
anything. JAMSHELL is also blank. Can I run on CMD, and how? Or do I
need to install cygwin or the like?
Date: Fri, 16 May 2003 08:24:26 +0200 (CEST)
Subject: MkDir not working!?
I have some questions on jam.
At the beginning I wasn't able to compile jam with the dedicated makefile
for Win2k, so I used Visual C++ 6.0 to compile a jam executable from
the 2.5rc3 sources. I used the jambase.c file from the distribution, but it
makes no difference if I generate it myself with mkjambase.
I then built up a project with several Jamfiles and a Jamrules file.
I have to copy some header files from their original directory to another
directory which doesn't exist the first time, so that later builds can use
them. This is not working: it seems that MkDir is doing nothing. There is no
output (not even with -d3) for the MkDir statement.
The error output is always:
The system cannot find the path specified.
That means the system wants to copy the file with the
Bulk rule, but the MkDir rule is not working.
I tried to use InstallFile, but that doesn't work either.
Has anyone an idea why this is not working!?
####### Begin Jamrule ############
COPYINC = $(TOP)\\INC ;
COPYLIB = $(TOP)\\LIB ;
####### End Jamrule ##############
####### Begin Jamfile ############
SOURCES =
file1.c
file2.c
file3.c
;
HEADERS =
file1.h
file2.h
file3.h
;
Library bla : $(SOURCES) ;
# The InstallAll rule can also be copied to the Jamrule file.
rule InstallAll {
MkDir $(COPYINC) ;
Bulk $(COPYLIB) : $(1) ;
Bulk $(COPYINC) : $(HEADERS) ;
}
InstallAll bla.lib ;
####### End Jamfile ##############
__________________________________________________________________
Date: Fri, 16 May 2003 00:49:40 -0700 (PDT)
Subject: Re: MkDir not working!?
Bulk lacks a dependency on the directory into which the files are being
copied. It also doesn't do a MkDir for the directory -- but the one you
have won't do you any good without the dependency on the directory.
The File rule, to which the files passed to Bulk are passed, also lacks a
dependency on the directory to which the file is being copied, a MkDir for
it, and a Clean on the target file.
(Basically, Bulk and File could use a little work :)
Date: Thu, 29 May 2003 10:24:14 -0400
From: "Sean Wilson" <swilson@rim.net>
Subject: Reading *NIX-style libraries on NT
I'm trying to get the NT version of Jam to read archives produced by the
ARM ADS compiler (which are in COFF format). However, it looks like the NT
version of Jam only supports OMF libraries. Are there any workarounds apart
from hacking the Jam source as described in the "Cross Compiling for VmWorks
on NT using Jam" thread?
Subject: RE: Reading *NIX-style libraries on NT
Date: Thu, 29 May 2003 10:42:15 -0700
From: "Chris Antos" <chrisant@windows.microsoft.com>
The MSVC compiler produces COFF objects. Lib.exe converts OMF objects
to COFF before archiving them in the lib file. (Just now double checked
in MSDN). Jam works great with the objects and libs produced by the
MSVC compiler, and a while ago I was actually in that code and observed
that it matches the lib format spec as documented in MSDN.
Maybe the problem is something different from what you thought?
Subject: RE: Reading *NIX-style libraries on NT
Date: Thu, 29 May 2003 14:02:41 -0400
From: "Sean Wilson" <swilson@rim.net>
Sorry, I was getting my object formats confused. The ADS compiler actually
produces ELF object code. The problem is the archive format produced by the
ADS librarian is slightly different than the format produced by MSVC.
I'll have to look into it some more to determine exactly what the issue is.
Date: Thu, 29 May 2003 12:00:39 +0200
From: Joachim Falk <joachim.falk@gmx.de>
Subject: Request for clarification
Excerpt from http://public.perforce.com/public/jam/src/Jam.html:
This seems not to be true
(http://www.boost.org/tools/build/jam_src/index.html).
I am working with boost.jam here; maybe it is correct for
Perforce jam? I wonder if this counterintuitive feature/bug
will stay alive for backward compatibility reasons, or if
it will be fixed. Or would a fix from me be accepted back
into the baseline?
There seems to be a bug here too; the following Jamfile fragment
doesn't work as intended, I think:
<- Jamfile-fragment start ->
rule foo { bar on batz = 12 ; }
rule dumpbar { ECHO BAR: $(bar) ; }
on batz foo ;
dumpbar ;
<- Jamfile-fragment end ->
Executing this fragment modifies the global batz variable
when it clearly shouldn't. This is more or less the same
problem as the one fixed with the following code fragment in "jam_src/make.c":
<- make.c-fragment start ->
#ifdef OPT_FIX_TARGET_VARIABLES_EXT
/* we must make a copy of the target's settings before pushing
since calling the target's HDRRULE can change these
settings. */
saved = copysettings( t->settings );
pushsettings( saved );
#else
pushsettings( t->settings );
#endif
...
#ifdef OPT_FIX_TARGET_VARIABLES_EXT
popsettings( saved );
freesettings( saved );
#else
popsettings( t->settings );
#endif
<- make.c-fragment end ->
However, this fix implies that changes made to the
target-dependent variables which are blended into
the global variable space of the rule will be lost
after leaving the rule. And changing them with the
"variable on targets =" syntax will not change the
blended-in copies. Is this intended?
Date: Fri, 30 May 2003 17:38:54 +0200 (CEST)
From: Matze Braun <matze@braunis.de>
Subject: Re: Request for clarification
jam 2.4 had the two bugs you describe. jam 2.5 is fixed AFAIK, but boost.jam
is still based on jam 2.4 and may still have them.
Date: Tue, 3 Jun 2003 11:57:08 -0700
From: "Srinivasan Murari" <smurari@paramanet.com>
Subject: Long command problem with Jam
I downloaded Jam 2.5-rc3 and built it for Windows XP and am attempting to
replace our current make based system with Jam rules and have run into a problem.
Certain targets have actions that are too long, so I get the following error
C++ actions too long (max 996)!
because I have too many "DEFINES" and "HDR" variables for certain targets.
Has anyone created rules to work around this problem?
From: "Peter Klotz" <peter.klotz@aon.at>
Subject: Re: Long command problem with Jam
Date: Tue, 3 Jun 2003 23:53:30 +0200
replace our current make based system with Jam rules and have run into a
problem. Certain targets have actions that are too long so I get the following error
The maximum command line length under Windows XP for cmd.exe is exactly 8190 characters.
Jam uses the default value of 996 to be Windows NT 4 compatible.
You can change the limit by adjusting the value of MAXLINE in jam.h.
It is also possible to use cmd.exe from Windows XP under Windows 2000 which
by default has a 2046 character limit.
If 8190 characters are insufficient you can modify the rules Cc, C++ and
Link to use so called response files. See the command line option @ of the
Microsoft Compiler/Linker.
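For instance, a hedged sketch of a response-file variant of the NT Link actions (variable names follow common Jambase conventions, but this exact rewrite is untested; check your own Jambase's Link actions for the real variable set):

```jam
if $(NT) {
    actions Link bind NEEDLIBS {
        # Write the long part of the command line into a response
        # file, then hand it to the linker with the @ option.
        echo $(>) $(NEEDLIBS) $(LINKLIBS) > $(<:S=.rsp)
        $(LINK) $(LINKFLAGS) /out:$(<) @$(<:S=.rsp)
        del $(<:S=.rsp)
    }
}
```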
From: "David Colton" <david.colton@mobilecohesion.com>
Date: Thu, 5 Jun 2003 13:57:47 +0100
Subject: Newbie Question
I've searched the archives for a solution, similar to the lex or yacc
support, to generate CORBA / SOAP stubs, skeletons, xsd files etc. from
IDL definitions, with no luck.
Have I missed something?
From: johan.nilsson@esrange.ssc.se
Date: Fri, 6 Jun 2003 14:35:44 +0200
Subject: Header scan caching under VMS?
Does anyone know if the header-cache feature in Craig McPheeters' branch
should work under OpenVMS? I tried to download and compile (which worked
fine after a minor fix in hcache.c), but it doesn't seem like the cache is
in use. I tried to invoke jam using "-sHCACHEFILE=<filename>" for one of my
projects, but no cache file appeared.
Any quick steps on how to test/fix it?
Date: Sun, 8 Jun 2003 10:43:45 +0200
From: Paul Guyot <pguyot@kallisys.net>
Subject: Link fails every other time: what's wrong?
I should say this is my first project with Jam. This project can be
built with ProjectBuilder and CodeWarrior on MacOS X and since it
aims to be a multi-platform project, I considered Jam instead of make
to build it on general Unices.
I got jam 2.4. I have a single processor (does Jam automatically
detect this and spawn work to two processes if I had a dual
processor?). Anyway, whether I put -j1 or not, linking fails every
other time. The first build succeeds, the next one fails, then it
succeeds, then fails, etc.
I get:
I should say I changed the rules quite a bit to fit my needs, basically:
- I have spaces in paths.
- I want the build scripts to be in a single directory, not
everywhere mixed up with source code (I hate recursive make).
- I have files with the same prefix and different suffixes, so
replacing the suffix with .o in the traditional Unix spirit won't do.
So I patched the rules to append .o instead of replacing the suffix.
The jam file is here:
http://www.kallisys.net/Jamfile.txt
Any idea of what I'm doing wrong here?
Date: Tue, 10 Jun 2003 11:46:04 -0700
From: Matt Armstrong <matt@lickey.com>
Subject: [BUG] 2.5rc2 - mem leak in var_string()
var_string() allocates a list, iterates through it and then frees it.
Problem is, it loses the head of the list when iterating so the list is
never actually freed.
Suggested fix below...
==== //guest/matt_armstrong/jam/fix/dc/variable.c#1
@@ -176,7 +176,8 @@
if( dollar )
{
- LIST *l = var_expand( L0, lastword, out, lol, 0 );
+ LIST *h = var_expand( L0, lastword, out, lol, 0 );
+ LIST *l = h;
out = lastword;
@@ -196,7 +197,7 @@
*out++ = ' ';
}
- list_free( l );
+ list_free( h );
}
}
Date: Tue, 10 Jun 2003 16:08:25 -0700
From: Matt Armstrong <matt@lickey.com>
Subject: [bug] -g breaks the "first" pseudotarget
I have found that the new -g flag breaks the "first" pseudotarget. This
is because make0sort() sorts things with a 0 timestamp last. So the
"first" pseudotarget gets put last in all the dependency lists it occurs
in, including "all."
I experimented with two fixes, both worked. I'm not sure about either.
First fix just skipped the make0sort() call for T_FLAG_NOTFILE targets.
The assumption is that the dependencies of NOTFILE targets should be
built in the prescribed order.
The second fix had make0sort() place targets with a 0 timestamp before
all others, so "first" got put first.
I'm not sure how either of these work with a parallel build, since it
isn't clear how either guarantees that all dependencies of "first" get
built before any other targets are built. E.g. this stuff in Jambase
doesn't seem to cut it, since just the fact that "lib" depends on first
does not mean that the dependencies of "lib" do.
Depends all : shell files lib exe obj ;
Depends all shell files lib exe obj : first ;
NotFile all first shell files lib exe obj dirs clean uninstall ;
From: "Badari Kakumani" <badari@cisco.com>
Date: Thu, 19 Jun 2003 08:27:01 -0700
Subject: obsoleted objects in libraries
I am curious how other jammers are handling cases like:
Library foo.a : a.c b.c c.c ;
at a later point changing to
Library foo.a : a.c b.c ;
With the above change of dropping c.c from the archive foo.a, the
basic jam infrastructure never realizes this and hence never drops
the object c.o from the archive foo.a.
To work around this, I had to:
a) place a dependency on the Jamfile for all sources a.c, b.c and c.c, AND
b) introduce a rule and actions to remove foo.a when the Jamfile is updated.
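A minimal sketch of workaround (b). The rule name and wiring here are my invention and untested against the real Library rule's action ordering; treat it as an outline of the idea, not a drop-in:

```jam
# When the Jamfile is newer than foo.a, delete the archive so the
# next Library pass rebuilds it without stale members.
rule ScrubArchive        # ScrubArchive <archive> : <jamfile>
{
    Depends $(<) : $(>) ;
}
actions ScrubArchive
{
    rm -f $(<)
}
ScrubArchive foo.a : $(SUBDIR)/Jamfile ;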
From: johan.nilsson@esrange.ssc.se
Subject: RE: Header scan caching under VMS?
Date: Wed, 25 Jun 2003 15:32:58 +0200
As no one else came up with an answer ... I've come one step closer to a
solution. I found that the defines needed to include the options were set
using CCFLAGS, e.g.:
CCFLAGS += /DEFINE=($(OPT_DEFINES)) ;
and that VMS was also defined using CCFLAGS ("CCFLAGS += /DEFINE=VMS ;").
The sad thing about this is that the DEC C++ compiler can only handle a
single /DEFINE option, in the form /DEFINE=(opt1,opt2,...). If more are
specified, only the last one is used; hence no extensions were included
in my build (I didn't see it until using the -dx qualifier).
Using DEFINES instead of CCFLAGS solves the problem, e.g.:
--- Jamfile.config ---
..
DEFINES += $(OPT_DEFINES) ;
DEFINES += VMS ;
Also, an '#include "hcache.h"' statement was missing in make.c.
Still haven't got the header cache to work though ...
Date: Wed, 25 Jun 2003 14:55:56 +0100
From: Barrie Stott <G.B.Stott@bolton.ac.uk>
Subject: problem using jam to convert .html to .ps
The following gives a cut-down version of what I'm trying to do.
Start with abc.html. Use Dave Raggett's tidy program to tidy up the
html and then html2lout to get abc.lt. Finally, use lout to get abc.ps.
There's a lot of badly formed html out there so that, in practice,
abc.html needs editing several times. Eventually, I will decide that
no more html editing is required. The .lt output from html2lout then
needs massaging by several edits of abc.lt.
The crux of the problem is that, if I do any more .html edits after
starting .lt edits, all my abc.lt edits will be overwritten by a
straightforward `compile' of .html into .ps. I want to make a mark
when I finish html editing which says that, from now on, my `compile'
is from .lt to .ps (rather than .html to .ps).
I've solved my problem using make without difficulty; this is probably
because I've used make for a long time and this is my first brush with
jam. I've solved my problem using jam but only by modifying jam source
to make jam command line arguments available to jam via ARGV (the
required mods. were described in the mailing list archive). Both
these solutions created a dummy file whose presence means that further
compiling from .html is now forbidden; the jam solution always used
command line arguments and simply gave an error message when trying to
compile from .html with the dummy file present.
I would like to be able to type simply `jam' to get my .ps file and
`jam dummy' to indicate that I no longer wanted to consider the .html
file. Is this possible and, if so, how do I do it?
Date: Wed, 25 Jun 2003 14:54:04 +0100
From: Barrie Stott <G.B.Stott@bolton.ac.uk>
Subject: Documentation on SUBST in ftjam
I've looked through what documentation I can find about jam and I've
scanned the mailing list archive without finding anything definitive
about SUBST. Can someone point me in the right direction?
Date: Sat, 28 Jun 2003 23:31:26 +0100
From: Barrie Stott <G.B.Stott@bolton.ac.uk>
Subject: Re: problem using jam to convert .html to .ps
A little while ago I posted a query about making abc.ps from abc.lt
which was, in turn, created from abc.html. Initially we would
alternate between creating abc.ps and editing abc.html. So far things
are simple. However, after a while, we want to abandon the dependence
of abc.ps on abc.html and alternate creating abc.ps with editing abc.lt.
Andrew Bachmann suggested how this could be done using grist and,
playing with this, an even simpler method was found. I present it here
since there seems to be a lack of public examples of simple Jamfiles.
I have to confess that, now, it seems too trivial and obvious to
present, and I can now see several ways of doing what I want. However,
I didn't think this earlier, and someone else who is new to jam might
benefit from seeing this.
The Jamfile below is sufficient for our purposes. Store it in some
directory, sit there and touch file abc.html. Nothing is added by
giving detailed information about how, for example, abc.lt is
created from abc.html, so we simply touch abc.lt and display a message
about what is being done in the actions for Html2Lout.
Initially, we obey `touch abc.html; jam -d0' several times to mimic
alternating between editing the html file and running jam. Info is
displayed which shows what's being done.
We then `touch done-html' to indicate that we no longer want to
depend on abc.html. We cannot prevent abc.html being edited and made
more recent but we do not want such information to determine what
happens. That's the reason for Html2Lout doing useful work only when
done-html does NOT exist.
We then obey `touch abc.lt; jam -d0' since we are now editing the lout
file. Similar info is displayed. At one time the Lout2PS rule did not
have `Depends all : $(1) ;' and then nothing was displayed.
I hope the simplicity of this example has offended no one.
rule Html2Lout { Depends $(1) : $(2) ; Depends all : $(1) ; }
rule Lout2PS { Depends $(1) : $(2) ; Depends all : $(1) ; }
actions Html2Lout { if [ ! -f done-html ]
then
echo "make abc.lt from $(2)" ; touch abc.lt
fi
}
actions Lout2PS { echo "make abc.ps from $(2)" ; touch abc.ps }
Html2Lout abc.lt : abc.html ;
Lout2PS abc.ps : abc.lt ;
Date: Tue, 1 Jul 2003 17:37:57 +0100
From: Barrie Stott <G.B.Stott@bolton.ac.uk>
Subject: Use of clean with directories
A simple question. I am happy with Clean for getting rid of files but
don't know how to get rid of directories. Can someone help, please?
Date: Tue, 01 Jul 2003 10:24:14 -0700
From: Matt Armstrong <matt@lickey.com>
Subject: Re: Use of clean with directories
There is no built-in rule for deleting directories. That is something you have
to create yourself. I don't personally have a thought-out example.
Date: Tue, 1 Jul 2003 10:56:32 -0700 (PDT)
Subject: Re: Use of clean with directories
I'm wondering why you'd bother -- if you want the whole thing gone,
'rm -rf *' would do the job easier.
From: "Hoff, Todd" <Todd.Hoff@Ciena.com>
Date: Tue, 1 Jul 2003 12:49:10 -0700
Subject: ideas on getting date and time in jamrules?
Is there a good way to get current date and time
in jamrules (windoze platform)? I've looked around
for a solution but have not found one.
The time stamp would be redirected to a source file
so it can be displayed as part of the build information.
Date: Wed, 2 Jul 2003 14:29:48 +0100
From: Barrie Stott <G.B.Stott@bolton.ac.uk>
Subject: Re: Use of clean with directories
Many thanks to those who replied. With some reluctance, I had
temporarily overridden the definition of RM as `rm -f' in Jambase to
`rm -rf' in my Jamfile; perhaps I would have been happier if I had had
the sense not to remove each file in a to-be-cleaned directory as well
as the directory itself. I was particularly pleased with the more
rounded solution posted by Eric Sunshine. It just has a feeling of
rightness about it.
From: "Eric Sunshine" <jam@sunshineco.com>
Subject: Re: Use of clean with directories
Date: Mon, 7 Jul 2003 03:58:15 -0400
For the Crystal Space project (http://crystal.sf.net/), I handled this issue
by creating a CleanDir rule which parallels the existing Clean rule. The
code looks like this:
DELTREE ?= "rm -rf" ;
# CleanDir <tag> : <dir> ...
# Forcibly delete a set of directories, even if they are not empty.
# Tag is one of the standard targets used with the "Clean" rule, such as
# "clean" or "distclean".
rule CleanDir {
Always $(<) ;
NotFile $(<) ;
NoCare $(>) ;
}
actions piecemeal together existing CleanDir { $(DELTREE) $(>) }
You use the CleanDir rule just like you would use Clean. For example:
Clean clean : $(OBJS) ;
CleanDir clean : $(OBJDIR) ;
CleanDir distclean : $(CONFIGDIR) ;
CleanDir maintainerclean : autom4te.cache ;
Note that you will probably want to conditionalize DELTREE on a per-platform basis.
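For example, the per-platform guard might look like this (the NT command is an assumption about which shell runs the actions; adjust for your JAMSHELL):

```jam
if $(NT) { DELTREE ?= "rmdir /s /q" ; }
else     { DELTREE ?= "rm -rf" ; }
```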
From: "Hoff, Todd" <Todd.Hoff@Ciena.com>
Date: Mon, 14 Jul 2003 14:14:07 -0700
Subject: RE: jamdate and spaces
$(JAMDATE) worked as the time stamp, thanx, but i must also pass
in a time stamp from the build system if it is built from the build system.
The jam date time stamp looks like:
"Mon Jul 14 13:52:32 2003"
My time stamp looks like:
2003-07 14:11:0:7
When i use it to generate a build info file it looks like:
static char* pBuild_time = "2003-07"; "14:11:0:7";
Jam is treating it like a list because of the space, I guess. There
was some email in the archive about this, but I never saw a resolution.
Is there a way to treat it like a scalar?
The jamdate string has a bunch of spaces in it, yet it doesn't get
treated like a list. Does jam know a variable was assigned from JAMDATE
and then treat it like a scalar?
Date: Tue, 15 Jul 2003 00:28:54 +0200
From: Harri Porten <porten@kde.org>
Subject: Re: RE: jamdate and spaces
Surround the value by double quotes, e.g. "2003-07 14:11:0:7". This way
it won't be split.
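In Jamfile terms, the difference looks like this (a tiny illustrative fragment; the variable name is invented):

```jam
BUILD_TIME = 2003-07 14:11:0:7 ;    # two list elements
BUILD_TIME = "2003-07 14:11:0:7" ;  # one element, spaces preserved
ECHO $(BUILD_TIME) ;
```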
From: Vladimir Prus <ghost@cs.msu.su>
Date: Thu, 24 Jul 2003 17:03:16 +0400
Subject: TEMPORARY bug?
suppose I have the following Jamfile:
Depends all : a ;
Depends a : b ;
Depends b : c ;
TEMPORARY b ;
actions copy { cp $(>) $(<) }
copy a : b ;
copy b : c ;
If "c" initially exists, and I run "jam -fJamfile", everything's OK. But on
all other invocations, jam recreates "a" over and over. Is this a bug?
This reproduces on a freshly "p4 sync"-ed sources from public depot.
Date: Wed, 30 Jul 2003 15:27:23 -0400
From: "Jeff Nicholson" <JNicholson@rfmd.com>
Subject: Better string table support in file_archscan (filent.c)
We are experimenting with using Jam 2.5rc3 to build our embedded
applications. We use the MULTI 2000 v3.5 compiler toolchain from Green
Hills Software (GHS) on Windows XP, and I made the appropriate Jambase
changes to invoke it, but I noticed that Jam was rebuilding some object
targets residing in libraries even though they were up-to-date. It
appeared that the affected files were the ones that had longer file
names. I tracked the problem down to function file_archscan() in
filent.c. I'm not sure which librarian(s) this function was written
for, but the section of code dealing with long filenames
else if (ar_hdr.ar_name[0] == '/' && ar_hdr.ar_name[1] != ' ') {
/* Long filenames are recognized by "/nnnn" where nnnn is
** the offset of the string in the string table represented
** in ASCII decimals.
*/
}
wasn't working for the GHS librarian (ax). The problem is that the
string table section of the ax library files does not null-terminate the
entries. Instead, they are delimited by linefeed characters (\n
0x0A). So I've changed the code to handle either case
else if (ar_hdr.ar_name[0] == '/' && ar_hdr.ar_name[1] != ' ') {
/* Long filenames are recognized by "/nnnn" where nnnn is
** the offset of the string in the string table represented
** in ASCII decimals.
*/
}
and it seems to work fine. Anyone see a reason why this change
shouldn't be incorporated into the release? Other comments?
Date: Sat, 9 Aug 2003 11:13:58 +0200
From: Drew McCormack <drewmccormack@mac.com>
Subject: Fortran 90 modules
I am looking at alternative build systems for a reasonably large piece
of Fortran 90 software. The problem with Fortran 90, is that is has
modules, which are a bit like header files, but not quite.
Dependencies for C are easy to find, because you simply look at the
file named in an include statement. In fortran 90, you have a 'use'
statement. The problem is, the 'use' relates to a fortran language
construction, not directly to a file name.
Eg
In file bob.f90:
module tom
end module
In file terry.f90:
use tom
In this example, the terry.f90 file depends on bob.f90, but bob.f90 is
not mentioned explicitly in terry.f90.
One option is just to ensure that every 'module' is placed in a file
with exactly the same name; that leads to a situation which is the same
as in C. But I was wondering whether it would be possible to use jam to
handle the more general case, where the module and file have different names.
Basically you would need to be able to scan files not only for 'use'
statements (equivalent to 'include'), but also 'module' statements.
Each file could define multiple 'modules'.
Anyone know if this is possible, or, even better, whether it has been done?
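Nothing like this exists in the stock Jambase; purely as an illustration of the two-pass scan the general case needs, here is a Python sketch that first maps each module name to the file whose 'module' statement defines it, then resolves each 'use' statement through that map. The regexes are simplified (they deliberately skip lines such as 'end module' and 'module procedure foo').

```python
import re
from pathlib import Path

# A 'module NAME' line defines NAME; a 'use NAME' line consumes it.
MODULE_RE = re.compile(r'^\s*module\s+(\w+)\s*$', re.IGNORECASE | re.MULTILINE)
USE_RE = re.compile(r'^\s*use\s+(\w+)', re.IGNORECASE | re.MULTILINE)

def scan_f90_deps(sources):
    """Map each source file to the files defining the modules it uses."""
    texts = {src: Path(src).read_text() for src in sources}
    # Pass 1: which file defines which module? A file may define several.
    defined_in = {}
    for src, text in texts.items():
        for mod in MODULE_RE.findall(text):
            defined_in[mod.lower()] = src
    # Pass 2: resolve 'use' statements through the module map,
    # ignoring unknown modules and self-references.
    deps = {}
    for src, text in texts.items():
        deps[src] = sorted({
            defined_in[mod.lower()]
            for mod in USE_RE.findall(text)
            if defined_in.get(mod.lower()) not in (None, src)
        })
    return deps
```

With Drew's example (bob.f90 defining module tom, terry.f90 using it), this reports that terry.f90 depends on bob.f90 even though bob.f90 is never named in terry.f90.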
From: Michael Beach <michaelb@ieee.org>
Date: Wed, 13 Aug 2003 16:09:11 +1000
Subject: Perplexing behaviour of TEMPORARY
Hello all. In one of my Jamfiles I have a file target marked TEMPORARY.
However if the target exists when jam is invoked, the rule to build it is
invoked even if it is up to date with respect to its dependencies. Is this a
bug, or is this behaviour deemed desirable for some reason which has not
occurred to me?
(There seemed to be no response to this email.)
The following Jamfile illustrates the problem...
SubDir TOP ;
rule Foo {
    Depends $(1) : $(2) ;
    Temporary $(2) ;
}
rule Bar { Depends $(1) : $(2) ; }
actions Foo { sleep 1 ; cp $(2) $(1) ; }
actions Bar { sleep 1 ; cp $(2) $(1) ; }
Foo x : y ;
Bar y : z ;
Assuming x, y and z don't initially exist, saying "touch z" and then "jam x"
results in rules "Bar y" and "Foo x" being invoked. However further "jam x"
commands always result in "Foo x", which is not what I was expecting.
Commenting out "Temporary $(2) ; " in the Jamfile gets rid of the invocations
of "Foo x" but has the (expected) consequence of causing "Bar y" and "Foo x"
to be invoked if "y" is missing.
Can anybody shed any light on this? Thanks.
From: Michael Beach <michaelb@ieee.org>
Subject: Re: Perplexing behaviour of TEMPORARY
Date: Wed, 13 Aug 2003 16:19:41 +1000
Oops! I meant to say that the file target marked TEMPORARY causes other file
targets which depend on it to be built even if they appear to be up to date
with respect to it.
Date: Mon, 18 Aug 2003 11:01:27 -0700 (PDT)
From: "Jim Geist" <jimge@CS.Stanford.EDU>
Subject: MSDEV and the jam -j flag
I'm investigating jam as a possible alternative to building a fairly
large project that's currently being built with Visual Studio.
I've got the project building fine using just one process, but I run
into problems with the -j flag. The Microsoft compiler likes to treat
several files as project-wide, even during the compilation of
individual source files. For example, assuming a.cpp and b.cpp build foo.lib:
cl /c /Zi /Fdfoo.pdb a.cpp <-- creates foo.pdb
cl /c /Zi /Fdfoo.pdb b.cpp <-- updates foo.pdb
lib /out:foo.lib a.obj b.obj
where foo.pdb will contain the debugging info for foo.lib (if you
don't name foo.pdb, the same thing happens, just with a compiler
generated name).
Obviously if the two cl steps are run concurrently you've got
problems. Unfortunately that's exactly what happens with -j2.
I tried making one of the project-wide files dependent on all the
source files in that project, but that didn't seem to help.
So, anyone else run into this? Is there an easy way to influence what
files jam decides to compile in concurrent jobs?
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: MSDEV and the jam -j flag
Date: Mon, 18 Aug 2003 21:31:44 +0200
It really sounds like a Microsoft bug: The compiler should be doing
proper locking. If it isn't, that compiler isn't going to work very
well with SMP machines, no matter which program invokes it.
I know there have been some fairly evil hacks done to make it work in
the past. Search the archive. But I guess there's a proper solution at
Microsoft somewhere - surely the developers there have sexy SMP
machines they want to use?
Date: Wed, 20 Aug 2003 19:18:30 +0200
From: Jacob Gorm Hansen <jg@ioi.dk>
Subject: Re: MSDEV and the jam -j flag
You should have a look at Craig McPheeter's (sorry if I spelled the name
wrong, been out of the loop for a while) version of Jam, with the
semaphore patches. I have managed to make Jam work perfectly with MSVC
and -j. However, my Jambase is a heavily modified version of Chris
Antos' 2.3-Jambase, and not suitable for general use.
Date: Wed, 20 Aug 2003 20:43:00 -0700
From: Paul Forgey <paulf@metainfo.com>
Subject: trying to link via response file on NT
With the partial Jamfile below, I am attempting to link via a response
file on NT. It works as long as LOCATE_TARGET isn't somewhere outside
the source directory. In that case, the response file is still created
in the source directory, but the Link action attempts to look for it in
the output directory. If I use $(RESPFILE) instead of $(1).rsp in the
action, it works so long as RESPFILE isn't made local, since actions
can't see local variables in a rule, but of course this only works if
there is one target being linked. And I can't use "RESPFILE on $(1) =
$(1).rsp" because, for reasons I still can't figure out, Jam doesn't
seem to think the target is $(1) during the Link rule and action.
Why does the rule see $(1) in a different directory than the action?
Is it because the rule got evaluated before the output directory got
established? How do I work around this?
rule Link {
    local RESPFILE = $(1).rsp ;
    Depends $(1) : $(RESPFILE) ;
    EmptyResponseFile $(RESPFILE) : $(2) ;
    MakeResponseFile $(RESPFILE) : $(2) ;
    RmTemps $(1) : $(RESPFILE) ;
}
rule EmptyResponseFile { Depends $(1) : $(2) ; }
rule MakeResponseFile { Depends $(1) : $(2) ; }
actions ignore EmptyResponseFile { $(RM) $(<) 2> NUL }
actions piecemeal MakeResponseFile { echo $(>) >> $(<) }
actions Link bind NEEDLIBS {
    $(LINK) $(LINKFLAGS) /out:$(<) $(UNDEFS) @$(1).rsp $(NEEDLIBS) $(LINKLIBS)
}
Date: Thu, 21 Aug 2003 16:34:05 -0700
From: Paul Forgey <paulf@metainfo.com>
Subject: improved MakeLocate rule
I've found that if I set LOCATE_TARGET to be anywhere other than the
current directory of the Jamfile, things work as expected unless a
source file lives in another directory. For example:
LOCATE_TARGET = subdir ;
Main prog : file.c src/file.c ;
Although I'd never do this in a real project, I tested having the
source filenames the same in both directories just to verify things.
The problem is jam will properly compile file.c -> subdir/file.o;
however, it will also try to compile src/file.c -> subdir/src/file.o.
The trouble is that Objects has only anticipated the output going into
subdir, not subdir/src, and therefore the MkDir and associated
dependencies are not created.
This may be a bit of a hack, but it's the lowest-impact way I've found
around the problem. If somebody has a better solution, I'd love to use
it instead. I've attached what I came up with below; the added part
is the $(_subdir) business.
If $(<), the object file, has a parent directory, then we append it to
the target directory, $(>), and use that new directory in the MkDir and
Depends. However, we leave LOCATE just as it was before. In my above
example, src/file.c gets compiled into subdir/src/file.o, but jam does
it in terms of subdir, setting the -o flag to src/file.o. So we think
in terms of subdir, but we make certain subdir/src exists. I've
verified things work just as before if LOCATE_TARGET is not set, where
the objects are created in the same directories as the source files.
rule MakeLocate {
    # MakeLocate targets : directory ;
    # Sets special variable LOCATE on targets, and arranges
    # with MkDir to create the target directory.
    # Note we grist the directory name with 'dir',
    # so that directory path components and other
    # targets don't conflict.
    if $(>) {
        # see if $(<) is in a subdirectory
        local _subdir = $(<:D) ;
        if ( $(_subdir) != "" ) {
            # it is, so we make sure that directory exists
            _subdir = $(>)$(SLASH)$(_subdir) ;
        } else {
            # nope, never mind then
            _subdir = $(>) ;
        }
        LOCATE on $(<) = $(>) ;
        Depends $(<) : $(_subdir[1]:G=dir) ;
        MkDir $(_subdir[1]:G=dir) ;
    }
}
Date: Fri, 22 Aug 2003 05:10:00 +0200 (CEST)
From: Matze Braun <matze@braunis.de>
Subject: Re: improved MakeLocate rule
I proposed a similar fix a while ago :)
Look here for why this fix isn't a good idea:
Subject: RE: MSDEV and the jam -j flag
Date: Sat, 23 Aug 2003 19:43:03 -0700
From: "Chris Antos" <chrisant@windows.microsoft.com>
The simplest solution is to use /Z7 instead of /Zi. See the MSDN docs for details.
From: Vladimir Prus <ghost@cs.msu.su>
Date: Mon, 1 Sep 2003 12:05:16 +0400
Subject: Buglet in :P and :D docs.
A Boost.Build user noted the following problem with the Jam documentation, which
is present in the main Jam line as well.
Date: Tue, 9 Sep 2003 12:29:03 -0700
From: Paul Forgey <paulf@metainfo.com>
Subject: precompiled headers using msvc and working with -j
I've seen requests bounced around the list for such a thing. I've come
up with something that's been working quite well here for a while,
although it is kind of ugly. Then again, vc's handling of pre-compiled
headers is a mess if you try to make it work in an smp build
environment anyway.
Hopefully, we'll be able to keep the same external Jamfile interface to
it when we wire up precompiled headers to gcc-3.4 in our build system.
To use it, invoke the PreCompiledHeaders rule, with $(1) being the list
of source files to be injected with the precompiled header, and $(2)
being the header file (a single header file!). Two pch files will be
generated, one for C and the other for C++. For all C sources, the
compiler forces in an #include of $(2) using the C-compiled pch output,
and likewise for C++. That means you want proper #ifdef guarding and
#ifdef/#ifndef __cplusplus sections in your pch and any header files it includes.
Assuming your header files are properly guarded, your code can continue
to include the same files your precompiled header did with little
impact on build performance. This is important if the same code is to
be built where precompiled headers aren't supported. In this case,
that's presently everything not built with msvc.
This isn't quite the Microsoft paradigm for handling precompiled
headers, but it seems closer to the way most other compilers do it.
Since we build on lots of platforms using different compilers, we
wanted something that would translate well to non msvc environments.
I don't like how I added .h to the UserObject rule to support this.
Maybe it would be better to use a new extension, like .pch, similar to Code Warrior.
This also works great for us on MFC projects. Assuming you have a
SubDirC++Flags set for the proper -D options (like -D_AFXDLL -D_MBCS,
or whatever options you are building with), just use:
PreCompiledHeaders $(SOURCE_FILES) : stdafx.h ;
If you prefer to use ObjectDefines on the sources, then pass the same
defines to $(3):
PreCompiledHeaders $(SOURCE_FILES) : stdafx.h : $(MFC_DEFS) ;
rule UserObject {
    switch $(2) {
    # other stuff

    # nt specific, for precompiled header output.
    # the .obj file is essentially throwaway
    case *.h :
        if $(NT) {
            switch $(<:B) {
            case CC* :
                CCFLAGS on $(<) += /TC /Yc /Fp$(LOCATE_TARGET)$(SLASH)$(<:G=:S=.pch) ;
                Cc $(<) : $(>) ;
            case C++* :
                C++FLAGS on $(<) += /Yc /Fp$(LOCATE_TARGET)$(SLASH)$(<:G=:S=.pch) ;
                C++ $(<) : $(>) ;
            }
        }

    # other stuff
    case * : Echo "unknown suffix on " $(2) ;
    }
}
rule PreCompiledHeaders { # source-files : header-file : [ defines ]
    if $(NT) {
        local _srcs = [ FGristFiles $(1) ] ;
        local _hdrs = [ FGristFiles $(2) ] ;
        Includes $(_srcs) : $(SUBDIR)$(SLASH)$(2) ;
        local _ccpchs = [ FGristFiles "CC"$(2:S=.pch) ] ;
        local _c++pchs = [ FGristFiles "C++"$(2:S=.pch) ] ;
        local _ccobjs = $(_ccpchs:S=$(SUFOBJ)) ;
        local _c++objs = $(_c++pchs:S=$(SUFOBJ)) ;
        Object $(_ccobjs) : $(_hdrs) ;
        Object $(_c++objs) : $(_hdrs) ;
        local _objs = $(_ccobjs) $(_c++objs) ;
        DEFINES on $(_objs) += $(3) ;
        CCDEFS on $(_objs) = [ on $(_objs) FDefines $(DEFINES) ] ;
        MakeLocate $(_ccpchs) : $(LOCATE_TARGET) ;
        MakeLocate $(_c++pchs) : $(LOCATE_TARGET) ;
        Clean clean : $(_ccpchs) ;
        Clean clean : $(_c++pchs) ;
        Depends $(_srcs:S=$(SUFOBJ)) : $(_ccobjs) $(_c++objs) ;
        ObjectCcFlags $(1) : /FI$(2) /Yu$(2) /Fp$(LOCATE_TARGET)$(SLASH)$(_ccpchs:G=) ;
        ObjectC++Flags $(1) : /FI$(2) /Yu$(2) /Fp$(LOCATE_TARGET)$(SLASH)$(_c++pchs:G=) ;
    } # $(NT)
}
From: <piot@hotmail.com>
Date: Sun, 14 Sep 2003 00:40:34 +0200
Subject: Automatic inclusion of source files
Since Jam automatically parses the dependencies of the header files, how can
I get it to automatically add the source files as dependencies? I
have a lot of source files (several hundred) in subdirectories. Now, I
obviously don't want to add them all in my Jamfile. How can I tell Jam to do this?
From: <piot@hotmail.com>
Subject: Re: Automatic inclusion of source files
Date: Sun, 14 Sep 2003 11:11:46 +0200
Your code solved most of my jam problems!
However, I have a couple of libraries (again with several hundred source
files in subdirectories) that don't come with any jam or make files. The
problem with those is that they are multi-platform (e.g. GameCube,
Playstation, Palm, MacOS etc) and so they obviously fail to compile on the
"wrong" platform.
So I am interested in including only those .cpp files to which an include
dependency has been traced. Jam must be able to
correctly parse "#ifdef" and "#if defined(xxx)" macro statements.
The solution for me now is that every time I get a new version of the library
sources, I delete the directories and source files that contain code for the
"wrong" platform. This is *very* time consuming.
Maybe there is a program which can correctly parse the dependencies and then
feed that list to jam?
From: "Alen Ladavac" <alen@croteam.com>
Subject: Re: Automatic inclusion of source files
Date: Sun, 14 Sep 2003 11:31:15 -0000
I don't understand how include dependencies between .cpp files could be determined automatically.
You don't #include .cpp files, do you?
We also build for multiple platforms here, but we solve that problem in a specific way (we used it
even before Jam, to improve readability and keep our sanity :)). We put all platform-specific parts
of each library in a subdirectory of a Sys subdir, like SomeLibrary/Sys/Win32, SomeLibrary/Sys/XBox,
SomeLibrary/Sys/SDL, etc. My original version of RGLOB has an additional case in ListDir to skip */Sys
when searching directories. Then I can manually add the needed per-platform subdirs, like this:
FILES = [ RGLOB $(PRJDIR) : *.cpp *.c ] ;
if $(CONFIG) = Win32 {
    FILES += [ RGLOB $(PRJDIR)\\Sys\\Win32 : *.cpp ] ;
} # else ... etc.
Main MyApp : $(FILES) ;
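RGLOB is Alen's own rule rather than part of the stock Jambase. For readers who want the same behaviour outside Jam, the skip-the-platform-subtree walk can be sketched in Python (the directory layout follows Alen's example; the function itself is illustrative):

```python
import os

def rglob(root, suffixes, skip_dirs=("Sys",)):
    """Recursively collect sources, skipping platform-specific subtrees."""
    matches = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune skipped directories in place so os.walk never descends into them.
        dirnames[:] = [d for d in dirnames if d not in skip_dirs]
        matches.extend(
            os.path.join(dirpath, f)
            for f in filenames
            if f.endswith(tuple(suffixes))
        )
    return sorted(matches)
```

The per-platform additions are then globbed explicitly, e.g. rglob(os.path.join("SomeLibrary", "Sys", "Win32"), (".cpp",), skip_dirs=()) only when building for Win32.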
From: <piot@hotmail.com>
Subject: Re: Automatic inclusion of source files
Date: Sun, 14 Sep 2003 14:58:15 +0200
No, but in most cases each .h corresponds to a .cpp file in the same
directory. There are of course exceptions to this rule: sometimes the
definition is located elsewhere, sometimes there isn't a definition at all
(a pure virtual interface), and sometimes it is a different source type (.inc,
.asm etc). However, it is easier to set up a dependency for those files that
don't follow this rule than the other way around.
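The heuristic described here, each used header usually having a sibling source file with the same stem, could be sketched like this (a hypothetical helper, not an existing Jam or Jambase feature):

```python
import os

def sources_for_headers(headers, known_sources):
    """For each header, pick the sibling source with the same stem, if any."""
    known = set(known_sources)
    picked = []
    for hdr in headers:
        stem, _ = os.path.splitext(hdr)
        # Try the common source suffixes in order of likelihood.
        for ext in (".cpp", ".c"):
            candidate = stem + ext
            if candidate in known:
                picked.append(candidate)
                break
    return picked
```

The exceptions piot lists (definitions elsewhere, pure virtual interfaces, .inc/.asm sources) would be handled as explicit overrides on top of this default mapping.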
SomeLibrary/Sys/Win32, SomeLibrary/Sys/XBox,
That is what I would like to have as well. The designers of those libraries
thought it was better to do it like this:
SomeLibrary/Input/Win32/...
SomeLibrary/Input/MacOs/...
SomeLibrary/Graphics/Win32/...
SomeLibrary/Graphics/MacOs/...
.....
The best solution might be to ask them to change their directory structure.
I certainly would try to do that, cause it would make things a lot easier :)
PS. We are in the same business, I work for Dice (creators of Battlefield
1942, Midtown Madness 3, Rallisport Challenge etc) and I think Sam is
serious fun :o)
From: "Alen Ladavac" <alen@croteam.com>
Subject: Re: Automatic inclusion of source files
Date: Sun, 14 Sep 2003 16:00:45 -0000
You could modify ListDir so that it enters the special dirs (Win32, MacOs, ...) only if a particular var
is set. That would work as well, even though the directory structure is not as tidy.
Thanks. I hear that BF1942 is a great game as well; I wish I could say that from personal
experience, but I don't get to play a lot of games lately. Isn't that ironic? :)
Anyway, I wish you luck with Jam, and let me know if you find out some useful tricks in it.
From: "Alen Ladavac" <alen@croteam.com>
Date: Sun, 14 Sep 2003 16:40:53 -0000
Subject: Batching compile actions _does_ make a difference! _Huge_ difference!
I'd like to dust off this topic again. I searched the archives and found that
it was already discussed before, but with no result.
To reiterate: some (most?) compilers are able to compile more than one
source file in one invocation. As things are now, Jam is able to batch
several updates of the same library archive together, but it cannot batch C++
compilations because the target files are different.
The previous discussion had, AFAIK, ended with a claim that this ability would
not yield any significant performance improvements. I would like to show you
results of tests I've made which show that the improvements are _very_ significant!
library A:
separate: 31 seconds
batched: 22 seconds
library B:
separate: 351 seconds
batched: 195 seconds
Horrors... this is almost _2 times_ slower than it should be!
Does anyone have some solutions or workarounds for this problem? Or at least
some pointers on what would be the simplest way to add this functionality to
the Jam source, or even better to Jambase? I thought of maybe writing a special
rule that would collect sources with the same flags and create a pseudo target for
them. But I don't think I could do that without forcing recompilation even on
those that haven't changed.
P.S. For interested readers...
The compiler in question is MS VisualC++7.1 (or if you will, .NET 2003),
running under WinXP Pro, on AthlonXP 1.9GHz with 768MB ram. Library A is
smaller with 106 source files and about 2MB of source code, library B is
bigger with 296 files with about 5.4MB source code. The bigger one also
includes some big dependencies (like DX9 and the Windows SDK).
"Batched" was measured by compiling with Visual Studio's proprietary
.vcproj format (for non-Win32 people, .vcproj is a very weird concept of
holding a list of .cpp files and their settings in an .xml file).
"Separate" was measured using Jam. All of the time is spent executing
actions; Jam's dependency scanning phase took <1 sec.
All compiler settings are the same; the only difference is that
.vcproj batches the sources with the same flags and output directory.
Note that this is compiled with precompiled headers. If precompiled headers
are not used then batching doesn't make much difference, but only because
compilation is _much slower overall_. And by much slower I mean >3 times slower!
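The batching rule Alen goes on to wish for would essentially group sources by identical flag sets and emit one cl invocation per group. A Python sketch of just that grouping step (the flag strings and the chunk size are made up; this is not an existing Jam facility):

```python
from collections import defaultdict

def batch_by_flags(sources_with_flags, batch_size=20):
    """Group (source, flags) pairs sharing identical flags into
    batched compiler invocations."""
    groups = defaultdict(list)
    for src, flags in sources_with_flags:
        groups[flags].append(src)
    commands = []
    for flags, srcs in groups.items():
        # One command per batch_size chunk, so command lines stay bounded.
        for i in range(0, len(srcs), batch_size):
            chunk = srcs[i:i + batch_size]
            commands.append(f"cl /c {flags} " + " ".join(chunk))
    return commands
```

The hard part, as Matt notes later in the thread, is not the grouping itself but deciding when two targets really do share flags, since Jam allows per-file flag settings.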
Date: Sun, 14 Sep 2003 07:51:24 -0700
From: Matt Armstrong <matt@lickey.com>
Subject: Re: Batching compile actions _does_ make a difference! _Huge_ difference!
I would be interested to see how your tests turned out if they were done
in the same way jam might batch compile. E.g. compile groups of 10 or
20 source files together. This is the kind of test I ran 9 months ago
and did not see the performance gain you see (this was with VC++ 6.0).
Also, I wonder if pre-compiled headers entered the picture when
compiling with VC++.NET. They usually don't with jam.
From: "Alen Ladavac" <alen@croteam.com>
Subject: Re: Batching compile actions _does_ make a difference! _Huge_ difference!
Date: Sun, 14 Sep 2003 18:01:13 -0000
Why only 10 or 20? Why not all files with the same flags? And even if it is only
10 files at a time, it will be much better than 1 at a time.
Yes, I know, I've seen your posts, but the key points are:
1) Large project.
2) Lots of includes (DX9 and windows.h are evil).
3) Must use precompiled headers, and must use the manual version (/Yc, /Yu),
not the automatic one (/YX), as it is not good enough.
4) There are no tricks; the compiler used by the .vcproj is the same command-line
cl.exe, the GUI front end is just sugar coating.
5) Batching does matter. A lot.
5) Batching does matter. A lot.
The thing is that with a lot of includes you _must_ have precompiled headers,
and must have them tuned well. Without them, it is so slow that nothing makes a
difference. But if you have precompiled headers for a large project, the .pch
file is several megabytes (ca. 7MB in library B). Each invocation of the
compiler must load that file. I don't know exactly what is going on, but I
speculate that even though the .pch stays in the file cache, the compiler must parse
its data and recreate its internal lists from it.
Compiling from within the .NET GUI or from Jam is essentially the same,
except for the infamous batching.
Compare the command lines below if you don't believe me and you'll see that all
switches are the same, except that they build into different target dirs and that
Jam has some clutter specifying bogus /I. /I.. /I<default include path>.
D:\VS.NET2003\VC7\bin\cl /nologo /c
/Fo..\..\Obj.Win32\Debug\Engine\World\ScriptingDomain.obj /MDd /W3 /WX /Gm /GS
/Zi /Od /RTC1 /D WIN32 /D _DEBUG /D _WINDLL /D _MBCS /EHsc /Wp64 /D
ENGINE_EXPORTS /YuEngine/StdH.h /Fp..\..\Bin.Win32\Debug\Engine.pch
/Fd..\..\Bin.Win32\Debug\Engine.pdb /I. /I.. /ID:\VS.NET2003\VC7\include /I..
/Tp..\Engine\World\ScriptingDomain.cpp
ScriptingDomain.cpp
D:\VS.NET2003\VC7\bin\cl /nologo /c
/Fo..\..\Obj.Win32\Debug\Engine\World\Simulation.obj /MDd /W3 /WX /Gm /GS /Zi
/Od /RTC1 /D WIN32 /D _DEBUG /D _WINDLL /D _MBCS /EHsc /Wp64 /D ENGINE_EXPORTS
/YuEngine/StdH.h /Fp..\..\Bin.Win32\Debug\Engine.pch
/Fd..\..\Bin.Win32\Debug\Engine.pdb /I. /I.. /ID:\VS.NET2003\VC7\include /I..
/Tp..\Engine\World\Simulation.cpp
Simulation.cpp
D:\VS.NET2003\VC7\bin\cl /nologo /c
/Fo..\..\Obj.Win32\Debug\Engine\World\World.obj /MDd /W3 /WX /Gm /GS /Zi /Od
/RTC1 /D WIN32 /D _DEBUG /D _WINDLL /D _MBCS /EHsc /Wp64 /D ENGINE_EXPORTS
/YuEngine/StdH.h /Fp..\..\Bin.Win32\Debug\Engine.pch
/Fd..\..\Bin.Win32\Debug\Engine.pdb /I. /I.. /ID:\VS.NET2003\VC7\include /I..
/Tp..\Engine\World\World.cpp
World.cpp
Creating command line "d:\Work\main\Sources\Engine\Debug\BAT000013.bat"
Creating temporary file "d:\Work\main\Sources\Engine\Debug\RSP000014.rsp" with contents [
/Od /I "../" /D "WIN32" /D "_DEBUG" /D "ENGINE_EXPORTS" /D "_WINDLL" /Gm /EHsc
/RTC1 /MDd /GS /Yu"Engine/StdH.h" /Fp"Debug/EngineD.pch" /Fo"Debug/"
/Fd"Debug/vc70.pdb" /W3 /WX /c /Wp64 /Zi /TP
.\World\World.cpp
.\World\Simulation.cpp
.\World\ScriptingDomain.cpp
<snip... it goes on listing about 250 files on one invocation!!!>
]
Creating command line "cl.exe @d:\Work\main\Sources\Engine\Debug\RSP000014.rsp /nologo"
I am curious: how did you arrange for Jam to compile in batches when you
did your test?
Date: Sun, 14 Sep 2003 13:35:29 -0700
From: Matt Armstrong <matt@lickey.com>
Subject: Re: Batching compile actions _does_ make a difference! _Huge_ difference!
This is probably what I ran into then, since I wasn't using pre-compiled
headers. ;-) So, for me, it makes more sense to concentrate on
pre-compiled headers first.
I just created a batch file that compiled N files individually and
another batch file that compiled the same N files in batches.
From: "Alen Ladavac" <alen@croteam.com>
Subject: Re: Batching compile actions _does_ make a difference! _Huge_ difference!
Date: Sun, 14 Sep 2003 23:02:23 +0200
Well, that part is easy. It's all in the manual. :)
Doh! I hoped you had some Jam trick for that. :(
Date: Mon, 15 Sep 2003 09:41:57 -0700
From: Matt Armstrong <matt@lickey.com>
Subject: Re: Batching compile actions _does_ make a difference! _Huge_ difference!
It is a hard problem to just hack in, since Jam allows different .c
files to be compiled with different compiler flags.
Subject: Automatic inclusion of source files
Date: Wed, 17 Sep 2003 17:13:33 -0700
From: "Wallace, Richard" <Richard.Wallace@specastro.com>
Thank you for the info on the [ GLOB $(SUBDIR) : *.cpp ] bit to get all the
sources in a directory. I successfully get the sources using this method,
but when I try to use them in an Objects line like in the following Jamfile
# begin Jamfile
SubDir TOP Util ;
SRCS = [ GLOB $(SUBDIR) : *.cpp ] ;
OBJS = $(SRCS:S=.o) ;
LibraryFromObjects libUtil.a : $(OBJS) ;
Objects $(SRCS) ;
# end Jamfile
What I get is jam trying to build like the following:
g++ -c -o Util/Util/Random.o Util/Random.cpp
So it's trying to compile the proper file but isn't putting it in the right
place. I've looked at this over and over and tried using '.' instead of
$(SUBDIR), with no luck.
Date: Thu, 18 Sep 2003 11:17:34 -0700
Subject: Re: Automatic inclusion of source files
From: Paul Forgey <paulf@metainfo.com>
The object file is the source file with the suffix replaced, rooted in
SubDir. If you are really in $(TOP) and the SRCS file is
Util/Random.cpp, I could see this behavior.
Use SubDir with the correct directory (the one you are in) and add
$(SUBDIR)$(SLASH)Util (or, as I prefer, [ FDirName $(TOP) Util ]) to
your SEARCH_SOURCE. I'd have the GLOB function ditch the directory
name from the sources and let the SEARCH mechanism find them. If the
source files aren't unique, search for my prior postings offering a
MakeLocate replacement. But also be sure to read the follow-ups on why
it's a bad idea.
No matter what you do, don't specify a SubDir stating anything other
than the directory the Jamfile is in; otherwise things could get more
complicated than they need to be.
From: Wallace, Richard
Sent: Friday, September 19, 2003 10:46 AM
Subject: RE: Automatic inclusion of source files
Still learning jam and how to work with it, so please bear with me =)
Not sure what you mean here. $(SUBDIR) is set to $(TOP)$(SLASH)Util
already, isn't it? At least, it appears to be when I compile from the $(TOP)
directory and the Jamfile in there is as simple as:
SubDir TOP ;
SubInclude TOP Util ;
So why would I want to tack another $(SLASH)Util onto that?
Ok, but how does one go about doing that? I haven't been able to find a
reference to GLOB in the docs.
Ok, I guess I'm just generally unsure of how building in subdirectories works.
Here's our tree structure.
project/src
Jamfile
Jamrules
/Util
Jamfile
Random.h
Random.cpp
wrapperPthread.h
wrapperPthread.cpp
wrapperSocket.h
wrapperSocket.cpp
/stuff1
Jamfile
... srcs ...
/stuff2
Jamfile
... srcs ...
The Jamfile in the project/src directory (where things are built from) is as follows:
SubDir TOP ;
SubInclude TOP Util ;
SubInclude TOP stuff1 ;
SubInclude TOP stuff2 ;
Then, in the Util directory the Jamfile is:
SubDir TOP Util ;
SRCS = ??? # all C++ source files in the Util directory
OBJS = ??? # object files of compiled C++ source files
LibraryFromObjects libUtil.a : $(OBJS) ; # this is the way to get a library archive, right?
Objects $(SRCS) ; # this should compile the sources right?
Is this generally the way compiling subdirectories is done?
Date: Fri, 19 Sep 2003 14:54:19 -0700
Subject: Re: FW: Automatic inclusion of source files
From: Paul Forgey <paulf@metainfo.com>
I was under the impression you were gathering source files from sub
directories to be built in a parent directory. Now I see that is not
what you are doing.
By the way, you could simply use Library libname : $(SRCS) rather than
the intermediate $(OBJS), unless there's more to it than what is shown here.
OBJS = $(SRCS:S=$(SUFOBJ)) should do it; that's how the
non-FromObjects rules call the ..FromObjects rules. If your source
file is subdir1/filea.c(pp), then unless you've set a LOCATE elsewhere,
the output should be subdir1/filea.o(bj). What happens if you run jam
from this directory rather than by inclusion from the parent Jamfile?
One potential problem I do see is SRCS = and OBJS = apparently being
set globally for all your included Jamfiles. A technique I like to use
in a sub Jamfile, where ThisThing is a unique name across the project:
rule BuildThisThing {
    local SRCS = stuff ... ;
    local ANYTHING_ELSE = whatever ;
    ...
}
BuildThisThing ;
because now SRCS is set locally from the rule BuildThisThing and in
anything it calls.
Hardly an expert myself, but I cut my teeth in a similar situation
Subject: RE: FW: Automatic inclusion of source files
Date: Fri, 19 Sep 2003 15:13:52 -0700
From: "Wallace, Richard" <Richard.Wallace@specastro.com>
Ok, I replaced the two lines with the one liner Library libUtil : $(SRCS) but
still have the same issue when using the GLOB. If I statically enter the files
it works great.
If I run the jam build with the GLOB in the actual Util/ directory everything works fine.
This sounds like a really good idea. Would something like a BuildLibUtil rule go in
the Util/Jamfile and the call to it go in the top Jamfile, or how would that work?
Any chance I could get a simple sample? There is just some subtle point
I think I'm missing and seeing an example might help me figure it out.
Date: Fri, 19 Sep 2003 15:31:06 -0700
Subject: Re: FW: Automatic inclusion of source files
From: Paul Forgey <paulf@metainfo.com>
Aha! This tells me GLOB is returning the $(SUBDIR) part of the files
back. Try saying $(SRCS:D=) to strip the subdirectory off. Whenever
you specify a file to the standard Jam rules, they expect it to be
without directories, so the SEARCH and LOCATE mechanisms can do their
thing. Since $(SUBDIR) is . when you jam in that directory, things
appear to work until you SubInclude it.
I've found Echo to be useful while debugging my own hairy problems like
this. Pardon my own ignorance of GLOB; I haven't had a need to use it
so I'm not too intimate with how it behaves.
Just do that in place of what you have already, so it goes in
Util/Jamfile. All we are doing is specifying a rule that invokes what
you are doing already, and then invoking it instead. To re-write your
quoted Jamfile:
SubDir TOP Util ;
rule BuildLibUtil {
    SRCS = ??? # all C++ source files in the Util directory
    OBJS = ??? # object files of compiled C++ source files
    LibraryFromObjects libUtil.a : $(OBJS) ; # this is the way to get a library archive, right?
    Objects $(SRCS) ; # this should compile the sources right?
}
BuildLibUtil ;
Subject: RE: FW: Automatic inclusion of source files
Date: Fri, 19 Sep 2003 16:02:09 -0700
From: "Wallace, Richard" <Richard.Wallace@specastro.com>
Awesome, using $(SRCS:D=) worked beautifully! Thanks.
I think I'm starting to get the hang of this. Now I have the main library
directory built, I'm playing with some of the simpler execs in the system.
I have a couple of simple questions about how best to specify libraries and
link flags.
I have one utility program called pmdutil that depends on the libUtil.a file.
So far this is simple enough, I have
Main pmdutil : pmdutil.cpp ;
LinkLibraries pmdutil : libUtil ;
Very simple, very cool. One question I have, though not relevant to the
project at the moment: how would I change this to make libUtil a
shared library rather than a statically built-in one? Just change C++FLAGS
and LINKFLAGS to include -shared? (We're using gcc 3.2, just so you know.)
I found the section in the docs where it says that you should specify libs to
be linked in using
LINKLIBS on pmdutil$(SUFEXE) = some collection of flags ;
I have a few that are pretty standard, like pthread, and some that may not be
so standard, like mysql. If I just use -lpthread that gets included fine, but
something like mysql, whose library lives in /usr/lib/mysql, won't automagically
be found. Should I specify the directory the libraries should be found in on
this LINKLIBS line, or is there a better place to put it? Since LINKLIBS just
gets put straight on the linker command line I suppose it doesn't matter; just
wondering if there is a standard, cleaner way of doing this.
Date: Fri, 19 Sep 2003 16:49:02 -0700
Subject: Re: Libraries (was FW: Automatic inclusion of source files)
From: Paul Forgey <paulf@metainfo.com>
Well, here's the way I am handling these. LINKLIBS on
program$(SUFEXE) = -lwhatever ; isn't pretty. I've got a
LinkPathLibrary rule which does this for me. I've also got a
LinkLibrariesFrom which points to specific locations (such as
/usr/lib/mysql), and that also has the often desired side effect of
causing a relink if the library changes. Libraries like libstdc++ and
libm aren't likely to change as often as your project.
Don't forget -fPIC for your sources if your platform requires it for
building shared libraries. You'll want to modify these rules to suit
your needs. They are rather specific to ours. In these rules, a
library referred to as "example" is "libexample.a" on unix or
"example.lib" on NT. A shared version is "libexample.so" or
"example.lib" if referred to on the linker for import, otherwise the
full .so and .dll. The version is any sequence of anything you want,
but intended to be major minor bugfix. So:
SharedLibrary example 1 2 3 : source1.c source2.c ;
will produce:
libexample.so.1.2.3, and a link libexample.so -> libexample.so.1.2.3
--or on windows--
example1_2_3.dll with an import library named example.lib
This keeps the executables and other shared libraries using the shared
library from caring about the current version, since we consider the
entire project tree to be the _current_ ball of goo, and the dead
bodies of older versions can stick around if they are still being used
by prior installations. I really should further modify this so the
last digit is excluded from the -soname and another link is created, so
the bugfix part of the version can actually be a bugfix without
relinking the client objects.
The windows shared library rule has a bug where the .def file isn't
considered a dependency of the target and isn't searched in
$(SEARCH_SOURCE). I haven't bothered to fix it since we don't use .def
files that much (it's better to use __declspec (dllimport | dllexport)
to avoid an indirect jump from a table of indirect jumps).
Then to use the shared library:
LinkLibrariesFrom myexecutable : example : $(TOP) where example lives ;
The same rule can also link a static library:
LinkLibrariesFrom myexecutable : somestaticlib$(SUFLIB) : $(TOP) where
somestaticlib lives ;
The key to this is using $(SUFLIB) in $(2).
You can also use LinkLibrariesFrom to link a shared library to a shared
library, but it isn't as pretty:
LinkLibrariesFrom [ FSharedLibraryName otherexample 1 2 3 ] : example :
$(TOP) where example lives ;
So here's the fragment of our Jamrules:
# apply these compiler flags to both c and c++
rule SubDirCompilerFlags {
SubDirC++Flags $(1) ;
SubDirCcFlags $(1) ;
}
rule ObjectCompilerFlags {
ObjectC++Flags $(1) : $(2) ;
ObjectCcFlags $(1) : $(2) ;
}
# returns the name of a (possibly shared) library
rule FLibName {
local _name = [ FAppendSuffix $(1) : $(SUFIMP) ] ;
if ! $(NT) { _name = lib$(_name) ; }
return $(_name) ;
}
# build a shared library
rule SharedLibrary {
# library name [ v.v.v..] : sources : [ windows-def-file ]
SharedLibraryFromObjects $(1) : $(2:S=$(SUFOBJ)) : $(3) ;
if $(UNIX) { ObjectCompilerFlags $(2) : $(FPIC) ; }
Objects $(2) ;
}
rule SharedLibraryFromObjects {
local _basename = $(1[1]) ;
local _version ;
local SUFSHR = $(SUFSHR) ;
if $(1[1]:S) != "" { SUFSHR = $(1[1]:S) ; }
local _libname = [ FSharedLibName $(1) ] ;
if $(NT) {
_version = $(1[2-]:J=_) ;
local _impname = $(_basename:S=$(SUFLIB)) ;
local _expname = $(_basename:S=.exp) ;
LINKFLAGS on $(_libname) +=
$(LINKFLAGS)
/DLL
/BASE:@$(TOP)$(SLASH)dll-map.txt,$(_basename)
/DEF:$(3)
/IMPLIB:$(LOCATE_TARGET)$(SLASH)$(_impname)
;
MakeLocate $(_impname) $(_expname) : $(LOCATE_TARGET) ;
Depends $(_impname) $(_expname) : $(_libname) ;
Depends $(_libname) : $(TOP)$(SLASH)dll-map.txt ;
Clean clean : $(_impname) $(_expname) ;
}
else
if $(UNIX) {
local _linkname = [ FLibName $(_basename:S=$(SUFSHR)) ] ;
_version = $(1[2-]:J=.) ;
LINKFLAGS on $(_libname) +=
$(LINKFLAGS)
-shared
-export-dynamic
;
switch $(OS) {
case LINUX :
LINKFLAGS on $(_libname) +=
-Wl,-soname,$(_libname) ;
case SOLARIS :
LINKFLAGS on $(_libname) +=
-Wl,-h,$(_libname) ;
case * :
Exit don't know how to twink ld for shared objects on $(OS) ;
}
if $(_version) {
LibLink $(_linkname) : $(_libname) ;
}
}
MainFromObjects $(_libname) : $(2) ;
InstallBin $(LIBDIR) : $(_libname) ;
}
rule FSharedLibName {
local _libname ;
local _basename = $(1[1]:S=) ;
if $(NT) {
local _version = $(1[2-]:J=_) ;
if $(_version) { _libname = [ FLibName $(_basename)$(_version)$(SUFSHR) ] ; }
else { _libname = [ FLibName $(_basename)$(SUFSHR) ] ; }
}
else
if $(UNIX) {
local _linkname = [ FLibName $(_basename:S=$(SUFSHR)) ] ;
local _version = $(1[2-]:J=.) ;
if $(_version) { _libname = $(_linkname).$(_version) ; }
else { _libname = $(_linkname) ; }
}
return $(_libname) ;
}
rule LibLink {
Depends $(<) : $(>) ;
Clean clean : $(<) ;
SEARCH on $(>) = $(LOCATE_TARGET) ;
LOCATE on $(<) = $(LOCATE_TARGET) ;
Depends files : $(<) ;
}
# these are assumed to be in the same directory!
actions LibLink {
$(RM) $(<) && cd $(<:D) && $(LN) -s $(>:D=) $(<:D=) ;
}
# additional library paths
rule AddLibraryPath {
local _target = [ FAppendSuffix $(1) : $(SUFEXE) ] ;
# target : paths ..
if $(NT) { LINKFLAGS on $(_target) += /LIBPATH:$(2) ; }
else { LINKFLAGS on $(_target) += -L$(2) ; }
}
# Link libraries from nowhere in particular. Use it just like
# LinkLibraries, except we add the [ FLibName ] for you. It's better
# to use LinkLibraries if you can, since that establishes a dependency.
# This method uses the search path, indicating libraries we don't build
rule LinkPathLibraries {
if $(NT) { LINKLIBS on [ FAppendSuffix $(1) : $(SUFEXE) ] += [ FLibName $(2) ] ; }
else if $(UNIX) { LINKLIBS on [ FAppendSuffix $(1) : $(SUFEXE) ] += -l$(2) ; }
}
# Link libraries from somewhere
rule LinkLibrariesFrom {
local _i ;
for _i in $(2) { LinkLibraryFrom $(1) : $(_i) : $(3) ; }
}
rule LinkLibraryFrom {
local _lib = [ FLibName $(2) ] ;
# if we explicitly asked for a static library, then just use it directly
# on NT, linking against a shared library uses the same procedure
if $(NT) || $(2:S) = $(SUFLIB) {
SEARCH on $(_lib) = [ FDirName $(3) ] ;
LinkLibraries $(1) : $(_lib) ;
}
# otherwise, use -L and -l so the elf executable imports from the search
# path rather than ../../from/here/and/back/again
else {
local _dir = [ FDirName $(3) ] ;
local _exe = [ FAppendSuffix $(1) : $(SUFEXE) ] ;
SEARCH on $(_lib) = [ FDirName $(3) ] ;
Depends $(_exe) : $(_lib) ;
AddLibraryPath $(_exe) : $(_dir) ;
LINKLIBS on $(_exe) += -l$(2) ;
}
}
Date: Sat, 20 Sep 2003 11:56:45 +0200
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: Re: FW: Automatic inclusion of source files
There's no need to put it in a rule (the `local's are missing here BTW),
since local variables are block local:
x = 0 ;
# x is 0
{
local x = 1 ;
# x is 1
{
local x = 2 ;
# x is 2
}
# x is 1
}
# x is 0
Subject: RE: Libraries (was FW: Automatic inclusion of source files)
Date: Mon, 22 Sep 2003 10:55:15 -0700
From: "Wallace, Richard" <Richard.Wallace@specastro.com>
Those rules will come in very handy, thank you very much! =)
I am having a bit of trouble getting LinkLibrariesFrom to work right, though.
I'm trying to do this:
LinkLibrariesFrom pmdutil : mysqlclient : /usr/lib/mysql ;
and Jam is telling me "don't know how to make libmysqlclient" ... "...skipped
pmdutil for lack of libmysqlclient...".
That is the correct usage isn't it? I've looked at the code and added Echo's
in but everything looks ok to my (untrained) eye.
Subject: RE: Libraries (was FW: Automatic inclusion of source files)
Date: Mon, 22 Sep 2003 11:00:21 -0700
From: "Wallace, Richard" <Richard.Wallace@specastro.com>
D'oh, nevermind that last email. I see now that it is because I need to do
mysqlclient$(SUFLIB).
Date: Mon, 22 Sep 2003 12:30:33 -0700
From: "Wallace, Richard" <Richard.Wallace@specastro.com>
Subject: KEEPOBJS confusion
As I understand it LibraryFromObjects deletes the object files after building
the library. To stop it from doing this you can define the KEEPOBJS variable.
I have done this in the Jamrules file because I can't see a good reason to
remove the object files in this case.
Now I'm running into a problem where the same object files are used in two
different libraries. At first, just for testing, I tried
Library libsomelib : somesrc.cpp ;
Library libanotherlib : anothersrc.cpp somesrc.cpp ;
to see if jam was smart enough to only build somesrc.cpp once. It isn't =(
So, no big deal, I thought I could use
LibraryFromObjects libsomelib : somesrc.o ;
LibraryFromObjects libanotherlib : anothersrc.o somesrc.o ;
Objects somesrc.cpp anothersrc.cpp ;
I tried this (before setting the KEEPOBJS variable) and it cruised along until
trying to build the second library at which point it complained there was no
somesrc.o file.
Ok, I said, I saw something about object files being removed and looked up how
to prevent that. I found KEEPOBJS and set it in the Jamrules file.
The thing is that now jam doesn't try and build the libraries when invoking jam
with no targets. For some reason setting KEEPOBJS seems to exclude libsomelib
and libanotherlib from the all target. Is this the expected behaviour?
Also, I tried doing a 'jam libsomelib.a' to force it to build the library.
It builds the library properly, but it still removes the object file.
That means that doing a 'jam libanotherlib.a' right after building libsomelib
jam compiles somesrc.cpp again and then builds the library.
So, LibraryFromObjects seems to be ignoring the KEEPOBJS variable.
Date: Mon, 22 Sep 2003 15:49:12 -0700
From: "Wallace, Richard" <Richard.Wallace@specastro.com>
Subject: More on GLOB
Ok, this is the last question regarding GLOBs, I promise.
Now that I can get all the source files in a directory without having to list
them (thanks guys!), I need to do some post processing.
For instance, I want to grab all the files and then remove two or three from
the list (or even better, have them excluded by the GLOB).
How can I go about doing that?
Subject: Re: KEEPOBJS confusion
From: Paul_Donovan@scee.net
Date: Tue, 23 Sep 2003 09:28:55 +0100
> Ok, I said, I saw something about object files being removed and looked up
> how to prevent that. I found KEEPOBJS and set it in the Jamrules file.
> The thing is that now jam doesn't try and build the libraries when invoking
> jam with no targets. For some reason setting KEEPOBJS seems to exclude
> libsomelib and libanotherlib from the all target. Is this the expected
> behaviour?
> It builds the library properly, but it still removes the object file.
> That means that doing a 'jam libanotherlib.a' right after building
> libsomelib jam compiles somesrc.cpp again and then builds the library.
> So, LibraryFromObjects seems to be ignoring the KEEPOBJS variable.
I came across the same problem. The way I fixed it was to create my own
LibraryFromObjects rule. I did this a while ago, and haven't used Jam much
since then, so I'm not sure of the exact details any more. I basically
copied the original rule and then altered the last line:
rule LibraryFromObjects {
local _i _l _s ;
# Add grist to file names
...
if $(RANLIB) { Ranlib $(_l) ; }
# If we can't scan the library, we have to leave the .o's around.
# If KEEPOBJS is set, then the library won't get built (!), so can't use it to keep .o files around
# Just comment the RmTemps out instead.
# if ! ( $(NOARSCAN) || $(KEEPOBJS) ) { RmTemps $(_l) : $(_s) ; }
}
From: Ray Malus <ray.malus@digitalinsight.com>
Date: Tue, 23 Sep 2003 10:13:33 -0700
Subject: Turning off Optimizations
I am the librarian for our team.
For some reason, Jam insists on sending a -O flag in the compile line.
This is fine for Production, but not for development, as it really confuses dbx.
Anyone know a quick way to defeat this?
Subject: Re: Turning off Optimizations
From: michael.allard@acterna.com
Date: Tue, 23 Sep 2003 12:25:09 -0500
Try:
jam -sOPTIM=
This will override the "OPTIM" Jambase setting with an empty string,
removing your "-O". (This assumes an essentially-stock Jambase. :-)
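A more permanent variant, sketched here on the assumption of a stock Jambase where OPTIM is what carries the -O, is to switch OPTIM from Jamrules based on a build-type variable (DEBUG is a made-up name):

```
# In Jamrules: pick optimization per build type.
# DEBUG is a hypothetical variable; pass it with -sDEBUG=1.
if $(DEBUG) {
    OPTIM = -g ;     # development: symbols for dbx, no -O
} else {
    OPTIM = -O ;     # production
}
```

Then `jam -sDEBUG=1` gives a debuggable build while plain `jam` keeps the optimized one.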
From: "Craig Allsop" <callsop@sceptre.net>
Subject: Re: More on GLOB
Date: Wed, 24 Sep 2003 08:54:39 +1000
Perhaps like this:
# Summary:
# Returns the elements of the first list that are not members of the second.
#
# Args:
# first - Superset of elements.
#
# second - Subset of elements.
#
# Returns:
# The difference between first and second lists.
#
rule Difference {
local result ;
local element ;
for element in $(<) {
if ! ( $(element) in $(>) ) { result += $(element) ; }
}
return $(result) ;
}
SOURCE = [ Glob $(SUBDIR) : *.cpp ] ;
# ensure RemoveMe.cpp and NotThisFile.cpp are not included in 'SOURCE'.
SOURCE = [ Difference $(SOURCE:BS) : RemoveMe.cpp NotThisFile.cpp ] ;
Date: Wed, 24 Sep 2003 13:37:30 +0200
From: "Niklaus Giger" <n.giger@netstal.com>
Subject: RE: More on GLOB
This rule might help
# RemoveFromVars NameOfVariableList : ListEntriesToRemove
# e.g. RemoveFromVars _all_objs : usrConfig_st$(SUFOBJ)
# sysLib$(SUFOBJ) sysALib$(SUFOBJ) ;
rule RemoveFromVars {
local i alt neu ;
neu = ;
alt = $($(<)) ;
for i in $(alt) {
if $(i) in $(>) { } else { neu += $(i) ; }
}
$(<) = $(neu) ;
}
Subject: RE: More on GLOB
Date: Wed, 24 Sep 2003 10:21:33 -0700
From: "Wallace, Richard" <Richard.Wallace@specastro.com>
Thanks everyone! I thought I might have to do something like that, but wanted
to check whether there was something already there that did the same thing
that I was overlooking.
Subject: RE: KEEPOBJS confusion
Date: Wed, 24 Sep 2003 10:25:54 -0700
From: "Wallace, Richard" <Richard.Wallace@specastro.com>
Alright, that's what I wound up doing, or at least something close to that.
I created my own LibraryFromObjects rule and just copied and pasted what was
in the Jambase file. Here's my changes:
rule LibraryFromObjects {
...
# if $(KEEPOBJS)
# {
# Depends obj : $(_s) ;
# }
# else
# {
Depends lib : $(_l) ;
# }
...
if ! ( $(NOARSCAN) || $(NOARUPDATE) ) && ! $(KEEPOBJS) { RmTemps $(_l) : $(_s) ; }
}
Date: Wed, 24 Sep 2003 12:39:09 -0700
From: "Wallace, Richard" <Richard.Wallace@specastro.com>
Subject: Linking Libraries
I have a couple of questions about linking in libraries.
1) Why the use of
g++ -o exec exec.o path/to/libsomething.a
rather than
g++ -o exec exec.o -Lpath/to/ -lsomething
for linking in libraries statically? Any difference in the two?
2) How might I go about making sure that a rule is only ever done once for a
target? For instance, I have a couple of targets to specify the most commonly
linked libraries for binaries in our system. The problem is that some
libraries depend on the same things so if a binary depends on both of
those libraries their dependencies get added twice.
So, for instance, I could have two instances of
/usr/lib/mysql/libmysqlclient.a in the call to g++.
Is this easy to take care of? Should I even care?
Having the same library on the command line twice makes it a PITA to read; is
there another drawback, like performance? Or will gcc/g++ just ignore the duplicates?
Date: Wed, 24 Sep 2003 22:37:39 +0200 (CEST)
From: Harri Porten <porten@froglogic.com>
Subject: Re: Linking Libraries
A difference I'm aware of: the first variant ensures that the static
library is chosen even if an .so file is present in the same directory.
The same might be accomplished by a linker flag though this might cause
some compatibility problems.
Unfortunately, I didn't find an easy way to do this with Jam. I found
myself writing more and more rules (for this and other features we needed)
until I gave up (and wrote a new tool from scratch).
In the case of some libraries you'll have to.
Some (static) libraries you can't link in multiple times because of
duplicated symbols. We ran into problems with libieee.a for example.
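For the duplicate-library part of the question, one possible approach, sketched here with invented rule and variable names and assuming the stock Jambase LinkLibraries and FAppendSuffix rules, is a wrapper that records which libraries it has already attached to each executable:

```
# LinkLibrariesOnce exe : libraries ;
# Adds each library to the executable at most once, even if the
# rule is invoked repeatedly with overlapping library lists.
rule LinkLibrariesOnce {
    local _exe = [ FAppendSuffix $(1) : $(SUFEXE) ] ;
    local _lib ;
    for _lib in $(2) {
        # _seen_$(_exe) is deliberately not declared local, so it
        # persists across invocations for the same executable.
        if ! ( $(_lib) in $(_seen_$(_exe)) ) {
            _seen_$(_exe) += $(_lib) ;
            LinkLibraries $(1) : $(_lib) ;
        }
    }
}
```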
Subject: RE: Linking Libraries
Date: Wed, 24 Sep 2003 15:22:10 -0700
From: "Wallace, Richard" <Richard.Wallace@specastro.com>
Ok, that's kinda what I thought. I might try and figure it out later, but for now I'll just
have to do it manually I think.
Along these same lines, I had another issue come up with libraries.
Here's the directory structure:
TOP/
Jamfile
execs/
exec1.cpp
Jamfile
libs/
lib1.cpp
Jamfile
exec1.cpp has a dependency on lib1.a. When running jam from the TOP directory
everything builds without an issue. If I go into the execs/ directory and try
and build exec1 from there, jam complains that it doesn't know how to make lib1.a.
Here's what the Jamfiles look like
# start TOP/Jamfile
SubDir TOP ;
SubInclude TOP libs ;
SubInclude TOP execs ;
# end TOP/Jamfile
# start TOP/libs/Jamfile
SubDir TOP libs ;
Library lib1 : lib1.cpp ;
# end TOP/libs/Jamfile
# start TOP/execs/Jamfile
SubDir TOP execs ;
Main exec1 : exec1.cpp ;
LinkLibraries exec1 : lib1 ;
# end TOP/execs/Jamfile
The way I fixed this was to use the LinkLibrariesFrom rule that someone
specified in a previous post and changed
LinkLibraries exec1 : lib1 ;
to
LinkLibrariesFrom exec1 : lib1$(SUFLIB) : $(TOP) libs ;
I was just wondering if there is a better solution to this problem. Any ideas?
Date: Wed, 24 Sep 2003 16:38:52 -0700
Subject: Re: Linking Libraries
From: Paul Forgey <paulf@metainfo.com>
For the same reason that everything works if invoked from the top, if
one Jamfile included the other Jamfile, then it would know about how to
build the dependent library, and what that library depended on. This
has a very obviously bad side effect if included from a higher directory.
I've been meaning to play with some sort of #ifdef guarding equivalent.
Perhaps by making a SubInclude style rule (MySubInclude for this
example) which defines a unique-per-Jamfile macro as a condition for
not re-including (say, _included$(SUBDIR) = true) you could get away
with this? Then, you could MySubInclude the Jamfile building your
dependent library at the bottom of your executable's Jamfile. If
invoked from a higher level that already included it, your MySubInclude
statement would be a safe no-op.
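That guard might be sketched like this (the rule name and the _included_ variable are hypothetical, and the join-by-`!` key assumes the directory tokens uniquely identify the Jamfile):

```
# MySubInclude TOP dir ... ;
# Includes the named Jamfile at most once per jam run.
rule MySubInclude {
    local _tag = _included_$(1:J=!) ;
    if ! $($(_tag)) {
        $(_tag) = true ;        # global flag, survives this rule
        SubInclude $(1) ;
    }
}
```

This only works if every inclusion of the guarded Jamfile goes through MySubInclude; a plain SubInclude from the top level would not set the flag.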
From: "Alen Ladavac" <alen@croteam.com>
Date: Sun, 28 Sep 2003 12:01:41 -0000
Subject: Unclear parts of the Jam semantics
I've been playing around with Jam for some time now, and there are some things
I still don't get. The docs are quite grayish in those areas, so I'd like to
ask if someone can please explain these:
1) actions bind <vars>
What exactly does this do? Docs say "$(vars) will be replaced with bound
values". I get the impression that target-dependent vars are always bound when
executing a rule or action? Why would I need the "bind" keyword?
E.g. why is only NEEDLIBS bound in the Link actions?
actions Link bind NEEDLIBS {
$(LINK) $(LINKFLAGS) -o $(<) $(UNDEFS) $(>) $(NEEDLIBS) $(LINKLIBS)
}
2) :E and :E= modifiers on variables
:E=value is documented as: ":E=value - Use value instead if the variable is
unset", but I can't say I understand what that should mean. :E is not
documented at all, yet it is used on several places in Jambase. Can some one
please explain this?
3) Getting a target bound variable value.
I heard people say that it is not possible. But this seems to work:
rule Var { return $($(1)) ; }
Is there something wrong with this method?
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Unclear parts of the Jam semantics
Date: Mon, 29 Sep 2003 11:08:57 +0200
Probably not. I like jam, but other people know more about it than I.
Consider any errors an incentive for an Expert to step forward.
Well, $(<) and $(>) are always bound, i.e. the value of $(<) will be
/home/alen/src/mumble/stumble/fumble.o. But $(SOMETHING) is not, so its
value may well be <stumble>fumble.o. Most actions only need $(<) and $(>).
$(<), $(>) and $(NEEDLIBS) are all bound.
:E says that if a variable is an empty list, or does not exist at all,
then the value should be used instead. So $(A:E=B) says "give me the
value of A, but if A does not exist, give me the constant 'B' instead".
You have the same thing in sh, ${A:-B}.
$ echo ${A:-B}
B
$
From: "Alen Ladavac" <alen@croteam.com>
Subject: Re: Unclear parts of the Jam semantics
Date: Mon, 29 Sep 2003 11:42:04 -0000
Thanks. I understand the thing with binding now. The thing with :E= is ok, but
what about :E (notice missing = sign)? As in:
HDRSEARCH on $(>) = $(SEARCH_SOURCE:E) $(SUBDIRHDRS) $(HDRS) $(STDHDRS) ;
What is that supposed to mean?
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Unclear parts of the Jam semantics
Date: Mon, 29 Sep 2003 11:49:36 +0200
(I'm getting ready to flame someone for making :E and :E= confusingly
similar while different in function, though...)
From: Vladimir Prus <ghost@cs.msu.su>
Subject: Re: Unclear parts of the Jam semantics
Date: Mon, 29 Sep 2003 14:01:10 +0400
Looking at the source, it seems like $(VAR:E) is the same as $(VAR:E=).
From: "Alen Ladavac" <alen@croteam.com>
Subject: Re: Unclear parts of the Jam semantics
Date: Mon, 29 Sep 2003 13:51:56 -0000
If it was, then it would be unneeded, because that is the default: if var is
not set, it evaluates to an empty var, doesn't it? Why would the original
Jamfile use it, then?
From: Vladimir Prus <ghost@cs.msu.su>
Subject: Re: Unclear parts of the Jam semantics
Date: Mon, 29 Sep 2003 16:04:19 +0400
That's a mystery ;-) However, it still seems to me that $(VAR:E) has the same
effect as $(VAR:E=) (i.e. no effect). The following jam code
local l = 1 ;
local l2 ;
ECHO '$(l:E=x)' '$(l2:E=x)' ;
ECHO '$(l:E)' '$(l2:E)' ;
produces
bash-2.05b$ ./jam -fe.jam
'1' 'x'
'1' ''
don't know how to make all
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Unclear parts of the Jam semantics
Date: Mon, 29 Sep 2003 14:12:09 +0200
Uh, if VAR is unset, $(VAR:E) is a list containing one zero-length
string, while $(VAR) is a zero-entry list.
Someone is being very subtle there...
From: Vladimir Prus <ghost@cs.msu.su>
Subject: Re: Unclear parts of the Jam semantics
Date: Mon, 29 Sep 2003 16:20:38 +0400
Yeah, wizards welcome. File history does not reveal the purpose of the change.
From: Johan Nilsson <johan.nilsson@esrange.ssc.se>
Subject: RE: Unclear parts of the Jam semantics
Date: Mon, 29 Sep 2003 14:38:29 +0200
I wouldn't say it has no effect (or did I misunderstand you?):
non-empty-list = whatever whenever however ;
Echo 'Without E: $(non-empty-list)$(non-existing-var) ' ;
Echo 'With E : $(non-empty-list)$(non-existing-var:E) ' ;
outputs:
' '
' whatever whenever however '
don't know how to make all
...found 1 target...
...can't find 1 target...
From: Johan Nilsson <johan.nilsson@esrange.ssc.se>
Subject: RE: Unclear parts of the Jam semantics
Date: Mon, 29 Sep 2003 14:59:16 +0200
Oops. That should read :
non-empty-list = whatever whenever however ;
Echo ' $(non-empty-list)$(non-existing-var) ' ;
Echo ' $(non-empty-list)$(non-existing-var:E) ' ;
From: "Alen Ladavac" <alen@croteam.com>
Subject: Re: Unclear parts of the Jam semantics
Date: Mon, 29 Sep 2003 15:13:26 -0000
Aha! List products. I see... But, why is that used in HDRSEARCH? :/
P.S. Why does this list require "reply to all"? Anyway, perhaps I should stop
asking too many questions. ;)
From: "Steve Stukenborg" <steve@electric-cloud.com>
Date: Mon, 29 Sep 2003 09:18:51 -0700
Subject: market research - jam support
Electric Cloud is a startup focused on software build infrastructure.
We are trying to determine whether the Jam market is large enough for us
to support Jam in our product line.
Our first product is a distributed version of Make that uses clusters of
inexpensive rack-mounted servers to speed up builds by 10-20x. Our goal
is to be plug-compatible with our customers' existing build tools.
Currently, we support GNU make, SysV make and Microsoft NMAKE.
We're trying to prioritize the next "flavors" of build tools we want to
support. One of the candidates is Jam, but we don't have a good feel for
how many people are using it for large software projects. We're trying
to determine if the Jam market opportunity justifies the engineering
investment and on-going support.
If you have a Jam-based project that takes over three hours to build
sequentially (without the -j switch), would you please email me with the
following information:
o How long does it take to build your Jam project(s) sequentially
(without -j)? If you only do -j parallel builds, then how long does the
build take, and how many jobs are you running at the same time?
o What other build tools does your company use besides Jam? {GNU Make? ANT?}
o What are the Operating Systems your engineers primarily develop on?
o What are the Operating Systems your product supports?
Your answers will be kept strictly confidential. You will not receive
any marketing spam or sales calls. I would be happy to send a summary of the
results to anyone interested.
Date: Mon, 29 Sep 2003 11:54:51 -0700
From: "Wallace, Richard" <Richard.Wallace@specastro.com>
Subject: LibraryFromObjects using objects in a different dir
I'm trying to get a small library to build using Jam.
The problem I'm running into is the library is spread across a couple of
different directories. Here's the situation
src/
Jamfile
a/
Jamfile
source1.cpp
b/
Jamfile
source2.cpp
The library should wind up being built in the src/a/ directory, not the base
directory. I've tried various things, like
LibraryFromObjects libab : source1.o source2.o ;
but that looks in a/ for source2.o.
I've also tried
LibraryFromObjects libab : source1.o $(_b_dir)$(SLASH)source2.o ;
but Jam complains that it doesn't know how to build <src!a!>../b/source2.o.
I'm stumped as to what to try next. Suggestions?
Date: Mon, 29 Sep 2003 21:39:56 +0200
From: Ingo Weinhold <bonefish@cs.tu-berlin.de>
Subject: Re: LibraryFromObjects using objects in a different dir
That's a bit difficult, since LibraryFromObjects adds grist to the supplied
object file names, which differs from the one the objects were built with
(when built in another directory).
The best you can do is to build the library from the sources in the `src' directory:
SEARCH_SOURCE += [ FDirName $(SUBDIR) a ] [ FDirName $(SUBDIR) b ] ;
Library libab : source1.cpp source2.cpp ;
Care must then be taken that the names of the source files in the different
directories do not clash. If they do clash, you could instead set SEARCH on
the source files directly (after the Library invocation), and not extend
SEARCH_SOURCE at all.
If you really want to build from objects you can set the TARGET variables on your objects:
LibraryFromObjects libab : source1.o source2.o ;
TARGET on [ FGristFiles source1.o ] = [ FDirName $(SUBDIR) a ] ;
TARGET on [ FGristFiles source2.o ] = [ FDirName $(SUBDIR) b ] ;
Now jam looks for <src>source1.o and <src>source2.o in the right
directories. But they are considered different targets than <src!a>source1.o
and <src!b>source2.o. So, you need to add respective dependencies:
Depends [ FGristFiles source1.o ] : <src!a>source1.o ;
Depends [ FGristFiles source2.o ] : <src!b>source2.o ;
Best to write a rule that simplifies things a bit...
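Such a rule might look roughly like the following sketch; the rule name is invented, and it assumes the Jambase convention that a subdirectory's grist is its SubDir tokens joined with `!` (check that against your tree before relying on it):

```
# BorrowObject object : subdir-token ;
# Use an object that is built by the Jamfile in $(SUBDIR)/subdir-token.
rule BorrowObject {
    local _here = [ FGristFiles $(1) ] ;
    local _there = <$(SOURCE_GRIST)!$(2)>$(1) ;
    TARGET on $(_here) = [ FDirName $(SUBDIR) $(2) ] ;
    Depends $(_here) : $(_there) ;
}
```

With that, `BorrowObject source2.o : b ;` in src/Jamfile would stand in for a TARGET-plus-Depends pair per object.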
Date: Mon, 29 Sep 2003 16:36:08 -0700 (PDT)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: Unclear parts of the Jam semantics
| Aha! List products. I see... But, why is that used in HDRSEARCH? :/
If SEARCH_SOURCE is unset, then $(SEARCH_SOURCE:E) expands to a single
null string, which _anyone_ knows refers to the current directory, at
least in jam.
OK, so maybe everyone doesn't know it.
p.s. I'm not expecting extra credit in my programming class for this one.
From: "Alen Ladavac" <alen@croteam.com>
Subject: Re: Unclear parts of the Jam semantics
Date: Tue, 30 Sep 2003 08:30:53 -0000
LOL! :) Thanks for the explanation Christopher. This really was a nice piece of the puzzle!
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: LibraryFromObjects using objects in a different dir
Date: Tue, 30 Sep 2003 10:13:55 +0200
As it happens, I almost solved the same problem last week. Not really
fond of my solution, but anyway. I made two rules, one analogous to
Objects and one to Program (Library in your case).
rule Build {
Objects $(>) ;
set-$(<) += [ FGristFiles $(>:S=$(SUFOBJ)) ] ;
}
rule Program {
local target ;
Depends exe : $(<) ;
Depends $(<) : $(set-$(>)) ;
MakeLocate $(<) : $(LOCATE_TARGET) ;
Clean clean : $(<) ;
Link $(<) : $(set-$(>)) ;
}
The first rule adds a number of .o files to a named set, the second
builds a program from all the .o files in a list of named sets. The
trick is: Build can be in src/a/Jamfile and Program in src/b/Jamfile
and it still works, provided you SubInclude them in that order.
But I'm not entirely happy. Maybe Diane or someone can improve on that.
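For concreteness, usage of the two rules might look like this (the set name is invented; it is just a key into the set-* variables):

```
# In src/a/Jamfile:
Build util-set : source1.cpp source2.cpp ;

# In src/b/Jamfile, SubIncluded after src/a:
Program myprog : util-set ;
```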
From: Steve Goodson <steve.goodson@mscsoftware.com>
Subject: Re: LibraryFromObjects using objects in a different dir
Date: Tue, 30 Sep 2003 11:39:28 -0700
I think you can do this by simply using the Library rule more than once.
In a/Jamfile: Library libab : source1.cpp ;
In b/Jamfile: Library libab : source2.cpp ;
and just make sure LOCATE_TARGET is set to the same place in both Jamfiles.
I haven't actually tried this, but I do something similar using a custom
MLibrary rule that builds multiple variants of a library with different
compilers/switches. It's based on the Library rule so I expect it should
work there too.
From: "Alan Baljeu" <alanb@cornerstonemold.com>
Date: Wed, 1 Oct 2003 12:35:54 -0400
Subject: invoking external build processes
Examine the jam file below. What I want to happen is
that bee is built if and only if cigar is changed by
the call to build. What happens is bee always builds.
Is there a way to get what I want?
Depends all : ant ;
Depends ant : bee ;
Depends bee : cigar ;
Always cigar ;
rule MyRule { TouchFile $(1) ; }
actions TouchFile { touch $(1) }
rule MyRule2 { Build $(1) ; }
actions Build { build $(1) }
MyRule ant ;
MyRule bee ;
MyRule2 cigar ;
Subject: RE: invoking external build processes
Date: Wed, 1 Oct 2003 09:53:40 -0700
From: "Chris Antos" <chrisant@windows.microsoft.com>
Try removing 'Always cigar'. My understanding is that if cigar is
always built, then bee will also be always built because bee depends on cigar.
From: "Alan Baljeu" <alanb@cornerstonemold.com>
Subject: Re: invoking external build processes
Date: Wed, 1 Oct 2003 14:04:29 -0400
Sorry for not being clear.
cigar is _not_ an abstract target. cigar is meant to be a file generated
by another build process. Hence bee correctly depends on cigar. Cigar's own
dependencies are hidden from Jam, and I'm trying to avoid naming them in the jam
file. I was hoping for a mode or a hack which forces Jam to look at the results of the call.
From: "Chris Antos" <chrisant@windows.microsoft.com>
Sent: Wednesday, October 01, 2003 1:37 PM
Subject: RE: invoking external build processes
So figure out how to express some kind of correct dependency.
E.g. make bee depend on a particular file in cigar, that is the last
file updated while building cigar, rather than depending on the abstract target.
From: Alan Baljeu [mailto:alanb@cornerstonemold.com]
Sent: Wednesday, October 01, 2003 10:21 AM
Subject: Re: invoking external build processes
But if I do that, then cigar is never built, because it exists and
the dependencies aren't known to jam. I need another option.
From: "Alan Baljeu" <alanb@cornerstonemold.com>
Subject: Re: invoking external build processes
Date: Wed, 1 Oct 2003 15:28:06 -0400
We are calling build. The results of build can be any of these things:
a) an error
b) cigar is already up to date - no change
c) cigar is updated.
According to the dependency analysis which takes place before everything
is built, cigar is up to date. However, I specified ALWAYS on cigar, so
the system calls "build cigar". Build does one of the above. If build
fails, Jam catches the error no problem. On the other hand, if build
succeeds, Jam doesn't check whether build has updated cigar or not.
I've just been reading the Jam sources - the easiest source file reading
I've ever done - and I see now that Jam's design really doesn't support
what I want. All the dependency analysis is done before making, so
there's no opportunity during the build steps to revise the graph.
The relation I need is this:
Build B if and only if one of its dependencies, C, is updated.
This must be a dynamic relation because the dependencies of C are
hidden. So, whether or not C builds is determined late. Consequently,
whether or not B needs to build is also late-determined, and if A depends
on B, A's rebuilding is late-determined.
It appears this kind of check doesn't exist in Jam yet.
From: "Alan Baljeu" <alanb@cornerstonemold.com>
Subject: Re: invoking external build processes
Date: Thu, 2 Oct 2003 13:00:46 -0400
So I've modified my Jam sources, introducing two changes:
1) After a target is built, I compare the target file's
date with its parent's date in make1b(). If no child
targets changed, I nullify the action for the parent.
2) I disabled caching of the file time in search.c by calling
file_time() instead of timestamp().
This gives me the desired behaviour, and jam can still build
itself with these sources. However, I'm not sure this change
is universally desirable. If people wanted this submitted,
I imagine a couple more changes should be done, such as adding a flag
MAYBE_UPDATES to rules, and having a way to retain the date caching.
From: "Alan Baljeu" <alanb@cornerstonemold.com>
Subject: Re: invoking external build processes
Date: Thu, 2 Oct 2003 15:59:08 -0400
I'm using a date check on the new file to see whether to continue. It seems
to me you want to run a comparison check to evaluate whether to continue.
This points to a possibly more general requirement: inserting some custom
check to decide whether X has changed enough to require further updates upstream.
I'm not too familiar with Jam files, so I don't know the normal way
to perform such a test. If the language is extended, how should it be written?
The extension I'm making would make each build step have 3 possible results:
error, stable, update. Maybe a rule could be associated with an action,
such that the rule binds a system result variable for the action. Maybe
another plan is better.
Subject: Re: invoking external build processes
From: Dag Asheim <dash@linpro.no>
Date: 03 Oct 2003 09:35:15 +0200
For inspiration on such a feature, you might want to take a look at
Cons (http://www.dsmit.com/cons/), which is a Perl-based building
system that uses MD5 signatures for keeping track of changes:
From: Mark Sheppard <msheppard@climax.co.uk>
Subject: RE: invoking external build processes
Date: Fri, 3 Oct 2003 12:14:01 +0100
I too am interested in this. I'm currently writing a fairly complex build
system that needs to update a file with information held within a Jamfile.
To do this, Jam would invoke a command with this information passed in on the
command line. The command then only writes to the file if the information
is different from the information held in the file from the previous
invocation, or if the file doesn't yet exist.
The solution I'd thought of was to add a builtin SHELL rule which would
immediately execute a shell command during the parsing phase. This would
allow the normal timestamp dependency checking to achieve the correct
results during the binding phase. However, someone on the jamboost list said
that this idea had been discussed before and it's apparently quite hard to
implement. Not knowing about Jam internals I had assumed that it wouldn't
be too hard because it sounds like just linking up two features that Jam
already has (builtin rules and invoking shell commands), but I guess my
assumptions were wrong.
The work-around I've thought of (but not yet implemented) is to have a
two-pass system where the first pass does the updating of the file as an
action then calls jam again as another action which does the real work.
It's not very elegant but it should work.
But I think your patch solves my problem properly, and probably in a better
way than adding a SHELL builtin rule. By re-checking the timestamp after
the action, Jam will know whether the file has been updated and can act accordingly.
So I'm all for this new functionality and would love to see it added to the
main Jam codebase.
Date: Fri, 3 Oct 2003 08:35:31 -0700
From: Matt Armstrong <matt@lickey.com>
Subject: Re: invoking external build processes
If the information to be updated is expressed completely within the
jamfiles, then a built in MD5 rule might get you going.
We have a header file that contains all the configuration #defines in
our project. It is currently built "ALWAYS" by a perl script, but that
causes a full build.
So I added an MD5 builtin rule to Jam. Jam computes the MD5 sum of all
the #defines and creates blddefs-<md5sum>.h. Then we have rules to copy
that file to blddefs.h.
That way, it'll only regenerate blddefs.h when the list of configuration
values changes.
See //guest/matt_armstrong/jam/patched_version/... in the Perforce
public depot.
Jamfile: Option OPT_BUILTIN_MD5_EXT : yes ;
Jamfile: if OPT_BUILTIN_MD5_EXT in $(LOCAL_DEFINES) {
builtins.c:#ifdef OPT_BUILTIN_MD5_EXT
builtins.c:#endif /* OPT_BUILTIN_MD5_EXT */
builtins.c:#ifdef OPT_BUILTIN_MD5_EXT
builtins.c:#endif /* OPT_BUILTIN_MD5_EXT */
builtins.c:#ifdef OPT_BUILTIN_MD5_EXT
builtins.c:#endif /* OPT_BUILTIN_MD5_EXT */
and the new files md5.h md5c.c md5test.jam.
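The idea behind this scheme can be sketched outside jam (Python used for illustration; this mirrors the naming logic, not the builtin's actual C implementation):

```python
import hashlib

def blddefs_header(defines):
    """Generate the header body from a dict of configuration #defines
    and name the file after the MD5 of that body, so a new file name
    (and thus a rebuild of dependents) appears only when the values
    actually change."""
    body = "".join("#define %s %s\n" % (k, v)
                   for k, v in sorted(defines.items()))
    name = "blddefs-%s.h" % hashlib.md5(body.encode()).hexdigest()
    return name, body
```

Copying `blddefs-<md5sum>.h` to `blddefs.h` only when the name changes then keeps incremental builds quiet.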
From: "Mark Sheppard" <msheppard@climax.co.uk>
Sent: Friday, October 03, 2003 7:14 AM
Subject: RE: invoking external build processes
I wasn't aware of jamboost before this message came. It seems jamboost is a
significant enhancement to jam. Does anyone know why it shouldn't be
integrated and made part of jam? Except for a couple of boost-specific
items, their changes seem pretty general-purpose. From another angle,
can anyone suggest why all current jam users wouldn't prefer to use
jamboost for builds, or why I wouldn't? Which variant is more actively used?
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Fw: jamboost
Date: Fri, 3 Oct 2003 21:21:08 +0200
Simplicity is one of jam's features. Jam isn't simple because it's
unfinished, but because it's meant to be simple.
By all means, use boost. Or jam.
From: "Alan Baljeu" <alanb@cornerstonemold.com>
Subject: Re: invoking external build processes
Date: Mon, 6 Oct 2003 10:18:48 -0400
I'm missing something here. How do you get Jam to read the contents of
the file you want to MD5?
Date: Wed, 6 Oct 2004 09:09:56 -0700
From: Matt Armstrong <matt@lickey.com>
Subject: Re: invoking external build processes
You MD5-sum the jam variables that affect the build (#defines and other
compiler flags).
From: Vladimir Prus <ghost@cs.msu.su>
Date: Tue, 7 Oct 2003 13:56:14 +0400
Subject: Bug with bound vars and SEARCH/LOCATE?
I seem to have found a bug with bound vars handling. I have an action of the form
actions do bind INPUT {
....
When this action is executed, the "LOCATE" variable on $(<) is set to
something. The INPUT variable includes some target and that target has SEARCH
set on it. The problem is that when binding INPUT, jam looks at LOCATE
setting on $(<), and ignores SEARCH setting on $(INPUT[1]).
A complete reproduction recipe is attached. When I run it with
./jam -fsl.jam -n
I get
cp b /tmp/a && cat /tmp/some-target >> /tmp/a
while I'd expect to see
cp b /tmp/a && cat some-target >> /tmp/a
Depends all : a ;
Depends a : b ;
SEARCH on some-target = . ;
rule do {
LOCATE on $(<) = /tmp ;
INPUT on $(<) = some-target ;
}
actions do bind INPUT { cp $(>) $(<) && cat $(INPUT) >> $(<) }
do a : b ;
From: Vladimir Prus <ghost@cs.msu.su>
Subject: Re: Bug with bound vars and SEARCH/LOCATE?
Date: Tue, 7 Oct 2003 15:55:44 +0400
Hmm... I read your words to mean that if INPUT contains target "some-target",
then variables on "some-target" are used for binding "some-target"? Okay,
then why is the "SEARCH" variable on "some-target" ignored?
Another aspect is that the problem only happens when "some-target" is not in the
dependency graph. In that case it's bound after the variables on $(<) are in
effect, which causes the strange behaviour.
When I add
Depends $(<) : some-target ;
to the "do" rule, the "some-target" is bound at a different time, and all is
fine.
Subject: Re: Bug with bound vars and SEARCH/LOCATE?
From: David Abrahams <david.abrahams@rcn.com>
Date: Tue, 07 Oct 2003 10:06:44 -0400
It isn't. ./some-target is equivalent to some-target. Try setting
SEARCH to something other than "."
I'm not sure what you mean by "all is fine". What code do you use,
what behavior do you see, and why do you expect (or not expect) it?
From: Vladimir Prus <ghost@cs.msu.su>
Subject: Re: Bug with bound vars and SEARCH/LOCATE?
Date: Tue, 7 Oct 2003 18:27:04 +0400
Well, if I set SEARCH to "." and don't set LOCATE, I really don't expect the
target to be bound to /tmp/some-target. And if I set SEARCH to some different
value, I still get the behaviour I don't like.
Hmm... I've attached code to my original post and given the output that I've
got and the output which I consider right. In case the code attachment was
blocked by the gmane interface, it's at
http://zigzag.cs.msu.su:7813/sl.jam
As to why I don't like the current behaviour, I've explained that in the
previous paragraph: it's surprising that a LOCATE setting on some other target
affects the binding of 'some-target'.
From: "Alan Baljeu" <alanb@cornerstonemold.com>
Date: Tue, 7 Oct 2003 11:24:39 -0400
Subject: bind modifier and other variables stuff
From the reference:
[the quoted passage on the "bind" action modifier is garbled in the archive]
I absolutely don't understand what it means when talking about bound values.
Can someone give an example of how the bind modifier is used?
Also, if the target is multiple files, how does $(LOCATE) work?
Is there a way I can access values on targets within an actions definition?
How can I get information from one target and use to set a variable anywhere else?
From: "Alan Baljeu" <alanb@cornerstonemold.com>
Date: Tue, 7 Oct 2003 18:03:13 -0400
Subject: What is build.com?
I found it in jam-2.4.zip, but it isn't an MS-DOS application. What's it for?
It appears to be a script for building a bootstrap jam on VMS.
Date: Mon, 13 Oct 2003 10:07:26 -0700
From: Matt Armstrong <matt@lickey.com>
Subject: Re: invoking external build processes
I use the GLOB rule to find "old" versions of blddefs.h.<sum>.md5. I
glob for the pattern blddefs.h.*.md5. I make the "current"
blddefs.h.<sum>.md5 depend on the deletion of the out of date ones, if GLOB finds any.
From: Vladimir Prus <ghost@cs.msu.su>
Subject: Re: invoking external build processes
Date: Mon, 13 Oct 2003 09:32:42 +0400
Let me make sure I understand correctly. After the first build, you've got:
Until something changes, the same dependency exists and jam thinks blddefs.h
is up-to-date. When the #defines change, the new dependency will be
so jam will notice that blddefs-<md5sum2>.h does not exist, and will create
it and update blddefs.h, right?
If so, then the question is: when is blddefs-<md5sum1>.h cleaned up? Or are
those files left forever?
From: "Sander Stoks" <sander@stoks.nl>
Date: Sun, 12 Oct 2003 22:54:14 +0200 CEST
Subject: [newbe] explicit special target
This is probably a FAQ, but I couldn't find it in the archives
(probably because I didn't know what to look for).
I'm in the process of converting my Makefiles to Jamfiles, and ran into
something which I didn't know how to solve.
My makefile has a special fake target like so:
floppy: all
mcopy -n file1 a:
mcopy -n file2 a:/some/dir
I would like to Jammify this.
One problem I had is that the default target for jam is "all", and I
don't want to make a floppy every time (most of the time, there is no
floppy inserted, for one). I therefore thought I'd need to add a
special target in a similar way as the "clean" target is handled.
My efforts so far include:
NOTFILE floppy ;
NOCARE floppy ;
Depends floppy : all ;
ALWAYS floppy ;
actions existing Floppy {
MCopy file1 a: ;
MCopy file2 a:/some/dir ;
}
actions MCopy { mcopy -n $(1) $(2) ; }
followed by the rest of my Jamfile (which works just fine).
What I'd like is that if I simply type "jam", "all" is made (but not
the floppy), and when I type "jam floppy", the relevant files are
copied onto a floppy.
Is this possible, and if so, am I on the right track? A pointer to a
FAQ list where I can find this is welcome too.
Date: Sun, 12 Oct 2003 13:15:21 -0700
From: "Wallace, Richard" <Richard.Wallace@specastro.com>
Subject: Duplicate actions executing
I'm just wondering what the best way of tracking down duplicate actions being
executed is. I have my project being built now, and am using the InstallBin
rule to move executables into the right place. Most of them are being done
just fine. But there are two that jam seems to think it needs to install twice.
I've looked all over to make sure that that shouldn't be happening, but can't
seem to find a cause.
From: "Sander Stoks" <sander@stoks.nl>
Sent: Sunday, October 12, 2003 4:54 PM
Subject: [newbe] explicit special target
try
Depends all : floppy ; # build all -> build floppy first
ALWAYS floppy ;
NOTFILE floppy ;
Floppy floppy ; # how to make floppy - call the rule/action Floppy.
Actions are strictly shell scripts, so one action cannot call another.
However, rules can invoke actions, and actions can use Jam variables. Try this:
rule Floppy {
MCopy file1 a: ;
MCopy file2 a:/some/dir ;
}
actions existing MCopy { mcopy -n $(1) $(2) ; }
or this:
MCopy = "mcopy -n" ;
actions existing Floppy { $(MCopy) file1 a: ; $(MCopy) file2 a:/some/dir ; }
The docs are not very strong. You must look at several sources to figure
things out. Read the reference. Study the examples, including Jambase.
Search the mailing list archives. Have a look at the FTJam instructions
found elsewhere on the web.
Date: Tue, 14 Oct 2003 08:23:03 -0700
From: Matt Armstrong <matt@lickey.com>
Subject: Re: Duplicate actions executing
Run Jam with a high enough debug level and see why InstallBin is being
called more than once for the given target. jam -d5 might do it, but
jam -d7 certainly will.
From: "Alan Baljeu" <alanb@cornerstonemold.com>
Date: Tue, 14 Oct 2003 11:24:18 -0400
Subject: target substitution not working
What am I doing wrong?
Fragment:
rule Dir {
Depends $(1) : $(2) ;
NOTFILE $(1) ;
NOTFILE $(2) ;
CopyDir $(1) : $(2) ;
}
actions CopyDir { xcopy /s /e /i /y $(2) $(1) ; }
LOCATE on <build>TSTSolverRuleA = $(X); # I know $(X) is a valid directory name
Dir <output>TSTSolverRuleA : <build>TSTSolverRuleA ;
LOCATE on <output>TSTSolverRuleA = $(Y) ;
output is
CopyDir <output>TSTSolverRuleA
The system cannot find the file specified.
xcopy /s /e /i /y <build>TSTSolverRuleA <output>TSTSolverRuleA
...failed CopyDir <output>TSTSolverRuleA ...
I expect the system to substitute actual target locations, but it doesn't.
Date: Tue, 14 Oct 2003 09:26:19 -0700
From: "Daniel Adent" <dadent@microsoft.com>
Subject: Question on Variable Products / Expansion
I am relatively new to using Jam and we are in the process of converting
over our existing efforts to build on Jam. In this process I am
writing rules and actions to build Managed C# projects for Windows. In
this effort, I have encountered a situation where I want to have a
target-dependent variable that lists resources (that are themselves
targets) that are to be used in the link-process of the final executable
target. I have the situation below:
* When the resources are defined, I set NEEDRESOURCES on Target to
a list of resource files:
* E.g. NEEDRESOURCES on $(_t) += $(i)
* $(_t) is "SampleApp.exe"
* The list of resources usually contains multiple files (e.g.
"Form1.resource" "Form2.resource")
* In my final compile/link command (for managed code), I bind the NEEDRESOURCES
* actions Csc bind NEEDRESOURCES (Csc is the rule that I have created)
* basically does a $(CSC) /out:$(<) $(>) $(resource-gunk)
* I am having trouble generating resource-gunk from NEEDRESOURCES.
For each resource, I need to alter it. For each resource in the
NEEDRESOURCES variable, I need to change the argument to be
"/res:$(NEEDRESOURCES),$(<:BL).$(NEEDRESOURCES:BS)". The problem that
I currently have is that you can see this is a product-expansion and it
only works correctly if there is only one element in NEEDRESOURCES
(which there never is only 1). In my sample above, I need my argument
to the "csc" application to be "/res:Form1.resources,sampleapp.Form1.resouces
/res:Form2.resources,sampleapp.Form2.resources". I tried tacking on
the goo in the setting of NEEDRESOUCES, but then jam does not associate
the entries with their targets (and paths get messed up). Also, I
cannot do a loop inside an action, so I can't seem to do it with a
natural loop through the list of resources.
Is there a way to keep the product from expanding (but rather shuffle)?
Or should I be doing this a different way?
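For context, Jam expands adjacent list variables as a full cross product, which is why the single "/res:$(NEEDRESOURCES),$(<:BL).$(NEEDRESOURCES:BS)" pattern misbehaves once NEEDRESOURCES has more than one element. The contrast can be illustrated in Python (an analogy, not Jam's code; both function names are invented):

```python
from itertools import product

def cross_product(*lists):
    """Jam-style expansion: "$(A)$(B)" yields every combination of
    one element from each list."""
    return ["".join(parts) for parts in product(*lists)]

def per_resource_args(resources, out_base):
    """What the poster wants instead: one /res: argument per resource,
    built element by element rather than as a product."""
    return ["/res:%s,%s.%s" % (r, out_base, r) for r in resources]
```

Alan's reply below achieves the per-element form in Jam by looping over the sources and accumulating the arguments in a variable set on the target.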
From: "Alan Baljeu" <alanb@cornerstonemold.com>
Subject: Re: Question on Variable Products / Expansion
Date: Tue, 14 Oct 2003 14:55:57 -0400
Try something like this:
rule B {
X = ;
for j in $(>) {
X += "/res:$(j),$(<:BL).$(j:BS)" ;
}
X on $(<) = $(X) ;
}
actions B { echo $(X) ; }
B x.cs : Form1.resource Form2.resource Form3.resource ;
From: "Alen Ladavac" <alen@croteam.com>
Subject: Re: target substitution not working
Date: Wed, 15 Oct 2003 08:19:53 -0000
Just a wild guess, but aren't the NOTFILE statements wrong here? If you tell
Jam that something is not a file, why do you expect it to bind it? Perhaps what
you want there is NoUpdate (meaning that the timestamp is ignored)?
Subject: Re: [newbe] explicit special target
From: Matthew Doar <matt@trpz.com>
Date: 15 Oct 2003 08:51:50 -0700
What I do is
actions MCopy { mcopy -n $(1) $(2) ; }
MCopy floppy ;
No rule for floppy means that the Jam "all" target doesn't build it.
But specifying jam floppy will run the action.
Date: Sat, 18 Oct 2003 21:03:14 +0200
From: Bartosz Fenski aka fEnIo <fenio@o2.pl>
Subject: running jam in fakeroot
I'm trying to package the game NetPanzer for Debian. It uses jam for
building, and I've got a problem.
How do I pass a prefix option to jam?
I've got the following line with make:
$(MAKE) install prefix=$(CURDIR)/debian/netpanzer/usr
What is the analogous line using jam?
I found the variable LOCATE and I've set:
LOCATE=/home/fenio/packaging/netpanzer-0.1.1/debian/netpanzer/usr jam install
But this doesn't work.
Any suggestions are welcome.
From: "Alan Baljeu" <alanb@cornerstonemold.com>
Date: Mon, 20 Oct 2003 09:43:18 -0400
Subject: reading on-target variables
As I understand things, you can write a variable like
X on <output>foo = xxx ;
but there is no syntax to read X back.
_If_ a rule accessed target-specific variables, we could do this:
rule READ { return $($(>)) ; }
For example:
rule B { X on $(<) = hello ; Y on $(<) = [ READ $(<) : X ] eh ; }
actions B { echo $(Y) }
But rules can't access target-specific variables. We might extend the
variable access syntax:
rule B { X on $(<) = hello ; Y on $(<) = $(X on $(<)) eh ; }
There doesn't seem to be any problems with this except to define the more
complex cases, such as $(x:E="blue_moon" on <output>foo). But in current
jam, this isn't defined either. Solution? Remove the spaces:
rule B { X_on_$(<) = hello ; Y_on_$(<) = $(X_on_$(<)) eh ; }
actions B { echo $(Y_on_$(<)) }
B x.cs ;
ALWAYS x.cs ;
Depends all : x.cs ;
On-target variables turn out to be unnecessary.
Subject: RE: reading on-target variables
Date: Mon, 20 Oct 2003 11:05:17 -0700
From: "Chris Antos" <chrisant@windows.microsoft.com>
However $(X) on foo will return plain old $(X) if there is no $(X) on
foo set yet (and that's used extensively).
Your approach doesn't seem to mimic that.
From: "Alen Ladavac" <alen@croteam.com>
Subject: Re: reading on-target variables
Date: Mon, 20 Oct 2003 21:28:48 +0200
How about:
rule Var { return $($(>)) ; }
rule test {
FOO on BAR = "foo on bar" ;
local foo_on_bar = [ on BAR Var FOO ] ;
Echo foo_on_bar ;
}
From: "Alen Ladavac" <alen@croteam.com>
Subject: Re: reading on-target variables
Date: Tue, 21 Oct 2003 07:34:03 -0000
Oops. Should have been
Echo $(foo_on_bar) ;
But you get the point, I guess.
Date: Thu, 13 Nov 2003 02:14:06 +0800
From: Andy Sy <andy@netfxph.com>
Subject: how well written is this Jamfile? / using the output of pkg-config with Jam
I'm trying to use Jam to compile Windows Gtk2 programs. The output of
c:\Gtk-dev\bin\pkg-config.exe --cflags --libs gtk+-2.0
is:
-Ic:/Gtk-dev/include/gtk-2.0 -Ic:/Gtk-dev/lib/gtk-2.0/include -Ic:/Gtk-dev/include/atk-1.0
-Ic:/Gtk-dev/include/pango-1.0 -Ic:/Gtk-dev/include/glib-2.0 -Ic:/Gtk-dev/lib/glib-2.0/include
-Lc:/Gtk-dev/lib -lgtk-win32-2.0 -lgdk-win32-2.0 -latk-1.0 -lgdk_pixbuf-2.0 -lpangowin32-1.0
-lgdi32 -lpango-1.0 -lgobject-2.0 -lgmodule-2.0 -lglib-2.0 -lintl -liconv
and I translated it to a Jamfile:
INCDIR =
-Ic:/Gtk-dev/include/gtk-2.0 -Ic:/Gtk-dev/lib/gtk-2.0/include -Ic:/Gtk-dev/include/atk-1.0
-Ic:/Gtk-dev/include/pango-1.0 -Ic:/Gtk-dev/include/glib-2.0 -Ic:/Gtk-dev/lib/glib-2.0/include ;
LIBDIR =
-Lc:/Gtk-dev/lib ;
LINKLIBS =
-lgtk-win32-2.0 -lgdk-win32-2.0 -latk-1.0 -lgdk_pixbuf-2.0
-lpangowin32-1.0 -lgdi32 -lpango-1.0 -lgobject-2.0 -lgmodule-2.0 -lglib-2.0 -lintl -liconv ;
CCFLAGS = -mms-bitfields ;
CCFLAGS += $(INCDIR) ;
LINKFLAGS += $(LIBDIR) ;
Main base : base.c ;
which seems to work fine.
2 Questions:
1) How can the Jamfile be improved? This is my first Jamfile and I may be using
some conventions wrong...
2) Is it wise to have the Jamfile automatically include the output of pkg-config
instead of copy-pasting it? If so, how to go about this?
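On question 2, one approach is a small script that splits the pkg-config output into the three variables and writes a Jamfile fragment for inclusion. The splitting step might look like this (Python used for illustration; `split_pkg_config` is a made-up helper, and the grouping simply follows the INCDIR / LIBDIR / LINKLIBS split above):

```python
def split_pkg_config(flags_line):
    """Split a `pkg-config --cflags --libs` output line into include
    flags, library-path flags, and everything else (libraries and
    other flags), matching the INCDIR / LIBDIR / LINKLIBS grouping."""
    incdir, libdir, linklibs = [], [], []
    for tok in flags_line.split():
        if tok.startswith("-I"):
            incdir.append(tok)
        elif tok.startswith("-L"):
            libdir.append(tok)
        else:
            linklibs.append(tok)
    return incdir, libdir, linklibs
```

The script would then emit `INCDIR = ... ;` etc. into a file that the Jamfile pulls in with `include`.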
Date: Tue, 18 Nov 2003 10:47:02 +0100
From: "Niklaus Giger" <n.giger@netstal.com>
Subject: Two small fixes for CYGWIN and a question
I made the following two fixes for the Cygwin build:
jam.h (line 259) around # ifdef __cygwin__
added
# define DOWNSHIFT_PATHS
In Jambase, replaced line 997 (just after "# Don't try to
create A: or A:\ on windows")
if $(NT)
by
if $(NT) || ( "$(OS)" = "CYGWIN" ) || ( "$(OS)" = "MINGW" )
That was all.
Why didn't you release version 2.5?
Best regards and many thanks for your great product which we use heavily.
From: Roger.Shimada@lawson.com
Date: Tue, 18 Nov 2003 19:00:15 -0600
Subject: Building .rc files on Windows
I'm a Jam newbie looking into it as a possible nmake replacement.
Our Windows makefiles currently create .rc files, which get compiled into
.res files, which are linked into a program.
The following compiles the .rc file if it exists, but does not create the
.rc file. The LawMkRc action is not being called.
Any pointers would be appreciated!
# modified Main rule
rule Main
{
local objs var ;
Objects $(>) ;
objs = $(>:S=$(SUFOBJ)) ;
for var in $(<) {
LawMkRc $(var)_ver.rc : $(objs) : $(var) ;
Objects $(var)_ver.rc ;
MainFromObjects $(GENBIN)$(var) : $(objs) $(var)_ver.res ;
}
}
rule LawMkRc {
local source target ;
source = [ FGristSourceFiles $(>) ] ;
target = [ FGristSourceFiles $(<) ] ;
Depends $(target) : $(source) ;
MakeLocate $(target) : $(LOCATE_SOURCE) ;
SEARCH on $(target) = $(SEARCH_SOURCE) ;
RCOBJ on $(target) = $(3) ;
Clean clean : $(target) ;
}
actions LawMkRc {
echo LawMkRc action $(<) $(>)
cd $(<:D)
$(MKS)\touch $(<:B).rc
if exist $(MKRC) $(MKRC) -n $(RCOBJ) -o $(<:B).rc $(>)
}
rule LawRc {
Depends $(<) : $(>) ;
Clean clean : $(<) ;
}
actions LawRc {
cd $(<:D)
$(RC) /I$(MSSDK)\include /I$(VCDEVDIR)\mfc\include /fo$(<) $(>)
}
rule UserObject {
switch $(>:S) {
case .rc : LawRc $(>:S=.res) : $(>) ;
case * : EXIT Unknown suffix for $(>) ;
}
}
Date: Wed, 19 Nov 2003 17:57:58 +0100
From: "Niklaus Giger" <n.giger@netstal.com>
Subject: Another small patch for Windows
If I want to build a target on a different drive letter on Windows NT, the
default Jambase does not handle this situation correctly, as it places
all output directly under ALL_LOCATE_TARGET instead of putting it into subdirectories.
Here is an example. I am using the directory T:\niklaus with the following Jamfile:
FIRST = H:/xrun/bb/make/1_2_x/test ;
ALL_LOCATE_TARGET = . ;
SubDir FIRST proj1 sub1 ;
SubInclude FIRST proj1 sub1 ;
Calling "jam -dx exe" results in
<..>
gcc -c -o sub_1_2.o -DMINGW
-IH:/xrun/bb/make/1_2_x/test\proj1\sub1\sub1_2
H:/xrun/bb/make/1_2_x/test\proj1\sub1\sub1_2\sub_1_2.c
gcc -o sub1_2.exe sub_1_2_main.o sub_1_2.o
After setting LOCATE_TARGET in the Jambase, in the rule SubDir, to
LOCATE_TARGET = [ FDirName $(ALL_LOCATE_TARGET) $(SUBDIR_TOKENS) ] ;
instead of
LOCATE_TARGET = $(ALL_LOCATE_TARGET) $(SUBDIR) ;
I get the desired behaviour, e.g.
gcc -c -o proj1\sub1\sub1_1\sub_1_1.o -DMINGW
-IH:/xrun/bb/make/1_2_x/test\proj1\sub1\sub1_1
H:/xrun/bb/make/1_2_x/test\proj1\sub1\sub1_1\sub_1_1.c
gcc -o proj1\sub1\sub1_1\sub1_1.exe
proj1\sub1\sub1_1\sub_1_1_main.o proj1\sub1\sub1_1\sub_1_1.o
Date: Fri, 21 Nov 2003 10:18:23 -0800
From: Matt Armstrong <matt@lickey.com>
Subject: [BUG] jam 2.5rc3 and parallel builds
I've found and isolated a bug with parallel builds (-j<n>) and jam 2.5rc3.
A small source tree that exhibits the bug is at
//guest/matt_armstrong/jam/bug/1/...
A .h file is generated at run time. It gets into the dependency tree
via normal header scanning.
With -j1 builds, the generated.h file is built first and everything is
fine. With -j2 or higher, the generated.h file is built first, but
other .c files that #include it get built in parallel and fail.
I haven't been able to decipher make1[abcd] well enough to figure out
what is going on or develop a fix. I have discovered no workaround
either (e.g. making 'first' depend on generated.h doesn't seem to help).
Subject: RE: [BUG] jam 2.5rc3 and parallel builds
Date: Tue, 25 Nov 2003 11:27:44 -0800
From: "Chris Antos" <chrisant@windows.microsoft.com>
There's a known bug about A -> B,C,D -> F (where -> means depends on)
with parallel builds. Also, as far as I know, the "first" target has
little or no effect on the order things are built in. At least, that's
been my experience, and in some cases I'm pretty sure it was at least
partly due to the bug mentioned above. Anyway, the bug is that as soon
as any of B,C,D is complete then A starts. I've posted about this bug a
couple times before, and the .pch and .idl rules I posted for Jam+MSVC a
(long) while ago work around the problem, but I don't remember if they use
the trick I mention below, or another trick.
I tracked down the problem inside the Jam code, but I don't see a simple
solution so I have not yet tried to design or implement a real fix. The
workarounds suffice for my needs. The problem decision is made in
make1c() in the "else" block. Here is my copy of the function, with a
big comment block in the "else" block describing the problem.
static void
make1c( TARGET *t ) {
CMD *cmd = (CMD *)t->cmds;
/* If there are (more) commands to run to build this target */
/* (and we haven't hit an error running earlier commands) we */
/* launch the command with execcmd(). */
/* If there are no more commands to run, we collect the status */
/* from all the actions then report our completion to all the */
/* parents. */
if( cmd && t->status == EXEC_CMD_OK ) {
if( DEBUG_MAKE )
if( DEBUG_MAKEQ || ! ( cmd->rule->flags & RULE_QUIETLY )) {
printf( "%s ", cmd->rule->name );
list_print( lol_get( &cmd->args, 0 ) );
printf( "\n" );
}
if( DEBUG_EXEC ) printf( "%s\n", cmd->buf );
if( globs.cmdout ) fprintf( globs.cmdout, "%s", cmd->buf );
if( globs.noexec ) {
make1d( t, EXEC_CMD_OK );
} else {
fflush( stdout );
execcmd( cmd->buf, make1d, t, cmd->shell );
}
} else {
TARGETS *c;
ACTIONS *actions;
/* Collect status from actions, and distribute it as well */
for( actions = t->actions; actions; actions = actions->next)
if( actions->action->status > t->status )
t->status = actions->action->status;
for( actions = t->actions; actions; actions = actions->next)
if( t->status > actions->action->status )
actions->action->status = t->status;
/* Tally success/failure for those we tried to update. */
if( t->progress == T_MAKE_RUNNING )
switch( t->status ) {
case EXEC_CMD_OK:
++counts->made;
break;
case EXEC_CMD_FAIL:
++counts->failed;
break;
}
/* Tell parents dependent has been built */
/*
* Multi-job builds (-j2) can get here prematurely:
*/
/*
* rule Abc
* {
* Depends $(<) : $(>) ;
* Clean clean : $(<) ;
* }
*
* rule Xyz
* {
* Depends $(<) : $(>) ;
* Depends all : $(<) ;
* }
*
* actions Abc
* {
* echo abc > $(<[1])
* pause
* echo xyz > $(<[2])
* }
*
* actions Xyz
* {
* type $(>)
* }
*
* Abc foo bar : stinky.txt ;
* Abc one two : stinky.txt ;
* Xyz zzz : foo ;
* Xyz yyy : bar ;
*/
/*
* One set of actions is reponsible for building both foo and bar,
* and Jam correctly assigns the actions to be performed only
* once. So 'bar' has no actions, and thus immediately drops into
* this case and falsely tells its parents it has been built.
*
* The hack work around that jamfiles currently use is to make the
* dependency chain linear ( bar : foo : stinky.txt ). But this
* breaks when 'bar' is missing but 'foo' exists; Jam is unable to
* rebuild 'bar'.
*
* A real solution is not yet clear to me. Maybe Jam could assign
* job affinities so that 'foo' and 'bar' run in the same job slot
* thus avoiding the concurrency problem. Maybe Jam could create
* an in-memory dependency ( bar : foo ). Etc?
*
* Note, I think the job affinity idea would have to extend all
* the way down the dependency graph, so it could potentially
* degrade parallel builds by serializing them too much.
*/
t->progress = T_MAKE_DONE;
for( c = t->parents; c; c = c->next )
make1b( c->target );
}
}
From: Matt Armstrong <matt@lickey.com>
Sent: Friday, November 21, 2003 10:18 AM
Subject: [BUG] jam 2.5rc3 and parallel builds
I've found and isolated a bug with parallel builds (-j<n>) and jam 2.5rc3.
A small source tree that exhibits the bug is at
//guest/matt_armstrong/jam/bug/1/...
A .h file is generated at run time. It gets into the dependency tree
via normal header scanning.
With -j1 builds, the generated.h file is built first and everything is
fine. With -j2 or higher, the generated.h file is built first, but
other .c files that #include it get built in parallel and fail.
I haven't been able to decipher make1[abcd] well enough to figure out
what is going on or develop a fix. I have discovered no workaround
either (e.g. making 'first' depend on generated.h doesn't seem to help).
Date: Thu, 20 Nov 2003 18:15:40 +0100 (CET)
From: Matze Braun <matze@braunis.de>
Subject: Re: Another small patch for Windows
I submitted exactly the same thing a while ago. Apparently the current
behaviour is intended (I still don't understand why, though). But there has
been a solution for this problem since jam 2.5: you can set the
undocumented SUBDIRRULES variable and provide rules that are invoked for
each SubDir rule call. So you can easily achieve the wanted behaviour:
SUBDIRRULES += FixSubDirPath ;

rule FixSubDirPath {
    LOCATE_TARGET = [ FDirName $(ALL_LOCATE_TARGET) $(SUBDIR_TOKENS) ] ;
}
Date: Sun, 23 Nov 2003 00:36:12 +0100 (CET)
From: Matze Braun <matze@braunis.de>
Subject: bugfix for SubInclude rule
The current SubInclude rule is problematic when used in the middle of a
Jamfile. For example:
---
SubDir TOP src ;
SubInclude TOP src bla ;
# this will go wrong because the SubInclude rule already changed the
# subdir and grist settings.
Main blup : bla.cpp ;
---
This is an attempt to fix the broken behaviour. Unfortunately it won't
work when several different TOP dirs are used. (Maybe we should also save
the name of the TOP dir in the SubDir rule.)
rule SubInclude {
    # SubInclude TOP d1 ... ;
    #
    # Include a subdirectory's Jamfile.
    # We use SubDir to get there, in case the included Jamfile
    # either doesn't have its own SubDir (naughty) or is a subtree
    # with its own TOP.

    if ! $($(<[1]))
    {
        Exit SubInclude $(<[1]) without prior SubDir $(<[1]) ;
    }

    local save_SUBDIR_TOKENS = $(SUBDIR_TOKENS) ;

    SubDir $(<) ;
    include $(JAMFILE:D=$(SUBDIR)) ;
    SubDir $(<[1]) $(save_SUBDIR_TOKENS) ;
}
Date: Wed, 26 Nov 2003 14:39:18 -0800
From: "Hong Zhang" <hong@tapwave.com>
Subject: locate problem
I am running into problems specifying the output folder for some
intermediate files. The intermediate file has to go into the same folder
as the input.
rule Foo {
    SEARCH on $(<) = $(SEARCH_SOURCE) ;
    return $(<:S=.foo) ;
}

actions Foo { $(FOO) $(<) -o $(<:S=.foo) ; }
However, the caller of Foo will not find $(<:S=.foo). I can't use
LOCATE in this case, since it depends on SEARCH_SOURCE. Thanks in
advance for your suggestion.
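[One possible direction -- a sketch only, not a tested answer, and it
assumes SEARCH_SOURCE names the single directory the input lives in -- is
to locate the generated file explicitly next to the input:

rule Foo {
    SEARCH on $(<) = $(SEARCH_SOURCE) ;
    # Place the generated .foo file in the same directory the input
    # is searched in, instead of relying on a global LOCATE:
    MakeLocate $(<:S=.foo) : $(SEARCH_SOURCE) ;
    return $(<:S=.foo) ;
}

With MakeLocate the caller can then find $(<:S=.foo) bound to that
directory.]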
From: Roger.Shimada@lawson.com
Date: Wed, 26 Nov 2003 17:32:10 -0600
Subject: Unwanted generated header dependencies
We use a program that generates a .h file from a .c file. This is the
LawMkHdr rule mentioned below.
There are times when a generated .h #includes another generated .h. If
the #included .h is updated, Jam will regenerate the .h that #includes
it. This is bad: both .h files are for libraries, so the updating of a
second-level .h forces unnecessary rebuilds of objects.
For example, there are files a.c and b.c. a.h is generated from a.c and
b.h is generated from b.c. b.c has a #include "a.h". After a.c is
updated I get the following Jam output:
...found 15 target(s)...
...updating 5 target(s)...
warning: using independent target a.c
LawMkHdr a.h
Processing a.c to a.h
warning: using independent target b.c
LawMkHdr b.h
Processing b.c to b.h
Cc a.obj
a.c
Cc b.obj
b.c
Archive broke.lib
The "LawMkHdr b.h" (which displays "Processing b.c to b.h") is the problem.
I tried setting HDRRULE to a do-nothing rule in LawMkHdr, but this had
no effect.
How can I tell Jam that b.h _only_ depends on b.c? This is a big deal -
it's enough to stop my research into Jam.
Date: Thu, 27 Nov 2003 10:34:36 +0900
From: Darren Cook <darren@dcook.org>
Subject: Making different library versions
I'm new to Jam, and am struggling to work out how to build multiple
versions of a library with different compile flags. I've stripped it
down to a fairly minimal case with just one file:
SIZES = 9 19 ;

# Have a library of core files that actually have a cpp file.
for S in $(SIZES) {
    Object assertion.$(S).o : assertion.cpp ;
    C++Flags on assertion.$(S).o = -DBWIDTH=$(S) -DBHEIGHT=$(S) ;
    LibraryFromObjects core_lib.$(S) : assertion.$(S).o ;
}
(I also tried ObjectC++Flags)
But jam -n tells me:
cc -c -O -I/usr/local/src/boost-1.30.2/ -o assertion.9.o assertion.cpp
cc -c -O -I/usr/local/src/boost-1.30.2/ -o assertion.19.o assertion.cpp
ar ru core_lib.a assertion.9.o assertion.19.o
When what I want it to do is:
cc -c -O -I/usr/local/src/boost-1.30.2/ -DBWIDTH=9 -DBHEIGHT=9
-o assertion.9.o assertion.cpp
cc -c -O -I/usr/local/src/boost-1.30.2/ -DBWIDTH=19 -DBHEIGHT=19
-o assertion.19.o assertion.cpp
ar ru core_lib.9.a assertion.9.o
ar ru core_lib.19.a assertion.19.o
I guess I'm misunderstanding something fundamental - could someone tell me
what it is, and the most elegant way to achieve this.
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Making different library versions
Date: Thu, 27 Nov 2003 10:24:35 +0100
core_lib.$(S) seems to be the problem. Either core_lib_$(S) or
core_lib.$(S).a seems to work fine.
Jam evidently wants archives to end in .a and will change the extension if necessary.
Date: Thu, 27 Nov 2003 20:12:40 +0900
From: Darren Cook <darren@dcook.org>
Subject: Re: Making different library versions
I see - I got confused with "ObjectC++Flags".
Still haven't got my head around "grist", but I copied your example below
and it worked. I then followed the pattern for the other places I was
stumbling and they now work as well. Thanks.
Here's how I'd do it:
SIZES = 9 19 ;

# Have a library of core files that actually have a cpp file.
for size in $(SIZES) {
    local objfile = [ FGristFiles assertion.$(size)$(SUFOBJ) ] ;
    C++FLAGS on $(objfile) += -DBWIDTH=$(size) -DBHEIGHT=$(size) ;
    Object $(objfile) : assertion.cpp ;
    LibraryFromObjects core_lib.$(size)$(SUFLIB) : $(objfile) ;
}
Subject: RE: Making different library versions
Date: Thu, 27 Nov 2003 14:15:48 +0100
From: "Turner David" <dturner@cptechno.com>
Forget about grist, ObjectC++Flags and LibraryFromObjects. You could make your
life much easier with a little trick involving local variables:
# Have a library of core files that actually have a cpp file.
for size in $(SIZES) {
    # Define a local variable whose value is the global C++FLAGS plus
    # additional flags that will be used by Library and the rules it calls.
    local C++FLAGS = $(C++FLAGS) -DBWIDTH=$(size) -DBHEIGHT=$(size) ;
    Library core_lib_$(size) : assertion.cpp ;
}
Another excellent use of Jam's dynamic variable scoping. This also works
extremely well for temporarily changing the include path
(local HDRS = $(HDRS) <yourotherpaths> ;) and many other options.
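[The same scoping trick applied to headers might look like this -- a
sketch; the wrapper rule name and extra path are purely illustrative:

rule MainWithExtraHdrs {
    # Rules invoked from here (Main, Object, ...) see this HDRS;
    # the outer value is restored when the rule returns.
    local HDRS = $(HDRS) $(TOP)/extra/include ;
    Main $(<) : $(>) ;
}]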
Date: Thu, 27 Nov 2003 09:32:51 -0800 (PST)
Subject: RE: Making different library versions
That won't actually work, because there's nothing to distinguish the
object file. What that gets you is two compiles of assertion.cpp, with
the flags doubled up on each compile, and that one (compiled-twice)
object file going into both size-specific libs -- or at least trying to:
since the first Archive will remove the object file, the second Archive
will fail.
Grist is how Jam distinguishes targets when there's more than one of the
same name (e.g., main.c). By default, it's the subdir path elements
(i.e., $(SUBDIR_TOKENS)), separated by "!"s and enclosed in "<" and ">",
all of which is prepended to the file name -- e.g.,
<subdir1!subdir2>main.c. You can change what it is if you want or need
to, but ordinarily you can just let it be.
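[A small sketch of grist in action; the directory and target names here
are illustrative:

# Two object targets both named main.o, kept distinct by explicit grist:
Object <app1>main$(SUFOBJ) : app1/main.c ;
Object <app2>main$(SUFOBJ) : app2/main.c ;

# Or let FGristFiles apply the current SUBDIR-based grist for you:
local objs = [ FGristFiles main$(SUFOBJ) ] ;]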
Date: Thu, 27 Nov 2003 13:29:17 -0800
From: Matt Armstrong <matt@lickey.com>
Subject: Re: Unwanted generated header dependencies
What version of Jam are you using? The 2.5rc3 version attempts to fix this problem (I think).
Date: Fri, 28 Nov 2003 09:07:22 +0900
From: Darren Cook <darren@dcook.org>
Subject: -q flag not working?
When I run "jam -q" it seems to have no effect - it tries to compile,
and fails, each source file in turn (i.e. the problem is in a header
file they all depend on). However, Ctrl-C does interrupt it correctly.
I had a look at the source, and in make1.c:make1d() it seems to handle
the -q case; yet
...failed C++ myfile.o...
is being output from that same make1d() function. So I don't see why it
doesn't work.
Is this a bug? (I'm using Linux and bjam version 3.1.0.)
From: Vladimir Prus <ghost@cs.msu.su>
Subject: Re: -q flag not working?
Date: Fri, 28 Nov 2003 09:15:50 +0300
First off, I think it's generally better not to post bjam bugs to the
Jam list, though in this case your post is applicable to both jams.
Anyway, about this bug report: I think bjam has it fixed; at least there
was this commit:
2002-07-02 15:53 vladimir_prus
* make1.c: Fix "-q" option, thanks to Markus Scherschanski.
* jam_src/make1.c (make1d): Quickquit in all cases, not only when
  DEBUG_MAKE is set.
This change went into 3.1.1, so you might try to get that version, or better
yet, the most current one at
http://sourceforge.net/project/showfiles.php?group_id=7586
I think Perforce Jam still has this bug.
From: Roger.Shimada@lawson.com
Date: Sun, 30 Nov 2003 20:42:39 -0600
Subject: Details on unwanted generated header dependencies
This is with Jam 2.5rc3 on Windows 2000.
I was told that the "independent target" warnings might be a problem, so
here's everything.
Jamrules
========
rule LaHeaderLocal {
    local var ;
    for var in $(<) {
        LawMkHdr $(var:B).h : $(var:B).c : -l ;
    }
}
rule LawMkHdr {
    local source ;
    source = [ FGristSourceFiles $(>) ] ;
    Depends files : $(<) ;
    Depends $(<) : $(source) ;
    Clean clean : $(<) ;
    SEARCH on $(source) = $(SEARCH_SOURCE) ;
    MakeLocate $(<) : $(LOCATE_SOURCE) ;
    MKHDRFLAG on $(<) = $(3) ;
}

actions LawMkHdr {
    cd $(>:D)
    mkhdr $(MKHDRFLAG) $(>:B)$(>:S)
}
hdr/Jamfile
===========
SubDir TOP hdr ;
LaHeaderLocal a.h b.h ;
Library broke : a.c b.c ;
hdr/a.c
=======
int IntA = 0;
hdr/b.c
=======
#include "a.h"
int IntB = 0;
mkhdr generates hdr/a.h
=======================
/* Source: C:\tiny\hdr/a.c */
extern int IntA;
mkhdr generates hdr/b.h
=======================
/* Source: C:\tiny\hdr/b.c */
#include "a.h"
extern int IntB;
output of "jam -a"
==================
...found 17 target(s)...
...updating 5 target(s)...
warning: using independent target a.c
LawMkHdr a.h
C:\tiny\hdr
Processing a.c to a.h
warning: using independent target b.c
LawMkHdr b.h
C:\tiny\hdr
Processing b.c to b.h
Cc a.obj
a.c
Cc b.obj
b.c
Archive broke.lib
Replacing a.obj
Replacing b.obj
b.obj : warning LNK4006: _IntB already defined in broke.lib(b.obj); second
definition ignored
a.obj : warning LNK4006: _IntA already defined in broke.lib(a.obj); second
definition ignored
...updated 5 target(s)...
output of "jam" after updating a.c
==================================
...found 17 target(s)...
...updating 5 target(s)...
warning: using independent target a.c
LawMkHdr a.h
C:\tiny\hdr
Processing a.c to a.h
warning: using independent target b.c
LawMkHdr b.h
C:\tiny\hdr
Processing b.c to b.h
Cc a.obj
a.c
Cc b.obj
b.c
Archive broke.lib
Replacing a.obj
Replacing b.obj
b.obj : warning LNK4006: _IntB already defined in broke.lib(b.obj); second
definition ignored
a.obj : warning LNK4006: _IntA already defined in broke.lib(a.obj); second
definition ignored
...updated 5 target(s)...
The generated .h files should _only_ depend on their corresponding .c
files. So after updating a.c, there should not have been a rebuild of
b.h. (The rebuild of a.obj and b.obj is okay.)
Date: Tue, 2 Dec 2003 00:05:09 -0800 (PST)
From: Ken Perry <whistler@blinksoft.com>
Subject: Bison, Flex, G++ project problem
To start out with, I have to admit I am pretty new at Jam. I have
simplified my build process for 4 source trees in the last couple of
days with Jam, but by no means am I an expert, and I haven't even had to
write any of my own rules yet. In truth, if I had to write my own rule I
might be lost, since I can't even find where the 2.5 RPM I used to
install jam hid my Jambase file.
So now to my question.
I have a compiler that has 14 .cpp files, 2 .y files, and 2 .l files. I
tried to use just a Main statement to do them all at once, but it failed
on the .y and .l files, and after looking at the makefile I can
understand why. The following is a shortened version of the makefile I
need to convert:
YFLAG = -d -y -v

vmcpar.cpp: vmcpar.y
	bison --debug $(YFLAG) vmcpar.y
	@-if [ -f y.output ]; then mv y.output ../platform/$(PLATFORM)/vmc/tmp_vmcpar.output; fi
	@mv y.tab.c ../platform/$(PLATFORM)/vmc/tmp_vmcpar.cpp
	@mv y.tab.h ../platform/$(PLATFORM)/vmc/tmp_vmcpar.h

vmclex.cpp: vmclex.l
	flex -t vmclex.l $(LEXFILTER) > ../platform/$(PLATFORM)/vmc/tmp_vmclex.cpp

dilpar.cpp: dilpar.y
	bison --debug -d -v -pdil dilpar.y
	@-if [ -f dilpar.output ]; then mv dilpar.output ../platform/$(PLATFORM)/vmc/tmp_dilpar.output; fi
	@mv dilpar.tab.c ../platform/$(PLATFORM)/vmc/tmp_dilpar.cpp
	@mv dilpar.tab.h ../platform/$(PLATFORM)/vmc/tmp_dilpar.h

dillex.cpp: dillex.l
	flex -Pdil -t dillex.l $(LEXFILTER) > ../platform/$(PLATFORM)/vmc/tmp_dillex.cpp
As you can see, both parsers are generated using different flags for the
bison and flex generators. What I want to know is how do I build these
parsers and compile and link them in with the other .cpp files in the
directory. Do I need to make multiple Jamfiles, or is there some way to
run bison and flex with one set of flags, create the resulting sources,
and then change the flags for the next parser I create? I also have to
include the resulting source files in the Main build, so do I do the
bison and flex stuff first in the file, or does it matter?
Furthermore, if you look at the lines where flex is run, you will see I
filter the output of flex by piping it into a perl script and then
redirecting that into a .cpp file. Can this be done in Jam? If it can, I
haven't found how.
Well, I am not sure if this question is clear enough, but if anyone is
willing to work with me I am even willing to make phone calls if
necessary. I am just banging my head against the Jam documents I have
found on the web and haven't come up with a fix.
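[For what it's worth, the usual Jam shape for this is a custom
rule/actions pair with the per-parser flags set as "on target" variables.
A sketch only -- the rule name and variables below are illustrative, not
from any stock Jambase:

rule Bison {
    # Bison parser.cpp : grammar.y ;
    Depends $(<) : $(>) ;
    SEARCH on $(>) = $(SEARCH_SOURCE) ;
    MakeLocate $(<) : $(LOCATE_SOURCE) ;
    Clean clean : $(<) ;
}

actions Bison {
    bison --debug $(BISONFLAGS) -o $(<) $(>)
}

BISONFLAGS on vmcpar.cpp = -d -y -v ;
Bison vmcpar.cpp : vmcpar.y ;

BISONFLAGS on dilpar.cpp = -d -v -pdil ;
Bison dilpar.cpp : dilpar.y ;

The generated .cpp files can then be listed as sources to Main alongside
the hand-written ones.]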
Subject: RE: Making different library versions
Date: Tue, 2 Dec 2003 10:29:43 +0100
From: "Turner David" <dturner@cptechno.com>
Sorry, I was wrong; this will not work, because the same source file is
used twice, and the Library rule will not distinguish between the two
distinct resulting objects.
Well, Jam's dynamic scoping can be very useful, but not in this case.
Sorry for the misunderstanding. I still maintain that it can be very
useful in many other cases, though...
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Date: Thu, 4 Dec 2003 19:13:09 +0100
Subject: blah.y depends on itself
I have a .y file which includes its own y.tab.h. Jam complains that the
.y file depends on itself, which seems wrong. I hacked around that by
setting HDRSCAN to null on the file in question.
But the deeper question remains: how to fix that properly?
The .y file doesn't depend on what it includes. The .c which is built
from that .y does, and I don't see any way to express that in Jam. Does
any of you?
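[In Jamfile terms the hack is a one-liner, shown here for the blah.y of
the thread's subject:

# Disable header scanning on the .y file itself, so the bogus
# self-dependency is never recorded; the generated .c is still scanned.
HDRSCAN on blah.y = ;]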
Date: Mon, 8 Dec 2003 12:15:53 +0900
From: Anthony Heading <aheading@jpmorgan.com>
Subject: Re: blah.y depends on itself
Yes, it's very annoying.
I think the relationships are expressed OK here; it's the warning that's
at fault.
The case that first bit me was running "cproto" to automatically
generate a C function-prototype file, which was then included at the top
of the source file. Same warning message.
So I think in your case the ".c" stage is not the issue. Yes, it exists,
but so does the next-stage ".o" file. Equivalently, if you imagined you
had a direct .y -> .o compiler, you would still hit the same problem.
Jam believes that if you #include generated code back into its source,
you have a circular build dependency.
And that's wrong. Clearly the .y file doesn't depend
on _anything_. It just _is_. It's source.
Why this is happening, as far as I can tell, is that Jam's "inclusion"
relationships are held in the same dependency graph as real
dependencies, with a flag that makes them invisible some of the time.
This is a clever implementation shortcut, but one consequence is that
the "loops" Jam detects are bogus.
So I "fixed" it by removing the warning from the Jam source. A proper
fix didn't seem trivial, and just how useful is this warning anyhow? If
one is confused enough to write a Jamfile with reversed file
dependencies, there are likely bigger problems than anything fixed by
simple loop detection.
Having done that, I'm now scratching my head about why so-called
"independent targets" are regarded as evil. Can anyone explain
why these justify a warning, or why the default Jambase file even
describes them as "deadly"? Otherwise, that might be the next
warning that I remove!
From: Paul Forgey <paulf@metainfo.com>
Subject: Re: blah.y depends on itself
Date: Sun, 7 Dec 2003 21:05:12 -0800
I have found them useful when writing rules that do a lot of a->b->c->d
type of stuff. On occasion we have intermediate programs which get built
to parse things or to generate generators -- the kind of stuff where Jam
really shines for readability and maintenance of a complex build
process.
This means that while there is an a->b->c->d chain, the actual
dependencies aren't that straightforward, since b would provide for more
things than c, and c for more than d, etc. When interpreted as a
sequence of steps, like what a sequential script would do, things would
seem to work out OK. However, you could (and with 'make' I frequently
did) run into situations where a, b or c gets updated and the rules
don't specify that they really all depend on each other; without
realizing why, you end up with an incremental build which isn't properly
updated.
The other big reason to have a dependency graph which properly connects
all the dangling pieces is for parallel builds (-j) to work properly.
So on more than one occasion for me this warning has saved me.
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Date: Mon, 8 Dec 2003 13:09:21 +0100
Subject: Re: blah.y depends on itself
I see the bug. If a depends on b, which includes c, which depends on a,
then the "except for includes" logic in make.c near line 275 breaks for
a, since the intervening include isn't visible.
I changed the code, but I'm not sure whether the change is sound... will
test it when I get back to my Jamfile. A better way might be to add
another argument to make0(), saying whether this part of the graph is
coloured by an include.
@@ -152,6 +152,8 @@
return status;
}
+static int including;
+
/*
* make0() - bind and scan everything to make a TARGET
*
@@ -272,10 +274,16 @@
/* Warn about circular deps, except for includes, */
/* which include each other alot. */
+ if ( internal )
+ including++;
+
if( c->target->fate == T_FATE_INIT )
make0( c->target, ptime, depth + 1, counts, anyhow );
- else if( c->target->fate == T_FATE_MAKING && !internal )
+ else if( c->target->fate == T_FATE_MAKING && !including )
printf( "warning: %s depends on itself\n", c->target->name );
+
+ if ( internal )
+ including--;
}
/* Step 3b: recursively make0() internal includes node */
Date: Mon, 8 Dec 2003 15:18:07 +0000
From: Matt Kern <matt.kern@undue.org>
Subject: NoUpdateDependents
I recently had an issue with building shared libraries. (For the
remainder of this email, I will use library to mean *shared* libraries.)
Basically, targets really need to depend on their libraries, or the
libraries might not be created. However, if targets do depend on
libraries, then whenever the libraries are rebuilt, so are the targets
that depend upon them. This is non-ideal, since in many cases the
libraries are linked with many programs, causing a whole cascade of
relinking. The only instance where you would wish to rebuild the targets
is when the library's ABI changes.
Invoking NOUPDATE on the libraries doesn't help, since the libraries
are never changed once first built. I have seen workarounds on the
list to deal with this problem, but what is really needed is a flag to
disconnect the library dependencies from the targets that use them.
Now, there may be such changes in the very latest version of jam (I am
running 2.5rc3 which the website claims is current), so what follows
may well duplicate work I haven't seen, but....
I have attached a short patch. By invoking the "NoUpdateDependents"
rule on a library, changes to that library will not be propagated up
the dependency tree. It works simply by skipping over dependencies if
both target and library exist (and the appropriate flag is set).
P.S. The debugging code in make.c can be safely removed.
--- jam-2.5rc3.orig/builtins.c 2003-04-23 05:45:50.000000000 +0100
+++ jam-2.5rc3/builtins.c 2003-12-07 11:33:05.000000000 +0000
@@ -108,6 +108,10 @@
bindrule( "NOUPDATE" )->procedure =
parse_make( builtin_flags, P0, P0, P0, C0, C0, T_FLAG_NOUPDATE );
+ bindrule( "NoUpdateDependents" )->procedure =
+ bindrule( "NOUPDATEDEPENDENTS" )->procedure =
+ parse_make( builtin_flags, P0, P0, P0, C0, C0, T_FLAG_NOUPDATEDEPENDENTS );
+
bindrule( "Temporary" )->procedure =
bindrule( "TEMPORARY" )->procedure =
parse_make( builtin_flags, P0, P0, P0, C0, C0, T_FLAG_TEMP );
--- jam-2.5rc3.orig/make.c 2003-04-23 05:45:51.000000000 +0100
+++ jam-2.5rc3/make.c 2003-12-08 14:55:46.000000000 +0000
@@ -305,6 +305,22 @@
for( c = t->depends; c; c = c->next ) {
+ /* NoUpdateDependents checks */
+ if (c->target->flags & T_FLAG_NOUPDATEDEPENDENTS &&
+ c->target->binding == T_BIND_EXISTS &&
+ t->binding == T_BIND_EXISTS) {
+ if (DEBUG_Depends)
+ printf( "Skipping \"%s\" : \"%s\" ;\n",
+ t->name, c->target->name );
+
+ continue;
+ } else {
+ if (DEBUG_Depends)
+ printf( "Not Skipping %d/%d/\"%s\" : %d/%d/\"%s\" ;\n",
+ t->binding, t->fate, t->name,
+ c->target->binding, c->target->fate, c->target->name );
+ }
+
/* If LEAVES has been applied, we only heed the timestamps of */
/* the leaf source nodes. */
--- jam-2.5rc3.orig/rules.h 2003-04-23 05:45:52.000000000 +0100
+++ jam-2.5rc3/rules.h 2003-12-07 11:30:50.000000000 +0000
@@ -113,7 +113,8 @@
# define T_FLAG_TOUCHED 0x08 /* ALWAYS applied or -t target */
# define T_FLAG_LEAVES 0x10 /* LEAVES applied */
# define T_FLAG_NOUPDATE 0x20 /* NOUPDATE applied */
-# define T_FLAG_INTERNAL 0x40 /* internal INCLUDES node */
+# define T_FLAG_NOUPDATEDEPENDENTS 0x40 /* NOUPDATEDEPENDENTS applied */
+# define T_FLAG_INTERNAL 0x80 /* internal INCLUDES node */
char binding; /* how target relates to real file */
Date: Mon, 8 Dec 2003 14:26:29 -0800 (PST)
From: Christopher Seiwald <seiwald@perforce.com>
Subject: Re: blah.y depends on itself
| I see the bug. If a depends on b, which includes c, which depends on a,
| then the "except for includes" logic in make.c near line 275 breaks for
| a, since the intervening includes isn't visible.
Actually, if you use jam -dd you'll see that there is indeed a circular
dependency listed. In this case:
Includes blah.y : blah.h ;
Depends blah.h : blah.y ;
That is to say, to generate anything from blah.y you need the text of
blah.h as well, yet blah.h depends on blah.y. Jam is correct in issuing
the warning.
What's wrong is the Yacc rule in the Jambase, which allows the header
scan operation to attribute blah.h to blah.y rather than to the
generated blah.c. The fact is, if blah.y has a #include in it, it is
blah.c that actually needs the "Includes" dependency.
Here's a Jambase diff:
*** /tmp/tmp.93249.0 Mon Dec 8 14:24:42 2003
--- /usr/big/seiwald/jam/Jambase Mon Dec 8 14:05:43 2003
***************
*** 37,42 ****
# 11/21/96 (peterk) - Support for BeOS
# 07/19/99 (sickel) - Support for Mac OS X Server (and maybe client)
# 02/18/00 (belmonte)- Support for Cygwin.
+ # 12/08/03 (seiwald) - New YaccHdr to attribute #includes to generated .c.
# Special targets defined in this file:
#
***************
*** 1445,1450 ****
# a deadly independent target
Includes $(<) : $(_h) ;
+
+ # Handle #includes in .y file
+
+ HDRRULE on $(>) = YaccHdr ;
+ }
+
+ rule YaccHdr
+ {
+ # YaccHdr .y : hdrs ;
+ # For yacc, includes are actually on the generated
+ # .c file, not on the source .y.
+
+ HdrRule $(<:S=$(YACCGEN)) : $(>) ;
}
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Re: blah.y depends on itself
Date: Tue, 9 Dec 2003 10:28:01 +0100
I saw that, of course, but I don't think that's a circular dependency.
If both were Includes, or both were Depends, it would be; if both were
Depends, it would also warrant a warning.
IMO, no. The Yacc rule is wrong, and I'm glad you fixed it, but the
condition for the warning is wrong too.
The warning is emitted for X if there is a circular chain of Includes
and Depends in which the dependency pointing to X is a Depends. If
nothing else, that condition is too obscure to be correct. The warning
should be emitted only if there is a circular chain consisting entirely
of Depends.
I'll give two cases where the warning occurs wrongly. One is modified
from my own experience, the other from Anthony Heading's recent posting.
1. Suppose you have a small library of .c files, with one public header
file which is built from the .c files. There is a script which looks at
the .c files, picks API functions and documentation out and writes a .h
file and some man pages.
In that case, the header file would certainly depend on the three .c
files through a custom PublicAPI rule, and a .c file would contain some
documentation source such as this:
/*! The Mumble library contains many easy-to-use functions to help you
mumble more effectively. To use them, simply add
#include <mumble.h>
and call the functions. Blah blah blah. */
Jam's header file detection would pick that up, of course, and there you
have it:
Depends mumble.h : a.c ; # from PublicAPI
Includes a.c : mumble.h ; # from cheap-and-cheerful HdrRule
But here, if a .c file is modified, the documentation and object code
must be rebuilt. If not, nothing need be done.
2. Suppose you're building a single executable from a single .c file,
and are using cproto. (I didn't like that use of cproto, but that's
neither here nor there.) Then the dependencies are these:
Depends foo : foo.o ; # from Main
Depends foo.o : foo.h ; # From Includes foo.c : foo.h ;
Depends foo.o : foo.c ; # From Object
Depends foo.h : foo.h : # from cproto rule
Includes foo.c : foo.h ; # observed in foo.c
If you look closely at this, there's nothing that can break. If foo.c is
modified or foo.h is removed, cproto must be run and then cc. Else,
nothing need be done.
In both cases, if there actually were a dependency circle, jam should be
chasing its own tail, unable to pick a single, correct order of
actions. But there is no such doubt - all is lucid. Therefore, jam's
warning about a circular dependency is wrong.
If jam were to avoid the includes (as the code in make0() seems to try
to), then it would not give the incorrect warning, but simply build
correctly in both of these cases.
Date: Tue, 9 Dec 2003 08:39:23 -0800 (PST)
Subject: Re: Re: blah.y depends on itself
From: "Christopher Seiwald" <seiwald@perforce.com>
Hmmm. I don't think the condition is too obscure. Certainly, if you need
a .c file to generate a .h file, and the .c file includes the .h file,
you've got a circular dependency. This was the problem with the Yacc
rule: it said that to make the .c/.h files you needed the .y file, but
the .y file appeared to include the .h file. How is anything (other than
that Yacc rule) to know that a .y file #including a .h file means
something other than that the .h file is necessary to proceed with the
.y file?
If you're trying to confuse the cheap-and-cheerful HdrRule, putting a
#include in comments is the way to go.
(I think you mean "foo.h : foo.c" here.)
If you're generating two different things from the same .c file -- both
the .o and a .h -- then the stock HdrRule isn't enough: it simply
assumes that everything generated from the .c file will require the
contents of the .h file. If you modify the .h, the dependencies you have
listed will require the .h to be regenerated. I'm not sure I'd call that
broken, but it is hardly ideal.
In this last case, you could have a special HdrRule for cproto files
that knows the .h is generated from everything in the .c file except the
included .h. Or perhaps jam could use an "ignore this tangled web"
target modifier for such situations.
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Re: blah.y depends on itself
Date: Tue, 9 Dec 2003 18:26:49 +0100
Everything _generated_from_ the .c file requires the contents of the .h
file. I like that; it's the right way to view includes. The .c itself
doesn't depend on the .h, it merely includes it.
Taking a step back: if the .c file cannot and should not be generated,
how can it ever be involved in a dependency loop, for any useful
definition of "dependency loop"? No -- jam only needs to limit its
dependency-loop detection to files it can possibly generate.
Date: Wed, 10 Dec 2003 09:43:15 -0800
From: "Srinivasan Murari" <smurari@paramanet.com>
Subject: RE: [BUG] jam 2.5rc3 and parallel builds
I have found the source of both of these problems and have applied fixes
which seem to work. The two problems are actually independent of each
other; the commonality is that they show up only under -j2 or higher.
Details below.
There is a problem with jam not being able to express complete dependencies
when there is a circularity in the include files.
The following change needs to be applied to the function "make0".
    /* Step 3c: add dependents' includes to our direct dependencies */
    incs = 0;
    for( c = t->depends; c; c = c->next )
        if( c->target->includes )
            incs = targetentry( incs, c->target->includes );
    t->depends = targetchain( t->depends, incs );
needs to be changed to
    /* Step 3c: add dependents' includes (flattened) to our direct dependencies */
    incs = 0;
    for( c = t->depends; c; c = c->next )
        if( c->target->includes )
            for( n = c->target->includes->depends; n; n = n->next ) {
                if( t != n->target ) {
                    incs = targetentry( incs, n->target );
                }
            }
    t->depends = targetchain( t->depends, incs );
Jam built with this fix passes both of Matt's tests.
The problem is exactly as Chris describes. The fix is to determine if an action
is already running on behalf of another target and if so to bail out of make1b()
prior to calling make1cmds() by adding more parents to the in-progress target
and incrementing the asynccnt of the new target.
In rules.h, change the definition of action with the addition of the
"run_tgt" member:
/* ACTION - a RULE instance with targets and sources */
struct _action {
    RULE    *rule;
    TARGETS *targets;
    TARGETS *sources;   /* aka $(>) */
    TARGET  *run_tgt;   /* Target on whose behalf action is being run */
    char    running;    /* has been started */
    char    status;     /* see TARGET status */
} ;
The tail of make1b changes from
    case T_FATE_TOUCHED:
    case T_FATE_MISSING:
    case T_FATE_NEEDTMP:
    case T_FATE_OUTDATED:
    case T_FATE_UPDATE:
        /* Set "on target" vars, build actions, unset vars */
        /* Set "progress" so that make1c() counts this target among */
        /* the successes/failures. */
        if( t->actions ) {
            ++counts->total;
            if( DEBUG_MAKE && !( counts->total % 100 ) )
                printf( "...on %dth target...\n", counts->total );
            pushsettings( t->settings );
            t->cmds = (char *)make1cmds( t->actions );
            popsettings( t->settings );
            t->progress = T_MAKE_RUNNING;
        }
        break;
    }

    /* Call make1c() to begin the execution of the chain of commands */
    /* needed to build target. If we're not going to build target */
    /* (because of dependency failures or because no commands need to */
    /* be run) the chain will be empty and make1c() will directly */
    /* signal the completion of target. */
    make1c( t );
}
to
    case T_FATE_TOUCHED:
    case T_FATE_MISSING:
    case T_FATE_NEEDTMP:
    case T_FATE_OUTDATED:
    case T_FATE_UPDATE:
        /* Set "on target" vars, build actions, unset vars */
        /* Set "progress" so that make1c() counts this target among */
        /* the successes/failures. */
        if( t->actions ) {
            make1wait( t );
            if( t->asynccnt != 0 ) { return; }
            ++counts->total;
            if( DEBUG_MAKE && !( counts->total % 100 ) )
                printf( "...on %dth target...\n", counts->total );
            pushsettings( t->settings );
            t->cmds = (char *)make1cmds( t->actions, t );
            popsettings( t->settings );
            t->progress = T_MAKE_RUNNING;
        }
        break;
    }

    /* Call make1c() to begin the execution of the chain of commands */
    /* needed to build target. If we're not going to build target */
    /* (because of dependency failures or because no commands need to */
    /* be run) the chain will be empty and make1c() will directly */
    /* signal the completion of target. */
    make1c( t );
}
with the following definition for make1wait().
static void
make1wait( TARGET *t ) {
    ACTIONS *a0;
    for( a0 = t->actions; a0; a0 = a0->next ) {
        if( a0->action->running && a0->action->run_tgt->progress != T_MAKE_DONE ) {
            a0->action->run_tgt->parents = targetentry( a0->action->run_tgt->parents, t );
            t->asynccnt++;
        }
    }
}
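In outline, make1wait() defers a target whose actions are already running on behalf of someone else. Here is a Python sketch of that logic using an illustrative dict-based data model (not jam's real structs; field names mirror the patch):

```python
# Sketch of the make1wait() idea: before building commands for a target,
# see whether any of its actions is already running on behalf of another,
# unfinished target; if so, register this target as a parent of that one
# and defer. (Illustrative data model, not jam's real structs.)

def make1wait(target, actions):
    """Count this target's actions that are busy elsewhere; register
    'target' as a parent of each busy target so it is retried later."""
    deferred = 0
    for a in actions:
        if a["running"] and a["run_tgt"]["progress"] != "DONE":
            # wake this target up when the in-progress one completes
            a["run_tgt"]["parents"].append(target)
            deferred += 1
    target["asynccnt"] = target.get("asynccnt", 0) + deferred
    return deferred
```

A caller would then return early when the target's asynccnt is nonzero, exactly as the patched make1b() does.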
and make1cmds() changes from
static CMD *
make1cmds( ACTIONS *a0 ) {
CMD *cmds = 0;
LIST *shell = var_get( "JAMSHELL" ); /* shell is per-target */
/* Step through actions */
/* Actions may be shared with other targets or grouped with */
/* RULE_TOGETHER, so actions already seen are skipped. */
for( ; a0; a0 = a0->next ) {
RULE *rule = a0->action->rule;
SETTINGS *boundvars;
LIST *nt, *ns;
ACTIONS *a1;
int start, chunk, length, maxline;
/* Only do rules with commands to execute. */
/* If this action has already been executed, use saved status */
if( !rule->actions || a0->action->running ) continue;
a0->action->running = 1;
/* Make LISTS of targets and sources */
/* If `execute together` has been specified for this rule, tack */
/* on sources from each instance of this rule for this target. */
nt = make1list( L0, a0->action->targets, 0 );
ns = make1list( L0, a0->action->sources, rule->flags );
if( rule->flags & RULE_TOGETHER )
for( a1 = a0->next; a1; a1 = a1->next )
if( a1->action->rule == rule && !a1->action->running ) {
ns = make1list( ns, a1->action->sources, rule->flags );
a1->action->running = 1;
}
/* If doing only updated (or existing) sources, but none have */
/* been updated (or exist), skip this action. */
if( !ns && ( rule->flags & ( RULE_UPDATED | RULE_EXISTING ) ) ) {
list_free( nt );
continue;
}
/* If we had 'actions xxx bind vars' we bind the vars now */
boundvars = make1settings( rule->bindlist );
pushsettings( boundvars );
/*
* Build command, starting with all source args.
*
* If cmd_new returns 0, it's because the resulting command
* length is > MAXLINE. In this case, we'll slowly reduce
* the number of source arguments presented until it does
* fit. This only applies to actions that allow PIECEMEAL
* commands.
*
* While reducing slowly takes a bit of compute time to get
* things just right, it's worth it to get as close to MAXLINE
* as possible, because launching the commands we're executing
* is likely to be much more compute intensive!
*
* Note we loop through at least once, for sourceless actions.
*
* Max line length is the action specific maxline or, if not
* given or bigger than MAXLINE, MAXLINE.
*/
start = 0;
chunk = length = list_length( ns );
maxline = rule->flags / RULE_MAXLINE;
maxline = maxline && maxline < MAXLINE ? maxline : MAXLINE;
do {
/* Build cmd: cmd_new consumes its lists. */
CMD *cmd = cmd_new( rule,
list_copy( L0, nt ),
list_sublist( ns, start, chunk ),
list_copy( L0, shell ),
maxline );
if( cmd ) {
/* It fit: chain it up. */
if( !cmds ) cmds = cmd;
else cmds->tail->next = cmd;
cmds->tail = cmd;
start += chunk;
} else if( ( rule->flags & RULE_PIECEMEAL ) && chunk > 1 ) {
/* Reduce chunk size slowly. */
chunk = chunk * 9 / 10;
} else {
/* Too long and not splittable. */
printf( "%s actions too long for %s (max %d)!\n",
rule->name, nt->string, maxline );
exit( EXITBAD );
}
}
while( start < length );
/* These were always copied when used. */
list_free( nt );
list_free( ns );
/* Free the variables whose values were bound by */
/* 'actions xxx bind vars' */
popsettings( boundvars );
freesettings( boundvars );
}
return cmds;
}
to
static CMD *
make1cmds( ACTIONS *a0, TARGET *t ) {
CMD *cmds = 0;
LIST *shell = var_get( "JAMSHELL" ); /* shell is per-target */
/* Step through actions */
/* Actions may be shared with other targets or grouped with */
/* RULE_TOGETHER, so actions already seen are skipped. */
for( ; a0; a0 = a0->next ) {
RULE *rule = a0->action->rule;
SETTINGS *boundvars;
LIST *nt, *ns;
ACTIONS *a1;
int start, chunk, length, maxline;
/* Only do rules with commands to execute. */
/* If this action has already been executed, use saved status */
if( !rule->actions || a0->action->running ) continue;
a0->action->running = 1;
a0->action->run_tgt = t;
/* Make LISTS of targets and sources */
/* If `execute together` has been specified for this rule, tack */
/* on sources from each instance of this rule for this target. */
nt = make1list( L0, a0->action->targets, 0 );
ns = make1list( L0, a0->action->sources, rule->flags );
if( rule->flags & RULE_TOGETHER )
for( a1 = a0->next; a1; a1 = a1->next )
if( a1->action->rule == rule && !a1->action->running )
{
ns = make1list( ns, a1->action->sources, rule->flags );
a1->action->running = 1;
a1->action->run_tgt = t;
}
/* If doing only updated (or existing) sources, but none have */
/* been updated (or exist), skip this action. */
if( !ns && ( rule->flags & ( RULE_UPDATED | RULE_EXISTING ) ) ) {
list_free( nt );
continue;
}
/* If we had 'actions xxx bind vars' we bind the vars now */
boundvars = make1settings( rule->bindlist );
pushsettings( boundvars );
/*
* Build command, starting with all source args.
*
* If cmd_new returns 0, it's because the resulting command
* length is > MAXLINE. In this case, we'll slowly reduce
* the number of source arguments presented until it does
* fit. This only applies to actions that allow PIECEMEAL
* commands.
*
* While reducing slowly takes a bit of compute time to get
* things just right, it's worth it to get as close to MAXLINE
* as possible, because launching the commands we're executing
* is likely to be much more compute intensive!
*
* Note we loop through at least once, for sourceless actions.
*
* Max line length is the action specific maxline or, if not
* given or bigger than MAXLINE, MAXLINE.
*/
start = 0;
chunk = length = list_length( ns );
maxline = rule->flags / RULE_MAXLINE;
maxline = maxline && maxline < MAXLINE ? maxline : MAXLINE;
do {
/* Build cmd: cmd_new consumes its lists. */
CMD *cmd = cmd_new( rule,
list_copy( L0, nt ),
list_sublist( ns, start, chunk ),
list_copy( L0, shell ),
maxline );
if( cmd ) {
/* It fit: chain it up. */
if( !cmds ) cmds = cmd;
else cmds->tail->next = cmd;
cmds->tail = cmd;
start += chunk;
} else if( ( rule->flags & RULE_PIECEMEAL ) && chunk > 1 ) {
/* Reduce chunk size slowly. */
chunk = chunk * 9 / 10;
} else {
/* Too long and not splittable. */
printf( "%s actions too long for %s (max %d)!\n",
rule->name, nt->string, maxline );
exit( EXITBAD );
}
}
while( start < length );
/* These were always copied when used. */
list_free( nt );
list_free( ns );
/* Free the variables whose values were bound by */
/* 'actions xxx bind vars' */
popsettings( boundvars );
freesettings( boundvars );
}
return cmds;
}
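Apart from threading the target through to set run_tgt, the PIECEMEAL chunking loop is unchanged. Its strategy can be sketched in Python (illustrative names; the fits() callback stands in for cmd_new() succeeding within maxline):

```python
# Sketch of make1cmds()'s PIECEMEAL chunking: try to emit the sources in
# one command; whenever a run is too long, shrink the run size by 9/10
# and retry. (Illustrative; fits() stands in for cmd_new() != 0.)

def chunk_sources(sources, fits):
    """Partition 'sources' into consecutive runs that each fit.
    Runs at least once, for sourceless actions (do/while)."""
    cmds = []
    start = 0
    chunk = length = len(sources)
    while True:
        run = sources[start:start + chunk]
        if fits(run):
            cmds.append(run)          # it fit: chain it up
            start += chunk
        elif chunk > 1:
            chunk = chunk * 9 // 10   # reduce chunk size slowly
        else:
            raise RuntimeError("actions too long and not splittable")
        if start >= length:           # while( start < length )
            break
    return cmds
```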
Date: Wed, 10 Dec 2003 16:16:25 -0800
From: "Srinivasan Murari" <smurari@paramanet.com>
Subject: RE: [BUG] jam 2.5rc3 and parallel builds
To resummarize, Jam assumes that you have a directed acyclic dependency graph.
Unfortunately circular includes cause the introduction of cycles in the graph.
The previously posted fix is a sledgehammer fix which works but is not in
keeping with Jam's philosophy of small footprint and fast execution.
This solution attempts to detect and break the cycles in the graphs.
This change is somewhat hairy, but after thinking about it for a while I have
convinced myself it is right.
Steps
=====
1. Add two integer fields called "depth" and "epoch" to the TARGET structure in rules.h.
2. Change make() to look as follows.
int
make(
    int n_targets,
    const char **targets,
    int anyhow ) {
    int i;
    COUNTS counts[1];
    int status = 0;     /* 1 if anything fails */

    memset( (char *)counts, 0, sizeof( *counts ) );

    for( i = 0; i < n_targets; i++ ) {
        TARGET *t = bindtarget( targets[i] );
        make0( t, 0, i, 0, counts, anyhow );
    }
3. Change the beginning of make0() to look like
/*
 * make0() - bind and scan everything to make a TARGET
 *
 * Make0() recursively binds a target, searches for #included headers,
 * calls itself on those headers, and calls itself on any dependents.
 */
static void
make0(
    TARGET  *t,
    TARGET  *p,        /* parent */
    int     epoch,     /* Top level invocation number for make0 */
    int     depth,     /* for display purposes */
    COUNTS  *counts,   /* for reporting */
    int     anyhow )   /* forcibly touch all (real) targets */
{
    TARGETS *c, *incs, *n;
    TARGET *ptime = t;
    time_t last, leaf, hlast;
    int fate;
    const char *flag = "";
    SETTINGS *s;

    /*
     * Step 1: initialize
     */
    if( DEBUG_MAKEPROG )
        printf( "make\t--\t%s%s\n", spaces( depth ), t->name );

    /*
     * Assign epoch and depth when the node is seen for the very first time
     */
    if( t->fate == T_FATE_INIT )
    {
        t->epoch = epoch;
        t->depth = depth;
    }

    t->fate = T_FATE_MAKING;
4. Change Step 3 of make0() to look like
    /*
     * Step 3: recursively make0() dependents & headers
     */

    /* Step 3a: recursively make0() dependents */
    for( c = t->depends; c; c = c->next ) {
        int internal = t->flags & T_FLAG_INTERNAL;

        if( DEBUG_DEPENDS )
            printf( "%s \"%s\" : \"%s\" ;\n",
                internal ? "Includes" : "Depends",
                t->name, c->target->name );

        /* Warn about circular deps, except for includes, */
        /* which include each other alot. */
        if( c->target->fate == T_FATE_INIT )
            make0( c->target, ptime, epoch, depth + 1, counts, anyhow );
        else if( c->target->fate == T_FATE_MAKING && !internal )
            printf( "warning: %s depends on itself\n", c->target->name );
    }

    /* Step 3b: recursively make0() internal includes node */
    if( t->includes )
        make0( t->includes, p, epoch, depth + 1, counts, anyhow );

    /* Step 3c: add dependents' includes to our direct dependencies */
    incs = 0;
    for( c = t->depends; c; c = c->next )
        if( c->target->includes ) {
            if( c->target->includes->epoch == epoch && c->target->includes->depth <= depth ) {
                /*
                 * Found a loop in the graph; break it by flattening the dependencies
                 */
                for( n = c->target->includes->depends; n; n = n->next ) {
                    if( t != n->target ) {
                        incs = targetentry( incs, n->target );
                        if( n->target->fate == T_FATE_INIT ) {
                            /*
                             * Found a never-visited dependent node; visit it before picking up fate and time.
                             */
                            make0( n->target, c->target, epoch, c->target->includes->depth + 1, counts, anyhow );
                        }
                    }
                }
            } else {
                incs = targetentry( incs, c->target->includes );
            }
        }
    t->depends = targetchain( t->depends, incs );
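The T_FATE_MAKING marking that drives the "depends on itself" warning is ordinary depth-first cycle detection. A minimal Python analogue (illustrative; it ignores jam's header-include merging and epoch/depth bookkeeping):

```python
# Minimal DFS analogue of jam's fate marking: a node is MAKING while its
# dependencies are being visited, so meeting a MAKING node again means
# we have closed a cycle.
INIT, MAKING, DONE = 0, 1, 2

def find_cycle_edges(graph):
    """graph maps a target to its dependency list; returns the
    (parent, child) edges that close a cycle."""
    state = {}
    cycles = []
    def visit(t):
        state[t] = MAKING
        for d in graph.get(t, ()):
            s = state.get(d, INIT)
            if s == INIT:
                visit(d)
            elif s == MAKING:
                cycles.append((t, d))   # back edge: a dependency loop
        state[t] = DONE
    for t in graph:
        if state.get(t, INIT) == INIT:
            visit(t)
    return cycles
```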
Date: Wed, 7 Jan 2004 13:58:52 -0700
From: "Wallace, Richard" <Richard.Wallace@specastro.com>
Subject: Unit tests
I'm happy to say that our build system is now running with jam and life is good.
Now I have to work in unit tests for our C++ classes. We haven't yet decided
if we're going to go with a unit testing framework like CppUnit or something
homegrown yet.
In the simplest case we'll just have a ClassTest.cpp for every Class.cpp which
contains a main and instantiates an instance and does a bunch of asserts.
With that in mind I came up with some rules that should make things easier.
# in Jamrules
NotFile test ;
Depends test : first ;
# called as Test TestExecutable : TestSource.cpp : <list of libs>
rule Test {
    local _testbin = $(1) ;
    local _testsrc = [ FGristSourceFiles $(2) ] ;
    Main $(_testbin) : $(2) ;
    LinkLibraries $(_testbin) : $(3) ;
    LinkSystemLibraries $(_testbin) ;
    RunTest $(_testbin) ;
}
rule RunTest {
    local _bin = [ FAppendSuffix $(1) : $(SUFEXE) ] ;
    Depends test : $(_bin) ;
    TESTEXE on $(<) = $(SUBDIR)$(SLASH)$(_bin) ;
}
actions RunTest { $(TESTEXE) }
This mostly works. Any changes to Class.h or Class.cpp cause the ClassTest.cpp
to be recompiled and relinked. The only thing that is happening that I don't
really want is that the tests are run EVERY time the test binary is rebuilt.
I would like them to run only when the test target is requested (i.e. 'jam test').
Something else this is missing is a way to collect the number of tests being
run and the number of failures, where a failure is indicated by a non-zero
return value of one of the tests.
Anyone have any ideas how to accomplish these things? I'm also interested in
anyone else's opinions and approaches to doing unit tests and integrating them
with jam.
From: Paul Forgey <paulf@metainfo.com>
Subject: Re: Unit tests
Date: Wed, 7 Jan 2004 17:51:39 -0800
I did something similar myself, and your solution is very close to
mine. I think the problem is your dependency on 'first' for 'test'.
Try 'Always test ;' instead.
Then hopefully when you do 'jam test', the dependency tree will work
itself out correctly to build the objects being tested and their test
drivers, and otherwise the driver won't get called upon for a normal
'all' build.
I find the output of -dd -a -n (include dependencies in the debug output,
pretend everything needs updating, don't really do anything) extremely
helpful for diagnosing these problems, as it effectively dumps your
dependency tree.
Subject: Re: Unit tests
From: David Abrahams <david.abrahams@rcn.com>
Date: Wed, 07 Jan 2004 22:06:53 -0500
Boost has a complete build/test system built around Boost.Jam, with
results that are postprocessed into tables showing regressions, newly
passing tests, etc. See
http://www.meta-comm.com/engineering/index.html and
http://boost.sourceforge.net/regression-logs/ for examples.
Subject: RE: Unit tests
Date: Thu, 8 Jan 2004 14:21:35 -0700
From: "Wallace, Richard" <Richard.Wallace@specastro.com>
I took the dependency on first out and made some other modifications.
Now the tests are only built, linked and run when 'jam test' is run.
I copied and pasted the Main, MainFromObjects and Objects rules as TestMain,
etc. and changed any 'Depends x ;' lines to 'Depends testx ;' to make
dependencies work out.
I only have one issue left. When the RunTest action fails it removes the
target, which is the executable. I originally had the target being some
output file, but it was decided that all output should go to the console.
So, rather than have jam display
RunTest SomeTest.out
I made the target the exe so it displays
RunTest SomeTest
Like I said, the problem is that if SomeTest returns a non-zero exit code
(indicating execution failed), jam removes the executable.
Is there a way to side step this? I'd like to have jam display
'RunTest SomeTest' for consistency but if I must do it the other way I can.
Can I somehow make the target SomeTest.out and still have jam display
'RunTest SomeTest'? Is there a way to turn off this behaviour temporarily?
From: "Alan Baljeu" <alanb@cornerstonemold.com>
Subject: Re: Unit tests
Date: Thu, 8 Jan 2004 17:03:59 -0500
Make SomeTest a parameter, not the target, of the RunTest action.
Put an echo at the beginning of the action to tell what test you are running.
Subject: RE: Unit tests
Date: Thu, 8 Jan 2004 15:25:38 -0700
From: "Wallace, Richard" <Richard.Wallace@specastro.com>
Ah, and make the action run with the 'quietly' modifier. Good idea!
Ok, now for what will hopefully be the last question: When a test fails and
a non-zero result is returned, jam dumps the commands that were executed.
So, I wind up with output like:
echo RunTest SomeTest
./SomeTest && touch ./SomeTest.done
along with the messages SomeTest printed to stdout and stderr.
Is there a way to suppress this command dump?
From: <boga@mac.com>
Date: Mon, 12 Jan 2004 16:33:43 +0100
Subject: Using a "(Test)" as a directory name?!
I'm unable to create a directory named "(test)".
The following program's output is also wrong:
actions Test { echo "$(<)" }
target = "./Test/As/(Hello)" ;
Depends all : $(target) ;
The output is:
>Test ./Test/As(Hello)
>./Test/As(Hello)
^^^^^^^^^^^^^^- the "/" disappeared!
The (Hello) part seems to be interpreted as an archive name.
Is there any suggested workaround for this problem?!
From: Mark Beall <mbeall2@simmetrix.com>
Date: Sun, 18 Jan 2004 14:40:45 -0500
Subject: new to jam, a few questions
I'm trying out jam and have run into a few questions. I did search
through the archives of this list and found answers to many of my
questions, but not these:
1. How does jam set the OS variable and can I control that? We need to
build with multiple compilers on the same os and I need to figure out a
environment variable).
2. How do a get jam to put object files and libraries in a different
location than the directory where the Jamfile is? Having the object
files all go in the same directory makes it rather problematic when
building on multiple platforms at once.
From: Mark Beall <mbeall2@simmetrix.com>
Subject: Re: new to jam, a few questions
Date: Sun, 18 Jan 2004 19:36:00 -0500
I found LOCATE_TARGET and ultimately answered my second question. There
is a little weirdness with the default Jambase in that if your sources
aren't in . then the rules there won't make sub-directories of your
LOCATE_TARGET directory, but will try to compile them to that
sub-directory. Didn't matter since I didn't want that anyhow.
I gather from the source that the OS variable is just compiled in, so I
guess I'll have to write a Jambase file that uses something else. Might
be nice to read that from an env variable (JAM_OS?) so that it gives a
way to tweak things rather than rewriting them.
So you all can ignore the first two questions, but one more:
3. Does anyone know if jam builds/works on Windows XP 64 bit?
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: new to jam, a few questions
Date: Mon, 19 Jan 2004 06:58:46 +0100
Jam is almost a "hello world" as far as platform dependencies go, so you
may confidently assume that it does.
From: Bob Cook <Bob.Cook@Creo.com>
Subject: RE: new to jam, a few questions
Date: Mon, 19 Jan 2004 08:57:48 -0800
For selecting platform compilers (and various compiler options) we set up a
few variables that are passed down from the jam command line via the -s
option. Our root Jamrules has some reasonable defaults set with the '?=' assignment.
For example, the command 'jam -sCORE_BUILD_COMPILER=MSVC7' will set the Jam
variable 'CORE_BUILD_COMPILER' equal to the string 'MSVC7' which our scripts
will detect internally and use to decide which rules to invoke. We pretty
much rewrote all of the stock Jambase rules for compilation to get this to
work seamlessly across multiple platforms and multiple compilers.
We have something like a dozen different knobs that can be adjusted using
these -s type options. Developers tend to make scripts on their local
machine to invoke Jam with the long string of -s options to create the type
of build they are targeting for a particular product.
Date: Mon, 19 Jan 2004 22:39:46 +0100 (CET)
From: Matze Braun <matze@braunis.de>
Subject: Announcing autojam packages
I just decided to release the Jamrules I wrote for the CrystalSpace and
NetPanzer projects as a separate package. It's basically a set of rules
replacing Jambase with more powerful rules, loosely designed after the
features that automake provides, together with some helpers for using
autoconf with jam.
You can find the stuff here:
http://developer.berlios.de/projects/autojam
Subject: Re: Announcing autojam packages
From: Paul_Donovan@scee.net
Date: Tue, 20 Jan 2004 10:59:56 +0000
Your package looks very interesting - thanks for creating it.
Unfortunately, the working example doesn't seem to work for me.
campd@p_donovan /cygdrive/c/tools/autojam-2004-01-19/example
$ ls
Jamfile* Jamrules* README* autogen.sh* configure.ac* src/
campd@p_donovan /cygdrive/c/tools/autojam-2004-01-19/example
$ less README
This is an example build which utilizes autojam.
campd@p_donovan /cygdrive/c/tools/autojam-2004-01-19/example
$ ./autogen.sh
aclocal: couldn't open directory `mk/autoconf': No such file or directory
configure.ac:65: error: possibly undefined macro: AM_PATH_SDL
If this token and others are legitimate, please use m4_pattern_allow.
See the Autoconf documentation.
configure.ac:68: error: possibly undefined macro: AC_INIT_JAM
If I change autogen.sh so that MACRODIR=../mk/autoconf I get:
campd@p_donovan /cygdrive/c/tools/autojam-2004-01-19/example
$ ./autogen.sh
aclocal: configure.ac: 65: macro `AM_PATH_SDL' not found in library
campd@p_donovan /cygdrive/c/tools/autojam-2004-01-19/example
$ ./configure
configure: error: cannot find install-sh or install.sh in mk/autoconf
./mk/autoconf
Is there some other stage I should have done, or am I misunderstanding
exactly how the example is supposed to be used?
Some more information in the README would be helpful :-)
Date: Tue, 20 Jan 2004 12:44:27 +0100 (CET)
From: Matze Braun <matze@braunis.de>
Subject: Re: Announcing autojam packages
Well, here the original package contained a symlink: the mk directory was
linked to ../mk. It seems that cygwin doesn't support this. You should be
able to fix this easily by copying the mk dir from the main directory into
the example dir.
Oops, I forgot to include sdl.m4 in my package. I wanted to demonstrate how
to use existing autoconf macros for external libraries. You might work
around this for now by deleting the line with AM_PATH_SDL from the
configure.ac file. The install-sh error should be fixed once you copy
the mk dir.
Yes, as you might have noticed, the package was put together in a hurry. I'll
try to release a new version this week which fixes your problems, and I'll
try to improve the README.
Subject: Re: Announcing autojam packages
From: Paul_Donovan@scee.net
Date: Tue, 20 Jan 2004 11:53:18 +0000
OK, that was my fault. I expanded the archive with WinZip, not cygwin's
tar. Extracting again with the latter created the symlink fine.
Yep, that worked fine.
Ok, so the example's built. Now I'll have to work out how it all works :-)
Date: Wed, 21 Jan 2004 16:46:36 +0900
From: Darren Cook <darren@dcook.org>
Subject: Calling a custom compiler
From a jamfile I'd like to run one of my own utility programs to generate a
data file from all files with a certain extension in a certain directory.
I've tried each of the below [1][2][3] and all fail (usually by saying it
doesn't know how to make "tactest_positions.19.dat").
(../tests is the directory the Jamfile is in).
The data file produced is then one of the dependencies for certain unit
tests. The below excerpts are from rule Test { ... }
local ftest = tactest_positions.19.dat ;
#Depends $(ftest) : ../unit_tests/sgf/19.tac/*.sgf ;
switch $(fout) {
case *tacsearch* : Depends $(fout) : $(fexe) $(ftest) ;
case * : Depends $(fout) : $(fexe) ;
}
The uncommented lines work (as long as $(ftest) exists), but if I uncomment
the Depends line I get:
"don't know how to make ../unit_tests/sgf/19.tac/*.sgf"
(BTW, in the real Jamfile the "19" is actually ${2} - if that makes a
difference let me know).
I suspect my two problems are related and I'm missing some fundamental
understanding. Could someone kindly point me to the correct tree to bark up?
[1]:
rule tactest_positions.19.dat CompileTacFile19 ;
actions CompileTacFile19 {
../progs/tactests_compile/19.tactests_compile.exe
../tests/tactest_positions.19.dat ../unit_tests/sgf/19.tac/*.sgf
}
[2] (also tried without quotes)
GenFile tactest_positions.19.dat :
"../progs/tactests_compile/19.tactests_compile.exe
../tests/tactest_positions.19.dat ../unit_tests/sgf/19.tac/*.sgf" ;
[3] (also tried without quotes)
Depends tactest_positions.19.dat :
"../progs/tactests_compile/19.tactests_compile.exe
../tests/tactest_positions.19.dat ../unit_tests/sgf/19.tac/*.sgf" ;
From: "Alan Baljeu" <alanb@cornerstonemold.com>
Subject: Re: Calling a custom compiler
Date: Wed, 21 Jan 2004 08:17:07 -0500
I see braces in ${2}. You should be using parentheses. $(2).
From: Paul Forgey <paulf@metainfo.com>
Subject: Re: new to jam, a few questions
Date: Wed, 21 Jan 2004 14:37:11 -0800
Well, I did this in a way that works with multiple Jamfiles across an
entire project by putting this in my $(TOP)/Jamrules. (There's
something about this that just feels wrong, and if somebody has a more
elegant way to do it I'd love to see it)
# place all intermediate and binary files in a subdirectory of where they
# would go normally named after the configuration
#
rule __ObjectsInConfig {
    LOCATE_TARGET = [ FDirName $(LOCATE_TARGET) $(CONFIG) ] ;
    LOCATE_SOURCE = $(LOCATE_TARGET) ;
}
SUBDIRRULES += __ObjectsInConfig ;
Where $(CONFIG) is the name of the intermediate output directory. This
doesn't gather all output project-wide into the same $(CONFIG) directory,
which is why it is hooked in via the SubDir rule.
From: "Alen Ladavac" <alenl-ml@croteam.com>
Date: Thu, 22 Jan 2004 15:37:20 -0000
Subject: Parallel builds with multiple hosts on Windows
There is an example in Jam.htm explaining how Jam can do multi-host parallel
builds, by setting JAMSHELL to some kind of remote shell tool.
"Jam does not directly support building in parallel across multiple hosts,
since that is heavily dependent on the local environment. To build in
parallel across multiple hosts, you need to write your own shell that
provides access to the multiple hosts. You then reset $(JAMSHELL) to
reference it."
Has anyone tried using something like that on Windows, or does anyone know
of any kind of remote shell for Windows that might be able to do this? I was
looking into Incredibuild (www.incredibuild.com) lately, and it provides
facilities like that for MSVS 6.0 and .NET, but not for command line
compilation. Several Rsh implementations for Windows are available, but I am
not sure whether Rsh is the right tool for the job.
Any pointers are greatly appreciated.
Subject: Re: Parallel builds with multiple hosts on Windows
From: Paul_Donovan@scee.net
Date: Thu, 22 Jan 2004 14:47:39 +0000
Check out distcc (distcc.samba.org). It's not a remote shell, but it'll do
the equivalent job very well.
There's a cygwin package for it, and someone on the distcc mailing list has
done a protocol compatible native Windows version (although it may not be
public yet).
It is possible to drive a non-cygwin gcc from distcc, but you might need to
make a tiny mod to the code (I used it with SN Systems' ee-gcc). Check the
distcc archives for my postings.
From: "Alen Ladavac" <alenl-ml@croteam.com>
Subject: Re: Parallel builds with multiple hosts on Windows
Date: Thu, 22 Jan 2004 18:24:31 -0000
Thanks a bunch Paul! This is a new idea for me. However, I think it won't do
what I need. We are using cl.exe from MSVC 7.1, and distcc seems to be
gcc-centric. Also, it works on a file-by-file basis, while we use
precompiled headers. Do you think it could be used anyway? Something
more like Rsh, but transferring the environment and doing dynamic host
allocation, might be better - though I don't know if it exists. :/
Date: Fri, 23 Jan 2004 08:11:14 +0900
From: Darren Cook <darren@dcook.org>
Subject: Re: Calling a custom compiler
I was surprised I got no replies; perhaps I phrased the question poorly? So,
rephrased: what is the jam equivalent of this makefile?
tactest_positions.19.dat: ../unit_tests/sgf/19.tac/*.sgf
../progs/tactests_compile/19.tactests_compile.exe \
tactest_positions.19.dat ../unit_tests/sgf/19.tac/*.sgf
Subject: Re: Re: Calling a custom compiler
From: "shatty" <shatty@myrealbox.com>
Date: Thu, 22 Jan 2004 16:56:06 -0800
You should look into the GLOB rule.
Instead of:
Depends $(ftest) : ../unit_tests/sgf/19.tac/*.sgf ;
You should have:
Depends $(ftest) : [ GLOB [ FDirName $(SUBDIR) .. unit_tests sgf 19.tac ] : *.sgf ] ;
The above assumes that you are using SubDir and you want a relative path from
the directory the Jamfile that has the above depends is in.
See: http://public.perforce.com/public/jam/src/Jam.html for a reference on GLOB.
This may or may not be your only problem, but it should get you farther along. :-)
From: "shatty" <shatty@myrealbox.com>
Date: Thu, 22 Jan 2004 17:00:16 -0800
Subject: Splitting values, or "undoing :J="
I have a situation where I want to break up a single element into a list of elements.
Basically I have: "java.lang.String" and I want to get a list of: java lang String.
I have used :J= before to go from java lang String to "java.lang.String",
but now I need the reverse. Any suggestions?
From: "Alen Ladavac" <alenl-ml@croteam.com>
Date: Fri, 23 Jan 2004 09:56:48 -0000
Subject: Header cache
Out of curiosity, I've just applied the "header cache" patch by Craig
McPheeters, and my timings with it are a bit strange. When a test project I
use (4860 targets, about 10MB of sources & headers) is built, running jam
(result is 'nothing to do') on a freshly booted machine (nothing in the OS
file cache) takes 34 sec. A subsequent run takes only 6 sec, since all files
are in cache and header scanning is much faster. That is without the header
cache. With the header cache, it always takes 5 sec. This suggests I will
only get a 20% speedup in dependency scanning from the header cache (assuming
I'm building frequently, so all files are in cache). Am I missing something?
Any ideas?
Subject: Re: Splitting values, or "undoing :J="
From: Dag Asheim <dash@linpro.no>
Date: Fri, 23 Jan 2004 11:00:22 +0100
I have had the same problem, and my solution is below in form of a
Split function. I also included my implementations of Join and Subst,
with examples. The implementation of Split is a modified version of
one made by Matt Armstrong:
His version was based on a nonstandard MATCH rule, and it was also
slightly buggy (it used a variable called "last", but actually
declared "last," - it took me a while to figure that out!).
I hope this helps!
# Return a list consisting of a string split where a regexp matches
#
# Usage: list = [ Split regexp : string ] ;
rule Split {
local re = $(1) ;
if $(re) = "\\" {
re = "\\\\" ; # A hack: make it easier to split on $(SLASH)
}
local match = [ MATCH "^(.*)("$(re)")(.*)" : $(2) ] ;
local last ;
local element ;
if $(match) && $(match[2]) != $(2) {
for element in $(match) {
last = $(element) ;
}
return [ Split $(1) : $(match[1]) ] $(last) ;
} else { return $(2) ; }
}
# Join the element of list as a string with fields separated with expr
#
# Usage: string = [ Join expr : list ] ;
rule Join { return $(>:J=$(<)) ; }
# Substitute regular expression within a string with something else
#
# Usage: resultstring = [ Subst re substring : string ] ;
rule Subst { return [ Join $(<[2]) : [ Split $(<[1]) : $(>) ] ] ; }
list = [ Split "\\." : java.lang.String ] ;
Echo $(list) ;
Echo 1. element: $(list[1]) ;
Echo 2. element: $(list[2]) ;
Echo 3. element: $(list[3]) ;
Echo [ Join ", " : $(list) ] ;
Echo [ Subst "\\." $(SLASH) : "java.lang.String" ] ;
Date: Fri, 23 Jan 2004 12:25:00 -0800 (PST)
From: Craig McPheeters <cmcpheeters@aw.sgi.com>
Subject: Re: Header cache
The header cache doesn't provide much of a win until you have a larger
build. Your build may be small enough that other things are dominating
its time; jam does a lot of work outside of the header dependency checking.
In one of my builds I have 38000 targets. Without the header cache present,
it takes 2 1/2 minutes for a do-nothing build. With the cache present it
takes 53 seconds. In a second build there are 66000 targets; without the
cache it takes 5 min, with it, 2:20.
Subject: RE: Header cache
Date: Fri, 23 Jan 2004 13:13:07 -0800
From: "Chris Antos" <chrisant@windows.microsoft.com>
It also has a lot to do with how grist is applied to headers. If you
aren't gristing headers, then it does N times more work, where N is the
number of directories in your source tree.
From: "Alen Ladavac" <alenl-ml@croteam.com>
Subject: Re: Header cache
Date: Sat, 24 Jan 2004 11:59:44 +0100
First of all, kudos for supplying your version of Jam with optional
extensions. It is very easy to pick parts that one likes and integrate to
the main line. Excellent method.
So you would say that the speedup you get is around 2x, or a bit
better. I'd expect more. It would mean that dependency checking was only
about 20% of the execution time, which looks strange to me. Perhaps I can do
something to optimize some of my Jam rules, and also try to profile Jam to
see where it spends its time. I'll let you know what I find out.
From: "Alen Ladavac" <alenl-ml@croteam.com>
Subject: Re: Header cache
Date: Sat, 24 Jan 2004 11:17:45 +0100
This sounds interesting, though I'm not sure I get your point completely. I
thought it would be the other way around, i.e N times slower with gristing,
faster without it. Perhaps I didn't get the point of gristing correctly. Can
you please explain this in more detail?
Subject: RE: Header cache
Date: Sat, 24 Jan 2004 16:42:55 -0800
From: "Chris Antos" <chrisant@windows.microsoft.com>
Yeah, I said it backwards. The point is that for the system headers,
you want only one instance of each, rather than a separate gristed
instance for each directory in your tree. The performance difference is
huge. If system headers are being gristed (which is conceptually
wrong), then a header cache will give a huge performance boost and hide
the underlying problem. But Jam will still do a lot more work than is
necessary, and use a lot of extra memory (don't know specifically how
much, I'm speaking abstractly).
To clarify: I'm not commenting on whether a header cache is good or
whatever -- I'm just saying that gristing the system headers is very
costly to performance, and is unnecessary.
From: "Alen Ladavac" <alenl-ml@croteam.com>
Date: Sun, 25 Jan 2004 14:26:26 -0000
Subject: Fix for multiple targets generated by a single action [Was: Header cache]
My research into Jam's speed when there is nothing to do showed the following
results:
The majority of the time is spent in Jam.exe itself (perhaps 5% or less is
spent by the system, probably reading file dates from the cache, etc.).
Inside Jam.exe, 73% of the time is spent inside the make1a function. Further
tests reveal that the fix for "multiple targets generated by a single action"
is what causes the problem. When I remove that fix, I get more than a 2x
speedup, pushing make1a() down to 0.18% of the time spent.
The fix consists of changing the end of make1a(). The original ending:

    t->progress = T_MAKE_ACTIVE;

    /* Now that all dependents have bumped asynccnt, we now allow */
    /* decrement our reference to asynccnt. */

    make1b( t );
}

becomes:

    {
        ACTIONS *actions;

        for( actions = t->actions; actions; actions = actions->next )
        {
            TARGETS *targets;

            for( targets = actions->action->targets; targets; targets = targets->next )
            {
                if( targets->target != t )
                    make1a( targets->target, t );
            }
        }
    }

    t->progress = T_MAKE_ACTIVE;

    /* Now that all dependents have bumped asynccnt, we now allow */
    /* decrement our reference to asynccnt. */

    make1b( t );
}
The fix was submitted to this mailing list by Miklos Fazekas. And it works
great, except for this speed issue. Now, I cannot work without that fix,
because I need it for batch compilation and generation of .pch and .pdb
files on Win32. Does anyone have a better version of the fix? Or perhaps
Miklos can explain the intricacies of how it works, and what I should do to improve it?
From: "Alen Ladavac" <alenl-ml@croteam.com>
Subject: Re: Fix for multiple targets generated by a single action [Was: Header cache]
Date: Sun, 25 Jan 2004 15:50:12 -0000
One possible idea is to add a "touched" flag in the ACTION type, and put this:
if (actions->action->touched) { continue; }
actions->action->touched = 1;
in the outer of the two added loops. This keeps performance at an acceptable
level (~2% of time in make1a()), and seems to work OK. But I don't have enough
insight into Jam's internals to be really sure this won't break
something else, especially when used with -j.
From: Roger.Shimada@lawson.com
Date: Wed, 4 Feb 2004 16:58:51 -0600
Subject: Undesired multi pass build
I thought that I should ask the mailing list about this before getting
lost in the jam source. :-)
We generate .h files based on .c files. For example, we might have common.c:
int Answer = 42;
from which we generate a common.h in an include directory:
extern int Answer;
Note that common.c is used to both compile and to generate the header file.
Now let's say there is a source.c that does #include <common.h>.
There is a library that is built from common.c and source.c.
When I update common.c and run jam, common.h gets regenerated and common.c
is recompiled. But source.c does not get recompiled. I run jam again, and
source.c gets recompiled.
Any ideas on how to fix the following Jamfile so an update of common.c
will rebuild both common.h and compile both .c files? If not, any hints
on fixing this in jam?
GENINC = $(GENDIR)/include ;
HDRS += $(GENINC) ;
rule LaHeaderGlobal {
local var ;
for var in $(<)
{
LawMkHdr $(GENINC)/$(var:B).h : $(var:B).c : -g ;
}
}
rule LawMkHdr {
local source ;
source = [ FGristFiles $(>) ] ;
Depends files : $(<) ;
Depends $(<) : $(source) ;
Clean clean : $(<) ;
SEARCH on $(source) = $(SEARCH_SOURCE) ;
MakeLocate $(<) : $(LOCATE_TARGET) ;
MKHDRFLAG on $(<) = $(3) ;
LawMkHdr1 $(<) : $(source) ;
}
actions LawMkHdr1 { mkhdr $(MKHDRFLAG) $(>) }
Library mylib : common.c source.c ;
LaHeaderGlobal common.c ;
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Undesired multi pass build
Date: Thu, 5 Feb 2004 00:49:01 +0100
That can't be done, I'm afraid. But maybe the problem can be restated so
that the goal is achievable.
Jam's design is (simplifying):
1. Look at the files.
2. Decide what to do.
3. Do it.
Nice and simple. But it prevents you from e.g. first generating common.h
and then deciding whether to compile source.o.
So, what can you do? One possibility is to tell jam explicitly that
source.o depends on common.h, so that whenever jam decides to build
common.h, it will also build source.o.
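As a sketch, assuming default gristing (match whatever names your objects actually get), that explicit dependency could be declared as:

```jam
# Declare the missing edge explicitly: rebuilding common.h forces source.o.
Depends [ FGristFiles source$(SUFOBJ) ] : $(GENINC)/common.h ;
```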
Date: Wed, 4 Feb 2004 20:36:05 -0800 (PST)
Subject: Re: Undesired multi pass build
I suspect this is your problem. Your header-file target isn't the same as
what source.c #include's -- HdrRule puts an Includes on source.c of
common.h, not $(GENINC)/common.h, so there's no association between
common.h the target and common.h the #include'd header file, so there's
none between common.h the target and source.c, just between source.c and
the common.h it finds in $(GENINC) (via $(HDRSEARCH)) once common.h has
been built. So there's nothing saying to rebuild source.c, until after
common.h has been built (and its timestamp bumped), which is why source.c
recompiles on a second pass but not on the first. If you make your target the
same as what's #include'd, you should probably be okay.
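Concretely, a sketch of that idea, keeping the LawMkHdr machinery from the original Jamfile: name the generated target plain common.h, the same name that gets #include'd, and let MakeLocate place the file rather than baking the path into the target name:

```jam
rule LaHeaderGlobal
{
    local var ;
    for var in $(<)
    {
        # Target name matches the #include'd name; MakeLocate inside
        # LawMkHdr can still drop the generated file into $(GENINC).
        LawMkHdr $(var:B).h : $(var:B).c : -g ;
    }
}
```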
From: "Marco Pappalardo" <fa658021@skynet.be>
Date: Fri, 6 Feb 2004 20:10:12 -0800
Subject: .obj output dir
I'm new to Jam (to make a long story short I'm doing an internship in a
company and they've asked me to write jamfiles for their projects) and
although I find it quite nice to use, the lack of documentation /
tutorials / examples is really slowing me down :(
Anyway here's my question :
assume I'm writing a jamfile to build an exe which needs to link with
several libs. I've got it working so that for example typing jam
win32_debug will link to the debug libs, and jam win32_release to the
release libs. I am able to build the exe in one version, do a jam clean,
and rebuild in another version without any problems. So far so good ...
Now the problem is when I build say the debug version, I'm left with all
the .obj files, so trying to build the release version right after that
(without doing a jam clean) will try to link the current debug .obj
files with the release libs (instead of recompiling the sources with the
release flags, since the .obj are already there), which of course is no
good... Since this is a big project, doing a clean and rebuilding
everything every time you switch versions is out of the question.
What I'd like to do is output all .obj files to a different dir for each
version. I've tried LOCATE_TARGET and ALL_LOCATE_TARGET, and although
ALL_LOCATE_TARGET does relocate the exe, the obj are still generated in
the same dir as the sources. My question is what is the standard way to
tell Jam to output the .obj to a certain dir ? (if there is any) Or
should I just modify the c++ rule so that it /Fo's to a specific dir ?
Will that raise any other path problems ? Should I add the .obj output
path to SEARCH_SOURCE ? Any other suggestions on how to do this ? Thanks
a lot for any help you can provide !
From: Paul Forgey <paulf@metainfo.com>
Subject: Re: .obj output dir
Date: Fri, 6 Feb 2004 16:34:14 -0800
LOCATE_TARGET should do it. How are you setting it? For example, setting
"LOCATE_TARGET on whatever$(SUFEXE)" won't affect the intermediate
files which built the exe. If you have multiple subprojects, I posted
a simple $(SUBDIRRULES) a while ago that you can put in your Jamrules, which
gives you msdev-like behavior. It sort of breaks the spirit of Jam
(take any Jamfile project and send output anywhere without modifying
the Jamfile), but it's probably what you want.
As for adding the .obj output path to SEARCH_SOURCE: no, the stock rules
already do that for you.
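For the per-variant object directories, a minimal sketch (CONFIG is an illustrative variable here; set it from your win32_debug / win32_release target logic) is to point the locate variables at a variant-specific directory in Jamrules, before any objects are declared:

```jam
# Send objects (and everything else) to a per-variant directory.
CONFIG ?= win32_debug ;
LOCATE_TARGET = [ FDirName obj $(CONFIG) ] ;
ALL_LOCATE_TARGET = [ FDirName obj $(CONFIG) ] ;
```

That way debug and release .obj files never collide, and switching variants needs no jam clean.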
Date: Wed, 11 Feb 2004 10:58:06 +0800
From: "Li Yun" <yunli@utstar.com>
Subject: How to create a directory
I want to create a directory when building my project. So I added the
following line into the Jamfile.
MkDir lib.mpc ;
But it does nothing, could you tell me why?
From: "Marco Pappalardo" <fa658021@skynet.be>
Date: Wed, 11 Feb 2004 19:26:30 -0800
Subject: xxx already defined in lib, ignoring second definition
First of all thanks a lot to Paul and Randy :) I managed to fix my
LOCATE_TARGET problem ( in case you're curious, the project is way too
big to specify every .cpp file by hand, so I was "harvesting" them from
predefined dirs, and I was passing them to Library with path attached,
which caused LOCATE_TARGET to not be set properly. I fixed it by
stripping the paths from the source files passed to Library and using
SEARCH_SOURCE instead :p )
Now I'm back with another question for you Jam gurus :) I remember this
coming up before, but a search didn't turn anything up and I can't find it
in the archive by hand anymore, so my apologies for posting this again:
If I modify a function, say myfunction(), then re-build mylib.lib without
doing a jam clean first, I get the following warning (roughly; I'm at home
now and forgot to mail myself the exact error messages):
warning : myfunction() already defined in mylib.lib, ignoring second definition.
Am I right to assume from this message that the old code for
myfunction() is not being replaced by the new code for myfunction() ?
Any ideas on how to fix this ?
From: Paul Forgey <paulf@metainfo.com>
Subject: Re: xxx already defined in lib, ignoring second definition
Date: Wed, 11 Feb 2004 15:41:43 -0800
If you build from one directory, then update one of the library's
members and build from another directory, the object file isn't seen as
the same object file because the paths are different.
So your updated object file is now in the library twice. cd to where
the library lives and jam clean.
Unlike ar, lib.exe stores the path specified on the command line of the
object files in the resulting library. This is a problem for us too.
I suppose it could be solved by modifying the Library rule to be in the
directory of the object files when updating the library, but this
becomes a hard problem to solve when the object files are all over the place.
Add it to the very long list of stupid MS behaviors in their tools.
(You should see the tricks I had to do just to get parallel builds working properly)
From: "Alen Ladavac" <alenl-ml@croteam.com>
Subject: Re: xxx already defined in lib, ignoring second definition
Date: Fri, 13 Feb 2004 08:11:22 -0000
Do you use the parallel build on an SMP machine, or across different
machines? I'm interested in making this work, on MS as well.
From: Paul Forgey <paulf@metainfo.com>
Subject: Re: xxx already defined in lib, ignoring second definition
Date: Mon, 16 Feb 2004 17:01:50 -0800
SMP. Currently, there's no good way to distribute builds around an NT
environment. Maybe with gcc, and as soon as the x86 optimizations get
better. Unfortunately, msvc currently produces faster code.
Effectively, the same rules apply for either SMP or distributed builds
since they both involve multiple processes working on the same project.
There are two basic problems with msvc's default behavior that gets in
the way of SMP builds. Writing debug info to a common database file,
and automatic pre-compiled headers.
For debug builds, use -Z7 -Yd to put the debug information into the
object files (then into the dll or exe). A separate .pdb file still
gets generated at link time. So far, I haven't noticed any difference
in debugging behavior doing things this way vs. the way dev studio
projects want to do it by default.
For precompiled headers, compile a source file that simply includes what
you want in your pch (like stdafx.cpp for vc-generated mfc projects) with
-Yc, and use -Yu with the rest of your files to use it. Set up the
dependencies properly to avoid race conditions between the processes. You
can use -Fp to place the precompiled header data in $(LOCATE_TARGET).
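In Jamfile terms that might look roughly like this (the plumbing via ObjectC++Flags is a sketch, and the file names are illustrative; the cl.exe switches are the ones named above):

```jam
# One .pch per project, parked in LOCATE_TARGET via -Fp so parallel
# jobs don't fight over a shared file.
PCHFILE = [ FDirName $(LOCATE_TARGET) myproj.pch ] ;
ObjectC++Flags stdafx.cpp : -Ycstdafx.h -Fp$(PCHFILE) ;
ObjectC++Flags foo.cpp bar.cpp : -Yustdafx.h -Fp$(PCHFILE) ;
# The pch producer must finish before its consumers start:
Depends [ FGristFiles foo$(SUFOBJ) bar$(SUFOBJ) ] : [ FGristFiles stdafx$(SUFOBJ) ] ;
```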
From: Paul Forgey <paulf@metainfo.com>
Subject: SMP (was xxx already defined in lib, ignoring second definition)
Date: Mon, 16 Feb 2004 17:19:56 -0800
This has been known to unix developers for a long time, but there are
no MS supplied development tools for NT that take advantage of this.
On NT machines, we have noticed you can get around 15% faster builds
(at least with our code) using -j2 on a single CPU, non hyperthreaded
machine. Using -j3 cuts another 30-40% off the build time on a
hyperthreaded machine. In fact, using number of CPU's (both real and
hyperthreaded, or the product on SMP hyperthreaded machines) +1 seems
to be about optimal. Unfortunately, the NT console doesn't deal well
with multiple processes writing to it, so you may not be able to read any
errors that occur. And you can't separate Jam's output from the
compiler messages, because as of MSVC 7.0 nothing is ever written to
stderr. It's probably best to use -d to turn off all of Jam's output and -q
to keep Jam from continuing if there is an error, and then hope you aren't
unlucky enough to have more than one process with something to say at the
same time. But then you can't watch the progress. Then again, writing
server code on NT is always a compromise anyway.
Even on unix, Jam is superior to make in this area because Jam looks at
everything as a whole. A make process, for example, could recurse into
a subdirectory providing a dependency, all cpu's blazing, to take care
of maybe one file while the entire build process waits for this. In
Jam, the entire dependency tree is known ahead of time so all the cpu's
are better tasked for the entire project. Sometimes you'll see
different subprojects being built at the same time!
From: "Alen Ladavac" <alenl-ml@croteam.com>
Subject: Re: xxx already defined in lib, ignoring second definition
Date: Tue, 17 Feb 2004 17:18:53 -0000
Doh! I was hoping someone found a good way to do it. I hope Incredibuild
will support it in the near future.
Did you notice a difference in compilation speed? I thought the single-pdb
system was supposed to be a speedup - though you never know with MSVC. :/ I
know that Incredibuild redirects it to use one .pdb per machine. Perhaps it
might be good to use one per job?
We already do that, so we should be cool on that part.
But I don't believe we'll bother with -j until we find a way to distribute
it over several machines.
From: "Alen Ladavac" <alenl-ml@croteam.com>
Subject: Re: SMP (was xxx already defined in lib, ignoring second definition)
Date: Tue, 17 Feb 2004 17:21:52 -0000
Yes, I know. And can't wait to make it work on NT. Just waiting for an
equivalent of "on" command.... :/
Date: Tue, 17 Feb 2004 12:44:08 -0700
From: "Wallace, Richard" <Richard.Wallace@specastro.com>
Subject: Move and link
In the project I'm working on it has been decided that after building a library
or executable we want to move the files to $(TOP)/bin and $(TOP)/lib respectively.
After doing the move a symbolic link should be created from the source directory
to the file in bin or lib. I've tried this two ways, both involve copying the
InstallInto, InstallBin and InstallLib rules.
The first way was to try and replace the invocation of the Install action in
the InstallInto rule with a $(MV) and a Softlink. It winds up looking like:
Depends $(tt) : $(i) ;
Move $(tt) : $(i) ;
SoftLink $(i) : $(tt) ;
with Move action simply being
actions Move { $(MV) $(>) $(<) }
The only problem is that this pops up a warning at the start of Jam that says
'warning: libFoo.a depends on itself'. Looking at the dependencies this makes
sense since SoftLink sets up a dependency of $(i) on $(tt) and the first line
sets a dependency of $(tt) on $(i). So, indirectly $(i) does depend on itself.
So I tried a different tactic. Instead of doing the move and link in two
different actions I tried to do them in one. This worked and didn't cause
any circular dependencies or anything like that, but now doing a 'jam clean'
doesn't remove the link from the source directories. I've tried all sorts of
things to try and force it to be cleaned out but with no luck.
Has anyone had experience doing something similar? Got any suggestions?
From: Arnt Gulbrandsen <arnt@gulbrandsen.priv.no>
Subject: Re: Move and link
Date: Wed, 18 Feb 2004 11:38:01 +0100
I think the basic problem is that you have two different objects by the
same name: First a proper library, then a symlink. Jam expects each
name to identify a single object.
I suggest building the library directly into its final destination, then
adding the symlink where you need it.
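A sketch of that arrangement (LinkBack is an illustrative rule name, srcdir stands in for the source directory that wants the link, and $(LN) is assumed to be defined as your symlink command):

```jam
# Build the library straight into its final home...
MakeLocate libFoo.a : $(TOP)/lib ;

# ...then create a distinct symlink target pointing back at it.
rule LinkBack
{
    Depends $(<) : $(>) ;
    Depends all : $(<) ;
    Clean clean : $(<) ;
}
actions LinkBack
{
    $(RM) $(<) ; $(LN) -s $(>) $(<)
}

LinkBack srcdir/libFoo.a : $(TOP)/lib/libFoo.a ;
```

Because the library and the symlink are now two differently-named targets, there is no self-dependency, and jam clean removes the link.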
From: Roger.Shimada@lawson.com
Date: Thu, 19 Feb 2004 11:17:07 -0600
Subject: Confusion with overloaded ctype.h
We have a program called mkhdr which basically converts a .c to a .h.
(And thanks to Randy Roesler for finding my incorrect MakeLocate!)
We also rolled our own internationalization. Unfortunately we were
lazy about it and created our own ctype.h.
On a clean build, we need to build both mkhdr and ctype.h.
mkhdr requires the standard ctype.h. So for this I did a "HDRS = ;".
But in a situation where Jam needs to build both mkhdr and our ctype.h, I get:
...updating 5 target(s)...
LawMkHdr1 /bld/univ/include/ctype.h
/bin/sh[3]: mkhdr: not found.
cd univ/src/lib/stdlaw
mkhdr -g ctype.c
...failed LawMkHdr1 /bld/univ/include/ctype.h ...
...skipped <univ!src!local>mkhdr.o for lack of <univ!src!local>mkhdr.c...
...skipped mkhdr for lack of <univ!src!local>mkhdr.o...
...failed updating 1 target(s)...
...skipped 2 target(s)...
a "jam -d6 | grep ctype.h" includes:
sys/types.h sys/stat.h ctype.h string.h stdlib.h errno.h
ctype.h string.h stdlib.h errno.h = univ/src/local /usr/include
I fumbled around in the debugger for a while, and it looks like search() finds
ctype.h via LOCATE. This would be cool if the LOCATE matched SEARCH,
which it doesn't when building mkhdr.
Any ideas aside from renaming our ctype.h? (We're also multi-platform, so
changing mkhdr to do an #include </usr/include/ctype.h> won't work either.)
My love/hate relationship with Jam continues....
Date: Mon, 23 Feb 2004 13:44:47 +0900
From: Anthony Heading <aheading@jpmorgan.com>
Subject: Re: xxx already defined in lib, ignoring second definition
Yes, it's pretty horrible. Took me a while to figure out this was
what was happening. Might be worth having an FAQ of things that go
wrong if you try to use MS tools in a Unix style framework...
In case anybody is interested, I fixed this for a cygwin environment
by wrapping lib.exe to make the pathnames absolute, i.e. call this script
'wlib' and use it instead of 'lib' to build the libraries.
Not extensively tested - YMMV etc - but the idea at least seems to work.
#!/usr/bin/env zsh
local i=0
local j=0
while (( i++ < $# )) ; do
a=$argv[i]
case ${(U)a} in
[-/](DEF|LIST|NAME|OUT):*)
fname[++j]=${a#*:}
;;
[-/]*)
;;
*)
fname[++j]=$a
;;
esac
done
set -A qname $(print -l $fname | cygpath -m -a -f -)
i=0
j=0
while (( i++ < $# )) ; do
a=$argv[i]
case ${(U)a} in
[-/](DEF|LIST|NAME|OUT):*)
argv[i]=${a%%:*}:$qname[++j]
;;
[-/]*)
;;
*)
argv[i]=$qname[++j]
;;
esac
done
lib $@
Date: Tue, 16 Mar 2004 17:25:16 -0700
From: "Wallace, Richard" <Richard.Wallace@specastro.com>
Subject: Recursive copy
I've been trying but haven't had much success at creating a rule that does a
recursive copy. The biggest problem is testing whether a file is a directory
or a regular file. Anybody done this before and got some tips?
From: "Jim Hanrahan" <j.hanrahan@advantest-ard.com>
Date: Mon, 5 Apr 2004 11:11:38 -0700
Subject: MSVS6 to Jam?
Are there any conversion tools available to convert