Guide for software developers

Developed by the Ptolemy Project at the University of California at Berkeley.

File layout and infrastructure

  • Software Engineering
  • Source Code Control System (SCCS)
  • The Future and SCCS

    Debugging

  • Purify
  • Purelink
  • Quantify
  • Purecov
  • Pure Problems
  • Fixing/Preventing Memory Leaks
  • Compiling under other platforms
  • Running make depend
  • Shared libraries
  • Debugging versions
  • HPPA Cfront Debugging versions
  • gdb and emacs
  • gdb and StringLists
  • Debugging hints
  • Compiling under emacs
  • Class browsing under emacs
  • Compiler Hints
  • Undefined symbols
  • Testing
  • Tracking bugs with gnats

    Documentation

  • Documentation extraction from C and C++ code
  • Writing an Index
  • WWW

    Configuration Management

  • Patches
  • Moving Ptolemy
  • Porting to a new architecture
  • Changing gcc versions
  • Ptk commands
  • Contributing Stars

    Misc.

  • See Also

    Source Code Control System (SCCS)

    Just about all of the source code for C, C++, make, and script files in the Ptolemy development tree is under source code control. Currently, we are using SCCS. SCCS is essentially a library of files that can be checked out by users. Every directory that has one or more files under SCCS control will have an SCCS sub-directory. The individual files under SCCS control are not writable by any user; they become writable for the user who checks them out.

    SCCS works best for ASCII files because it stores the difference between incremental changes. Binary files are supported, but each version requires a new copy, so the size of each corresponding SCCS history file will grow linearly with the number of versions.

    Each SCCS file can only be checked out for editing by one user at a time, via the Unix command

    sccs edit filename
    
    To get a read-only copy of the latest version of a file, use the command
    sccs get filename
    
    One can retrieve an older version x.y and save it to filex.y by the command
    sccs get -rx.y -p filename >filex.y
    
    One can back out the most recently submitted change x.y to a file by removing it:
    sccs rmdel -rx.y filename
    
    To view a history of the different versions of an SCCS file, use
    sccs prs -e filename
    
    To see what files are being edited in the current directory, use
    sccs info
    

    SCCS has some drawbacks.

    Ptolemy developers should have their own development trees with links that point to the SCCS directories in the development version. Please do not use SCCS when you are logged in as ptuser. The reason is that if you use SCCS when you are ptuser, then it is impossible to determine who made the changes. The environment for ptuser has been modified so that ptuser cannot check out files from SCCS for editing.

    If you only need to make a minor change, such as fixing a comment, do the following as yourself, in your own directory:

    sccs -d ~ptdesign/mk edit common.mk
    
    Once you've made your change, just use the "-d" option to sccs to check your changed file back in.

    Continuing with the above example:

    sccs -d ~ptdesign/mk delta common.mk
    

    You can always use sccs as yourself in your own directory by using the PROJECTDIR environment variable or the "-d" option to sccs. For example, to list checked-out files and update the file common.mk:

    % setenv PROJECTDIR ~ptdesign/mk
    % sccs info
    % sccs edit common.mk
    ...
    % sccs delta common.mk
    % unsetenv PROJECTDIR
    
    or
    % sccs -d ~ptdesign/mk info
    % sccs -d ~ptdesign/mk edit common.mk
    ...
    % sccs -d ~ptdesign/mk delta common.mk
    
    To check in a new file:
    sccs create -fi newfile.c
    
    The -fi option will have SCCS check to make sure that you have included SCCS identifier strings in the file. After you edit a file and check it in, you can set up your makefiles to update the file's version to the latest one available under SCCS.
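    As a hedged illustration (the file name and version below are made up), an SCCS identifier string is typically embedded in a source file like this. Before the first checkout the file holds the raw keywords "%W% %G%"; "sccs get" expands %W% to "@(#)filename version" and %G% to the date of the newest delta, and the "@(#)" prefix is what the what(1) command searches for in a binary:

    ```cpp
    #include <cstring>

    // Hypothetical example of an expanded SCCS identifier string.  The
    // "@(#)" prefix lets what(1) locate this string in a compiled binary,
    // so you can tell which source versions a binary was built from.
    static const char sccsid[] = "@(#)newfile.c 1.1 11/03/95";

    // Small helper so the string is referenced (and not discarded).
    inline bool hasSccsId(const char* s) {
        return std::strstr(s, "@(#)") != nullptr;
    }
    ```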

    For more information, refer to the man page for sccs.

    SCCS tricks

    You can print the contents of a file to stdout with the version id for each line prepended with:

    sccs get -p -m myfile.cc
    
    This is very useful when you are trying to track down what change modified a specific line of a file.

    Permissions

    SCCS directory permissions

    SCCS directories should be in the ptolemy group and group writable with the group sticky bit set. Here is a proper example:

    cxh@markov 22% ls -ldg ~ptdesign/SCCS
    drwxrwsr-x 2 ptuser ptolemy 512 Nov 2 10:27 /users/ptdesign/SCCS/

    You can set these permissions and ownerships by being logged in as ptuser and doing:

    ptuser@markov 8% chmod g+ws SCCS
    ptuser@markov 9% chgrp ptolemy SCCS
    ptuser@markov 10% ls -ldg SCCS
    drwxrwsr-x 2 ptuser ptolemy 37 Nov 3 17:43 SCCS/

    The Future and SCCS

    RCS is a more flexible system than SCCS. RCS is freely distributable, whereas SCCS is commercial software. RCS is available for all Unix machines, whereas SCCS is generally available only on Sun computers.

    Debugging

    Purify

    Purify is a commercial tool by Pure Software Inc. that is useful for tracking and recording memory leaks in large software programs. Specifically, Purify detects memory leaks and memory access errors, such as reads of uninitialized memory, reads and writes beyond array bounds, and use of freed memory.

    For more information about Purify, see

  • The purify manpage.
  • /usr/sww/doc/Pure/pure.doc
  • The makefiles in the $PTOLEMY/obj.$PTARCH/pigiRpc have automated the inclusion of Purify into the Ptolemy binaries. Generally, one just has to say

    cd $PTOLEMY/obj.sol2/pigiRpc
    make pigiRpc.ptrim.purify
    setenv PIGIRPC $PTOLEMY/obj.$PTARCH/pigiRpc/pigiRpc.ptrim.purify
    pigi &
    
    or, if you want to investigate interactively where in the source code the memory leaks are occurring, build
    cd $PTOLEMY/obj.sol2/pigiRpc
    make pigiRpc.ptrim.debug.purify
    setenv PIGIRPC $PTOLEMY/obj.$PTARCH/pigiRpc/pigiRpc.ptrim.debug.purify
    pigi -debug &
    
    The latter command will create a Vem window, a demo palette window, a window shared by the GNU debugger and the Ptolemy console, and a Purify window.

    Purify will report memory leaks and violations as they happen, and you can use the GNU debugger to examine the leak or violation when it occurs. When the GNU debugger starts, give it the command

    break purify_stop_here
    
    This will cause the GNU debugger to stop whenever a memory leak or violation occurs. To set a purify watch point, in gdb do:
    print purify_watch_n(char *addr, unsigned int size, char *type)
    
    where,
    addr - the address of the mem location to watch
    size - number of bytes to watch starting at addr
    type - one of "r" "w" "rw"
    
    The print command will return the purify watch point number. To remove a watch point:
    print purify_watch_remove(#)
    
    or to remove all watch points:
    print purify_watch_remove_all()
    

    We have used Purify to fix approximately 1000 memory leaks and violations between the Ptolemy 0.5.2 and Ptolemy 0.6 releases. Even so, there are still many memory leaks and violations that remain in Ptolemy. You can ignore the memory leaks and violations in the graphical user interface (pigilib). You can also ignore the memory leaks and violations that Purify reports in X window routines, standard C library routines, and so forth, because we cannot fix them. You can suppress the reporting of certain memory leaks and violations by defining a ~/.purify file, e.g., see /users/ptdesign/.purify.

    When you exit Ptolemy, Purify will scan the memory that has not been deallocated and report the unfreed blocks as errors. Purify will report about 20 messages per domain indicating that CGTarget is leaking memory:

    MLK: 91 bytes leaked in 9 chunks
    This memory was allocated from:
         malloc         [rtlib.o]
         __builtin_new  [libgcc.a]
         __builtin_vec_new [libgcc.a]
         NamedNode::NamedNode(void*,const char*) [miscFuncs.h:90]
         NamedList::append(void*,const char*) [NamedList.cc:72]
         CGTarget::addStream(const char*,CodeStream*) [CodeStreamList.h:51]
    
    These leaks are due to static declarations of instances of targets, e.g. in the CGC domain,
    static CGCBDFTarget proto("bdf-CGC", "CGCStar", ...
    static CGCMakefileTarget targ("Makefile_C","CGCStar", ...
    static CGCMultiTarget targ("unixMulti_C","CGCStar", ...
    static CGCNOWamTarget targ("CGCNOWam","CGCStar", ...
    
    The constructors of these static instances create dynamic memory. However, the static instances are never destroyed (because they themselves are not dynamic) so Purify reports a memory leak.
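    This pattern can be reduced to a minimal sketch; the class and instance names below are hypothetical stand-ins for the Ptolemy targets:

    ```cpp
    #include <cstring>

    // Hypothetical stand-in for a statically declared target prototype.
    class Proto {
    public:
        Proto(const char* n) : name(new char[std::strlen(n) + 1]) {
            std::strcpy(name, n);   // the constructor allocates dynamic memory
        }
        // Nothing ever runs "delete [] name".  For a static instance this is
        // harmless in practice (the memory lives until process exit), but a
        // heap checker that scans for unfreed blocks at exit reports the
        // allocation as a leak.
        char* name;
    };

    // Mirrors "static CGCBDFTarget proto(...)": lives for the whole run.
    static Proto proto("bdf-CGC");
    ```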

    Purify should detect leaks of Tcl variables and procedures, but it does not. If you define a Tcl star in Ptolemy, for example, that creates a global variable "fred$starID", then you must unset this variable in the star destructor. Otherwise, you've created a memory leak. If you don't want to hassle with this, then use the recommended array mechanism for Tcl stars: define ${starID}(fred). The base class takes care of freeing the array $starID.

    Vem under Purify

    To build a vem.pure:
    cd ~ptdesign/obj.sol2/octtools/vem
    make CC="/usr/tools/bin/purify cc"
    mv vem.debug ~ptdesign/octtools/bin.sol2/vem.debugpurify
    setenv VEMBINARY ~ptdesign/octtools/bin.sol2/vem.debugpurify
    
    Apparently, if the binary is named vem.debug.pure, then you will get an error message about a bad default (autosave) upon startup. See Debugging Vem for more information.

    Purify will produce a lot of warnings; you may find it useful to have a ~/.purify file that contains at least the following to filter out memory leaks from the X library:

    suppress umr writev; _XSendClientPrefix; XOpenDisplay
    suppress umr XNextEvent; Tk_DoOneEvent
    suppress umr XDrawString
    suppress umr XTextExtents
    suppress umr _XUpdateGCCache; XCreateGC
    suppress umr _XSendClientPrefix; XOpenDisplay
    suppress umr _XFreeExtData; XFreeGC
    suppress umr write; _XFlush
    suppress umr writev; _XSend
    
    You may also want to add the following to your ~/.purify file to filter out warnings generated by pigilib (the clunky interface between Tcl/Tk and Vem/Oct), plus three others:
    suppress umr RPCSendOctObject ; octOpenRelative
    suppress umr RPCSendOctObject ; octGetOrCreate
    suppress umr RPCSendOctObject ; octCreate
    suppress umr RPCSendOctObject ; octCreateOrModify
    suppress umr RPCSendOctGenerator
    suppress umr RPCoctInitGenSpecial
    suppress mlk RPCReceiveString 
    suppress plk malloc; RPCReceiveString
    suppress umr ptkRun ; RpcRun ; ptkRPCFileHandler
    suppress umr InitKeymapInfo
    suppress umr qckBoot
    suppress umr qckGetY
    

    Purelink

    "PureLink - significantly improves program link times.

    PureLink addresses the problem of long link times and significantly improves the throughput on executable program generation after an edit-compile cycle."

    Note that with shared libraries, we no longer need to use Purelink under Solaris.

    Purelink has a manpage.

    To link pigiRpc with purelink :

    cd ~ptdesign/obj.sol2/pigiRpc
    make LINKER="/usr/tools/bin/purelink -collector=/users/ptdesign/vendors/bin/ld-collect-2.4.3 g++"
    

    Quantify

    "Quantify determines your performance bottlenecks."

    Quantify has a manpage. See /usr/sww/doc/Pure/pure.doc for a simple quantify demo.

    To build a quantify pigiRpc.debug, try:

    cd ~ptdesign/obj.sol2/pigiRpc
    make pigiRpc.debug.quantify
    

    Purecov

    "Purecov determines your code coverage."

    Purecov has a manpage. See /usr/sww/doc/Pure/pure.doc for a simple purecov demo.

    To build a purecov pigiRpc.debug, try:

    cd ~ptdesign/obj.sol2/pigiRpc
    make pigiRpc.debug.purecov
    

    Pure Inc Problems

    See /usr/sww/doc/Pure/pure.doc for some hints.

    Problems with too many open files

    Purify may report an error like:

    Purify slave: Error: Could not open file
    /usr/sww/pure/purify/cache/usr/openwin/lib/libXext.so.0_pure_p1_c0_301_54.so.0
    for reading.  System error code: Too many open files
    Purify: Read from rtslave failed, rtslave shutdown.

    If you are running csh, then use the 'limit' command:

    limit descriptors 128

    To see what your limits are, type 'limit':

    cxh@markov 28% limit
    cputime         unlimited
    filesize        unlimited
    datasize        2097148 kbytes
    stacksize       8192 kbytes
    coredumpsize    0 kbytes
    descriptors     128
    memorysize      unlimited

    Problems with purify libcgddf

    In cg/ddfScheduler, quantify fails to process libcgddf.so

    The two files CGDDFScheduler.o and CGRecurScheduler.o are necessary to replicate the bug. If I create a shared library with:

    g++ -shared -L/users/cxh/pt/gnu/sol2/lib/shared -o libcgddf.so \
        CGDDFScheduler.o CGRecurScheduler.o

    and then run quantify:

    quantify g++ libcgddf.so

    then the error message is:

    Error: Relocating TEXT mode address 0xffffffec (section 9)
    not in range [0x0,0x408c).
    If possible, please send a bug report to support@pure.com including
    the product name and version (Quantify 2.0.1 Solaris 2), this and
    any preceding error messages, and ideally a uuencoded copy of any
    object or data files involved.  Thank you.

    However, if CGRecurScheduler is compiled without -O2, then quantify works. Compiling with the -O option or with no optimization at all works fine.

    My guess is that there is a bug in GNU's optimizer. CGRecurScheduler defines a class that uses multiple inheritance, so I'm not surprised that there is a bug.

    I've submitted a bug report to Pure Inc. I also upgraded our installation of quantify from 2.0 to 2.0.1.

    So, in the unlikely event (of a water landing :-) that you are quantifying a full pigiRpc, you will have to build CGRecurScheduler.o with -O, install a new libcgddf.so, and re-run quantify.

    Incremental linking fails under Purify

    If you are incrementally linking in a star, the link may fail under Purify because Purify seems to have problems reading the RPATH in a file. You may see something like:

    rm SDFcd2dat.cc
    multiLink: /users/ble/PTOLEMY_SYSTEMS/SDFcd2dat.o -L/users/ptdesign/lib.sol2 -lCGCrtlib
    Purify 3.2 Solaris 2, Copyright (C) 1992-1996 Pure Software Inc.  All rights reserved.
    Instrumenting: __ptlink13766_0.so
    Purify engine: While processing file /tmp/__ptlink13766_0.so:
    Error: Couldn't resolve library name libg++.so.2.7.1 needed by
    /tmp/__ptlink13766_0.so in search path
    "/usr/lib:/usr/openwin/lib:/usr/sww/X11/lib:/usr/sww/sunos-X11R5/lib:/users/ptdesign/vendors/s56dsp/lib:/usr/ucblib:/opt/synopsys/sparcOS5/sim/lib".

    The solution is to add $PTOLEMY/gnu/$PTARCH/lib to your LD_LIBRARY_PATH before attempting to run demos that multilink.

    Fixing/Preventing Memory Leaks

    Memory leaks occur when the new operator is used to create dynamic memory and that memory is never freed. There are several sources of memory leaks; the ones commonly found in the Ptolemy source code are described below. Memory leaks can be tracked and recorded by compiling Purify into the Ptolemy executables.
    1. The most common error I've found is that the wrong delete operator is applied. The "delete" operator should be used to free a single allocated class or data value, whereas the "delete []" operator should be used to free an array of data values.
    2. The second most common error is overwriting a variable containing dynamic memory without freeing the memory from the last time the code was evaluated. For example, assume that "thestring" is a data member of a class, and in one of the methods (other than the constructor), there is
      thestring = new char[buflen];
      
      This code should be
      delete [] thestring;
      thestring = new char[buflen];
      
      In writing stars, the delete operator should be applied to variables containing dynamic memory in both the star's setup and destructor methods, and in the star's constructor method, the pointers to dynamic memory should be initialized to zero. By freeing up memory in both the setup and destructor, you cover the cases when (1) the user stops and restarts a simulation and (2) the user exits a simulation, respectively.
    3. The third most common error is not paying attention to the kinds of strings returned by functions. The function "savestring" returns a new string dynamically allocated and should be deleted when no longer used. The expandPathName, tempFileName, and makeLower functions return new strings, as does the Target::writeFileName method. Therefore, the strings returned by these routines should be deleted when they are no longer needed, and code such as
      savestring( expandPathName(s) )
      
      is redundant and should be simplified to
      expandPathName(s)
      
      to avoid a memory leak.
    4. Occasionally, dynamic memory is being used when instead local memory could have been used. For example, if a variable is only used as a local variable inside a method or function and the value of the local variable is not returned or passed to outside the method or function, then it would be better to simply use local memory. For example,
      char* localstring = new char[len + 1];
      if ( dude == bogus ) return;
      strcpy(localstring, otherstring);
      delete [] localstring;
      return;
      
      could easily return without deallocating localstring. The code should be rewritten to use either the StringList or InfString class, e.g.,
      InfString localstring;
      if ( dude == bogus ) return;
      localstring = otherstring;
      return;
      
      There are casts defined that will automatically convert StringList to a const char* and InfString to a const char* or a char*, so that instances of the StringList and InfString classes can be passed into routines that take string arguments. You should use StringList when you want a list of strings, and InfString when you want a string of unbounded size. Now, when the function or method exits, the destructors of the StringList and InfString variables will be called which will automatically deallocate their memory.

      It is tempting to use constructs like

      char localstring[buflen + 1];
      
      in which buflen is a variable, instead of StringList and InfString classes, but this syntax is a GNU extension and not portable to other C++ compilers.
    5. In some places, the use of dynamically allocated strings can be simplified by using the StringList class. The StringList class supports strings that can grow to arbitrary sizes, and the StringList destructor will automatically deallocate any new memory allocated to manage the strings in a StringList. A good example is in "$PTOLEMY/src/kernel/StringArrayState.cc" in the StringArrayState::initialize method.
    6. The last problem that I noticed is that sometimes the return value from a routine that returns dynamic memory is not stored, and therefore the pointer to the dynamic memory gets lost, especially in nested function calls such as
      puts( savestring(s) );
      
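    The first two error patterns above can be condensed into a minimal sketch; the class and member names below are made up for illustration:

    ```cpp
    #include <cstring>

    // Hypothetical class showing patterns 1 and 2 above: use "delete []"
    // for arrays, and free the old buffer before overwriting the pointer.
    class Buf {
    public:
        Buf() : thestring(0) {}             // initialize the pointer to zero
        ~Buf() { delete [] thestring; }     // "delete []" for an array, not "delete"
        void set(const char* s) {
            delete [] thestring;            // free the previous allocation;
                                            // deleting a null pointer is safe
            thestring = new char[std::strlen(s) + 1];
            std::strcpy(thestring, s);
        }
        const char* get() const { return thestring; }
    private:
        char* thestring;
    };
    ```

    Without the delete [] inside set(), every call after the first would leak the previous buffer, which is exactly the overwrite error described in item 2.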

    Compiling under other platforms

    We use other C++ compilers to verify that the Ptolemy code is fairly portable, and to catch any bugs that g++ may have missed. As of 3/96, we were using the following PTARCHs: sol2.cfront, sol2.5.cfront and hppa.cfront.

    One of the problems with the non-g++ compilers is that make depend may not work right. This means that you should first do "make sources" with PTARCH=sol2 in the source tree before attempting a non-g++ build.

    To build with sol2.cfront as ptuser

    setenv PTARCH sol2.cfront
    set path = ( $PTOLEMY/bin.$PTARCH /usr/lang $path)
    cd ~ptdesign
    MAKEARCH
    make install >& ~ptdesign/logs.cfront/installTodaysDate
    
    Cfront is an excellent way to catch bugs that g++ will not catch. Note that cfront .o files and g++ .o files are not compatible.

    If you want to just test out building a few files, you can do:

    make PTARCH=sol2.cfront
    
    Don't forget to remove the .o files when you are done, lest you poison your binaries. One good way to avoid this problem is to create the cfront .o files in your src directory, and then remove the .o file when you are done.

    Running make depend

    Joe says:

    General rule for when to re-run make depend:

    If you change an #include line in any file (add one or remove one), at least some of the dependencies are going to be out of date. If the changed #include line is in a .cc file, re-running make depend just in that source directory is enough. If it is in an .h file, you'll have to re-run make depend also in all directories that might have files that include that .h file: if in doubt, do it from the top.

    Symptoms that "make depend" may be the problem: mysterious crashes where virtual function calls seem to go off into hyperspace or call the wrong function.

    Edward says:

    Note that there are many changes you can make to a star.pl file that, in effect, cause a new #include to appear in the .cc or .h file. For instance, adding a new state of some new type. So if you make such changes, it's a good idea to run "make depend" in the directory where the star is defined. Running it just there should be enough.

    Note that make depend on non-g++ platforms can fail; you should always run make depend with the g++ compiler.

    Debugging versions

    To build versions of pigiRpc and ptcl that have the -g option turned on, cd to that directory and type
    make pigiRpc.debug
    
    or
    make ptcl.debug
    
    respectively.

    HPPA Cfront Debugging versions

    To debug hppa.cfront code, one may use the hp specific version of gdb (hpgdb) from jaguar.cs.utah.edu. A copy of this binary is located at ~ptdesign/src/gnu/hp/hpgdb. To use this version of the debugger, be sure that there is a link from bin.hppa.cfront/gdb to the binary:

    ln -s ~ptdesign/src/gnu/hp/hpgdb $PTOLEMY/bin.hppa.cfront/gdb

    Alternatively, to debug hppa.cfront code, one can use the hppa debugger 'xdb'

    To debug with xdb, "The support module /usr/lib/end.o must be included as the last object file linked, except for libraries included with the -l option to ld (see ld(1))." To build an xdb pigiRpc.debug:

    make SYSLIBS="-lm /usr/lib/end.o" pigiRpc.debug
    
    The ~ptdesign/bin/pigiEnv.csh has been modified so that if the -debug flag is present and the architecture is hppa.cfront, then xdb is used, rather than gdb.

    Debugging Vem

    $PTOLEMY/octtools/bin.$PTARCH should contain two binaries, vem and vem.debug.

    I've modified the pigiEnv.csh script so that you can determine which vem is run by setting an environment variable, VEMBINARY.

    Usually pigiEnv.csh will run $PTOLEMY/bin.$PTARCH/vem. To run a debug version of vem, set the VEMBINARY variable with:

    setenv VEMBINARY $PTOLEMY/octtools/bin.$PTARCH/debug/vem

    and then start up pigi.

    If you are having problems with bindings or fonts while running vem.debug, then you may need to rename the vem.debug binary to vem to get the right resources. The default octtools installation creates a link for you at $PTOLEMY/octtools/bin.$PTARCH/debug/vem.

    If you are custom building a vem, you could try making a private link from vem to vem.debug:

    cd ~/bin
    ln -s $PTOLEMY/octtools/bin.$PTARCH/vem.debug vem

    One potential fix to this would be to change the Xresource defaults from vem* to Vem*. See bug vem/296: Setting VEMBINARY to vem.debug results in bogus Xresources for more information.

    Looking at vem core files

    If you create a core file with vem.debug, use gdb to get a backtrace:

    gdb $PTOLEMY/octtools/bin.$PTARCH/vem.debug core

    You may find it helpful to use the 'dir' command inside gdb to set the path that gdb looks for sources with:

    dir ~ptdesign/src/octtools/vem

    gdb and emacs

    Note that this needs to be updated to reflect the use of the PT_DEBUG environment variable, which makes everything easier. Joe points out:

    Here's how to [debug with gdb from within emacs] with pigi code.

    Set your PIGIRPC to point to a pigiRpc.debug (or some other image that has debugging info). Start pigi *without* specifying debugging.

    Now, what you need to know is the process ID of the pigiRpc process. This number will appear in the vem window when the startup window pops up, in the form

    Version: <version>
    created <date>
    Running <name>, <process-id>
    
    For example, I just fired it up and got a process ID of 10622. Now, in emacs, type ESC-x gdb. Enter the full path of the pigiRpc when prompted. gdb will fire up within Emacs. At the gdb prompt, type
    attach <process-id>
    
    In my case I would type
    attach 10622
    
    This attaches gdb to the pigiRpc process. The pigiRpc process will be stopped; you can set breakpoints, examine variables, or re-start the process by typing the "continue" command (or just "c") to gdb.

    To find out more about the gdb mode, try

    M-x info
    Then type:
    m emacs
    
    Then go down to:
    Running Debuggers Under Emacs
    
    * Starting GUD::	How to start a debugger subprocess.
    * Debugger Operation::	Connection between the \
                            debugger and source buffers.
    * Commands of GUD::	Key bindings for common commands.
    * GUD Customization::	Defining your own commands for GUD.
    
    The GUD mode command:
    `C-c C-l'
    `C-x C-a C-l'
       Display in another window the last line referred to in
       the GUD buffer (that is, the line indicated in the last
       location message).  This runs the command `gud-refresh'.
    
    is quite useful.

    Tom points out:

    I modified the "pigiRpcDebug" script so that if you define an environment variable PT_DEBUG, then that program will be used instead of just running gdb inside an xterm. I also installed a script "ptgdb" in ~ptdesign/bin that will run gdb inside emacs. So if you "setenv PT_DEBUG ptgdb" then you'll be all set.

    I'm still looking at mxgdb and xxgdb, but they seem to be a bit too buggy to be useful. William Li mentioned that there is a Tk interface to gdb, but I haven't had time to look for it.

    Note that the documentation for gdb says the following:
    *Warning:* GDB runs your program using the shell indicated by your `SHELL' environment variable if it exists (or `/bin/sh' if not). If your `SHELL' variable names a shell that runs an initialization file--such as `.cshrc' for C-shell, or `.bashrc' for BASH--any variables you set in that file affect your program. You may wish to move setting of environment variables to files that are only run when you sign on, such as `.login' or `.profile'.

    gdb and StringLists

    The printStringList function call does not print out the contents of a StringList; it dumps core. Jose Pino suggests the solution below.

    The function printStringList looks like:

    void printStringList(const StringList *s, char* delimitter = 0) {
        StringListIter nexts(*s);
        const char* p;
        while ((p = nexts++) != 0) {
            cout << p;
            if (delimitter) cout << delimitter;
        }
        cout << "\n";
        fflush(stdout);
    }

    Jose writes: I also received core dumps using this function. The problem is that you are using GDB to send in a char*; however, the char* memory belongs to GDB and not the pigiRpc process. Once inside of this function, pigiRpc tries to access memory that does not belong to it and you get a core dump.

    I have written and tested the two functions:

    const char* displayStringListItems(const StringList& theList) {
        StringListIter nexts(theList);
        const char* p;
        StringList contents;
        while ((p = nexts++) != 0) {
            contents << p << '\n';
        }
        return (const char*) contents;
    }

    const char* displayStringList(const StringList& theList) {
        StringList contents = theList;
        return (const char*) contents;
    }

    In gdb, to see the contents of, say, StringList foo, you would use either:

    printf "%s",displayStringListItems(foo)

    or

    printf "%s",displayStringList(foo)

    Debugging hints

    If you are seeing code that is jumping from line to line in a non-sequential way, here's a description of the steps we went through to debug such a problem.

    The first thing we tried was recompiling the kernel without optimization:

    cd $PTOLEMY/obj.$PTARCH/kernel; make clean; make OPTIMIZER= install

    The problem persisted.

    The code looked like:

    100 #include <stdio.h>
    101
    102 int FileParticle::initParticleStack(Block* parent, ParticleStack& pstack,
    103     Plasma* myPlasma, const char* delay) {
    104     StringArrayState initDelays;
    105
    106     printf("In FileParticle::initParticleStack!\n");
    107     fflush(stdout);
    108
    109     initDelays.setState("initDelays",parent,delay);
    110     initDelays.initialize();
    112     int numInitialParticles = initDelays.size();

    If we set a break point with "break FileParticle::initParticleStack" and then run the program, it would stop inside NamedObj.h. This seems bogus, but the first line of FileParticle::initParticleStack is:

    104     StringArrayState initDelays;

    and StringArrayState inherits from StringState, which inherits from State, which inherits from NamedObj. So the first code that gets executed in FileParticle::initParticleStack is the NamedObj constructor, which makes sense.

    If we set a break point at line 102, then gdb would actually stop at line 101. I think the problem might have to do with having the bracket at the end of the line, rather than on a separate line. If we change:

    102 int FileParticle::initParticleStack(Block* parent, ParticleStack& pstack,
    103     Plasma* myPlasma, const char* delay) {

    to:

    102 int FileParticle::initParticleStack(Block* parent, ParticleStack& pstack,
    103     Plasma* myPlasma, const char* delay)
    104 {

    then when we break on 102, gdb actually stops on 102.

    The next weird problem was that the execution of the program seemed to jump around in a non-sequential fashion, even if the program was compiled with the optimizer off. What was happening was that we would hit

    112     int numInitialParticles = initDelays.size();

    and then jump back up to line 104.

    The problem here is that we are declaring an int part way through the block, and I think g++ is reordering things so that the int is actually declared at the top of the block. Personally, I prefer to see variables declared at the top of a block, rather than in the middle. Changing the code to

    int FileParticle::initParticleStack(Block* parent, ParticleStack& pstack,
            Plasma* myPlasma, const char* delay) {
        int numInitialParticles;
        StringArrayState initDelays;

        printf("In FileParticle::initParticleStack!\n");
        fflush(stdout);

        initDelays.setState("initDelays",parent,delay);
        initDelays.initialize();
        numInitialParticles = initDelays.size();

    fixed this problem.

    If you are having problems debugging, here's what to check.
    1. Verify that your $PTOLEMY is set to what you think it is set to. If you are building binaries in your private tree, be sure that $PTOLEMY is set to your private tree and not ~ptdesign.
    2. Verify that your $LD_LIBRARY_PATH does not include libraries in another Ptolemy tree. You could do "unsetenv LD_LIBRARY_PATH".
    3. gdb sources your .cshrc, so your $PTOLEMY and $LD_LIBRARY_PATH could be different. Inside gdb, use show env PTOLEMY to see what it is set to. This problem is especially common if you are running gdb inside emacs via ptgdb.
    4. Verify that you are running the right binary by looking at the creation times. You may find it useful to use the -rpc option: pigi -debug -rpc $PTOLEMY/obj.$PTARCH/pigiRpc/pigiRpc.mine ~ptdesign/init.pal
    5. Recompile the problem files with optimization turned off and relink your pigiRpc. You can do this with "rm myfile.o; make OPTIMIZER= install". Then rebuild your pigiRpc.
    6. Look for weird coding styles, such as declaring variables in the middle of a block, and brackets that open a function body on the same line as the function declaration:

       int foo(int bar) {

       vs.

       int foo(int bar)
       {
    7. Use stepi to step by instructions, rather than step.

    If you are spending a lot of time debugging a problem, you may want to use ptcl instead of pigiRpc, as ptcl is smaller and starts up faster. Also, you can keep your breakpoints between invocations of ptcl, as debugging ptcl does not start up a separate emacs each time. However, ptcl cannot handle demos that use tk or hof. Here's how to use ptcl to debug.

    1. Run pigiRpc on the universe, and use compile-facet to generate a ~/pigiLog.pt file. Note the number of iterations for the universe, and then exit pigiRpc.
    2. Copy ~/pigiLog.pt to somewhere; I suggest something like /tmp/tst.tcl. Use a short filename here since you may be typing it a lot, and don't use something inside your home directory, as you can't easily use ~ inside ptcl. If the file is named ~cxh/t.tcl, then I would have to type /users/cxh/t.tcl, which is a lot longer than /tmp/t.tcl.
    3. Edit the file and add a run XXX line and a wrapup line at the end. If the demo should run for 100 iterations, then add run 100 and wrapup to the end of the file.
    4. Build a debug version of ptcl that has exactly the functionality you need. If your demo is sdf, then try building and using ptcl.ptiny.debug.
    5. If you use emacs, then you can start up gdb on your binary with M-x gdb, and then type in the name of the binary. You may have to use the full pathname: /users/cxh/pt/obj.sol2/ptcl/ptcl.ptiny.debug. You can then set breakpoints in the gdb window, type r to start the process, and then source your demo with source /tmp/tst.tcl. If you want to recompile your demo outside of gdb and then reload it into your gdb session, use the file command inside gdb: file /users/cxh/pt/obj.sol2/ptcl/ptcl.ptiny.debug. Your breakpoints will be saved, which is a big time saver.
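    Steps 2 and 3 above amount to a copy and two appended lines. A toy sketch using a stand-in log file; the iteration count and filenames are examples:

    ```shell
    # Stand-in for the real ~/pigiLog.pt generated by compile-facet
    echo 'demo body here' > /tmp/pigiLog.pt
    # Step 2: copy it somewhere with a short name
    cp /tmp/pigiLog.pt /tmp/tst.tcl
    # Step 3: append the run and wrapup lines (100 iterations is an example)
    echo 'run 100' >> /tmp/tst.tcl
    echo 'wrapup' >> /tmp/tst.tcl
    ```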

    Compiling under emacs

    Tom Parks points out: I wrote a simple emacs command named pt-compile. It works just like the compile command, except that it sets default-directory to the appropriate obj.$PTARCH directory before compiling. To see how I did this configuration, feel free to look at ~parks/.emacs.

    Class browsing under emacs

    Tom points out: I made a simple file "pt-browse.el" and put it in the "/usr/tools/gnu/lib/emacs/ohm-lisp" directory. Just put
    (load "pt-browse")
    
    in your .emacs file. The ~ptdesign/src/makefile still needs a little work, but I did build a TAGS table in ~ptdesign/src by hand. I think I'll define a TDIRS variable that specifies which subdirectories should have TAGS tables. Once this works, then there should be nightly jobs to rebuild the tables.

    The TAGS file is built with a special version of the etags program, installed as etags++ on sww. It does not like some of the files in ptdesign. I think that it is possible to concatenate small TAGS files to create larger ones, so it may not be too hard to add a TAGS target to the makefiles in ptolemy.

    For more documentation, view the file /usr/sww/share/lib/info/c++-browse with info or tkinfo.

    See also tycho class browser docs.

    Compiler Hints

    See appendix A of the Ptolemy User's manual. Look for:
    1. Problems with the compiler itself
    2. Problems compiling files

    Problems with the compiler itself

    The first thing to try is compiling a 'hello world' program in C or C++. In C++, you should probably try using the stream functions; below is a sample file:
    #include <iostream.h>
    main()
    {
      cout << "Hello, Ptolemy.\n";
    }
    
    Try compiling the file with the gcc -v and -H flags turned on. -v tells you what steps the compiler is running; -H tells you what include files are being included.
    gcc -v -H hello.cc
    
    Look at each step of the compile, and pay particular attention to the assembler and loader steps.

    You can use the -save-temps gcc option to save any temporary files created in each step. Then, if necessary, you can try running each step by hand.

    as vs gas

    gcc can use the native assembler or the GNU assembler. Often the GNU assembler is installed as 'as'. Check your path to see which version you are getting. gcc can often be configured at compiler build time to use either the native assembler or the GNU assembler, but once the compiler is built, you are stuck with one or the other.

    Collect

    To pick up C++ Constructors and destructors, gcc can use the native loader or a program called 'collect' (See Joe Buck's g++ faq for more info). We usually use collect, because it works with the Pure tools. The collector is usually located at gcc-lib/$PTARCH/$COMPILER_VERSION/ld, for instance the sol2 collector might be at ~ptolemy/gnu/sol2/lib/gcc-lib/sol2/2.5.8/ld

    You can pass the collector arguments so that it will print out more information. Try

    g++ -v -Wl,-debug hello.cc
    
    or:
    make LINKER="g++ -v -Wl,-debug" PURELINK=
    
    The collector will also respond to certain environment variables, see the source in ~ptolemy/src/gnu/src/gcc/collect2.c

    collect creates a temporary file that has the constructors and destructors in it. To get collect to save the temporary file, set the following environment variable:

    setenv COLLECT_GCC_OPTIONS -save-temps		
    

    If the collector is picking up an old version of GNU nm, then you could have problems.

    Environment variables

    Certain environment variables control where the compiler looks for subprograms and include files. These four variables are usually set in the Ptolemy distribution so that users can run the prebuilt compiler, even if the distribution is not installed at /users/ptolemy.
    setenv GCC_EXEC_PREFIX $PTOLEMY/gnu/$PTARCH/lib/gcc-lib/$PTARCH/2.5.8/
    setenv C_INCLUDE_PATH $PTOLEMY/gnu/$PTARCH/lib/gcc-lib/$PTARCH
    setenv CPLUS_INCLUDE_PATH $PTOLEMY/gnu/$PTARCH/lib/g++-include:$PTOLEMY/gnu/$PTARCH/$PTARCH/include
    setenv LIBRARY_PATH $PTOLEMY/gnu/$PTARCH/lib
    
    See the gcc info format file for a complete list of environment variables. Note that GCC_EXEC_PREFIX must have a trailing /.
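    The trailing slash requirement is easy to miss. A small sketch of a check you could run in sh; the variable names follow the setenv lines above, and the scratch file is hypothetical:

    ```shell
    # GCC_EXEC_PREFIX must end in a slash; warn if it does not.
    prefix="$PTOLEMY/gnu/$PTARCH/lib/gcc-lib/$PTARCH/2.5.8/"
    case "$prefix" in
      */) echo "ok: trailing slash present" > /tmp/prefix_check ;;
      *)  echo "error: missing trailing slash" > /tmp/prefix_check ;;
    esac
    cat /tmp/prefix_check
    ```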

    If, under gcc-2.5.8, you get warnings about 'conflict with built in declaration', and your compiler is not installed where it was built, you may need to create a link in your gcc-lib. If you compile the file with the -v option, you can see what directories it is including. You could try creating a link:

    (cd $PTOLEMY/gnu/$PTARCH/lib/gcc-lib/$PTARCH/2.5.8; ln -s .. $PTARCH)
    

    Using Trace

    The SunOS4.1 trace command can be invaluable in determining what a program is doing at run time. If you compile with
    gcc -v -save-temps
    
    then you can try running trace on the various steps, and see each system call. Unfortunately, the filenames are truncated, but often this is enough to see what's going on.

    Problems Compiling files

    If you are having problems with include files, try modifying your hello.cc program to include those files. Note that you could be getting unexpected substitutions from cpp, so looking at the cpp output can be useful in solving compiler installation problems and include file problems.

    The gcc -E and -P options are very useful in wading through include file problems. -E stops compilation after the C preprocessor runs, and outputs the resulting file. -P strips off the line numbers from the output.

    1. Try using the -E option, and look at the output file. Sometimes the problem will be obvious. Note that if your -E compile args include -o filename.o, then filename.o will have cpp text output, not the usual object file. If you are within Ptolemy, you can try using the OPTIMIZER makefile flag to pass args to the compile. For instance:
      cxh@dewitt 18% make OPTIMIZER=-E Linker.o > Linker.e
      cxh@dewitt 19% ls -l Linker.e
      -rw-r--r--  1 cxh         68392 Jun 15 09:06 Linker.e
      cxh@dewitt 20% 
      
    2. Use -E -P, save the output to a new file called tst.cc and compile that file.
       make OPTIMIZER="-E -P" Linker.o > tst.cc
      
      Edit tst.cc and remove the first line, which will have the gcc command in it. Make tst.o
      make OPTIMIZER="-v -H" tst.o
      
      If you are getting weird substitutions this will help.
    3. Using '-E -dM' will tell you what symbols are defined by cpp at the end of the compile. See the gcc man page or the gcc info format file for more information.
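    The effect of -E and -P is easy to see on a tiny file. A sketch, assuming a cc that accepts these flags is on your path; the filenames are scratch examples:

    ```shell
    # A one-macro file to run through the preprocessor
    cat > /tmp/m.c <<'EOF'
    #define SIZE 16
    int buf[SIZE];
    EOF
    # -E stops after cpp, -P drops the line-number markers
    cc -E -P /tmp/m.c > /tmp/m.e
    cat /tmp/m.e    # the macro is substituted away: int buf[16];
    ```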

    Sources of information for compiler problems

    ~ptolemy/gnu/common/man/man1/gcc.1
    The gcc man page.
    ~ptolemy/gnu/common/info/gcc*
    The gcc info pages. Use emacs (M-x info) or tkinfo to view them.
    /users/ptdesign/src/gnu/g++-FAQ.txt
    Joe Buck's g++ FAQ. (Available via anonymous ftp from rtfm.mit.edu in pub/usenet/news.answers)
    There are a number of FAQs that are useful. Locally, most FAQs can be found in /usr/sww/doc/faq. See especially:
    /usr/sww/doc/faq/c.faq
    /usr/sww/doc/faq/c++.faq	
    /usr/sww/doc/faq/hpux.faq
    /usr/sww/doc/faq/Sun/solaris2.faq	
    /usr/sww/doc/faq/Sun/solaris2_porting.faq	
    /usr/sww/doc/faq/Sun/sun_sysadmin.faq	
    
    

    Undefined symbols

    If you get undefined symbols while linking pigiRpc or ptcl, use the 'nm -o' command to find out what file the symbol is coming from. On the hppa and sol2, use 'nm -rp'

    For example, suppose there is an undefined symbol called __MyMissingSymbol during the pigiRpc link.

    1. Run nm -o on the library directory:
      nm -o $PTOLEMY/lib.$PTARCH/* | grep __MyMissingSymbol
      
      If you find the missing symbol, then go to that library and try reinstalling it. That will often solve the problem.
    2. If the step above did not work, then you need to run 'nm -o' on all the object files used to create the binary.

      Copy the link command into a separate tmp file, edit the tmp file so that only the .o files remain, and add 'nm -o' to the beginning

      If the link command is

      g++ -L../../lib.sol2 -Xlinker -S -Xlinker -x -static pigiMain.o \
      defpalettes.o ../ptcl/PTcl.o ../../lib.sol2/sdfstars.o \
      ../domains/cgc/targets/main/CGCBDFTarget.o -L../../octtools/lib.sol2 \
      -lsdfimagestars -lImage -lsdfdspstars -lsdfstars -lLS -lsdf \
      -lsdfmatrixstars -lsdftclstars -lrpc -lpigi -lptk -lgantt -lptolemy -loh \
      -lrpc -llist -ltr -lutility -lst -lerrtrap -luprintf -lport \
      -L../../tcltk/tk.sol2/lib -ltk -lXpm -L../../tcltk/tcl.sol2/lib -ltcl \
      -L/usr/X11/lib -lX11 -lg++ -lm version.o 
      
      then edit this down to:
      nm -o \
      pigiMain.o defpalettes.o ../ptcl/PTcl.o ../../lib.sol2/sdfstars.o \
      ../domains/cgc/targets/main/CGCBDFTarget.o version.o
      
      If you have problems with line length and vi, then either use Emacs, or use backslashes to break up the line into multiple lines.

      One could write a makefile that automates this.

    3. Run your tmp file:
      sh ~/nmit | grep __MyMissingSymbol
      
      Look for files that contain the undefined reference.
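    The edit-down in step 2 is mechanical enough to script. A toy sketch in which the saved link command is a stand-in, with the actual nm run left as a comment:

    ```shell
    # A stand-in copy of the link command saved to a file
    echo 'g++ -L../../lib.sol2 pigiMain.o defpalettes.o -lsdf version.o' > /tmp/linkcmd
    # Keep only the .o arguments, one per line
    tr ' ' '\n' < /tmp/linkcmd | grep '\.o$' > /tmp/objs
    cat /tmp/objs
    # Then search them for the symbol:
    # nm -o `cat /tmp/objs` | grep __MyMissingSymbol
    ```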
    See the nm page for more info. Note that the hppa nm does not have a -o option.

    You may find it useful to see what steps the linker is taking. For instance, the linker might be accessing different library files than what you think. A common error is to be accessing the libg++.a file from an old version of libg++.

    c++filt

    The c++filt program can be used to demangle symbols so that you can find out what they refer to. Locally, c++filt for GNU compilers is at /usr/sww/bin/c++filt.

    Note that the name mangling scheme differs for different compilers. So you need to use a different c++filt for Cfront and GNU compilers. /usr/sww/bin/c++filt should work with GNU compilers. Be sure you are using the right c++filt with the right compiler. Locally, /usr/lang/c++filt might be in your path before /usr/sww/bin/c++filt

    If your symbol has special characters, you may need to place it in quotes.

    ptuser@babbage 22: echo '__$_4KUIM' | /usr/sww/bin/c++filt 
    KUIM::~KUIM(void)
    
    Note also, that the mangling scheme in GNU compilers changes between versions of the compiler, so c++filt might not be able to figure everything out.

    c++filt can be found in the GNU binutils distribution.

    Passing args to ld

    You can pass args to the C and C++ compilers which will then be passed to ld.

    Under Suns running Solaris 2.x with GNU C and C++ (ARCH=sol2):

    cd $PTOLEMY/obj.sol2/ptcl
    make LINKER="g++ -v -Wl,-debug -Wl,-t" PURELINK=
    
    Under Sun cfront, the following will tell you what files ld is processing.
    cd $PTOLEMY/obj.cfront/ptcl
    make ARCH=cfront LINKER="CC -qoption ld -t -v -V"
    

    Testing

    We should do some testing from aniljain and tereza. On aniljain, set $PTOLEMY to /vol/turing/turing1/scratch/gen. On aniljain, /scratch is a local scratch directory.

    On tereza, you will have to do your testing in the ~ptolemy directory and hope for the best, as tereza is not mounting /scratch.

    If you have an aniljain account, then you should work from it. On both aniljain and tereza, if you don't have your own account, you should work as ptuser, not as ptolemy. For best results, try setting the path:

    set path = ($PTOLEMY/bin /usr/bin /usr/ucb /bin .)
    

    Things to test

    1. Run all the demos and be sure that they work. The run-all-demos command will fail in the following palettes:
      sdf/multirate:                  barfs on 'broken'
      sdf/signalProcessingSystems:    barfs on 'animatedLMS'
      sdf/matrix:                     barfs on MatrixTest1 (crashes pigiRpc)
      cgc/multirate:                  barfs on filterBank
      
    2. Make a Purify'd version of pigiRpc and repeat 1. Eliminate any serious errors found (for example, you may have accesses to freed memory but still the demos would run OK just by luck).
    3. Core dump contest. The goal in the core dump contest is to make pigi or ptcl dump core (no user-written incrementally linked stars allowed). To score, you need to report the bug in a way so that it's repeatable (e.g. a pigiLog.pt that causes a crash, or a facet on ohm somewhere that makes pigi blow up). No points if your crash is judged to be identical to someone else's previously reported crash (same basic cause, that is). The person with the most core dumps wins. Tie breaker: a core dump that the average user is more likely to encounter could be considered to be worth more.
    4. Repeat 1 and 2 after fixing core dumps from 3. For any core dumps not easily fixed, we can add warnings to the documentation.
    5. Dynamic linking: There is an example of dynamic linking in ~cxh/c++/pt; a simpler example would be better. You may want to try out the GNU compiler for dynamic linking; here are the variables:
      setenv GCC_EXEC_PREFIX $PTOLEMY/gnu/$PTARCH/lib/gcc-lib/$PTARCH/2.5.8/
      setenv C_INCLUDE_PATH $PTOLEMY/gnu/$PTARCH/lib/gcc-lib/$PTARCH
      setenv CPLUS_INCLUDE_PATH $PTOLEMY/gnu/$PTARCH/lib/g++-include:$PTOLEMY/gnu/$PTARCH/$PTARCH/include
      setenv LIBRARY_PATH $PTOLEMY/gnu/$PTARCH/lib
      
    6. Look for bogus files. Untar all the tar files somewhere and look for weird files. Usually there is a copy of the untar'd distribution in /scratch/gen/ptolemy Try
      (cd ~ptolemy; gfind . -xdev -size 0 -ls)
      
      or
      (cd ~ptolemy/src; make checkjunk)
      
    Things to do before a release
  • Clean distribution: Before the prerelease, we will clean the distribution of bogus files and facets with bad pathnames (make checkjunk).
  • Memory leaks and errors: Before the prerelease, we'll fix as many of these as we can in pigilib, the kernel and the non-experimental domains
  • Compiler warnings: Before the prerelease, we'll fix as many as we can for gcc/g++, sol2.cfront and hppa.cfront. We'll pay special attention to cleaning up gcc/g++ warnings
  • Test builds: before the prerelease, we need to test build on: sol2, sun4, hppa, sol2.cfront, hppa.cfront, alpha.
  • Before the alpha release, we will do 'run all demos' under sol2, sun4 and hppa. If we have time, we will do 'run all demos' for sol2.cfront, hppa.cfront, irix5 and alpha.
  • We will test build under the latest release of tcltk and gcc/g++ on sol2 before we ship alpha, beta and final.
  • We will test build ipus on sol2 before releasing. The binaries we ship will not include ipus. Ipus has lots of warnings and does not compile on non-g++ environments.
  • We will test build in an environment that does not include matlab and mathematica. The binaries we ship will include matlab and mathematica support only if the build succeeds in an environment that does not have these packages.
  • Before beta, we'll test the release in an environment that does not have /users/ptolemy, so as to get rid of any dependencies.

  • Documentation

    Writing an Index

    When you list a class, include the word class:
    Star class
    
    When you are referring to the generic term star (vs. the class), don't capitalize
    star
    
    Star and star will be two separate index entries. Also, we should avoid plurals.
    star
    
    not
    stars
    
    Also, when you give a method, do it like this:
    Star:print method
    
    This will make an index subentry. Not:
    Star::print()
    
    Nor:
    print method
    
    The following would be OK, if you have reason for it:
    print method:in Star class
    

    We use specialized markers to indicate the different types of indexes. The markers are renamed in the $FMHOME/fminit/usenglish/Maker.us file. If Framemaker is updated on the Software Warehouse, then you will need to build a dummy tree locally so that we can modify this file. Usually the tree is in /opt and the FMHOME variable is set to that directory. To build the tree, as root you could do something like:

    cd /opt
    mkdir -p frame-5.1/fminit/usenglish
    cd frame-5.1
    ln -s /usr/sww/share/frame-5.1/* .
    cd fminit
    ln -s /usr/sww/share/frame-5.1/fminit/* .
    cd usenglish
    ln -s /usr/sww/share/frame-5.1/fminit/usenglish/* .
    mv Maker.us Maker.us.orig
    cp Maker.us.orig Maker.us
    
    Then edit Maker.us. Below are the diffs:
    
    37,42c37,48
    < Maker*Marker.11: Type 11
    < Maker*Marker.12: Type 12
    < Maker*Marker.13: Type 13
    < Maker*Marker.14: Type 14
    < Maker*Marker.15: Type 15
    < Maker*Marker.16: Type 16
    ---
    > Maker*marker.11: IndexReference
    > Maker*marker.12: IndexExample
    > Maker*marker.13: IndexDefinition
    > Maker*marker.14: IndexStarRef
    > Maker*marker.15: IndexStarEx
    > Maker*marker.16: IndexStarDef
    > ! Maker*Marker.11: Type 11
    > ! Maker*Marker.12: Type 12
    > ! Maker*Marker.13: Type 13
    > ! Maker*Marker.14: Type 14
    > ! Maker*Marker.15: Type 15
    > ! Maker*Marker.16: Type 16

    Framemaker hints

  • If you turn on Framemaker's change bars, it will make it easier for people to proofread your changes. If the changes to a chapter are substantial, then we probably want to turn changebars off before shipping the documentation.
  • Make sure that you run Frame's spell checker on a doc.
  • The index and table of contents can be updated by running a rule in the makefile. For instance /users/ptdesign/doc/users_man/users_manIX.doc can be updated with cd ~ptdesign/doc/users_man; make update_book. The book can also be updated by opening users_man.book from within Frame. Note that to produce the Index and TOC, none of the files can be locked by other users editing them.
  • See /usr/cluster/doc/framemaker.doc for more information about frame.
  • Including Ptolemy Facet Images in Frame docs

    Figures that contain images of Ptolemy facets should all be of the same style and format. Using encapsulated postscript (EPS), while very tempting, is not recommended, as some printers have problems printing framemaker documents that contain EPS. Please stick with the style below, and save yourself the trouble of supporting users who cannot print your EPS docs. In the Ptolemy manuals, a figure consists of the following objects:
    1. An anchored frame that contains all of the parts of the figure. Usually the frame is anchored at the cross reference figure number in the description. Typical text would be: "The palette in figure 5-9 shows a collection of stars for format conversion . . ." In the example above, 5-9 is the cross reference, and the anchor should be just after it. Note that you have to create the figure before creating the cross reference.
    2. A text frame that contains the postscript by reference. The trick here is to type in the name of the postscript file into the text frame, and then tell Frame that this is postscript.
    3. A text frame that contains the figure paragraph that describes the figure. The cross reference will refer to this paragraph
    Here are the detailed instructions:
    1. From within Ptolemy, use the Control-P to bring up the Ptolemy print window. Then print your facet to a file. In the final results, you probably want to have each icon about 1/2 inch on a side. If you save the facet after printing it, your offsets will be saved on disk. Note that because of a bug in pigi, upon exiting you will not be prompted to save your facet, you will need to remember to save it yourself.
    2. From within Frame, create an anchored frame by mousing in the framemaker window and then selecting Special->Anchored Frame
    3. Place a text frame inside your anchored frame. (Use the text frame tool from the graphics tools palette).
    4. To import the postscript, type the name of the Postscript file into the text frame #include /users/ptdesign/src/domains/sdf/demo/init.pal.ps
    5. Then in Format->Customize layout->Customize Text frame , select the Postscript code box
    6. To check your image, you can print just one page to a Postscript file, run fixframeps on it, and then preview it with ghostview: fixframeps sdf.ps; ghostview sdf.ps. If you are using a virtual window manager, then you can have frame running in one screen, pigi running in another, and ghostscript running in another. In this way, you can quickly preview images without printing.
    7. If you want to crop the top of the image so that the text that describes the palette is not visible in the frame document, you can grab the top of the outer frame and resize the entire frame. This is a little tricky, but once you get the hang of it, the process is fairly quick. If necessary, you may need to reprint the facet from Ptolemy with different horizontal and vertical offsets.
    8. Once you are happy with the figure itself, create another text box at the bottom of the anchored frame and type in a description. The paragraph should be of the figure format.
    9. Once you have a figure format description paragraph, you can create a cross reference at the top where the anchored frame is anchored.

    WWW

    WWW is the World Wide Web. We have a WWW front page on our ftp site. Currently, Tom Parks is doing most of the work with our WWW site.

    From: Tom Parks
    To: ptdesign@bennett.eecs.berkeley.edu
    Subject: Documenting demos for the World Wide Web

    If you want to document a demo for the World Wide Web (WWW), then first have a look at the demos that are already installed. Start at the Ptolemy home page

    http://ptolemy.eecs.berkeley.edu/
    
    then follow the link to the "quick tour". Use the "View Source" or "Save As..." items in the "File" menu of Mosaic if you want to view or save the hyper-text markup language (HTML) source for these demos. There is also on-line documentation available for HTML under the "Help" menu.

    To create your own document, create a new directory to keep all your files in. You'll need an HTML file, a screen dump of the facet, screen dumps of any plots, and any sound files that are generated. Once you have it ready, let me know where to find it, and I'll install it on the FTP site.

    I have installed the script "xgif" in ~ptdesign/bin. It passes any command line arguments to xwd (I suggest you read the man page for xwd) then converts the window dump into a GIF file and writes it to stdout. If you want to make a movie, talk to me and I can give you some help.

    Tom's tips:

    Here are some tips I have figured out in the process of preparing scanned images for my WWW home page.

    When I scan an image, I adjust the resolution (dpi) so that the image will be between 500 and 1000 pixels in the larger dimension. Keep in mind that your screen is probably 1152x900, so there isn't any point in making really huge images. I save the scanned images as 24-bit color TIFF files, then use FTP to transfer them from the Macintosh to my workstation. Then I use xv to convert the file to JPEG and save it with the default quality setting of 75. JPEG offers about 30 to 1 compression with very little loss of quality. I find that the artifacts from quantizing the color map to 8-bit color for display are much more dramatic than the losses in JPEG coding. At this point I discard the original TIFF file and the JPEG file becomes my "master" copy.

    Because JPEG images cannot be included inline with HTML (but they can still be used with external viewers), it is necessary to create a GIF format version of your image. The goal for inline images is to make the file as small as possible so that it won't take too long to download over the net. I suggest that you crop your image and shrink it by a factor of 4 or more, then save it as a *reduced color* GIF. The reduced color option saves space, and it doesn't really degrade the image because Mosaic only reserves a small number of colors for inline images anyway.

    The URL to access the Ptolemy front page is:

    http://ptolemy.eecs.berkeley.edu
    
    The http docs are installed as:
    file://localhost/usr/tools/etc/httpd/docs/index.html
    
    messier is the machine running the httpd daemon.

    Patches

    To generate patches, I use diff -c. I don't think I have it quite right, as my patches require the -p2 flag to strip the first two path components. What I do is this:
    1. Grab the old source, untar it somewhere, and call it old:
      mkdir old
      cd old
      gzcat pt-0.5.1.src.tar.gz | tar -xf -
      cd ..
      
    2. place the new source next to it, usually I make a tar, but you might be able to use links.
      mkdir new
      Untar the new stuff
      
    3. Now I have a directory with old/ and new/ and under old/ and new/ is a ptolemy/ directory:
      old/ptolemy
      new/ptolemy
      
    4. Then I do a diff:
      diff -c -r old new > patch3
      
    This is probably not the best way to do it, but it works for my purposes. Note that you should probably make a separate tar file for the octtools changes (get pt-0.5.1.other.src.tar.gz)
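    The whole procedure, including the -p2 application mentioned earlier, can be rehearsed on toy files. A sketch using scratch directories in /tmp, not the real tree; it assumes the standard patch program is on your path:

    ```shell
    # Build toy old/ and new/ trees
    cd /tmp && rm -rf ptpatch && mkdir ptpatch && cd ptpatch
    mkdir -p old/ptolemy new/ptolemy
    echo 'version 1' > old/ptolemy/readme
    echo 'version 2' > new/ptolemy/readme
    # diff exits nonzero when the trees differ, hence the || true
    diff -c -r old new > patch1 || true
    # Apply to a copy of the old tree; -p2 strips the old/ptolemy prefix
    cp -r old/ptolemy work
    cd work && patch -p2 < ../patch1
    cat readme
    ```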

    When I generate a patch, I always try running it on the original source to make sure it works. Note that the ptolemy source has some links in it which means that some files end up in the patch twice (ddf/cg and tech/ptolemy are two offenders). If a file is in the patch twice, then the second time you will be asked if you want to reverse the patch. Merely edit the patch file by hand and remove the second diff.

    Any version of diff should work; the key thing is to use the -c flag, which causes the lines around each changed line to be printed. I use GNU diff, but anything should work.

    Between releases, we may end up producing patches to fix bugs or provide enhancements.

    The patches should be applied in order. If a user needs patch3, then they should apply patch1 and patch2 first, and then patch3.

    ~ptolemy/adm/gen-0.5/makefile contains targets to produce patches. ~ptolemy/adm/gen-0.5/mkpatch is a shell script that will produce a context style patch. See the makefile for usage.

    mkpatch automatically includes changes to pigiRpc/makefile that reflect the new patch number.

    Before releasing a patch, you should test it out on as many platforms as possible.

    1. On a scratch disk, or somewhere where there is space, grab the tar files from the ftp site.
      zcat /vol/ptolemy/pt0/ftp/pub/ptolemy/ptolemy0.5/pt-0.5.src.tar.Z | tar -xf -
      
    2. If you are making changes to octtools, you will have to grab the other.src tar file
      zcat /vol/ptolemy/pt0/ftp/pub/ptolemy/ptolemy0.5/pt-0.5.other.src.tar.Z | tar -xf -
      
    3. cd to the test directory
      cd ptolemy
      
    4. Apply all the patches in order:
      patch < /vol/ptolemy/pt0/ftp/pub/ptolemy/ptolemy0.5/patches/patch/pt-0.5-patch1
      patch < /vol/ptolemy/pt0/ftp/pub/ptolemy/ptolemy0.5/patches/patch/pt-0.5-patch2
      
      etc . . .
    5. set your PTOLEMY variable to the current directory
      setenv PTOLEMY `pwd`
      set path = ($PTOLEMY/bin.$PTARCH $PTOLEMY $path)
      
    6. If you changed octtools, build that first
      make install_octtools >& oct.0421 &
      
      If you are not rebuilding octtools and tcltk, make links:
      cd $PTOLEMY/tcltk; ln -s ~ptolemy/tcltk/* .
      cd $PTOLEMY/octtools; ln -s ~ptolemy/octtools/* .
      
    7. Build
      make install >& in.0421 &
      
    8. Test out your fixes
    9. remove your test directory when you are done.
    We provide patch binaries and source on our ftp site in pub/gnu

    When you build a patch, you may want to rebuild the binaries and create a tar file with the new binaries. The name of the tar file should be something like pt-0.5p1.sol2.tar.Z

    If at all possible, we should never replace files in a ftp tar file without changing the name to something else.

    Moving Ptolemy

    Sometimes it is necessary to make ~ptolemy point to a different location. A common reason to do this is that the local ~ptolemy disk or machine needs work.

    If you move ~ptolemy to point to the version on sww, be sure that the .forward file in the new location is the same as the .forward file in the old location. Also check that the .plan file is the same in both locations.

    Some users depend on .o files in ~ptolemy/obj.sol2, so it would be nice if that directory was present in the new tree.

    To move ~ptolemy you will need root permission:

    1. edit /usr/cluster/etc/amd/maps/users.map. Look for the ptolemy section
    2. cd ~cxh/sa/bin/doall
    3. restart the automounter on all machines:
      ./doall "amd-restart" `cat ohm`
      
    4. verify that the move worked:
      ./doall "cd /users/ptolemy;pwd" `cat ohm`
      more *.out | cat
      

    Porting to a new architecture

    If you port to a new architecture, here are some of the files to look at:
    $PTOLEMY/bin/ptarch
    Should return the name of your arch
    $PTOLEMY/mk/config-$PTARCH.mk
    Needs to be set up
    src/octtools/Packages/port/port.h
    Lots of architecture dependencies. Note that this file gets installed in: $PTOLEMY/octtools/include/port.h and $PTOLEMY/src/octtools/include/port.h
    src/compat/ptolemy/compat.h
    Used in pigilib and libgantt
    src/kernel/ptsignals.cc
    Used in pigilib to speed up the GUI. Contains architecture dependencies for signals and interrupts.
    src/thread/makefile
    Used under solaris to build GNU threads for the PN domain
    Getting dynamic linking to work is tricky.
    src/pigilib/pigiLoader.cc
    Used to dynamically link stars
    src/kernel/Linker.*
    Used to dynamically link stars
    You may find it useful to determine what symbols are predefined for a new architecture. If you are using g++, try compiling a simple c++ file with the -E -dM flags. This will print out the cpp symbols that are defined at the end. See the gcc info (you can use tkinfo) for more information.

    If you are on a non GNU compiler, try looking at the cpp man page for hints about what symbols are predefined. If necessary, you can pass the compiler a symbol from your config-$PTARCH.mk file (i.e. CCFLAGS=-DMYARCH).

    For SystemV architectures, adding -DUSG -DSYSV to CCFLAGS in config-$PTARCH.mk could help.
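    The -E -dM trick works on an empty file too, which is the quickest way to dump every predefined symbol. A sketch, assuming a cc that accepts -dM is on your path; the filenames are scratch examples:

    ```shell
    # An empty translation unit; cpp still defines its builtin macros
    echo '' > /tmp/empty.c
    cc -E -dM /tmp/empty.c | sort > /tmp/predef
    # Every hosted compiler should at least define __STDC__
    grep __STDC__ /tmp/predef
    ```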

    compat.h

    I'm trying to move more of the system dependent code into src/compat/ptolemy/compat.h. If more of the system dependent code is in one place, then ports might be easier.

    #ifdef arch

    If you need to make #ifdefs in a file that are architecture dependent, try using the #defines from compat.h. All the architectures should have a #define. For example, sol2 has #define PTSOL2 in compat.h, and sol2.cfront has #define PTSOL2_CFRONT. If you use the PTxxx defines, then you won't have to have some of the crazy 'how do I tell I'm on a solaris2 sun running cfront' #defines.
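    A sketch of the idea: one PTxxx test instead of a tangle of compiler and OS checks. Here PTSOL2 comes from -D on the command line rather than from compat.h, just so the example is self-contained, and the filenames are scratch examples:

    ```shell
    cat > /tmp/arch.c <<'EOF'
    #include <stdio.h>
    /* One architecture define, as compat.h would provide */
    #if defined(PTSOL2)
    const char *arch = "sol2";
    #else
    const char *arch = "other";
    #endif
    int main(void) { printf("%s\n", arch); return 0; }
    EOF
    cc -DPTSOL2 -o /tmp/arch /tmp/arch.c
    /tmp/arch    # prints sol2
    ```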

    function prototypes

    If you need to add a system call function prototype, you may want to add it to compat.h. Note that getting function prototypes to work on all platforms can be tricky. For example, Sun4 cc does not handle function prototypes. If you are working on site, you can use 'glimpse -H ~ptdesign/src a_function_call' to see if other files use 'a_function_call'.

    (Of course, all the compat.h conventions are merely guidelines.)

    include files

    If you want to add another include file, the first thing to try is to look on an older machine, such as a SunOS4.1 machine, and see if it is there. The second thing to do is to use glimpse to search for other files that use the include file in question.

    general procedure

    The first thing to port is octtools; make sure that vem starts up. If you do 'setenv OCTTOOLS $PTOLEMY', then vem should start.

    Then build tcltk.

    Then try to build just what is necessary for an SDF- and DE-only ptcl (kernel, ptcl, domains/de, domains/sdf). Then try a complete build.
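The sequence above might look like the following session. The object-directory paths and bare make targets are assumptions based on the usual $PTOLEMY/obj.$PTARCH layout; check the top-level makefiles for the real targets:

```shell
# Sketch of the porting order; paths and targets are assumptions, not verified.
OCTTOOLS=$PTOLEMY; export OCTTOOLS   # csh users: setenv OCTTOOLS $PTOLEMY

(cd $PTOLEMY/obj.$PTARCH/octtools && make)   # step 1: octtools, then try vem
(cd $PTOLEMY/obj.$PTARCH/tcltk && make)      # step 2: tcl/tk
# step 3: just enough for an SDF- and DE-only ptcl
for d in kernel ptcl domains/de domains/sdf; do
    (cd $PTOLEMY/obj.$PTARCH/$d && make)
done
(cd $PTOLEMY/obj.$PTARCH && make)            # step 4: complete build
```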

    Eventually, we should use the GNU autoconf tool, which would allow us to use defines like -DHAS_FOO instead of depending on #if defined(mynewarch).

    testing

    Octtools has some tests now. The tests are not perfect, but they will point out some bugs. To run the octtools tests, build octtools and then do "cd $PTOLEMY/obj.$PTARCH/octtools; make tests"

    Changing gcc versions

    If you change gcc versions, you should look at the defines in $PTOLEMY/.cshrc and $PTOLEMY/bin/g++-setup to make sure that they refer to the right version. Also, $PTOLEMY/src/gnu/makefile contains references to the current version, along with $PTOLEMY/src/gnu/README and $PTOLEMY/src/gnu/INSTALL. Also, check the COLLECTOR variables in $PTOLEMY/mk/config-*.mk.
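One way to catch stale references after an upgrade is to grep the files above for the old version string. This is a sketch; the version number is only a placeholder for the one you are moving away from:

```shell
# List which of the usual suspects still mention the old gcc version.
# '2.5.8' is a placeholder; substitute the version you are replacing.
for f in $PTOLEMY/.cshrc $PTOLEMY/bin/g++-setup \
         $PTOLEMY/src/gnu/makefile $PTOLEMY/src/gnu/README \
         $PTOLEMY/src/gnu/INSTALL $PTOLEMY/mk/config-*.mk; do
    grep -l '2\.5\.8' $f
done
```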

    Ptk commands

    Currently, the ptk tcl/tk commands are not really documented. I believe that the interface is still in flux, so we don't want to publicize the specification. See ~ptdesign/adm/doc/ptk.doc for more info.

    Embedded Postscript (EPS) in Frame docs

    If you use EPS in a Frame doc that will be part of the Ptolemy doc distribution, then you must run ~ptdesign/bin/fixPS on it.

    If you use EPS in a Frame doc that will be part of the Ptolemy distribution, you should add rules to the appropriate makefile that will run fixPS on your PostScript file. For example, cg56.ps and cg96.ps in the users manual need to have fixPS run on them. So, ~ptdesign/doc/users_man/makefile contains the lines:

    FIXPS=/users/ptdesign/bin/fixPS
    fm2ps: update_book
    	fmprint2ps $(DOCS)
    	$(FIXPS) cg56.ps
    	$(FIXPS) cg96.ps
    
    If you don't run fixPS on your Frame PostScript files that use EPS, we will hear about it after the next release.

    Contributing Stars

    [This section should move into the users manual, perhaps into AppendixA]

    The Ptolemy development group is interested in any stars Ptolemy users might want to contribute to the project, but we have limited resources, so here are a few guidelines for contributions:

    Distributing stars We are proposing two methods of distributing stars: via ftp and via the next Ptolemy distribution. These instructions are primarily for distribution via ftp from our ftp site.

    If your stars are to be distributed from our ftp site, we will put them in a contrib directory.

    If we distribute your star in the next Ptolemy release, we will probably want to add the Berkeley copyright to your stars.

    We have not resolved the details of where contrib stars will go, but currently we are thinking of putting stars into contrib directories next to the stars directories. So, $PTOLEMY/src/domains/sdf/contrib/stars would contain sdf .pl files that users had contributed. Stars that we find really useful, we would like to incorporate into the main stars/ directories. The primary reason for a contrib directory is that we don't have the resources to fold every contributed star into the main tree.

    Packaging contrib stars to be included into the next Ptolemy release is more work for everyone involved, but the stars are more likely to be used. Due to time constraints, we might not be able to get contrib stars into the 'next' Ptolemy release, but we will try.

    Copyright If your stars are to be included in the Ptolemy distribution, it makes our lives a lot easier if you put the UC Berkeley copyright on the .pl files. See any .pl file for the current UC Berkeley copyright. If you need a different copyright, then we'll have to negotiate.

    What to include If you are contributing stars, please include .pl files, icons, and demos.

    1) Your .pl file should have a description of the star. Be sure to include your name as the author in the .pl file.

    2) The demo directory should contain a facet "init.pal" with an icon for each demo and a facet "user.pal" with icons for each star. Including a custom icon is nice. Including a demo really helps the Ptolemy group test your star. Stars without demos are more likely to be broken.

    [This step needs to be polished -cxh] 3) The init.pal and user.pal palettes, and all demos, should contain ***only*** references to the directory $PTOLEMY/src/domains/the-domain/contrib/your_hopefully_unique_dir_name. To check this latter condition, run

    masters {init.pal, user.pal ...}
    
    Type a ? to list the references. Any that have absolute path names, such as
    /usr/users/my_home/my_stars/star_name
    
    or relative path names using a user's home directory, such as
    ~user_name/my_stars/starname
    
    are no good. You can easily replace these with $PTOLEMY/contrib/... using the masters program. For example, a good contrib directory would look like:
    $PTOLEMY/src/domains/sdf/contrib/my_star/src/SDFMyStar.pl
    
    The demo and icon directories below would also be included:
    $PTOLEMY/src/domains/sdf/contrib/my_star/demo/MyStarDemo
    $PTOLEMY/src/domains/sdf/contrib/my_star/icons/MyStar
    $PTOLEMY/src/domains/sdf/contrib/my_star/icons/init.pal
    $PTOLEMY/src/domains/sdf/contrib/my_star/icons/user.pal
    
    Below is an example session that runs masters
    cp -R my_directory/* $PTOLEMY/src/domains/sdf/contrib/my_star
    cd $PTOLEMY/src/domains/sdf/contrib/my_star/icons
    # remove any junk files
    masters init.pal
    ...
    masters user.pal
    ...
    masters others...
    cd $PTOLEMY/src/domains/sdf/contrib/my_star/demo
    masters MyStarDemo
    tar cvf ...
    uuencode ...
    

    FTP

    For security reasons, we don't have a writable anonymous ftp site, so sending us stars can be tricky. If you run your own anonymous ftp site, we can pick up your stars from there. Another solution is to create a tar file of the facets and .pl files, uuencode it, and mail it to us.
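Packaging a contribution for mail might look like the session below. The directory and file names are placeholders for your own star directory, and uuencode comes from the sharutils package on some systems:

```shell
# Bundle a contrib star directory and uuencode it for mailing.
# 'my_star' is a placeholder for your own directory name.
cd $PTOLEMY/src/domains/sdf/contrib
tar cvf my_star.tar my_star               # facets, .pl files, icons, demos
uuencode my_star.tar my_star.tar > my_star.uu
# then include my_star.uu in your mail message to the Ptolemy group
```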

    Misc.

    See Also

    $PTOLEMY/src/gnu/README All about GNU software
    ~ptdesign/adm/doc/porting.txt Notes about porting Ptolemy to different architectures
    $PTOLEMY/src/octtools/README Contains discussion about vem and gcc