man tricensus-mpi (Commands) - Distribute a triangulation census amongst several machines using MPI

NAME

tricensus-mpi - Distribute a triangulation census amongst several machines using MPI

SYNOPSIS

tricensus-mpi [ -o, --orientable | -n, --nonorientable ] [ -f, --finite | -d, --ideal ] [ -m, --minimal | -M, --minprime | -N, --minprimep2 ] pairs-file output-file-prefix

DESCRIPTION

Allows multiple processes, possibly running on different machines, to collaborate in forming a census of 3-manifold triangulations. Coordination is done through MPI (the Message Passing Interface), and the entire census is run as a single MPI job. Note: This program is well suited to running on formal cluster infrastructure. For a more ad-hoc census manager that does not rely on such infrastructure, see the tricensus-manager utility instead.

Before a census can be distributed amongst several processes or machines, it must be split into smaller pieces. Running tricensus with the option --genpairs (which is quite fast) creates a list of face pairings, each of which must be analysed in order to complete the census. The census is split into pieces by handing out one face pairing to each process at a time. Note: Whereas tricensus-mpi uses a single large face pairings file (with MPI handling the distribution of pairings to individual processes), the alternative tricensus-manager uses many small face pairings files (with individual processes claiming individual files to work on).

The full list of face pairings should be stored in a single file, which is passed on the command-line as pairs-file. This file must contain one face pairing per line, and each of these face pairings must be in canonical form (i.e., must be a minimal representative of its isomorphism class). Note that the face pairings generated by tricensus --genpairs are guaranteed to satisfy these conditions.
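Because the file holds exactly one face pairing per line, its line count equals the number of subcensus jobs that will be farmed out. A minimal sketch of this check (the file contents below are dummy placeholders, not real face pairing syntax; real pairings come from tricensus --genpairs):

```shell
# Create a stand-in pairs file with three placeholder lines.
printf 'pairing-1\npairing-2\npairing-3\n' > example.pairs

# One face pairing per line, so the line count is the number of jobs.
wc -l < example.pairs
```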

This tricensus-mpi utility works as follows. One MPI process acts as the controller, and the remaining processes each act as slaves. The controller reads the list of face pairings from pairs-file and farms these face pairings out to the slaves for processing. Each slave processes one face pairing at a time, asking the controller for a new face pairing when it is finished with the previous one.

The individual face pairings are numbered 1, 2, ... according to their position in pairs-file. A slave processing face pairing number k will perform the following tasks.

•
A time file is created with the name output-file-prefix_k.time, listing the specific face pairing being processed as well as which MPI slave has taken the job and when.
•
The subcensus for this specific face pairing is run, and the resulting triangulations are written to the topology data file output-file-prefix_k.rga.
•
The time file is updated to include the time at which the subcensus finished and how much CPU time it consumed.
•
The slave notifies the controller of the results and requests another face pairing for processing.
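To illustrate the file naming that the steps above imply, suppose the hypothetical prefix 5-nor and face pairing number k = 3. The files are created here as empty stand-ins; the real files are written by tricensus-mpi itself:

```shell
prefix=5-nor; k=3                 # hypothetical prefix and pairing number

# Stand-ins for the two files a slave produces for pairing number k:
touch "${prefix}_${k}.time"       # job metadata: pairing, slave, timing
touch "${prefix}_${k}.rga"        # topology data: resulting triangulations

ls "${prefix}_${k}.time" "${prefix}_${k}.rga"
```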

The controller and slave processes all take the same tricensus-mpi options (excluding MPI-specific options, which are generally supplied by an MPI wrapper program such as mpirun or mpiexec). The different roles of the processes are determined solely by their MPI process rank (the controller is always the process with rank 0). It should therefore be possible to start all MPI processes by running a single command, as illustrated in the examples below.

As the census progresses, the controller keeps a log of which slaves are processing which face pairings. This log is written to the file output-file-prefix.log. Tip: Once the census is complete, the regconcat command may be used to combine the many small topology data files into one larger file for easier handling.
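For instance, with the hypothetical prefix 5-nor, the per-pairing data files might be combined as follows (this assumes regconcat accepts -o to name the combined output file; see the regconcat reference for its exact options):

```shell
# Combine all per-pairing topology data files into one file.
regconcat -o 5-nor-all.rga 5-nor_*.rga
```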

OPTIONS

The census options accepted by tricensus-mpi behave identically to the same options when passed to tricensus. See the tricensus reference for further details.

Note that some tricensus options are not available here (e.g., the tetrahedra and boundary options), since these must be supplied earlier, when generating the initial list of face pairings through tricensus --genpairs.

EXAMPLES

Suppose we wish to form a census of all 5-tetrahedron closed non-orientable triangulations, where the census is optimised for prime minimal P2-irreducible triangulations (and in particular, some triangulations that are not prime, minimal and P2-irreducible may be left out).

We begin by using tricensus to generate a full list of face pairings.

    example$ tricensus --genpairs -t 5 -i > 5.pairs
    Total face pairings: 28
    example$

We now use tricensus-mpi to run the distributed census. A wrapper program such as mpirun or mpiexec can generally be used to start the MPI processes, though this depends on your specific MPI implementation. The command for running a distributed census on 10 processors for the MPICH implementation of MPI is as follows.

    example$ mpirun -np 10 /usr/bin/tricensus-mpi -Nnf 5.pairs 5-nor
    example$

The current state of processing can be watched in the controller log 5-nor.log and in the individual time files 5-nor_1.time, ..., 5-nor_28.time. The resulting triangulations are saved in the files 5-nor_1.rga, ..., 5-nor_28.rga.
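One way to watch the run from a shell (standard tools only; the file names follow the output-file-prefix passed on the command line):

```shell
# Follow the controller log as jobs are handed out and completed.
tail -f 5-nor.log

# Inspect the progress of an individual face pairing.
cat 5-nor_5.time
```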

SEE ALSO

regconcat, sigcensus, tricensus, tricensus-manager, regina-kde.

AUTHOR

Regina was written by Ben Burton <bab@debian.org> with help from others; see the documentation for full details.