IMP 2.3.1
The Integrative Modeling Platform
IMP::parallel Namespace Reference

Distribute IMP tasks to multiple processors or machines. More...

Detailed Description

Distribute IMP tasks to multiple processors or machines.

This module employs a master-slave model; the main (master) IMP process sends the tasks out to one or more slaves. Tasks cannot communicate with each other, but return results to the master. The master can then start new tasks, possibly using results returned from completed tasks. The system is fault tolerant; if a slave fails, any tasks running on that slave are automatically moved to another slave.

To use the module, first create a Manager object. Add one or more slaves to the Manager using its add_slave() method (example slaves are LocalSlave, which simply starts another IMP process on the same machine as the master, and SGEQsubSlaveArray, which starts an array of multiple slaves on a Sun Grid Engine cluster). Next, call the get_context() method, which creates and returns a new Context object. Add tasks to the Context with the Context.add_task() method (each task is simply a Python function or other callable object). Finally, call Context.get_results_unordered() to send the tasks out to the slaves; this method returns the results from each task as it completes. A slave runs only a single task at a time; if there are more tasks than slaves, later tasks are queued until a slave finishes an earlier one.
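
For illustration, a minimal sketch of this workflow. The task function, its arguments, and the use of functools.partial are invented for this example; in real use the task callables usually live in a separate importable module (here hypothetically called mytasks.py) so that the slave processes, which run as separate Python interpreters, can load them.

# --- mytasks.py (hypothetical module holding the task callables) ---
def sum_range(n):
    # Example task: any callable the slaves can load may be used
    return sum(range(n))

# --- main (master) script ---
import functools
import IMP.parallel
import mytasks

m = IMP.parallel.Manager()
m.add_slave(IMP.parallel.LocalSlave())      # one slave on the local machine
c = m.get_context()
for n in (10, 100, 1000):
    # functools.partial binds an argument, giving a callable task object
    c.add_task(functools.partial(mytasks.sum_range, n))
for result in c.get_results_unordered():    # results arrive as tasks finish
    print(result)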

Setup in IMP is often expensive, and thus the Manager.get_context() method allows you to specify a Python function or other callable object to do any setup for the tasks. This function will be run on the slave before any tasks from that context are started (the return values from this function are passed to the task functions). If multiple tasks from the same context are run on the same slave, the setup function is only called once.
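
A hedged sketch of how a setup function might be used, continuing the example above. The setup() and score_task() names are invented, and it is assumed here that the values returned by setup() are unpacked as positional arguments of each task; as before, the callables would live in an importable module such as the hypothetical mytasks.py.

# --- in mytasks.py ---
def setup():
    # Runs once per slave, before any task from this context is started;
    # the values it returns are passed on to the task functions
    import IMP
    m = IMP.Model()
    return m, "shared setup data"

def score_task(model, label):
    # Receives the values returned by setup() above
    return "%s: %s" % (label, model)

# --- in the main script ---
c = m.get_context(mytasks.setup)   # 'm' is the Manager from the previous sketch
c.add_task(mytasks.score_task)
for result in c.get_results_unordered():
    print(result)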

Troubleshooting

Several common problems with this module are described below, together with solutions.

  • Master process fails with /bin/sh: qsub: command not found, but qsub works fine from a terminal.
    SGEQsubSlaveArray uses the qsub command to submit the SGE job that starts the slaves. Thus, qsub must be in your system PATH. This may not be the case if you start IMP via a shell script such as imppy.sh, since such scripts can override PATH. To fix this, modify the script to add the directory containing qsub to PATH, or remove the script's setting of PATH entirely.
  • The master process 'hangs' and does not do anything when Context.get_results_unordered() is called.
    Usually this is because no slaves have successfully started up. Check the slave output files to determine what the problem is.
  • Slave output files contain only a Python traceback ending in ImportError: No module named IMP.parallel.slave_handler.
    The slaves simply run 'python' and expect to be able to load the IMP Python modules. If you need to run a modified version of Python, or normally prefix your Python command with a shell script such as imppy.sh, you need to tell the slaves to do the same. Specify the full command line needed to start a suitable Python interpreter as the 'python' argument when you create the Manager object (see the sketch after this list).
  • Slave output files contain only a Python traceback ending in socket.error: (110, 'Connection timed out').
    The slaves need to connect to the machine running the master process over the network. This connection can fail (or time out) if that machine is firewalled. It can also fail if the master machine is multi-homed (a common setup for the headnode of a compute cluster). For a multi-homed master machine, use the 'host' argument when you create the Manager object (also shown in the sketch after this list) to tell the slaves the name of the machine as visible to them (typically this is the name of the machine's internal network interface).
  • Slave output files contain only a Python traceback ending in socket.error: (111, 'Connection refused').
    If the master encounters an error and exits, it will no longer be around to accept connections from slaves, so they will get this error when they try to start up. Check the master log file for errors. Alternatively, the master may have simply finished all of its work and exited normally before the slave started (either the master had little work to do, or the slave took a very long time to start up). This is normal.
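
For the 'python' and 'host' arguments mentioned above, the Manager might be created as sketched below; the wrapper script path and host name are placeholders for your own setup, not values taken from this documentation.

import IMP.parallel

# 'python': full command line used to start a suitable interpreter on the slaves
# 'host':   name of the master machine as visible to the slaves
m = IMP.parallel.Manager(python="/path/to/imppy.sh python",
                         host="headnode.internal")
# slaves are then added with add_slave() as usual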

Info

Author(s): Ben Webb

Maintainer: benmwebb

License: LGPL. This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

Publications:

Namespaces

 master_communicator
 Classes for communicating from the master to slaves.
 
 subproc
 Subprocess handling.
 
 util
 Utilities for the IMP.parallel module.
 

Classes

class  Context
 A collection of tasks that run in the same environment. More...
 
class  Error
 Base class for all errors specific to the parallel module. More...
 
class  LocalSlave
 A slave running on the same machine as the master. More...
 
class  Manager
 Manages slaves and contexts. More...
 
class  NetworkError
 Error raised if a problem occurs with the network. More...
 
class  NoMoreSlavesError
 Error raised if all slaves failed, so tasks cannot be run. More...
 
class  RemoteError
 Error raised if a slave has an unhandled exception. More...
 
class  SGEPESlaveArray
 An array of slaves in a Sun Grid Engine system parallel environment. More...
 
class  SGEQsubSlaveArray
 An array of slaves on a Sun Grid Engine system, started with 'qsub'. More...
 
class  Slave
 Representation of a single slave. More...
 
class  SlaveArray
 Representation of an array of slaves. More...
 

Standard module functions

All IMP modules have a set of standard functions to help get information about the module and about files associated with the module.

std::string get_module_version ()
 
std::string get_module_name ()
 
std::string get_data_path (std::string file_name)
 Return the full path to installed data. More...
 
std::string get_example_path (std::string file_name)
 Return the path to installed example data for this module. More...
 

Function Documentation

std::string IMP::parallel::get_data_path(std::string file_name)

Return the full path to installed data.

Each module has its own data directory, so be sure to use the version of this function in the correct module. To read the data file "data_library" that was placed in the data directory of module "mymodule", do something like

std::ifstream in(IMP::mymodule::get_data_path("data_library"));

This will ensure that the code works when IMP is installed or used via the setup_environment.sh script.

std::string IMP::parallel::get_example_path(std::string file_name)

Return the path to installed example data for this module.

Each module has its own example directory, so be sure to use the version of this function in the correct module. For example, to read the file example_protein.pdb located in the examples directory of the IMP::atom module, do

IMP::atom::read_pdb(IMP::atom::get_example_path("example_protein.pdb"), model);

This will ensure that the code works when IMP is installed or used via the setup_environment.sh script.