To optimize the pipeline we need quantified information about its bottlenecks. To get this information we need to monitor a run of the pipeline. We identified three possibilities to do so:

  1. Use monitoring that is already in place on the cluster.
  2. Write a script that runs data-gathering tasks in the background.
  3. Run the pipeline on a separate but comparable machine to which we have root access, so we can install monitoring software ourselves.

We asked about option 1) but got a negative answer. Option 3) is a last resort, because such a run will not be entirely the same as running on the cluster. For the second option, David van Enckevort wrote a script that gathers information from different tools in the background. This script first needs to be tested on the cluster and, if it works, integrated with the pipeline.

The script uses tools that should normally be available on the cluster nodes (e.g. top, netstat, vmstat, lsof), and it can easily be extended with other tools. It starts the tools, using each tool's options to run continuously where useful, before running the actual pipeline script. All the information is collected in files in a work directory.
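The actual script is not reproduced here; a minimal sketch of the approach it describes could look like the following. The function name `start_tool`, the sampling interval, and the placeholder pipeline command are assumptions for illustration, not part of the real script.

```shell
#!/bin/sh
# Sketch of a background-monitoring wrapper: start sampling tools,
# run the pipeline, then stop the tools. Logs land in a work directory.

WORKDIR="${1:-monitor-output}"   # work directory for the collected logs
mkdir -p "$WORKDIR"

PIDS=""
# Start a tool in the background if it is installed, logging to the work dir.
start_tool() {
    name="$1"; shift
    if command -v "$name" >/dev/null 2>&1; then
        "$name" "$@" > "$WORKDIR/$name.log" 2>&1 &
        PIDS="$PIDS $!"
    fi
}

# Tools that support continuous sampling (interval in seconds);
# snapshot tools like netstat or lsof would need a small polling loop.
start_tool vmstat 5
start_tool top -b -d 5

# Placeholder for the actual pipeline script.
sh -c 'sleep 1'

# Stop the monitors; their logs remain in $WORKDIR for analysis.
kill $PIDS 2>/dev/null
wait 2>/dev/null
exit 0
```

Guarding each tool with `command -v` lets the same wrapper run on nodes where some of the tools are missing, which matters when we cannot install software ourselves.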

Last modified on Dec 14, 2010 4:45:28 PM