There are two ways to run NAMD on the cluster: the old way and the new way.
The old way
The old way is to submit a batch job via Unix's 'at' command:
- Collect all required files (plus the NAMD script) in a clean directory somewhere in the /work directory.
- Use the following to submit the job:

    at -q z now
    /bin/mosrun -L /usr/local/NAMD_2.5/charmrun /bin/mosrun -L /usr/local/NAMD_2.5/namd2 +p9 heat.namd > LOG
    <CTRL-D>

  where '+p9' defines the number of processors to use (9 in this case).
- Check that the job starts.
- Renice jobs (man renice).
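As a rough sketch of what renicing the jobs on each node could look like in practice (the node names below are hypothetical and passwordless ssh to the nodes is assumed):

    # Lower the priority of every namd2 process on each node
    # (pc01, pc02, pc03 are hypothetical example node names).
    for node in pc01 pc02 pc03; do
        ssh $node 'pgrep namd2 | xargs -r renice 19 -p'
    done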
But there were problems with the old way:
- The job could not be stopped cleanly: a killall had to be sent to each node.
- The job could not be started from anywhere: to run a job on the new machines you had to be logged in on the server (and nowhere else), and to start a job on the old machines you had to be logged in on pc13 (and nowhere else).
- Adjustment of the job's priority (so that the machines remained responsive) had to be done manually on each node.
- There was no way to schedule job execution (for example, to start a job at a later time, to queue a job until the previous one finishes, or to set an upper limit on the total machine load or memory usage).
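For instance, stopping a run meant ssh-ing into every node and sending the killall by hand, roughly along these lines (nodes.txt is a hypothetical file listing the cluster nodes):

    # Send a killall to every node by hand; there was no single
    # command that stopped the job cleanly.
    while read node; do
        ssh "$node" killall namd2
    done < nodes.txt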
The new way
Enter SGE:
- Collect all required files (plus the NAMD script) in a clean directory somewhere in the /work directory.
- Create a file with a suitable name (like NAMD_job.sh) containing the SGE job script (a sketch of such a script is given after this list).
- Edit the last line of this file and change the name of your NAMD script ('heat.namd' for this example).
- Submit the job with qsub NAMD_job.sh.
- That's it: use qmon or qstat to view the status of the queues.
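The contents of the job script are not reproduced above; below is a minimal sketch of what NAMD_job.sh could contain. The '#$' directives (-S, -cwd, -N, -j, -o) are standard SGE options, while the job name and the reuse of the mosrun/charmrun command line from the old method are assumptions for illustration:

    #!/bin/sh
    #$ -S /bin/sh      # interpret the job script with /bin/sh
    #$ -cwd            # run the job from the directory it was submitted from
    #$ -N NAMD_heat    # job name (hypothetical)
    #$ -j y            # merge stderr into stdout
    #$ -o LOG          # write the job's output to LOG
    # Last line: edit the NAMD script name ('heat.namd') to match your job.
    /bin/mosrun -L /usr/local/NAMD_2.5/charmrun /bin/mosrun -L /usr/local/NAMD_2.5/namd2 +p9 heat.namd

After qsub NAMD_job.sh, 'qstat -f' shows the state of all queues and 'qstat -j <job_id>' shows the details of a particular job.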
The advantages of using a proper queuing system are too many to discuss here (see the SGE documentation). What is worth noting is that all this additional functionality comes at virtually no cost in terms of execution speed. Compare the minimisation speed for the identical job submitted via 'at' and via 'qsub':
'at' timing statistics
  TIMING: 200 CPU: 178.66, 0.8836/step Wall: 181.67, 0.896849/step, 0.448424 hours remaining, 13975 kB of memory in use.
  TIMING: 400 CPU: 345.77, 0.83555/step Wall: 351.351, 0.848402/step, 0.377068 hours remaining, 13975 kB of memory in use.
  TIMING: 600 CPU: 512.58, 0.83405/step Wall: 520.755, 0.847025/step, 0.329398 hours remaining, 13975 kB of memory in use.
  TIMING: 800 CPU: 679.38, 0.834/step Wall: 696.912, 0.880784/step, 0.293595 hours remaining, 13975 kB of memory in use.
  TIMING: 1000 CPU: 846.07, 0.83345/step Wall: 866.847, 0.849677/step, 0.236021 hours remaining, 15687 kB of memory in use.
  TIMING: 1200 CPU: 1013.13, 0.8353/step Wall: 1037.54, 0.853481/step, 0.189662 hours remaining, 15687 kB of memory in use.
  TIMING: 1400 CPU: 1180, 0.83435/step Wall: 1208.06, 0.852557/step, 0.142093 hours remaining, 15687 kB of memory in use.
  TIMING: 1600 CPU: 1347.06, 0.8353/step Wall: 1377.96, 0.849509/step, 0.0943899 hours remaining, 15687 kB of memory in use.
SGE timing statistics
  TIMING: 200 CPU: 178.57, 0.8832/step Wall: 183.282, 0.904775/step, 0.452388 hours remaining, 13973 kB of memory in use.
  TIMING: 400 CPU: 345.51, 0.8347/step Wall: 354.533, 0.856257/step, 0.380559 hours remaining, 13973 kB of memory in use.
  TIMING: 600 CPU: 512.48, 0.83485/step Wall: 525.369, 0.85418/step, 0.332181 hours remaining, 13973 kB of memory in use.
  TIMING: 800 CPU: 678.87, 0.83195/step Wall: 696.738, 0.856846/step, 0.285615 hours remaining, 13973 kB of memory in use.
  TIMING: 1000 CPU: 845.37, 0.8325/step Wall: 867.586, 0.854238/step, 0.237288 hours remaining, 15685 kB of memory in use.
  TIMING: 1200 CPU: 1012.03, 0.8333/step Wall: 1039.36, 0.858858/step, 0.190857 hours remaining, 15685 kB of memory in use.
  TIMING: 1400 CPU: 1178.78, 0.83375/step Wall: 1210.17, 0.854063/step, 0.142344 hours remaining, 15685 kB of memory in use.
  TIMING: 1600 CPU: 1345.85, 0.83535/step Wall: 1383.31, 0.865716/step, 0.096190 hours remaining, 15685 kB of memory in use.