- Create a clean directory and cd into it.
- Get the CHARMM force field parameter and topology files:
- cp /usr/local/charmm27/par_all27_prot_na.inp ./
- cp /usr/local/charmm27/top_all27_prot_na.inp ./
- Download the B1 domain of protein G (PDB entry 1PGB) from the PDB [1] and save it on your disk.
- gunzip 1PGB.pdb.gz
- cp 1PGB.pdb proteinG.pdb
- Edit the file proteinG.pdb and remove all lines that do not start with ATOM or TER, but keep the final END line. Add a chain identifier (say 'A') for all atoms of the monomer. Save the modified file, which should look like this (a scripted way of doing the same edit is sketched after the listing):
ATOM      1  N   MET A   1      12.969  18.506  30.954  1.00 15.93      1PGB  70
ATOM      2  CA  MET A   1      13.935  18.529  29.843  1.00 17.40      1PGB  71
ATOM      3  C   MET A   1      13.138  18.692  28.517  1.00 14.65      1PGB  72
ATOM      4  O   MET A   1      12.007  18.222  28.397  1.00 13.04      1PGB  73
ATOM      5  CB  MET A   1      14.733  17.216  29.882  1.00 20.72      1PGB  74
.....
ATOM    434  OE1 GLU A  56       2.544  10.440   6.499  1.00 18.16      1PGB 503
ATOM    435  OE2 GLU A  56       1.737   8.791   7.641  1.00 20.42      1PGB 504
ATOM    436  OXT GLU A  56       6.410   6.617   4.667  1.00 24.74      1PGB 505
TER     437      GLU A  56                                              1PGB 506
END
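- If you prefer not to do the editing by hand, the same result can be scripted. The following one-liner is a minimal sketch, assuming (as is the case for the deposited 1PGB entry) that the chain-identifier field (column 22) of the ATOM and TER records is blank:

awk '/^ATOM|^TER/ { print substr($0,1,21) "A" substr($0,23) } END { print "END" }' 1PGB.pdb > proteinG.pdb

The substr calls keep the fixed-column PDB format intact and simply overwrite column 22 with the chain identifier.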
- rasmol proteinG.pdb (you should see the protein).
- Now it is time to align the axes of inertia of the molecule with the orthogonal frame: prepare a file named moleman.csh containing the following:
#!/bin/tcsh -f
#
# This will read a PDB file and rotate/translate it so that
# 1. the centre of gravity will be at the origin
# 2. the axes of inertia will be aligned with the orthogonal frame
#
lx_moleman2 >& moleman.log << eof
/usr/local/xutil/moleman2.lib
REad proteinG.pdb
XYz ALign_inertia_axes
WRite aligned.pdb
quit
eof
exit
- Run it: source moleman.csh. This should create two files: a log file (moleman.log) and the new PDB file (aligned.pdb).
- rasmol aligned.pdb (to confirm the changes)
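- An aside: if lx_moleman2 is not installed on your system, an approximately equivalent alignment can be done inside VMD with the 'Orient' package from the VMD script library. That package is a separate download, so treat the following console sketch as an assumption about your local VMD installation rather than a stock feature:

package require Orient
namespace import Orient::orient

mol load pdb proteinG.pdb
set sel [atomselect top "all"]

# put the centre of mass at the origin
$sel moveby [vecinvert [measure center $sel weight mass]]

# align the largest principal axis of inertia with z, the second with y
set I [Orient::calc_principalaxes $sel]
$sel move [orient $sel [lindex $I 2] {0 0 1}]
set I [Orient::calc_principalaxes $sel]
$sel move [orient $sel [lindex $I 1] {0 1 0}]

$sel writepdb aligned.pdb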
- Create a file with the name psfgen.csh containing the following:
#!/bin/tcsh -f
/usr/local/NAMD_2.5/psfgen >& psfgen.log << END
topology top_all27_prot_na.inp
segment A { pdb aligned.pdb }
alias atom ILE CD1 CD
coordpdb aligned.pdb A
guesscoord
writepsf psfgen.psf
writepdb psfgen.pdb
END
exit
- Use it to generate a new pdb file (psfgen.pdb) and the PSF file (psfgen.psf) needed for NAMD: source psfgen.csh. Examine the output from the program (file psfgen.log).
- Create a file with the name make_all.vmd containing the following script (for VMD):
#
# This is a short script implementing VMD's 'solvate' and
# 'autoionize' to prepare a fully solvated and neutral
# system for a periodic boundary simulation. It also
# prepares two pdb files needed for implementing restraints
# during the heating-up phase.
#

#
# Make water box
#
package require hexsolvate
hexsolvate psfgen.psf psfgen.pdb -o hydrated -b 1.80 -t 10.0

#
# Add ions to neutralise charge
#
package require autoionize
autoionize -psf hydrated.psf -pdb hydrated.pdb -is 0.150

#
# Prepare restraints files
#
mol load psf ionized.psf pdb ionized.pdb
set all [atomselect top all]
set sel [atomselect top "protein and name CA"]
$all set beta 0
$sel set beta 0.5
$all writepdb restrain_ca.pdb

set all [atomselect top all]
set to_fix [atomselect top "protein and backbone"]
$all set beta 0
$to_fix set beta 1
$all writepdb fix_backbone.pdb
- Run this script via VMD:
- Start VMD: vmd
- Locate VMD's console (window with the prompt)
- In the console give source make_all.vmd
- If all goes well you should see your fully hydrated and ionised system on VMD's graphics window. Note the hexagonal cell.
- To finish type quit in the console.
- Use rasmol or vmd to study your system (ionized.pdb). Make sure that there are no waters or ions accidentally placed inside your molecule's hydrophobic core. If there are, you will have to think: Is there a cavity there? Would it be possible that a water molecule is indeed present in that cavity? Is there experimental evidence concerning the presence of these waters?
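- A quick way to shortlist suspicious waters is an atom selection in VMD's console. The 4 Å cutoff below is an arbitrary choice, so treat this only as a rough first filter and inspect the hits visually:

mol load psf ionized.psf pdb ionized.pdb
# water oxygens in contact with hydrophobic side chains are
# candidates for 'buried' waters that deserve a closer look
set susp [atomselect top "water and name OH2 and within 4 of (protein and hydrophobic)"]
$susp get {resid resname}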
- The last step before running NAMD is to determine the limits (along the orthogonal frame) of the system. A fast way to do this is with the program pdbset from the CCP4 suite of programs. Here is how:
- Start it: pdbset xyzin ionized.pdb >& pdbset.log
- Type END
- Edit the file pdbset.log. Somewhere near the end of it you will see something similar to:
Orthogonal Coordinate limits in output file:

                 Minimum    Maximum     Centre      Range
     On X :       -29.05      26.71      -1.17      55.77
     On Y :       -23.36      23.63       0.14      46.98
     On Z :       -20.69      20.61      -0.04      41.29

- Do write down these numbers, especially the range (on x, y, z) and the position of the centre.
- Create a NAMD minimisation and heating-up script with the name heat.namd containing the following:
#
# Input files
#
structure               ionized.psf
coordinates             ionized.pdb
parameters              par_all27_prot_na.inp
paraTypeCharmm          on

#
# Output files & writing frequency for DCD
# and restart files
#
outputname              output/heat_out
binaryoutput            off
restartname             output/restart
restartfreq             1000
binaryrestart           yes
dcdFile                 output/heat_out.dcd
dcdFreq                 200

#
# Frequencies for logs and the xst file
#
outputEnergies          20
outputTiming            200
xstFreq                 200

#
# Timestep & friends
#
timestep                2.0
stepsPerCycle           8
nonBondedFreq           2
fullElectFrequency      4

#
# Simulation space partitioning
#
switching               on
switchDist              8
cutoff                  9
pairlistdist            9.5

#
# Basic dynamics
#
temperature             0
COMmotion               no
dielectric              1.0
exclude                 scaled1-4
1-4scaling              1.0
rigidbonds              all

#
# Particle Mesh Ewald parameters.
#
Pme                     on
PmeGridsizeX            54              # <===== CHANGE ME
PmeGridsizeY            40              # <===== CHANGE ME
PmeGridsizeZ            40              # <===== CHANGE ME

#
# Periodic boundary things
#
wrapWater               on
wrapNearest             on
cellBasisVector1        55.77  00.00  00.00     # <===== CHANGE ME
cellBasisVector2        00.00  35.77  20.65     # <===== CHANGE ME
cellBasisVector3        00.00  00.00  41.30     # <===== CHANGE ME
cellOrigin              -1.10   0.00   0.00     # <===== CHANGE ME

#
# Fixed atoms for initial heating-up steps
#
fixedAtoms              on
fixedAtomsForces        on
fixedAtomsFile          fix_backbone.pdb
fixedAtomsCol           B

#
# Restrained atoms for initial heating-up steps
#
constraints             on
consRef                 restrain_ca.pdb
consKFile               restrain_ca.pdb
consKCol                B

#
# Langevin dynamics parameters
#
langevin                on
langevinDamping         10
langevinTemp            320             # <===== Check me
langevinHydrogen        on
langevinPiston          on
langevinPistonTarget    1.01325
langevinPistonPeriod    200
langevinPistonDecay     100
langevinPistonTemp      320             # <===== Check me
useGroupPressure        yes

##########################################
# The actual minimisation and heating-up
# protocol follows. The numbers of steps
# shown below are too small for a real run
##########################################

#
# run one step to get into scripting mode
#
minimize 0

#
# turn off pressure control until later
#
langevinPiston off

#
# minimize non-backbone atoms
#
minimize 400            ;# <===== CHANGE ME
output output/min_fix

#
# min all atoms
#
fixedAtoms off
minimize 400            ;# <===== CHANGE ME
output output/min_all

#
# heat with CAs restrained
#
set temp 20
while { $temp < 321 } { ;# <===== Check me
    langevinTemp $temp
    run 400             ;# <===== CHANGE ME
    output output/heat_ca
    set temp [expr $temp + 20]
}

#
# equilibrate volume with CAs restrained
#
langevinPiston on
run 400                 ;# <===== CHANGE ME
output output/equil_ca

#
# equilibrate volume without restraints
#
constraintScaling 0
run 2000                ;# <===== CHANGE ME
- The lines (in the above script) saying cellBasisVector1, cellBasisVector2 and cellBasisVector3 hold the secret of performing the simulation in a hexagonal cell. The numbers used there have been derived from the numbers reported by pdbset. Can you work out how and why?
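- Once you have worked out your own answer, you can check the arithmetic numerically. The sketch below (plain awk, run from any shell) assumes what the numbers suggest: a hexagonal prism along x, with cellBasisVector2 and cellBasisVector3 of equal length (the z range reported by pdbset) at 60 degrees to each other in the y-z plane:

awk 'BEGIN { pi = 3.14159265358979 ; c = 41.30
             printf "cellBasisVector2  00.00  %.2f  %.2f\n", c*sin(60*pi/180), c*cos(60*pi/180) }'

- The PME grid sizes come from the same numbers: choose dimensions close to the corresponding cell lengths (55.77, 41.30 and 41.30 here, i.e. a grid spacing of roughly 1 Å) that factorise into small primes (2, 3 and 5) so that the FFTs are efficient, hence 54, 40 and 40.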
- You can now give it a quick try with NAMD just to make sure it starts without problems. To avoid overloading machines that are already busy with real calculations, you must first review the cluster usage as follows:
- Type mosmon from your terminal. You should see a real-time graph of the CPU load on each of the nodes that are currently part of the cluster: the horizontal axis shows the various nodes, the vertical axis the corresponding load. If all machines are working properly and they are all running the openMosix Linux kernel you should see at least 18 nodes (numbered 25700-25716, plus another node with ID 25854). In the session shown here all cluster nodes were running a proper operating system (and could be seen in the mosmon output), and three of them (25708, 25709 and 25710, corresponding to pc08, pc09 and pc10) were idle. So, nodes pc08, pc09 and pc10 are good candidates for performing your tests.
- Type q to stop mosmon and then give qmon (from your terminal). The main Sun Grid Engine GUI window should appear.
- Select "Queue control". This should pop-up the corresponding window :
- Note the small green boxes (indicating that a job is running) on all nodes except pc08, pc09 and pc10. So, it does look like we could use these three machines for the tests. Press 'Done' on the queue control window and then exit in the qmon window.
- As a first test, let's run NAMD on a single node, e.g. pc09: log in to it with ssh pc9 (use your password to login). Then cd to the directory containing your files: oops, no such file or directory? That's right: you created the files on your local machine, which probably isn't pc09. So, here comes the need for a cluster-wide filesystem. This is implemented in the form of the /work directory. Proceed as follows:
- Log out from pc09: exit
- cd /work
- ls
- cd tmp
- mkdir my_tests
- Now go back to the directory containing the files you created. Of all the files present in there, you only need to copy these few:
cp ionized.p* par_all27_prot_na.inp heat.namd fix_backbone.pdb restrain_ca.pdb /work/tmp/my_tests/
- ssh pc9 and log in again
- cd /work/tmp/my_tests/
- ls and you should see your files (although you are connected to pc9).
- mkdir output will create a subdirectory where the output files from NAMD will be written (during the run).
- Now, go for it: runhome /usr/local/NAMD_2.5/namd2 heat.namd. The runhome command tells the openMosix kernel to run the job on the local machine (and not to migrate it away). Once the program has performed a couple of minimisation steps, stop it with <CTRL-C>. What you see should be similar to this:
Info: NAMD 2.5 for Linux-i686
Info:
Info: Please visit http://www.ks.uiuc.edu/Research/namd/
Info: and send feedback or bug reports to namd@ks.uiuc.edu
Info:
Info: Please cite Kale et al., J. Comp. Phys. 151:283-312 (1999)
Info: in all publications reporting results obtained with NAMD.
Info:
Info: Based on Charm++/Converse 050612 for net-linux-icc
Info: Built Fri Sep 26 17:33:59 CDT 2003 by jim on lisboa.ks.uiuc.edu
Info: Sending usage information to NAMD developers via UDP. Sent data is:
Info: 1 NAMD 2.5 Linux-i686 2 aspera.cluster.mbg.gr glykos
Info: Running on 2 processors.
Info: 1469 kB of memory in use.
Measuring processor speeds... Done.
Info: Configuration file is heat.namd
TCL: Suspending until startup complete.
Info: SIMULATION PARAMETERS:
Info: TIMESTEP 2
Info: NUMBER OF STEPS 0
Info: STEPS PER CYCLE 8
Info: PERIODIC CELL BASIS 1 55.77 0 0
Info: PERIODIC CELL BASIS 2 0 35.77 20.65
Info: PERIODIC CELL BASIS 3 0 0 41.3
Info: PERIODIC CELL CENTER -1.1 0 0
Info: WRAPPING WATERS AROUND PERIODIC BOUNDARIES ON OUTPUT.
Info: WRAPPING TO IMAGE NEAREST TO PERIODIC CELL CENTER.
Info: LOAD BALANCE STRATEGY Other
Info: LDB PERIOD 1600 steps
Info: FIRST LDB TIMESTEP 40
Info: LDB BACKGROUND SCALING 1
Info: HOM BACKGROUND SCALING 1
Info: PME BACKGROUND SCALING 1
Info: MAX SELF PARTITIONS 50
Info: MAX PAIR PARTITIONS 20
Info: SELF PARTITION ATOMS 125
Info: PAIR PARTITION ATOMS 200
Info: PAIR2 PARTITION ATOMS 400
Info: INITIAL TEMPERATURE 0
Info: CENTER OF MASS MOVING? NO
Info: DIELECTRIC 1
Info: EXCLUDE SCALED ONE-FOUR
Info: 1-4 SCALE FACTOR 1
Info: DCD FILENAME output/heat_out.dcd
Info: DCD FREQUENCY 200
Warning: INITIAL COORDINATES WILL NOT BE WRITTEN TO DCD FILE
Info: XST FILENAME output/heat_out.xst
Info: XST FREQUENCY 200
Info: NO VELOCITY DCD OUTPUT
Info: OUTPUT FILENAME output/heat_out
Info: RESTART FILENAME output/restart
Info: RESTART FREQUENCY 1000
Info: BINARY RESTART FILES WILL BE USED
Info: SWITCHING ACTIVE
Info: SWITCHING ON 8
Info: SWITCHING OFF 9
Info: PAIRLIST DISTANCE 9.5
Info: PAIRLIST SHRINK RATE 0.01
Info: PAIRLIST GROW RATE 0.01
Info: PAIRLIST TRIGGER 0.3
Info: PAIRLISTS PER CYCLE 2
Info: PAIRLISTS ENABLED
Info: MARGIN 0.36
Info: HYDROGEN GROUP CUTOFF 2.5
Info: PATCH DIMENSION 12.36
Info: ENERGY OUTPUT STEPS 20
Info: TIMING OUTPUT STEPS 200
Info: FIXED ATOMS ACTIVE
Info: FORCES BETWEEN FIXED ATOMS ARE CALCULATED
Info: HARMONIC CONSTRAINTS ACTIVE
Info: HARMONIC CONS EXP 2
Info: LANGEVIN DYNAMICS ACTIVE
Info: LANGEVIN TEMPERATURE 320
Info: LANGEVIN DAMPING COEFFICIENT IS 10 INVERSE PS
Info: LANGEVIN DYNAMICS APPLIED TO HYDROGENS
Info: LANGEVIN PISTON PRESSURE CONTROL ACTIVE
Info: TARGET PRESSURE IS 1.01325 BAR
Info: OSCILLATION PERIOD IS 200 FS
Info: DECAY TIME IS 100 FS
Info: PISTON TEMPERATURE IS 320 K
Info: PRESSURE CONTROL IS GROUP-BASED
Info: INITIAL STRAIN RATE IS 0 0 0
Info: CELL FLUCTUATION IS ISOTROPIC
Info: PARTICLE MESH EWALD (PME) ACTIVE
Info: PME TOLERANCE 1e-06
Info: PME EWALD COEFFICIENT 0.348832
Info: PME INTERPOLATION ORDER 4
Info: PME GRID DIMENSIONS 54 40 40
Info: Attempting to read FFTW data from FFTW_NAMD_2.5_Linux-i686.txt
Info: Optimizing 6 FFT steps. 1... 2... 3... 4... 5... 6... Done.
Info: Writing FFTW data to FFTW_NAMD_2.5_Linux-i686.txt
Info: FULL ELECTROSTATIC EVALUATION FREQUENCY 4
Info: USING VERLET I (r-RESPA) MTS SCHEME.
Info: C1 SPLITTING OF LONG RANGE ELECTROSTATICS
Info: PLACING ATOMS IN PATCHES BY HYDROGEN GROUPS
Info: RIGID BONDS TO HYDROGEN : ALL
Info: ERROR TOLERANCE : 1e-08
Info: MAX ITERATIONS : 100
Info: RIGID WATER USING SETTLE ALGORITHM
Info: NONBONDED FORCES EVALUATED EVERY 2 STEPS
Info: RANDOM NUMBER SEED 1120210498
Info: USE HYDROGEN BONDS? NO
Info: COORDINATE PDB ionized.pdb
Info: STRUCTURE FILE ionized.psf
Info: PARAMETER file: CHARMM format!
Info: PARAMETERS par_all27_prot_na.inp
Info: SUMMARY OF PARAMETERS:
Info: 250 BONDS
Info: 622 ANGLES
Info: 1049 DIHEDRAL
Info: 73 IMPROPER
Info: 130 VDW
Info: 0 VDW_PAIRS
Info: ****************************
Info: STRUCTURE SUMMARY:
Info: 7395 ATOMS
Info: 5217 BONDS
Info: 3729 ANGLES
Info: 2262 DIHEDRALS
Info: 137 IMPROPERS
Info: 0 EXCLUSIONS
Info: 56 CONSTRAINTS
Info: 225 FIXED ATOMS
Info: 6953 RIGID BONDS
Info: 0 RIGID BONDS BETWEEN FIXED ATOMS
Info: 14557 DEGREES OF FREEDOM
Info: 2620 HYDROGEN GROUPS
Info: 113 HYDROGEN GROUPS WITH ALL ATOMS FIXED
Info: TOTAL MASS = 45579.7 amu
Info: TOTAL CHARGE = 6.35162e-07 e
Info: *****************************
Info: Entering startup phase 0 with 3369 kB of memory in use.
Info: Entering startup phase 1 with 3369 kB of memory in use.
Info: Entering startup phase 2 with 3960 kB of memory in use.
Info: Entering startup phase 3 with 4018 kB of memory in use.
Info: PATCH GRID IS 4 (PERIODIC) BY 2 (PERIODIC) BY 2 (PERIODIC)
Info: REMOVING COM VELOCITY 0 0 0
Info: LARGEST PATCH (10) HAS 513 ATOMS
Info: Entering startup phase 4 with 4970 kB of memory in use.
Info: PME using 2 and 2 processors for FFT and reciprocal sum.
Creating Strategy 4
Creating Strategy 4
Info: PME GRID LOCATIONS: 0 1
Info: PME TRANS LOCATIONS: 0 1
Info: Optimizing 4 FFT steps. 1... 2... 3... 4... Done.
Info: Entering startup phase 5 with 5337 kB of memory in use.
Info: Entering startup phase 6 with 4891 kB of memory in use.
Info: Entering startup phase 7 with 4897 kB of memory in use.
Info: COULOMB TABLE R-SQUARED SPACING: 0.0625
Info: COULOMB TABLE SIZE: 705 POINTS
Info: NONZERO IMPRECISION IN COULOMB TABLE: 1.58819e-22 (657) 3.17637e-22 (657)
Info: NONZERO IMPRECISION IN COULOMB TABLE: 2.42338e-27 (687) 5.65455e-27 (687)
Info: NONZERO IMPRECISION IN COULOMB TABLE: 1.01644e-20 (700) 2.71051e-20 (700)
Info: Entering startup phase 8 with 6088 kB of memory in use.
Info: Finished startup with 7277 kB of memory in use.
TCL: Minimizing for 0 steps
ETITLE: TS BOND ANGLE DIHED IMPRP ELECT VDW BOUNDARY MISC KINETIC TOTAL TEMP TOTAL2 TOTAL3 TEMPAVG PRESSURE GPRESSURE VOLUME PRESSAVG GPRESSAVG
ENERGY: 0 2136.4472 642.7569 252.5952 14.1131 -20870.1665 99999999.9999 0.0000 0.0000 0.0000 99999999.9999 0.0000 99999999.9999 99999999.9999 0.0000 99999999.9999 99999999.9999 82389.0768 99999999.9999 99999999.9999
TCL: Setting parameter langevinPiston to off
TCL: Minimizing for 400 steps
ETITLE: TS BOND ANGLE DIHED IMPRP ELECT VDW BOUNDARY MISC KINETIC TOTAL TEMP TOTAL2 TOTAL3 TEMPAVG PRESSURE GPRESSURE VOLUME PRESSAVG GPRESSAVG
ENERGY: 0 2136.4472 642.7569 252.5952 14.1131 -20870.1665 99999999.9999 0.0000 0.0000 0.0000 99999999.9999 0.0000 99999999.9999 99999999.9999 0.0000 99999999.9999 99999999.9999 82389.0768 99999999.9999 99999999.9999
INITIAL STEP: 1e-06
GRADIENT TOLERANCE: 4.66469e+07
ENERGY: 1 2216.6582 646.2796 252.5886 14.1129 -21040.4570 363518.9081 0.0000 0.0000 0.0000 345608.0905 0.0000 345608.0905 345608.0905 0.0000 1220671.1003 1228509.5061 82389.0768 1220671.1003 1228509.5061
ENERGY: 2 2497.3478 656.4725 252.5821 14.1128 -21082.9261 15993.8346 0.0000 0.0000 0.0000 -1668.5763 0.0000 -1668.5763 -1668.5763 0.0000 46937.6944 51815.2083 82389.0768 46937.6944 51815.2083
- Wouldn't it be great to run NAMD a bit faster? Say, approximately three times faster? Isn't it a pity to have pc08 and pc10 filtering the dust in the terminal room? While you are at /work/tmp/my_tests/, create a file with the name NAMD.sh containing the following:
#!/bin/csh -f
#
#
# The name of the job
#
#$ -N My_test
#
# ====> CHANGE ME <====
#
# The parallel environment (mpi_fast) and number of processors (3)
# The options are : mpi_fast : the 9 new machines
#                   mpi_slow : the 9 older machines
#                   mpich    : all machines on the cluster
#
#$ -pe mpi_fast 3
#
# The version of MPICH to use, transport protocol & a trick to delete cleanly
# running MPICH jobs ...
#
#$ -v MPIR_HOME=/usr/local/mpich-ssh
#$ -v P4_RSHCOMMAND=rsh
#$ -v MPICH_PROCESS_GROUP=no
#$ -v CONV_RSH=rsh
#
# Nodes that can serve as master queues
#
#$ -masterq server.q,pc01.q,pc02.q,pc03.q,pc04.q,pc05.q,pc06.q,pc07.q,pc08.q,pc09.q,pc10.q,pc11.q,pc12.q,pc13.q,pc14.q,pc16.q
#
# Execute from the current working directory ...
#
#$ -cwd
#
# Standard error and output should go into the current working directory ...
#
#$ -e ./
#$ -o ./
#
# Build the nodelist file for charmrun
#
echo "Got $NSLOTS slots."
echo "group main" > $TMPDIR/charmlist
awk '{print "host " $1}' $TMPDIR/machines >> $TMPDIR/charmlist
cat $TMPDIR/charmlist
#
# ====> CHANGE ME <====
#
# The name of the NAMD script file is defined here (heat.namd) as well as
# the name of the job's log file (LOG)
#
/usr/local/NAMD_2.5/charmrun /bin/runhome /usr/local/NAMD_2.5/namd2 ++nodelist $TMPDIR/charmlist +p $NSLOTS heat.namd > LOG
- This script is a parallel NAMD job-submission script for the Sun Grid Engine. All you have to do is qsub NAMD.sh (from the /work/tmp/my_tests/ directory). You should see a message containing your job's ID.
- Type qstat : you should see your job and its status changing from qw to t to r.
- Use mosmon to confirm that the previously idle machines are indeed doing something useful.
- Use qmon to review cluster usage, and to see your job in Job Control/Running Jobs.
- tail -f LOG should show you the growing log file from your job.
- Type mknamdplot LOG and then click Draw to see a collection of graphs showing the evolution of quantities like the system's total energy, pressure, temperature, bond and angle energies, …
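- mknamdplot is a local script; if it is not available, the same quantities can be pulled out of the log with awk and plotted with whatever you have at hand. The field numbers below follow the ETITLE: line shown earlier (field 2 is the timestep, 12 the total energy, 13 the temperature):

awk '/^ENERGY:/ { print $2, $12, $13 }' LOG > energy.dat
xmgrace -nxy energy.dat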
- Click Exit to exit.
- The heating-up NAMD job will take 2-3 hours to finish. Let it do so. But if you'd rather not, qdel JobID will kill your job (or you can use the qmon GUI).
- When the job is done, review the files that have been created in /work/tmp/my_tests/output/ (the .coor files hold coordinates, the .vel files velocities, and the .xsc files the extended-system information, that is, the dimensions of the periodic cell). You should see something like:
-rw-rw-r--    1 glykos   glykos    2029503 Oct 30 15:19 equil_ca.coor
-rw-rw-r--    1 glykos   glykos    2029502 Oct 30 15:19 equil_ca.vel
-rw-rw-r--    1 glykos   glykos        222 Oct 30 15:19 equil_ca.xsc
-rw-rw-r--    1 glykos   glykos    2029503 Oct 30 15:16 heat_ca.coor
-rw-rw-r--    1 glykos   glykos    2029503 Oct 30 15:13 heat_ca.coor.BAK
-rw-rw-r--    1 glykos   glykos    2029502 Oct 30 15:16 heat_ca.vel
-rw-rw-r--    1 glykos   glykos    2029502 Oct 30 15:13 heat_ca.vel.BAK
-rw-rw-r--    1 glykos   glykos        153 Oct 30 15:16 heat_ca.xsc
-rw-rw-r--    1 glykos   glykos        153 Oct 30 15:13 heat_ca.xsc.BAK
-rw-rw-r--    1 glykos   glykos    2029503 Oct 30 15:36 heat_out.coor
-rw-r--r--    1 glykos   glykos   14613396 Oct 30 15:36 heat_out.dcd
-rw-rw-r--    1 glykos   glykos    2029502 Oct 30 15:36 heat_out.vel
-rw-rw-r--    1 glykos   glykos        227 Oct 30 15:36 heat_out.xsc
-rw-rw-r--    1 glykos   glykos       3364 Oct 30 15:36 heat_out.xst
-rw-rw-r--    1 glykos   glykos    2029502 Oct 30 14:25 min_all.coor
-rw-rw-r--    1 glykos   glykos    2029501 Oct 30 14:25 min_all.vel
-rw-rw-r--    1 glykos   glykos        152 Oct 30 14:25 min_all.xsc
-rw-rw-r--    1 glykos   glykos    2029502 Oct 30 14:19 min_fix.coor
-rw-rw-r--    1 glykos   glykos    2029501 Oct 30 14:19 min_fix.vel
-rw-rw-r--    1 glykos   glykos        152 Oct 30 14:19 min_fix.xsc
-rw-rw-r--    1 glykos   glykos     608836 Oct 30 15:31 restart.coor
-rw-rw-r--    1 glykos   glykos     608836 Oct 30 15:22 restart.coor.old
-rw-rw-r--    1 glykos   glykos     608836 Oct 30 15:31 restart.vel
-rw-rw-r--    1 glykos   glykos     608836 Oct 30 15:22 restart.vel.old
-rw-rw-r--    1 glykos   glykos        229 Oct 30 15:31 restart.xsc
-rw-rw-r--    1 glykos   glykos        228 Oct 30 15:22 restart.xsc.old
- Play time: from the output/ directory give vmd ../ionized.psf heat_out.dcd
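- While you are at it, a few lines in VMD's console give the CA RMSD from the first stored frame along the trajectory, a quick check that the heating protocol did not unfold the protein. A minimal sketch (type it after loading the psf/dcd pair as above):

set ref [atomselect top "protein and name CA" frame 0]
set sel [atomselect top "protein and name CA"]
set all [atomselect top "all"]
set n [molinfo top get numframes]
for { set i 0 } { $i < $n } { incr i } {
    $sel frame $i
    $all frame $i
    # least-squares fit on the CAs, then measure the deviation
    $all move [measure fit $sel $ref]
    puts "frame $i : [format %.2f [measure rmsd $sel $ref]] A"
}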
- You can now start preparing the equilibration-production run:
- cd /work/tmp/my_tests/
- mkdir heat
- mv * heat/
- mkdir equi
- cd heat/output
- cp heat_out.coor heat_out.vel heat_out.xsc ../../equi/
- cd ..
- cp ionized.p* par_all27_prot_na.inp ../equi/
- cd ../equi
- mkdir output
- Create a NAMD equilibration script with the name equi.namd containing the following:
#
# Input files
#
structure               ionized.psf
coordinates             heat_out.coor
velocities              heat_out.vel
extendedSystem          heat_out.xsc
parameters              par_all27_prot_na.inp
paraTypeCharmm          on

#
# Output files & writing frequency for DCD
# and restart files
#
outputname              output/equi_out
binaryoutput            off
restartname             output/restart
restartfreq             1000
binaryrestart           yes
dcdFile                 output/equi_out.dcd
dcdFreq                 200

#
# Frequencies for logs and the xst file
#
outputEnergies          20
outputTiming            200
xstFreq                 200

#
# Timestep & friends
#
timestep                2.0
stepsPerCycle           8
nonBondedFreq           2
fullElectFrequency      4

#
# Simulation space partitioning
#
switching               on
switchDist              8
cutoff                  9
pairlistdist            9.5

#
# Basic dynamics
#
COMmotion               no
dielectric              1.0
exclude                 scaled1-4
1-4scaling              1.0
rigidbonds              all

#
# Particle Mesh Ewald parameters.
#
Pme                     on
PmeGridsizeX            54              # <===== CHANGE ME
PmeGridsizeY            40              # <===== CHANGE ME
PmeGridsizeZ            40              # <===== CHANGE ME

#
# Periodic boundary things
#
wrapWater               on
wrapNearest             on
wrapAll                 on

#
# Langevin dynamics parameters
#
langevin                on
langevinDamping         1
langevinTemp            320             # <===== Check me
langevinHydrogen        on
langevinPiston          on
langevinPistonTarget    1.01325
langevinPistonPeriod    200
langevinPistonDecay     100
langevinPistonTemp      320             # <===== Check me
useGroupPressure        yes

firsttimestep           9600            # <===== CHANGE ME
run                     10000           ;# <===== CHANGE ME
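- The value given to firsttimestep should be the last timestep reached by the heating-up run, so that the time axis of the combined trajectory stays continuous. A one-liner sketch to recover it from the heating log (which, after the rearrangement above, lives in ../heat/LOG relative to the equi directory):

awk '/^ENERGY:/ { ts = $2 } END { print ts }' ../heat/LOG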
- Prepare a submission script for SGE, say NAMD.sh:
#!/bin/csh -f
#
#
# The name of the job
#
#$ -N test_equi
#
# The parallel environment (mpi_fast) and number of processors (3) ...
#
#$ -pe mpi_fast 3
#
# The version of MPICH to use, transport protocol & a trick to delete cleanly
# running MPICH jobs ...
#
#$ -v MPIR_HOME=/usr/local/mpich-ssh
#$ -v P4_RSHCOMMAND=rsh
#$ -v MPICH_PROCESS_GROUP=no
#$ -v CONV_RSH=rsh
#
# Nodes that can serve as master queues
#
#$ -masterq server.q,pc01.q,pc02.q,pc03.q,pc04.q,pc05.q,pc06.q,pc07.q,pc08.q,pc09.q,pc10.q,pc11.q,pc12.q,pc13.q,pc14.q,pc16.q
#
# Execute from the current working directory ...
#
#$ -cwd
#
# Standard error and output should go into the current working directory ...
#
#$ -e ./
#$ -o ./
#
# Build the nodelist file for charmrun
#
echo "Got $NSLOTS slots."
echo "group main" > $TMPDIR/charmlist
awk '{print "host " $1}' $TMPDIR/machines >> $TMPDIR/charmlist
cat $TMPDIR/charmlist
#
# Ready ...
#
/usr/local/NAMD_2.5/charmrun /bin/runhome /usr/local/NAMD_2.5/namd2 ++nodelist $TMPDIR/charmlist +p $NSLOTS equi.namd > LOG
- Run it: qsub NAMD.sh
- Enjoy.