Step Eight: Production MD
Upon completion of the two equilibration phases, the system is now well-equilibrated at the desired temperature and pressure. We are now ready to release the position restraints and run production MD for data collection. The process is just like we have seen before: we will make use of the checkpoint file (which in this case now contains preserved pressure coupling information) when running grompp. We will run a 1-ns MD simulation, the script for which can be found here (a brief sketch of the key settings is also given at the end of this section).

gmx grompp -f md.mdp -c npt.gro -t npt.cpt -p topol.top -o md_0_1.tpr

grompp will print an estimate of the PME load, which dictates how many processors should be dedicated to the PME calculation, and how many to the PP calculations. Refer to the GROMACS 4 publication and the manual for details.

Estimate for the relative computational load of the PME mesh part: 0.22

For a cubic box, the optimal setup will have a PME load of 0.25 (3:1 PP:PME - we're very close to optimal!); for a dodecahedral box, the optimal PME load is 0.33 (2:1 PP:PME). When executing mdrun, the program should automatically determine the best number of processors to assign to the PP and PME calculations. Thus, make sure you indicate an appropriate number of threads/cores for your calculation (the value of -nt X) so that you get the best performance.

Now, execute mdrun:

gmx mdrun -deffnm md_0_1

In GROMACS 2018, the PME calculations can be offloaded to graphics processing units (GPU), which speeds up the simulation substantially. Using a Titan Xp GPU, this system can be simulated at an astounding 295 ns/day!

Running GROMACS on GPU

As of version 4.6, GROMACS supports the use of GPU accelerators for running MD simulations. With the release of version 2018, the nonbonded interactions and PME are calculated on the GPU, with only the bonded forces calculated on the CPU cores. When building GROMACS (see www.gromacs.org for installation instructions), GPU hardware will be detected automatically, if present. The minimum requirements for GPU acceleration are the CUDA libraries and SDK, and a GPU with a compute capability >= 2.0. A nice list of some of the more common GPUs and their specifications can be found here. Assuming you have one GPU available, the mdrun command to make use of it is as simple as:

gmx mdrun -deffnm md_0_1 -nb gpu

If you have more than one GPU available, or require customization of how the work is divided up via the hybrid parallelization scheme available in GROMACS, please consult the GROMACS manual and webpage. Such technical details are beyond the scope of this tutorial.
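For reference, the production md.mdp differs from the NPT equilibration settings mainly in that the position restraints are no longer applied, velocities are not regenerated, and the run length is extended to 1 ns. The following is a minimal, abridged sketch of such a file; it is not a substitute for the complete md.mdp linked above, and exact values (output frequencies, cutoffs, constraint settings, etc.) should be taken from the tutorial file.

title                   = Lysozyme production MD, 1 ns   ; illustrative title
; Run parameters
integrator              = md                ; leap-frog integrator
nsteps                  = 500000            ; 2 fs * 500000 = 1000 ps (1 ns)
dt                      = 0.002             ; 2 fs time step
; Temperature and pressure coupling remain on, as in NPT equilibration
tcoupl                  = V-rescale         ; modified Berendsen thermostat
pcoupl                  = Parrinello-Rahman ; pressure coupling for production
; Continuation from NPT equilibration
continuation            = yes               ; restarting after NPT
gen_vel                 = no                ; velocities are read from the checkpoint
; Note: there is no "define = -DPOSRES" line - the position restraints are released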
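As a usage illustration, and assuming a hypothetical workstation with 8 CPU cores and one CUDA-capable GPU (adjust to your own hardware), the thread-count and GPU-offload options mentioned above could be used as follows. To run on 8 CPU threads only:

gmx mdrun -deffnm md_0_1 -nt 8

To offload both the nonbonded and PME calculations to the GPU in GROMACS 2018 or newer:

gmx mdrun -deffnm md_0_1 -nb gpu -pme gpu

The optimal division of work depends on your hardware; consult gmx mdrun -h and the GROMACS manual for the full set of options.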