OPM installation and running instructions

Prerequisites

Dependencies for the binary version (an example install command follows the list):

  • lapack
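If lapack is not already present on the nodes, it is available from the standard CentOS repositories. A minimal example, assuming a CentOS-based image as used elsewhere in this guide:

sudo yum install -y lapack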

Installation

First, copy the apps directory to the cluster. The azhpc-scp utility can be used to do this:

azhpc-scp -r $azhpc_dir/apps hpcuser@headnode:.

Alternatively, you can check out the azurehpc repository directly, but you will need to update the paths below according to where you place it.
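For example, a sketch of that alternative (the clone location is illustrative; set azhpc_dir to wherever you place the checkout):

git clone https://github.com/Azure/azurehpc.git
export azhpc_dir=$(pwd)/azurehpc    # used by the commands below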

Install from source

For this, the headnode needs to be an HC node running CentOS-HPC 7.6 or later.
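If you are unsure which image or VM size the headnode is running, one way to check is from a shell on the headnode, assuming access to the Azure instance metadata service:

cat /etc/centos-release
curl -s -H Metadata:true "http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2019-06-01&format=text"; echo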

azhpc-run -u hpcuser $azhpc_dir/apps/opm/full_install_opm.sh

Install binaries

azhpc-run -u hpcuser $azhpc_dir/apps/opm/install_opm.sh
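After either install, a quick sanity check from the headnode is to confirm that the OPM flow executable can be found; this assumes the install placed flow on the default PATH, so adjust if your build installs elsewhere:

command -v flow    # prints the path to the flow binary if the install succeeded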

Connect to the headnode

azhpc-connect -u hpcuser headnode

Run the OPM Norne scenario

To run on a single node with 30 cores, run:

qsub -l select=1:ncpus=30:mpiprocs=30 $azhpc_dir/apps/opm/flow_norne.sh

To run on two nodes with 15 cores each (30 in total), run:

qsub -l select=2:ncpus=15:mpiprocs=15 $azhpc_dir/apps/opm/flow_norne.sh

Notes:

  • All job output files will be written to the user's home directory with the prefix OPM_norne.o (see the example below).
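Once a job has been submitted, its state and output can be inspected from the headnode. A minimal sketch, assuming standard PBS tooling and the output prefix noted above (job IDs will differ):

qstat                         # list queued and running jobs
ls -l ~/OPM_norne.o*          # output files from completed runs
cat ~/OPM_norne.o<jobid>      # inspect the output of a specific job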