WRF
The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed for both atmospheric research and operational forecasting applications. It features two dynamical cores, a data assimilation system, and a software architecture supporting parallel computation and system extensibility. The model serves a wide range of meteorological applications across scales from tens of meters to thousands of kilometres.
For more information, see:
Before you begin
Researchers using WRF on Pawsey supercomputing systems typically need to build their own custom version to support their simulations. For this reason, WRF is not provided as a module by Pawsey staff. However, the WRF dependencies are installed on the system.
Registration with WRF is also required before downloading, and should be completed before following the instructions below.
See: WRF Registration (external site)
Installing WRF using an interactive session
The following instructions provide guidance to compile WRF on Setonix.
Step 1: Start an interactive session on the work partition
$ salloc -p work --ntasks=1 --mem=25G --time=4:00:00
salloc: Granted job allocation 82652
salloc: Waiting for resource configuration
salloc: Nodes nid001007 are ready for job
Step 2: Change directory to the /software filesystem
We recommend building and compiling WRF on the /software filesystem, so that it is not removed by the 30-day purge policy that applies to /scratch.
Move to your software directory using the $MYSOFTWARE environment variable as shown below, and create and enter any additional directories if desired.
$ cd $MYSOFTWARE
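For example, to keep the build in its own subdirectory (the directory name here is illustrative only):

$ mkdir -p $MYSOFTWARE/wrf_build
$ cd $MYSOFTWARE/wrf_build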
Step 3: Download WRF
Check the releases page for the URL of the latest version; this will likely be the file ending in tar.gz in the Assets box at the bottom of the latest release.
Use the wget utility to download the file from within the interactive session; it is a relatively small file that should only take a few seconds to download.
See: WRF Github Releases page (external site)
$ wget https://github.com/wrf-model/WRF/releases/download/v4.4/v4.4.tar.gz
Step 4: Extract WRF
Expand the WRF tarball using the tar utility.
$ tar zxvf v4.4.tar.gz
Step 5: Enter the WRF directory
Use the cd command to enter the WRFV4.4 directory:
$ cd WRFV4.4
Step 6: Prepare the Environment
Swap to the PrgEnv-cray programming environment, load the modules containing dependencies needed by WRF, and set the NETCDF environment variable.
$ module swap PrgEnv-gnu/8.4.0 PrgEnv-cray/8.4.0
$ module load cray-hdf5-parallel/1.12.2.3
$ module load cray-netcdf-hdf5parallel/4.9.0.3
$ module load cray-parallel-netcdf/1.12.3.3
$ export NETCDF=$NETCDF_DIR
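Before configuring, it can be worth checking that NETCDF now points at the Cray netCDF installation; this should print a path under /opt/cray/pe/netcdf-hdf5parallel matching the module version loaded:

$ echo $NETCDF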
Step 7: Configure WRF
Run the WRF configuration script:
./configure
Select the option that contains the Cray compiler and the corresponding type of parallelism desired.
For WRF 4.4 these options are:
- 46. (dmpar) CRAY CCE (ftn $(NOOMP)/cc): Cray XE and XC, for MPI support, or
- 47. (smpar_dmpar) CRAY CCE (ftn $(NOOMP)/cc): Cray XE and XC, for MPI and OpenMP support.
The dmpar configuration (option 46) is generally preferred over the hybrid approach (option 47).
Select the level of nesting desired; in most cases this will be the default option corresponding to basic nesting.
Step 8: Compile WRF
Run the relevant WRF compilation, which may take one to two hours:
./compile em_real
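If the compilation succeeds, the WRF executables are placed in the main directory; for an em_real build these should include wrf.exe and real.exe. A quick way to check is:

$ ls -l main/*.exe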
Installing WRF using a Slurm batch script
WRF can also be installed using a Slurm batch script instead of an interactive session.
Note that the answers normally given interactively to the configure script must be supplied by the batch script.
The following is an example for Option 46 (described in the previous section) and the default nesting.
#!/bin/bash --login
#SBATCH --account=project_id
#SBATCH --partition=work
#SBATCH --ntasks=1
#SBATCH --ntasks-per-node=1
#SBATCH --mem=25G
#SBATCH --time=4:00:00
#SBATCH --cpus-per-task=1

module swap PrgEnv-gnu/8.4.0 PrgEnv-cray/8.4.0
module load cray-hdf5-parallel/1.12.2.3
module load cray-netcdf-hdf5parallel/4.9.0.3
module load cray-parallel-netcdf/1.12.3.3
export NETCDF=$NETCDF_DIR

wget https://github.com/wrf-model/WRF/releases/download/v4.4/v4.4.tar.gz
tar zxfv v4.4.tar.gz
cd WRFV4.4
./configure <<< $'46'
./compile em_real
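Assuming the script above has been saved as build_wrf.sh (the filename is arbitrary), it can be submitted from the build directory on /software with:

$ sbatch build_wrf.sh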
Using WRF on Setonix
To use the WRF installation on Setonix, we recommend the following workflow:
- Copy your WRF installation to a directory for the simulation on /scratch (a minimal sketch follows this list)
- Copy the necessary input data files to that directory
- Configure your simulation
- Prepare a Slurm batch script to launch the job, including the same environment commands used to compile WRF
- Submit your job to the scheduler
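A minimal sketch of the first two steps is shown below. It assumes the $MYSCRATCH environment variable points to your directory on /scratch; the directory names and input data locations are illustrative only.

$ mkdir -p $MYSCRATCH/wrf_run
$ cp -r $MYSOFTWARE/WRFV4.4/run/* $MYSCRATCH/wrf_run/
$ cp /path/to/your/input/files/* $MYSCRATCH/wrf_run/
$ cd $MYSCRATCH/wrf_run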
The following is an example jobscript for running an MPI-enabled (dmpar) version of WRF on 2 nodes (256 cores):
#!/bin/bash --login
#SBATCH --account=project_id
#SBATCH --partition=work
#SBATCH --ntasks=256
#SBATCH --ntasks-per-node=128
#SBATCH --time=24:00:00
#SBATCH --exclusive
#SBATCH --cpus-per-task=1
#SBATCH --nodes=2

module swap PrgEnv-gnu/8.4.0 PrgEnv-cray/8.4.0
module load cray-hdf5-parallel/1.12.2.3
module load cray-netcdf-hdf5parallel/4.9.0.3
module load cray-parallel-netcdf/1.12.3.3
export NETCDF=$NETCDF_DIR

srun -N 2 -n 256 -c 1 ./wrf.exe
WRF Reference Data
The WPS Geog dataset has been placed in /scratch/references/wrf/wps_geog to avoid the need for repeated staging of data commonly used by WRF users.
For more information, see WPS Geographical Static Data webpage (external link).
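For example, WPS can be pointed at this shared dataset by setting geog_data_path in the &geogrid section of namelist.wps (excerpt only; the remainder of the namelist depends on your domain configuration):

&geogrid
 geog_data_path = '/scratch/references/wrf/wps_geog'
/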
Installing WPS
WPS can be compiled in a similar manner to WRF.
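A minimal sketch of the process is shown below, assuming the same modules and environment as the WRF build above and WPS version 4.4 (check the WPS releases page for the current version; the download URL and paths are illustrative):

$ cd $MYSOFTWARE
$ wget -O WPS-v4.4.tar.gz https://github.com/wrf-model/WPS/archive/refs/tags/v4.4.tar.gz
$ tar zxvf WPS-v4.4.tar.gz
$ cd WPS-4.4
$ ./configure
$ ./compile

After running ./configure, check that the WRF_DIR entry in the generated configure.wps file points at the compiled WRF directory (it defaults to ../WRF).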
Solutions to Common Issues
WRF simulation crashes due to numerical accuracy issues in the noah_mp module.
WRF contains internal checks on the ongoing accuracy of the simulation, and will halt the simulation if accuracy conditions are not met.
In practice, this looks like the following in one of the many rsl.out.* files created when running a simulation:
-------------- FATAL CALLED ---------------
FATAL CALLED FROM FILE: <stdin> LINE: 1650
Stop in Noah-MP
-------------------------------------------
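As many rsl files are produced (one per MPI rank), a quick way to locate the ranks that reported the error is to search them from the run directory:

$ grep -l "FATAL" rsl.out.* rsl.error.*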
The Noah-MP failure above reflects a known issue: when WRF is compiled with the Cray Fortran compiler, the noah_mp land surface model can develop numerical instabilities that crash the model. This has been linked to compiler optimisation levels, and a work-around is to compile the noah_mp module with -O1 optimisation while keeping -O3 optimisation for the rest of WRF. To do this, first compile WRF as normal.
Then manually delete the compiled noah_mp files, re-compile them with -O1, and run ./compile em_real again. Note that the paths in the example below will differ depending on where WRF is installed.
$ cd phys
$ rm -f module_sf_noahmplsm.o
$ rm -f MODULE_SF_NOAHMPLSM.mod
$ ftn -hnoomp -o module_sf_noahmplsm.o -c -O1 -Ofp1 -N1023 -f free -h byteswapio -I../dyn_em -I/$MYSOFTWARE/setonix/WRF4.4/WRF/external/esmf_time_f90 -I/$MYSOFTWARE/setonix/WRF4.4/WRF/main -I/$MYSOFTWARE/setonix/WRF4.4/WRF/external/io_netcdf -I/$MYSOFTWARE/setonix/WRF4.4/WRF/external/io_int -I/$MYSOFTWARE/setonix/WRF4.4/WRF/frame -I/$MYSOFTWARE/setonix/WRF4.4/WRF/share -I/$MYSOFTWARE/setonix/WRF4.4/WRF/phys -I/$MYSOFTWARE/setonix/WRF4.4/WRF/wrftladj -I/$MYSOFTWARE/setonix/WRF4.4/WRF/chem -I/$MYSOFTWARE/setonix/WRF4.4/WRF/inc -I/opt/cray/pe/netcdf-hdf5parallel/4.8.1.1/crayclang/10.0/include -s integer32 -s real32 module_sf_noahmplsm.f90
$ cd ..
$ ./compile em_real
WRF 4.4 bug in phys/module_ra_clWRF_support.F
Be aware that version 4.4 of WRF contains a bug that will cause wrf.exe to crash.
Fixing it requires code changes in the phys/module_ra_clWRF_support.F file around line 171; specifically, the lines:
IF (istatus /= -1) THEN
   WRITE(message,*) " Not normal ending of CAMtr_volume_mixing_ratio file"
   call wrf_error_fatal( message )
END IF
need to be changed to:
IF (istatus == -1) THEN
   WRITE(message,*) "Normal ending of CAMtr_volume_mixing_ratio file"
   CALL wrf_message( message )
ELSE IF (istatus == -4001) THEN
   ! Cray CCE compiler throws -4001 rather than -1
   WRITE(message,*) "Normal ending of CAMtr_volume_mixing_ratio file"
   CALL wrf_message( message )
ELSE
   ! if not -1 or -4001, then abort
   WRITE(message,*) " Not normal ending of CAMtr_volume_mixing_ratio file"
   call wrf_error_fatal( message )
END IF
It is expected that versions newer than 4.4 will address this bug.