WRF Study Notes, Chapter 5: WRF Model (Part 1), Introduction and Installation

Chapter 5: WRF Model

from https://www2.mmm.ucar.edu/wrf/users/docs/user_guide_v4/v4.1/users_guide_chap5.html
Table of Contents

 • Introduction
 • Installing WRF

 • Running WRF
  o Idealized Case
  o Real Data Case
  o Restart Run
  o Two-Way Nested Runs
  o One-Way Nested Run Using ndown
  o Moving Nested Run
  o Analysis Nudging Runs
  o Observation Nudging
  o Global Run
  o DFI Run
  o SST Update
  o Using bucket_mm and bucket_J options
  o Adaptive Time Stepping
  o Stochastic Parameterization Schemes
  o Run-Time IO
  o Output Diagnostics
  o WRF-Hydro
  o Using IO Quilting
  o Using Physics Suites
  o Hybrid Vertical Coordinate
 • Examples of namelists for various applications
 • Check Output
 • Trouble Shooting
 • Physics and Dynamics Options
 • Summary of PBL Physics Options
 • Summary of Microphysics Options
 • Summary of Cumulus Parameterization Options
 • Summary of Radiation Options
 • Description of Namelist Variables
 • WRF Output Fields
 • Special WRF Output Variables

Introduction

The WRF model is a fully compressible and nonhydrostatic model (with a run-time hydrostatic option). Its vertical coordinate is selectable as either a terrain-following (TF) or (beginning in Version 3.9) hybrid vertical coordinate (HVC) hydrostatic pressure coordinate. The grid staggering is the Arakawa C-grid. The model uses the Runge-Kutta 2nd and 3rd order time integration schemes, and 2nd to 6th order advection schemes in both the horizontal and vertical. It uses a time-split small step for acoustic and gravity-wave modes. The dynamics conserves scalar variables.
The WRF model code contains an initialization program (either for real data, real.exe, or idealized data, ideal.exe; see Chapter 4), a numerical integration program (wrf.exe), a program for one-way nesting (ndown.exe), and a program for tropical storm bogussing (tc.exe). Version 4 of the WRF model supports a variety of capabilities. These include:
• Real-data and idealized simulations
• Various lateral boundary condition options for real-data and idealized simulations
• Full physics options, and various filter options
• Positive-definite advection scheme
• Non-hydrostatic and hydrostatic (runtime option)
• One-way and two-way nesting, and a moving nest
• Three-dimensional analysis nudging
• Observation nudging
• Regional and global applications
• Digital filter initialization
• Vertical refinement in a child domain

Other References

• WRF tutorial presentation: http://www.mmm.ucar.edu/wrf/users/supports/tutorial.html
• WRF-ARW Tech Note: http://www.mmm.ucar.edu/wrf/users/pub-doc.html (Tech Note for V4.0 is under preparation)
• See Chapter 2 of this document for software requirements.

Installing WRF

Before compiling the WRF code on a computer, check to see if the netCDF library is installed. This is because netCDF is one of the supported WRF I/O options, and it is the one commonly used and supported by the post-processing programs. If netCDF is installed in a directory other than /usr/local/, then find the path, and use the environment variable NETCDF to define where the path is. To do so, type
setenv NETCDF path-to-netcdf-library
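A quick way to check for an existing install (assuming the nc-config and nf-config helper scripts, which ship with netCDF-C and netCDF-Fortran respectively, were installed alongside the library):

```shell
# Guarded check for an existing netCDF install; both helpers are optional,
# hence the command -v guards.
if command -v nc-config >/dev/null 2>&1; then
    nc-config --version   # e.g. "netCDF 4.7.x"
    nc-config --prefix    # install root: a candidate value for NETCDF
else
    echo "nc-config not found: netCDF may not be installed"
fi
if command -v nf-config >/dev/null 2>&1; then
    nf-config --prefix    # location of the Fortran bindings
fi
```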
Often the netCDF library and its include/ directory are collocated. If this is not the case, create a directory, link both netCDF lib and include directories in this directory, and use the environment variable to set the path to this directory. For example,
netcdf_links/lib -> /netcdf-lib-dir/lib
netcdf_links/include -> /where-include-dir-is/include
setenv NETCDF /directory-where-netcdf_links-is/netcdf_links
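The linking workaround above can be sketched as a short snippet (bash syntax shown alongside the guide's csh setenv; the two source paths are the placeholders from the example and must be replaced with real ones):

```shell
# Sketch of the netcdf_links workaround. The two source paths are
# placeholders; the symlinks dangle until they point at a real netCDF
# lib/ and include/ directory.
mkdir -p "$HOME/netcdf_links"
ln -sfn /netcdf-lib-dir/lib           "$HOME/netcdf_links/lib"
ln -sfn /where-include-dir-is/include "$HOME/netcdf_links/include"
# csh (as used in this guide):  setenv NETCDF $HOME/netcdf_links
# bash equivalent:              export NETCDF=$HOME/netcdf_links
```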
If the netCDF library is not available on the computer, it needs to be installed first. NetCDF source code and pre-built binaries, along with installation instructions, are available from the Unidata Web page at http://www.unidata.ucar.edu/.
Hint for Linux users:
If the PGI, Intel, gfortran, or g95 compilers are used on a Linux computer, make sure netCDF was installed using the same compiler, and use the NETCDF environment variable to point to that PGI/Intel/gfortran/g95-compiled netCDF library.
Hint: If using netCDF-4, make sure that the new capabilities (such as parallel I/O based on HDF5) are not activated at install time, unless you intend to use the compression capability of netCDF-4 (supported since V3.5; more info below).
The WRF source code tar file can be downloaded from http://www.mmm.ucar.edu/wrf/users/download/get_source.html. Once the tar file is unzipped (gunzip WRFV4.TAR.gz) and untarred (tar -xf WRFV4.TAR), it will create a WRF/ directory. This contains:
Makefile Top-level makefile
README General information about the WRF/ARW core
doc/ Information on various functions of the model
Registry/ Directory for WRF Registry files
arch/ Directory where compile options are gathered
clean script to clean created files and executables
compile script for compiling the WRF code
configure script to create the configure.wrf file for compiling
chem/ WRF chemistry, supported by NOAA/GSD
dyn_em/ Directory for ARW dynamics and numerics
dyn_nmm/ Directory for NMM dynamics and numerics, supported by DTC
external/ Directory that contains external packages, such as those for IO, time keeping and MPI
frame/ Directory that contains modules for the WRF framework
inc/ Directory that contains ‘include’ files
main/ Directory for main routines, such as wrf.F, and all executables after compilation
phys/ Directory for all physics modules
run/ Directory where one may run WRF
share/ Directory that contains mostly modules for the WRF mediation layer and WRF I/O
test/ Directory that contains test case directories, may be used to run WRF
tools/ Directory that contains tools for developers
The steps to compile and run the model are:

  1. configure: generate a configuration file for compilation
    
  2. compile: compile the code
    
  3. run the model
    

Go to the WRF (top) directory and type:
./configure
The build for the WRF model allows for a few options to be used with the configure command.
./configure -d    build the code with debugging turned on
./configure -D    same as -d, plus bounds and range checking, uninitialized variables, and floating-point traps
./configure -r8   build the code to use 64-bit reals for computation and output
For any of the ./configure commands, a list of choices for your computer should appear. These choices range from compiling for a single processor job (serial), to using OpenMP shared-memory (smpar), distributed-memory parallelization (dmpar) options for multiple processors, or a combination of shared-memory and distributed-memory options (dm+sm). When a selection is made, a second choice for compiling nesting will appear. For example, on a Linux computer, the above steps may look like:

setenv NETCDF /usr/local/netcdf-pgi
./configure
checking for perl5... no
checking for perl... found /usr/bin/perl (perl)
Will use NETCDF in dir: /glade/apps/opt/netcdf/4.3.0/intel/12.1.5
HDF5 not set in environment. Will configure WRF for use without.
PHDF5 not set in environment. Will configure WRF for use without.
Will use 'time' to report timing information
$JASPERLIB or $JASPERINC not found in environment, configuring to build without grib2 I/O...


Please select from among the following Linux x86_64 options:

   1. (serial)  2. (smpar)  3. (dmpar)  4. (dm+sm)  PGI (pgf90/gcc)
   5. (serial)  6. (smpar)  7. (dmpar)  8. (dm+sm)  PGI (pgf90/pgcc): SGI MPT
   9. (serial) 10. (smpar) 11. (dmpar) 12. (dm+sm)  PGI (pgf90/gcc): PGI accelerator
  13. (serial) 14. (smpar) 15. (dmpar) 16. (dm+sm)  INTEL (ifort/icc)
                                       17. (dm+sm)  INTEL (ifort/icc): Xeon Phi (MIC architecture)
  18. (serial) 19. (smpar) 20. (dmpar) 21. (dm+sm)  INTEL (ifort/icc): Xeon (SNB with AVX mods)
  22. (serial) 23. (smpar) 24. (dmpar) 25. (dm+sm)  INTEL (ifort/icc): SGI MPT
  26. (serial) 27. (smpar) 28. (dmpar) 29. (dm+sm)  INTEL (ifort/icc): IBM POE
  30. (serial)             31. (dmpar)              PATHSCALE (pathf90/pathcc)
  32. (serial) 33. (smpar) 34. (dmpar) 35. (dm+sm)  GNU (gfortran/gcc)
  36. (serial) 37. (smpar) 38. (dmpar) 39. (dm+sm)  IBM (xlf90_r/cc_r)
  40. (serial) 41. (smpar) 42. (dmpar) 43. (dm+sm)  PGI (ftn/gcc): Cray XC CLE
  44. (serial) 45. (smpar) 46. (dmpar) 47. (dm+sm)  CRAY CCE (ftn/cc): Cray XE and XC
  48. (serial) 49. (smpar) 50. (dmpar) 51. (dm+sm)  INTEL (ftn/icc): Cray XC
  52. (serial) 53. (smpar) 54. (dmpar) 55. (dm+sm)  PGI (pgf90/pgcc)
  56. (serial) 57. (smpar) 58. (dmpar) 59. (dm+sm)  PGI (pgf90/gcc): -f90=pgf90
  60. (serial) 61. (smpar) 62. (dmpar) 63. (dm+sm)  PGI (pgf90/pgcc): -f90=pgf90
  64. (serial) 65. (smpar) 66. (dmpar) 67. (dm+sm)  INTEL (ifort/icc): HSW/BDW
  68. (serial) 69. (smpar) 70. (dmpar) 71. (dm+sm)  INTEL (ifort/icc): KNL MIC
  72. (serial) 73. (smpar) 74. (dmpar) 75. (dm+sm)  FUJITSU (frtpx/fccpx): FX10/FX100 SPARC64 IXfx/Xlfx

Enter selection [1-75] :
Compile for nesting? (0=no nesting, 1=basic, 2=preset moves, 3=vortex following) [default 0]:
Enter the appropriate options that are best for your computer and application.
When the return key is hit, a configure.wrf file will be created. Edit compile options/paths, if necessary.
Hint: It is helpful to start with something simple, such as the serial build. If it is successful, move on to build the dmpar or smpar code. Remember to type './clean -a' between builds whenever you change one of the Registry files or change an option during the configure step.
Hint: If you would like to use parallel netCDF (p-netCDF) developed by Argonne National Lab (http://trac.mcs.anl.gov/projects/parallel-netcdf), you will need to install p-netCDF separately, and use the environment variable PNETCDF to set the path:
setenv PNETCDF path-to-pnetcdf-library
Hint: Since V3.5, compilation may take a bit longer due to the addition of the CLM4 module. If you do not intend to use the CLM4 land-surface model option, you can modify your configure.wrf file by removing -DWRF_USE_CLM from ARCH_LOCAL.
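That edit can also be scripted; the sketch below assumes a configure.wrf already exists in the current directory and keeps a backup of the original:

```shell
# Remove -DWRF_USE_CLM from configure.wrf if the file is present;
# the unedited original is saved as configure.wrf.bak.
if [ -f configure.wrf ]; then
    sed -i.bak 's/-DWRF_USE_CLM//g' configure.wrf
fi
```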
To compile the code, type
./compile
and the following choices will appear:
Usage:

compile wrf    compile wrf in run dir (Note: no real.exe, ndown.exe or ideal.exe generated)

or choose a test case (see README_test_cases for details):

compile em_b_wave
compile em_convrad (new in V3.7)
compile em_esmf_exp (example only)
compile em_grav2d_x
compile em_heldsuarez
compile em_hill2d_x
compile em_les
compile em_quarter_ss
compile em_real
compile em_seabreeze2d_x
compile em_squall2d_x
compile em_squall2d_y
compile em_tropical_cyclone

compile nmm_real (NMM solver)
compile nmm_tropical_cyclone (NMM solver)

compile -h     help message

where em stands for the Advanced Research WRF dynamic solver (which is the 'Eulerian mass-coordinate' solver). Type one of the above to compile. When you switch from one test case to another, you must type one of the above to recompile. The recompile is necessary to create a new initialization executable (i.e. real.exe, and ideal.exe - there is a different ideal.exe for each of the idealized test cases), while wrf.exe is the same for all test cases.
If you want to remove all object files (except those in the external/ directory) and executables, type './clean'.
Type './clean -a' to remove built files in ALL directories, including configure.wrf (the original configure.wrf will be saved to configure.wrf.backup). The './clean -a' command is required if you have edited configure.wrf or any of the Registry files.
Beginning with V4.0, the default compile will use the netCDF4 compression function if it detects that all supported libraries are available. This option will typically reduce the file size by more than 50%, but writing may take longer. If the required libraries do not exist, the compile will fall back to classic netCDF. One can also enforce the use of classic netCDF by setting the environment variable NETCDF_classic before running './configure' and './compile'.
For more detailed information, visit: http://www.mmm.ucar.edu/wrf/users/wrfv3.5/building-netcdf4.html
a. Idealized case
For any 2D test case (labeled in the case names), serial or OpenMP (smpar) compile options must be used. Additionally, you must choose only the '0=no nesting' option when you configure. For all other cases, you may use serial or parallel (dmpar) and nesting. For example, to compile the 2-dimensional squall-line case, type
./compile em_squall2d_x >& compile.log
After a successful compilation, you should have two executables created in the main/ directory: ideal.exe and wrf.exe. These two executables will be linked to the corresponding test/case_name and run/ directories. cd to either directory to run the model.
It is a good practice to save the entire compile output to a file. When the executables are not present, this output is useful to help diagnose the compile errors.
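When a build does fail, the saved log can be scanned for the first failures; a guarded sketch, assuming the compile.log name used in the example above:

```shell
# Show the first compile failures recorded in compile.log, if one exists.
if [ -f compile.log ]; then
    grep -i -n -E 'error|undefined reference' compile.log | head -20
fi
```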
b. Real-data case
For a real-data case, type
./compile em_real >& compile.log &
When the compile is successful, it will create three executables in the main/ directory: ndown.exe, real.exe and wrf.exe.
real.exe: WRF initialization for real-data cases
ndown.exe: one-way nesting
wrf.exe: WRF model integration
As in the idealized cases, these executables will be linked to the test/em_real and run/ directories. cd to one of these two directories to run the model.

2020-04-13 16:05

posted @ 2024-09-04 05:35 chinagod