Module usage and setup

Modules environment
Description

This is a system that allows you to easily change between different versions of compilers and other software, no matter what shell you use, without having to set a lot of environment variables by hand each time. The authors say:

The Modules package is a database and set of scripts that simplify shell initialization and lets users easily modify their environment during the session. The Modules package lessens the burden of UNIX environment maintenance while providing a mechanism for the dynamic manipulation of application environment changes as single entities.
Availability: SuSE workstations and all servers except Chimaera and Athens
Source: http://modules.sourceforge.net/
Licence: GPL


Instructions for users

Basics

The machines have a variety of Fortran compilers, libraries, and other software installed, some of it in multiple versions. Setting up the user environment to use the appropriate ones is becoming increasingly complicated, especially if you often need to switch between compilers for testing. The purpose of the modules package is to simplify this for end users.

On most of the machines the modules package is available as standard, and sometimes useful modules are automatically loaded on login.

To get started, list the loaded modules:

cen1001@sword:~$ module list
Currently Loaded Modulefiles:
  1) modules              3) ifort/8.0             5) nag/5.0
  2) ifc                  4) pgi/4.0-3(default)
You can see that the 'modules' module itself is loaded, and several others. Most of the others are compilers, but if you didn't know that you could check:
cen1001@sword:~$ module whatis ifc
ifc : Loads version 7 of the Intel Fortran compiler
cen1001@sword:~$ module help ifc
----------- Module Specific Help for 'ifc' ------------------------
ifc - loads version 7 of the Intel Fortran compiler
This adds /usr/local/intel/compiler70/ia32/bin to the PATH and
sets the LM_LICENSE_FILE variable amongst others.
ifc is a highly optimizing Fortran compiler for Intel chips.
It relies on a licence server for compiling, but binaries may
be run without a licence being available. There is a manpage,
and both HTML and PDF documentation in
/usr/local/intel/compiler70/doc.
Some of the modules are versioned. You can see what other versions of the modules are available and which versions are the defaults:
cen1001@sword:~$ module avail
------------------------- /usr/local/Modules/versions --------------------------
3.1.6
--------------------- /usr/local/Modules/3.1.6/modulefiles ---------------------
fftw/2.1.5/ifc        module-cvs              null
fftw/3.0.1            module-info             pgi/4.0-3(default)
ifc                   modules                 pgi/5.0-2
ifort/8.0             mpi/mpich/1.2.5/ifc     pgi/5.1-6
mkl/6.0(default)      mpi/mpich/1.2.5/pgi     pgi/5.2-4
mkl/6.1               nag/5.0                 use.own

If you want to see what loading a module is going to do to your environment:
cen1001@sword:~$ module display ifort
-------------------------------------------------------------------
/usr/local/Modules/3.1.6/modulefiles/ifort/8.0:
module-whatis Loads version 8.0 of the Intel Fortran compiler
setenv LM_LICENSE_FILE 28518@por.ch.private.cam.ac.uk
append-path PATH /usr/local/intel_fc_80/bin
prepend-path LD_LIBRARY_PATH /usr/local/intel_fc_80/lib
append-path MANPATH /usr/local/intel_fc_80/man
-------------------------------------------------------------------

Some modules conflict with others. For example, you may only have one of the pgi modules loaded at once. If I tried to load pgi/5.2-4 with pgi/4.0-3 already loaded I'd get an error message. So if I wanted to use the latest PGI compiler I would have to replace pgi/4.0-3 with pgi/5.2-4:
cen1001@sword:~$ module unload pgi
cen1001@sword:~$ module load pgi/5.2-4
or even quicker:
cen1001@sword:~$ module swap pgi pgi/5.2-4

Some of the computers have special modules available which load a set of modules with one command. They are useful when doing parallel programming as this kind of work requires you to load a large set of compatible modules and it can be tricky to get the dependencies right. These modules are called things like 'environment' or 'compilers'. These particular modules are intended as shortcuts to be placed in your shell startup files and do nothing other than load other modules.
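
As a sketch, such a meta-module is just a modulefile whose body loads other modules; the names below are taken from the 'module avail' listing above and are only an example of what a compatible set might be:

#%Module1.0
## 'environment'-style meta-module: does nothing except load
## a compatible compiler, maths library, and MPI
module-whatis "Loads a compatible compiler, maths library, and MPI"
module load ifort/8.0
module load mkl/6.0
module load mpi/mpich/1.2.5/ifc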

Making it stick

Once you have got things configured to your satisfaction you will want to make your shell load the same set of modules every time you log in. Unfortunately it can be very tricky to get this right so that you always get the modules loaded when you want them and don't prevent yourself from switching when you need to. I am only going to cover the bash shell here.

Making it stick on managed SuSE workstations

Put the module commands in your ~/.bashrc file. This works because that file is sourced both for interactive non-login shells (ie all your terminals in GNOME) and for interactive login shells (ie when you ssh in). This is a SuSEism and does not work on every Linux! In particular, many distributions do not source .bashrc for login shells.
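
For example (the module choices here are just illustrations):

# Fragment for ~/.bashrc on a managed SuSE workstation
module load pgi/5.2-4
module load nag/5.0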

~/.bashrc is not sourced for non-interactive shells, so although a shell script will inherit the environment that you set up using module commands, it cannot change it by calling module internally unless you reinitialize the module system within the script.
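
A minimal sketch of that reinitialization, using the install path shown in the listings above (the init file location varies between installations):

#!/bin/bash
# Non-interactive script: define the 'module' shell function first,
# then module commands work as usual
source /usr/local/Modules/3.1.6/init/bash
module load ifort/8.0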

Making it stick on compute servers

Unfortunately this depends on the server's setup. A new user account comes with a set of example shell startup files, so you should look for the one with the module commands in it and modify that. The following explains what's going on.

On clusters where modules are configured on the compute nodes as well as the head nodes (sword, destiny, tardis) you can safely put module commands in ~/.bashrc. Those machines run SuSE so the module commands run for all interactive shells. As modules are configured on the compute nodes you don't get an error when the queueing system starts a shell on a compute node.

Clust, nimbus, and mek-quake only have modules on the head node. The module commands should be wrapped in a test so that they are only executed on the head node. They can go in ~/.profile, ~/.bash_profile, or ~/.bashrc, but I'd recommend one of the first two, as you really only want the module commands to run when you first log into the system.
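
A sketch of such a test, assuming the head node's short hostname is 'clust' (check yours with 'hostname -s'):

# In ~/.bash_profile: only run module commands on the head node
if [ "$(hostname -s)" = "clust" ]; then
    module load pgi/5.2-4
fi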

These machines are set up like this for historical reasons, and it is now not possible to change it because people using them have come to rely on the behaviour in some rather fundamental ways.

Documentation

The modules system can do much more than I've covered above. There are manpages for it, and the module command has built-in help: type 'module help'. Lots of helpful stuff for admins can be found in the modules-interest mailing list archive.

Admin notes

Where to find things

The module configuration files live in /usr/local/shared/Modules/modulefiles{,32,64} and also in the image-specific directories in /usr/local/shared. On servers they are generally in /usr/local/Modules. They are written in Tcl, but you don't need to know (much) Tcl because most of the commands you use are extensions written for the modules package. The easiest way to learn is to copy an existing modulefile and edit it. A full list of modulefile commands is in the modulefile(4) manpage (available when modules are loaded, of course).
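
As an illustration, a minimal modulefile might look like this (the tool name and paths are invented for the example):

#%Module1.0
## modulefile for a hypothetical 'exampletool'
proc ModulesHelp { } {
    puts stderr "Loads exampletool and puts it on your PATH"
}
module-whatis "Loads exampletool"
prepend-path PATH    /usr/local/exampletool/bin
prepend-path MANPATH /usr/local/exampletool/man
setenv       EXAMPLETOOL_HOME /usr/local/exampletool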

Modules can be versioned. Instead of a single modulefile there is a directory bearing the name the plain module would have had, containing one modulefile per version. The directory may also contain a .modulerc file. This is itself a modulefile and is executed whenever any module from that directory is requested; it can set the default version to use if the user didn't pick one. When making deeply nested modulefiles there has to be a .modulerc at each level pointing to the appropriate next directory level, rather than just one at the top level.
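
For example, a .modulerc in the pgi directory could mark 4.0-3 as the default (a sketch, using the versions listed above; on this version of Modules the module-version command sets the default):

#%Module1.0
## pgi/.modulerc: 'module load pgi' now means pgi/4.0-3
module-version pgi/4.0-3 default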

You can add new module directory trees to the search path by editing the init/.modulepath file in the module install directory. Users can have their own private modules directory by loading the use.own module, which sets modules up to search ~/privatemodules as well as the system module directories.
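
For example:

cen1001@sword:~$ mkdir -p ~/privatemodules
cen1001@sword:~$ module load use.own
cen1001@sword:~$ module avail

After this, 'module avail' also lists anything in ~/privatemodules.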

Modules for libraries

Modules for libraries are slightly more complex, because the library location has to be made available to the compiler (actually the linker) at compile (actually link) time, and also to the runtime linker at runtime. The latter can be done by setting LD_LIBRARY_PATH, but this has disadvantages. If you set it to contain an NFS filesystem and your NFS server becomes unavailable, your shell will freeze. It is therefore arguably better not to set LD_LIBRARY_PATH and instead use the rpath method, which codes the library directory into the binary itself. This also avoids the problem where your binary works interactively (where LD_LIBRARY_PATH is set by the login scripts) but fails when run through one of the batch queueing systems (where you have to remember to set it yourself if you need it).
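
As a sketch, here is the difference at link time (the library name and path are invented for the example):

# LD_LIBRARY_PATH method: the directory must also be on
# LD_LIBRARY_PATH at runtime
ifort -o myprog myprog.f90 -L/usr/local/mylib/lib -lmylib

# rpath method: the directory is recorded in the binary itself
ifort -o myprog myprog.f90 -L/usr/local/mylib/lib -lmylib \
      -Wl,-rpath,/usr/local/mylib/lib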

Having said all that, rpath seems not to work in all cases. I linked dynamically with MKL and at runtime the MKL library itself tried to pull in another library file, which it couldn't find without LD_LIBRARY_PATH set even though the original binary had rpath set correctly.

Because of these problems, the MKL modules set LD_LIBRARY_PATH and also set variables called LIBRARY_PATH and CPATH. The compiler driver scripts test for these and, if they are set, tell the linker to search the appropriate directories at link time and also put them into the rpath. The CPATH variable tells the compiler where to find headers. See the compiler admin docs for how to set each compiler up to do this. The shell-freezing problem is avoided by never loading modules which set LD_LIBRARY_PATH by default; the user has to make a conscious decision to load them. Other library modules can also set LIBRARY_PATH and CPATH. Some versions of the Intel MKL modulefiles set MKLPATH too, which I used to use for similar purposes.
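
For illustration, the relevant lines in such a library modulefile might look roughly like this (the install path is an assumption, not the real MKL layout):

prepend-path LIBRARY_PATH    /usr/local/intel/mkl/6.0/lib/32
prepend-path CPATH           /usr/local/intel/mkl/6.0/include
prepend-path LD_LIBRARY_PATH /usr/local/intel/mkl/6.0/lib/32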

Initialization considerations

I had some trouble putting module initialisation into the system startup scripts without breaking things. I wanted users to start with the modules environment loaded, and also to have certain specific modules (compilers) loaded by default. The latter caused trouble with the C shell, because that shell sources /etc/csh.cshrc for every shell: if you force certain default modules to be loaded in that file, you run into problems when a user who has loaded a different set in their environment then tries to run a C shell script. The script tries to load the default set and complains if it clashes with the set the user's original shell currently has.

One way round this would have been to place the module loads in each user's ~/.cshrc file, so that they could edit it if they needed to change the default modules. Unfortunately I had an existing userbase and didn't want to have to read everyone's startup files and modify them individually.

The solution turned out to be not to load any modules for non-interactive shells, by wrapping the load commands inside a test. The Bourne shells don't run any startup files for non-interactive shells, so this was never an issue for them. I decided to still initialize the modules environment for non-interactive C shells, even though this means they work slightly differently from Bourne shells; it might be useful to someone.
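
The test itself is the usual csh idiom, since $?prompt is only set for interactive shells. A sketch of the wrapper for /etc/csh.cshrc (the default module names are just examples):

# Load the default modules for interactive C shells only
if ( $?prompt ) then
    module load ifc pgi
endif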

As time goes on this has ceased to be a problem because new users get a preconfigured ~/.bashrc or ~/.profile as appropriate, and can then edit it if needed. Old users have by now become used to the new system and have inserted commands in their startup files, so there is no need to load any modules in the system startup files.

Defaults and deep nesting

There is a problem with the current version of Modules where defaults only work correctly for modulefiles with all their versions in the top level; ie you can't have nested directories. You can put a .modulerc at each level as described above, but if any of the module families have common pathname components (as, for example, ifort and icc do) you can only do this for one family, or you get errors about duplicate defaults. This is a problem for the local collection of compilers and libraries because we have many modules like 'mkl/64/serial/9.1/023'. The best fix I have found so far is to make a dummy modulefile in the top level that contains something like:

#%Module1.0
module load ifort/32/9.0/032
proc ModulesHelp { } {
    puts "This is a dummy module; to see what it does type \"module disp [module-info name]\"\n"
}
module-whatis "This is a dummy module; type \"module disp [module-info name]\" to see what it does\n"

and then to make a .modulerc file in the top level setting that module as the default. When users type 'module load ifort' they get the right default.

Zero effort modulefiles

I found a nice trick on the modules-interest mailing list which lets you use the exact same modulefile for every version of a compiler family. You can use the command

set components [ file split [ module-info name ] ]

to grab the filename of the modulefile and split it into its parts, then you can operate on those parts and use them to set things like the compiler version variables. Install the magic modulefile as .base and just create symlinks to it with the names you need.
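
A sketch of such a generic modulefile, with an invented install layout:

#%Module1.0
## Generic '.base' modulefile: install once, then symlink to it
## as e.g. ifort/9.0, ifort/8.0, icc/9.0 ...
set components [ file split [ module-info name ] ]
set family     [ lindex $components 0 ]   ;# e.g. ifort
set version    [ lindex $components 1 ]   ;# e.g. 9.0
module-whatis "Loads $family version $version"
prepend-path PATH /usr/local/$family/$version/bin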
