This is a Lab Notebook entry which describes how to solve a specific problem at a specific time.
Please keep this in mind as you read and use the content. Please pay close attention to the date, version
information and other details.
CTSM and LILAC
CTSM (Community Terrestrial Systems Model) is a land surface model that simulates terrestrial processes on geographic grids. The latest version is built with LILAC (Lightweight Infrastructure for Land Atmosphere Coupling), which connects land models to atmosphere models.
Connect to a development node. (This is necessary because CTSM uses Python to handle its dependencies, and the Subversion module does not have a new enough matching Python module.)
ESMF is compiled with the FOSS (GNU/GCC) toolchain. Make sure you specify the GNU gcc compiler when building your models.
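Connecting and loading the toolchain might look roughly like the following. This is a sketch: the node name comes from the MACH entry in the machine file below, and the module names are inferred from the library paths in that file, so confirm them with module spider on your system.

# Log on to a development node (dev-amd20 matches the MACH name used in the machine file below)
ssh dev-amd20

# Load the GNU/FOSS toolchain libraries referenced in config_machines.xml.
# Module names/versions are inferred from the paths in that file -- verify with "module spider ESMF".
module purge
module load ESMF/8.6.0-foss-2023a
module load netCDF/4.9.2-gompi-2023a
module load PnetCDF/1.12.3-gompi-2023a
# A sufficiently recent Python is also needed for CTSM's dependency scripts;
# the exact module name/version is not recorded in these notes.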
CESM and CTSM use "machine files" to describe the environment they are running in. You will need to set up a config_machines.xml file (and a matching batch-system configuration) containing the appropriate settings for ICER.
For a user $USER, these files should be placed in /mnt/home/$USER/.cime
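For example (a sketch, assuming you have saved the two listings below as config_machines.xml and config_batch.xml in your current directory):

mkdir -p /mnt/home/$USER/.cime
cp config_machines.xml config_batch.xml /mnt/home/$USER/.cime/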
<?xml version="1.0"?><!-- This is a partial cime machine port which should be able to get through the case.build step. Trying to get further than that will cause problems, because various directory paths won't be set up properly. If you are looking at the template file: Variable names prefixed with a dollar sign will be replaced with machine-specific values. A double dollar sign gets replaced with a single dollar sign, so something like $MYVAR refers to the MYVAR cime variable. --><config_machinesversion="2.0"><machineMACH="dev-amd20"><!-- Text description of the machine --><DESC>TemporarybuildinformationforaCTSMbuild</DESC><!-- OS: the operating system of this machine. Passed to cppflags for compiled programs as -DVALUE recognized are LINUX, AIX, Darwin, CNL --><OS>UBUNTU</OS><!-- COMPILERS: compilers supported on this machine, comma seperated list, first is default --><COMPILERS>gnu</COMPILERS><!-- MPILIBS: mpilibs supported on this machine, comma seperated list, first is default, mpi-serial is assumed and not required in this list. It appears that the particular value is not important here: all that matters is that we're self-consistent. (See https://github.com/ESMCI/cime/issues/3537) --><MPILIBS>mpich</MPILIBS><!-- CIME_OUTPUT_ROOT: Base directory for case output, the case/bld and case/run directories are written below here --><CIME_OUTPUT_ROOT>/mnt/scratch/$USER/</CIME_OUTPUT_ROOT><!-- DIN_LOC_ROOT: location of the inputdata data directory inputdata is downloaded automatically on a case by case basis as long as the user has write access to this directory. --><DIN_LOC_ROOT>/mnt/scratch/$USER/</DIN_LOC_ROOT><!-- DIN_LOC_ROOT_CLMFORC: override of DIN_LOC_ROOT specific to CLM forcing data --><DIN_LOC_ROOT_CLMFORC>$DIN_LOC_ROOT/</DIN_LOC_ROOT_CLMFORC><!-- DOUT_S_ROOT: root directory of short term archive files, short term archiving moves model output data out of the run directory, but keeps it on disk--><DOUT_S_ROOT>$CIME_OUTPUT_ROOT/archive/$CASE</DOUT_S_ROOT><!-- GMAKE: gnu compatible make tool, default is 'gmake' --><GMAKE>gmake</GMAKE><!-- GMAKE_J: optional number of threads to pass to the gmake flag --><GMAKE_J>12</GMAKE_J><!-- BATCH_SYSTEM: batch system used on this machine, supported values are: none, cobalt, lsf, pbs, slurm. This is irrelevant for this build-only port. --><BATCH_SYSTEM>slurm</BATCH_SYSTEM><!-- SUPPORTED_BY: contact information for support for this system this field is not used in code --><SUPPORTED_BY>CTSM</SUPPORTED_BY><!-- MAX_TASKS_PER_NODE: maximum number of threads*tasks per shared memory node on this machine, should always be >= MAX_MPITASKS_PER_NODE. This is irrelevant for this build-only port. --><MAX_TASKS_PER_NODE>28</MAX_TASKS_PER_NODE><!-- MAX_MPITASKS_PER_NODE: number of physical PES per shared node on this machine, in practice the MPI tasks per node will not exceed this value. --><MAX_MPITASKS_PER_NODE>28</MAX_MPITASKS_PER_NODE><!-- mpirun: The mpi exec to start a job on this machine, supported values are values listed in MPILIBS above, default and mpi-serial. This is irrelevant for this build-only port. 
--><mpirunmpilib="default"><!-- name of the exectuable used to launch mpi jobs --><executable>srun</executable><!-- arguments to the mpiexec command, the name attribute here is ignored--><arguments><!-- <arg name="anum_tasks"> -ntasks $TOTALPES</arg> <arg name="labelstdout">-prepend-rank</arg>--><argname="num_tasks">--ntasks={{total_tasks}}</arg></arguments></mpirun><!-- module system: allowed module_system type values are: module http://www.tacc.utexas.edu/tacc-projects/mclay/lmod soft http://www.mcs.anl.gov/hs/software/systems/softenv/softenv-intro.html none For this build-only port, we don't specify a module system: instead, we assume that the user has loaded all relevant modules prior to starting the build. (Similarly, we assume that the user has set all relevant environment variables, so don't specify any environment variables here.) --><module_systemtype="none"/><environment_variablescomp_interface="nuopc"><envname="ESMFMKFILE">/opt/software-current/2023.06/x86_64/generic/software/ESMF/8.6.0-foss-2023a/lib/esmf.mk</env><envname="NETCDF_PATH">/opt/software-current/2023.06/x86_64/generic/software/netCDF/4.9.2-gompi-2023a</env><envname="PNETCDF_PATH">/opt/software-current/2023.06/x86_64/generic/software/PnetCDF/1.12.3-gompi-2023a</env></environment_variables></machine></config_machines>
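Because the machine file uses module_system type "none", CIME will not load anything for you, so it can be worth confirming that the paths referenced above actually exist on the node before building (an optional sanity check, not a step from the original notes):

ls /opt/software-current/2023.06/x86_64/generic/software/ESMF/8.6.0-foss-2023a/lib/esmf.mk
ls -d /opt/software-current/2023.06/x86_64/generic/software/netCDF/4.9.2-gompi-2023a
ls -d /opt/software-current/2023.06/x86_64/generic/software/PnetCDF/1.12.3-gompi-2023a

The batch system settings for dev-amd20 go in a companion file, conventionally named config_batch.xml, in the same .cime directory: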
<?xml version="1.0"?><batch_systemtype="slurm"MACH="dev-amd20"><batch_queryper_job_arg="-j">squeue</batch_query><batch_submit>sbatch</batch_submit><batch_cancel>scancel</batch_cancel><batch_directive>#SBATCH</batch_directive><jobid_pattern>(\d+)$</jobid_pattern><depend_string>--dependency=afterok:jobid</depend_string><depend_allow_string>--dependency=afterany:jobid</depend_allow_string><depend_separator>,</depend_separator><walltime_format>%H:%M:%S</walltime_format><!-- replace $USER with your username --><batch_mail_flag>--mail-user=$USER@msu.edu</batch_mail_flag><batch_mail_type_flag>--mail-type=$USER@msu.edu</batch_mail_type_flag><batch_mail_type>end</batch_mail_type><directives><directive>--job-name={{job_id}}</directive><directive>--nodes={{num_nodes}}</directive><directive>--ntasks-per-node={{tasks_per_node}}</directive><directive>--output={{job_id}}</directive><directive>--time=02:00:00</directive><directive>--mem=205G</directive></directives><queues><queuedefault="true"></queue></queues></batch_system>