Warning
This is a Lab Notebook that describes how to solve a specific problem at a specific time. Please keep this in mind as you read and use the content, and pay close attention to the date, version information, and other details.
Downloading and Installing cryoSPARC (2023-11-22)
Prepare for Installation
- Register and obtain the license. To obtain a License ID for cryoSPARC, go to https://cryosparc.com/download, fill out the form, and submit it. Upon approval, you will receive an email with a license ID number. Store the license ID in a safe place in your home space.
- Log into a development node with a GPU on HPCC. (NOTE: Use only dev-amd20-v100 due to the GPU driver version requirement.)
- Determine where you'd like to install cryoSPARC and create the installation directory. Users should install this software in their $HOME or research space. This document uses ~/CryoSPARC in the home directory as the installation directory.
```bash
mkdir ~/CryoSPARC   # create the installation directory
cd ~/CryoSPARC      # go to the install directory
```
Note
The installation directory can be any directory under the user's home or research space where the user has full access permission. This will be the root directory where all cryoSPARC code and dependencies will be installed.
Download software
- Set the environment variable: run

```bash
export LICENSE_ID="<license_id>"
```

where `<license_id>` is the license ID you received from the registration.

- Download the packages to the install directory:
```bash
cd ~/CryoSPARC
curl -L https://get.cryosparc.com/download/master-latest/$LICENSE_ID -o cryosparc_master.tar.gz
curl -L https://get.cryosparc.com/download/worker-latest/$LICENSE_ID -o cryosparc_worker.tar.gz
```
- Extract the downloaded files:
```bash
tar -xf cryosparc_master.tar.gz cryosparc_master
tar -xf cryosparc_worker.tar.gz cryosparc_worker
```
Note
After extracting the worker package, you may see a second folder called cryosparc2_worker (note the 2) containing a single version file. This folder exists for backward compatibility when upgrading from older versions of cryoSPARC and is not applicable to new installations. You may safely delete the cryosparc2_worker folder.
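For example, assuming the packages were extracted in ~/CryoSPARC as above, the leftover folder can be removed with:

```bash
# remove the backward-compatibility folder left by the worker package
rm -rf ~/CryoSPARC/cryosparc2_worker
```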
Installation of Master
- Load environment

```bash
module purge            # unload previously loaded modules
module load foss/2022b  # load the latest compiler toolchain and dependencies
```
- Master node Installation

```bash
cd <dir_master_package>              # go to the master package directory
./install.sh --license $LICENSE_ID \
             --hostname <master_hostname> \
             --dbpath <db_path> \
             --port <port_number> \
             [--insecure] \
             [--allowroot] \
             [--yes]
```

Example:

```bash
cd ~/CryoSPARC/cryosparc_master      # go to the master package directory
./install.sh --license $LICENSE_ID \
             --hostname localhost \
             --dbpath ~/CryoSPARC/cryoSPARC_database \
             --port 45000
```
- Start cryoSPARC: run

```bash
export CRYOSPARC_FORCE_HOSTNAME=true
export CRYOSPARC_MASTER_HOSTNAME=$HOSTNAME
./bin/cryosparcm start
```
- Create your cryoSPARC user account: run

```bash
./bin/cryosparcm createuser --email "<user email>" \
                            --password "<user password>" \
                            --username "<login username>" \
                            --firstname "<given name>" \
                            --lastname "<surname>"
```
Note
For details on the options used in the above master node installation steps, see https://guide.cryosparc.com/setup-configuration-and-management/how-to-download-install-and-configure/downloading-and-installing-cryosparc#glossary-reference-1.
After completing the above, you are ready to access the user interface.
- Access the user interface: navigate your browser to http://<master_hostname>:<port_number>
If you are physically using the same machine as the master node to interact with the cryoSPARC interface, you can connect to it as:
http://localhost:<port_number>
See https://guide.cryosparc.com/setup-configuration-and-management/how-to-download-install-and-configure/accessing-cryosparc for more information.
Installation of Worker
- Load environment if not done yet

```bash
module purge            # unload previously loaded modules
module load foss/2022b  # load the latest compiler toolchain and dependencies
module load CUDA/12.3.0 # CUDA is needed for the worker
```
- Worker node Installation
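The worker installation follows the same pattern as the master installation. A minimal sketch, assuming the worker package was extracted to ~/CryoSPARC/cryosparc_worker and that the loaded CUDA module sets $CUDA_HOME (the --cudapath option is only required by cryoSPARC versions before v4.4, which do not bundle CUDA):

```bash
cd ~/CryoSPARC/cryosparc_worker      # go to the worker package directory
./install.sh --license $LICENSE_ID \
             --cudapath $CUDA_HOME   # omit --cudapath for cryoSPARC v4.4 and later
```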
Note
For the meaning of the options of the worker node installation script, see https://guide.cryosparc.com/setup-configuration-and-management/how-to-download-install-and-configure/downloading-and-installing-cryosparc#worker-installation-glossary-reference.
Note
Once the master and worker are successfully installed on the dev node, stop the current cryoSPARC session using the command `~/CryoSPARC/cryosparc_master/bin/cryosparcm stop`.
Start an interactive session of CryoSPARC using OnDemand
- Request an interactive desktop with the number of GPUs equal to the number of workers. Request sufficient resources (CPU, memory, wall time, etc.). Open a terminal on the desktop.
- Launch the cryoSPARC master: for the user's convenience, create a file named "cryosparc.sh" containing the commands for setting up the environment (shown below). Before launching CryoSPARC, run the command "source cryosparc.sh" first.

```bash
#!/bin/bash
# set up CryoSPARC environment
#
# Load modules
module purge
module load foss/2022b
# set PATH
export PATH=~/CryoSPARC/cryosparc_master/bin:~/CryoSPARC/cryosparc_worker/bin:$PATH
export CRYOSPARC_FORCE_HOSTNAME=true
export CRYOSPARC_MASTER_HOSTNAME=$HOSTNAME
```
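A typical launch on the interactive desktop could then look like the following sketch (assuming cryosparc.sh was saved in the home directory):

```bash
source ~/cryosparc.sh   # load modules and set PATH and hostname variables
cryosparcm start        # start the cryoSPARC master processes
cryosparcm status       # optional: check that all processes are running
```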
Connect a Cluster to CryoSPARC
Once the cryosparc_worker package is installed, the cluster must be registered with the master process. This requires a template for the job submission commands and scripts that the master process will use to submit jobs to the cluster scheduler. To register the cluster, provide cryoSPARC with the following two files and call the cryosparcm cluster connect command:

- cluster_info.json
- cluster_script.sh
The first file (cluster_info.json) contains template strings used to construct cluster commands (e.g., qsub, qstat, qdel etc., or their equivalents for your system). The second file (cluster_script.sh) contains a template string to construct appropriate cluster submission scripts for your system. The jinja2 template engine is used to generate cluster submission/monitoring commands as well as submission scripts for each job.
- Create the files. The following fields are required to be defined as template strings in the configuration of a cluster. Examples for SLURM are given; use any command required for your particular cluster scheduler. Note that parameters listed as "optional" can be omitted or included with their value as null.
cluster_info.json:
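A minimal sketch using the field names documented in the cryoSPARC guide; all values below are placeholders with SLURM-style commands and must be adapted for your cluster:

```json
{
    "name"            : "<cluster_lane_name>",
    "worker_bin_path" : "<install_dir>/cryosparc_worker/bin/cryosparcw",
    "cache_path"      : "<fast_scratch_path_on_compute_nodes>",
    "send_cmd_tpl"    : "{{ command }}",
    "qsub_cmd_tpl"    : "sbatch {{ script_path_abs }}",
    "qstat_cmd_tpl"   : "squeue -j {{ cluster_job_id }}",
    "qdel_cmd_tpl"    : "scancel {{ cluster_job_id }}",
    "qinfo_cmd_tpl"   : "sinfo"
}
```

The qsub/qstat/qdel/qinfo templates are the commands cryoSPARC uses to submit, monitor, cancel, and inspect jobs; jinja2 variables such as {{ script_path_abs }} and {{ cluster_job_id }} are filled in by cryoSPARC at run time. In cluster_script.sh, variables such as {{ num_cpu }}, {{ num_gpu }}, {{ ram_gb }}, {{ job_dir_abs }}, and {{ run_cmd }} are available for building the submission script.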
Note
The cryoSPARC scheduler does not assume control over GPU allocation when spawning jobs on a cluster. The number of GPUs required is provided as a template variable. Either your submission script or your cluster scheduler is responsible for assigning GPU device indices to each job spawned based on the provided variable. The cryoSPARC worker processes that use one or more GPUs on a cluster simply use device 0, then 1, then 2, etc. Therefore, the simplest way to correctly allocate GPUs is to set the CUDA_VISIBLE_DEVICES environment variable in your cluster scheduler or submission script. Then device 0 is always the first GPU that a running job must use.
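If your scheduler does not already isolate GPUs per job, the submission script can restrict device visibility explicitly. A sketch (with SLURM's --gres=gpu:N allocation, CUDA_VISIBLE_DEVICES is normally set for you and this step is unnecessary):

```bash
# expose only the first {{ num_gpu }} devices to the job so that cryoSPARC's
# device indices 0, 1, ... map onto the GPUs assigned to this job
export CUDA_VISIBLE_DEVICES=$(seq -s, 0 $(( {{ num_gpu }} - 1 )))
```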
- Load the scripts and register the integration. To create or set a configuration for a cluster in cryoSPARC, use the following commands.
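A minimal sketch; the directory name is only an example, and the two files must be in the directory from which the command is run:

```bash
cd ~/CryoSPARC/cluster_config        # example directory holding the two files
cryosparcm cluster connect           # register (or update) the cluster lane
cryosparcm cluster dump <lane_name>  # optional: verify the stored configuration
```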
Note
The command cryosparcm cluster connect attempts to read cluster_info.json and cluster_script.sh from the current working directory.
Examples of cluster_info.json and cluster_script.sh scripts for SLURM on HPCC:
cluster_info.json
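A sketch of what this file could look like on HPCC, assuming the installation layout used earlier in this document; the lane name, cache path, and home-directory path (/mnt/home/<username>) are examples and must be adapted:

```json
{
    "name"            : "hpcc-slurm",
    "worker_bin_path" : "/mnt/home/<username>/CryoSPARC/cryosparc_worker/bin/cryosparcw",
    "cache_path"      : "/tmp",
    "send_cmd_tpl"    : "{{ command }}",
    "qsub_cmd_tpl"    : "sbatch {{ script_path_abs }}",
    "qstat_cmd_tpl"   : "squeue -j {{ cluster_job_id }}",
    "qdel_cmd_tpl"    : "scancel {{ cluster_job_id }}",
    "qinfo_cmd_tpl"   : "sinfo"
}
```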
cluster_script.sh
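A sketch for SLURM on HPCC, assuming the same toolchain modules used for the worker installation; the wall time and the exact #SBATCH options are examples and must be adapted to your jobs:

```bash
#!/usr/bin/env bash
#### cryoSPARC SLURM submission script template
#### the {{ ... }} fields are jinja2 variables filled in by cryoSPARC
#SBATCH --job-name=cryosparc_{{ project_uid }}_{{ job_uid }}
#SBATCH --ntasks={{ num_cpu }}
#SBATCH --gres=gpu:{{ num_gpu }}
#SBATCH --mem={{ (ram_gb*1000)|int }}M
#SBATCH --time=24:00:00
#SBATCH --output={{ job_dir_abs }}/slurm-%j.out
#SBATCH --error={{ job_dir_abs }}/slurm-%j.err

module purge
module load foss/2022b CUDA/12.3.0   # same modules used for the worker installation

{{ run_cmd }}
```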
Q: Where should these two files be stored?
A: In the working directory from which you run cryosparcm cluster connect.