llama.cpp
Inference of Meta's LLaMA model (and others) in pure C/C++
https://github.com/ggerganov/llama.cpp
Available modules
The overview below shows which llama.cpp installations are available per target architecture in the HPCC module system, ordered by software version (newest to oldest).
To start using llama.cpp, load one of these modules using a `module load` command like:
module load llama.cpp/b4595-foss-2023a-CUDA-12.1.1
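For example, a minimal session for finding and loading a version might look like the following. This is a sketch assuming an Lmod-based module system, which is common on HPC clusters; `module spider` is Lmod-specific, while `module avail`, `module load`, and `module list` also exist in classic Environment Modules.

```shell
# List the llama.cpp modules visible from the current node
module avail llama.cpp

# (Lmod only) Show how to make a specific version loadable,
# including any prerequisite modules
module spider llama.cpp/b4595-foss-2023a-CUDA-12.1.1

# Load the module and confirm it is active
module load llama.cpp/b4595-foss-2023a-CUDA-12.1.1
module list
```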
(This data was automatically generated on Thu, 22 Jan 2026 at 12:03:27 EST)
| | gateway | generic | zen2 | zen3 | zen4 | skylake_avx512 |
|---|---|---|---|---|---|---|
| | Gateway nodes | everywhere (except Grace nodes) | amd20 | amd22 | amd24 | intel18,amd20-v100,amd21,intel21 |
| llama.cpp/b4595-foss-2023a-CUDA-12.1.1 | - | x | - | - | - | - |
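Once the module is loaded, the llama.cpp executables are on your `PATH`. The sketch below shows a short generation run on a GPU node; the model path is a hypothetical placeholder, `llama-cli` is the main inference binary in llama.cpp builds of this era, and `-ngl` offloads model layers to the GPU in a CUDA-enabled build.

```shell
# Generate up to 128 tokens from a prompt using a local GGUF model.
# The model file below is a placeholder; point it at a model file
# you actually have on the cluster.
llama-cli \
    -m ~/models/llama-3-8b-instruct.Q4_K_M.gguf \
    -p "Explain what an HPC module system is." \
    -n 128 \
    -ngl 99    # offload all layers to the GPU (CUDA build)
```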