
Warning

This is a Lab Notebook entry that describes how to solve a specific problem at a specific time. Please keep this in mind as you read and use the content, and pay close attention to the date, version information, and other details.

How to Use Ollama

Ollama is a command-line tool for managing and running large language models locally. Below is a guide to the various commands available in Ollama.

Quick Start

ollama

Running ollama with no arguments will show you the available commands.

Note: If this is the first time you are running Ollama, you will most likely want to run this powertool:

ollama_link_models

This will symbolically link the shared models into your home directory. They will not take up any of your disk space, but you will still have access to them. Learn more about our generative AI common folder.
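You can confirm the linking worked by inspecting the model directory. The path below assumes Ollama's default model location under your home directory; your site may configure it differently.

```shell
# List the linked model files; symbolic links are shown with "->"
# pointing at the shared copy, and consume essentially no quota.
ls -l "$HOME/.ollama/models"

# readlink resolves where a given link points:
readlink -f "$HOME/.ollama/models"
```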

ollama serve

This will start the Ollama server. It is recommended to run Ollama only when you have access to a GPU.

(If you want to start the server and keep using the same terminal, you can run ollama serve & to run it in the background.)
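The background-server pattern can be sketched as follows. This is a minimal sketch: the log path and the assumption that ollama is on your PATH are illustrative, not site requirements.

```shell
# Start the Ollama server in the background, keeping its output in a
# log file, and remember its process ID so it can be stopped later.
ollama serve > "$HOME/ollama.log" 2>&1 &
OLLAMA_PID=$!

# ... run ollama list / ollama run ... commands here ...

# Shut the server down when you are finished with the GPU node.
kill "$OLLAMA_PID"
```

Stopping the server when you are done frees the GPU for other users.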

Next, you can use

ollama list

to see the available models. More info about models on the HPCC here.

Find the model you want to use and run it with

ollama run [model]
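Besides the interactive session, ollama run also accepts a one-shot prompt on the command line. In the sketch below, llama3 is an example model name; substitute any name shown by ollama list.

```shell
# Interactive session (type /bye to exit the prompt):
ollama run llama3

# One-shot, non-interactive prompt; the reply goes to stdout, so it
# can be redirected or piped like any other command output:
ollama run llama3 "Explain symbolic links in one sentence." > answer.txt
```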

Available Commands

  • serve: Start the Ollama server.
  • create: Create a model from a Modelfile.
  • show: Show information for a model.
  • run: Run a model.
  • pull: Pull a model from a registry.
  • push: Push a model to a registry.
  • list: List models.
  • ps: List running models.
  • cp: Copy a model.
  • rm: Remove a model.
  • help: Help about any command.
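Several of the commands above work together when customizing a model. The sketch below uses Ollama's Modelfile format; llama3 and my-assistant are example names, not models guaranteed to exist on the HPCC.

```shell
# Write a minimal Modelfile that customizes a base model.
cat > Modelfile <<'EOF'
FROM llama3
PARAMETER temperature 0.7
SYSTEM "You are a concise assistant."
EOF

# Build the custom model, try it out, and clean up afterwards.
ollama create my-assistant -f Modelfile
ollama run my-assistant
ollama rm my-assistant    # remove it when no longer needed
```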