When trying to access the ollama container from another (node) service in my docker compose setup, I get the following error:
ResponseError: model 'llama3' not found, try pulling it first
I want the container setup to be fully automatic; I don't want to connect to the containers and pull the models by hand.
Is there a way to load the model of my choice automatically when the Ollama Docker container is created?
Here is the relevant part of my docker-compose.yml:
```yaml
ollama:
  image: ollama/ollama:latest
  ports:
    - 11434:11434
  volumes:
    - ./ollama/ollama:/root/.ollama
  container_name: ollama
  pull_policy: always
  tty: true
  restart: always
```
2 Answers
Use a custom entrypoint script to download the model when the container is launched. The model is persisted in the volume mount, so subsequent starts will be quick.
Key changes: an entrypoint override for the ollama service in docker-compose.yml, and a new entrypoint.sh script mounted into the container, as sketched below.
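Assuming entrypoint.sh sits next to docker-compose.yml and is mounted to /entrypoint.sh inside the container (both paths are choices, not requirements), the service could look like this:

```yaml
ollama:
  image: ollama/ollama:latest
  ports:
    - 11434:11434
  volumes:
    - ./ollama/ollama:/root/.ollama
    # Mount the startup script into the container (path is an assumption).
    - ./entrypoint.sh:/entrypoint.sh
  container_name: ollama
  pull_policy: always
  tty: true
  restart: always
  # Override the default entrypoint so the script runs on every start.
  entrypoint: ["/usr/bin/bash", "/entrypoint.sh"]
```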
This is the content of entrypoint.sh:
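A minimal sketch of that script: it starts the Ollama server in the background, waits briefly, pulls the model, then keeps the container alive by waiting on the server process. The fixed 5-second sleep is a crude readiness wait and an assumption; polling the API until it responds would be more robust.

```bash
#!/bin/bash

# Start the Ollama server in the background.
ollama serve &
# Remember its PID so we can wait on it later.
pid=$!

# Crude wait for the server to come up (assumption: 5 s is enough).
sleep 5

echo "Pulling llama3 model..."
ollama pull llama3
echo "Done."

# Block on the server process so the container stays running.
wait $pid
```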
This is what worked for me without needing an additional mount for the entrypoint script.
I then used the script provided by @datawookie, saved as wait_for_ollama.sh in the same directory as the Dockerfile.
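For context, a minimal sketch of what such a Dockerfile might look like; the COPY destination and the entrypoint form are assumptions based on the answer's description:

```dockerfile
FROM ollama/ollama:latest

# Bake the startup script into the image so no volume mount is needed.
COPY wait_for_ollama.sh /wait_for_ollama.sh
RUN chmod +x /wait_for_ollama.sh

ENTRYPOINT ["/bin/bash", "/wait_for_ollama.sh"]
```

The compose service would then use build: . instead of image: ollama/ollama:latest, so the script ships inside the image itself.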