Ollama follows a similar pattern to Docker.
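For example, the day-to-day CLI verbs line up with Docker's (llama3.2 below is just a placeholder model name):
ollama pull llama3.2   # like docker pull
ollama run llama3.2    # like docker run
ollama list            # like docker images
ollama ps              # like docker ps (shows currently loaded models)
ollama rm llama3.2     # like docker rm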
- Download
curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
sudo tar -C /usr -xzf ollama-linux-amd64.tgz
ollama serve
# or serve on a different IP address/port
OLLAMA_HOST=192.168.29.13:11435 ollama serve
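To confirm the server is listening on the chosen address, query the API (host:port below matches the OLLAMA_HOST example above; /api/tags lists the locally pulled models):
curl http://192.168.29.13:11435/api/tags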
ollama -v
# check the graphics card
nvidia-smi
# default API endpoint: http://127.0.0.1:11434/
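A quick smoke test against that endpoint (assumes a model such as llama3.2 has already been pulled):
curl http://127.0.0.1:11434/api/generate -d '{"model": "llama3.2", "prompt": "Why is the sky blue?", "stream": false}'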
### Podman/Docker - https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image
podman run -d --gpus=all --device nvidia.com/gpu=all --security-opt=label=disable -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
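Once the container is up, run models through podman exec (the container name assumes the --name ollama flag above):
podman exec -it ollama ollama run llama3.2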
- Run
user@home:~$ ollama list
NAME               ID              SIZE      MODIFIED
gemma2:latest      ff02c3702f32    5.4 GB    11 hours ago
llama3.2:latest    a80c4f17acd5    2.0 GB    12 hours ago
user@home:~$ ollama run llama3.2
>>> hola
Hola! ¿En qué puedo ayudarte hoy?
>>> hey
What's up? Want to chat about something in particular or just shoot the breeze?
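Type /bye at the >>> prompt (or press Ctrl+D) to exit the session.

To put a web UI in front of the server, Open WebUI runs as a second container pointed at the Ollama API; the OLLAMA_BASE_URL below assumes Ollama is reachable at 192.168.29.13:11434, so adjust it to wherever your server actually listens: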
podman run -d -p 3000:8080 --gpus all --device nvidia.com/gpu=all --security-opt=label=disable -e OLLAMA_BASE_URL=http://192.168.29.13:11434 -e WEBUI_AUTH=False -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main
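Open WebUI is then reachable at http://localhost:3000/ (host port 3000 maps to the container's 8080), and WEBUI_AUTH=False disables the login screen.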
Linux install docs: https://github.com/ollama/ollama/blob/main/docs/linux.md
For Podman GPU access via CDI (NVIDIA Container Toolkit): https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html
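Generating the CDI spec is a one-time setup step on the host (commands per the toolkit docs above; /etc/cdi/nvidia.yaml is the usual output path):
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
# verify the generated device names
nvidia-ctk cdi list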