Ollama
Get up and running with large language models.
Install via shell script
curl -fsSL https://ollama.com/install.sh | sh
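The script installs the ollama CLI and, on Linux, usually registers a systemd service; a quick way to confirm the install is to check the version:

# Verify the CLI is on PATH after installation
ollama --version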
Docker registry mirror, set in /etc/docker/daemon.json
https://docker.registry.cyou
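One way to apply the mirror is to list it under registry-mirrors in /etc/docker/daemon.json and restart the daemon; a minimal sketch, assuming a systemd-managed Docker install:

# Overwrite the daemon config with the mirror, then restart Docker to pick it up
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://docker.registry.cyou"]
}
EOF
sudo systemctl restart docker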
Install with Docker (CPU only)
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
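Once the container is up, Ollama listens on port 11434; hitting the root endpoint should return "Ollama is running" (localhost assumed here):

# Quick health check against the mapped port
curl http://localhost:11434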
Run a model locally
docker exec -it ollama ollama run llama3
ollama pull llama3.1
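To see which models are already pulled inside the container, or to run a one-off prompt without entering the interactive session, something like this should work (container name ollama as above):

# List locally available models
docker exec -it ollama ollama list
# Run a single prompt non-interactively
docker exec -it ollama ollama run llama3 "Explain what Ollama does in one sentence."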
REST API
Ollama has a REST API for running and managing models.
Generate a response
curl http://192.168.31.4:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?"
}'
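By default the response is streamed as a series of JSON objects; the API also accepts a stream parameter, and setting it to false returns one complete JSON object instead (same host and model assumed):

# Request a single, non-streamed JSON response
curl http://192.168.31.4:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'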
Chat with a model
curl http://192.168.31.4:11434/api/chat -d '{
  "model": "llama3",
  "messages": [{ "role": "user", "content": "Why is the sky blue?" }]
}'
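The chat endpoint is stateless, so conversation history is passed back in the messages array on every request; a minimal multi-turn sketch (host, model, and the example turns are placeholders):

# Include earlier turns so the model keeps the conversation context
curl http://192.168.31.4:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    { "role": "user", "content": "Why is the sky blue?" },
    { "role": "assistant", "content": "Mainly because of Rayleigh scattering." },
    { "role": "user", "content": "Does the same effect explain red sunsets?" }
  ]
}'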
See the API documentation for all endpoints.
Open WebUI (Formerly Ollama WebUI)
- If Ollama is on your computer, use the first command below.
- If Ollama is on a different server, use the second command below.
- For CPU only: if you're not using a GPU, use the third command below instead.
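Based on the Open WebUI documentation, minimal sketches of the three commands look roughly like this (the example OLLAMA_BASE_URL and the image tags are assumptions to adapt to your setup):

# 1) Ollama running on the same machine as Open WebUI
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
# 2) Ollama running on a different server (point OLLAMA_BASE_URL at it)
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=http://192.168.31.4:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
# 3) CPU-only image with Ollama bundled in the same container
docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama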