Ollama (Download and run large language models locally)
Ollama is an application that lets you run large language models
offline.
A list of models is available on ollama.com/library.
Optional dependencies such as CUDA or ROCm, if present, are detected
automatically when the ollama libraries are compiled.
CUDA=ON: build with CUDA support (default is CUDA=OFF).
ROCM=ON: build with ROCm support (default is ROCM=OFF).
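The two build flags above follow the usual SlackBuild convention of
environment variables that default to OFF. A minimal sketch of that
convention (an assumption about how CUDA/ROCM are consumed, not the
actual ollama.SlackBuild script):

```shell
#!/bin/sh
# Sketch of the common SlackBuild pattern for optional feature flags.
# Both default to OFF, as stated in the README; the user overrides them
# on the command line, e.g.:  CUDA=ON ./ollama.SlackBuild
CUDA=${CUDA:-OFF}
ROCM=${ROCM:-OFF}
printf 'CUDA=%s ROCM=%s\n' "$CUDA" "$ROCM"
```

Run as-is it prints the defaults; run as `CUDA=ON ./ollama.SlackBuild`
it would enable the CUDA build path.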
Building the ollama server and client requires network access and
development/google-go-lang.
This requires: google-go-lang
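Before running the SlackBuild, it may help to confirm that the Go
toolchain installed by development/google-go-lang is actually on PATH.
A small sketch (the `go` binary name is standard; the check itself is
an assumption, not part of the SlackBuild):

```shell
#!/bin/sh
# Check for the Go toolchain provided by development/google-go-lang.
if command -v go >/dev/null 2>&1; then
  HAVE_GO=yes
  go version
else
  HAVE_GO=no
  echo "google-go-lang not found; install it before building ollama" >&2
fi
```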
Maintained by: Ruoh-Shoei LIN
Keywords:
ChangeLog: ollama
Homepage:
https://github.com/ollama/ollama
Download SlackBuild:
ollama.tar.gz
ollama.tar.gz.asc (FAQ)
(the SlackBuild does not include the source)
Individual Files:
  README
  ollama.SlackBuild
  ollama.info
  slack-desc
© 2006-2026 SlackBuilds.org Project. All rights reserved.
Slackware® is a registered trademark of
Patrick Volkerding
Linux® is a registered trademark of
Linus Torvalds