SlackBuilds Repository


Ollama (Download and run large language models locally)

Ollama is an application which lets you run large language models
offline.

A list of models is available at ollama.com/library.
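Once the package is installed and the ollama server is running, models from the library are pulled and run from the command line. A minimal session sketch (the model name below is only an example; pick any model from the library):

```shell
# Requires the ollama server to be running, e.g. via: ollama serve
# Pull a model from ollama.com/library (example model name):
ollama pull llama3.2

# Start a one-shot prompt against the pulled model:
ollama run llama3.2 "Why is the sky blue?"
```

Pulling a model downloads its weights on first use, so network access and sufficient disk space are needed.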

Optional dependencies such as CUDA or ROCm, if present, are detected
automatically when the ollama libraries are compiled.

CUDA=ON: build with CUDA support (default: CUDA=OFF).

ROCM=ON: build with ROCm support (default: ROCM=OFF).

Building the ollama server and client requires network access and
development/google-go-lang.
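Following the usual SlackBuilds.org convention, the CUDA=ON and ROCM=ON options are passed as environment variables when running the SlackBuild. A sketch, assuming the standard SBo layout and default OUTPUT/TAG values (the exact package filename may differ on your system):

```shell
# As root, extract the SlackBuild and enter its directory:
tar xvf ollama.tar.gz
cd ollama

# The source tarball (ollama-0.20.5.tar.gz) must already be
# downloaded into this directory.

# Build with CUDA support enabled (omit CUDA=ON for the default build):
CUDA=ON ./ollama.SlackBuild

# Install the resulting package; the path assumes the default
# OUTPUT=/tmp and TAG=_SBo settings:
installpkg /tmp/ollama-0.20.5-x86_64-1_SBo.tgz
```

ROCM=ON is passed the same way; both can be combined with other standard SlackBuild variables such as OUTPUT.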

This requires: google-go-lang

Maintained by: Ruoh-Shoei LIN
Keywords:
ChangeLog: ollama

Homepage:
https://github.com/ollama/ollama

Source Downloads (64bit):
ollama-0.20.5.tar.gz (0d9238ea86fef9b8d95b91d8300fc5e3)

Download SlackBuild:
ollama.tar.gz
ollama.tar.gz.asc

(the SlackBuild does not include the source)

Individual Files:
README
ollama.SlackBuild
ollama.info
slack-desc

Validated for Slackware 15.0

See our HOWTO for instructions on how to use the contents of this repository.

Access to the repository is available via:
ftp git cgit http rsync

© 2006-2026 SlackBuilds.org Project. All rights reserved.
Slackware® is a registered trademark of Patrick Volkerding
Linux® is a registered trademark of Linus Torvalds