Ollama Installation on Ubuntu Server

Install Ollama

curl -fsSL https://ollama.com/install.sh | sh
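Before moving on, you can confirm the install succeeded; on Ubuntu the install script should also register Ollama as a systemd service, so both of these checks apply:

ollama --version
systemctl status ollama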

Install a Model


Which model you can run depends on your system resources: the current version is Llama 3.3, but your choice ultimately comes down to available storage, processing power, and RAM. For this tutorial I will install Llama 3.2, since it is also published in smaller sizes.

ollama run llama3.2
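If your server is short on RAM, one of the smaller Llama 3.2 variants is a reasonable choice; for example, the 1B-parameter tag (assuming it is still published under this name in the Ollama library):

ollama run llama3.2:1b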


Once the model is downloaded, you can start using your self-hosted AI directly from the CLI.
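For example, you can list your installed models and send a one-off prompt straight from the shell:

ollama list
ollama run llama3.2 "Summarize what Ollama does in one sentence."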

After installation, make sure Ollama is actually running: open a browser and visit the IP address of the server where Ollama is installed, using the port below.

http://10.11.0.12:11434/
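You can run the same check from the server itself with curl; the root endpoint simply replies "Ollama is running" (replace the IP with your own):

curl http://10.11.0.12:11434/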

Install and Get Started with Open WebUI

Now you need to install Docker and connect Open WebUI to the LLM.
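If Docker is not installed yet, one quick way to get it on Ubuntu is Docker's official convenience script (fine for a lab setup; for production you may prefer installing from the apt repository):

curl -fsSL https://get.docker.com | sh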

If the LLM is installed locally on the same machine, use the command below:

docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
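The --network=host flag makes the container share the server's network stack, which is how Open WebUI reaches Ollama on 127.0.0.1; it also means the UI is served on the host's own port 8080, so browse to http://10.11.0.12:8080 (substitute your server's IP). You can follow the startup logs with:

docker logs -f open-webui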

If Ollama is running on a different system than the Docker container, use the command below:

docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://ollama.syncbricks.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
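Here -p 3000:8080 publishes the UI on port 3000 of the Docker host, so the interface will be at http://10.11.0.12:3000/ (again, your own IP). The OLLAMA_BASE_URL value points at a remote Ollama instance; replace https://ollama.syncbricks.com with the address of your own Ollama server. You can confirm the endpoint is reachable before starting the container:

curl https://ollama.syncbricks.com/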

Now you can start using Ollama with the Llama 3.2 model through Open WebUI.
