Ollama’s GitHub repository (to check for updates)
Ollama’s website (to check for models)
Open WebUI (to check for Docker commands)
Install locally
Prerequisites
I’m running Open WebUI through Docker.
First of all, check that Docker Desktop is open; it may ask you to update WSL. Then check that Docker can run containers:
docker run hello-world
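If hello-world doesn’t run, a couple of quick sanity checks help (assuming Windows with WSL 2, since Docker Desktop is what prompts for the WSL update):

docker --version   # confirm the Docker CLI is installed
docker info        # confirm the daemon is running and reachable
wsl --update       # on Windows, update WSL if Docker Desktop asked for it (run in PowerShell)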
Ollama
ollama ls # see local models
ollama run gpt-oss # run model
ollama rm gemma3 # delete model
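A few other commands I find handy (the model names here are just examples):

ollama pull gemma3   # download a model without starting a chat
ollama show gpt-oss  # inspect a model's details (parameters, template, license)
ollama ps            # list models currently loaded in memory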
Inside a model
/? # see help
/save <model>  # save the session as a 'blueprint' you can load again later to give the LLM some context
/load <model>  # load a previously saved model
/clear         # clear the session context
/bye           # exit (or Ctrl+D)
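Besides /save, a custom model can also be built from a Modelfile. This is a minimal sketch of how a model like the ds-custom one used below could be created (the base model and system prompt are placeholders):

# Modelfile (base model and system prompt are placeholders)
FROM gemma3
SYSTEM "You are a senior code reviewer. Be concise and point to concrete improvements."

# build the custom model from the Modelfile in the current folder
ollama create ds-custom -f Modelfile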
Give context to an LLM (local folders)
# you need to be in the context folder
# this prints each tracked file's name and contents, then pipes them to the model
git ls-files | xargs -I {} sh -c 'printf "\n=== %s ===\n" "{}"; cat "{}"' | ollama run ds-custom 'how would you improve this?'
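If the folder isn’t a git repository, the same idea works with find (the *.py pattern is just an example):

find . -type f -name '*.py' | xargs -I {} sh -c 'printf "\n=== %s ===\n" "{}"; cat "{}"' | ollama run ds-custom 'how would you improve this?'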
Open WebUI
Now let’s pull the Open WebUI container. I use this command because I have an NVIDIA GPU that I may want to use:
docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
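Once it’s running you can check the container and follow its logs:

docker ps                  # the open-webui container should show as Up
docker logs -f open-webui  # follow the startup logs (Ctrl+C to stop following)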
I’ve already installed Ollama locally, but there are also Open WebUI images that come bundled with Ollama.
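If you’d rather not install Ollama separately, this is a sketch of the bundled variant with GPU support, based on the :ollama image tag (check the Open WebUI README for the exact current command):

docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama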
Now open localhost:3000 and you can interact with your models through a more visual interface.
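If Open WebUI doesn’t list any models, a quick check is whether the local Ollama API is up, since the --add-host flag above is what lets the container reach it at host.docker.internal:11434:

curl http://localhost:11434/api/tags  # should return a JSON list of your local models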