
WebUI Solution: Chat Privately With Your Local AI

Set up OpenChat with Ollama for a fully private local AI experience.


Ever wish you could host a private ChatGPT alternative on your own machine? OpenChat (Open WebUI) provides an easy, fully offline setup—especially if you’ve already installed Ollama for local AI inference.

Prerequisites

  • Ollama

If you need help installing Ollama, we covered the process in our post Privately Chat with DeepSeek R1 on Windows in 5 Minutes.

What is OpenChat?

OpenChat is a self-hosted AI platform that integrates with Ollama to offer a complete offline chat experience, including features like local RAG, voice/video chat, and Markdown—all while keeping your data on your PC.

How To Install OpenChat

Step 1: Install Docker

  1. Visit https://www.docker.com/products/docker-desktop
  2. Download and install Docker Desktop for your OS.
  3. Launch Docker Desktop and ensure it’s running.

Docker Desktop Running On Windows

What is Docker?

Docker helps you package and run apps in isolated containers so it’s easier to handle dependencies and environments.
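Before moving on, it's worth confirming Docker is actually up. A quick sanity check from any terminal (these are standard Docker CLI commands, available once Docker Desktop is running):

```shell
# Print the installed Docker client version
docker --version

# Pull and run a tiny test container; if this prints its welcome
# message, the Docker daemon is running and ready for Step 2
docker run --rm hello-world
```

If `docker run hello-world` fails, make sure Docker Desktop has finished starting before retrying.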

Step 2: Create a Docker Compose Config File

  1. Create a folder and add a file named docker-compose.yml.
  2. Copy this into the docker-compose.yml file:
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:ollama
    container_name: open-webui
    restart: always
    ports:
      - '3000:8080'
    volumes:
      - open-webui:/app/backend/data

volumes:
  open-webui:
    driver: local

What is Docker Compose?

Docker Compose allows you to define and run multiple containers from a single YAML file. It simplifies spinning everything up with one command.
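For comparison, the compose file above is roughly equivalent to this single `docker run` command; Compose just saves you from retyping it every time (flags mirror the `ports`, `volumes`, `restart`, and `container_name` settings from Step 2):

```shell
# Approximate docker run equivalent of the docker-compose.yml above
docker run -d \
  --name open-webui \
  --restart always \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:ollama
```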

Step 3: Use Docker Compose to Handle the Rest

  1. Open your terminal (CMD, PowerShell, Bash, etc).
  2. Navigate to your folder:
    cd path\to\your\docker-compose-folder
  3. Then run the docker-compose file:
    docker-compose up
    This pulls the image and launches the OpenChat server locally.
  4. Visit http://localhost:3000 and start chatting.
  5. To stop and remove the container, run:
    docker-compose down

This setup lets you quickly deploy a local GPT-like environment using Docker, ensuring your conversations never leave your machine.
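A few optional variations on the commands above that you may find handy (standard `docker-compose` subcommands, run from the same folder as your docker-compose.yml):

```shell
# Run in the background (detached) instead of tying up the terminal
docker-compose up -d

# Follow the OpenChat container's logs while it starts
docker-compose logs -f open-webui

# Stop the container without removing it (resume later with "up -d")
docker-compose stop
```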

Try Downloading a new model

In this example, we will install DeepSeek R1. There are two ways to download a model:

The Fastest way

Note: This method is currently broken, but it may still be worth a try.

  1. Go to the Model Selector in Open WebUI.
  2. Enter “deepseek-r1:7b” (or another variant).
  3. If the model isn’t found, OpenChat will prompt you to download it via Ollama.
  4. Once downloaded, select it in the Model Selector.

Download a model the fastest way

The Admin way

  1. Click on your profile.
  2. Click on “Admin Panel” button.
  3. Click on “Settings” tab.
  4. Click on the “Download Icon” button.

Download a model Part 1

  5. Under “Pull a model from Ollama.com”, enter “deepseek-r1:7b” (or another variant).
  6. Click on the “Download Icon” button.
  7. Wait for the download to complete.

Download a model Part 2
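There is also a third option from the terminal. Because the `:ollama` image used in the compose file bundles Ollama inside the container, you can pull models with `docker exec` (this assumes the `open-webui` container name from Step 2):

```shell
# Pull DeepSeek R1 using the Ollama CLI inside the running container
docker exec -it open-webui ollama pull deepseek-r1:7b

# List the models Ollama has downloaded so far
docker exec -it open-webui ollama list
```

Once the pull finishes, the model should appear in OpenChat's Model Selector.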

Enjoy Private AI

With OpenChat, you get a powerful, offline LLM solution on your own hardware—perfect for private GPT-like usage or secure on-premise AI projects. Check out the docs for more tips!
