Doc Reviewer delegates all evaluation work to a large language model. Before you can run your first evaluation, you need to connect at least one model. You manage models in Settings → Models, where you can add, edit, activate, and delete model configurations. Only the active model is used for evaluations, but you can keep multiple configurations and switch between them at any time.
Supported providers
| Provider | Base URL | API key required |
|---|---|---|
| OpenAI | https://api.openai.com/v1 | Yes |
| Anthropic | https://api.anthropic.com/v1 | Yes |
| Ollama (local) | http://localhost:11434/v1 | No |
| Any OpenAI-compatible API | Your custom base URL | Depends on provider |
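The base URLs all end in `/v1` because OpenAI-compatible clients append a route path such as `/chat/completions` to them. A minimal sketch of that composition (the helper name is illustrative, not part of Doc Reviewer):

```python
def chat_completions_url(base_url: str) -> str:
    """Derive the full chat-completions endpoint from a configured base URL.

    /chat/completions is the standard OpenAI-compatible route; trailing
    slashes are normalized so ".../v1" and ".../v1/" behave the same.
    """
    return base_url.rstrip("/") + "/chat/completions"

print(chat_completions_url("https://api.openai.com/v1"))
# → https://api.openai.com/v1/chat/completions
print(chat_completions_url("http://localhost:11434/v1"))
# → http://localhost:11434/v1/chat/completions
```

This is also why a custom provider's base URL should point at the API root (ending in `/v1` for most providers), not at a specific endpoint.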
Add a model
Fill in the model details
Complete all required fields:
- Model ID — the identifier the API expects, for example `gpt-4o` or `claude-3-5-sonnet-20241022`. This must match the model name used by your provider’s API exactly.
- Display name — a human-readable label shown in the UI, for example `GPT-4o`.
- Provider — select the provider or enter a custom value.
- Base URL — the API endpoint. Use the values from the table above or your provider’s documentation.
- API key — paste your secret key. Leave blank for providers that do not require one (such as Ollama).
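The Model ID matters because it is sent verbatim in the request body, and a mismatch produces an API error from the provider. A sketch of the payload an OpenAI-compatible chat request carries (the helper name is illustrative, not Doc Reviewer's internals):

```python
import json


def build_chat_request(model_id: str, prompt: str) -> str:
    # The "model" field must match the provider's model name exactly,
    # e.g. "gpt-4o" for OpenAI or "llama3" for Ollama.
    payload = {
        "model": model_id,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)


print(build_chat_request("gpt-4o", "Review this paragraph."))
```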
Only one model can be active at a time. Switching the active model takes effect immediately — the next evaluation will use the newly activated model.
Switch the active model
To switch to a different model, go to Settings → Models and click Activate next to the model you want to use. The previously active model is deactivated automatically.
API key security
API keys are stored in a local SQLite database on your machine. They are never sent to Doc Reviewer’s servers — there are none. The only outbound traffic that includes your API key is the direct API call from your machine to your LLM provider.
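For OpenAI-style endpoints, that direct call carries the key as an `Authorization` header; a sketch of how such headers are typically built (exact header names vary by provider — Anthropic's native API, for instance, uses `x-api-key` instead):

```python
def auth_headers(api_key):
    """Build request headers for an OpenAI-style API call.

    The key is sent only on the direct request to the provider;
    an empty/None key (e.g. Ollama) sends no auth header at all.
    """
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = "Bearer " + api_key
    return headers
```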
Use a local model with Ollama
Ollama lets you run open-weight models locally, without any API key or external service. This is useful when you need to keep your documents on-premises or want to avoid per-token costs.
Install Ollama
Download and install Ollama from ollama.com. After installation, Ollama runs as a background service and listens on port 11434.
Pull a model
Open a terminal and run a model to download it, for example `ollama run llama3`. You can use any model available in the Ollama library. The `ollama run` command downloads the model if it is not already present and starts an interactive session — you can close that session after the download completes.
Add the model in Doc Reviewer
Go to Settings → Models → Add model and fill in:
- Model ID — the Ollama model name, for example `llama3` or `mistral`.
- Display name — any label you prefer.
- Base URL — `http://localhost:11434/v1`
- API key — leave blank.
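If evaluations fail with a local model, it is worth confirming that the Ollama service is actually listening on port 11434. A quick standalone check (an illustrative script, not a Doc Reviewer feature):

```python
import socket


def is_listening(host, port, timeout=1.0):
    """Return True if a TCP server accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Ollama listens on 11434 by default.
if not is_listening("localhost", 11434):
    print("Ollama does not appear to be running; start it and try again.")
```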
The models.yml seed file
Doc Reviewer reads `models.yml` from the directory next to `doc-reviewer.exe` once — on first run — and uses it to populate the initial model list. After that, `models.yml` has no effect. To add or change models after first run, use Settings → Models in the UI.
If you are deploying Doc Reviewer to a new machine and want to pre-configure models, place a `models.yml` file next to the `.exe` before launching the application for the first time.
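As an illustration only — the exact schema is defined by your Doc Reviewer version, so check its reference before relying on these keys — a seed file mirroring the fields from Settings → Models might look like:

```yaml
# Hypothetical models.yml — the key names below mirror the UI fields
# (Model ID, Display name, Provider, Base URL, API key) and are
# assumptions, not the documented schema.
models:
  - model_id: gpt-4o
    display_name: GPT-4o
    provider: openai
    base_url: https://api.openai.com/v1
    api_key: sk-your-key-here
  - model_id: llama3
    display_name: Llama 3 (local)
    provider: ollama
    base_url: http://localhost:11434/v1
    api_key: ""          # Ollama needs no key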