This page answers the most common questions about Doc Reviewer. If you are running into a specific problem — an app that won’t open, an evaluation that returns errors, or web pages that fail to load — the answers here cover the most likely causes and fixes. If your question is not listed, check Known limitations for constraints that may affect your workflow.

Documentation Index
Fetch the complete documentation index at: https://www.doc-reviewer.site/llms.txt
Use this file to discover all available pages before exploring further.
Why does the app open in a browser? Is it a web app?
When you launch doc-reviewer.exe, the app starts a local backend server on localhost:8000 and automatically opens the interface in your default browser. No internet connection is required to use the app itself. The only outbound network calls Doc Reviewer makes are LLM API requests to the provider you configure — OpenAI, Anthropic, or a local model via Ollama.
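If the browser opens but the page never loads, it is worth checking whether the backend is actually listening. A minimal sketch in Python, assuming only the localhost:8000 address mentioned above (the function name is ours):

```python
import socket

def backend_is_up(host: str = "127.0.0.1", port: int = 8000, timeout: float = 2.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        # create_connection raises OSError if the port is closed or unreachable
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False after launching doc-reviewer.exe, the backend never started; check for another process already occupying port 8000.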
Is my data sent to the cloud?
No. Uploaded documents are saved in the data/uploads/ folder next to the .exe, and all metadata, evaluation results, and settings are stored in a local SQLite database at data/db.sqlite. Your API keys are stored locally in that database and are never transmitted anywhere other than the LLM provider’s API endpoint.
Which LLM model should I use?
Can I use a local or offline model?
Yes. Point Doc Reviewer at Ollama’s OpenAI-compatible endpoint, http://localhost:11434/v1. Leave the API key field blank — Ollama does not require authentication. Make sure Ollama is running with your chosen model loaded before you start an evaluation.
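You can verify from outside the app that Ollama is reachable and see which models it reports. A sketch, assuming Ollama's standard OpenAI-compatible /v1/models listing; the helper name is ours:

```python
import json
import urllib.request

def list_local_models(base_url: str = "http://localhost:11434/v1") -> list:
    """Return the model IDs an OpenAI-compatible server (such as Ollama) reports."""
    with urllib.request.urlopen(f"{base_url}/models", timeout=5) as resp:
        payload = json.load(resp)
    # OpenAI-compatible servers wrap results as {"object": "list", "data": [{"id": ...}, ...]}
    return [entry["id"] for entry in payload.get("data", [])]
```

If your chosen model is missing from the returned list, load it first (for example with `ollama run <model-name>`) and then start the evaluation.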
Web page loading fails — what should I do?
.exe. If you have not installed it yet, run the following command in a terminal:
My evaluation results look wrong — what can I do?
- Weak LLM model — small or poorly-tuned models give unreliable scores. Switch to a stronger model in Settings → Models.
- No product context — without context, the LLM evaluates instructions without knowing the product’s terminology and audience. Open your project page and generate or write product context, then re-run the evaluation.
- Incorrect criteria — the active criteria set may not match your documentation style. Review or customize criteria in Settings → Criteria.
- False positives — if a specific criterion is flagging things incorrectly for a particular instruction, mark it as a false positive using the override control in the evaluation result panel.
Can I evaluate documents written in English?
## Роль (Role) section at the top of the criteria file, which is used as the LLM’s system prompt role.

Automatic instruction detection uses morphological analysis optimized for Russian. Detection accuracy may be lower for English section titles — you can manually reclassify sections that are missed or incorrectly classified.
Where is my data stored?
Everything is stored in the data/ folder next to doc-reviewer.exe:
- data/db.sqlite — the SQLite database containing all projects, documents, evaluation results, model settings, and criteria sets
- data/uploads/ — the original document files you have uploaded
data/ folder.
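If you want to see what the database actually holds, you can inspect db.sqlite with Python's built-in sqlite3 module. The sketch below lists table names via sqlite_master rather than assuming any particular schema; the helper name is ours:

```python
import sqlite3

def list_tables(db_path: str = "data/db.sqlite") -> list:
    """List the table names in a SQLite database file such as Doc Reviewer's db.sqlite."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        ).fetchall()
    finally:
        conn.close()
    return [name for (name,) in rows]
```

Open the file read-only (or on a copy) while the app is running to avoid interfering with its writes.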
How do I update Doc Reviewer?
Download the new release and replace doc-reviewer.exe with the new file. Your data/ folder — including data/db.sqlite and all uploaded documents — is preserved separately and carries over automatically. You do not need to redo any configuration. Occasionally a release with breaking changes may require you to delete db.sqlite and reconfigure the app from scratch. Release notes will call this out explicitly when it applies.
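Since everything lives in the data/ folder, making a timestamped copy of it before replacing the .exe is a cheap safety net. A sketch in Python; the helper name and the backups/ destination are ours:

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup_data(data_dir: str = "data", backup_root: str = "backups") -> Path:
    """Copy the data/ folder (db.sqlite plus uploads) to a timestamped backup directory."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = Path(backup_root) / f"data-{stamp}"
    # copytree creates backup_root and all parents as needed
    shutil.copytree(data_dir, dest)
    return dest
```

Run it with the app closed so db.sqlite is not copied mid-write.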
Can I share evaluation results with my team?
Why are some sections not detected as instructions?
What does product context do?