Understanding Context Windows and Token Limits
Learn how tokens and context windows work across Harvey.
Last updated: Dec 5, 2025
Note: This article covers advanced technical concepts for users who are interested, but this information is not required to use Harvey effectively.
Overview
Harvey leverages powerful AI models to generate high-quality outputs quickly and accurately. Like all large language models, Harvey operates within a token limit, which defines the size of its context window—how much information the AI can “see” at once when generating a response.
In this article, we'll share rough estimates for when a document or set of documents will fit entirely within the model’s context window—its full-context range. These estimates are flexible, and you can usually exceed them without seeing a drop in output quality.
What are tokens and context windows?
Note: To quickly view the full-context ranges, use the table of contents to jump to the quick reference table.
Harvey processes text in tokens, which are small pieces of language—typically a word or part of a word. For example:
- “contract” = 1 token
- “unbelievable” = 3 tokens
- "Please summarize this contract.” = ~6–8 tokens
Harvey’s models also have a context window, which is the total number of tokens the AI can consider at once. This includes:
- Your input (prompt or question)
- Any documents or Vault content included
- Prior conversation history in the same thread
- Harvey’s response
For example, if the context window is ~150 pages, Harvey can process that much combined input and output. If the content exceeds that range, Harvey will focus on the most relevant sections and may exclude unrelated content to maintain accuracy and clarity.
In short, tokens are the building blocks of text. The context window is the AI’s short-term memory, measured in tokens.
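To connect pages and tokens, here is some back-of-the-envelope arithmetic. The words-per-page and tokens-per-word figures below are illustrative assumptions, not Harvey specifications.

```python
WORDS_PER_PAGE = 500    # assumption: a typical single-spaced page of plain text
TOKENS_PER_WORD = 1.33  # common rule of thumb: roughly 0.75 words per token

def pages_to_tokens(pages: int) -> int:
    """Convert a page count to a rough token count under the assumptions above."""
    return round(pages * WORDS_PER_PAGE * TOKENS_PER_WORD)

print(pages_to_tokens(240))  # 159600 -- a ~240-page thread is on the order of 160k tokens
```

Under these assumptions, the ~240-page thread range discussed below corresponds to roughly 160,000 tokens, which is the scale at which modern large-model context windows operate.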
Threads
The full-context range of a thread is about 240 pages of text. This includes:
- The prompt or question you enter
- Any documents you attach
- Conversation history in the thread (your previous questions + responses)
When the range is reached, the least relevant content is dropped automatically.
Tip: Break up long documents or ask focused questions about smaller sections instead of uploading everything at once.
Querying with Sources
You can add files or sources, like a Vault, without hitting token limits—as long as your query is focused. When working with Vaults or large files, phrase your questions clearly so Harvey retrieves only what’s relevant.
- More targeted: "What is the termination clause in Section 9?"
- Less targeted: "Summarize each of the provisions in the contract."
Review Tables
Review tables allow you to store and reference large volumes of data without directly consuming your entire context window.
When you create review tables, Harvey treats each query–document pair as its own task, which allows it to return more targeted, detailed outputs.
Instead of combining all documents into a single context window, each table cell has its own context window, which allows it to fully ingest documents of roughly 60 pages of text.
If a document is longer than the available window, Harvey automatically identifies the most relevant information and may omit unrelated material to keep results clear and precise.
Tip: After extracting data points in your review table, use Ask to synthesize or analyze them further. Learn more: Ask Questions Directly in Review Tables
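Conceptually, the per-cell behavior described above can be sketched as a grid of independent calls, one per (column query, row document) pair. This is an illustrative model, not Harvey's implementation; `answer_fn` is a hypothetical stand-in for a single model call.

```python
def run_review_table(documents, column_queries, answer_fn):
    """Conceptual sketch: each cell is an independent call pairing one
    column's query with one row's document, each call with its own
    context window (not Harvey's actual implementation)."""
    return [
        [answer_fn(query, doc) for query in column_queries]  # one row per document
        for doc in documents
    ]

# Hypothetical stand-in for a single model call over one query-document pair
def answer_fn(query, doc):
    return f"answer to {query!r} over {doc!r}"

table = run_review_table(
    ["Contract A", "Contract B"],
    ["Termination clause?", "Governing law?"],
    answer_fn,
)
# table has 2 rows (one per document) x 2 columns (one per query)
```

Because each cell is its own call, a long document in one row never crowds out the context available to another row, which is why review tables scale to large document sets.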
Workflows
Each block in a Workflow has the same suggested limits as a thread (~240 pages of text).
- When referencing documents or Vault, only relevant snippets are passed into that block.
- AI-generated outputs and variables can be passed between blocks, but each block runs independently within its own token limit.
Tip: When chaining multiple AI blocks, keep inputs focused and avoid repeatedly passing large documents unless necessary.
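The block-by-block behavior above can be sketched as a simple pipeline: each block runs independently within its own token limit, and only outputs and variables flow forward. This is a conceptual sketch under those assumptions, not Harvey's workflow engine.

```python
def run_workflow(blocks, initial_variables):
    """Conceptual sketch: run blocks in order; each block sees only the
    variables passed to it and contributes its outputs to the next block
    (not Harvey's actual implementation)."""
    variables = dict(initial_variables)
    for block in blocks:
        variables.update(block(variables))  # each block runs within its own limit
    return variables

# Hypothetical toy blocks standing in for AI steps
summarize = lambda v: {"summary": v["doc"][:20]}
draft_memo = lambda v: {"memo": "Memo: " + v["summary"]}

result = run_workflow([summarize, draft_memo], {"doc": "Long contract text goes here..."})
```

The design point is that passing only the needed variables between blocks, rather than re-passing whole documents, keeps each block well inside its own context window.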
Quick Reference Table
Before reviewing the full-context ranges, keep in mind:
- These ranges reflect the approximate amount of text Harvey can fully consider in a single pass.
- They're not upload limits—you can safely upload more than these amounts and still get strong results.
- If outputs seem less relevant than expected, these ranges can serve as a helpful troubleshooting reference.
| Product Area | Full-Context Range | Notes |
|---|---|---|
| Threads | ~240 pages of plain text | Includes prompt, docs, and history |
| Review tables | ~60 pages of plain text per cell | Each cell is a unique model call that runs the query from that column over the document in that row |
| Workflows (per block) | ~240 pages of plain text | Each block runs independently |
What Happens If You Exceed the Context Window?
If limits are exceeded, Harvey automatically optimizes processing by:
- Dropping less relevant parts of the conversation or documents
- Producing shorter or less detailed responses
If a response seems incomplete or inconsistent, we recommend the following:
- Simplify: Shorten your prompt or break it into smaller asks (e.g. ask a specific query first, then ask follow-up questions from the output).
- Reduce: Remove or narrow the documents you upload.
- Split: Run long tasks across multiple requests or workflow blocks.
- Use a review table first: Run a review table with columns for each data point you are analyzing. Then, use Ask over the review table to start a thread over those extracted data points. This ensures that every document included in the table is reviewed.
Related Articles
Getting Started with Harvey
A high level overview of Harvey features and how to navigate our platform.
User Login Guide
Learn how to log in to your Harvey account securely and without issues.
How to Set Up Your User Profile
Discover how to customize Harvey to fit your needs.
Harvey Guide: Learn How to Use Harvey Directly in Harvey
Learn how to use Harvey directly in the product with instant, cited answers sourced from the Help Center.
Choosing Between Threads and Review Tables
Learn how review tables and threads work together to streamline your document review and analysis.