LLM context split & budget

Client

Paste a multi-part context (system, tools, retrieval, user) separated by a line you control. Get per-section character counts, rough token estimates, and share of the total—optionally compare to a planning budget. Works with the same heuristics as the LLM token estimate tool.

When to use this

Large prompts are often assembled from several sources: instructions, tool schemas, retrieved documents, and the latest user message. This page helps you see the weight of each part so you can trim or reorder before opening a chat UI or API—not to replace a real tokenizer.

Context text


Sections are split on a literal delimiter you choose (for example a --- line). Each chunk gets its own character count and rough token estimate using the same CJK-aware heuristic as the single-block token tool—not a provider tokenizer.
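The split-and-count step can be sketched in a few lines. This is a minimal illustration, not the tool's exact code: the delimiter handling and the 4-chars-per-token ratio are assumptions for the sake of the example.

```python
# Sketch of splitting a pasted context and sizing each chunk.
DELIMITER = "---"  # assumed sentinel line between sections

def split_sections(text, delimiter=DELIMITER):
    """Split on a literal delimiter line and strip surrounding whitespace."""
    return [part.strip() for part in text.split("\n" + delimiter + "\n")]

def rough_tokens(section, chars_per_token=4.0):
    """Very rough estimate: character count divided by an assumed ratio."""
    return round(len(section) / chars_per_token)

context = 'You are a helpful assistant.\n---\n{"tool": "search"}\n---\nUser: hi'
for i, section in enumerate(split_sections(context), 1):
    print(i, len(section), rough_tokens(section))
```

Per-section percentages then follow by dividing each rough estimate by the sum across sections.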

Sections: 4 · Total chars: 315 · Total ≈ tokens: 80
Per-section size and token share
  #   Chars   Words   ≈ tokens   % of tokens   CJK ratio
  1    72     12      18         22.5%         0%
  2   105      4      27         33.8%         0%
  3    94     13      24         30.0%         0%
  4    44     10      11         13.8%         0%

Common use cases

  • See how much of your window is system prompt versus tool JSON versus retrieved passages before you send anything.
  • Compare two packaging strategies (e.g. fewer large chunks vs. more small ones) by pasting each draft with the same delimiters.
  • Pair with a rough token budget field to spot when a bundle is likely over a target limit.
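The budget check in the last bullet amounts to a comparison against the summed estimates. A hypothetical sketch, using the per-section token counts from the example table above and an arbitrary 64-token budget:

```python
# Flag a bundle whose rough total exceeds a planning budget.
def over_budget(section_token_counts, budget):
    """Return (flag, total): flag is True when the sum exceeds the budget."""
    total = sum(section_token_counts)
    return total > budget, total

flagged, total = over_budget([18, 27, 24, 11], budget=64)
print(flagged, total)  # True, 80 — this bundle is over a 64-token budget
```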

Common mistakes to avoid

  • Assuming percentages match billing

    Token counts here use character heuristics. Provider tokenizers can differ—use official counters when cost or hard limits matter.

  • Splitting in the wrong place

    The tool splits on an exact substring. If your delimiter appears inside JSON or prose, you will get extra sections—pick a sentinel line that cannot occur in payload text.
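The collision is easy to reproduce. In this sketch a bare `---` delimiter also matches inside a JSON payload, while a made-up sentinel (hypothetical, any string guaranteed absent from your payloads works) does not:

```python
# Exact-substring splitting misfires when the delimiter occurs inside a payload.
payload = '{"divider": "---"}'

bad = ("system prompt\n---\n" + payload).split("---")
print(len(bad))  # 3 sections instead of the intended 2

# A sentinel that cannot occur in payload text avoids the collision.
SENTINEL = "\n<<<SECTION>>>\n"  # hypothetical sentinel line
good = ("system prompt" + SENTINEL + payload).split(SENTINEL)
print(len(good))  # 2
```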

FAQ

Does this send my prompt to a server?

No. Splitting and arithmetic run entirely in your browser, like the single-block token estimate tool.

Why auto chars-per-token “per section”?

Mixed-language prompts often put English instructions in one block and denser CJK or code in another. Auto mode recomputes the blend for each section instead of averaging the whole paste.
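One plausible way to recompute the blend per section is to weight two regimes by the section's own CJK ratio. This is an assumed model for illustration (the tool's actual heuristic and constants may differ): CJK characters are taken as roughly one token each, Latin text as roughly four characters per token.

```python
# Assumed per-section blend: not the tool's exact formula.
def cjk_ratio(text):
    """Share of non-space characters in common CJK ranges (Han, kana)."""
    def is_cjk(ch):
        return "\u4e00" <= ch <= "\u9fff" or "\u3040" <= ch <= "\u30ff"
    chars = [ch for ch in text if not ch.isspace()]
    return sum(is_cjk(ch) for ch in chars) / len(chars) if chars else 0.0

def chars_per_token(text):
    """Blend ~1 char/token (CJK) and ~4 chars/token (Latin) by CJK ratio."""
    r = cjk_ratio(text)
    return r * 1.0 + (1 - r) * 4.0

print(chars_per_token("hello world"))  # 4.0 for pure Latin text
print(chars_per_token("こんにちは"))     # 1.0 for pure CJK
```

Computing the ratio per section rather than over the whole paste keeps a dense CJK block from skewing the estimate for an all-English block, and vice versa.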

Common search terms

Phrases people search for that match this tool. See the full long-tail keyword index.

  • split llm prompt by section
  • context window budget planner
