Long text chunker for chat paste
Client · Turn a long paste into numbered pieces under a character budget. Pair it with the token estimate tool when you think in rough tokens, or with context split & budget when you already work in delimiter sections.
When to use this
Chat products often cap bytes or characters per message. This tool gives you ordered, copy-ready slices so you can paste in sequence and tell the model there is more coming. It is a packing helper—not a rewrite, summary, or RAG indexer.
Source text
Limits here are characters (JavaScript string length), not provider tokenizer tokens. Use Paragraph mode to keep blank-line boundaries when possible, Line mode for single newlines, or Hard mode for fixed-width slices only.
Example: 1 chunk · effective max 8,000 chars
- Chunk 1 of 1 · 279 chars · ~70 tok (heuristic)
Section one introduces the idea. It is short. Section two has several lines that might need to be pasted separately when your chat caps message size. Keep each paste under the limit and tell the model you will continue in the next message. Section three closes with next steps.
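The packing idea above can be sketched in a few lines. This is a hypothetical illustration, not the page's actual source: Hard mode slices at a fixed character width (JavaScript string length, so a surrogate pair could in principle be split), and each slice gets a "Part i / N" label so the model knows more is coming.

```javascript
// Hypothetical sketch of Hard mode: fixed-width slices under a
// character budget. Counts UTF-16 code units via .length, not bytes
// or tokens; a surrogate pair at a boundary could be split.
function hardChunks(text, maxChars) {
  const chunks = [];
  for (let i = 0; i < text.length; i += maxChars) {
    chunks.push(text.slice(i, i + maxChars));
  }
  return chunks;
}

// Label each slice so it can be pasted in sequence.
function labelForPaste(chunks) {
  return chunks.map(
    (c, i) => `Part ${i + 1} / ${chunks.length}\n${c}`
  );
}
```

Pasting the labeled parts in order, and telling the model the final part number up front, is usually enough for it to wait before answering.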
Common use cases
- Paste a long doc when the assistant UI caps message size: split here, then send “Part 1 / Part 2” in order.
- Plan paste sizes beside the token estimate tool: same CJK-aware heuristic on each chunk after you split.
- Redact or scan for secrets first when the source might contain API keys or email addresses.
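The page's exact CJK-aware heuristic is not published, but a common rough approach, sketched here as an assumption, counts CJK characters at about one token each and other text at about four characters per token:

```javascript
// Assumed CJK-aware token heuristic (illustrative, not the page's
// actual formula): CJK characters often tokenize near one token each,
// while other text averages roughly four characters per token.
function roughTokens(text) {
  const cjk = (text.match(/[\u3000-\u9FFF\uF900-\uFAFF]/g) || []).length;
  const other = text.length - cjk;
  return cjk + Math.ceil(other / 4);
}
```

Running this per chunk after splitting gives the per-paste estimate the list above describes; it stays a planning aid, not a tokenizer.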
Common mistakes to avoid
Assuming chunks match provider tokenizer limits
This page counts characters and shows a rough token estimate from heuristics, not tiktoken or any hosted tokenizer.
Expecting semantic sectioning
Paragraph mode only favors blank lines; it does not detect headings or chapters in PDFs unless they use blank lines.
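Paragraph mode's blank-line behavior can be sketched as follows (an assumed implementation, not the page's source): split only on blank lines, then greedily pack paragraphs into chunks under the budget. A heading with no blank line around it is just part of its paragraph, which is exactly the limitation noted above.

```javascript
// Sketch of Paragraph mode (assumed behavior): split on runs of blank
// lines, then greedily pack paragraphs under the character budget.
// Headings without surrounding blank lines are NOT boundaries.
function paragraphChunks(text, maxChars) {
  const paras = text.split(/\n{2,}/);
  const chunks = [];
  let current = "";
  for (const p of paras) {
    const candidate = current ? current + "\n\n" + p : p;
    if (candidate.length <= maxChars) {
      current = candidate;
    } else {
      if (current) chunks.push(current);
      current = p; // a single oversized paragraph would need hard slicing
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```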
FAQ
Does my text leave the browser?
No. Splitting and copy buttons run locally; nothing is sent to our servers for generation.
Which break mode should I use?
Paragraph for prose with blank lines, Line for logs or one-line-per-row text, Hard when you only care about a fixed character width.
Common search terms
Phrases people search for that match this tool. See the full long-tail keyword index.
- split long text for chatgpt paste
- paste large document into llm in parts
- chunk text by character limit browser
More tools
Related utilities you can open in another tab—mostly client-side.
UTF-8 byte size for API & chat pastes
Client · UTF-8 byte length vs JavaScript string length, with an optional byte-ceiling bar. Plan HTTP bodies and chat payloads locally; includes a rough token hint that is not tokenizer-exact.
LLM character budget from token cap
Client · Plan max paste characters from a target token budget; same CJK-aware heuristic as the token estimate, browser-only, not tokenizer-exact.
LLM token estimate
Client · Rough character-based token planning for prompts and context; CJK-aware heuristic, browser-only, not tokenizer-exact.
LLM context split & budget
Client · Split pasted prompt blocks by a delimiter and see each section's rough token share, with an optional budget comparison; browser-only, not tokenizer-exact.
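The UTF-8 byte-size tool listed above rests on a distinction worth one example: JavaScript's `.length` counts UTF-16 code units, while APIs that cap bytes see the encoded size. `TextEncoder` is a standard browser and Node global:

```javascript
// UTF-8 byte length vs JavaScript string length.
// .length counts UTF-16 code units; encoded size can differ.
const utf8Bytes = (s) => new TextEncoder().encode(s).length;

// "é" (U+00E9) is 1 code unit but 2 UTF-8 bytes;
// an emoji like U+1F600 is 2 code units but 4 UTF-8 bytes.
```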