AI agents & LLM integration
Toolcore is not a hosted JSON or image API: there is no endpoint you call with a payload to get transformed data back. What you can integrate, from code or from an assistant, are the stable HTTPS URLs that open a tool with optional prefilled text via query parameters, plus JSON manifests your agents can fetch to discover paths, key names, and an English system prompt. Most tools run their main work in the visitor's browser; some pages may use optional sign-in or third-party services (see each page).
Save agent compute (pseudo-MCP pattern)
You do not need a full MCP server on our side. The supported pattern is manifest + prefilled URLs: the assistant fetches JSON, builds a link, and the user's browser runs formatters, encoders, crypto, diffs, and the rest. That saves LLM tokens and compute compared to re-deriving large outputs in chat. Each manifest includes an integration object describing this contract; an optional local MCP server (bun run mcp, stdio) uses the same non-AIGC catalog for clients that speak MCP natively. For a human-facing terminal-style entry on the site, see Pseudo-CLI (open + catalog id), which drives the same tool pages under the hood.
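As a sketch of that contract, assuming the manifest exposes a tools array with path and prefill fields (these key names are illustrative; check the live /agent-tools.json for the real shape):

```typescript
// Pseudo-MCP sketch: given a fetched manifest, return a prefilled URL
// the user can open, or null when the tool cannot be prefilled.
// The "tools" array and "prefill" object are assumed field names.
type ToolEntry = { path: string; prefill?: { kind: string } };
type Manifest = { siteUrl: string; tools: ToolEntry[] };

function buildPrefilledLink(manifest: Manifest, toolPath: string, text: string): string | null {
  const tool = manifest.tools.find((t) => t.path === toolPath);
  if (!tool || tool.prefill?.kind === "none") return null; // page needs upload or interaction
  const base = manifest.siteUrl.replace(/\/$/, "");        // avoid a double slash
  return base + tool.path + "?q=" + encodeURIComponent(text); // percent-encoded UTF-8
}
```

An assistant would fetch the manifest once, call a helper like this, and reply with the link instead of re-deriving the transformed output in chat.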
Machine-readable manifest
GET /agent-tools.json returns JSON with siteUrl, manifestUrl, per-tool path and prefill rules (including kind: "none" when a page needs upload or interaction that URLs cannot supply). Use it to build links for workflows like JSON format, encrypt and decrypt, or Base64 where prefill is supported.
GET /mcp-tools.json is the same structure with non-AIGC tools only (browser-local catalog; no AIGC entries). Use it when you want agents or MCP clients to ignore server-side AI helpers. The repo also ships a small stdio MCP server: run bun run mcp from the project root and point your client at that command.
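Client configuration formats vary, but many stdio MCP clients accept a command plus arguments. A typical entry pointing at the local server might look like this (the mcpServers key and working-directory behavior are client-specific; consult your client's docs):

```json
{
  "mcpServers": {
    "toolcore": {
      "command": "bun",
      "args": ["run", "mcp"]
    }
  }
}
```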
LLM system prompt (plain text)
GET /llm-prompt.txt is the same English instructions embedded in the manifest as llmSystemPrompt, formatted for copy-paste into assistant or MCP-style client settings. It tells the model to fetch the manifest, build HTTPS URLs with q (percent-encoded UTF-8) or qb (Base64 UTF-8), and not to claim server-side execution.
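The two prefill encodings can be produced as follows; the q and qb parameter names come from the manifest contract above, and the helpers are a sketch rather than an official client library:

```typescript
// q  = percent-encoded UTF-8 text.
// qb = Base64 of the UTF-8 bytes, itself percent-encoded for URL safety.
function qParam(text: string): string {
  return "q=" + encodeURIComponent(text); // percent-encodes the UTF-8 code units
}

function qbParam(text: string): string {
  const bytes = new TextEncoder().encode(text); // UTF-8 bytes
  let bin = "";
  for (const b of bytes) bin += String.fromCharCode(b);
  const b64 = btoa(bin);                   // Base64 over those bytes
  return "qb=" + encodeURIComponent(b64);  // '+' and '=' are not URL-safe
}
```

TextEncoder and btoa are available in browsers and in modern Node and Bun runtimes, so the same helpers work in an agent process or in page script.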
How this differs from a "JSON format API"
Many sites compete on head terms like "json format", "image convert", or "encrypt decrypt" with a classic request/response API. Toolcore's angle is privacy and static hosting: the work happens in the tab, and agents integrate by returning or opening URLs—documented here and in About.
Common use cases
- Fetch /agent-tools.json or /mcp-tools.json and build prefilled HTTPS links so users run transforms in the browser instead of in chat.
- Paste /llm-prompt.txt into assistant settings so the model follows the same URL contract as the manifest.
- Pair with the pseudo-CLI or direct ?q= links when you want a short, repeatable handoff from automation to UI.
Common mistakes to avoid
Expecting a private JSON-format or image API
There is no server endpoint that accepts arbitrary payloads (including secrets) and returns transformed data. Integration is manifest + links; for client-side tools, the work stays in the visitor's tab.
Sending AIGC tools to MCP clients that must stay offline
Use /mcp-tools.json for the non-AIGC catalog, or filter entries in your client when policy forbids server-side generation.
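One way to filter without trusting any per-entry flag is to keep only tools whose path also appears in the non-AIGC catalog, i.e. intersect the two manifests. The tools array with a path field is an assumed shape; verify it against the live /agent-tools.json and /mcp-tools.json:

```typescript
// Keep only entries from the full catalog that also appear in the
// non-AIGC catalog (/mcp-tools.json), matched by path.
type ToolEntry = { path: string };

function nonAigcOnly(full: ToolEntry[], mcpCatalog: ToolEntry[]): ToolEntry[] {
  const allowed = new Set(mcpCatalog.map((t) => t.path));
  return full.filter((t) => allowed.has(t.path));
}
```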
FAQ
Does Toolcore run my payload on a server?
Most tools execute in the browser. AIGC routes may process content server-side when configured—see each tool page and execution labels.
Why prefer links over pasting huge formatted output in chat?
Prefilled URLs let the site do the transform locally, which often saves tokens and avoids duplicating work the UI already performs.