AI Tools
Token counters, LLM cost estimators, prompt diffing — the utilities you need for building on OpenAI, Anthropic, and other model APIs.
3 tools in this category
Anthropic Messages API
Send prompts to Claude from your browser — streaming, system prompts, your key stays local.
HTTP Client
Send REST requests from your browser — import and export curl commands.
OpenAI Responses API
Test OpenAI's Responses API from your browser — streaming, structured outputs, your key stays local.
Building on LLMs introduces a new class of dev tool: token counters to stay under context limits, cost calculators to forecast spend, prompt diffs to track what changed between model calls. This category is in active development — the first tools are live, and more ship regularly.
What's coming
A GPT/Claude/Gemini token counter (different tokenizers!), a cost calculator that multiplies token counts by current model pricing, a prompt diffing tool that highlights what changed between versions, and an eval harness that runs the same prompt against multiple models and compares outputs. Everything client-side where possible — tokenizers are chunky JSON files but small enough to ship in a browser bundle.
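The cost calculator described above is, at its core, a multiplication: token counts times per-token pricing, split across input and output. A minimal sketch of that logic, with a hypothetical model name and illustrative (not current) rates:

```typescript
// Per-million-token pricing in USD. "example-model" and these rates are
// placeholders for illustration — real rates change and must be kept current.
type Pricing = { input: number; output: number };

const PRICING: Record<string, Pricing> = {
  "example-model": { input: 3.0, output: 15.0 },
};

function estimateCost(
  model: string,
  inputTokens: number,
  outputTokens: number
): number {
  const p = PRICING[model];
  if (!p) throw new Error(`No pricing entry for ${model}`);
  // Convert token counts to millions, then multiply by the per-million rate.
  return (
    (inputTokens / 1_000_000) * p.input +
    (outputTokens / 1_000_000) * p.output
  );
}

console.log(estimateCost("example-model", 10_000, 2_000).toFixed(4)); // → "0.0600"
```

The only moving part is keeping the pricing table up to date — the arithmetic itself never changes, which is why this can run entirely client-side.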
Why client-side tokenizers matter
Online tokenizers are a natural place for prompt engineers to test sensitive system prompts. Sending your "secret" instructions to a third-party server during development is exactly what you want to avoid. All our AI tools will ship the tokenizer in JavaScript and run it locally.
Frequently asked questions
When will the rest of the AI tools ship?
Soon — the token counter is first in the queue, followed by the cost calculator and prompt diff. New tools land here weekly.
Will the tokenizer be accurate?
Yes — we'll bundle the actual tokenizer vocabularies used by OpenAI (tiktoken), Anthropic, and Google. The result matches what you'll see in your API billing.