31 free tools · Runs in browser · No data sent to server

Free Developer Tools Online

The data that goes through developer tools is often sensitive — production JWTs, .env files, database schemas, API responses. Everything here runs in JavaScript inside your browser tab. No server call, no upload, nothing logged. Paste your JSON, decode your JWT, build your cron expression — all of it stays local.

Runs locally in browser · No data sent to server · No sign-up · No install · 100% Free

Data Format Converters

JSON & Code Tools

Auth & Security

Web & CSS Utilities

DevOps & Infrastructure

Testing & Debugging

Data & AI Utilities

Low-Level & Misc

Why these tools run in your browser, not on a server

Most online developer tools work by sending your input to a server, processing it there, and returning the result. That's fine for a random string. It's less fine for a JWT from your production auth system, an .env file with database credentials, or a SQL schema you'd rather not copy to a third-party server. Every tool here runs as JavaScript in your browser tab. Open the Network panel — you won't see your data leave.

Nothing leaves your tab

No POST request with your JSON. No server log entry with your JWT. No S3 bucket storing your .env. The browser's JavaScript engine does the processing and writes the result directly into the DOM.

Works on restricted networks

Once the page loads, the tools keep working with no internet. Useful on corporate networks that block external API calls, on VPNs, or anywhere the network is unreliable.

No waiting for a response

Local processing means no network round-trip. Output updates as you type. A 5 MB JSON file formats in under a second. The latency is your CPU, not a distant server.

What developers use these for

  • Minified API response that's unreadable — format and check if it's valid JSON · JSON Formatter
  • Converting a JSON config to YAML for a Kubernetes manifest or GitHub Actions workflow · JSON to YAML
  • JWT from an auth log — check which claims it contains and when it expires · JWT Decoder
  • SHA-256 or MD5 hash for a string, a secret, or a file checksum · Hash Generator
  • Cron expression that's firing at unexpected times — see what it actually means in plain English · Cron Expression Builder
  • Encode an image or binary file as Base64 for a data URI or JSON payload · Base64 Encoder / Decoder
  • Regex that works in one language but not another — test it live with real input · Regex Tester
  • Spot duplicate keys, wrong values, or syntax errors in an .env file before deploying · .env File Parser
  • Prompt that's hitting token limits — check the count per model before the API call · LLM Token Counter
  • API response missing HSTS, CSP, or X-Frame-Options — check what headers came back · HTTP Header Analyzer
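The Base64 use case above comes down to a few lines of browser JavaScript. A minimal sketch, not the site's actual code — the helper name and the sample bytes are illustrative:

```javascript
// btoa() expects a "binary string" (one char per byte), so arbitrary
// bytes are converted from a Uint8Array before encoding.
function bytesToBase64(bytes) {
  let binary = "";
  for (const b of bytes) binary += String.fromCharCode(b);
  return btoa(binary);
}

// First four bytes of a PNG header, used here only as sample input.
const bytes = new Uint8Array([0x89, 0x50, 0x4e, 0x47]);
const dataUri = "data:image/png;base64," + bytesToBase64(bytes);
```

The whole round trip happens in the tab; the file's bytes never appear in a request body.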

Guides & Related Reading

Questions about the developer tools

I pasted a production JWT into the decoder. Was that a mistake?

No. The decoder runs entirely in your browser — it splits the token on the dots, calls atob() on each part, and displays the JSON. That's all JavaScript, local to your tab. Nothing is sent outbound. If you're sceptical, open DevTools → Network tab, then paste a token — you'll see zero requests fire.
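That decode step fits in a few lines. A hedged sketch of the same idea — the token below is generated inline rather than pasted, so nothing sensitive is involved, and since real JWTs use base64url, the `-` and `_` characters are mapped back before `atob()`:

```javascript
// Purely local JWT decode: split on the dots, base64-decode, parse.
function decodeJwt(token) {
  const [header, payload] = token.split(".").slice(0, 2).map((part) => {
    // JWTs use base64url; atob() wants standard base64.
    const b64 = part.replace(/-/g, "+").replace(/_/g, "/");
    return JSON.parse(atob(b64));
  });
  return { header, payload };
}

// Build a throwaway token locally; the signature isn't needed for decoding.
const token = [
  btoa(JSON.stringify({ alg: "HS256", typ: "JWT" })),
  btoa(JSON.stringify({ sub: "1234", exp: 1700000000 })),
  "signature-not-checked",
].join(".");

const { payload } = decodeJwt(token);
// payload.exp is the expiry as a Unix timestamp.
```

Note that decoding is not verifying: this reads the claims but says nothing about whether the signature is valid.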

How is this different from running these tools in a terminal?

For most tasks it isn't — jq, python -m json.tool, and openssl base64 all do the same thing. The difference is setup. When you're on a machine without the right toolchain, a browser tab with a working formatter is faster than installing dependencies or remembering which flag does what. These are convenience tools, not replacements for a real shell.

Why does the JSON Formatter validate as well as format?

Formatting requires parsing first. A JSON parser either succeeds or throws a syntax error with a specific position. Surfacing that error — with the line and column number — is just exposing what the parser already knows. You can't format invalid JSON, so the validation step is built in rather than being a separate pass.
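In code terms, formatting and validating are the same call. A minimal sketch — the function name is illustrative, and the exact error wording varies by engine (V8, SpiderMonkey, and JavaScriptCore all phrase parse errors differently):

```javascript
// One parse serves both jobs: success yields formatted output,
// failure yields a syntax error with position info in the message.
function formatJson(text) {
  try {
    return { ok: true, formatted: JSON.stringify(JSON.parse(text), null, 2) };
  } catch (err) {
    return { ok: false, error: err.message };
  }
}

formatJson('{"a": 1}');   // ok: true, pretty-printed output
formatJson('{"a": 1,}');  // ok: false, trailing comma caught by the parser
```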

The LLM Token Counter gives different counts for different models. Why?

Each model uses a different tokenization vocabulary. GPT-4 uses cl100k_base, Claude uses a different BPE vocabulary, and Gemini uses SentencePiece. The same sentence can tokenize differently depending on which model built the vocabulary. The cost estimate multiplies the model-specific token count by that model's per-token rate.

When would I use Text Diff Checker instead of git diff?

When you don't have files in a repository to diff. API responses you copied from Slack, two versions of a config pasted from separate sources, before/after for a database migration, prompt text you're iterating on — anything that lives in a clipboard rather than a tracked file. No git init, no terminal, works on any two text blobs.
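For illustration, diffing two pasted blobs needs nothing beyond plain JavaScript. A toy LCS-based sketch — a production tool would likely use something like Myers diff, but the principle is the same:

```javascript
// Toy line diff: longest-common-subsequence table, then a walk that
// emits "- " for removed lines, "+ " for added, "  " for unchanged.
function diffLines(a, b) {
  const A = a.split("\n"), B = b.split("\n");
  const m = A.length, n = B.length;
  const dp = Array.from({ length: m + 1 }, () => new Array(n + 1).fill(0));
  for (let i = m - 1; i >= 0; i--)
    for (let j = n - 1; j >= 0; j--)
      dp[i][j] = A[i] === B[j]
        ? dp[i + 1][j + 1] + 1
        : Math.max(dp[i + 1][j], dp[i][j + 1]);
  const out = [];
  let i = 0, j = 0;
  while (i < m && j < n) {
    if (A[i] === B[j]) { out.push("  " + A[i]); i++; j++; }
    else if (dp[i + 1][j] >= dp[i][j + 1]) out.push("- " + A[i++]);
    else out.push("+ " + B[j++]);
  }
  while (i < m) out.push("- " + A[i++]);
  while (j < n) out.push("+ " + B[j++]);
  return out;
}

diffLines("a\nb\nc", "a\nc"); // "b" is marked as removed
```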

Is there a size limit on what I can process?

No server-side limit — there's no server. The practical ceiling is your browser's memory. Files up to around 10–20 MB parse quickly in modern browsers. Very large JSON (50 MB+) will be slow to render with syntax highlighting, but it'll complete. If you're routinely processing huge files, a CLI tool will be faster, but for one-off tasks the browser is fine.

Explore Other Tool Categories