Free Tool · No signup required

Number Base Converter — Binary, Octal, Decimal & Hex Instantly

See 4-bit nibbles, bit-length, and two's complement — live in every base


Switching between number bases is a daily task for anyone working with bitwise operations, memory addresses, network masks, color values, or low-level protocols. This converter lets you type in any of the four bases — binary, octal, decimal, or hexadecimal — and immediately see the equivalent in all three others. Binary output is grouped into clean 4-bit nibbles (e.g. 0111 1111) and adapts to 8-bit, 16-bit, or 32-bit display based on the magnitude of your number. Toggle two's complement to see how a negative integer is encoded in binary memory — invaluable when debugging bitwise operations or reading disassembled machine code.
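The simultaneous conversion described above can be sketched in a few lines of JavaScript using BigInt (the function name here is illustrative, not the tool's actual API):

```javascript
// Convert one integer to all four bases at once.
// Uses BigInt so arbitrarily large values keep full precision.
function toAllBases(value) {
  const n = BigInt(value);
  const neg = n < 0n;
  const abs = neg ? -n : n;
  const sign = neg ? "-" : "";
  return {
    binary: sign + abs.toString(2),
    octal: sign + abs.toString(8),
    decimal: n.toString(10),
    hex: sign + abs.toString(16).toUpperCase(),
  };
}

console.log(toAllBases(255)); // { binary: "11111111", octal: "377", decimal: "255", hex: "FF" }
```

`BigInt.prototype.toString(radix)` does the heavy lifting; the sign is handled separately so negative numbers render as a leading minus rather than BigInt's internal representation.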

How to Convert Between Number Bases

Type in any base — binary, octal, decimal, or hex — and all others update instantly.

Step 1

Type in any base

Click into any of the four input fields and start typing. Decimal accepts 0-9 and negative numbers with a leading minus. Binary accepts 0 and 1. Octal accepts 0-7. Hexadecimal accepts 0-9 and A-F (case insensitive). As you type, all other fields update instantly. Invalid characters are ignored.
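A minimal sketch of the per-base input handling described above — strip characters that are invalid for the chosen base, then parse with BigInt (the helper and its behavior at the edges are assumptions, not the tool's real internals):

```javascript
// Parse user input in a given base, silently dropping invalid characters.
// Returns a BigInt, or null if nothing parseable remains.
function parseInBase(text, base) {
  if (base === 10) {
    const neg = text.trim().startsWith("-"); // leading minus allowed in decimal
    const digits = text.replace(/[^0-9]/g, "");
    if (!digits) return null;
    return (neg ? -1n : 1n) * BigInt(digits);
  }
  const pattern = { 2: /[^01]/g, 8: /[^0-7]/g, 16: /[^0-9a-fA-F]/g }[base];
  const digits = text.replace(pattern, "");
  if (!digits) return null;
  // BigInt accepts 0b/0o/0x prefixed strings for bases 2, 8, and 16.
  const prefix = { 2: "0b", 8: "0o", 16: "0x" }[base];
  return BigInt(prefix + digits);
}

console.log(parseInBase("ff", 16)); // 255n
```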

2
Step 2

Read the binary breakdown panel

Below the binary field, a breakdown panel shows the number formatted as 4-bit nibbles (groups of four binary digits) for easy reading. It also displays the detected bit-length (8, 16, or 32 bits) based on the magnitude of your number, making it easy to verify alignment with data types like uint8, int16, or uint32.
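The nibble grouping and bit-width detection might be implemented roughly like this (a sketch; the helper name and return shape are assumptions):

```javascript
// Pad a non-negative BigInt to the smallest of 8/16/32 bits that fits,
// then group the binary digits into 4-bit nibbles separated by spaces.
function formatNibbles(n) {
  const bits = n.toString(2);
  const width = bits.length <= 8 ? 8 : bits.length <= 16 ? 16 : 32;
  const padded = bits.padStart(width, "0");
  return { width, nibbles: padded.match(/.{4}/g).join(" ") };
}

console.log(formatNibbles(127n)); // { width: 8, nibbles: "0111 1111" }
```

Padding to the detected width first means the nibble boundaries always line up with byte boundaries, which is what makes the grouping useful for checking against uint8/int16/uint32 layouts.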

3
Step 3

Toggle two's complement

Enable the Two's complement toggle to see how a negative decimal number is represented in binary. For example, -1 in 8-bit two's complement is 11111111 (all ones). This is the representation used by CPUs and programming languages for signed integers, and understanding it is essential for bitwise operation debugging.
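The encoding the toggle displays can be reproduced with a mask, since BigInt bitwise operators already follow two's complement semantics (a minimal sketch, not the tool's actual code):

```javascript
// Encode a signed BigInt as a fixed-width two's complement bit string.
function twosComplement(n, width) {
  const mask = (1n << BigInt(width)) - 1n; // e.g. 0xFFn for width 8
  // Masking a negative BigInt yields its two's complement pattern.
  return (n & mask).toString(2).padStart(width, "0");
}

console.log(twosComplement(-1n, 8)); // "11111111"
```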

Features

Converts between binary, octal, decimal, and hexadecimal simultaneously

Binary output grouped in 4-bit nibbles for readability

Automatic bit-length display: 8, 16, or 32 bits

Two's complement mode for signed integer binary representation

Supports large integers via BigInt — no precision loss

Negative number support in decimal

Hex input is case-insensitive (A-F or a-f)

Runs entirely in your browser with no server calls


Frequently Asked Questions

How do I convert decimal 255 to hexadecimal?

Divide 255 by 16 repeatedly, tracking remainders: 255 ÷ 16 = 15 remainder 15, then 15 ÷ 16 = 0 remainder 15. Reading the remainders from last to first, each remainder of 15 is the hex digit F, so decimal 255 is FF in hexadecimal. In this converter, just type 255 in the Decimal field and the Hex field instantly shows FF.
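The repeated-division procedure from the answer, written out explicitly (an illustrative helper, not what the tool ships):

```javascript
// Convert a non-negative BigInt to hex by repeated division by 16.
function decimalToHex(n) {
  const digits = "0123456789ABCDEF";
  if (n === 0n) return "0";
  let out = "";
  while (n > 0n) {
    out = digits[Number(n % 16n)] + out; // remainder becomes the next hex digit
    n = n / 16n;                         // BigInt division truncates
  }
  return out;
}

console.log(decimalToHex(255n)); // "FF"
```

Prepending each remainder (rather than appending) gives the last-to-first reading order the division method requires.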

What is a nibble in binary?

A nibble is a group of 4 binary bits, representing values from 0000 to 1111 (0 to 15 in decimal, or 0 to F in hex). One byte (8 bits) contains two nibbles. Grouping binary digits into nibbles makes them much easier to read and aligns naturally with hexadecimal (one hex digit = one nibble).

What is two's complement and why does it matter?

Two's complement is the method most CPUs use to represent signed (positive and negative) integers in binary. A positive number is stored normally. A negative number is stored as the bitwise complement of its absolute value, plus one. This scheme allows addition and subtraction to work with the same hardware circuit regardless of sign.
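The "invert the bits, then add one" rule can be checked directly for an 8-bit value (a sketch under the assumption of an 8-bit width; the function name is hypothetical):

```javascript
// Encode a signed BigInt in 8-bit two's complement by the textbook rule:
// complement the absolute value, then add one, all within an 8-bit mask.
function encodeSigned8(n) {
  if (n >= 0n) return n.toString(2).padStart(8, "0");
  const inverted = ~(-n) & 0xFFn;        // bitwise complement of |n|, masked to 8 bits
  return ((inverted + 1n) & 0xFFn).toString(2).padStart(8, "0");
}

console.log(encodeSigned8(-5n)); // "11111011"
```

For -5: |n| = 00000101, complement = 11111010, plus one = 11111011.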

What is the maximum number this converter can handle?

This tool uses JavaScript BigInt for all arithmetic, so it can handle arbitrarily large integers without precision loss. Contrast this with the JavaScript Number type, which loses integer precision above 2⁵³. For typical use cases involving 8, 16, 32, or 64-bit numbers, there is no effective limit.
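The precision cliff at 2⁵³ is easy to demonstrate:

```javascript
// Number: 2^53 + 1 rounds back to 2^53, so the comparison is true.
const asNumber = 2 ** 53;
console.log(asNumber + 1 === asNumber); // true — precision lost

// BigInt: exact at any size, so the same comparison is false.
const asBigInt = 2n ** 53n;
console.log(asBigInt + 1n === asBigInt); // false — full precision kept
```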

How do I read hexadecimal color codes?

HTML/CSS hex colors use 6 hex digits in pairs: RRGGBB. For example #FF5733 means red=FF (255), green=57 (87), blue=33 (51). Paste FF into the Hex field to see it equals decimal 255. Paste 57 to see decimal 87. This converter is handy for decomposing and understanding color values at the byte level.
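The byte-level decomposition described above amounts to slicing the hex string into pairs and parsing each pair in base 16 (an illustrative helper, not part of the tool):

```javascript
// Split an RRGGBB hex color into its red, green, and blue byte values.
function hexColorToRgb(hex) {
  const h = hex.replace(/^#/, ""); // tolerate an optional leading '#'
  return {
    r: parseInt(h.slice(0, 2), 16),
    g: parseInt(h.slice(2, 4), 16),
    b: parseInt(h.slice(4, 6), 16),
  };
}

console.log(hexColorToRgb("#FF5733")); // { r: 255, g: 87, b: 51 }
```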