Binary, Decimal, and Hexadecimal in Programming
Why Developers Work Across Multiple Bases
Binary is the native language of all computing hardware, representing the core on/off states of transistors. However, reading and writing long strings of binary is prone to human error. Hexadecimal acts as the perfect developer shorthand. Because 16 is a power of 2, a single hex digit cleanly maps to exactly four binary bits (a nibble). This allows developers to express complex binary states compactly.
Decimal, while our natural counting system, often misaligns with hardware limits. In code, you'll encounter hex in bitfields, network masks, memory addresses, and color values, while binary is typically reserved for low-level bitwise operations and hardware interfacing.
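Most languages support all three bases directly as integer literals; a quick Python sketch:

```python
# Integer literals in different bases: all three spell the same number.
n_bin = 0b10101111   # binary literal
n_hex = 0xAF         # hexadecimal literal
n_dec = 175          # decimal literal
assert n_bin == n_hex == n_dec

# Formatting back out in each base.
print(bin(175))                 # '0b10101111'
print(hex(175))                 # '0xaf'
print(f"{175:08b} {175:02X}")   # '10101111 AF'
```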
How Computers Store Numbers (Bits, Bytes, Words)
At the lowest level, computers store all numerical data as sequences of bits within fixed-width allocations like 8-bit bytes, 16-bit half-words, or 32-bit words. Understanding the base systems is essential for managing how data overflows, how much memory an application consumes, and how data structures align in memory.
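Python integers are arbitrary-precision, so fixed widths have to be simulated by masking, but that makes the widths easy to see; a short sketch:

```python
# Masks for the common fixed widths.
BYTE, HALF_WORD, WORD = 0xFF, 0xFFFF, 0xFFFFFFFF

value = 300
print(value & BYTE)        # 44 — 300 does not fit in 8 bits, so it wraps
print(value & HALF_WORD)   # 300 — fits comfortably in 16 bits

# Bits actually needed, and the raw byte layout.
print((300).bit_length())          # 9
print((300).to_bytes(2, "big"))    # two bytes: 0x01 0x2C
```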
How to Convert Between Number Systems
Binary to Decimal: Positional Weight Method
To convert binary to decimal, multiply each bit by 2 raised to the power of its position (starting from 0 on the right) and sum the results. For example, converting the binary value 10101111 (which is 175 in decimal):
(1 × 128) + (0 × 64) + (1 × 32) + (0 × 16) + (1 × 8) + (1 × 4) + (1 × 2) + (1 × 1) = 175

Decimal to Binary: Repeated Division
To convert from decimal to binary, repeatedly divide the decimal number by 2. Record the remainder of each step. The final binary number is formed by reading the remainders from bottom to top.
175 ÷ 2 = 87 remainder 1
87 ÷ 2 = 43 remainder 1
43 ÷ 2 = 21 remainder 1
21 ÷ 2 = 10 remainder 1
10 ÷ 2 = 5 remainder 0
5 ÷ 2 = 2 remainder 1
2 ÷ 2 = 1 remainder 0
1 ÷ 2 = 0 remainder 1
Reading bottom-up: 10101111

Binary to Hex: The Nibble Shortcut
Converting binary to hexadecimal is fast because of the direct bit mapping. Pad the binary number with leading zeros to make its length a multiple of 4. Then, split it into 4-bit chunks (nibbles) and translate each chunk into its single hex equivalent.
Binary: 10101111
Split: 1010 | 1111
Lookup: A | F
Hex: 0xAF

Hex to Binary: Direct Digit Expansion
To convert hex to binary, simply expand each hexadecimal digit into its exact 4-bit binary equivalent.
Hex: 0xAF
Split: A | F
Expand: 1010 | 1111
Binary: 10101111

Decimal to Hex and Back
Converting between decimal and hexadecimal uses the same repeated-division and positional-weight methods as binary, but in base 16 instead of base 2. Repeatedly divide by 16 to go from decimal to hex, or multiply each hex digit by its power of 16 to go from hex to decimal.
175 ÷ 16 = 10 remainder 15 (which is F in hex)
10 ÷ 16 = 0 remainder 10 (which is A in hex)
Result: 0xAF

Two's Complement and Signed Integers
How Sign Bits Work in Fixed-Width Integers
Two's Complement is the standard way computers represent signed integers (those that can hold negative values). The most significant bit (MSB) acts as the sign bit: if the MSB is 1, the number is negative. The decimal value of a negative Two's Complement number is: unsigned value - 2^N (where N is the bit width).
Here are examples for an 8-bit system:
00000000 = 0
00000001 = 1
01111111 = 127 (Max positive)
10000000 = -128 (Min negative)
11111111 = -1

Overflow Behavior Across Bit Widths
When an arithmetic operation exceeds the maximum value a data type can hold, integer overflow occurs. In an 8-bit signed integer, adding 1 to 127 (01111111) results in 10000000, which wraps around to -128. Understanding these bounds is critical for preventing security vulnerabilities and crashes: in C and C++, signed overflow is undefined behavior, while Rust (by default) panics on overflow in debug builds and wraps in release builds.
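The wraparound can be reproduced with Python's ctypes fixed-width types (a sketch for illustration; as noted above, the corresponding C behavior for signed types is undefined, so a C compiler is not obliged to wrap):

```python
import ctypes

# c_int8 is a fixed-width 8-bit signed integer, like int8_t in C.
x = ctypes.c_int8(127)      # 01111111, the maximum
x.value += 1                # overflow: the value wraps around
print(x.value)              # -128

# The same wrap, done manually with two's-complement masking.
def wrap_int8(n):
    n &= 0xFF               # keep only the low 8 bits
    return n - 256 if n >= 128 else n

print(wrap_int8(127 + 1))   # -128
```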
Bitwise Operations Reference
AND, OR, XOR — Practical Use Cases in Code
AND (&): Compares bits, outputting 1 only if both are 1. Commonly used for masking.
0xAF & 0x0F == 0x0F // Isolates the lower nibble
OR (|): Outputs 1 if either bit is 1. Used to ensure specific bits are turned on.
0xAF | 0x10 == 0xBF // Sets bit 4
XOR (^): Outputs 1 if the bits differ. Ideal for toggling states or basic cryptography.
0xAF ^ 0xFF == 0x50 // Inverts all bits

NOT, Shift Left, Shift Right
NOT (~): Flips all bits, changing 0 to 1 and 1 to 0 (ones' complement).
~0xAF == 0x50 // Assuming an 8-bit integer
Shift Left (<<): Shifts bits to the left, effectively multiplying by 2 per shift.
0x01 << 3 == 0x08 // 1 becomes 8
Shift Right (>>): Shifts bits to the right, effectively dividing by 2 per shift. Arithmetic (signed) shifts preserve the sign bit.
0x08 >> 1 == 0x04 // 8 becomes 4

Common Bitwise Patterns (Masking, Setting, Clearing, Toggling Bits)
These reference patterns are foundational for embedded programming, flags configuration, and performance optimization:
Masking: value & 0xFF // Isolate lower byte
Setting a bit: value | (1 << n) // Turn on the nth bit
Clearing a bit: value & ~(1 << n) // Turn off the nth bit
Toggling a bit: value ^ (1 << n) // Flip the nth bit
Checking a bit: (value >> n) & 1 // Returns 1 if the nth bit is set

Hex in Web Development and Systems Programming
CSS Color Codes and Hex Notation
In web development, colors are predominantly represented in 6-digit hex format (e.g., #FF5733), mapping to Red, Green, and Blue channels. Each channel is 1 byte (0x00 to 0xFF). Browsers also support 3-digit shorthand (#F00 expands to #FF0000) and 8-digit notation, where the final byte dictates the alpha (transparency) channel.
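Splitting a hex color into its channels is just byte extraction with shifts and masks; a Python sketch (the function name is ours):

```python
def parse_hex_color(code):
    """Split a 6-digit CSS color like '#FF5733' into (r, g, b) bytes."""
    value = int(code.lstrip("#"), 16)
    return (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF

print(parse_hex_color("#FF5733"))  # (255, 87, 51)
```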
Memory Addresses and Hex Dumps
Systems programmers interact with hex constantly. Memory addresses (e.g., 0x7FFE4A2C) are represented in hex to keep pointers readable. When debugging binaries or analyzing packet captures, debuggers display raw data as hex dumps, rendering streams of bytes into navigable blocks alongside their ASCII equivalents.
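A minimal hex-dump formatter shows the idea: bytes rendered as offset, hex pairs, and printable ASCII (a sketch; real tools like xxd add more options):

```python
def hex_dump(data, width=8):
    """Render bytes as offset, hex pairs, and printable ASCII."""
    lines = []
    for offset in range(0, len(data), width):
        chunk = data[offset:offset + width]
        hex_part = " ".join(f"{b:02X}" for b in chunk)
        # Non-printable bytes are shown as '.', like most hex-dump tools.
        ascii_part = "".join(chr(b) if 32 <= b <= 126 else "." for b in chunk)
        lines.append(f"{offset:08X}  {hex_part:<{width * 3}} {ascii_part}")
    return "\n".join(lines)

print(hex_dump(b"Hello, world!\n"))
```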
ASCII and Unicode — What Decimal Values Map To
Text encoding maps numerical values to characters. In ASCII, the printable range spans from decimal 32 (0x20, Space) to 126 (0x7E, Tilde). Control characters like Newline sit below 32 (e.g., 0x0A). Modern Unicode retains this ASCII baseline but extends the numerical mappings to cover all global text systems.
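Python's built-in ord and chr expose these mappings directly:

```python
# ord maps a character to its numerical code point; chr goes the other way.
print(ord(" "), hex(ord(" ")))   # 32 0x20 — start of the printable range
print(ord("~"), hex(ord("~")))   # 126 0x7e — end of the printable range
print(ord("\n"))                 # 10 — the Line Feed control character
print(chr(65), chr(0x41))        # A A — same code point, written two ways
```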
Frequently Asked Questions
What is the difference between binary, decimal, and hexadecimal?
Binary (base-2) uses only 0s and 1s, representing hardware on/off states. Decimal (base-10) uses digits 0-9 and is the standard human counting system. Hexadecimal (base-16) uses 0-9 and A-F, acting as a compact, human-readable shorthand for binary where each hex digit perfectly maps to 4 binary bits (a nibble).
Why do developers use hexadecimal instead of binary?
Hexadecimal is significantly shorter and easier to read than binary while maintaining a direct structural mapping to it. A 32-bit memory address takes 32 characters to write in binary but only 8 characters in hex (e.g., 0xFFFFFFFF), reducing visual noise and typing errors during debugging.
How do you convert binary to hexadecimal?
Pad the binary number with leading zeros until its length is a multiple of four. Group the bits into blocks of four (nibbles) starting from the right. Convert each 4-bit block into its corresponding single hexadecimal digit (e.g., 1010 becomes A, 1111 becomes F).
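The nibble grouping can be written out directly; a Python sketch (the function name is ours, and Python's built-in int/hex can do this in one step):

```python
def binary_to_hex(bits):
    """Convert a binary string to hex by translating each 4-bit nibble."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)   # pad to a multiple of 4
    nibbles = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "0x" + "".join(f"{int(n, 2):X}" for n in nibbles)

print(binary_to_hex("10101111"))   # 0xAF
print(binary_to_hex("111"))        # 0x7 — padded to 0111 first
```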
What is Two's Complement?
Two's Complement is the mathematical encoding system most computers use to represent negative integers natively in binary. It ensures that the first bit (Most Significant Bit) acts as a sign bit, allowing fundamental arithmetic operations like addition and subtraction to work identically for both positive and negative numbers at the hardware level.
How do bitwise AND, OR, and XOR work?
Bitwise operations compare two binary numbers bit by bit. AND outputs 1 only if both corresponding bits are 1, commonly used to mask or extract specific bits. OR outputs 1 if at least one bit is 1, used to set bits. XOR outputs 1 if the bits differ, frequently used to toggle bits or in simple cryptographic ciphers.
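The operator examples from the reference section above, verified in Python:

```python
# AND masks, OR sets, XOR toggles — the three core bit operations.
assert 0xAF & 0x0F == 0x0F   # mask: keep only the lower nibble
assert 0xAF | 0x10 == 0xBF   # set: force bit 4 on
assert 0xAF ^ 0xFF == 0x50   # toggle: flip every bit

# XOR applied twice restores the original value (the basis of XOR ciphers).
key, message = 0x5A, 0x3C
assert (message ^ key) ^ key == message
```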
What are the maximum values for 8-bit, 16-bit, and 32-bit integers?
For unsigned integers: an 8-bit holds up to 255 (2^8 - 1), a 16-bit holds up to 65,535 (2^16 - 1), and a 32-bit holds up to 4,294,967,295 (2^32 - 1). For signed (Two's Complement) integers, the maximum positive values are 127, 32,767, and 2,147,483,647 respectively, because one bit is reserved for the sign.
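These bounds follow directly from the bit widths and can be computed rather than memorized:

```python
# Unsigned max is 2^bits - 1; signed range is -2^(bits-1) .. 2^(bits-1) - 1.
for bits in (8, 16, 32):
    unsigned_max = 2**bits - 1
    signed_min = -(2**(bits - 1))
    signed_max = 2**(bits - 1) - 1
    print(f"{bits:2}-bit  unsigned: 0..{unsigned_max:,}  "
          f"signed: {signed_min:,}..{signed_max:,}")
```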
Why do large integers lose precision in JavaScript?
JavaScript's standard Number type is a double-precision 64-bit float (IEEE 754), which provides only 53 bits of precision for exact integer values. Any integer beyond Number.MAX_SAFE_INTEGER (9,007,199,254,740,991) may silently lose precision, rounding to the nearest representable double. Developers must use the BigInt type to safely perform arithmetic or bitwise operations on 64-bit integers.
What does decimal 10 represent in ASCII?
Decimal 10 (hexadecimal 0x0A) maps to the Line Feed (LF) control character in the ASCII table. In programming and operating systems like Linux and macOS, it functions as the standard 'newline' character (represented as '\n' in code) to break text into a new line.