What is Binary Code and How Does it Work?

Computers are a constant presence in our day-to-day lives. From the technology that put the first astronauts on the moon to the central processing units (CPUs) inside smart TVs, tablets, and ever-improving gaming consoles, computer science is an essential part of the modern world that has made a huge impact on what we are able to accomplish. At the core of a computer's ability to perform advanced calculations and process information is binary code. So, what is binary code and how does it work? In this article, we will answer these questions and provide a brief history, along with some examples of how binary is presented.

Binary data is used by computers in several areas, such as generating statistics, mathematical computation, and, of course, computer science. It is essentially the primary language of computing systems. The data is called binary because it uses only two possible states, 0 and 1. The specific position and sequence of the digits may seem impossible to decipher, but for computers, the base-2 system makes it possible to generate and store vast amounts of information. While it takes more than binary data alone to create and run software, it remains an essential component of computer technology.


Binary data works within the parameters of a base-2 counting system, meaning there are only two numerical options for each digit: 0 or 1. While this may seem limiting, it actually allows our computers and smart devices to complete a seemingly endless array of tasks. The use of a base-2 system for computer data storage can best be attributed to early computer design. Engineers and computer scientists needed a way to communicate with computers and to allow devices to communicate with each other. To better understand the way binary works in technology, it's important to review the concept of 'bits' and 'bytes' of code.

In computer science, a 'bit' is the simplest form that data can take. The name comes from combining binary and digit, and each bit represents a logical state with either a 0 or a 1. When several bits are strung together, they form what is known as a 'byte'. A byte is typically made up of eight bits and is used as a unit of digital information. As bits and bytes are processed through the CPU and RAM, binary code carries the instructions and data that put the technology in motion.
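The relationship between bits and bytes can be sketched in a few lines of Python, using only built-in functions (a minimal illustration, not any particular hardware's behavior):

```python
# A byte is 8 bits, so it can represent 2**8 = 256 distinct values.
BITS_PER_BYTE = 8
print(2 ** BITS_PER_BYTE)  # 256

# Reading the bit pattern 01000001 as a base-2 number gives 65,
# which is the letter 'A' in the ASCII encoding.
byte = int("01000001", 2)
print(byte)       # 65
print(chr(byte))  # A
```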

To count in binary, think about the typical way we count in our base-10 system. The digits 0-9 are the symbols we use to count and group numbers.

When the symbols for the first digit are exhausted, the next-higher digit (to the left) is incremented, and counting starts over at 0.

In decimal, counting proceeds like this:

000, 001, 002, 003, 004, 005, 006, 007, 008, 009

After 009, the rightmost digit starts over, and the next digit is incremented:

010, 011, 012, 013, and so on.

In a base-2 system, the concept is similar, except there are only two possible digits: 0 or 1. As we count up, once a digit reaches 1, it resets to 0 and the next digit to the left is incremented, as seen here:

The rightmost digit always starts over, and the next digit is incremented.

Here's a table that shows the binary counting sequence from 0 to 7:

| Decimal | Binary |
|---------|--------|
| 0       | 000    |
| 1       | 001    |
| 2       | 010    |
| 3       | 011    |
| 4       | 100    |
| 5       | 101    |
| 6       | 110    |
| 7       | 111    |
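The counting pattern in the table can be reproduced with Python's built-in `format` function, which converts an integer to its base-2 string (the `"03b"` spec zero-pads to three digits to match the table):

```python
# Print the decimal numbers 0 through 7 alongside their
# three-digit binary representations.
for n in range(8):
    print(n, format(n, "03b"))
```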

Although computing systems are a relatively new invention, binary is not. Binary code as we understand it today was developed in the 17th century by the German mathematician Gottfried Wilhelm Leibniz. There is even evidence suggesting that base-2 counting systems existed in many ancient civilizations, proving that the collection of binary data has a long and intriguing history.

As humanity and our technology advanced through the centuries, binary became more and more prevalent as a way to encode and store data. One of the most famous examples comes from the 16th-century philosopher Francis Bacon. Bacon devised a system known as *Bacon's biliteral cipher*, which he used to send encrypted messages to trusted recipients. The main difference between Bacon's cipher and modern binary code is its use of letters instead of numbers: where computers use 0 and 1 to build bits and bytes of data, Bacon used five-letter configurations of A and B to represent the 26 letters of the alphabet.

When Leibniz encountered and eventually grew frustrated with the trials and errors of the calculation methods of his day, he proposed a base-2 numerical system that could represent any number in the decimal system. The simplicity of this new method allowed Leibniz to abandon multiplication tables and, instead, rely on rudimentary calculations as he progressed through his work. Ultimately, his discovery earned him the unofficial title of 'The Father of Binary Code'.
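Leibniz's insight, that any decimal number can be rewritten in base 2, can be sketched with a short conversion routine (a minimal illustration using repeated division by 2, not Leibniz's historical notation):

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to its base-2 string by
    repeatedly dividing by 2 and collecting the remainders."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # remainder is the next bit
        n //= 2
    return "".join(reversed(digits))

print(to_binary(13))  # 1101
```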

To reach our modern understanding of binary code, we have to take a look at the original design of computers. The use of a base-2 counting system can be attributed to the original 'on' and 'off' switches that controlled electrical impulses in early computing systems. At its core, the 0 and 1 of binary data is a digital recreation of a switch being either on (1) or off (0).

As the basic function of data communication for computational systems, binary code is present in most of our daily activities. Learning how to interpret, analyze, and understand binary code is an easy first step when it comes to studying computer science. Once this basic element of software engineering is mastered, the possibilities for creation and expansion become infinite.

So, what does binary code look like?

Well, in movies it often looks like a wall of code streaming across a computer screen at an impossibly fast speed. While this representation of binary code is exciting, it isn't exactly accurate. Binary code is actually relatively simple to follow along with and decode, as long as someone has the skills to do so. Remember the bits and the bytes that were mentioned earlier? This is where they come into play. Bits of binary code string together to form a byte that represents a numerical or alphabetical character. These characters can then be combined in lines of code to form sequences of words and large numbers.

For example, the ASCII binary representations of the uppercase letters A, E, H, L, O, P, S, and T are 01000001, 01000101, 01001000, 01001100, 01001111, 01010000, 01010011, and 01010100, respectively. Knowing this conversion allows us to decode the following binary message:

01010100 01001111 01010100 01000001 01001100

01010000 01001000 01000001 01010011 01000101

The Answer: **Total Phase**
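Decoding a message like this by hand gets tedious, but it can be automated in a few lines of Python using the built-in `int` and `chr` functions (a minimal sketch, assuming space-separated 8-bit ASCII codes):

```python
def decode_binary(message: str) -> str:
    """Turn a space-separated string of 8-bit ASCII codes into text."""
    return "".join(chr(int(byte, 2)) for byte in message.split())

print(decode_binary("01010100 01001111 01010100 01000001 01001100"))  # TOTAL
print(decode_binary("01010000 01001000 01000001 01010011 01000101"))  # PHASE
```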

This is just one example of the thousands of ways binary code can be used in the modern day.

Knowing just the basics about how binary code is used to aid in the development and advancement of technology allows anyone to take their first step into the exciting and ever-evolving world of computer science and embedded engineering. At Total Phase, we offer a wide variety of tools that provide support for this journey for everyone working with I2C, SPI, USB, CAN, and eSPI protocols, from full-time engineers to part-time hobbyists.

For instance, we offer products like our I2C/SPI host adapters that can be used for system development, allowing users to debug and emulate systems with ease. In particular, our Promira Serial Platform is a handy tool for programming memory and system emulation with built-in level shifting. We also offer I2C, SPI, and USB protocol analyzers for real-time bus monitoring.

Combining our devices with a strong foundational knowledge of binary code opens a whole new world of possibilities.

Possibilities, and new ideas.