There are a few reasons why we measure storage in bytes and speed in bits. A bit is a single binary digit that can hold one of two values (0 or 1), while a byte is a group of 8 bits. Bits are the universal unit of raw data transfer, which makes them the natural way to express speed across different systems. Bytes, on the other hand, are how most computers store and address data, so measuring storage capacity in bytes matches how drives are actually organized.
Megabits vs. Megabytes: What’s the Difference?
A bit or “binary digit” is the smallest piece of information in a binary computer system. A bit can be either a one or a zero, and bits are represented in many different ways: as memory cells in an SSD, as pits and lands on a Blu-ray, or as magnetic patterns on a hard drive platter.
A megabit is a million bits, which is equivalent to 125 kilobytes. In other words, a single megabyte contains eight megabits' worth of data. So, in theory, a 1,000 Mbps (megabits per second) network connection can transfer 125 MB/s (megabytes per second) worth of data.
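If you want to sanity-check that arithmetic, here's a minimal Python sketch of the conversion. The function name is just illustrative, and the 1,000 Mbps figure is simply the example above, not any particular connection:

```python
# Convert a link speed quoted in megabits per second to megabytes per second.
MEGABITS_PER_MEGABYTE = 8

def mbps_to_megabytes_per_second(megabits_per_second: float) -> float:
    """Convert a transfer rate from megabits/s to megabytes/s."""
    return megabits_per_second / MEGABITS_PER_MEGABYTE

print(mbps_to_megabytes_per_second(1000))  # 125.0 -> a 1,000 Mbps link peaks at 125 MB/s
```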
Mbps and Mb/s refer to megabits, while MBps and MB/s refer to megabytes. It's easy to see why so many people confuse the two, and the result is that they significantly over- or underestimate the speed of a connection.
Why Measure Speed in Megabits and Storage in Megabytes?
It’s hard to see immediately why you’d choose either megabits or megabytes for a given measurement. After all, when you transfer a file in Windows, the measurement shown is in MB/s and not Mbps. So it’s not as if you can’t measure data transfer speeds in the larger unit.
However, a byte is a specific arrangement of bits defined by a particular standard, whereas bits are universal to every binary computer system. Even if aliens developed binary computers, the bit would still be the fundamental unit of data. There are eight bits to a byte today largely because eight bits comfortably hold a single character in ASCII-based encodings (standard ASCII itself only needs seven), but the byte could just as easily have been defined as a different number of bits.
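To make the "one character per byte" idea concrete, here's a tiny Python illustration; the choice of characters is arbitrary, and the second one only fits in eight bits because it falls in the extended (Latin-1) range:

```python
# Each character below has a code point that fits in a single 8-bit byte.
for ch in ("A", "é"):
    code_point = ord(ch)
    print(ch, code_point, format(code_point, "08b"))
# A 65 01000001
# é 233 11101001
```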
With network data transfer, the system isn’t transferring bytes; it’s transferring bits. Knowing how many raw bits can be sent and received gives you a universal measurement of network bandwidth.
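This is also why estimating a download time means converting units first: file sizes are usually quoted in bytes, link speeds in bits. Here's a rough sketch under ideal conditions; the function and the 500 MB / 100 Mbps figures are made-up examples, and real transfers will be somewhat slower because of protocol overhead:

```python
# Estimate how long a download takes when the file size is in megabytes
# but the link speed is in megabits per second. Ignores protocol overhead.
def download_time_seconds(file_size_megabytes: float, link_speed_mbps: float) -> float:
    file_size_megabits = file_size_megabytes * 8  # rule of eight: bytes -> bits
    return file_size_megabits / link_speed_mbps

print(download_time_seconds(500, 100))  # 40.0 seconds in the ideal case
```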
When we're talking about storage devices such as hard drives or SSDs, the drive is formatted to store data in terms of the standard 8-bit byte. A disk isn't addressed as an arrangement of single bits but as a collection of bytes. So it makes sense to measure its total storage as a multiple of this unit rather than of the bit.
Ironically, there's a unit discrepancy with hard drives as well. Hard drive manufacturers define a kilobyte as 1,000 bytes, a megabyte as 1,000 kilobytes, and so on. Windows, on the other hand, uses groups of 1,024 in line with RAM manufacturer convention.
This is why a 1TB hard drive shows up as a 931GB drive in Windows, even though both figures describe exactly the same number of bits. This underscores why measuring data transfer rate in bits is the most sensible way to do it, since arbitrary standards don't muddy the waters.
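The arithmetic behind that discrepancy is easy to check. Here's a short sketch using the 1TB example above (the variable names are just for illustration):

```python
# The manufacturer counts in powers of 1,000; Windows divides the same
# byte count by powers of 1,024, so the number it reports looks smaller.
advertised_bytes = 1 * 1000**4              # "1 TB" as the drive maker defines it
windows_gigabytes = advertised_bytes / 1024**3
print(round(windows_gigabytes, 1))          # 931.3 -- same bytes, smaller-looking number
```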
Just Use the Rule of Eight
If you take care to double-check whether bits or bytes are being used, converting from one to the other is as easy as multiplying or dividing by eight. As long as you remember that there are eight megabits in one megabyte, you’ll have a better idea of how much speed or volume you’re dealing with.
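If you like, you can bake that double-checking habit into a helper. This is only a sketch, and the unit spellings it recognises are just the common ones mentioned in this article:

```python
# Normalise a speed to megabytes per second based on its unit label,
# applying the rule of eight only when the value is in megabits.
def to_megabytes_per_second(value: float, unit: str) -> float:
    unit = unit.strip()
    if unit in ("Mbps", "Mb/s"):   # megabits per second
        return value / 8
    if unit in ("MBps", "MB/s"):   # already megabytes per second
        return value
    raise ValueError(f"unrecognised unit: {unit}")

print(to_megabytes_per_second(240, "Mbps"))  # 30.0
print(to_megabytes_per_second(30, "MB/s"))   # 30.0
```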