If you’re like most people, you probably don’t think much about your computer’s disk and RAM speed. But if you’re looking to improve your system’s performance, it’s important to know what kind of speeds you’re actually getting. In this guide, we’ll use fio to benchmark your disk’s random read and write performance, and sysbench to benchmark your RAM.
How Is IO Performance Measured?
There are many different ways to read and write to disks, so no single number exists for “speed” that you can measure.
The simplest way to measure performance is to time how long it takes to read large files or perform large file copies. This measures sequential read and write speed, which is a good metric to know, but you’ll rarely see speeds this high in practice, especially in a server environment.
A better metric is random access speed, which measures how fast you can access data stored in random blocks, mimicking real-world usage much more closely.
SSDs usually have fast random access speeds compared to hard drives, which makes them much more suited for general use. Hard drives still have decent sequential read and write speeds, which makes them good for data archival and retrieval.
However, disk performance may not matter much for certain workloads. A lot of applications cache objects in memory (if you’ve got enough RAM), so the next time you want to read that object, it will be read from memory instead, which is much faster. For write-heavy workloads, though, the disk must still be accessed.
Speed is often measured in MB/s, but certain providers may measure in IOPS (Input/Output Operations Per Second). This is usually just a bigger number describing the same throughput; if you know the block size used per operation, you can convert MB/s to IOPS with this formula:
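IOPS = (MB/s × 1024) ÷ block size in KB

For example, 400 MB/s of throughput at a 4 KB block size works out to roughly 102,400 IOPS. The block size is an assumption you have to supply; 4 KB is the figure most commonly used for random IO benchmarks.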
However, some providers don’t do a great job of telling you how they measured their IOPS figures (for example, which block size they assumed), so it’s good to do your own testing.
Install fio for Random Read/Write Tests
While Linux does have the built-in dd command, which can be used to measure sequential write performance, the results aren’t indicative of how a disk will behave under real-world stress. You’ll want to test your random read and write speed instead.
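For reference, a rough sequential write test with dd looks something like this (the file name here is just a placeholder, and oflag=direct bypasses the page cache so you measure the disk rather than RAM):

```
# Write 1 GB sequentially, bypassing the OS cache, and report throughput
dd if=/dev/zero of=./dd-testfile bs=1M count=1024 oflag=direct status=progress

# Remove the scratch file when you're done
rm ./dd-testfile
```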
fio is a utility that can handle this. Install it from your distro’s package manager:
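On most distros it’s available in the default repositories; for example, on Debian-based or RHEL-based systems:

```
# Debian, Ubuntu, and derivatives
sudo apt install fio

# RHEL, CentOS, and Fedora (older releases may need the EPEL repository)
sudo yum install fio
```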
Then, run a basic test using the following command:
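A command along these lines will do it (random_read_write.fio is just an arbitrary scratch file name, and libaio is the standard Linux async IO engine; adjust both to taste):

```
fio --name=randrw-test --ioengine=libaio --direct=1 --gtod_reduce=1 \
    --filename=random_read_write.fio --bs=4k --iodepth=64 \
    --size=250M --readwrite=randrw --rwmixread=80
```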
This runs random read and write tests on 250 MB of data, at a ratio of 80% reads to 20% writes. The results display in terms of both IOPS and MB/s.
The above test was run on an AWS gp2 SSD, a fairly average SSD, which shows fairly average performance. Write performance will almost always be lower than read performance for any type of IO; many SSDs and HDDs have a built-in cache for the drive controller to use, which makes many reads fairly quick, but every write must make physical changes to the drive, which is slower.
Running the same test on a hard drive shows poor random mixed IO performance, which is a common problem with hard drives.
Hard drives, though, are typically used for large sequential reads and writes, so a random IO test doesn’t match their use case. If you want to change the test type, you can pass a different argument to --readwrite. fio supports a lot of different tests:
- Sequential read: read
- Sequential write: write
- Random read: randread
- Random write: randwrite
- Random mixed IO: randrw
Additionally, you can change the block size with the --bs argument. We set it to 4k, which is fairly standard for random tests, but sequential reads and writes may show better or worse performance with larger block sizes. Sizes of 16 KB to 32 KB may be closer to what you’ll encounter under real load.
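As a sketch, a sequential read test at a larger 1 MB block size might look like this (again, the file name and IO engine are just common choices, not requirements):

```
fio --name=seqread-test --ioengine=libaio --direct=1 \
    --filename=seq_read.fio --bs=1M --iodepth=16 \
    --size=250M --readwrite=read
```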
Testing Memory Performance
fio can’t test RAM speed, so if you want to benchmark your server’s RAM, you must install sysbench from your distro’s package manager:
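As with fio, it’s available in most default repositories:

```
# Debian, Ubuntu, and derivatives
sudo apt install sysbench

# RHEL, CentOS, and Fedora (older releases may need the EPEL repository)
sudo yum install sysbench
```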
This package can benchmark a lot of performance metrics, but we’re only focused on the memory test. The following command allocates 1 MB of RAM, then performs write operations until it has written 10 GB of data. (Don’t worry, you don’t need 10 GB of RAM to run this benchmark.)
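On recent sysbench versions, that invocation looks like this (older versions spell it --test=memory rather than the bare memory argument):

```
sysbench memory --memory-block-size=1M --memory-total-size=10G run
```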
This will display the memory speed in MiB/s, as well as the access latency associated with it.
This test measures write speed, but you can add --memory-oper=read to measure the read speed instead, which should be a bit higher most of the time. You can also test with smaller block sizes, which puts more stress on the memory.
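For example, assuming the same sizes as above:

```
sysbench memory --memory-block-size=1M --memory-total-size=10G --memory-oper=read run
```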
Realistically though, most RAM will be good enough to run just about anything, and you’ll usually be limited more by the amount of RAM than the actual speed of it.