Fio


The flexible IO tester (fio) is an open source disk benchmarking utility. It offers a wide range of IO engines (for testing different aspects of IO on a system), threaded jobs, configurable IO depth, and much more than something like dd can offer.

A related project that uses fio is fio_plot, which runs multiple benchmarks with various options and generates graphs from the results.

Quick start

Installation

fio is available on most distributions as part of the system repos.

Ubuntu:
# apt install fio

Red Hat / CentOS / Rocky Linux (available from EPEL):
# yum -y install epel-release
# yum -y install fio

From source:
# git clone https://github.com/axboe/fio
# cd fio
# ./configure
# make && make install
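
Once installed, a quick sanity check is to print the version:

# fio --version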

Filesystem benchmarking

To test filesystem performance with fio using direct IO (no buffering), start fio with the following command. Change the --rw option to alter the type of benchmark you wish to perform.

# fio \
  --ioengine=libaio \
  --direct=1 \
  --randrepeat=0 \
  --refill_buffers \
  --end_fsync=1 \
  --filename=$HOME/.fiotest \
  --name=fio-read-test \
  --rw=read \
  --size=1GB \
  --bs=1M \
  --numjobs=1 \
  --iodepth=8 \
  --runtime=60

More information on each of the options is listed below.

Option 	Description
--rw=read 	Do a sequential read test. Alternatively, change this to one of the following (an example random-write variant is shown after this table):
  • read (sequential read)
  • write (sequential write)
  • randread (random read)
  • randwrite (random write)
  • readwrite (mixed sequential reads and writes)
  • randrw (mixed random reads and writes)
--size=1GB 	Use a 1GB test file size. Must be a multiple of 1MB.
--bs=1M 	Use a 1MB block size. Defaults to 4k.
--iodepth=8 	For asynchronous reads/writes, the OS returns immediately and fio does not have to wait for each IO operation to complete. This option sets how many IO operations are kept 'in flight', or running in the background, at once. This setting requires --direct=1 and an asynchronous IO engine such as libaio.
--numjobs=1 	Number of parallel jobs that fio will spawn for the test.
--runtime=60 	Number of seconds to run the benchmark for.
--randrepeat=0 	Reseed the random generator differently for each run.
--refill_buffers 	Refill IO buffers on every submit so that buffer contents are rewritten and not served from cache.
--end_fsync=1 	Sync the file contents with fsync before the job exits.
--direct=1 	Use non-buffered I/O (usually O_DIRECT).
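
For example, swapping --rw to randwrite turns the same command into a random-write test. The 4k block size below is an arbitrary choice for a small-block random workload:

# fio \
  --ioengine=libaio \
  --direct=1 \
  --randrepeat=0 \
  --refill_buffers \
  --end_fsync=1 \
  --filename=$HOME/.fiotest \
  --name=fio-randwrite-test \
  --rw=randwrite \
  --size=1GB \
  --bs=4k \
  --numjobs=1 \
  --iodepth=8 \
  --runtime=60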

Understanding the results

The results you get after running fio look something like the output below:

Starting 1 process
sequential-read-8-queues-1-thread: Laying out IO file (1 file / 1024MiB)
Jobs: 1 (f=1): [R(1)][100.0%][r=241MiB/s][r=241 IOPS][eta 00m:00s]
sequential-read-8-queues-1-thread: (groupid=0, jobs=1): err= 0: pid=137965: Thu Jul 28 15:41:32 2022
  read: IOPS=190, BW=191MiB/s (200MB/s)(957MiB/5014msec)
    slat (usec): min=105, max=881, avg=222.28, stdev=54.72
    clat (msec): min=3, max=426, avg=41.63, stdev=63.38
     lat (msec): min=3, max=427, avg=41.86, stdev=63.38
    clat percentiles (msec):
  • slat - submission latency. This is the time it took to submit the I/O. It can be useful for determining whether the IO scheduler needs tuning (e.g. after moving from spinning disk to SSD) or whether there's a network issue for network-based filesystems.
  • clat - completion latency. This is the time between submission and completion.
  • read - the read (or write/randread/randwrite, depending on your test mode) performance. IOPS is IO operations per second, BW is bandwidth.
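
As a quick sanity check, bandwidth should be roughly IOPS multiplied by the block size: in the run above, 190 IOPS × 1 MiB ≈ 190 MiB/s, which lines up with the reported BW of 191 MiB/s.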

Using fio_plot

fio_plot is a Python-based tool that helps run various fio benchmarks and then visualize the results in various graphs.

Installation

To install fio-plot system-wide, run: pip3 install fio-plot. If you only want to install it on a user account, use: pip3 install --user fio-plot.
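
This installs both the bench-fio benchmark runner and the fio-plot graphing command (into ~/.local/bin for a --user install). A quick check that both ended up on your PATH:

$ bench-fio --help
$ fio-plot --help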

Usage

Create a benchmark.ini file with the following contents. Modify target to point at the location you want to test.

[benchfio]
target = /mnt/target-filesystem
output = test-results
type = directory
mode = read,write,randread,randwrite
size = 10G
iodepth = 1,2,4,8,16,32,64
numjobs = 1,2,4,8,16,32,64
direct = 1
engine = libaio
precondition = False
precondition_repeat = False
runtime = 60
destructive = True
block_size = 1024k
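
Note that bench-fio benchmarks every combination of mode, iodepth, and numjobs, so the settings above produce 4 × 7 × 7 = 196 runs of 60 seconds each, or a bit over three hours for the full matrix.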

Then run the benchmark with bench-fio benchmark.ini.

Generate graphs with fio-plot. A script like the following will generate one graph per metric (IOPS, bandwidth, latency) and per test mode.

#!/bin/bash
set -ex

FIO_PLOT="/home/leo/.local/bin/fio-plot"
FIO_RESULTS="/home/leo/fio/results"

# Generate one graph per metric (IOPS, bandwidth, latency) and per test mode.
for type in iops bw lat ; do
        for test in read write randread randwrite ; do
                # Braces around ${type} and ${test} are needed so bash doesn't
                # look for a variable literally named 'type_'.
                $FIO_PLOT -i $FIO_RESULTS/1024k/ -r $test \
                        -L -t $type \
                        -T "Filesystem $type / $test (1024k)" \
                        -o "results_1024k_${type}_${test}.png"
        done
done

Other notes

Drop caches

You might want to drop the page cache before a test by running echo 3 > /proc/sys/vm/drop_caches, so that previously cached data doesn't skew the results.
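
Writing to drop_caches requires root, and it is generally a good idea to run sync first so dirty pages are written back before the caches are dropped:

# sync
# echo 3 > /proc/sys/vm/drop_caches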

Change the output format

Use the --output-format=json option to change the output format to JSON.
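
For example, piping the JSON report through jq (assuming jq is available) pulls the read IOPS and bandwidth out of the report; the .jobs[0].read path below reflects fio's JSON layout for a single-job run:

# fio --output-format=json --name=fio-read-test --ioengine=libaio --direct=1 \
  --rw=read --size=1GB --bs=1M --iodepth=8 --filename=$HOME/.fiotest > result.json
# jq '.jobs[0].read | {iops, bw}' result.json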
