Budget Fair Queueing (BFQ) Storage-I/O Scheduler

On this page we report a selection of our benchmark results.

We repeated our tests with several mainstream Linux distributions, obtaining the same results.

You can find here results with previous versions of Linux and of BFQ, with more devices, and, for old-enough versions of Linux, with legacy blk. In general, our results depend mostly on the BFQ version, and are essentially the same with any kernel version and with either blk-mq or legacy blk. In addition, the relative performance of BFQ, compared with that of the other I/O schedulers, is the same with any storage medium tested (embedded flash storage, HDDs, SSDs, ...) and with any computing system (from minimal embedded systems to high-end servers).

We report the results of our throughput, application-responsiveness and video-playing benchmarks. Responsiveness is measured as application start-up time, here only for a mid-size application, gnome-terminal (see previous results for smaller and larger applications); video playing is measured as frame-drop rate. The last two benchmarks also measure total throughput during the test, but we do not report those throughput figures here, as they are not very meaningful. See, e.g., this old-result page for a complete explanation.

These benchmarks are part of the S benchmark suite and can be executed with the following commands:
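The exact command lines depend on the version of the suite; the following is only an illustrative sketch, assuming the suite's usual per-benchmark directories and scripts (the directory names, script names and parameters below are assumptions):

    # From the root of the S suite; script names and parameters are illustrative.
    # Aggregate throughput with parallel readers/writers:
    cd throughput-sync && sudo ./throughput-sync.sh
    # Application start-up time under background I/O:
    cd ../comm_startup_lat && sudo ./comm_startup_lat.sh
    # Frame-drop rate while playing a video under background I/O:
    cd ../video_playing && sudo ./video_playing.sh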

In what follows, we call reader/writer a program (fio in the S suite) that just reads/writes a large file. In addition, we say that a reader/writer is sequential or random depending on whether it reads/writes the file sequentially or at random positions. For brevity, we report only our results with synthetic, heavy workloads. The goal is to show application start-up times in rather extreme conditions, i.e., with very heavy background workloads.
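As a rough illustration of these workload components (the S suite drives fio internally; the fio parameters below are assumptions, not the suite's actual configuration), a group of sequential or random sync readers corresponds to fio jobs along these lines:

    # Hypothetical fio invocations; file path, sizes and block size are assumptions.
    # 10 parallel sequential sync readers (the 10r-seq workload below):
    fio --name=seq-readers --filename=/mnt/test/bigfile --rw=read \
        --bs=4k --size=4G --numjobs=10 --ioengine=sync
    # 10 parallel random sync readers (the 10r-rand workload below):
    fio --name=rand-readers --filename=/mnt/test/bigfile --rw=randread \
        --bs=4k --size=4G --numjobs=10 --ioengine=sync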

The next figure shows the throughput reached by each I/O scheduler while one of the following four heavy workloads is being executed: 10 parallel sync readers, either sequential or random (10r-seq, 10r-rand), or 5 parallel sync readers plus 5 parallel writers, again either sequential or random (5r5w-seq, 5r5w-rand).

Figure 1. Throughput (higher is better).

BFQ reaches a ~6% lower throughput than the best-performing scheduler (NONE) with 10r-rand. This happens because some of the processes spawned by the benchmark script do occasional I/O during the test, and BFQ, in low-latency mode (the default), is willing to trade throughput for the latency guaranteed to occasional, small I/O. If throughput is so critical that latency can be sacrificed, then just disable low-latency mode. On the other hand, BFQ's low-latency heuristics do not affect throughput with the other workloads: all schedulers reach about the same throughput.
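For reference, a minimal sketch of how to disable low-latency mode at run time, assuming the drive is /dev/sda (the device name is just an example; low_latency is BFQ's sysfs tunable, enabled by default):

    # Check that BFQ is the active scheduler for the drive
    cat /sys/block/sda/queue/scheduler
    # Disable BFQ's low-latency heuristics (default value is 1)
    echo 0 | sudo tee /sys/block/sda/queue/iosched/low_latency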

In terms of the maximum IOPS sustainable (regardless of the speed of the actual drive serving the I/O), this system reaches ~500 KIOPS with BFQ, against ~1 MIOPS with the other I/O schedulers. This means ~2 GB/s with 4KB random I/O under BFQ, and ~4 GB/s with the other I/O schedulers.
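The bandwidth figures follow directly from the IOPS figures, assuming 4-KiB requests:

    # Back-of-the-envelope conversion: IOPS x request size = bandwidth
    echo "500000 * 4096 / 10^9" | bc -l     # ~2.05 GB/s with BFQ
    echo "1000000 * 4096 / 10^9" | bc -l    # ~4.1 GB/s with the other schedulers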

As for responsiveness, BFQ guarantees gnome-terminal the lowest-possible start-up time with only reads in the background, and about four times the lowest-possible start-up time with both reads and writes in the background. Start-up time increases with writes because writes induce high read latencies inside the drive. For this same reason, start-up times are unbearably high with the other I/O schedulers: 16 times as high as with BFQ, and 64 times as high as the lowest-possible start-up time.

Figure 2. gnome-terminal start-up time (lower is better).

Finally, the next figure shows our video-playing results.

Figure 3. Video-playing frame-drop rate (lower is better).

Because of the very high speed of the drive, results are essentially good with all schedulers. Yet only with BFQ is no frame lost.