* [PATCH v9 0/4] block: Adding ROW scheduling algorithm
From: Tanya Brokhman @ 2013-06-17 6:39 UTC
To: axboe; +Cc: linux-arm-msm, linux-mmc, Tanya Brokhman
In order to decrease the latency of a prioritized request (such as a READ
request), the device driver might decide to stop the transmission of the
current "low priority" request in order to handle the "high priority" one.
The urgency of a request is decided by the block layer I/O scheduler.
When the block layer notifies the underlying device driver (eMMC for
example) of an urgent request, the device driver might decide to stop
the current request transmission. The remainder of the stopped request
is then re-inserted back into the scheduler, to be re-scheduled after
the urgent request has been handled.
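As a rough illustration only (not code from the patches), the driver-side
flow could look like the sketch below. blk_reinsert_request() and its
locking rules are assumptions based on the patch title "block: Add support
for reinsert a dispatched req"; see that patch for the real interface.

#include <linux/blkdev.h>

/*
 * Hypothetical driver-side handling of an urgent notification: the
 * hardware transfer of the current "low priority" request is aborted
 * and its unfinished remainder is handed back to the I/O scheduler.
 */
static void example_stop_and_reinsert(struct request_queue *q,
                                      struct request *stopped_rq)
{
        /* 1. Ask the controller to abort the ongoing transfer (omitted). */

        /*
         * 2. Give the remainder back to the scheduler so it is
         *    re-scheduled after the urgent request. Locking is shown
         *    as for blk_requeue_request(); check the patch for the
         *    exact requirements.
         */
        spin_lock_irq(q->queue_lock);
        blk_reinsert_request(q, stopped_rq);
        spin_unlock_irq(q->queue_lock);
}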
This mechanism is implemented in the block layer by two callbacks on the
request queue (see the sketch after this note):
- urgent_request_fn() - This callback is registered by the underlying
device driver and is called instead of the existing request_fn() callback
to handle urgent requests.
- elevator_is_urgent_fn() - This callback is registered by the current
I/O scheduler. If present, it is used by the block layer to query the
scheduler about the presence of an urgent request.
NOTE: If either of the above callbacks is not registered, this code path
is never activated.
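As an illustration only, the two hooks might be wired up roughly as
follows. blk_urgent_request() and the elevator_is_urgent_fn field name
(and their signatures) are assumptions based on this cover letter; the
exact declarations live in the "block: Add API for urgent request
handling" patch.

#include <linux/blkdev.h>
#include <linux/elevator.h>
#include <linux/module.h>

/*
 * Device driver side: handler invoked instead of request_fn() when an
 * urgent request is pending.
 */
static void example_urgent_request_fn(struct request_queue *q)
{
        /* Fetch the urgent request and issue it ahead of everything else. */
}

static void example_driver_setup_queue(struct request_queue *q)
{
        /* Registration helper assumed to be added in blk-settings.c. */
        blk_urgent_request(q, example_urgent_request_fn);
}

/*
 * I/O scheduler side: tell the block layer whether an urgent request
 * (e.g. a pending READ) is waiting to be dispatched.
 */
static bool example_is_urgent(struct request_queue *q)
{
        return false;   /* placeholder policy */
}

static struct elevator_type example_iosched = {
        .ops = {
                /* ... the usual add/dispatch/merge hooks ... */
                .elevator_is_urgent_fn  = example_is_urgent,
        },
        .elevator_name  = "example",
        .elevator_owner = THIS_MODULE,
};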
The ROW I/O scheduler implements an urgent request notification mechanism.
The policy of the ROW I/O scheduler is to prioritize READ requests over
WRITE requests as much as possible without starving the WRITE requests.
The ROW I/O scheduler implements I/O priorities (CLASS_RT, CLASS_BE,
CLASS_IDLE). Unlike CFQ, CLASS_BE requests won't be starved by CLASS_RT
requests, and CLASS_IDLE requests won't be starved by CLASS_RT/CLASS_BE
requests. A tolerance limit is maintained to prevent starvation.
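The real bookkeeping lives in block/row-iosched.c; the fragment below is
only an illustrative sketch of the starvation-tolerance idea (all names
and the limit value are made up for the example): READs are preferred,
but a counter bounds how long queued WRITEs can be skipped.

#include <linux/blkdev.h>
#include <linux/list.h>

#define EXAMPLE_WRITE_STARVATION_LIMIT  50      /* illustrative tolerance */

struct example_row_data {
        struct list_head read_queue;    /* pending READ requests */
        struct list_head write_queue;   /* pending WRITE requests */
        unsigned int write_starved;     /* READs served while WRITEs waited */
};

/* Pick the next request: READs win unless WRITEs hit the tolerance limit. */
static struct request *example_row_choose(struct example_row_data *rd)
{
        bool writes_pending = !list_empty(&rd->write_queue);

        if (writes_pending &&
            rd->write_starved >= EXAMPLE_WRITE_STARVATION_LIMIT) {
                rd->write_starved = 0;
                return list_first_entry(&rd->write_queue,
                                        struct request, queuelist);
        }

        if (!list_empty(&rd->read_queue)) {
                if (writes_pending)
                        rd->write_starved++;
                return list_first_entry(&rd->read_queue,
                                        struct request, queuelist);
        }

        if (writes_pending) {
                rd->write_starved = 0;
                return list_first_entry(&rd->write_queue,
                                        struct request, queuelist);
        }

        return NULL;
}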
The ROW I/O scheduler with the URGENT request support by the device driver
improves the READ throughput by ~25% and decreases the READ worst case
latency by ~85%. All measured for READ/WRITE collision scenarios.
For example, the numbers below were collected for parallel lmdd read and
write. The tests were performed on:
Kernel version: 3.4
Underlying device driver: mmc
Host controller: msm-sdcc
Card: standard eMMC NAND flash
----------------------------------------------------------
Algorithm |  Throughput [MB/sec]  | Worst case READ      |
          |   READ    |   WRITE   | latency [msec]       |
----------------------------------------------------------
CFQ       |   122.3   |   40.22   |         422          |
ROW       |   151.4   |   41.07   |          51.5        |
----------------------------------------------------------
This development depends on a patch introduced by Jens Axboe in the
linux-block git tree, which extends the req->cmd_flags field.
It's attached to this patch set for convenience.
Jens Axboe (1):
block: make rq->cmd_flags be 64-bit
Tanya Brokhman (3):
block: Add support for reinsert a dispatched req
block: Add API for urgent request handling
block: Adding ROW scheduling algorithm
Documentation/block/row-iosched.txt | 134 +++++
block/Kconfig.iosched | 21 +
block/Makefile | 1 +
block/blk-core.c | 91 +++-
block/blk-settings.c | 12 +
block/elevator.c | 40 ++
block/row-iosched.c | 1090 +++++++++++++++++++++++++++++++++++
drivers/block/floppy.c | 4 +-
drivers/scsi/sd.c | 2 +-
include/linux/blk_types.h | 68 ++-
include/linux/blkdev.h | 10 +-
include/linux/elevator.h | 7 +
12 files changed, 1431 insertions(+), 49 deletions(-)
create mode 100644 Documentation/block/row-iosched.txt
create mode 100644 block/row-iosched.c
--
1.7.6
--
QUALCOMM ISRAEL, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation.