* [PATCH v2 0/9] Nowait support for stacked block devices
@ 2017-10-04 13:55 Goldwyn Rodrigues
2017-10-04 13:55 ` [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait Goldwyn Rodrigues
` (9 more replies)
0 siblings, 10 replies; 35+ messages in thread
From: Goldwyn Rodrigues @ 2017-10-04 13:55 UTC (permalink / raw)
To: linux-block; +Cc: axboe, shli
This is a continuation of the nowait support which was incorporated
a while back. We introduced REQ_NOWAIT which would return immediately
if the call would block at the block layer. Request-based devices
do not wait. However, bio-based devices (the ones which exclusively
call make_request_fn()) need to be taught to handle REQ_NOWAIT.
This effort covers the devices under MD and DM which would block
for any reason. If there are more devices or situations
which need to be covered, please let me know.
The problem with partial writes discussed during v1 turned out
to be a bug in partial writes during direct I/O and is fixed
by the submitted patch[1].
Changes since v1:
- mddev returns early in case the device is suspended, within the md code as opposed to in ->make_request()
- Check for nowait support across all the lower devices; likewise, clear it when a device which does not support nowait is added.
- Nowait support under each RAID level is checked before the final submission of the entire I/O.
[1] https://patchwork.kernel.org/patch/9979887/
--
Goldwyn
^ permalink raw reply [flat|nested] 35+ messages in thread* [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait 2017-10-04 13:55 [PATCH v2 0/9] Nowait support for stacked block devices Goldwyn Rodrigues @ 2017-10-04 13:55 ` Goldwyn Rodrigues 2017-10-04 13:55 ` [PATCH 2/9] md: Add nowait support to md Goldwyn Rodrigues ` (8 subsequent siblings) 9 siblings, 0 replies; 35+ messages in thread From: Goldwyn Rodrigues @ 2017-10-04 13:55 UTC (permalink / raw) To: linux-block; +Cc: axboe, shli, Goldwyn Rodrigues From: Goldwyn Rodrigues <rgoldwyn@suse.com> Nowait is a feature of direct AIO, where users can request to return immediately if the I/O is going to block. This translates to REQ_NOWAIT in bio.bi_opf flags. While request based devices don't wait, stacked devices such as md/dm will. In order to explicitly mark stacked devices as supported, we set the QUEUE_FLAG_NOWAIT in the queue_flags and return -EAGAIN whenever the device would block. Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com> --- block/blk-core.c | 4 ++-- include/linux/blkdev.h | 6 ++++++ 2 files changed, 8 insertions(+), 2 deletions(-) diff --git a/block/blk-core.c b/block/blk-core.c index 048be4aa6024..8de633f8c633 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -2044,10 +2044,10 @@ generic_make_request_checks(struct bio *bio) /* * For a REQ_NOWAIT based request, return -EOPNOTSUPP - * if queue is not a request based queue. 
+ * if queue cannot handle nowait bio's */ - if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q)) + if ((bio->bi_opf & REQ_NOWAIT) && !blk_queue_supports_nowait(q)) goto not_supported; if (should_fail_request(&bio->bi_disk->part0, bio->bi_iter.bi_size)) diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 02fa42d24b52..1d0da2a9cf46 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -631,6 +631,7 @@ struct request_queue { #define QUEUE_FLAG_REGISTERED 26 /* queue has been registered to a disk */ #define QUEUE_FLAG_SCSI_PASSTHROUGH 27 /* queue supports SCSI commands */ #define QUEUE_FLAG_QUIESCED 28 /* queue has been quiesced */ +#define QUEUE_FLAG_NOWAIT 29 /* stack device driver supports REQ_NOWAIT */ #define QUEUE_FLAG_DEFAULT ((1 << QUEUE_FLAG_IO_STAT) | \ (1 << QUEUE_FLAG_STACKABLE) | \ @@ -759,6 +760,11 @@ static inline bool queue_is_rq_based(struct request_queue *q) return q->request_fn || q->mq_ops; } +static inline bool blk_queue_supports_nowait(struct request_queue *q) +{ + return queue_is_rq_based(q) || test_bit(QUEUE_FLAG_NOWAIT, &q->queue_flags); +} + static inline unsigned int blk_queue_cluster(struct request_queue *q) { return q->limits.cluster; -- 2.14.2 ^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 2/9] md: Add nowait support to md 2017-10-04 13:55 [PATCH v2 0/9] Nowait support for stacked block devices Goldwyn Rodrigues 2017-10-04 13:55 ` [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait Goldwyn Rodrigues @ 2017-10-04 13:55 ` Goldwyn Rodrigues 2017-10-04 13:55 ` [PATCH 3/9] md: raid1 nowait support Goldwyn Rodrigues ` (7 subsequent siblings) 9 siblings, 0 replies; 35+ messages in thread From: Goldwyn Rodrigues @ 2017-10-04 13:55 UTC (permalink / raw) To: linux-block; +Cc: axboe, shli, Goldwyn Rodrigues From: Goldwyn Rodrigues <rgoldwyn@suse.com> Set queue flags to QUEUE_FLAG_NOWAIT to indicate REQ_NOWAIT will be handled. If any of the underlying devices do not support NOWAIT feature, we do not set the flag. If the device is suspended, it returns -EWOULDBLOCK. Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com> --- drivers/md/md.c | 35 +++++++++++++++++++++++++++++++++++ 1 file changed, 35 insertions(+) diff --git a/drivers/md/md.c b/drivers/md/md.c index 0ff1bbf6c90e..7325f8be36b4 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -237,6 +237,19 @@ EXPORT_SYMBOL_GPL(md_new_event); static LIST_HEAD(all_mddevs); static DEFINE_SPINLOCK(all_mddevs_lock); +static bool is_suspended(struct mddev *mddev, struct bio *bio) +{ + /* We can serve READ requests when device is suspended */ + if (bio_data_dir(bio) == READ) + return false; + + if (mddev->suspended) + return true; + + return (mddev->suspend_lo < bio_end_sector(bio) && + mddev->suspend_hi > bio->bi_iter.bi_sector); +} + /* * iterates through all used mddevs in the system. 
* We take care to grab the all_mddevs_lock whenever navigating @@ -316,6 +329,11 @@ static blk_qc_t md_make_request(struct request_queue *q, struct bio *bio) bio_endio(bio); return BLK_QC_T_NONE; } + /* Bail out if we would have to wait for suspended device */ + if ((bio->bi_opf & REQ_NOWAIT) && is_suspended(mddev, bio)) { + bio_wouldblock_error(bio); + return BLK_QC_T_NONE; + } /* * save the sectors now since our bio can @@ -5404,6 +5422,7 @@ int md_run(struct mddev *mddev) int err; struct md_rdev *rdev; struct md_personality *pers; + bool nowait = true; if (list_empty(&mddev->disks)) /* cannot run an array with no devices.. */ @@ -5470,8 +5489,15 @@ int md_run(struct mddev *mddev) } } sysfs_notify_dirent_safe(rdev->sysfs_state); + if (!blk_queue_supports_nowait(rdev->bdev->bd_queue)) + nowait = false; } + /* Set the NOWAIT flags if all underlying devices support it */ + if (nowait) + queue_flag_set_unlocked(QUEUE_FLAG_NOWAIT, mddev->queue); + + if (mddev->bio_set == NULL) { mddev->bio_set = bioset_create(BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS); if (!mddev->bio_set) @@ -6554,6 +6580,15 @@ static int hot_add_disk(struct mddev *mddev, dev_t dev) set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags); if (!mddev->thread) md_update_sb(mddev, 1); + + /* If the new disk does not support REQ_NOWAIT, + disable on whole MD */ + if (!blk_queue_supports_nowait(rdev->bdev->bd_queue)) { + pr_info("%s: Disabling nowait because %s does not support nowait\n", + mdname(mddev), bdevname(rdev->bdev,b)); + queue_flag_clear_unlocked(QUEUE_FLAG_NOWAIT, mddev->queue); + } + /* * Kick recovery, maybe this spare has to be added to the * array immediately. -- 2.14.2 ^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 3/9] md: raid1 nowait support 2017-10-04 13:55 [PATCH v2 0/9] Nowait support for stacked block devices Goldwyn Rodrigues 2017-10-04 13:55 ` [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait Goldwyn Rodrigues 2017-10-04 13:55 ` [PATCH 2/9] md: Add nowait support to md Goldwyn Rodrigues @ 2017-10-04 13:55 ` Goldwyn Rodrigues 2017-10-04 13:55 ` [PATCH 4/9] md: raid10 " Goldwyn Rodrigues ` (6 subsequent siblings) 9 siblings, 0 replies; 35+ messages in thread From: Goldwyn Rodrigues @ 2017-10-04 13:55 UTC (permalink / raw) To: linux-block; +Cc: axboe, shli, Goldwyn Rodrigues From: Goldwyn Rodrigues <rgoldwyn@suse.com> The RAID1 driver would bail with EAGAIN in case: + I/O has to wait for a barrier + the array is frozen + the area is suspended + there are too many pending I/Os, so the I/O would be queued. To facilitate the error path for wait barriers, wait_barrier() now returns bool: true if the wait completed (or was not required), false if a wait was required but was not performed. 
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com> --- drivers/md/raid1.c | 88 +++++++++++++++++++++++++++++++++++++++++------------- 1 file changed, 68 insertions(+), 20 deletions(-) diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c index f3f3e40dc9d8..37ec283e67af 100644 --- a/drivers/md/raid1.c +++ b/drivers/md/raid1.c @@ -891,8 +891,9 @@ static void lower_barrier(struct r1conf *conf, sector_t sector_nr) wake_up(&conf->wait_barrier); } -static void _wait_barrier(struct r1conf *conf, int idx) +static bool _wait_barrier(struct r1conf *conf, int idx, bool nowait) { + bool ret = true; /* * We need to increase conf->nr_pending[idx] very early here, * then raise_barrier() can be blocked when it waits for @@ -923,7 +924,7 @@ static void _wait_barrier(struct r1conf *conf, int idx) */ if (!READ_ONCE(conf->array_frozen) && !atomic_read(&conf->barrier[idx])) - return; + return ret; /* * After holding conf->resync_lock, conf->nr_pending[idx] @@ -941,18 +942,29 @@ static void _wait_barrier(struct r1conf *conf, int idx) */ wake_up(&conf->wait_barrier); /* Wait for the barrier in same barrier unit bucket to drop. */ - wait_event_lock_irq(conf->wait_barrier, - !conf->array_frozen && - !atomic_read(&conf->barrier[idx]), - conf->resync_lock); + if (conf->array_frozen || atomic_read(&conf->barrier[idx])) { + if (nowait) { + ret = false; + goto dec_waiting; + } else { + wait_event_lock_irq(conf->wait_barrier, + !conf->array_frozen && + !atomic_read(&conf->barrier[idx]), + conf->resync_lock); + } + } atomic_inc(&conf->nr_pending[idx]); +dec_waiting: atomic_dec(&conf->nr_waiting[idx]); spin_unlock_irq(&conf->resync_lock); + return ret; } -static void wait_read_barrier(struct r1conf *conf, sector_t sector_nr) +static bool wait_read_barrier(struct r1conf *conf, sector_t sector_nr, + bool nowait) { int idx = sector_to_idx(sector_nr); + bool ret = true; /* * Very similar to _wait_barrier(). 
The difference is, for read @@ -964,7 +976,7 @@ static void wait_read_barrier(struct r1conf *conf, sector_t sector_nr) atomic_inc(&conf->nr_pending[idx]); if (!READ_ONCE(conf->array_frozen)) - return; + return ret; spin_lock_irq(&conf->resync_lock); atomic_inc(&conf->nr_waiting[idx]); @@ -975,19 +987,31 @@ static void wait_read_barrier(struct r1conf *conf, sector_t sector_nr) */ wake_up(&conf->wait_barrier); /* Wait for array to be unfrozen */ - wait_event_lock_irq(conf->wait_barrier, - !conf->array_frozen, - conf->resync_lock); + if (conf->array_frozen) { + /* If nowait flag is set, return false to + * show we did not wait + */ + if (nowait) { + ret = false; + goto dec_waiting; + } else { + wait_event_lock_irq(conf->wait_barrier, + !conf->array_frozen, + conf->resync_lock); + } + } atomic_inc(&conf->nr_pending[idx]); +dec_waiting: atomic_dec(&conf->nr_waiting[idx]); spin_unlock_irq(&conf->resync_lock); + return ret; } -static void wait_barrier(struct r1conf *conf, sector_t sector_nr) +static bool wait_barrier(struct r1conf *conf, sector_t sector_nr, bool nowait) { int idx = sector_to_idx(sector_nr); - _wait_barrier(conf, idx); + return _wait_barrier(conf, idx, nowait); } static void wait_all_barriers(struct r1conf *conf) @@ -995,7 +1019,7 @@ static void wait_all_barriers(struct r1conf *conf) int idx; for (idx = 0; idx < BARRIER_BUCKETS_NR; idx++) - _wait_barrier(conf, idx); + _wait_barrier(conf, idx, false); } static void _allow_barrier(struct r1conf *conf, int idx) @@ -1212,7 +1236,11 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio, * Still need barrier for READ in case that whole * array is frozen. */ - wait_read_barrier(conf, bio->bi_iter.bi_sector); + if (!wait_read_barrier(conf, bio->bi_iter.bi_sector, + bio->bi_opf & REQ_NOWAIT)) { + bio_wouldblock_error(bio); + return; + } if (!r1_bio) r1_bio = alloc_r1bio(mddev, bio); @@ -1321,6 +1349,11 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio, * an interruptible wait. 
*/ DEFINE_WAIT(w); + if (bio->bi_opf & REQ_NOWAIT) { + bio_wouldblock_error(bio); + return; + } + for (;;) { sigset_t full, old; prepare_to_wait(&conf->wait_barrier, @@ -1339,17 +1372,26 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio, } finish_wait(&conf->wait_barrier, &w); } - wait_barrier(conf, bio->bi_iter.bi_sector); - - r1_bio = alloc_r1bio(mddev, bio); - r1_bio->sectors = max_write_sectors; + if (!wait_barrier(conf, bio->bi_iter.bi_sector, + bio->bi_opf & REQ_NOWAIT)) { + bio_wouldblock_error(bio); + return; + } if (conf->pending_count >= max_queued_requests) { md_wakeup_thread(mddev->thread); + if (bio->bi_opf & REQ_NOWAIT) { + bio_wouldblock_error(bio); + return; + } raid1_log(mddev, "wait queued"); wait_event(conf->wait_barrier, conf->pending_count < max_queued_requests); } + + r1_bio = alloc_r1bio(mddev, bio); + r1_bio->sectors = max_write_sectors; + /* first select target devices under rcu_lock and * inc refcount on their rdev. Record them by setting * bios[x] to bio @@ -1435,9 +1477,15 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio, rdev_dec_pending(conf->mirrors[j].rdev, mddev); r1_bio->state = 0; allow_barrier(conf, bio->bi_iter.bi_sector); + + if (bio->bi_opf & REQ_NOWAIT) { + bio_wouldblock_error(bio); + free_r1bio(r1_bio); + return; + } raid1_log(mddev, "wait rdev %d blocked", blocked_rdev->raid_disk); md_wait_for_blocked_rdev(blocked_rdev, mddev); - wait_barrier(conf, bio->bi_iter.bi_sector); + wait_barrier(conf, bio->bi_iter.bi_sector, false); goto retry_write; } -- 2.14.2 ^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 4/9] md: raid10 nowait support 2017-10-04 13:55 [PATCH v2 0/9] Nowait support for stacked block devices Goldwyn Rodrigues ` (2 preceding siblings ...) 2017-10-04 13:55 ` [PATCH 3/9] md: raid1 nowait support Goldwyn Rodrigues @ 2017-10-04 13:55 ` Goldwyn Rodrigues 2017-10-04 13:55 ` [PATCH 5/9] md: raid5 " Goldwyn Rodrigues ` (5 subsequent siblings) 9 siblings, 0 replies; 35+ messages in thread From: Goldwyn Rodrigues @ 2017-10-04 13:55 UTC (permalink / raw) To: linux-block; +Cc: axboe, shli, Goldwyn Rodrigues From: Goldwyn Rodrigues <rgoldwyn@suse.com> Bail and status to EAGAIN if raid10 is going to wait for: + barriers + reshape operation + Too many queued requests Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com> --- drivers/md/raid10.c | 67 ++++++++++++++++++++++++++++++++++++++++------------- 1 file changed, 51 insertions(+), 16 deletions(-) diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c index 374df5796649..b0701f5751fe 100644 --- a/drivers/md/raid10.c +++ b/drivers/md/raid10.c @@ -969,8 +969,9 @@ static void lower_barrier(struct r10conf *conf) wake_up(&conf->wait_barrier); } -static void wait_barrier(struct r10conf *conf) +static bool wait_barrier(struct r10conf *conf, bool nowait) { + bool ret = true; spin_lock_irq(&conf->resync_lock); if (conf->barrier) { conf->nr_waiting++; @@ -984,19 +985,25 @@ static void wait_barrier(struct r10conf *conf) * count down. 
*/ raid10_log(conf->mddev, "wait barrier"); - wait_event_lock_irq(conf->wait_barrier, - !conf->barrier || - (atomic_read(&conf->nr_pending) && - current->bio_list && - (!bio_list_empty(¤t->bio_list[0]) || - !bio_list_empty(¤t->bio_list[1]))), - conf->resync_lock); + if (!nowait) + wait_event_lock_irq(conf->wait_barrier, + !conf->barrier || + (atomic_read(&conf->nr_pending) && + current->bio_list && + (!bio_list_empty(¤t->bio_list[0]) || + !bio_list_empty(¤t->bio_list[1]))), + conf->resync_lock); + else + ret = false; conf->nr_waiting--; if (!conf->nr_waiting) wake_up(&conf->wait_barrier); } - atomic_inc(&conf->nr_pending); + /* Do not increment nr_pending if we din't wait */ + if (ret) + atomic_inc(&conf->nr_pending); spin_unlock_irq(&conf->resync_lock); + return ret; } static void allow_barrier(struct r10conf *conf) @@ -1148,7 +1155,10 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio, * thread has put up a bar for new requests. * Continue immediately if no resync is active currently. */ - wait_barrier(conf); + if (!wait_barrier(conf, bio->bi_opf & REQ_NOWAIT)) { + bio_wouldblock_error(bio); + return; + } sectors = r10_bio->sectors; while (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) && @@ -1159,12 +1169,16 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio, * pass */ raid10_log(conf->mddev, "wait reshape"); + if (bio->bi_opf & REQ_NOWAIT) { + bio_wouldblock_error(bio); + return; + } allow_barrier(conf); wait_event(conf->wait_barrier, conf->reshape_progress <= bio->bi_iter.bi_sector || conf->reshape_progress >= bio->bi_iter.bi_sector + sectors); - wait_barrier(conf); + wait_barrier(conf, false); } rdev = read_balance(conf, r10_bio, &max_sectors); @@ -1298,12 +1312,19 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio, * thread has put up a bar for new requests. * Continue immediately if no resync is active currently. 
*/ - wait_barrier(conf); + if (!wait_barrier(conf, bio->bi_opf & REQ_NOWAIT)) { + bio_wouldblock_error(bio); + return; + } sectors = r10_bio->sectors; while (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) && bio->bi_iter.bi_sector < conf->reshape_progress && bio->bi_iter.bi_sector + sectors > conf->reshape_progress) { + if (bio->bi_opf & REQ_NOWAIT) { + bio_wouldblock_error(bio); + return; + } /* * IO spans the reshape position. Need to wait for reshape to * pass @@ -1314,7 +1335,7 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio, conf->reshape_progress <= bio->bi_iter.bi_sector || conf->reshape_progress >= bio->bi_iter.bi_sector + sectors); - wait_barrier(conf); + wait_barrier(conf, false); } if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) && @@ -1328,6 +1349,10 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio, set_mask_bits(&mddev->sb_flags, 0, BIT(MD_SB_CHANGE_DEVS) | BIT(MD_SB_CHANGE_PENDING)); md_wakeup_thread(mddev->thread); + if (bio->bi_opf & REQ_NOWAIT) { + bio_wouldblock_error(bio); + return; + } raid10_log(conf->mddev, "wait reshape metadata"); wait_event(mddev->sb_wait, !test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags)); @@ -1337,6 +1362,10 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio, if (conf->pending_count >= max_queued_requests) { md_wakeup_thread(mddev->thread); + if (bio->bi_opf & REQ_NOWAIT) { + bio_wouldblock_error(bio); + return; + } raid10_log(mddev, "wait queued"); wait_event(conf->wait_barrier, conf->pending_count < max_queued_requests); @@ -1462,9 +1491,15 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio, } } allow_barrier(conf); + + /* Don't wait for REQ_NOWAIT */ + if (bio->bi_opf & REQ_NOWAIT) { + bio_wouldblock_error(bio); + return; + } raid10_log(conf->mddev, "wait rdev %d blocked", blocked_rdev->raid_disk); md_wait_for_blocked_rdev(blocked_rdev, mddev); - wait_barrier(conf); + wait_barrier(conf, false); goto retry_write; } @@ 
-1693,7 +1728,7 @@ static void print_conf(struct r10conf *conf) static void close_sync(struct r10conf *conf) { - wait_barrier(conf); + wait_barrier(conf, false); allow_barrier(conf); mempool_destroy(conf->r10buf_pool); @@ -4365,7 +4400,7 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, if (need_flush || time_after(jiffies, conf->reshape_checkpoint + 10*HZ)) { /* Need to update reshape_position in metadata */ - wait_barrier(conf); + wait_barrier(conf, false); mddev->reshape_position = conf->reshape_progress; if (mddev->reshape_backwards) mddev->curr_resync_completed = raid10_size(mddev, 0, 0) -- 2.14.2 ^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 5/9] md: raid5 nowait support 2017-10-04 13:55 [PATCH v2 0/9] Nowait support for stacked block devices Goldwyn Rodrigues ` (3 preceding siblings ...) 2017-10-04 13:55 ` [PATCH 4/9] md: raid10 " Goldwyn Rodrigues @ 2017-10-04 13:55 ` Goldwyn Rodrigues 2017-10-04 13:55 ` [PATCH 6/9] dm: add " Goldwyn Rodrigues ` (4 subsequent siblings) 9 siblings, 0 replies; 35+ messages in thread From: Goldwyn Rodrigues @ 2017-10-04 13:55 UTC (permalink / raw) To: linux-block; +Cc: axboe, shli, Goldwyn Rodrigues From: Goldwyn Rodrigues <rgoldwyn@suse.com> Return EAGAIN in case RAID5 would block because of waiting due to reshaping. Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com> --- drivers/md/raid5.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index 928e24a07133..707a3907d100 100644 --- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c @@ -5601,6 +5601,13 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi) last_sector = bio_end_sector(bi); bi->bi_next = NULL; + if ((bi->bi_opf & REQ_NOWAIT) && + conf->reshape_progress != MaxSector && + (mddev->reshape_backwards ? logical_sector < conf->reshape_safe : last_sector >= conf->reshape_safe)) { + bio_wouldblock_error(bi); + return true; + } + prepare_to_wait(&conf->wait_for_overlap, &w, TASK_UNINTERRUPTIBLE); for (;logical_sector < last_sector; logical_sector += STRIPE_SECTORS) { int previous; -- 2.14.2 ^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 6/9] dm: add nowait support 2017-10-04 13:55 [PATCH v2 0/9] Nowait support for stacked block devices Goldwyn Rodrigues ` (4 preceding siblings ...) 2017-10-04 13:55 ` [PATCH 5/9] md: raid5 " Goldwyn Rodrigues @ 2017-10-04 13:55 ` Goldwyn Rodrigues 2017-10-04 13:55 ` [PATCH 7/9] dm: Add nowait support to raid1 Goldwyn Rodrigues ` (3 subsequent siblings) 9 siblings, 0 replies; 35+ messages in thread From: Goldwyn Rodrigues @ 2017-10-04 13:55 UTC (permalink / raw) To: linux-block; +Cc: axboe, shli, Goldwyn Rodrigues From: Goldwyn Rodrigues <rgoldwyn@suse.com> Add support for bio based dm devices, which exclusively sets a make_request_fn(). Request based devices are supported by default. Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com> --- drivers/md/dm.c | 28 +++++++++++++++++++++++++++- 1 file changed, 27 insertions(+), 1 deletion(-) diff --git a/drivers/md/dm.c b/drivers/md/dm.c index 6e54145969c5..b721716862a2 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -1522,6 +1522,8 @@ static blk_qc_t dm_make_request(struct request_queue *q, struct bio *bio) if (!(bio->bi_opf & REQ_RAHEAD)) queue_io(md, bio); + else if (bio->bi_opf & REQ_NOWAIT) + bio_wouldblock_error(bio); else bio_io_error(bio); return BLK_QC_T_NONE; @@ -2011,6 +2013,29 @@ struct queue_limits *dm_get_queue_limits(struct mapped_device *md) } EXPORT_SYMBOL_GPL(dm_get_queue_limits); +static int device_supports_nowait(struct dm_target *ti, struct dm_dev *dev, + sector_t start, sector_t len, void *data) +{ + struct request_queue *q = bdev_get_queue(dev->bdev); + return q && blk_queue_supports_nowait(q); +} + +static bool dm_table_supports_nowait(struct dm_table *t) +{ + struct dm_target *ti; + unsigned i; + + for (i = 0; i < dm_table_get_num_targets(t); i++) { + ti = dm_table_get_target(t, i); + + if (!ti->type->iterate_devices || + !ti->type->iterate_devices(ti, device_supports_nowait, NULL)) + return false; + } + + return true; +} + /* * Setup the DM device's queue based on md's type */ @@ 
-2044,7 +2069,8 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t) */ bioset_free(md->queue->bio_split); md->queue->bio_split = NULL; - + if (dm_table_supports_nowait(t)) + queue_flag_set_unlocked(QUEUE_FLAG_NOWAIT, md->queue); if (type == DM_TYPE_DAX_BIO_BASED) queue_flag_set_unlocked(QUEUE_FLAG_DAX, md->queue); break; -- 2.14.2 ^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 7/9] dm: Add nowait support to raid1 2017-10-04 13:55 [PATCH v2 0/9] Nowait support for stacked block devices Goldwyn Rodrigues ` (5 preceding siblings ...) 2017-10-04 13:55 ` [PATCH 6/9] dm: add " Goldwyn Rodrigues @ 2017-10-04 13:55 ` Goldwyn Rodrigues 2017-10-04 13:55 ` [PATCH 8/9] dm: Add nowait support to dm-delay Goldwyn Rodrigues ` (2 subsequent siblings) 9 siblings, 0 replies; 35+ messages in thread From: Goldwyn Rodrigues @ 2017-10-04 13:55 UTC (permalink / raw) To: linux-block; +Cc: axboe, shli, Goldwyn Rodrigues From: Goldwyn Rodrigues <rgoldwyn@suse.com> If the I/O would block because the devices are syncing, bail. Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com> --- drivers/md/dm-raid1.c | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c index c0b82136b2d1..96044b7787f9 100644 --- a/drivers/md/dm-raid1.c +++ b/drivers/md/dm-raid1.c @@ -1204,6 +1204,14 @@ static int mirror_map(struct dm_target *ti, struct bio *bio) if (rw == WRITE) { /* Save region for mirror_end_io() handler */ bio_record->write_region = dm_rh_bio_to_region(ms->rh, bio); + if (bio->bi_opf & REQ_NOWAIT) { + int state = dm_rh_get_state(ms->rh, + bio_record->write_region, 1); + if (state != DM_RH_CLEAN && state != DM_RH_DIRTY) { + bio_wouldblock_error(bio); + return DM_MAPIO_SUBMITTED; + } + } queue_bio(ms, bio, rw); return DM_MAPIO_SUBMITTED; } @@ -1219,6 +1227,11 @@ static int mirror_map(struct dm_target *ti, struct bio *bio) if (bio->bi_opf & REQ_RAHEAD) return DM_MAPIO_KILL; + if (bio->bi_opf & REQ_NOWAIT) { + bio_wouldblock_error(bio); + return DM_MAPIO_SUBMITTED; + } + queue_bio(ms, bio, rw); return DM_MAPIO_SUBMITTED; } -- 2.14.2 ^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 8/9] dm: Add nowait support to dm-delay 2017-10-04 13:55 [PATCH v2 0/9] Nowait support for stacked block devices Goldwyn Rodrigues ` (6 preceding siblings ...) 2017-10-04 13:55 ` [PATCH 7/9] dm: Add nowait support to raid1 Goldwyn Rodrigues @ 2017-10-04 13:55 ` Goldwyn Rodrigues 2017-10-04 13:55 ` [PATCH 9/9] dm-mpath: Add nowait support Goldwyn Rodrigues 2017-10-05 17:19 ` [PATCH v2 0/9] Nowait support for stacked block devices Shaohua Li 9 siblings, 0 replies; 35+ messages in thread From: Goldwyn Rodrigues @ 2017-10-04 13:55 UTC (permalink / raw) To: linux-block; +Cc: axboe, shli, Goldwyn Rodrigues From: Goldwyn Rodrigues <rgoldwyn@suse.com> I/O should bail out if any value for delay is set. Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com> --- drivers/md/dm-delay.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/md/dm-delay.c b/drivers/md/dm-delay.c index 2209a9700acd..e67a7042ae68 100644 --- a/drivers/md/dm-delay.c +++ b/drivers/md/dm-delay.c @@ -240,6 +240,10 @@ static int delay_bio(struct delay_c *dc, int delay, struct bio *bio) if (!delay || !atomic_read(&dc->may_delay)) return DM_MAPIO_REMAPPED; + if (bio->bi_opf & REQ_NOWAIT) { + bio_wouldblock_error(bio); + return DM_MAPIO_SUBMITTED; + } delayed = dm_per_bio_data(bio, sizeof(struct dm_delay_info)); delayed->context = dc; -- 2.14.2 ^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 9/9] dm-mpath: Add nowait support 2017-10-04 13:55 [PATCH v2 0/9] Nowait support for stacked block devices Goldwyn Rodrigues ` (7 preceding siblings ...) 2017-10-04 13:55 ` [PATCH 8/9] dm: Add nowait support to dm-delay Goldwyn Rodrigues @ 2017-10-04 13:55 ` Goldwyn Rodrigues 2017-10-05 17:19 ` [PATCH v2 0/9] Nowait support for stacked block devices Shaohua Li 9 siblings, 0 replies; 35+ messages in thread From: Goldwyn Rodrigues @ 2017-10-04 13:55 UTC (permalink / raw) To: linux-block; +Cc: axboe, shli, Goldwyn Rodrigues From: Goldwyn Rodrigues <rgoldwyn@suse.com> If there are no queues, bail if REQ_NOWAIT is set instead of queueing up I/O. Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com> --- drivers/md/dm-mpath.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c index 11f273d2f018..d714c1b1b066 100644 --- a/drivers/md/dm-mpath.c +++ b/drivers/md/dm-mpath.c @@ -542,6 +542,11 @@ static int __multipath_map_bio(struct multipath *m, struct bio *bio, struct dm_m if ((pgpath && queue_io) || (!pgpath && test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags))) { + /* Bail if nowait is set */ + if (bio->bi_opf & REQ_NOWAIT) { + bio_wouldblock_error(bio); + return DM_MAPIO_SUBMITTED; + } /* Queue for the daemon to resubmit */ spin_lock_irqsave(&m->lock, flags); bio_list_add(&m->queued_bios, bio); -- 2.14.2 ^ permalink raw reply related [flat|nested] 35+ messages in thread
* Re: [PATCH v2 0/9] Nowait support for stacked block devices 2017-10-04 13:55 [PATCH v2 0/9] Nowait support for stacked block devices Goldwyn Rodrigues ` (8 preceding siblings ...) 2017-10-04 13:55 ` [PATCH 9/9] dm-mpath: Add nowait support Goldwyn Rodrigues @ 2017-10-05 17:19 ` Shaohua Li 2017-10-06 12:01 ` Goldwyn Rodrigues 9 siblings, 1 reply; 35+ messages in thread From: Shaohua Li @ 2017-10-05 17:19 UTC (permalink / raw) To: Goldwyn Rodrigues; +Cc: linux-block, axboe On Wed, Oct 04, 2017 at 08:55:02AM -0500, Goldwyn Rodrigues wrote: > This is a continuation of the nowait support which was incorporated > a while back. We introduced REQ_NOWAIT which would return immediately > if the call would block at the block layer. Request based-devices > do not wait. However, bio based devices (the ones which exclusively > call make_request_fn) need to be trained to handle REQ_NOWAIT. > > This effort covers the devices under MD and DM which would block > for any reason. If there should be more devices or situations > which need to be covered, please let me know. > > The problem with partial writes discussed during v1 turned out > to be a bug in partial writes during direct I/O and is fixed > by the submitted patch[1]. > > Changes since v1: > - mddev to return early in case the device is suspended, within the md code as opposed to ->make_request() > - Check for nowait support with all the lower devices. Same with if adding a device which does not support nowait. > - Nowait under each raid is checked before the final I/O submission for the entire I/O. > > [1] https://patchwork.kernel.org/patch/9979887/ Does this fix the partial IO issue we discussed before? It looks not to me. The partial IO bailed out could be any part of an IO, so simply returning the successed size doesn't help. Am I missing anything? I didn't follow the discussion, maybe Jens knew. Thanks, Shaohua ^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH v2 0/9] Nowait support for stacked block devices
2017-10-05 17:19 ` [PATCH v2 0/9] Nowait support for stacked block devices Shaohua Li
@ 2017-10-06 12:01   ` Goldwyn Rodrigues
2017-10-10 22:36     ` Shaohua Li
0 siblings, 1 reply; 35+ messages in thread
From: Goldwyn Rodrigues @ 2017-10-06 12:01 UTC (permalink / raw)
To: Shaohua Li; +Cc: linux-block, axboe

On 10/05/2017 12:19 PM, Shaohua Li wrote:
> On Wed, Oct 04, 2017 at 08:55:02AM -0500, Goldwyn Rodrigues wrote:
>> This is a continuation of the nowait support which was incorporated
>> a while back. We introduced REQ_NOWAIT which would return immediately
>> if the call would block at the block layer. Request-based devices
>> do not wait. However, bio based devices (the ones which exclusively
>> call make_request_fn) need to be trained to handle REQ_NOWAIT.
>>
>> This effort covers the devices under MD and DM which would block
>> for any reason. If there should be more devices or situations
>> which need to be covered, please let me know.
>>
>> The problem with partial writes discussed during v1 turned out
>> to be a bug in partial writes during direct I/O and is fixed
>> by the submitted patch[1].
>>
>> Changes since v1:
>> - mddev to return early in case the device is suspended, within the md code as opposed to ->make_request()
>> - Check for nowait support with all the lower devices. Same with if adding a device which does not support nowait.
>> - Nowait under each raid is checked before the final I/O submission for the entire I/O.
>>
>> [1] https://patchwork.kernel.org/patch/9979887/
>
> Does this fix the partial IO issue we discussed before? It looks not to
> me. The partial IO bailed out could be any part of an IO, so simply
> returning the succeeded size doesn't help. Am I missing anything? I
> didn't follow the discussion, maybe Jens knew.
>

If the partial IO bailed out is any part of the IO, isn't it supposed to
return the size of the IO that succeeded _so far_? If a latter part of
the IO succeeds (with a failure in between), what are you supposed to
return to the user in case of direct write()s? Would that even be
correct in case it is a file overwrite?

-- 
Goldwyn

^ permalink raw reply	[flat|nested] 35+ messages in thread
* Re: [PATCH v2 0/9] Nowait support for stacked block devices
2017-10-06 12:01 ` Goldwyn Rodrigues
@ 2017-10-10 22:36   ` Shaohua Li
0 siblings, 0 replies; 35+ messages in thread
From: Shaohua Li @ 2017-10-10 22:36 UTC (permalink / raw)
To: Goldwyn Rodrigues; +Cc: linux-block, axboe

On Fri, Oct 06, 2017 at 07:01:19AM -0500, Goldwyn Rodrigues wrote:
> 
> 
> On 10/05/2017 12:19 PM, Shaohua Li wrote:
> > On Wed, Oct 04, 2017 at 08:55:02AM -0500, Goldwyn Rodrigues wrote:
> >> This is a continuation of the nowait support which was incorporated
> >> a while back. We introduced REQ_NOWAIT which would return immediately
> >> if the call would block at the block layer. Request-based devices
> >> do not wait. However, bio based devices (the ones which exclusively
> >> call make_request_fn) need to be trained to handle REQ_NOWAIT.
> >>
> >> This effort covers the devices under MD and DM which would block
> >> for any reason. If there should be more devices or situations
> >> which need to be covered, please let me know.
> >>
> >> The problem with partial writes discussed during v1 turned out
> >> to be a bug in partial writes during direct I/O and is fixed
> >> by the submitted patch[1].
> >>
> >> Changes since v1:
> >> - mddev to return early in case the device is suspended, within the md code as opposed to ->make_request()
> >> - Check for nowait support with all the lower devices. Same with if adding a device which does not support nowait.
> >> - Nowait under each raid is checked before the final I/O submission for the entire I/O.
> >>
> >> [1] https://patchwork.kernel.org/patch/9979887/
> >
> > Does this fix the partial IO issue we discussed before? It looks not to
> > me. The partial IO bailed out could be any part of an IO, so simply
> > returning the succeeded size doesn't help. Am I missing anything? I
> > didn't follow the discussion, maybe Jens knew.
> >
> 
> If the partial IO bailed out is any part of the IO, isn't it supposed to
> return the size of the IO that succeeded _so far_? If a latter part of
> the IO succeeds (with a failure in between), what are you supposed to
> return to the user in case of direct write()s? Would that even be
> correct in case it is a file overwrite?

I didn't argue about the return value. To me, the partial IO issue can't
be fixed simply by any choice of 'return value'.

^ permalink raw reply	[flat|nested] 35+ messages in thread
* [PATCH 0/9] Nowait feature for stacked block devices
@ 2017-07-26 23:57 Goldwyn Rodrigues
2017-07-26 23:57 ` [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait Goldwyn Rodrigues
0 siblings, 1 reply; 35+ messages in thread
From: Goldwyn Rodrigues @ 2017-07-26 23:57 UTC (permalink / raw)
To: linux-block; +Cc: hch, jack, linux-raid, dm-devel

This is a continuation of the nowait support which was incorporated
a while back. We introduced REQ_NOWAIT which would return immediately
if the call would block at the block layer. Request-based devices
do not wait. However, bio based devices (the ones which exclusively
call make_request_fn) need to be trained to handle REQ_NOWAIT.

This effort covers the devices under MD and DM which would block
for any reason. If there should be more devices or situations
which need to be covered, please let me know.

-- 
Goldwyn

^ permalink raw reply	[flat|nested] 35+ messages in thread
* [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait
2017-07-26 23:57 [PATCH 0/9] Nowait feature " Goldwyn Rodrigues
@ 2017-07-26 23:57 ` Goldwyn Rodrigues
2017-08-08 20:32   ` Shaohua Li
0 siblings, 1 reply; 35+ messages in thread
From: Goldwyn Rodrigues @ 2017-07-26 23:57 UTC (permalink / raw)
To: linux-block; +Cc: hch, jack, linux-raid, dm-devel, Goldwyn Rodrigues

From: Goldwyn Rodrigues <rgoldwyn@suse.com>

Nowait is a feature of direct AIO, where users can request
to return immediately if the I/O is going to block. This translates
to REQ_NOWAIT in bio.bi_opf flags. While request based devices
don't wait, stacked devices such as md/dm will.

In order to explicitly mark stacked devices as supported, we
set the QUEUE_FLAG_NOWAIT in the queue_flags and return -EAGAIN
whenever the device would block.

Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
---
 block/blk-core.c       | 3 ++-
 include/linux/blkdev.h | 2 ++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 970b9c9638c5..1c9a981d88e5 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2025,7 +2025,8 @@ generic_make_request_checks(struct bio *bio)
 	 * if queue is not a request based queue.
 	 */
 
-	if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q))
+	if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q) &&
+	    !blk_queue_supports_nowait(q))
 		goto not_supported;
 
 	part = bio->bi_bdev->bd_part;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 25f6a0cb27d3..fae021ebec1b 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -633,6 +633,7 @@ struct request_queue {
 #define QUEUE_FLAG_REGISTERED  29	/* queue has been registered to a disk */
 #define QUEUE_FLAG_SCSI_PASSTHROUGH 30	/* queue supports SCSI commands */
 #define QUEUE_FLAG_QUIESCED    31	/* queue has been quiesced */
+#define QUEUE_FLAG_NOWAIT      32	/* stack device driver supports REQ_NOWAIT */
 
 #define QUEUE_FLAG_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
 				 (1 << QUEUE_FLAG_STACKABLE)	|	\
@@ -732,6 +733,7 @@ static inline void queue_flag_clear(unsigned int flag, struct request_queue *q)
 #define blk_queue_dax(q)	test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags)
 #define blk_queue_scsi_passthrough(q)	\
 	test_bit(QUEUE_FLAG_SCSI_PASSTHROUGH, &(q)->queue_flags)
+#define blk_queue_supports_nowait(q)	test_bit(QUEUE_FLAG_NOWAIT, &(q)->queue_flags)
 
 #define blk_noretry_request(rq) \
 	((rq)->cmd_flags & (REQ_FAILFAST_DEV|REQ_FAILFAST_TRANSPORT| \
-- 
2.12.3

^ permalink raw reply related	[flat|nested] 35+ messages in thread
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait
2017-07-26 23:57 ` [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait Goldwyn Rodrigues
@ 2017-08-08 20:32   ` Shaohua Li
2017-08-08 20:36     ` Jens Axboe
2017-08-09 11:44     ` Goldwyn Rodrigues
0 siblings, 2 replies; 35+ messages in thread
From: Shaohua Li @ 2017-08-08 20:32 UTC (permalink / raw)
To: Goldwyn Rodrigues
Cc: linux-block, hch, jack, linux-raid, dm-devel, Goldwyn Rodrigues

On Wed, Jul 26, 2017 at 06:57:58PM -0500, Goldwyn Rodrigues wrote:
> From: Goldwyn Rodrigues <rgoldwyn@suse.com>
>
> Nowait is a feature of direct AIO, where users can request
> to return immediately if the I/O is going to block. This translates
> to REQ_NOWAIT in bio.bi_opf flags. While request based devices
> don't wait, stacked devices such as md/dm will.
>
> In order to explicitly mark stacked devices as supported, we
> set the QUEUE_FLAG_NOWAIT in the queue_flags and return -EAGAIN
> whenever the device would block.

Probably you should route this patch through Jens first; DM/MD are
different trees.

> Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
> ---
>  block/blk-core.c       | 3 ++-
>  include/linux/blkdev.h | 2 ++
>  2 files changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 970b9c9638c5..1c9a981d88e5 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -2025,7 +2025,8 @@ generic_make_request_checks(struct bio *bio)
>  	 * if queue is not a request based queue.
>  	 */
>  
> -	if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q))
> +	if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q) &&
> +	    !blk_queue_supports_nowait(q))
>  		goto not_supported;
>  
>  	part = bio->bi_bdev->bd_part;
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index 25f6a0cb27d3..fae021ebec1b 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -633,6 +633,7 @@ struct request_queue {
>  #define QUEUE_FLAG_REGISTERED  29	/* queue has been registered to a disk */
>  #define QUEUE_FLAG_SCSI_PASSTHROUGH 30	/* queue supports SCSI commands */
>  #define QUEUE_FLAG_QUIESCED    31	/* queue has been quiesced */
>  +#define QUEUE_FLAG_NOWAIT      32	/* stack device driver supports REQ_NOWAIT */
>  
>  #define QUEUE_FLAG_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
>  				 (1 << QUEUE_FLAG_STACKABLE)	|	\
> @@ -732,6 +733,7 @@ static inline void queue_flag_clear(unsigned int flag, struct request_queue *q)
>  #define blk_queue_dax(q)	test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags)
>  #define blk_queue_scsi_passthrough(q)	\
>  	test_bit(QUEUE_FLAG_SCSI_PASSTHROUGH, &(q)->queue_flags)
> +#define blk_queue_supports_nowait(q)	test_bit(QUEUE_FLAG_NOWAIT, &(q)->queue_flags)

Should this bit consider the underlying disks? For example, if one raid
array disk doesn't support NOWAIT, shouldn't we disable NOWAIT for the
array?

I have another generic question. If a bio is split into 2 bios, one bio
doesn't need to wait but the other needs to wait. We will return -EAGAIN
for the second bio, so the whole bio will return -EAGAIN, but the first
bio is already dispatched to disk. Is this correct behavior?

^ permalink raw reply	[flat|nested] 35+ messages in thread
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait
2017-08-08 20:32 ` Shaohua Li
@ 2017-08-08 20:36   ` Jens Axboe
2017-08-10  2:18     ` Jens Axboe
2017-08-09 11:44   ` Goldwyn Rodrigues
1 sibling, 1 reply; 35+ messages in thread
From: Jens Axboe @ 2017-08-08 20:36 UTC (permalink / raw)
To: Shaohua Li, Goldwyn Rodrigues
Cc: linux-block, hch, jack, linux-raid, dm-devel, Goldwyn Rodrigues

On 08/08/2017 02:32 PM, Shaohua Li wrote:
>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>> index 25f6a0cb27d3..fae021ebec1b 100644
>> --- a/include/linux/blkdev.h
>> +++ b/include/linux/blkdev.h
>> @@ -633,6 +633,7 @@ struct request_queue {
>>  #define QUEUE_FLAG_REGISTERED  29	/* queue has been registered to a disk */
>>  #define QUEUE_FLAG_SCSI_PASSTHROUGH 30	/* queue supports SCSI commands */
>>  #define QUEUE_FLAG_QUIESCED    31	/* queue has been quiesced */
>> +#define QUEUE_FLAG_NOWAIT      32	/* stack device driver supports REQ_NOWAIT */

Does this work on 32-bit, where sizeof(unsigned long) == 32?

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 35+ messages in thread
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait
2017-08-08 20:36 ` Jens Axboe
@ 2017-08-10  2:18   ` Jens Axboe
2017-08-10 11:38     ` Goldwyn Rodrigues
0 siblings, 1 reply; 35+ messages in thread
From: Jens Axboe @ 2017-08-10 2:18 UTC (permalink / raw)
To: Shaohua Li, Goldwyn Rodrigues
Cc: linux-block, hch, jack, linux-raid, dm-devel, Goldwyn Rodrigues

On 08/08/2017 02:36 PM, Jens Axboe wrote:
> On 08/08/2017 02:32 PM, Shaohua Li wrote:
>>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>>> index 25f6a0cb27d3..fae021ebec1b 100644
>>> --- a/include/linux/blkdev.h
>>> +++ b/include/linux/blkdev.h
>>> @@ -633,6 +633,7 @@ struct request_queue {
>>>  #define QUEUE_FLAG_REGISTERED  29	/* queue has been registered to a disk */
>>>  #define QUEUE_FLAG_SCSI_PASSTHROUGH 30	/* queue supports SCSI commands */
>>>  #define QUEUE_FLAG_QUIESCED    31	/* queue has been quiesced */
>>> +#define QUEUE_FLAG_NOWAIT      32	/* stack device driver supports REQ_NOWAIT */
>
> Does this work on 32-bit, where sizeof(unsigned long) == 32?

I didn't get an answer to this one.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 35+ messages in thread
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait
2017-08-10  2:18 ` Jens Axboe
@ 2017-08-10 11:38   ` Goldwyn Rodrigues
2017-08-10 14:14     ` Jens Axboe
0 siblings, 1 reply; 35+ messages in thread
From: Goldwyn Rodrigues @ 2017-08-10 11:38 UTC (permalink / raw)
To: Jens Axboe, Shaohua Li
Cc: linux-block, hch, jack, linux-raid, dm-devel, Goldwyn Rodrigues

On 08/09/2017 09:18 PM, Jens Axboe wrote:
> On 08/08/2017 02:36 PM, Jens Axboe wrote:
>> On 08/08/2017 02:32 PM, Shaohua Li wrote:
>>>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>>>> index 25f6a0cb27d3..fae021ebec1b 100644
>>>> --- a/include/linux/blkdev.h
>>>> +++ b/include/linux/blkdev.h
>>>> @@ -633,6 +633,7 @@ struct request_queue {
>>>>  #define QUEUE_FLAG_REGISTERED  29	/* queue has been registered to a disk */
>>>>  #define QUEUE_FLAG_SCSI_PASSTHROUGH 30	/* queue supports SCSI commands */
>>>>  #define QUEUE_FLAG_QUIESCED    31	/* queue has been quiesced */
>>>> +#define QUEUE_FLAG_NOWAIT      32	/* stack device driver supports REQ_NOWAIT */
>>
>> Does this work on 32-bit, where sizeof(unsigned long) == 32?
>
> I didn't get an answer to this one.
>

Oh, I assumed the question was rhetorical.
No, it will not work on 32-bit. I was planning to change the field
queue_flags to u64. Is that okay?

-- 
Goldwyn

^ permalink raw reply	[flat|nested] 35+ messages in thread
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait
2017-08-10 11:38 ` Goldwyn Rodrigues
@ 2017-08-10 14:14   ` Jens Axboe
2017-08-10 17:15     ` Goldwyn Rodrigues
0 siblings, 1 reply; 35+ messages in thread
From: Jens Axboe @ 2017-08-10 14:14 UTC (permalink / raw)
To: Goldwyn Rodrigues, Shaohua Li
Cc: linux-block, hch, jack, linux-raid, dm-devel, Goldwyn Rodrigues

On 08/10/2017 05:38 AM, Goldwyn Rodrigues wrote:
> 
> 
> On 08/09/2017 09:18 PM, Jens Axboe wrote:
>> On 08/08/2017 02:36 PM, Jens Axboe wrote:
>>> On 08/08/2017 02:32 PM, Shaohua Li wrote:
>>>>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>>>>> index 25f6a0cb27d3..fae021ebec1b 100644
>>>>> --- a/include/linux/blkdev.h
>>>>> +++ b/include/linux/blkdev.h
>>>>> @@ -633,6 +633,7 @@ struct request_queue {
>>>>>  #define QUEUE_FLAG_REGISTERED  29	/* queue has been registered to a disk */
>>>>>  #define QUEUE_FLAG_SCSI_PASSTHROUGH 30	/* queue supports SCSI commands */
>>>>>  #define QUEUE_FLAG_QUIESCED    31	/* queue has been quiesced */
>>>>> +#define QUEUE_FLAG_NOWAIT      32	/* stack device driver supports REQ_NOWAIT */
>>>
>>> Does this work on 32-bit, where sizeof(unsigned long) == 32?
>>
>> I didn't get an answer to this one.
>>
> 
> Oh, I assumed the question was rhetorical.
> No, it will not work on 32-bit. I was planning to change the field
> queue_flags to u64. Is that okay?

No, besides that would not work with set/test_bit() and friends. Grab
a free bit instead.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 35+ messages in thread
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait
2017-08-10 14:14 ` Jens Axboe
@ 2017-08-10 17:15   ` Goldwyn Rodrigues
2017-08-10 17:17     ` Jens Axboe
0 siblings, 1 reply; 35+ messages in thread
From: Goldwyn Rodrigues @ 2017-08-10 17:15 UTC (permalink / raw)
To: Jens Axboe, Shaohua Li
Cc: linux-block, hch, jack, linux-raid, dm-devel, Goldwyn Rodrigues

On 08/10/2017 09:14 AM, Jens Axboe wrote:
> On 08/10/2017 05:38 AM, Goldwyn Rodrigues wrote:
>> 
>> 
>> On 08/09/2017 09:18 PM, Jens Axboe wrote:
>>> On 08/08/2017 02:36 PM, Jens Axboe wrote:
>>>> On 08/08/2017 02:32 PM, Shaohua Li wrote:
>>>>>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>>>>>> index 25f6a0cb27d3..fae021ebec1b 100644
>>>>>> --- a/include/linux/blkdev.h
>>>>>> +++ b/include/linux/blkdev.h
>>>>>> @@ -633,6 +633,7 @@ struct request_queue {
>>>>>>  #define QUEUE_FLAG_REGISTERED  29	/* queue has been registered to a disk */
>>>>>>  #define QUEUE_FLAG_SCSI_PASSTHROUGH 30	/* queue supports SCSI commands */
>>>>>>  #define QUEUE_FLAG_QUIESCED    31	/* queue has been quiesced */
>>>>>> +#define QUEUE_FLAG_NOWAIT      32	/* stack device driver supports REQ_NOWAIT */
>>>>
>>>> Does this work on 32-bit, where sizeof(unsigned long) == 32?
>>>
>>> I didn't get an answer to this one.
>>>
>> 
>> Oh, I assumed the question was rhetorical.
>> No, it will not work on 32-bit. I was planning to change the field
>> queue_flags to u64. Is that okay?
> 
> No, besides that would not work with set/test_bit() and friends. Grab
> a free bit instead.
> 

Which bit is free? I don't see any gaps in QUEUE_FLAG_*, and I am not
sure whether any of them is unused.

-- 
Goldwyn

^ permalink raw reply	[flat|nested] 35+ messages in thread
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait
2017-08-10 17:15 ` Goldwyn Rodrigues
@ 2017-08-10 17:17   ` Jens Axboe
0 siblings, 0 replies; 35+ messages in thread
From: Jens Axboe @ 2017-08-10 17:17 UTC (permalink / raw)
To: Goldwyn Rodrigues, Shaohua Li
Cc: linux-block, hch, jack, linux-raid, dm-devel, Goldwyn Rodrigues

On 08/10/2017 11:15 AM, Goldwyn Rodrigues wrote:
> 
> 
> On 08/10/2017 09:14 AM, Jens Axboe wrote:
>> On 08/10/2017 05:38 AM, Goldwyn Rodrigues wrote:
>>> 
>>> 
>>> On 08/09/2017 09:18 PM, Jens Axboe wrote:
>>>> On 08/08/2017 02:36 PM, Jens Axboe wrote:
>>>>> On 08/08/2017 02:32 PM, Shaohua Li wrote:
>>>>>>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>>>>>>> index 25f6a0cb27d3..fae021ebec1b 100644
>>>>>>> --- a/include/linux/blkdev.h
>>>>>>> +++ b/include/linux/blkdev.h
>>>>>>> @@ -633,6 +633,7 @@ struct request_queue {
>>>>>>>  #define QUEUE_FLAG_REGISTERED  29	/* queue has been registered to a disk */
>>>>>>>  #define QUEUE_FLAG_SCSI_PASSTHROUGH 30	/* queue supports SCSI commands */
>>>>>>>  #define QUEUE_FLAG_QUIESCED    31	/* queue has been quiesced */
>>>>>>> +#define QUEUE_FLAG_NOWAIT      32	/* stack device driver supports REQ_NOWAIT */
>>>>>
>>>>> Does this work on 32-bit, where sizeof(unsigned long) == 32?
>>>>
>>>> I didn't get an answer to this one.
>>>>
>>> 
>>> Oh, I assumed the question was rhetorical.
>>> No, it will not work on 32-bit. I was planning to change the field
>>> queue_flags to u64. Is that okay?
>> 
>> No, besides that would not work with set/test_bit() and friends. Grab
>> a free bit instead.
>> 
> 
> Which bit is free? I don't see any gaps in QUEUE_FLAG_*, and I am not
> sure whether any of them is unused.

Bit 0 is free in mainline, and I just looked, and two other bits are
gone as well:

http://git.kernel.dk/cgit/linux-block/commit/?h=for-4.14/block&id=e743eb1ecd5564b5ae0a4a76c1566f748a358839

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 35+ messages in thread
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait
2017-08-08 20:32 ` Shaohua Li
2017-08-08 20:36   ` Jens Axboe
@ 2017-08-09 11:44   ` Goldwyn Rodrigues
2017-08-09 15:02     ` Shaohua Li
1 sibling, 1 reply; 35+ messages in thread
From: Goldwyn Rodrigues @ 2017-08-09 11:44 UTC (permalink / raw)
To: Shaohua Li
Cc: linux-block, hch, jack, linux-raid, dm-devel, Goldwyn Rodrigues

On 08/08/2017 03:32 PM, Shaohua Li wrote:
> On Wed, Jul 26, 2017 at 06:57:58PM -0500, Goldwyn Rodrigues wrote:
>> From: Goldwyn Rodrigues <rgoldwyn@suse.com>
>>
>> Nowait is a feature of direct AIO, where users can request
>> to return immediately if the I/O is going to block. This translates
>> to REQ_NOWAIT in bio.bi_opf flags. While request based devices
>> don't wait, stacked devices such as md/dm will.
>>
>> In order to explicitly mark stacked devices as supported, we
>> set the QUEUE_FLAG_NOWAIT in the queue_flags and return -EAGAIN
>> whenever the device would block.
>
> Probably you should route this patch through Jens first; DM/MD are different trees.

Yes, I have sent it to linux-block as well, and he has already commented.

>
>> Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
>> ---
>>  block/blk-core.c       | 3 ++-
>>  include/linux/blkdev.h | 2 ++
>>  2 files changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/block/blk-core.c b/block/blk-core.c
>> index 970b9c9638c5..1c9a981d88e5 100644
>> --- a/block/blk-core.c
>> +++ b/block/blk-core.c
>> @@ -2025,7 +2025,8 @@ generic_make_request_checks(struct bio *bio)
>>  	 * if queue is not a request based queue.
>>  	 */
>>  
>> -	if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q))
>> +	if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q) &&
>> +	    !blk_queue_supports_nowait(q))
>>  		goto not_supported;
>>  
>>  	part = bio->bi_bdev->bd_part;
>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>> index 25f6a0cb27d3..fae021ebec1b 100644
>> --- a/include/linux/blkdev.h
>> +++ b/include/linux/blkdev.h
>> @@ -633,6 +633,7 @@ struct request_queue {
>>  #define QUEUE_FLAG_REGISTERED  29	/* queue has been registered to a disk */
>>  #define QUEUE_FLAG_SCSI_PASSTHROUGH 30	/* queue supports SCSI commands */
>>  #define QUEUE_FLAG_QUIESCED    31	/* queue has been quiesced */
>> +#define QUEUE_FLAG_NOWAIT      32	/* stack device driver supports REQ_NOWAIT */
>>
>>  #define QUEUE_FLAG_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
>>  				 (1 << QUEUE_FLAG_STACKABLE)	|	\
>> @@ -732,6 +733,7 @@ static inline void queue_flag_clear(unsigned int flag, struct request_queue *q)
>>  #define blk_queue_dax(q)	test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags)
>>  #define blk_queue_scsi_passthrough(q)	\
>>  	test_bit(QUEUE_FLAG_SCSI_PASSTHROUGH, &(q)->queue_flags)
>> +#define blk_queue_supports_nowait(q)	test_bit(QUEUE_FLAG_NOWAIT, &(q)->queue_flags)
>
> Should this bit consider the underlying disks? For example, if one raid
> array disk doesn't support NOWAIT, shouldn't we disable NOWAIT for the
> array?

Yes, it should. I will add a check before setting the flag. Thanks.
Request-based devices don't wait, so they would not have this flag set.
It is only the bio-based devices, with the make_request_fn hook, which
need this.

> I have another generic question. If a bio is split into 2 bios, one bio
> doesn't need to wait but the other needs to wait. We will return -EAGAIN
> for the second bio, so the whole bio will return -EAGAIN, but the first
> bio is already dispatched to disk. Is this correct behavior?
>

No, from a multi-device point of view, this is inconsistent. I have
tried making the request bio return -EAGAIN before the split, but I
shall check again. Where do you see this happening?

-- 
Goldwyn

^ permalink raw reply	[flat|nested] 35+ messages in thread
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait
2017-08-09 11:44 ` Goldwyn Rodrigues
@ 2017-08-09 15:02   ` Shaohua Li
2017-08-09 15:35     ` Goldwyn Rodrigues
0 siblings, 1 reply; 35+ messages in thread
From: Shaohua Li @ 2017-08-09 15:02 UTC (permalink / raw)
To: Goldwyn Rodrigues
Cc: linux-block, hch, jack, linux-raid, dm-devel, Goldwyn Rodrigues

On Wed, Aug 09, 2017 at 06:44:55AM -0500, Goldwyn Rodrigues wrote:
> 
> 
> On 08/08/2017 03:32 PM, Shaohua Li wrote:
> > On Wed, Jul 26, 2017 at 06:57:58PM -0500, Goldwyn Rodrigues wrote:
> >> From: Goldwyn Rodrigues <rgoldwyn@suse.com>
> >>
> >> Nowait is a feature of direct AIO, where users can request
> >> to return immediately if the I/O is going to block. This translates
> >> to REQ_NOWAIT in bio.bi_opf flags. While request based devices
> >> don't wait, stacked devices such as md/dm will.
> >>
> >> In order to explicitly mark stacked devices as supported, we
> >> set the QUEUE_FLAG_NOWAIT in the queue_flags and return -EAGAIN
> >> whenever the device would block.
> >
> > Probably you should route this patch through Jens first; DM/MD are different trees.
> 
> Yes, I have sent it to linux-block as well, and he has already commented.
> 
> >
> >> Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
> >> ---
> >>  block/blk-core.c       | 3 ++-
> >>  include/linux/blkdev.h | 2 ++
> >>  2 files changed, 4 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/block/blk-core.c b/block/blk-core.c
> >> index 970b9c9638c5..1c9a981d88e5 100644
> >> --- a/block/blk-core.c
> >> +++ b/block/blk-core.c
> >> @@ -2025,7 +2025,8 @@ generic_make_request_checks(struct bio *bio)
> >>  	 * if queue is not a request based queue.
> >>  	 */
> >>  
> >> -	if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q))
> >> +	if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q) &&
> >> +	    !blk_queue_supports_nowait(q))
> >>  		goto not_supported;
> >>  
> >>  	part = bio->bi_bdev->bd_part;
> >> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> >> index 25f6a0cb27d3..fae021ebec1b 100644
> >> --- a/include/linux/blkdev.h
> >> +++ b/include/linux/blkdev.h
> >> @@ -633,6 +633,7 @@ struct request_queue {
> >>  #define QUEUE_FLAG_REGISTERED  29	/* queue has been registered to a disk */
> >>  #define QUEUE_FLAG_SCSI_PASSTHROUGH 30	/* queue supports SCSI commands */
> >>  #define QUEUE_FLAG_QUIESCED    31	/* queue has been quiesced */
> >> +#define QUEUE_FLAG_NOWAIT      32	/* stack device driver supports REQ_NOWAIT */
> >>
> >>  #define QUEUE_FLAG_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
> >>  				 (1 << QUEUE_FLAG_STACKABLE)	|	\
> >> @@ -732,6 +733,7 @@ static inline void queue_flag_clear(unsigned int flag, struct request_queue *q)
> >>  #define blk_queue_dax(q)	test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags)
> >>  #define blk_queue_scsi_passthrough(q)	\
> >>  	test_bit(QUEUE_FLAG_SCSI_PASSTHROUGH, &(q)->queue_flags)
> >> +#define blk_queue_supports_nowait(q)	test_bit(QUEUE_FLAG_NOWAIT, &(q)->queue_flags)
> >
> > Should this bit consider the underlying disks? For example, if one raid
> > array disk doesn't support NOWAIT, shouldn't we disable NOWAIT for the
> > array?
> 
> Yes, it should. I will add a check before setting the flag. Thanks.
> Request-based devices don't wait, so they would not have this flag set.
> It is only the bio-based devices, with the make_request_fn hook, which
> need this.
> 
> > I have another generic question. If a bio is split into 2 bios, one bio
> > doesn't need to wait but the other needs to wait. We will return -EAGAIN
> > for the second bio, so the whole bio will return -EAGAIN, but the first
> > bio is already dispatched to disk. Is this correct behavior?
> >
> 
> No, from a multi-device point of view, this is inconsistent. I have
> tried making the request bio return -EAGAIN before the split, but I
> shall check again. Where do you see this happening?

No, this isn't multi-device specific; any driver can do it. Please see
blk_queue_split.

^ permalink raw reply	[flat|nested] 35+ messages in thread
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait
2017-08-09 15:02 ` Shaohua Li
@ 2017-08-09 15:35   ` Goldwyn Rodrigues
2017-08-09 20:21     ` Shaohua Li
0 siblings, 1 reply; 35+ messages in thread
From: Goldwyn Rodrigues @ 2017-08-09 15:35 UTC (permalink / raw)
To: Shaohua Li
Cc: linux-block, hch, jack, linux-raid, dm-devel, Goldwyn Rodrigues

On 08/09/2017 10:02 AM, Shaohua Li wrote:
> On Wed, Aug 09, 2017 at 06:44:55AM -0500, Goldwyn Rodrigues wrote:
>>
>>
>> On 08/08/2017 03:32 PM, Shaohua Li wrote:
>>> On Wed, Jul 26, 2017 at 06:57:58PM -0500, Goldwyn Rodrigues wrote:
>>>> From: Goldwyn Rodrigues <rgoldwyn@suse.com>
>>>>
>>>> Nowait is a feature of direct AIO, where users can request
>>>> to return immediately if the I/O is going to block. This translates
>>>> to REQ_NOWAIT in bio.bi_opf flags. While request based devices
>>>> don't wait, stacked devices such as md/dm will.
>>>>
>>>> In order to explicitly mark stacked devices as supported, we
>>>> set the QUEUE_FLAG_NOWAIT in the queue_flags and return -EAGAIN
>>>> whenever the device would block.
>>>
>>> Probably you should route this patch through Jens first; DM/MD are different trees.
>>
>> Yes, I have sent it to linux-block as well, and he has already commented.
>>
>>>
>>>> Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
>>>> ---
>>>>  block/blk-core.c       | 3 ++-
>>>>  include/linux/blkdev.h | 2 ++
>>>>  2 files changed, 4 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/block/blk-core.c b/block/blk-core.c
>>>> index 970b9c9638c5..1c9a981d88e5 100644
>>>> --- a/block/blk-core.c
>>>> +++ b/block/blk-core.c
>>>> @@ -2025,7 +2025,8 @@ generic_make_request_checks(struct bio *bio)
>>>>  	 * if queue is not a request based queue.
>>>>  	 */
>>>>  
>>>> -	if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q))
>>>> +	if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q) &&
>>>> +	    !blk_queue_supports_nowait(q))
>>>>  		goto not_supported;
>>>>  
>>>>  	part = bio->bi_bdev->bd_part;
>>>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>>>> index 25f6a0cb27d3..fae021ebec1b 100644
>>>> --- a/include/linux/blkdev.h
>>>> +++ b/include/linux/blkdev.h
>>>> @@ -633,6 +633,7 @@ struct request_queue {
>>>>  #define QUEUE_FLAG_REGISTERED  29	/* queue has been registered to a disk */
>>>>  #define QUEUE_FLAG_SCSI_PASSTHROUGH 30	/* queue supports SCSI commands */
>>>>  #define QUEUE_FLAG_QUIESCED    31	/* queue has been quiesced */
>>>> +#define QUEUE_FLAG_NOWAIT      32	/* stack device driver supports REQ_NOWAIT */
>>>>
>>>>  #define QUEUE_FLAG_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
>>>>  				 (1 << QUEUE_FLAG_STACKABLE)	|	\
>>>> @@ -732,6 +733,7 @@ static inline void queue_flag_clear(unsigned int flag, struct request_queue *q)
>>>>  #define blk_queue_dax(q)	test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags)
>>>>  #define blk_queue_scsi_passthrough(q)	\
>>>>  	test_bit(QUEUE_FLAG_SCSI_PASSTHROUGH, &(q)->queue_flags)
>>>> +#define blk_queue_supports_nowait(q)	test_bit(QUEUE_FLAG_NOWAIT, &(q)->queue_flags)
>>>
>>> Should this bit consider the underlying disks? For example, if one raid
>>> array disk doesn't support NOWAIT, shouldn't we disable NOWAIT for the
>>> array?
>>
>> Yes, it should. I will add a check before setting the flag. Thanks.
>> Request-based devices don't wait, so they would not have this flag set.
>> It is only the bio-based devices, with the make_request_fn hook, which
>> need this.
>>
>>> I have another generic question. If a bio is split into 2 bios, one bio
>>> doesn't need to wait but the other needs to wait. We will return -EAGAIN
>>> for the second bio, so the whole bio will return -EAGAIN, but the first
>>> bio is already dispatched to disk. Is this correct behavior?
>>>
>>
>> No, from a multi-device point of view, this is inconsistent. I have
>> tried making the request bio return -EAGAIN before the split, but I
>> shall check again. Where do you see this happening?
>
> No, this isn't multi-device specific; any driver can do it. Please see
> blk_queue_split.
>

In that case, the bio end_io function is chained and the bio of the
split will replicate the error to the parent (if not already set).

-- 
Goldwyn

^ permalink raw reply	[flat|nested] 35+ messages in thread
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait 2017-08-09 15:35 ` Goldwyn Rodrigues @ 2017-08-09 20:21 ` Shaohua Li 2017-08-09 22:16 ` Goldwyn Rodrigues 0 siblings, 1 reply; 35+ messages in thread From: Shaohua Li @ 2017-08-09 20:21 UTC (permalink / raw) To: Goldwyn Rodrigues Cc: linux-block, hch, jack, linux-raid, dm-devel, Goldwyn Rodrigues On Wed, Aug 09, 2017 at 10:35:39AM -0500, Goldwyn Rodrigues wrote: > > > On 08/09/2017 10:02 AM, Shaohua Li wrote: > > On Wed, Aug 09, 2017 at 06:44:55AM -0500, Goldwyn Rodrigues wrote: > >> > >> > >> On 08/08/2017 03:32 PM, Shaohua Li wrote: > >>> On Wed, Jul 26, 2017 at 06:57:58PM -0500, Goldwyn Rodrigues wrote: > >>>> From: Goldwyn Rodrigues <rgoldwyn@suse.com> > >>>> > >>>> Nowait is a feature of direct AIO, where users can request > >>>> to return immediately if the I/O is going to block. This translates > >>>> to REQ_NOWAIT in bio.bi_opf flags. While request based devices > >>>> don't wait, stacked devices such as md/dm will. > >>>> > >>>> In order to explicitly mark stacked devices as supported, we > >>>> set the QUEUE_FLAG_NOWAIT in the queue_flags and return -EAGAIN > >>>> whenever the device would block. > >>> > >>> probably you should route this patch to Jens first, DM/MD are different trees. > >> > >> Yes, I have sent it to linux-block as well, and he has commented as well. > >> > >> > >>> > >>>> Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com> > >>>> --- > >>>> block/blk-core.c | 3 ++- > >>>> include/linux/blkdev.h | 2 ++ > >>>> 2 files changed, 4 insertions(+), 1 deletion(-) > >>>> > >>>> diff --git a/block/blk-core.c b/block/blk-core.c > >>>> index 970b9c9638c5..1c9a981d88e5 100644 > >>>> --- a/block/blk-core.c > >>>> +++ b/block/blk-core.c > >>>> @@ -2025,7 +2025,8 @@ generic_make_request_checks(struct bio *bio) > >>>> * if queue is not a request based queue. 
> >>>> */ > >>>> > >>>> - if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q)) > >>>> + if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q) && > >>>> + !blk_queue_supports_nowait(q)) > >>>> goto not_supported; > >>>> > >>>> part = bio->bi_bdev->bd_part; > >>>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h > >>>> index 25f6a0cb27d3..fae021ebec1b 100644 > >>>> --- a/include/linux/blkdev.h > >>>> +++ b/include/linux/blkdev.h > >>>> @@ -633,6 +633,7 @@ struct request_queue { > >>>> #define QUEUE_FLAG_REGISTERED 29 /* queue has been registered to a disk */ > >>>> #define QUEUE_FLAG_SCSI_PASSTHROUGH 30 /* queue supports SCSI commands */ > >>>> #define QUEUE_FLAG_QUIESCED 31 /* queue has been quiesced */ > >>>> +#define QUEUE_FLAG_NOWAIT 32 /* stack device driver supports REQ_NOWAIT */ > >>>> > >>>> #define QUEUE_FLAG_DEFAULT ((1 << QUEUE_FLAG_IO_STAT) | \ > >>>> (1 << QUEUE_FLAG_STACKABLE) | \ > >>>> @@ -732,6 +733,7 @@ static inline void queue_flag_clear(unsigned int flag, struct request_queue *q) > >>>> #define blk_queue_dax(q) test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags) > >>>> #define blk_queue_scsi_passthrough(q) \ > >>>> test_bit(QUEUE_FLAG_SCSI_PASSTHROUGH, &(q)->queue_flags) > >>>> +#define blk_queue_supports_nowait(q) test_bit(QUEUE_FLAG_NOWAIT, &(q)->queue_flags) > >>> > >>> Should this bit consider under layer disks? For example, one raid array disk > >>> doesn't support NOWAIT, shouldn't we disable NOWAIT for the array? > >> > >> Yes, it should. I will add a check before setting the flag. Thanks. > >> Request-based devices don't wait. So, they would not have this flag set. > >> It is only the bio-based, with the make_request_fn hook which need this. > >> > >>> > >>> I have another generic question. If a bio is splitted into 2 bios, one bio > >>> doesn't need to wait but the other need to wait. We will return -EAGAIN for the > >>> second bio, so the whole bio will return -EAGAIN, but the first bio is already > >>> dispatched to disk. 
Is this correct behavior? > >>> > >> No, from a multi-device point of view, this is inconsistent. I have > tried the request bio returns -EAGAIN before the split, but I shall > check again. Where do you see this happening? > > > > No, this isn't multi-device specific, any driver can do it. Please see blk_queue_split. > > > > In that case, the bio end_io function is chained and the bio of the > split will replicate the error to the parent (if not already set). This doesn't answer my question. If a bio returns -EAGAIN, part of the bio has probably already been dispatched to disk (when the bio is split into 2 bios, one returns -EAGAIN while the other doesn't block and is dispatched to disk), so what is the application going to do? I think this is different from other I/O errors. For other I/O errors, the application will handle the error, whereas here we ask the app to retry the whole bio while the app doesn't know that part of the bio has already been written to disk. ^ permalink raw reply [flat|nested] 35+ messages in thread
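Goldwyn's planned fix for Shaohua's first point — checking the lower devices before advertising the flag — can be sketched in miniature. This is an illustrative userspace model, not kernel code: struct member_queue and FLAG_NOWAIT below are hypothetical stand-ins for struct request_queue and QUEUE_FLAG_NOWAIT.

```c
#include <stdbool.h>
#include <stddef.h>

#define FLAG_NOWAIT (1u << 0)	/* hypothetical stand-in bit value */

struct member_queue {
	unsigned int queue_flags;
};

/*
 * Only advertise nowait support for the stacked device if every
 * underlying queue supports it; an empty array supports nothing.
 */
static bool members_support_nowait(const struct member_queue *m, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (!(m[i].queue_flags & FLAG_NOWAIT))
			return false;
	return n > 0;
}
```

An md/dm array would run a check like this at assembly time and again when a member is hot-added, refusing (or clearing) its own flag if any member cannot honor REQ_NOWAIT.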
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait 2017-08-09 20:21 ` Shaohua Li @ 2017-08-09 22:16 ` Goldwyn Rodrigues 2017-08-10 1:17 ` Shaohua Li 0 siblings, 1 reply; 35+ messages in thread From: Goldwyn Rodrigues @ 2017-08-09 22:16 UTC (permalink / raw) To: Shaohua Li; +Cc: linux-block, hch, jack, linux-raid, dm-devel On 08/09/2017 03:21 PM, Shaohua Li wrote: > On Wed, Aug 09, 2017 at 10:35:39AM -0500, Goldwyn Rodrigues wrote: >> >> >> On 08/09/2017 10:02 AM, Shaohua Li wrote: >>> On Wed, Aug 09, 2017 at 06:44:55AM -0500, Goldwyn Rodrigues wrote: >>>> >>>> >>>> On 08/08/2017 03:32 PM, Shaohua Li wrote: >>>>> On Wed, Jul 26, 2017 at 06:57:58PM -0500, Goldwyn Rodrigues wrote: >>>>>> From: Goldwyn Rodrigues <rgoldwyn@suse.com> >>>>>> >>>>>> Nowait is a feature of direct AIO, where users can request >>>>>> to return immediately if the I/O is going to block. This translates >>>>>> to REQ_NOWAIT in bio.bi_opf flags. While request based devices >>>>>> don't wait, stacked devices such as md/dm will. >>>>>> >>>>>> In order to explicitly mark stacked devices as supported, we >>>>>> set the QUEUE_FLAG_NOWAIT in the queue_flags and return -EAGAIN >>>>>> whenever the device would block. >>>>> >>>>> probably you should route this patch to Jens first, DM/MD are different trees. >>>> >>>> Yes, I have sent it to linux-block as well, and he has commented as well. >>>> >>>> >>>>> >>>>>> Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com> >>>>>> --- >>>>>> block/blk-core.c | 3 ++- >>>>>> include/linux/blkdev.h | 2 ++ >>>>>> 2 files changed, 4 insertions(+), 1 deletion(-) >>>>>> >>>>>> diff --git a/block/blk-core.c b/block/blk-core.c >>>>>> index 970b9c9638c5..1c9a981d88e5 100644 >>>>>> --- a/block/blk-core.c >>>>>> +++ b/block/blk-core.c >>>>>> @@ -2025,7 +2025,8 @@ generic_make_request_checks(struct bio *bio) >>>>>> * if queue is not a request based queue. 
>>>>>> */ >>>>>> >>>>>> - if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q)) >>>>>> + if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q) && >>>>>> + !blk_queue_supports_nowait(q)) >>>>>> goto not_supported; >>>>>> >>>>>> part = bio->bi_bdev->bd_part; >>>>>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h >>>>>> index 25f6a0cb27d3..fae021ebec1b 100644 >>>>>> --- a/include/linux/blkdev.h >>>>>> +++ b/include/linux/blkdev.h >>>>>> @@ -633,6 +633,7 @@ struct request_queue { >>>>>> #define QUEUE_FLAG_REGISTERED 29 /* queue has been registered to a disk */ >>>>>> #define QUEUE_FLAG_SCSI_PASSTHROUGH 30 /* queue supports SCSI commands */ >>>>>> #define QUEUE_FLAG_QUIESCED 31 /* queue has been quiesced */ >>>>>> +#define QUEUE_FLAG_NOWAIT 32 /* stack device driver supports REQ_NOWAIT */ >>>>>> >>>>>> #define QUEUE_FLAG_DEFAULT ((1 << QUEUE_FLAG_IO_STAT) | \ >>>>>> (1 << QUEUE_FLAG_STACKABLE) | \ >>>>>> @@ -732,6 +733,7 @@ static inline void queue_flag_clear(unsigned int flag, struct request_queue *q) >>>>>> #define blk_queue_dax(q) test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags) >>>>>> #define blk_queue_scsi_passthrough(q) \ >>>>>> test_bit(QUEUE_FLAG_SCSI_PASSTHROUGH, &(q)->queue_flags) >>>>>> +#define blk_queue_supports_nowait(q) test_bit(QUEUE_FLAG_NOWAIT, &(q)->queue_flags) >>>>> >>>>> Should this bit consider under layer disks? For example, one raid array disk >>>>> doesn't support NOWAIT, shouldn't we disable NOWAIT for the array? >>>> >>>> Yes, it should. I will add a check before setting the flag. Thanks. >>>> Request-based devices don't wait. So, they would not have this flag set. >>>> It is only the bio-based, with the make_request_fn hook which need this. >>>> >>>>> >>>>> I have another generic question. If a bio is splitted into 2 bios, one bio >>>>> doesn't need to wait but the other need to wait. We will return -EAGAIN for the >>>>> second bio, so the whole bio will return -EAGAIN, but the first bio is already >>>>> dispatched to disk. 
Is this correct behavior? >>>>> >>>> >>>> No, from a multi-device point of view, this is inconsistent. I have >>>> tried the request bio returns -EAGAIN before the split, but I shall >>>> check again. Where do you see this happening? >>> >>> No, this isn't multi-device specific, any driver can do it. Please see blk_queue_split. >>> >> >> In that case, the bio end_io function is chained and the bio of the >> split will replicate the error to the parent (if not already set). > > this doesn't answer my question. So if a bio returns -EAGAIN, part of the bio > probably already dispatched to disk (if the bio is splitted to 2 bios, one > returns -EAGAIN, the other one doesn't block and dispatch to disk), what will > application be going to do? I think this is different to other IO errors. FOr > other IO errors, application will handle the error, while we ask app to retry > the whole bio here and app doesn't know part of bio is already written to disk. It is the same as for other I/O errors, such as EIO. You do not know which of all the submitted bios returned the error EIO. The application would, and should, consider the whole I/O as failed. The user application does not know about bios, or how the I/O is going to be split in the underlying layers. It knows about it at the system call level. In this case, the EAGAIN will be returned to the user for the whole I/O, not for a part of it. It is up to the application to try the I/O again with or without RWF_NOWAIT set. In direct I/O, it is bubbled out using dio->io_error. You can read about it in the patch header of the initial patchset at [1]. Use case: it is for applications having two threads, a compute thread and an I/O thread. The application would try to push as much AIO as possible in the compute thread using RWF_NOWAIT, and if that fails, would pass the I/O on to the I/O thread, which would perform it without RWF_NOWAIT. The end result, if done right, is that you save on context switches and all the synchronization/messaging machinery needed to perform I/O.
[1] http://marc.info/?l=linux-block&m=149789003305876&w=2 -- Goldwyn ^ permalink raw reply [flat|nested] 35+ messages in thread
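The compute-thread/I/O-thread pattern described above can be sketched from userspace with pwritev2(), which is how RWF_NOWAIT reaches the block layer. A minimal sketch, assuming a glibc with the pwritev2() wrapper (>= 2.26); in a real application the fallback branch would hand the request to the dedicated I/O thread rather than block inline:

```c
#define _GNU_SOURCE
#include <errno.h>
#include <sys/uio.h>
#include <unistd.h>

#ifndef RWF_NOWAIT
#define RWF_NOWAIT 0x00000008	/* uapi value from linux/fs.h, kernel >= 4.13 */
#endif

/*
 * Try the write with RWF_NOWAIT first, as the compute thread would.
 * If the kernel reports it would block (EAGAIN) or rejects the flag,
 * fall back to an ordinary blocking write -- the stand-in here for
 * handing the request over to the I/O thread.
 */
static ssize_t write_nowait_or_fallback(int fd, const void *buf, size_t len,
					off_t off)
{
	struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
	ssize_t ret = pwritev2(fd, &iov, 1, off, RWF_NOWAIT);

	if (ret < 0 && (errno == EAGAIN || errno == EOPNOTSUPP ||
			errno == EINVAL || errno == ENOSYS))
		ret = pwritev2(fd, &iov, 1, off, 0);	/* blocking path */
	return ret;
}
```

The fallback also triggers on EOPNOTSUPP/EINVAL/ENOSYS so the same code keeps working on kernels or filesystems that reject RWF_NOWAIT outright.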
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait 2017-08-09 22:16 ` Goldwyn Rodrigues @ 2017-08-10 1:17 ` Shaohua Li 2017-08-10 2:07 ` Goldwyn Rodrigues 0 siblings, 1 reply; 35+ messages in thread From: Shaohua Li @ 2017-08-10 1:17 UTC (permalink / raw) To: Goldwyn Rodrigues; +Cc: linux-block, hch, jack, linux-raid, dm-devel On Wed, Aug 09, 2017 at 05:16:23PM -0500, Goldwyn Rodrigues wrote: > > > On 08/09/2017 03:21 PM, Shaohua Li wrote: > > On Wed, Aug 09, 2017 at 10:35:39AM -0500, Goldwyn Rodrigues wrote: > >> > >> > >> On 08/09/2017 10:02 AM, Shaohua Li wrote: > >>> On Wed, Aug 09, 2017 at 06:44:55AM -0500, Goldwyn Rodrigues wrote: > >>>> > >>>> > >>>> On 08/08/2017 03:32 PM, Shaohua Li wrote: > >>>>> On Wed, Jul 26, 2017 at 06:57:58PM -0500, Goldwyn Rodrigues wrote: > >>>>>> From: Goldwyn Rodrigues <rgoldwyn@suse.com> > >>>>>> > >>>>>> Nowait is a feature of direct AIO, where users can request > >>>>>> to return immediately if the I/O is going to block. This translates > >>>>>> to REQ_NOWAIT in bio.bi_opf flags. While request based devices > >>>>>> don't wait, stacked devices such as md/dm will. > >>>>>> > >>>>>> In order to explicitly mark stacked devices as supported, we > >>>>>> set the QUEUE_FLAG_NOWAIT in the queue_flags and return -EAGAIN > >>>>>> whenever the device would block. > >>>>> > >>>>> probably you should route this patch to Jens first, DM/MD are different trees. > >>>> > >>>> Yes, I have sent it to linux-block as well, and he has commented as well. 
> >>>> > >>>> > >>>>> > >>>>>> Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com> > >>>>>> --- > >>>>>> block/blk-core.c | 3 ++- > >>>>>> include/linux/blkdev.h | 2 ++ > >>>>>> 2 files changed, 4 insertions(+), 1 deletion(-) > >>>>>> > >>>>>> diff --git a/block/blk-core.c b/block/blk-core.c > >>>>>> index 970b9c9638c5..1c9a981d88e5 100644 > >>>>>> --- a/block/blk-core.c > >>>>>> +++ b/block/blk-core.c > >>>>>> @@ -2025,7 +2025,8 @@ generic_make_request_checks(struct bio *bio) > >>>>>> * if queue is not a request based queue. > >>>>>> */ > >>>>>> > >>>>>> - if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q)) > >>>>>> + if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q) && > >>>>>> + !blk_queue_supports_nowait(q)) > >>>>>> goto not_supported; > >>>>>> > >>>>>> part = bio->bi_bdev->bd_part; > >>>>>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h > >>>>>> index 25f6a0cb27d3..fae021ebec1b 100644 > >>>>>> --- a/include/linux/blkdev.h > >>>>>> +++ b/include/linux/blkdev.h > >>>>>> @@ -633,6 +633,7 @@ struct request_queue { > >>>>>> #define QUEUE_FLAG_REGISTERED 29 /* queue has been registered to a disk */ > >>>>>> #define QUEUE_FLAG_SCSI_PASSTHROUGH 30 /* queue supports SCSI commands */ > >>>>>> #define QUEUE_FLAG_QUIESCED 31 /* queue has been quiesced */ > >>>>>> +#define QUEUE_FLAG_NOWAIT 32 /* stack device driver supports REQ_NOWAIT */ > >>>>>> > >>>>>> #define QUEUE_FLAG_DEFAULT ((1 << QUEUE_FLAG_IO_STAT) | \ > >>>>>> (1 << QUEUE_FLAG_STACKABLE) | \ > >>>>>> @@ -732,6 +733,7 @@ static inline void queue_flag_clear(unsigned int flag, struct request_queue *q) > >>>>>> #define blk_queue_dax(q) test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags) > >>>>>> #define blk_queue_scsi_passthrough(q) \ > >>>>>> test_bit(QUEUE_FLAG_SCSI_PASSTHROUGH, &(q)->queue_flags) > >>>>>> +#define blk_queue_supports_nowait(q) test_bit(QUEUE_FLAG_NOWAIT, &(q)->queue_flags) > >>>>> > >>>>> Should this bit consider under layer disks? 
For example, one raid array disk > >>>>> doesn't support NOWAIT, shouldn't we disable NOWAIT for the array? > >>>> > >>>> Yes, it should. I will add a check before setting the flag. Thanks. > >>>> Request-based devices don't wait. So, they would not have this flag set. > >>>> It is only the bio-based, with the make_request_fn hook which need this. > >>>> > >>>>> > >>>>> I have another generic question. If a bio is splitted into 2 bios, one bio > >>>>> doesn't need to wait but the other need to wait. We will return -EAGAIN for the > >>>>> second bio, so the whole bio will return -EAGAIN, but the first bio is already > >>>>> dispatched to disk. Is this correct behavior? > >>>>> > >>>> > >>>> No, from a multi-device point of view, this is inconsistent. I have > >>>> tried the request bio returns -EAGAIN before the split, but I shall > >>>> check again. Where do you see this happening? > >>> > >>> No, this isn't multi-device specific, any driver can do it. Please see blk_queue_split. > >>> > >> > >> In that case, the bio end_io function is chained and the bio of the > >> split will replicate the error to the parent (if not already set). > > > > this doesn't answer my question. So if a bio returns -EAGAIN, part of the bio > > probably already dispatched to disk (if the bio is splitted to 2 bios, one > > returns -EAGAIN, the other one doesn't block and dispatch to disk), what will > > application be going to do? I think this is different to other IO errors. FOr > > other IO errors, application will handle the error, while we ask app to retry > > the whole bio here and app doesn't know part of bio is already written to disk. > > It is the same as for other I/O errors as well, such as EIO. You do not > know which bio of all submitted bio's returned the error EIO. The > application would and should consider the whole I/O as failed. > > The user application does not know of bios, or how it is going to be > split in the underlying layers. It knows at the system call level. 
In > this case, the EAGAIN will be returned to the user for the whole I/O not > as a part of the I/O. It is up to application to try the I/O again with > or without RWF_NOWAIT set. In direct I/O, it is bubbled out using > dio->io_error. You can read about it at the patch header for the initial > patchset at [1]. > > Use case: It is for applications having two threads, a compute thread > and an I/O thread. It would try to push AIO as much as possible in the > compute thread using RWF_NOWAIT, and if it fails, would pass it on to > I/O thread which would perform without RWF_NOWAIT. End result if done > right is you save on context switches and all the > synchronization/messaging machinery to perform I/O. > > [1] http://marc.info/?l=linux-block&m=149789003305876&w=2 Yes, I knew the concept, but I didn't see the previous patches mention that -EAGAIN should actually be taken as a real I/O error. This means a lot to applications and makes the API hard to use. I'm wondering if we should disable bio splitting for NOWAIT bios, which would make -EAGAIN mean only 'try again'. Thanks, Shaohua ^ permalink raw reply [flat|nested] 35+ messages in thread
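Shaohua's suggestion — do not split REQ_NOWAIT bios at all, so -EAGAIN can never mean "partially written" — can be sketched as a toy model of the split decision. Everything here (the struct fields, the flag value, the helper) is a simplified stand-in for the real blk_queue_split() machinery, not the actual interface:

```c
#include <errno.h>
#include <stdbool.h>

#define REQ_NOWAIT (1u << 0)	/* stand-in for the real bi_opf flag */

struct bio {
	unsigned int opf;
	unsigned int sectors;
	int status;		/* 0 or negative errno, like bi_status */
};

/*
 * Toy split policy: if the bio exceeds the queue limit and carries
 * REQ_NOWAIT, fail the whole bio with -EAGAIN up front instead of
 * splitting it, so a later -EAGAIN can never leave a partial write.
 * Returns true if the bio may be submitted (possibly after a split).
 */
static bool nowait_split_check(struct bio *bio, unsigned int max_sectors)
{
	if (bio->sectors <= max_sectors)
		return true;			/* no split needed */
	if (bio->opf & REQ_NOWAIT) {
		bio->status = -EAGAIN;		/* refuse: caller ends the bio */
		return false;
	}
	/* a real implementation would split into <= max_sectors chunks here */
	return true;
}
```

Whether this is the right trade-off is exactly the open question in the thread: refusing to split keeps -EAGAIN unambiguous, at the cost of rejecting large nowait I/Os the queue could have partially served.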
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait 2017-08-10 1:17 ` Shaohua Li @ 2017-08-10 2:07 ` Goldwyn Rodrigues 2017-08-10 2:17 ` Jens Axboe 0 siblings, 1 reply; 35+ messages in thread From: Goldwyn Rodrigues @ 2017-08-10 2:07 UTC (permalink / raw) To: Shaohua Li; +Cc: linux-block, hch, jack, linux-raid, dm-devel On 08/09/2017 08:17 PM, Shaohua Li wrote: > On Wed, Aug 09, 2017 at 05:16:23PM -0500, Goldwyn Rodrigues wrote: >> >> >> On 08/09/2017 03:21 PM, Shaohua Li wrote: >>> On Wed, Aug 09, 2017 at 10:35:39AM -0500, Goldwyn Rodrigues wrote: >>>> >>>> >>>> On 08/09/2017 10:02 AM, Shaohua Li wrote: >>>>> On Wed, Aug 09, 2017 at 06:44:55AM -0500, Goldwyn Rodrigues wrote: >>>>>> >>>>>> >>>>>> On 08/08/2017 03:32 PM, Shaohua Li wrote: >>>>>>> On Wed, Jul 26, 2017 at 06:57:58PM -0500, Goldwyn Rodrigues wrote: >>>>>>>> From: Goldwyn Rodrigues <rgoldwyn@suse.com> >>>>>>>> >>>>>>>> Nowait is a feature of direct AIO, where users can request >>>>>>>> to return immediately if the I/O is going to block. This translates >>>>>>>> to REQ_NOWAIT in bio.bi_opf flags. While request based devices >>>>>>>> don't wait, stacked devices such as md/dm will. >>>>>>>> >>>>>>>> In order to explicitly mark stacked devices as supported, we >>>>>>>> set the QUEUE_FLAG_NOWAIT in the queue_flags and return -EAGAIN >>>>>>>> whenever the device would block. >>>>>>> >>>>>>> probably you should route this patch to Jens first, DM/MD are different trees. >>>>>> >>>>>> Yes, I have sent it to linux-block as well, and he has commented as well. 
>>>>>> >>>>>> >>>>>>> >>>>>>>> Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com> >>>>>>>> --- >>>>>>>> block/blk-core.c | 3 ++- >>>>>>>> include/linux/blkdev.h | 2 ++ >>>>>>>> 2 files changed, 4 insertions(+), 1 deletion(-) >>>>>>>> >>>>>>>> diff --git a/block/blk-core.c b/block/blk-core.c >>>>>>>> index 970b9c9638c5..1c9a981d88e5 100644 >>>>>>>> --- a/block/blk-core.c >>>>>>>> +++ b/block/blk-core.c >>>>>>>> @@ -2025,7 +2025,8 @@ generic_make_request_checks(struct bio *bio) >>>>>>>> * if queue is not a request based queue. >>>>>>>> */ >>>>>>>> >>>>>>>> - if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q)) >>>>>>>> + if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q) && >>>>>>>> + !blk_queue_supports_nowait(q)) >>>>>>>> goto not_supported; >>>>>>>> >>>>>>>> part = bio->bi_bdev->bd_part; >>>>>>>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h >>>>>>>> index 25f6a0cb27d3..fae021ebec1b 100644 >>>>>>>> --- a/include/linux/blkdev.h >>>>>>>> +++ b/include/linux/blkdev.h >>>>>>>> @@ -633,6 +633,7 @@ struct request_queue { >>>>>>>> #define QUEUE_FLAG_REGISTERED 29 /* queue has been registered to a disk */ >>>>>>>> #define QUEUE_FLAG_SCSI_PASSTHROUGH 30 /* queue supports SCSI commands */ >>>>>>>> #define QUEUE_FLAG_QUIESCED 31 /* queue has been quiesced */ >>>>>>>> +#define QUEUE_FLAG_NOWAIT 32 /* stack device driver supports REQ_NOWAIT */ >>>>>>>> >>>>>>>> #define QUEUE_FLAG_DEFAULT ((1 << QUEUE_FLAG_IO_STAT) | \ >>>>>>>> (1 << QUEUE_FLAG_STACKABLE) | \ >>>>>>>> @@ -732,6 +733,7 @@ static inline void queue_flag_clear(unsigned int flag, struct request_queue *q) >>>>>>>> #define blk_queue_dax(q) test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags) >>>>>>>> #define blk_queue_scsi_passthrough(q) \ >>>>>>>> test_bit(QUEUE_FLAG_SCSI_PASSTHROUGH, &(q)->queue_flags) >>>>>>>> +#define blk_queue_supports_nowait(q) test_bit(QUEUE_FLAG_NOWAIT, &(q)->queue_flags) >>>>>>> >>>>>>> Should this bit consider under layer disks? 
For example, one raid array disk >>>>>>> doesn't support NOWAIT, shouldn't we disable NOWAIT for the array? >>>>>> >>>>>> Yes, it should. I will add a check before setting the flag. Thanks. >>>>>> Request-based devices don't wait. So, they would not have this flag set. >>>>>> It is only the bio-based, with the make_request_fn hook which need this. >>>>>> >>>>>>> >>>>>>> I have another generic question. If a bio is splitted into 2 bios, one bio >>>>>>> doesn't need to wait but the other need to wait. We will return -EAGAIN for the >>>>>>> second bio, so the whole bio will return -EAGAIN, but the first bio is already >>>>>>> dispatched to disk. Is this correct behavior? >>>>>>> >>>>>> >>>>>> No, from a multi-device point of view, this is inconsistent. I have >>>>>> tried the request bio returns -EAGAIN before the split, but I shall >>>>>> check again. Where do you see this happening? >>>>> >>>>> No, this isn't multi-device specific, any driver can do it. Please see blk_queue_split. >>>>> >>>> >>>> In that case, the bio end_io function is chained and the bio of the >>>> split will replicate the error to the parent (if not already set). >>> >>> this doesn't answer my question. So if a bio returns -EAGAIN, part of the bio >>> probably already dispatched to disk (if the bio is splitted to 2 bios, one >>> returns -EAGAIN, the other one doesn't block and dispatch to disk), what will >>> application be going to do? I think this is different to other IO errors. FOr >>> other IO errors, application will handle the error, while we ask app to retry >>> the whole bio here and app doesn't know part of bio is already written to disk. >> >> It is the same as for other I/O errors as well, such as EIO. You do not >> know which bio of all submitted bio's returned the error EIO. The >> application would and should consider the whole I/O as failed. >> >> The user application does not know of bios, or how it is going to be >> split in the underlying layers. 
It knows at the system call level. In >> this case, the EAGAIN will be returned to the user for the whole I/O not >> as a part of the I/O. It is up to application to try the I/O again with >> or without RWF_NOWAIT set. In direct I/O, it is bubbled out using >> dio->io_error. You can read about it at the patch header for the initial >> patchset at [1]. >> >> Use case: It is for applications having two threads, a compute thread >> and an I/O thread. It would try to push AIO as much as possible in the >> compute thread using RWF_NOWAIT, and if it fails, would pass it on to >> I/O thread which would perform without RWF_NOWAIT. End result if done >> right is you save on context switches and all the >> synchronization/messaging machinery to perform I/O. >> >> [1] http://marc.info/?l=linux-block&m=149789003305876&w=2 > > Yes, I knew the concept, but I didn't see previous patches mentioned the > -EAGAIN actually should be taken as a real IO error. This means a lot to > applications and make the API hard to use. I'm wondering if we should disable > bio split for NOWAIT bio, which will make the -EAGAIN only mean 'try again'. > Don't take it as EAGAIN, but read it as EWOULDBLOCK. Why do you say the API is hard to use? Do you have a case to back it up? No, not splitting the bio does not make sense here. I do not see any advantage in it, unless you can present a case otherwise. -- Goldwyn ^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait 2017-08-10 2:07 ` Goldwyn Rodrigues @ 2017-08-10 2:17 ` Jens Axboe 2017-08-10 11:49 ` Goldwyn Rodrigues 0 siblings, 1 reply; 35+ messages in thread From: Jens Axboe @ 2017-08-10 2:17 UTC (permalink / raw) To: Goldwyn Rodrigues, Shaohua Li Cc: linux-block, hch, jack, linux-raid, dm-devel On 08/09/2017 08:07 PM, Goldwyn Rodrigues wrote: >>>>>>> No, from a multi-device point of view, this is inconsistent. I >>>>>>> have tried the request bio returns -EAGAIN before the split, but >>>>>>> I shall check again. Where do you see this happening? >>>>>> >>>>>> No, this isn't multi-device specific, any driver can do it. >>>>>> Please see blk_queue_split. >>>>>> >>>>> >>>>> In that case, the bio end_io function is chained and the bio of >>>>> the split will replicate the error to the parent (if not already >>>>> set). >>>> >>>> this doesn't answer my question. So if a bio returns -EAGAIN, part >>>> of the bio probably already dispatched to disk (if the bio is >>>> splitted to 2 bios, one returns -EAGAIN, the other one doesn't >>>> block and dispatch to disk), what will application be going to do? >>>> I think this is different to other IO errors. FOr other IO errors, >>>> application will handle the error, while we ask app to retry the >>>> whole bio here and app doesn't know part of bio is already written >>>> to disk. >>> >>> It is the same as for other I/O errors as well, such as EIO. You do >>> not know which bio of all submitted bio's returned the error EIO. >>> The application would and should consider the whole I/O as failed. >>> >>> The user application does not know of bios, or how it is going to be >>> split in the underlying layers. It knows at the system call level. >>> In this case, the EAGAIN will be returned to the user for the whole >>> I/O not as a part of the I/O. It is up to application to try the I/O >>> again with or without RWF_NOWAIT set. 
In direct I/O, it is bubbled >>> out using dio->io_error. You can read about it at the patch header >>> for the initial patchset at [1]. >>> >>> Use case: It is for applications having two threads, a compute >>> thread and an I/O thread. It would try to push AIO as much as >>> possible in the compute thread using RWF_NOWAIT, and if it fails, >>> would pass it on to I/O thread which would perform without >>> RWF_NOWAIT. End result if done right is you save on context switches >>> and all the synchronization/messaging machinery to perform I/O. >>> >>> [1] http://marc.info/?l=linux-block&m=149789003305876&w=2 >> >> Yes, I knew the concept, but I didn't see previous patches mentioned >> the -EAGAIN actually should be taken as a real IO error. This means a >> lot to applications and make the API hard to use. I'm wondering if we >> should disable bio split for NOWAIT bio, which will make the -EAGAIN >> only mean 'try again'. > > Don't take it as EAGAIN, but read it as EWOULDBLOCK. Why do you say > the API is hard to use? Do you have a case to back it up? Because it is hard to use, and potentially suboptimal. Let's say you're doing a 1MB write and we hit EWOULDBLOCK for the last split. Do we return a short write, or do we return EWOULDBLOCK? If the latter, then that really sucks from an API point of view. > No, not splitting the bio does not make sense here. I do not see any > advantage in it, unless you can present a case otherwise. It ties back into the "hard to use" that I do agree with IFF we don't return the short write. It's hard for an application to use that efficiently: if we write 1MB-128K but get EWOULDBLOCK, we then re-write the full 1MB from a different context. -- Jens Axboe ^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait 2017-08-10 2:17 ` Jens Axboe @ 2017-08-10 11:49 ` Goldwyn Rodrigues 2017-08-10 14:23 ` Jens Axboe 2017-08-10 14:25 ` Jan Kara 0 siblings, 2 replies; 35+ messages in thread From: Goldwyn Rodrigues @ 2017-08-10 11:49 UTC (permalink / raw) To: Jens Axboe, Shaohua Li; +Cc: linux-block, hch, jack, linux-raid, dm-devel On 08/09/2017 09:17 PM, Jens Axboe wrote: > On 08/09/2017 08:07 PM, Goldwyn Rodrigues wrote: >>>>>>>> No, from a multi-device point of view, this is inconsistent. I >>>>>>>> have tried the request bio returns -EAGAIN before the split, but >>>>>>>> I shall check again. Where do you see this happening? >>>>>>> >>>>>>> No, this isn't multi-device specific, any driver can do it. >>>>>>> Please see blk_queue_split. >>>>>>> >>>>>> >>>>>> In that case, the bio end_io function is chained and the bio of >>>>>> the split will replicate the error to the parent (if not already >>>>>> set). >>>>> >>>>> this doesn't answer my question. So if a bio returns -EAGAIN, part >>>>> of the bio probably already dispatched to disk (if the bio is >>>>> splitted to 2 bios, one returns -EAGAIN, the other one doesn't >>>>> block and dispatch to disk), what will application be going to do? >>>>> I think this is different to other IO errors. FOr other IO errors, >>>>> application will handle the error, while we ask app to retry the >>>>> whole bio here and app doesn't know part of bio is already written >>>>> to disk. >>>> >>>> It is the same as for other I/O errors as well, such as EIO. You do >>>> not know which bio of all submitted bio's returned the error EIO. >>>> The application would and should consider the whole I/O as failed. >>>> >>>> The user application does not know of bios, or how it is going to be >>>> split in the underlying layers. It knows at the system call level. >>>> In this case, the EAGAIN will be returned to the user for the whole >>>> I/O not as a part of the I/O. 
It is up to application to try the I/O >>>> again with or without RWF_NOWAIT set. In direct I/O, it is bubbled >>>> out using dio->io_error. You can read about it at the patch header >>>> for the initial patchset at [1]. >>>> >>>> Use case: It is for applications having two threads, a compute >>>> thread and an I/O thread. It would try to push AIO as much as >>>> possible in the compute thread using RWF_NOWAIT, and if it fails, >>>> would pass it on to I/O thread which would perform without >>>> RWF_NOWAIT. End result if done right is you save on context switches >>>> and all the synchronization/messaging machinery to perform I/O. >>>> >>>> [1] http://marc.info/?l=linux-block&m=149789003305876&w=2 >>> >>> Yes, I knew the concept, but I didn't see previous patches mentioned >>> the -EAGAIN actually should be taken as a real IO error. This means a >>> lot to applications and make the API hard to use. I'm wondering if we >>> should disable bio split for NOWAIT bio, which will make the -EAGAIN >>> only mean 'try again'. >> >> Don't take it as EAGAIN, but read it as EWOULDBLOCK. Why do you say >> the API is hard to use? Do you have a case to back it up? > > Because it is hard to use, and potentially suboptimal. Let's say you're > doing a 1MB write, we hit EWOULDBLOCK for the last split. Do we return a > short write, or do we return EWOULDBLOCK? If the latter, then that > really sucks from an API point of view. > >> No, not splitting the bio does not make sense here. I do not see any >> advantage in it, unless you can present a case otherwise. > > It ties back into the "hard to use" that I do agree with IFF we don't > return the short write. It's hard for an application to use that > efficiently, if we write 1MB-128K but get EWOULDBLOCK, the re-write the > full 1MB from a different context. > It returns the error code only and not short reads/writes. But isn't that true for all system calls in case of error? 
For AIO, there are two result fields in io_event, one of which could be used for the error while the other is used for the amount of data read or written; however, only one is used today. This will not work with the pread()/pwrite() calls, though, because of the limitation of a single return value. Finally, what if EWOULDBLOCK is returned for an earlier bio (say at offset 128K) of a 1MB pwrite(), while the remaining seven 128K chunks are successful? What short return value should the system call return? -- Goldwyn ^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait 2017-08-10 11:49 ` Goldwyn Rodrigues @ 2017-08-10 14:23 ` Jens Axboe 2017-08-10 14:25 ` Jan Kara 1 sibling, 0 replies; 35+ messages in thread From: Jens Axboe @ 2017-08-10 14:23 UTC (permalink / raw) To: Goldwyn Rodrigues, Shaohua Li Cc: linux-block, hch, jack, linux-raid, dm-devel On 08/10/2017 05:49 AM, Goldwyn Rodrigues wrote: > > > On 08/09/2017 09:17 PM, Jens Axboe wrote: >> On 08/09/2017 08:07 PM, Goldwyn Rodrigues wrote: >>>>>>>>> No, from a multi-device point of view, this is inconsistent. I >>>>>>>>> have tried the request bio returns -EAGAIN before the split, but >>>>>>>>> I shall check again. Where do you see this happening? >>>>>>>> >>>>>>>> No, this isn't multi-device specific, any driver can do it. >>>>>>>> Please see blk_queue_split. >>>>>>>> >>>>>>> >>>>>>> In that case, the bio end_io function is chained and the bio of >>>>>>> the split will replicate the error to the parent (if not already >>>>>>> set). >>>>>> >>>>>> this doesn't answer my question. So if a bio returns -EAGAIN, part >>>>>> of the bio probably already dispatched to disk (if the bio is >>>>>> splitted to 2 bios, one returns -EAGAIN, the other one doesn't >>>>>> block and dispatch to disk), what will application be going to do? >>>>>> I think this is different to other IO errors. FOr other IO errors, >>>>>> application will handle the error, while we ask app to retry the >>>>>> whole bio here and app doesn't know part of bio is already written >>>>>> to disk. >>>>> >>>>> It is the same as for other I/O errors as well, such as EIO. You do >>>>> not know which bio of all submitted bio's returned the error EIO. >>>>> The application would and should consider the whole I/O as failed. >>>>> >>>>> The user application does not know of bios, or how it is going to be >>>>> split in the underlying layers. It knows at the system call level. 
>>>>> In this case, the EAGAIN will be returned to the user for the whole >>>>> I/O not as a part of the I/O. It is up to application to try the I/O >>>>> again with or without RWF_NOWAIT set. In direct I/O, it is bubbled >>>>> out using dio->io_error. You can read about it at the patch header >>>>> for the initial patchset at [1]. >>>>> >>>>> Use case: It is for applications having two threads, a compute >>>>> thread and an I/O thread. It would try to push AIO as much as >>>>> possible in the compute thread using RWF_NOWAIT, and if it fails, >>>>> would pass it on to I/O thread which would perform without >>>>> RWF_NOWAIT. End result if done right is you save on context switches >>>>> and all the synchronization/messaging machinery to perform I/O. >>>>> >>>>> [1] http://marc.info/?l=linux-block&m=149789003305876&w=2 >>>> >>>> Yes, I knew the concept, but I didn't see previous patches mentioned >>>> the -EAGAIN actually should be taken as a real IO error. This means a >>>> lot to applications and make the API hard to use. I'm wondering if we >>>> should disable bio split for NOWAIT bio, which will make the -EAGAIN >>>> only mean 'try again'. >>> >>> Don't take it as EAGAIN, but read it as EWOULDBLOCK. Why do you say >>> the API is hard to use? Do you have a case to back it up? >> >> Because it is hard to use, and potentially suboptimal. Let's say you're >> doing a 1MB write, we hit EWOULDBLOCK for the last split. Do we return a >> short write, or do we return EWOULDBLOCK? If the latter, then that >> really sucks from an API point of view. >> >>> No, not splitting the bio does not make sense here. I do not see any >>> advantage in it, unless you can present a case otherwise. >> >> It ties back into the "hard to use" that I do agree with IFF we don't >> return the short write. It's hard for an application to use that >> efficiently, if we write 1MB-128K but get EWOULDBLOCK, the re-write the >> full 1MB from a different context. 
>> > > It returns the error code only and not short reads/writes. But isn't > that true for all system calls in case of error? It's not a hard error. If you wrote 896K in the example above, I'd really expect the return value to be 896*1024. The API is hard to use efficiently if that's not the case. > For aio, there are two result fields in io_event out of which one could > be used for error while the other be used for amount of writes/reads > performed. However, only one is used. This will not work with > pread()/pwrite() calls though because of the limitation of return values. Don't invent something new for this; the mechanism already exists for returning a short read or write. That's how all of them have worked for decades. > Finally, what if the EWOULDBLOCK is returned for an earlier bio (say > offset 128k) for a 1MB pwrite(), while the rest of the 7 128K are > successful. What short return value should the system call return? It should return 128*1024, since that's how much was successfully done from the start offset. But yes, this is exactly the point that I brought up, and why Shaohua's suggestion to perhaps treat splits differently should not be discarded so quickly. -- Jens Axboe
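As a userspace illustration of the semantics being debated here: a caller using RWF_NOWAIT (via pwritev2(2)) has to be prepared both for -EAGAIN and for a short write, and can fall back to an ordinary blocking write for whatever remains. A minimal sketch; the helper name and the fallback policy are mine, not from the thread, and EOPNOTSUPP/EINVAL are treated as "NOWAIT unsupported here" so the sketch also runs on kernels or file types without NOWAIT support:

```c
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

#ifndef RWF_NOWAIT
#define RWF_NOWAIT 0x00000008	/* uapi value; absent from older headers */
#endif

/*
 * Try the write without blocking first; on EAGAIN (would block), on
 * EOPNOTSUPP/EINVAL (NOWAIT not supported for this fd/kernel), or on a
 * short write, finish the remainder with a plain blocking pwrite().
 * Returns len on success, -1 with errno set on a real I/O error.
 */
static ssize_t write_nowait_fallback(int fd, const void *buf, size_t len,
				     off_t off)
{
	struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
	ssize_t done = pwritev2(fd, &iov, 1, off, RWF_NOWAIT);

	if (done < 0) {
		if (errno != EAGAIN && errno != EOPNOTSUPP && errno != EINVAL)
			return -1;	/* a real error, not "would block" */
		done = 0;		/* nothing written; fall back below */
	}
	while ((size_t)done < len) {	/* blocking path for the remainder */
		ssize_t n = pwrite(fd, (const char *)buf + done,
				   len - done, off + done);
		if (n < 0)
			return -1;
		done += n;
	}
	return done;
}
```

In the two-thread model described earlier in the thread, the RWF_NOWAIT attempt would live in the compute thread and the blocking tail would be handed to the I/O thread rather than done inline.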
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait 2017-08-10 11:49 ` Goldwyn Rodrigues 2017-08-10 14:23 ` Jens Axboe @ 2017-08-10 14:25 ` Jan Kara 2017-08-10 14:28 ` Jens Axboe 1 sibling, 1 reply; 35+ messages in thread From: Jan Kara @ 2017-08-10 14:25 UTC (permalink / raw) To: Goldwyn Rodrigues Cc: Jens Axboe, Shaohua Li, linux-block, hch, jack, linux-raid, dm-devel On Thu 10-08-17 06:49:53, Goldwyn Rodrigues wrote: > On 08/09/2017 09:17 PM, Jens Axboe wrote: > > On 08/09/2017 08:07 PM, Goldwyn Rodrigues wrote: > >>>>>>>> No, from a multi-device point of view, this is inconsistent. I > >>>>>>>> have tried the request bio returns -EAGAIN before the split, but > >>>>>>>> I shall check again. Where do you see this happening? > >>>>>>> > >>>>>>> No, this isn't multi-device specific, any driver can do it. > >>>>>>> Please see blk_queue_split. > >>>>>>> > >>>>>> > >>>>>> In that case, the bio end_io function is chained and the bio of > >>>>>> the split will replicate the error to the parent (if not already > >>>>>> set). > >>>>> > >>>>> this doesn't answer my question. So if a bio returns -EAGAIN, part > >>>>> of the bio probably already dispatched to disk (if the bio is > >>>>> splitted to 2 bios, one returns -EAGAIN, the other one doesn't > >>>>> block and dispatch to disk), what will application be going to do? > >>>>> I think this is different to other IO errors. FOr other IO errors, > >>>>> application will handle the error, while we ask app to retry the > >>>>> whole bio here and app doesn't know part of bio is already written > >>>>> to disk. > >>>> > >>>> It is the same as for other I/O errors as well, such as EIO. You do > >>>> not know which bio of all submitted bio's returned the error EIO. > >>>> The application would and should consider the whole I/O as failed. > >>>> > >>>> The user application does not know of bios, or how it is going to be > >>>> split in the underlying layers. It knows at the system call level. 
> >>>> In this case, the EAGAIN will be returned to the user for the whole > >>>> I/O not as a part of the I/O. It is up to application to try the I/O > >>>> again with or without RWF_NOWAIT set. In direct I/O, it is bubbled > >>>> out using dio->io_error. You can read about it at the patch header > >>>> for the initial patchset at [1]. > >>>> > >>>> Use case: It is for applications having two threads, a compute > >>>> thread and an I/O thread. It would try to push AIO as much as > >>>> possible in the compute thread using RWF_NOWAIT, and if it fails, > >>>> would pass it on to I/O thread which would perform without > >>>> RWF_NOWAIT. End result if done right is you save on context switches > >>>> and all the synchronization/messaging machinery to perform I/O. > >>>> > >>>> [1] http://marc.info/?l=linux-block&m=149789003305876&w=2 > >>> > >>> Yes, I knew the concept, but I didn't see previous patches mentioned > >>> the -EAGAIN actually should be taken as a real IO error. This means a > >>> lot to applications and make the API hard to use. I'm wondering if we > >>> should disable bio split for NOWAIT bio, which will make the -EAGAIN > >>> only mean 'try again'. > >> > >> Don't take it as EAGAIN, but read it as EWOULDBLOCK. Why do you say > >> the API is hard to use? Do you have a case to back it up? > > > > Because it is hard to use, and potentially suboptimal. Let's say you're > > doing a 1MB write, we hit EWOULDBLOCK for the last split. Do we return a > > short write, or do we return EWOULDBLOCK? If the latter, then that > > really sucks from an API point of view. > > > >> No, not splitting the bio does not make sense here. I do not see any > >> advantage in it, unless you can present a case otherwise. > > > > It ties back into the "hard to use" that I do agree with IFF we don't > > return the short write. It's hard for an application to use that > > efficiently, if we write 1MB-128K but get EWOULDBLOCK, the re-write the > > full 1MB from a different context. 
> > > It returns the error code only and not short reads/writes. But isn't > that true for all system calls in case of error? > > For aio, there are two result fields in io_event out of which one could > be used for error while the other be used for amount of writes/reads > performed. However, only one is used. This will not work with > pread()/pwrite() calls though because of the limitation of return values. > > Finally, what if the EWOULDBLOCK is returned for an earlier bio (say > offset 128k) for a 1MB pwrite(), while the rest of the 7 128K are > successful. What short return value should the system call return? This is indeed tricky. If an application submits a 1MB write, I don't think we can afford to write just an arbitrary subset of it. IMHO that violates too much of how writes have traditionally behaved. Even short writes trigger bugs in various applications, but I'm willing to require that applications using NOWAIT IO can handle these. However, writing an arbitrary subset looks like a nasty catch. IMHO we should not submit further bios until we are sure the current one does not return EWOULDBLOCK when splitting a larger one... Honza -- Jan Kara <jack@suse.com> SUSE Labs, CR
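The return-value question can be made concrete with a toy model (purely illustrative userspace code, not the patchset's implementation; all names are mine): a 1MB request split into eight 128K chunks, submitted strictly in order, where submission stops at the first chunk that would block, so the short-write result always covers a contiguous prefix and never leaves a hole:

```c
#include <stddef.h>

#define CHUNK   (128 * 1024)
#define NCHUNKS 8

/*
 * Toy model: chunks are submitted in order; would_block(i) stands in for
 * the lower device saying it cannot take chunk i without blocking.
 * Submission stops at the first such chunk, so the caller sees a short
 * write covering only the contiguous prefix from the start offset.
 */
static size_t submit_split_nowait(int (*would_block)(int chunk))
{
	size_t done = 0;

	for (int i = 0; i < NCHUNKS; i++) {
		if (would_block(i))
			break;		/* stop; later chunks never issued */
		done += CHUNK;
	}
	return done;			/* the short-write result */
}

/* Example policies for the two cases discussed in the thread. */
static int last_chunk_blocks(int chunk)   { return chunk == 7; }
static int second_chunk_blocks(int chunk) { return chunk == 1; }
```

With sequential submission, the examples from the thread fall out directly: if only the last 128K chunk would block, the call returns 896*1024; if the chunk at offset 128K would block, it returns 128*1024 and nothing past that offset was issued.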
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait 2017-08-10 14:25 ` Jan Kara @ 2017-08-10 14:28 ` Jens Axboe 2017-08-10 17:15 ` Goldwyn Rodrigues 0 siblings, 1 reply; 35+ messages in thread From: Jens Axboe @ 2017-08-10 14:28 UTC (permalink / raw) To: Jan Kara, Goldwyn Rodrigues Cc: Shaohua Li, linux-block, hch, linux-raid, dm-devel On 08/10/2017 08:25 AM, Jan Kara wrote: > On Thu 10-08-17 06:49:53, Goldwyn Rodrigues wrote: >> On 08/09/2017 09:17 PM, Jens Axboe wrote: >>> On 08/09/2017 08:07 PM, Goldwyn Rodrigues wrote: >>>>>>>>>> No, from a multi-device point of view, this is inconsistent. I >>>>>>>>>> have tried the request bio returns -EAGAIN before the split, but >>>>>>>>>> I shall check again. Where do you see this happening? >>>>>>>>> >>>>>>>>> No, this isn't multi-device specific, any driver can do it. >>>>>>>>> Please see blk_queue_split. >>>>>>>>> >>>>>>>> >>>>>>>> In that case, the bio end_io function is chained and the bio of >>>>>>>> the split will replicate the error to the parent (if not already >>>>>>>> set). >>>>>>> >>>>>>> this doesn't answer my question. So if a bio returns -EAGAIN, part >>>>>>> of the bio probably already dispatched to disk (if the bio is >>>>>>> splitted to 2 bios, one returns -EAGAIN, the other one doesn't >>>>>>> block and dispatch to disk), what will application be going to do? >>>>>>> I think this is different to other IO errors. FOr other IO errors, >>>>>>> application will handle the error, while we ask app to retry the >>>>>>> whole bio here and app doesn't know part of bio is already written >>>>>>> to disk. >>>>>> >>>>>> It is the same as for other I/O errors as well, such as EIO. You do >>>>>> not know which bio of all submitted bio's returned the error EIO. >>>>>> The application would and should consider the whole I/O as failed. >>>>>> >>>>>> The user application does not know of bios, or how it is going to be >>>>>> split in the underlying layers. It knows at the system call level. 
>>>>>> In this case, the EAGAIN will be returned to the user for the whole >>>>>> I/O not as a part of the I/O. It is up to application to try the I/O >>>>>> again with or without RWF_NOWAIT set. In direct I/O, it is bubbled >>>>>> out using dio->io_error. You can read about it at the patch header >>>>>> for the initial patchset at [1]. >>>>>> >>>>>> Use case: It is for applications having two threads, a compute >>>>>> thread and an I/O thread. It would try to push AIO as much as >>>>>> possible in the compute thread using RWF_NOWAIT, and if it fails, >>>>>> would pass it on to I/O thread which would perform without >>>>>> RWF_NOWAIT. End result if done right is you save on context switches >>>>>> and all the synchronization/messaging machinery to perform I/O. >>>>>> >>>>>> [1] http://marc.info/?l=linux-block&m=149789003305876&w=2 >>>>> >>>>> Yes, I knew the concept, but I didn't see previous patches mentioned >>>>> the -EAGAIN actually should be taken as a real IO error. This means a >>>>> lot to applications and make the API hard to use. I'm wondering if we >>>>> should disable bio split for NOWAIT bio, which will make the -EAGAIN >>>>> only mean 'try again'. >>>> >>>> Don't take it as EAGAIN, but read it as EWOULDBLOCK. Why do you say >>>> the API is hard to use? Do you have a case to back it up? >>> >>> Because it is hard to use, and potentially suboptimal. Let's say you're >>> doing a 1MB write, we hit EWOULDBLOCK for the last split. Do we return a >>> short write, or do we return EWOULDBLOCK? If the latter, then that >>> really sucks from an API point of view. >>> >>>> No, not splitting the bio does not make sense here. I do not see any >>>> advantage in it, unless you can present a case otherwise. >>> >>> It ties back into the "hard to use" that I do agree with IFF we don't >>> return the short write. It's hard for an application to use that >>> efficiently, if we write 1MB-128K but get EWOULDBLOCK, the re-write the >>> full 1MB from a different context. 
>>> >> It returns the error code only and not short reads/writes. But isn't >> that true for all system calls in case of error? >> >> For aio, there are two result fields in io_event out of which one could >> be used for error while the other be used for amount of writes/reads >> performed. However, only one is used. This will not work with >> pread()/pwrite() calls though because of the limitation of return values. >> >> Finally, what if the EWOULDBLOCK is returned for an earlier bio (say >> offset 128k) for a 1MB pwrite(), while the rest of the 7 128K are >> successful. What short return value should the system call return? > > This is indeed tricky. If an application submits 1MB write, I don't think > we can afford to just write arbitrary subset of it. That just IMHO too much > violates how writes traditionally behaved. Even short writes trigger bugs > in various applications but I'm willing to require that applications using > NOWAIT IO can handle these. However writing arbitrary subset looks like a > nasty catch. IMHO we should not submit further bios until we are sure > current one does not return EWOULDBLOCK when splitting a larger one... Exactly, that's the point that both Shaohua and I were getting at. Short writes should be fine, especially if NOWAIT is set. Discontig writes should also be OK, but they're horrible and inefficient. If we do that, then using this feature is a net loss, not a win by any stretch. -- Jens Axboe
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait 2017-08-10 14:28 ` Jens Axboe @ 2017-08-10 17:15 ` Goldwyn Rodrigues 2017-08-10 17:20 ` Jens Axboe 0 siblings, 1 reply; 35+ messages in thread From: Goldwyn Rodrigues @ 2017-08-10 17:15 UTC (permalink / raw) To: Jens Axboe, Jan Kara; +Cc: Shaohua Li, linux-block, hch, linux-raid, dm-devel On 08/10/2017 09:28 AM, Jens Axboe wrote: > On 08/10/2017 08:25 AM, Jan Kara wrote: >> On Thu 10-08-17 06:49:53, Goldwyn Rodrigues wrote: >>> On 08/09/2017 09:17 PM, Jens Axboe wrote: >>>> On 08/09/2017 08:07 PM, Goldwyn Rodrigues wrote: >>>>>>>>>>> No, from a multi-device point of view, this is inconsistent. I >>>>>>>>>>> have tried the request bio returns -EAGAIN before the split, but >>>>>>>>>>> I shall check again. Where do you see this happening? >>>>>>>>>> >>>>>>>>>> No, this isn't multi-device specific, any driver can do it. >>>>>>>>>> Please see blk_queue_split. >>>>>>>>>> >>>>>>>>> >>>>>>>>> In that case, the bio end_io function is chained and the bio of >>>>>>>>> the split will replicate the error to the parent (if not already >>>>>>>>> set). >>>>>>>> >>>>>>>> this doesn't answer my question. So if a bio returns -EAGAIN, part >>>>>>>> of the bio probably already dispatched to disk (if the bio is >>>>>>>> splitted to 2 bios, one returns -EAGAIN, the other one doesn't >>>>>>>> block and dispatch to disk), what will application be going to do? >>>>>>>> I think this is different to other IO errors. FOr other IO errors, >>>>>>>> application will handle the error, while we ask app to retry the >>>>>>>> whole bio here and app doesn't know part of bio is already written >>>>>>>> to disk. >>>>>>> >>>>>>> It is the same as for other I/O errors as well, such as EIO. You do >>>>>>> not know which bio of all submitted bio's returned the error EIO. >>>>>>> The application would and should consider the whole I/O as failed. 
>>>>>>> >>>>>>> The user application does not know of bios, or how it is going to be >>>>>>> split in the underlying layers. It knows at the system call level. >>>>>>> In this case, the EAGAIN will be returned to the user for the whole >>>>>>> I/O not as a part of the I/O. It is up to application to try the I/O >>>>>>> again with or without RWF_NOWAIT set. In direct I/O, it is bubbled >>>>>>> out using dio->io_error. You can read about it at the patch header >>>>>>> for the initial patchset at [1]. >>>>>>> >>>>>>> Use case: It is for applications having two threads, a compute >>>>>>> thread and an I/O thread. It would try to push AIO as much as >>>>>>> possible in the compute thread using RWF_NOWAIT, and if it fails, >>>>>>> would pass it on to I/O thread which would perform without >>>>>>> RWF_NOWAIT. End result if done right is you save on context switches >>>>>>> and all the synchronization/messaging machinery to perform I/O. >>>>>>> >>>>>>> [1] http://marc.info/?l=linux-block&m=149789003305876&w=2 >>>>>> >>>>>> Yes, I knew the concept, but I didn't see previous patches mentioned >>>>>> the -EAGAIN actually should be taken as a real IO error. This means a >>>>>> lot to applications and make the API hard to use. I'm wondering if we >>>>>> should disable bio split for NOWAIT bio, which will make the -EAGAIN >>>>>> only mean 'try again'. >>>>> >>>>> Don't take it as EAGAIN, but read it as EWOULDBLOCK. Why do you say >>>>> the API is hard to use? Do you have a case to back it up? >>>> >>>> Because it is hard to use, and potentially suboptimal. Let's say you're >>>> doing a 1MB write, we hit EWOULDBLOCK for the last split. Do we return a >>>> short write, or do we return EWOULDBLOCK? If the latter, then that >>>> really sucks from an API point of view. >>>> >>>>> No, not splitting the bio does not make sense here. I do not see any >>>>> advantage in it, unless you can present a case otherwise. 
>>>> >>>> It ties back into the "hard to use" that I do agree with IFF we don't >>>> return the short write. It's hard for an application to use that >>>> efficiently, if we write 1MB-128K but get EWOULDBLOCK, the re-write the >>>> full 1MB from a different context. >>>> >>> >>> It returns the error code only and not short reads/writes. But isn't >>> that true for all system calls in case of error? >>> >>> For aio, there are two result fields in io_event out of which one could >>> be used for error while the other be used for amount of writes/reads >>> performed. However, only one is used. This will not work with >>> pread()/pwrite() calls though because of the limitation of return values. >>> >>> Finally, what if the EWOULDBLOCK is returned for an earlier bio (say >>> offset 128k) for a 1MB pwrite(), while the rest of the 7 128K are >>> successful. What short return value should the system call return? >> >> This is indeed tricky. If an application submits 1MB write, I don't think >> we can afford to just write arbitrary subset of it. That just IMHO too much >> violates how writes traditionally behaved. Even short writes trigger bugs >> in various applications but I'm willing to require that applications using >> NOWAIT IO can handle these. However writing arbitrary subset looks like a >> nasty catch. IMHO we should not submit further bios until we are sure >> current one does not return EWOULDBLOCK when splitting a larger one... > > Exactly, that's the point that both Shaohua and I was getting at. Short > writes should be fine, especially if NOWAIT is set. Discontig writes > should also be OK, but it's horrible and inefficient. If we do that, > then using this feature is a net-loss, not a win by any stretch. > To make sure I understand this: we disable bio splits for a NOWAIT bio, so we return EWOULDBLOCK for the entire I/O. -- Goldwyn
* Re: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait 2017-08-10 17:15 ` Goldwyn Rodrigues @ 2017-08-10 17:20 ` Jens Axboe 0 siblings, 0 replies; 35+ messages in thread From: Jens Axboe @ 2017-08-10 17:20 UTC (permalink / raw) To: Goldwyn Rodrigues, Jan Kara Cc: Shaohua Li, linux-block, hch, linux-raid, dm-devel On 08/10/2017 11:15 AM, Goldwyn Rodrigues wrote: > > > On 08/10/2017 09:28 AM, Jens Axboe wrote: >> On 08/10/2017 08:25 AM, Jan Kara wrote: >>> On Thu 10-08-17 06:49:53, Goldwyn Rodrigues wrote: >>>> On 08/09/2017 09:17 PM, Jens Axboe wrote: >>>>> On 08/09/2017 08:07 PM, Goldwyn Rodrigues wrote: >>>>>>>>>>>> No, from a multi-device point of view, this is inconsistent. I >>>>>>>>>>>> have tried the request bio returns -EAGAIN before the split, but >>>>>>>>>>>> I shall check again. Where do you see this happening? >>>>>>>>>>> >>>>>>>>>>> No, this isn't multi-device specific, any driver can do it. >>>>>>>>>>> Please see blk_queue_split. >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> In that case, the bio end_io function is chained and the bio of >>>>>>>>>> the split will replicate the error to the parent (if not already >>>>>>>>>> set). >>>>>>>>> >>>>>>>>> this doesn't answer my question. So if a bio returns -EAGAIN, part >>>>>>>>> of the bio probably already dispatched to disk (if the bio is >>>>>>>>> splitted to 2 bios, one returns -EAGAIN, the other one doesn't >>>>>>>>> block and dispatch to disk), what will application be going to do? >>>>>>>>> I think this is different to other IO errors. FOr other IO errors, >>>>>>>>> application will handle the error, while we ask app to retry the >>>>>>>>> whole bio here and app doesn't know part of bio is already written >>>>>>>>> to disk. >>>>>>>> >>>>>>>> It is the same as for other I/O errors as well, such as EIO. You do >>>>>>>> not know which bio of all submitted bio's returned the error EIO. >>>>>>>> The application would and should consider the whole I/O as failed. 
>>>>>>>> >>>>>>>> The user application does not know of bios, or how it is going to be >>>>>>>> split in the underlying layers. It knows at the system call level. >>>>>>>> In this case, the EAGAIN will be returned to the user for the whole >>>>>>>> I/O not as a part of the I/O. It is up to application to try the I/O >>>>>>>> again with or without RWF_NOWAIT set. In direct I/O, it is bubbled >>>>>>>> out using dio->io_error. You can read about it at the patch header >>>>>>>> for the initial patchset at [1]. >>>>>>>> >>>>>>>> Use case: It is for applications having two threads, a compute >>>>>>>> thread and an I/O thread. It would try to push AIO as much as >>>>>>>> possible in the compute thread using RWF_NOWAIT, and if it fails, >>>>>>>> would pass it on to I/O thread which would perform without >>>>>>>> RWF_NOWAIT. End result if done right is you save on context switches >>>>>>>> and all the synchronization/messaging machinery to perform I/O. >>>>>>>> >>>>>>>> [1] http://marc.info/?l=linux-block&m=149789003305876&w=2 >>>>>>> >>>>>>> Yes, I knew the concept, but I didn't see previous patches mentioned >>>>>>> the -EAGAIN actually should be taken as a real IO error. This means a >>>>>>> lot to applications and make the API hard to use. I'm wondering if we >>>>>>> should disable bio split for NOWAIT bio, which will make the -EAGAIN >>>>>>> only mean 'try again'. >>>>>> >>>>>> Don't take it as EAGAIN, but read it as EWOULDBLOCK. Why do you say >>>>>> the API is hard to use? Do you have a case to back it up? >>>>> >>>>> Because it is hard to use, and potentially suboptimal. Let's say you're >>>>> doing a 1MB write, we hit EWOULDBLOCK for the last split. Do we return a >>>>> short write, or do we return EWOULDBLOCK? If the latter, then that >>>>> really sucks from an API point of view. >>>>> >>>>>> No, not splitting the bio does not make sense here. I do not see any >>>>>> advantage in it, unless you can present a case otherwise. 
>>>>> >>>>> It ties back into the "hard to use" that I do agree with IFF we don't >>>>> return the short write. It's hard for an application to use that >>>>> efficiently, if we write 1MB-128K but get EWOULDBLOCK, the re-write the >>>>> full 1MB from a different context. >>>>> >>>> >>>> It returns the error code only and not short reads/writes. But isn't >>>> that true for all system calls in case of error? >>>> >>>> For aio, there are two result fields in io_event out of which one could >>>> be used for error while the other be used for amount of writes/reads >>>> performed. However, only one is used. This will not work with >>>> pread()/pwrite() calls though because of the limitation of return values. >>>> >>>> Finally, what if the EWOULDBLOCK is returned for an earlier bio (say >>>> offset 128k) for a 1MB pwrite(), while the rest of the 7 128K are >>>> successful. What short return value should the system call return? >>> >>> This is indeed tricky. If an application submits 1MB write, I don't think >>> we can afford to just write arbitrary subset of it. That just IMHO too much >>> violates how writes traditionally behaved. Even short writes trigger bugs >>> in various applications but I'm willing to require that applications using >>> NOWAIT IO can handle these. However writing arbitrary subset looks like a >>> nasty catch. IMHO we should not submit further bios until we are sure >>> current one does not return EWOULDBLOCK when splitting a larger one... >> >> Exactly, that's the point that both Shaohua and I was getting at. Short >> writes should be fine, especially if NOWAIT is set. Discontig writes >> should also be OK, but it's horrible and inefficient. If we do that, >> then using this feature is a net-loss, not a win by any stretch. >> > > To make sure I understand this, we disable bio splits for NOWAIT bio so > we return EWOULDBLOCK for the entire I/O. 
That's also not great, since splitting is a common operation, and the majority of splits can proceed without running out of resources. So ideally we'd handle that case, but in a saner fashion than the laissez-faire approach that the current patchset takes. -- Jens Axboe
end of thread, other threads: [~2017-10-10 22:36 UTC | newest]

Thread overview: 35+ messages
2017-10-04 13:55 [PATCH v2 0/9] Nowait support for stacked block devices Goldwyn Rodrigues
2017-10-04 13:55 ` [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait Goldwyn Rodrigues
2017-10-04 13:55 ` [PATCH 2/9] md: Add nowait support to md Goldwyn Rodrigues
2017-10-04 13:55 ` [PATCH 3/9] md: raid1 nowait support Goldwyn Rodrigues
2017-10-04 13:55 ` [PATCH 4/9] md: raid10 " Goldwyn Rodrigues
2017-10-04 13:55 ` [PATCH 5/9] md: raid5 " Goldwyn Rodrigues
2017-10-04 13:55 ` [PATCH 6/9] dm: add " Goldwyn Rodrigues
2017-10-04 13:55 ` [PATCH 7/9] dm: Add nowait support to raid1 Goldwyn Rodrigues
2017-10-04 13:55 ` [PATCH 8/9] dm: Add nowait support to dm-delay Goldwyn Rodrigues
2017-10-04 13:55 ` [PATCH 9/9] dm-mpath: Add nowait support Goldwyn Rodrigues
2017-10-05 17:19 ` [PATCH v2 0/9] Nowait support for stacked block devices Shaohua Li
2017-10-06 12:01 ` Goldwyn Rodrigues
2017-10-10 22:36 ` Shaohua Li

-- strict thread matches above, loose matches on Subject: below --
2017-07-26 23:57 [PATCH 0/9] Nowait feature " Goldwyn Rodrigues
2017-07-26 23:57 ` [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait Goldwyn Rodrigues
2017-08-08 20:32 ` Shaohua Li
2017-08-08 20:36 ` Jens Axboe
2017-08-10  2:18 ` Jens Axboe
2017-08-10 11:38 ` Goldwyn Rodrigues
2017-08-10 14:14 ` Jens Axboe
2017-08-10 17:15 ` Goldwyn Rodrigues
2017-08-10 17:17 ` Jens Axboe
2017-08-09 11:44 ` Goldwyn Rodrigues
2017-08-09 15:02 ` Shaohua Li
2017-08-09 15:35 ` Goldwyn Rodrigues
2017-08-09 20:21 ` Shaohua Li
2017-08-09 22:16 ` Goldwyn Rodrigues
2017-08-10  1:17 ` Shaohua Li
2017-08-10  2:07 ` Goldwyn Rodrigues
2017-08-10  2:17 ` Jens Axboe
2017-08-10 11:49 ` Goldwyn Rodrigues
2017-08-10 14:23 ` Jens Axboe
2017-08-10 14:25 ` Jan Kara
2017-08-10 14:28 ` Jens Axboe
2017-08-10 17:15 ` Goldwyn Rodrigues
2017-08-10 17:20 ` Jens Axboe