From: Eric Blake <eblake@redhat.com>
To: qemu-devel@nongnu.org
Cc: jsnow@redhat.com, kwolf@redhat.com, qemu-block@nongnu.org,
Fam Zheng <famz@redhat.com>, Max Reitz <mreitz@redhat.com>
Subject: [Qemu-devel] [PATCH v2 3/6] null: Switch to byte-based read/write
Date: Tue, 24 Apr 2018 14:25:03 -0500
Message-ID: <20180424192506.149089-4-eblake@redhat.com>
In-Reply-To: <20180424192506.149089-1-eblake@redhat.com>

We are gradually moving away from sector-based interfaces towards
byte-based ones. Make the change for the last few sector-based callbacks
in the null-co and null-aio drivers.

Note that since the null driver does nothing on writes, it trivially
supports the BDRV_REQ_FUA flag (all writes have already landed in the
same bit bucket, with no extra flush call needed). Also, since the null
driver handles byte-based requests just as well as sector-based ones, we
can now avoid cycles wasted on read-modify-write by taking advantage of
the block layer now defaulting the request alignment to 1 instead of 512.
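
As an aside for review (not part of the patch): the stand-alone sketch
below illustrates the FUA handling this relies on. When a driver
advertises BDRV_REQ_FUA in supported_write_flags, the block layer can pass
the flag straight through; otherwise it has to emulate FUA with an explicit
flush after the write completes. The types and helper names here are
simplified stand-ins, not QEMU's actual block/io.c code.

#include <stdio.h>

#define MOCK_REQ_FUA  0x10u            /* stand-in for BDRV_REQ_FUA */

typedef struct MockBDS {
    unsigned int supported_write_flags;    /* flags the driver handles itself */
} MockBDS;

static int mock_flush(MockBDS *bs)
{
    (void)bs;
    printf("explicit flush issued\n");     /* stand-in for bdrv_co_flush() */
    return 0;
}

/* Finish a write: emulate any requested flag the driver did not handle. */
static int mock_finish_write(MockBDS *bs, unsigned int flags, int driver_ret)
{
    unsigned int leftover = flags & ~bs->supported_write_flags;

    if (driver_ret == 0 && (leftover & MOCK_REQ_FUA)) {
        driver_ret = mock_flush(bs);       /* FUA unsupported: flush instead */
    }
    return driver_ret;
}

int main(void)
{
    MockBDS plain = { .supported_write_flags = 0 };
    MockBDS fua_capable = { .supported_write_flags = MOCK_REQ_FUA };

    mock_finish_write(&plain, MOCK_REQ_FUA, 0);        /* issues a flush */
    mock_finish_write(&fua_capable, MOCK_REQ_FUA, 0);  /* no extra work */
    return 0;
}

For a driver like null that discards data anyway, advertising the flag
means the emulation path above is never reached, so no flush is wasted.
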
Signed-off-by: Eric Blake <eblake@redhat.com>
---
v2: rely on new block layer default alignment [Kevin]
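
(Also as an aside, not part of the patch: the tiny stand-alone program
below shows why a request alignment of 512 forces read-modify-write on
sub-sector requests, while an alignment of 1 passes them through
untouched. The helper name and arithmetic are illustrative only, not
QEMU's block layer code.)

#include <stdio.h>
#include <stdint.h>

/* Widen a request to the given alignment, as a block layer must before it
 * can issue aligned I/O; anything wider than the original request implies
 * a read-modify-write cycle. */
static void widen(uint64_t offset, uint64_t bytes, uint64_t align)
{
    uint64_t start = offset - (offset % align);
    uint64_t end = ((offset + bytes + align - 1) / align) * align;

    printf("align %3llu: [%llu, +%llu) -> [%llu, +%llu)%s\n",
           (unsigned long long)align,
           (unsigned long long)offset, (unsigned long long)bytes,
           (unsigned long long)start, (unsigned long long)(end - start),
           end - start > bytes ? "  (read-modify-write)" : "");
}

int main(void)
{
    widen(1, 1, 512);   /* old default: a 1-byte write becomes a 512-byte RMW */
    widen(1, 1, 1);     /* new default for byte-based drivers: no widening */
    return 0;
}
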
---
block/null.c | 59 ++++++++++++++++++++++++++++++-----------------------------
1 file changed, 30 insertions(+), 29 deletions(-)
diff --git a/block/null.c b/block/null.c
index 806a8631e4d..8fbbda52ea1 100644
--- a/block/null.c
+++ b/block/null.c
@@ -93,6 +93,7 @@ static int null_file_open(BlockDriverState *bs, QDict *options, int flags,
     }
     s->read_zeroes = qemu_opt_get_bool(opts, NULL_OPT_ZEROES, false);
     qemu_opts_del(opts);
+    bs->supported_write_flags = BDRV_REQ_FUA;
     return ret;
 }
 
@@ -116,22 +117,22 @@ static coroutine_fn int null_co_common(BlockDriverState *bs)
     return 0;
 }
 
-static coroutine_fn int null_co_readv(BlockDriverState *bs,
-                                      int64_t sector_num, int nb_sectors,
-                                      QEMUIOVector *qiov)
+static coroutine_fn int null_co_preadv(BlockDriverState *bs,
+                                       uint64_t offset, uint64_t bytes,
+                                       QEMUIOVector *qiov, int flags)
 {
     BDRVNullState *s = bs->opaque;
 
     if (s->read_zeroes) {
-        qemu_iovec_memset(qiov, 0, 0, nb_sectors * BDRV_SECTOR_SIZE);
+        qemu_iovec_memset(qiov, 0, 0, bytes);
     }
     return null_co_common(bs);
 }
 
-static coroutine_fn int null_co_writev(BlockDriverState *bs,
-                                       int64_t sector_num, int nb_sectors,
-                                       QEMUIOVector *qiov)
+static coroutine_fn int null_co_pwritev(BlockDriverState *bs,
+                                        uint64_t offset, uint64_t bytes,
+                                        QEMUIOVector *qiov, int flags)
 {
     return null_co_common(bs);
 }
 
@@ -186,26 +187,26 @@ static inline BlockAIOCB *null_aio_common(BlockDriverState *bs,
     return &acb->common;
 }
 
-static BlockAIOCB *null_aio_readv(BlockDriverState *bs,
-                                  int64_t sector_num, QEMUIOVector *qiov,
-                                  int nb_sectors,
-                                  BlockCompletionFunc *cb,
-                                  void *opaque)
-{
-    BDRVNullState *s = bs->opaque;
-
-    if (s->read_zeroes) {
-        qemu_iovec_memset(qiov, 0, 0, nb_sectors * BDRV_SECTOR_SIZE);
-    }
-
-    return null_aio_common(bs, cb, opaque);
-}
-
-static BlockAIOCB *null_aio_writev(BlockDriverState *bs,
-                                   int64_t sector_num, QEMUIOVector *qiov,
-                                   int nb_sectors,
+static BlockAIOCB *null_aio_preadv(BlockDriverState *bs,
+                                   uint64_t offset, uint64_t bytes,
+                                   QEMUIOVector *qiov, int flags,
                                    BlockCompletionFunc *cb,
                                    void *opaque)
+{
+    BDRVNullState *s = bs->opaque;
+
+    if (s->read_zeroes) {
+        qemu_iovec_memset(qiov, 0, 0, bytes);
+    }
+
+    return null_aio_common(bs, cb, opaque);
+}
+
+static BlockAIOCB *null_aio_pwritev(BlockDriverState *bs,
+                                    uint64_t offset, uint64_t bytes,
+                                    QEMUIOVector *qiov, int flags,
+                                    BlockCompletionFunc *cb,
+                                    void *opaque)
 {
     return null_aio_common(bs, cb, opaque);
 }
@@ -266,8 +267,8 @@ static BlockDriver bdrv_null_co = {
     .bdrv_close             = null_close,
     .bdrv_getlength         = null_getlength,
 
-    .bdrv_co_readv          = null_co_readv,
-    .bdrv_co_writev         = null_co_writev,
+    .bdrv_co_preadv         = null_co_preadv,
+    .bdrv_co_pwritev        = null_co_pwritev,
     .bdrv_co_flush_to_disk  = null_co_flush,
     .bdrv_reopen_prepare    = null_reopen_prepare,
 
@@ -286,8 +287,8 @@ static BlockDriver bdrv_null_aio = {
     .bdrv_close             = null_close,
     .bdrv_getlength         = null_getlength,
 
-    .bdrv_aio_readv         = null_aio_readv,
-    .bdrv_aio_writev        = null_aio_writev,
+    .bdrv_aio_preadv        = null_aio_preadv,
+    .bdrv_aio_pwritev       = null_aio_pwritev,
     .bdrv_aio_flush         = null_aio_flush,
     .bdrv_reopen_prepare    = null_reopen_prepare,
 
--
2.14.3