From: Pavel Borzenkov <pborzenkov@virtuozzo.com>
To: Alex Bligh <alex@alex.org.uk>
Cc: Eric Blake <eblake@redhat.com>, Kevin Wolf <kwolf@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>,
"open list:Block layer core" <qemu-block@nongnu.org>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] [RFC PATCH 18/18] nbd: Implement NBD_CMD_WRITE_ZEROES on client
Date: Sat, 9 Apr 2016 14:52:20 +0300
Message-ID: <20160409115220.GD21526@phobos>
In-Reply-To: <D6B96403-5EF5-49D5-B734-55D29EC27DCB@alex.org.uk>
On Sat, Apr 09, 2016 at 11:57:57AM +0100, Alex Bligh wrote:
>
> On 8 Apr 2016, at 23:05, Eric Blake <eblake@redhat.com> wrote:
>
> > RFC because there is still discussion on the NBD list about
> > adding an NBD_OPT_ to let the client suggest server defaults
> > related to scanning for zeroes during NBD_CMD_WRITE, which may
> > tweak this patch.
> >
> > Upstream NBD protocol recently added the ability to efficiently
> > write zeroes without having to send the zeroes over the wire,
> > along with a flag to control whether the client wants a hole.
> >
> > The generic block code takes care of falling back to the obvious
> > write lots of zeroes if we return -ENOTSUP because the server
> > does not have WRITE_ZEROES.
> >
> > Signed-off-by: Eric Blake <eblake@redhat.com>
> > ---
> > block/nbd-client.h | 2 ++
> > block/nbd-client.c | 34 ++++++++++++++++++++++++++++++++++
> > block/nbd.c | 23 +++++++++++++++++++++++
> > 3 files changed, 59 insertions(+)
> >
> > diff --git a/block/nbd-client.h b/block/nbd-client.h
> > index bc7aec0..2fe6654 100644
> > --- a/block/nbd-client.h
> > +++ b/block/nbd-client.h
> > @@ -47,6 +47,8 @@ void nbd_client_close(BlockDriverState *bs);
> > int nbd_client_co_discard(BlockDriverState *bs, int64_t sector_num,
> > int nb_sectors);
> > int nbd_client_co_flush(BlockDriverState *bs);
> > +int nbd_client_co_write_zeroes(BlockDriverState *bs, int64_t sector_num,
> > + int nb_sectors, int *flags);
> > int nbd_client_co_writev(BlockDriverState *bs, int64_t sector_num,
> > int nb_sectors, QEMUIOVector *qiov, int *flags);
> > int nbd_client_co_readv(BlockDriverState *bs, int64_t sector_num,
> > diff --git a/block/nbd-client.c b/block/nbd-client.c
> > index f013084..4be83a8 100644
> > --- a/block/nbd-client.c
> > +++ b/block/nbd-client.c
> > @@ -291,6 +291,40 @@ int nbd_client_co_readv(BlockDriverState *bs, int64_t sector_num,
> > return nbd_co_readv_1(bs, sector_num, nb_sectors, qiov, offset);
> > }
> >
> > +int nbd_client_co_write_zeroes(BlockDriverState *bs, int64_t sector_num,
> > + int nb_sectors, int *flags)
> > +{
> > + ssize_t ret;
> > + NbdClientSession *client = nbd_get_client_session(bs);
> > + struct nbd_request request = { .type = NBD_CMD_WRITE_ZEROES };
> > + struct nbd_reply reply;
> > +
> > + if (!(client->nbdflags & NBD_FLAG_SEND_WRITE_ZEROES)) {
> > + return -ENOTSUP;
> > + }
> > +
> > + if ((*flags & BDRV_REQ_FUA) && (client->nbdflags & NBD_FLAG_SEND_FUA)) {
> > + *flags &= ~BDRV_REQ_FUA;
> > + request.flags |= NBD_CMD_FLAG_FUA;
> > + }
> > + if (!(*flags & BDRV_REQ_MAY_UNMAP)) {
> > + request.flags |= NBD_CMD_FLAG_NO_HOLE;
> > + }
> > +
> > + request.from = sector_num * 512;
> > + request.len = nb_sectors * 512;
> > +
> > + nbd_coroutine_start(client, &request);
> > + ret = nbd_co_send_request(bs, &request, NULL, 0);
> > + if (ret < 0) {
> > + reply.error = -ret;
> > + } else {
> > + nbd_co_receive_reply(client, &request, &reply, NULL, 0);
> > + }
> > + nbd_coroutine_end(client, &request);
> > + return -reply.error;
> > +}
> > +
> > int nbd_client_co_writev(BlockDriverState *bs, int64_t sector_num,
> > int nb_sectors, QEMUIOVector *qiov, int *flags)
> > {
> > diff --git a/block/nbd.c b/block/nbd.c
> > index f7ea3b3..f5119c0 100644
> > --- a/block/nbd.c
> > +++ b/block/nbd.c
> > @@ -355,6 +355,26 @@ static int nbd_co_readv(BlockDriverState *bs, int64_t sector_num,
> > return nbd_client_co_readv(bs, sector_num, nb_sectors, qiov);
> > }
> >
> > +static int nbd_co_write_zeroes(BlockDriverState *bs, int64_t sector_num,
> > + int nb_sectors, BdrvRequestFlags orig_flags)
> > +{
> > + int flags = orig_flags;
> > + int ret;
> > +
> > + ret = nbd_client_co_write_zeroes(bs, sector_num, nb_sectors, &flags);
> > + if (ret < 0) {
> > + return ret;
> > + }
> > +
> > + /* The flag wasn't sent to the server, so we need to emulate it with an
> > + * explicit flush */
>
> Surely you only need to do this if the flag wasn't sent to the server,
> i.e. if !(client->nbdflags & NBD_FLAG_SEND_FUA)
>
> If you've sent a FUA request, no need to flush the whole thing.
In that case BDRV_REQ_FUA is cleared from 'flags' by
nbd_client_co_write_zeroes(), so this condition becomes false and no
extra flush is issued.
>
> nbd_co_writev_flags seems to have the same issue, which is where I guess
> you got that from.
>
> > + if (flags & BDRV_REQ_FUA) {
> > + ret = nbd_client_co_flush(bs);
> > + }
> > +
> > + return ret;
> > +}
> > +
> > static int nbd_co_writev_flags(BlockDriverState *bs, int64_t sector_num,
> > int nb_sectors, QEMUIOVector *qiov, int flags)
> > {
> > @@ -476,6 +496,7 @@ static BlockDriver bdrv_nbd = {
> > .bdrv_parse_filename = nbd_parse_filename,
> > .bdrv_file_open = nbd_open,
> > .bdrv_co_readv = nbd_co_readv,
> > + .bdrv_co_write_zeroes = nbd_co_write_zeroes,
> > .bdrv_co_writev = nbd_co_writev,
> > .bdrv_co_writev_flags = nbd_co_writev_flags,
> > .supported_write_flags = BDRV_REQ_FUA,
> > @@ -496,6 +517,7 @@ static BlockDriver bdrv_nbd_tcp = {
> > .bdrv_parse_filename = nbd_parse_filename,
> > .bdrv_file_open = nbd_open,
> > .bdrv_co_readv = nbd_co_readv,
> > + .bdrv_co_write_zeroes = nbd_co_write_zeroes,
> > .bdrv_co_writev = nbd_co_writev,
> > .bdrv_co_writev_flags = nbd_co_writev_flags,
> > .supported_write_flags = BDRV_REQ_FUA,
> > @@ -516,6 +538,7 @@ static BlockDriver bdrv_nbd_unix = {
> > .bdrv_parse_filename = nbd_parse_filename,
> > .bdrv_file_open = nbd_open,
> > .bdrv_co_readv = nbd_co_readv,
> > + .bdrv_co_write_zeroes = nbd_co_write_zeroes,
> > .bdrv_co_writev = nbd_co_writev,
> > .bdrv_co_writev_flags = nbd_co_writev_flags,
> > .supported_write_flags = BDRV_REQ_FUA,
> > --
> > 2.5.5
> >
> >
>
> --
> Alex Bligh