From: Kevin Wolf <kwolf@redhat.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>,
linux-block@vger.kernel.org, josef@toxicpanda.com,
nbd@other.debian.org, eblake@redhat.com, vincent.chen@sifive.com,
Leon Schuermann <leon@is.currently.online>,
Bart Van Assche <bvanassche@acm.org>
Subject: Re: [PATCH] nbd: fix partial sending
Date: Thu, 17 Oct 2024 17:47:53 +0200
Message-ID: <ZxExqStWA5HmZMzy@redhat.com>
In-Reply-To: <20241017113614.2964389-1-ming.lei@redhat.com>
On 17.10.2024 at 13:36, Ming Lei wrote:
> The nbd driver sends the request header and payload with multiple calls
> of sock_sendmsg, so partial sends can't be avoided. However, the driver
> returns BLK_STS_RESOURCE to the block core in this situation. This
> causes a problem: request->tag may change in the next run of
> nbd_queue_rq(), but the original tag has already been sent as part of
> the header cookie, which confuses the nbd reply handling, since the
> real request can no longer be retrieved with the obsolete tag.
>
> Fix it by retrying the send directly. This is reasonable and safe,
> since nothing can make progress anyway while the current hw queue
> (socket) has a pending request, and it also avoids an unnecessary
> requeue.
>
> Cc: vincent.chen@sifive.com
> Cc: Leon Schuermann <leon@is.currently.online>
> Cc: Bart Van Assche <bvanassche@acm.org>
> Reported-by: Kevin Wolf <kwolf@redhat.com>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
> Kevin,
> Please test this version, thanks!
The NBD errors seem to go away with this.
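
For reference, my understanding of why the stale tag is a problem
(paraphrasing the driver from memory, so this is not necessarily the
exact code in nbd.c): the tag is baked into the 64-bit handle that is
sent in the request header and echoed back by the server, roughly like
this:

    static u64 nbd_cmd_handle(struct nbd_cmd *cmd)
    {
            struct request *req = blk_mq_rq_from_pdu(cmd);
            u32 tag = blk_mq_unique_tag(req);
            u64 cookie = cmd->cmd_cookie;

            /* tag in the low bits, per-command cookie above it */
            return (cookie << NBD_COOKIE_BITS) | tag;
    }

So once the header is on the wire, the reply can only be matched if the
request keeps that tag, which a requeue doesn't guarantee.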
I'm not sure about side effects, though. Isn't the idea behind EINTR
that you return to userspace to let it handle a signal? Looping in the
kernel doesn't quite achieve this, so do we delay/prevent signal
delivery with this? On the other hand, if it were completely prevented,
then this should become an infinite loop, which it didn't in my test.
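
Just to make the question concrete, what I would naively have expected
is a loop that at least notices a pending fatal signal instead of
sleeping through it, roughly like this (completely untested, and whether
failing the request at that point is actually safe for the connection is
exactly what I'm unsure about):

    while (true) {
            res = nbd_send_cmd(nbd, cmd, cmd->index);
            if (res != BLK_STS_RESOURCE)
                    return res;
            /* give up instead of sleeping while a fatal signal is pending */
            if (fatal_signal_pending(current))
                    return BLK_STS_IOERR;
            if (READ_ONCE(jiffies) + msecs_to_jiffies(wait_ms) >= deadline)
                    break;
            msleep_interruptible(wait_ms);
            wait_ms *= 2;
    }

Maybe the deadline check already bounds this well enough in practice; I
mainly want to understand what happens to the signal in the meantime.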
> drivers/block/nbd.c | 35 +++++++++++++++++++++++++++++++++--
> 1 file changed, 33 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
> index b852050d8a96..ef84071041e3 100644
> --- a/drivers/block/nbd.c
> +++ b/drivers/block/nbd.c
> @@ -701,8 +701,9 @@ static blk_status_t nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd,
> if (sent) {
> nsock->pending = req;
> nsock->sent = sent;
> + } else {
> + set_bit(NBD_CMD_REQUEUED, &cmd->flags);
> }
> - set_bit(NBD_CMD_REQUEUED, &cmd->flags);
> return BLK_STS_RESOURCE;
> }
> dev_err_ratelimited(disk_to_dev(nbd->disk),
> @@ -743,7 +744,6 @@ static blk_status_t nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd,
> */
> nsock->pending = req;
> nsock->sent = sent;
> - set_bit(NBD_CMD_REQUEUED, &cmd->flags);
> return BLK_STS_RESOURCE;
> }
> dev_err(disk_to_dev(nbd->disk),
> @@ -778,6 +778,35 @@ static blk_status_t nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd,
> return BLK_STS_OK;
> }
>
> +/*
> + * Send a pending nbd request header and payload. Part of it has already
> + * been sent, so we have to finish sending it for the current request and
> + * can't return BLK_STS_RESOURCE, otherwise the request tag may change on
> + * the next retry.
> + */
> +static blk_status_t nbd_send_pending_cmd(struct nbd_device *nbd,
> + struct nbd_cmd *cmd)
> +{
> + struct request *req = blk_mq_rq_from_pdu(cmd);
> + unsigned long deadline = READ_ONCE(req->deadline);
> + unsigned int wait_ms = 2;
> + blk_status_t res;
> +
> + WARN_ON_ONCE(test_bit(NBD_CMD_REQUEUED, &cmd->flags));
> +
> + while (true) {
> + res = nbd_send_cmd(nbd, cmd, cmd->index);
> + if (res != BLK_STS_RESOURCE)
> + return res;
> + if (READ_ONCE(jiffies) + msecs_to_jiffies(wait_ms) >= deadline)
> + break;
> + msleep(wait_ms);
> + wait_ms *= 2;
> + }
> +
> + return BLK_STS_IOERR;
> +}
> +
> static int nbd_read_reply(struct nbd_device *nbd, struct socket *sock,
> struct nbd_reply *reply)
> {
> @@ -1111,6 +1140,8 @@ static blk_status_t nbd_handle_cmd(struct nbd_cmd *cmd, int index)
> goto out;
> }
> ret = nbd_send_cmd(nbd, cmd, index);
> + if (ret == BLK_STS_RESOURCE && nsock->pending == req)
> + ret = nbd_send_pending_cmd(nbd, cmd);
Is there a reason to call nbd_send_cmd() outside of the new loop first
instead of going to the loop directly? It's always better to only have
a single code path.
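
To make that concrete, something along these lines is what I have in
mind (untested sketch; the function name and the extra nsock parameter
are made up so that the loop can check for a partial send itself).
nbd_handle_cmd() would then always call this helper, and nbd_send_cmd()
would have a single caller:

    static blk_status_t nbd_send_cmd_retry(struct nbd_device *nbd,
                                           struct nbd_sock *nsock,
                                           struct nbd_cmd *cmd, int index)
    {
            struct request *req = blk_mq_rq_from_pdu(cmd);
            unsigned long deadline = READ_ONCE(req->deadline);
            unsigned int wait_ms = 2;
            blk_status_t res;

            while (true) {
                    res = nbd_send_cmd(nbd, cmd, index);
                    /* Only retry here when a partial send makes requeueing
                     * unsafe; an untouched request can be requeued as before. */
                    if (res != BLK_STS_RESOURCE || nsock->pending != req)
                            return res;
                    if (READ_ONCE(jiffies) + msecs_to_jiffies(wait_ms) >= deadline)
                            return BLK_STS_IOERR;
                    msleep(wait_ms);
                    wait_ms *= 2;
            }
    }

But maybe I'm missing a reason why the first attempt needs to stay in
nbd_handle_cmd().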
> out:
> mutex_unlock(&nsock->tx_lock);
> nbd_config_put(nbd);
Kevin