From: Christoph Hellwig <hch@lst.de>
To: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: "Christoph Hellwig" <hch@lst.de>, "Jens Axboe" <axboe@kernel.dk>,
"Geert Uytterhoeven" <geert@linux-m68k.org>,
"Richard Weinberger" <richard@nod.at>,
"Philipp Reisner" <philipp.reisner@linbit.com>,
"Lars Ellenberg" <lars.ellenberg@linbit.com>,
"Christoph Böhmwalder" <christoph.boehmwalder@linbit.com>,
"Josef Bacik" <josef@toxicpanda.com>,
"Ming Lei" <ming.lei@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
"Jason Wang" <jasowang@redhat.com>,
"Alasdair Kergon" <agk@redhat.com>,
"Mike Snitzer" <snitzer@kernel.org>,
"Mikulas Patocka" <mpatocka@redhat.com>,
"Song Liu" <song@kernel.org>, "Yu Kuai" <yukuai3@huawei.com>,
"Vineeth Vijayan" <vneethv@linux.ibm.com>,
"Martin K. Petersen" <martin.petersen@oracle.com>,
linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
drbd-dev@lists.linbit.com, nbd@other.debian.org,
linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 10/26] xen-blkfront: don't disable cache flushes when they fail
Date: Thu, 13 Jun 2024 16:05:08 +0200
Message-ID: <20240613140508.GA16529@lst.de>
In-Reply-To: <ZmnFH17bTV2Ot_iR@macbook>
On Wed, Jun 12, 2024 at 05:56:15PM +0200, Roger Pau Monné wrote:
> Right. AFAICT advertising "feature-barrier" and/or
> "feature-flush-cache" could be done based on whether blkback
> understands those commands, not on whether the underlying storage
> supports the equivalent of them.
>
> Worst case we can print a warning message once that the underlying
> storage failed to complete flush/barrier requests and that data
> integrity might not be guaranteed going forward, and not propagate
> the error to the upper layer?
>
> What would be the consequence of propagating a flush error to the
> upper layers?
If you propagate the error to the upper layer you will generate an
I/O error there, which usually leads to a file system shutdown.
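To make that concrete, here is a minimal sketch (not from this series)
of how a flush error typically surfaces in a journaling commit path.
blkdev_issue_flush() is the real block layer helper;
example_fs_shutdown() is a hypothetical stand-in for fs-specific
handling such as a journal abort:

#include <linux/blkdev.h>

/* Hypothetical fs-specific hook, e.g. aborting the journal. */
static void example_fs_shutdown(struct block_device *bdev);

static int example_commit_transaction(struct block_device *bdev)
{
	int error;

	/* Make previously written journal blocks stable. */
	error = blkdev_issue_flush(bdev);
	if (error) {
		/*
		 * A failed flush means committed data may not have
		 * reached stable storage, so shut down rather than
		 * risk silent corruption.
		 */
		example_fs_shutdown(bdev);
		return error;
	}
	return 0;
}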
> Given the description of the feature in the blkif header, I'm afraid
> we cannot guarantee that seeing the feature exposed implies barrier or
> flush support, since the request could fail at any time (or even from
> the start of the disk attachment) and it would still sadly be a correct
> implementation given the description of the options.
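For reference, the negotiation in question boils down to two xenstore
reads; condensed (and slightly abridged) from
blkfront_gather_backend_features() in current xen-blkfront:

	info->feature_flush = 0;
	info->feature_fua = 0;

	/* A barrier-capable backend gives us flush + FUA semantics. */
	if (xenbus_read_unsigned(info->xbdev->otherend,
				 "feature-barrier", 0)) {
		info->feature_flush = 1;
		info->feature_fua = 1;
	}

	/* An explicit flush feature takes precedence over barriers. */
	if (xenbus_read_unsigned(info->xbdev->otherend,
				 "feature-flush-cache", 0)) {
		info->feature_flush = 1;
		info->feature_fua = 0;
	}

Nothing in there ties the advertised flags to what the underlying
storage can actually do, which is exactly the concern above.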
Well, then we could do something like the patch below, which keeps
the existing behavior but isolates the block layer from it and
removes the only user of blk_queue_write_cache from interrupt
context:
---
From e6e82c769ab209a77302994c3829cf6ff7a595b8 Mon Sep 17 00:00:00 2001
From: Christoph Hellwig <hch@lst.de>
Date: Thu, 30 May 2024 08:58:52 +0200
Subject: xen-blkfront: don't disable cache flushes when they fail
blkfront always had a robust negotiation protocol for detecting a write
cache. Stop simply disabling cache flushes in the block layer, as the
flags handling is moving to the atomic queue limits API, which needs
user context to freeze the queue. Instead handle the case of cleared
feature flags inside blkfront itself. This removes old debug code that
checked for such a mismatch and was previously impossible to hit,
including the check for passthrough requests, which blkfront never
used in the first place.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/block/xen-blkfront.c | 47 ++++++++++++++++++++++++++---------------------
1 file changed, 26 insertions(+), 21 deletions(-)
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 9b4ec3e4908cce..e2c92d5095ff17 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -788,6 +788,14 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
* A barrier request a superset of FUA, so we can
* implement it the same way. (It's also a FLUSH+FUA,
* since it is guaranteed ordered WRT previous writes.)
+ *
+ * Note that we can end up here with a FUA write and the
+ * flags cleared. This happens when the flag was
+ * run-time disabled and raced with I/O submission in
+ * the block layer. We submit it as a normal write
+ * here. A pure flush should never end up here with
+ * the flags cleared as they are completed earlier for
+ * the !feature_flush case.
*/
if (info->feature_flush && info->feature_fua)
ring_req->operation =
@@ -795,8 +803,6 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
else if (info->feature_flush)
ring_req->operation =
BLKIF_OP_FLUSH_DISKCACHE;
- else
- ring_req->operation = 0;
}
ring_req->u.rw.nr_segments = num_grant;
if (unlikely(require_extra_req)) {
@@ -887,16 +893,6 @@ static inline void flush_requests(struct blkfront_ring_info *rinfo)
notify_remote_via_irq(rinfo->irq);
}
-static inline bool blkif_request_flush_invalid(struct request *req,
- struct blkfront_info *info)
-{
- return (blk_rq_is_passthrough(req) ||
- ((req_op(req) == REQ_OP_FLUSH) &&
- !info->feature_flush) ||
- ((req->cmd_flags & REQ_FUA) &&
- !info->feature_fua));
-}
-
static blk_status_t blkif_queue_rq(struct blk_mq_hw_ctx *hctx,
const struct blk_mq_queue_data *qd)
{
@@ -908,23 +904,33 @@ static blk_status_t blkif_queue_rq(struct blk_mq_hw_ctx *hctx,
rinfo = get_rinfo(info, qid);
blk_mq_start_request(qd->rq);
spin_lock_irqsave(&rinfo->ring_lock, flags);
- if (RING_FULL(&rinfo->ring))
- goto out_busy;
- if (blkif_request_flush_invalid(qd->rq, rinfo->dev_info))
- goto out_err;
+ /*
+ * Check if the backend actually supports flushes.
+ *
+ * While the block layer won't send us flushes if we don't claim to
+ * support them, the Xen protocol allows the backend to revoke support
+ * at any time. That is of course a really bad idea and dangerous, but
+ * has been allowed for 10+ years. In that case we simply clear the
+ * flags, and directly return here for an empty flush and ignore the
+ * FUA flag later on.
+ */
+ if (unlikely(req_op(qd->rq) == REQ_OP_FLUSH && !info->feature_flush))
+ goto complete;
+ if (RING_FULL(&rinfo->ring))
+ goto out_busy;
if (blkif_queue_request(qd->rq, rinfo))
goto out_busy;
flush_requests(rinfo);
spin_unlock_irqrestore(&rinfo->ring_lock, flags);
return BLK_STS_OK;
+complete:
+ spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+ blk_mq_end_request(qd->rq, BLK_STS_OK);
+ return BLK_STS_OK;
-out_err:
- spin_unlock_irqrestore(&rinfo->ring_lock, flags);
- return BLK_STS_IOERR;
-
out_busy:
blk_mq_stop_hw_queue(hctx);
spin_unlock_irqrestore(&rinfo->ring_lock, flags);
@@ -1627,7 +1630,6 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
blkif_req(req)->error = BLK_STS_OK;
info->feature_fua = 0;
info->feature_flush = 0;
- xlvbd_flush(info);
}
fallthrough;
case BLKIF_OP_READ:
--
2.43.0
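For context on the interrupt-context problem mentioned above: with the
atomic queue limits API this series converts to, flipping the cache
flags requires freezing the queue, which can sleep and therefore must
not happen from blkif_interrupt(). A rough, illustrative sketch using
the BLK_FEAT_* flags introduced earlier in the series
(example_set_write_cache() is not a real kernel function):

static int example_set_write_cache(struct request_queue *q,
		bool wc, bool fua)
{
	struct queue_limits lim;
	int error;

	lim = queue_limits_start_update(q);
	if (wc)
		lim.features |= BLK_FEAT_WRITE_CACHE;
	else
		lim.features &= ~BLK_FEAT_WRITE_CACHE;
	if (fua)
		lim.features |= BLK_FEAT_FUA;
	else
		lim.features &= ~BLK_FEAT_FUA;

	/* Freezing waits out in-flight I/O and may sleep. */
	blk_mq_freeze_queue(q);
	error = queue_limits_commit_update(q, &lim);
	blk_mq_unfreeze_queue(q);
	return error;
}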