Date: Wed, 12 Jun 2024 17:56:15 +0200
From: Roger Pau Monné
To: Christoph Hellwig
Cc: Jens Axboe, Geert Uytterhoeven, Richard Weinberger, Philipp Reisner,
	Lars Ellenberg, Christoph Böhmwalder, Josef Bacik, Ming Lei,
	"Michael S. Tsirkin", Jason Wang, Alasdair Kergon, Mike Snitzer,
	Mikulas Patocka, Song Liu, Yu Kuai, Vineeth Vijayan,
	"Martin K. Petersen", linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org, drbd-dev@lists.linbit.com,
	nbd@other.debian.org, linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org, virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev, linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev, linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: Re: [PATCH 10/26] xen-blkfront: don't disable cache flushes when they fail
References: <20240611051929.513387-1-hch@lst.de> <20240611051929.513387-11-hch@lst.de>
	<20240612150030.GA29188@lst.de>
In-Reply-To: <20240612150030.GA29188@lst.de>

On Wed, Jun 12, 2024 at 05:00:30PM +0200, Christoph Hellwig wrote:
> On Wed, Jun 12, 2024 at 10:01:18AM +0200, Roger Pau Monné wrote:
> > On Tue, Jun 11, 2024 at 07:19:10AM +0200, Christoph Hellwig wrote:
> > > blkfront always had a robust negotiation protocol for detecting a write
> > > cache.
> > > Stop simply disabling cache flushes when they fail as that is
> > > a grave error.
> >
> > It's my understanding the current code attempts to cover up for the
> > lack of guarantees the feature itself provides:
> >
> > So even when the feature is exposed, the backend might return
> > EOPNOTSUPP for the flush/barrier operations.
>
> How is this supposed to work?  I mean in the worst case we could
> just immediately complete the flush requests in the driver, but
> we're really lying to any upper layer.

Right.  AFAICT advertising "feature-barrier" and/or "feature-flush-cache"
could be done based on whether blkback understands those commands, not on
whether the underlying storage supports the equivalent of them.

Worst case we could print a warning message once about the underlying
storage failing to complete flush/barrier requests, noting that data
integrity might not be guaranteed going forward, and not propagate the
error to the upper layer?

What would be the consequence of propagating a flush error to the upper
layers?

> > Such failure is tied to whether the underlying blkback storage
> > supports REQ_OP_WRITE with the REQ_PREFLUSH operation.  blkback will
> > expose "feature-barrier" and/or "feature-flush-cache" without knowing
> > whether the underlying backend supports those operations, hence the
> > weird fallback in blkfront.
>
> If we are just talking about the Linux blkback driver (I know there
> probably are a few other implementations) it won't ever do that.
> I see it has code to do so, but the Linux block layer doesn't
> allow the flush operation to randomly fail if it was previously
> advertised.  Note that even blkfront conforms to this as it fixes
> up the return value when it gets this notsupp error to ok.

Yes, I'm afraid it's impossible to know what the multiple incarnations
of all the scattered blkback implementations possibly do (FreeBSD,
NetBSD, QEMU and blktap at least that I know of).
> > Overall blkback should ensure that REQ_PREFLUSH is supported before
> > exposing "feature-barrier" or "feature-flush-cache", as then the
> > exposed features would really match what the underlying backend
> > supports (rather than the commands blkback knows about).
>
> Yes.  The in-tree xen-blkback does that, but even without that the
> Linux block layer actually makes sure flushes sent by upper layers
> always succeed even when not supported.

Given the description of the feature in the blkif header, I'm afraid we
cannot guarantee that seeing the feature exposed implies barrier or
flush support, since the request could fail at any time (or even from
the start of the disk attachment) and it would still sadly be a correct
implementation given the description of the options.

Thanks, Roger.