From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <3247433c-b356-425c-a888-8f7904351a2f@kernel.org>
Date: Mon, 17 Jun 2024 15:23:27 +0900
Subject: Re: [PATCH 13/26] block: move cache control settings out of queue->flags
From: Damien Le Moal
To: Christoph Hellwig, Jens Axboe
Cc: nvdimm@lists.linux.dev, Ulf Hansson, "Michael S. Tsirkin", Jason Wang,
 linux-nvme@lists.infradead.org, Song Liu, linux-mtd@lists.infradead.org,
 Vineeth Vijayan, Alasdair Kergon, drbd-dev@lists.linbit.com,
 linux-s390@vger.kernel.org, linux-scsi@vger.kernel.org, Richard Weinberger,
 Geert Uytterhoeven, Yu Kuai, dm-devel@lists.linux.dev,
 linux-um@lists.infradead.org, Mike Snitzer, Josef Bacik,
 nbd@other.debian.org, linux-raid@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org, Mikulas Patocka,
 xen-devel@lists.xenproject.org, ceph-devel@vger.kernel.org, Ming Lei,
 linux-bcache@vger.kernel.org, linux-block@vger.kernel.org,
 "Martin K. Petersen", linux-mmc@vger.kernel.org, Philipp Reisner,
 virtualization@lists.linux.dev, Lars Ellenberg,
 linuxppc-dev@lists.ozlabs.org, Roger Pau Monné
References: <20240617060532.127975-1-hch@lst.de> <20240617060532.127975-14-hch@lst.de>
In-Reply-To: <20240617060532.127975-14-hch@lst.de>

On 6/17/24 15:04, Christoph Hellwig wrote:
> Move the cache control settings into the queue_limits so that the flags
> can be set atomically with the device queue frozen.
>
> Add new features and flags field for the driver set flags, and internal
> (usually sysfs-controlled) flags in the block layer. Note that we'll
> eventually remove enough field from queue_limits to bring it back to the
> previous size.
>
> The disable flag is inverted compared to the previous meaning, which
> means it now survives a rescan, similar to the max_sectors and
> max_discard_sectors user limits.
>
> The FLUSH and FUA flags are now inherited by blk_stack_limits, which
> simplified the code in dm a lot, but also causes a slight behavior
> change in that dm-switch and dm-unstripe now advertise a write cache
> despite setting num_flush_bios to 0. The I/O path will handle this
> gracefully, but as far as I can tell the lack of num_flush_bios
> and thus flush support is a pre-existing data integrity bug in those
> targets that really needs fixing, after which a non-zero num_flush_bios
> should be required in dm for targets that map to underlying devices.
>
> Signed-off-by: Christoph Hellwig
> Acked-by: Ulf Hansson [mmc]

A few nits below. With these fixed,

Reviewed-by: Damien Le Moal

> +Implementation details for bio based block drivers
> +--------------------------------------------------
> +
> +For bio based drivers the REQ_PREFLUSH and REQ_FUA bit are simplify passed on

...bit are simplify... -> ...bits are simply...

> +to the driver if the drivers sets the BLK_FEAT_WRITE_CACHE flag and the drivers
> +needs to handle them.

s/drivers/driver (2 times)

> -and the driver must handle write requests that have the REQ_FUA bit set
> -in prep_fn/request_fn. If the FUA bit is not natively supported the block
> -layer turns it into an empty REQ_OP_FLUSH request after the actual write.
> +When the BLK_FEAT_FUA flags is set, the REQ_FUA bit simplify passed on for the

s/bit simplify/bit is simply

-- 
Damien Le Moal
Western Digital Research
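[Editorial addendum, not part of the original message.] The inverted disable-flag semantics the commit message describes (the user's cache-disable setting lives in a separate flags word and so survives a rescan that rewrites the driver-set features) can be sketched in plain user-space C. This is an illustrative model only: the constant values, the trimmed-down struct queue_limits, and the rescan() helper below are stand-ins, not the kernel's definitions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Driver-set features (values illustrative, not the kernel's). */
#define BLK_FEAT_WRITE_CACHE (1u << 0)  /* device has a volatile write cache */
#define BLK_FEAT_FUA         (1u << 1)  /* device supports FUA natively */

/* Internal, usually sysfs-controlled flags. */
#define BLK_FLAG_WRITE_CACHE_DISABLED (1u << 0)  /* user disabled the cache */

struct queue_limits {
	uint32_t features;  /* set by the driver */
	uint32_t flags;     /* set via sysfs / block layer internals */
};

/* Does the queue currently advertise a write-back cache?  The cache is
 * exposed only if the driver declares one AND the user has not disabled
 * it; because the disable bit is stored separately, it is not an
 * "enable" bit the driver would have to re-set. */
static bool blk_queue_write_cache(const struct queue_limits *lim)
{
	return (lim->features & BLK_FEAT_WRITE_CACHE) &&
	       !(lim->flags & BLK_FLAG_WRITE_CACHE_DISABLED);
}

/* A rescan re-applies the driver's features but leaves the user flags
 * untouched, which is why the disable setting survives it. */
static void rescan(struct queue_limits *lim, uint32_t driver_features)
{
	lim->features = driver_features;
}
```

With the old queue->flags scheme the cache state was a single writable bit, so a rescan that re-set driver state clobbered the user's choice; splitting driver features from user flags makes the disable sticky, like the max_sectors and max_discard_sectors user limits.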