Date: Wed, 10 Apr 2024 05:05:20 +0100
From: Matthew Wilcox
To: Luis Chamberlain
Cc: John Garry, Pankaj Raghav, Daniel Gomez, Javier González,
	axboe@kernel.dk, kbusch@kernel.org, hch@lst.de, sagi@grimberg.me,
	jejb@linux.ibm.com, martin.petersen@oracle.com, djwong@kernel.org,
	viro@zeniv.linux.org.uk, brauner@kernel.org, dchinner@redhat.com,
	jack@suse.cz, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-fsdevel@vger.kernel.org, tytso@mit.edu, jbongio@google.com,
	linux-scsi@vger.kernel.org, ojaswin@linux.ibm.com,
	linux-aio@kvack.org, linux-btrfs@vger.kernel.org,
	io-uring@vger.kernel.org, nilay@linux.ibm.com, ritesh.list@gmail.com
Subject: Re: [PATCH v6 00/10] block atomic writes
References: <20240326133813.3224593-1-john.g.garry@oracle.com>

On Mon, Apr 08, 2024 at 10:50:47AM -0700, Luis Chamberlain wrote:
> On Fri, Apr 05, 2024 at 11:06:00AM +0100, John Garry wrote:
> > On 04/04/2024 17:48, Matthew Wilcox wrote:
> > > > > The thing is that there's no requirement for an interface as
> > > > > complex as the one you're proposing here.  I've talked to a few
> > > > > database people and all they want is to increase the untorn
> > > > > write boundary from "one disc block" to one database block,
> > > > > typically 8kB or 16kB.
> > > > >
> > > > > So they would be quite happy with a much simpler interface where
> > > > > they set the inode block size at inode creation time,
> > > > We want to support untorn writes for bdev file operations - how
> > > > can we set the inode block size there? Currently it is based on
> > > > logical block size.
> > > ioctl(BLKBSZSET), I guess?  That currently limits to PAGE_SIZE, but
> > > I think we can remove that limitation with the bs>PS patches.
>
> I can say a bit more on this, as I explored that. Essentially Matthew,
> yes, I got that to work, but it requires a set of different patches. We
> have what we tried, and then based on feedback from Chinner we have a
> direction on what to try next. The last effort on that front was having
> the iomap aops for bdev be used and lifting the PAGE_SIZE limit up to
> the page cache limits. The crux on that front was that we end up
> requiring BUFFER_HEAD to be disabled, and that is pretty limiting, so my
> old implementation had dynamic aops to let us use the buffer-head aops
> only for filesystems which require them and the iomap aops otherwise.
> But as Chinner noted, we learned through the DAX experience that that's
> not a route we want to try again, so the real solution is to extend the
> iomap bdev aops code with buffer-head compatibility.

Have you tried just using the buffer_head code?  I think you heard bad
advice at the last LSFMM.  Since then I've landed a bunch of patches
which remove PAGE_SIZE assumptions throughout the buffer_head code, and
while I haven't tried it, it might work.  And it might be easier to make
work than adding more BH hacks to the iomap code.

A quick audit for problems ...
__getblk_slow:

	if (unlikely(size & (bdev_logical_block_size(bdev)-1) ||
			(size < 512 || size > PAGE_SIZE))) {

cont_expand_zero (not used by bdev code)
cont_write_begin (ditto)

That's all I spot from a quick grep for PAGE, offset_in_page() and kmap.
You can't do a lot of buffer_heads per folio, because you'll overrun

	struct buffer_head *bh, *head, *arr[MAX_BUF_PER_PAGE];

in block_read_full_folio(), but you can certainly do _one_ buffer_head
per folio, and that's all you need for bs>PS.

> I suspect this is a use case where perhaps the max folio order could be
> set for the bdev in the future, the logical block size the min order,
> and max order the large atomic.

No, that's not what we want to do at all!  Minimum writeback size needs
to be the atomic size, otherwise we have to keep track of which writes
are atomic and which ones aren't.  So, just set the logical block size
to the atomic size, and we're done.