Date: Fri, 26 May 2023 11:36:09 +1000
From: Dave Chinner <david@fromorbit.com>
To: Sarthak Kukreti
Cc: Mike Snitzer, Joe Thornber, Jens Axboe, linux-block@vger.kernel.org,
	Theodore Ts'o, Stefan Hajnoczi, "Michael S. Tsirkin",
	"Darrick J. Wong", Brian Foster, Bart Van Assche,
	linux-kernel@vger.kernel.org, Christoph Hellwig, dm-devel@redhat.com,
	Andreas Dilger, linux-fsdevel@vger.kernel.org,
	linux-ext4@vger.kernel.org, Jason Wang, Alasdair Kergon
Subject: Re: [PATCH v7 0/5] Introduce provisioning primitives
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, May 25, 2023 at 03:47:21PM -0700, Sarthak Kukreti wrote:
> On Thu, May 25, 2023 at 9:00 AM Mike Snitzer wrote:
> > On Thu, May 25 2023 at 7:39P -0400, Dave Chinner wrote:
> > > On Wed, May 24, 2023 at 04:02:49PM -0400, Mike Snitzer wrote:
> > > > On Tue, May 23 2023 at 8:40P -0400, Dave Chinner wrote:
> > > > > It's worth noting that XFS already has a coarse-grained
> > > > > implementation of preferred regions for metadata storage. It will
> > > > > currently not use those metadata-preferred regions for user data
> > > > > unless all the remaining user data space is full. Hence I'm pretty
> > > > > sure that a pre-provisioning enhancement like this can be done
> > > > > entirely in-memory without requiring any new on-disk state to be
> > > > > added.
> > > > >
> > > > > Sure, if we crash and remount, then we might choose a different LBA
> > > > > region for pre-provisioning. But that's not really a huge deal as we
> > > > > could also run an internal background post-mount fstrim operation to
> > > > > remove any unused pre-provisioning that was left over from when the
> > > > > system went down.
> > > >
> > > > This would be the FITRIM with extension you mention below? Which is a
> > > > filesystem interface detail?
> > >
> > > No. We might reuse some of the internal infrastructure we use to
> > > implement FITRIM, but that's about it.
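[Editorial aside: the userspace FITRIM interface referenced here is an ioctl issued against any open file or directory on the filesystem, which is what fstrim(8) does. A minimal sketch of driving it directly; this is Linux-only, the constant is `_IOWR('X', 121, struct fstrim_range)`, and the call typically needs CAP_SYS_ADMIN plus a filesystem that supports discard, so failure is handled rather than assumed away.]

```python
import errno
import fcntl
import os
import struct

# Linux ioctl number: _IOWR('X', 121, struct fstrim_range) (24-byte struct)
FITRIM = 0xC0185879

def fstrim(mountpoint, start=0, length=2**64 - 1, minlen=0):
    """Discard unused blocks in [start, start+length), like fstrim(8)."""
    # struct fstrim_range { __u64 start; __u64 len; __u64 minlen; }
    buf = bytearray(struct.pack("=QQQ", start, length, minlen))
    fd = os.open(mountpoint, os.O_RDONLY)
    try:
        fcntl.ioctl(fd, FITRIM, buf)
    finally:
        os.close(fd)
    # on success the kernel writes back the number of bytes actually trimmed
    return struct.unpack("=QQQ", bytes(buf))[1]

if __name__ == "__main__":
    try:
        print("trimmed %d bytes" % fstrim("/"))
    except OSError as e:
        # EOPNOTSUPP / EPERM are normal on unsupported filesystems or
        # without root; a periodic trim job would just skip and retry later
        print("FITRIM unavailable:", errno.errorcode.get(e.errno, e.errno))
```

A periodic userspace trim of this kind (in practice, the fstrim.timer unit shipped by util-linux) is what would eventually reclaim any leaked pre-provisioned space mentioned above.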
> > > It's just something kinda like FITRIM but with different constraints
> > > determined by the filesystem rather than the user...
> > >
> > > As it is, I'm not sure we'd even need it - a periodic userspace
> > > FITRIM would achieve the same result, so leaked provisioned space
> > > would get cleaned up eventually without the filesystem having to do
> > > anything specific...
> > >
> > > > So dm-thinp would _not_ need to have new
> > > > state that tracks "provisioned but unused" blocks?
> > >
> > > No idea - that's your domain. :)
> > >
> > > dm-snapshot, for certain, will need to track provisioned regions
> > > because it has to guarantee that overwrites to provisioned space in
> > > the origin device will always succeed. Hence it needs to know how
> > > much space breaking sharing in provisioned regions after a snapshot
> > > has been taken will be required...
> >
> > dm-thinp offers its own much more scalable snapshot support (it doesn't
> > use the old dm-snapshot N-way copyout target).
> >
> > dm-snapshot isn't going to be modified to support this level of
> > hardening (dm-snapshot is basically in "maintenance only" now).

Ah, of course. Sorry for the confusion, I was kinda using dm-snapshot
as shorthand for "dm-thinp + snapshots".

> > But I understand your meaning: what you said is 100% applicable to
> > dm-thinp's snapshot implementation and needs to be accounted for in
> > thinp's metadata (an inherent 'provisioned' flag).

*nod*

> A bit orthogonal: would dm-thinp need to differentiate between
> user-triggered provision requests (eg. from fallocate()) vs
> fs-triggered requests?

Why? How is the guarantee the block device has to provide to
provisioned areas different for user vs filesystem internal
provisioned space?

> I would lean towards user provisioned areas not
> getting dedup'd on snapshot creation,

Snapshotting is a clone operation, not a dedupe operation.
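[Editorial aside: the user-triggered provisioning path referred to above starts with an ordinary fallocate() call; everything below the filesystem (REQ_PROVISION down to dm-thinp) is what this patch series would add and is not observable from userspace. A minimal sketch using Python's binding to posix_fallocate(); the `provision` helper name is illustrative, not part of any API:]

```python
import os
import tempfile

def provision(path, nbytes):
    """Preallocate nbytes of backing store for path.

    Today this makes the filesystem reserve blocks so later writes
    cannot fail with ENOSPC; under the REQ_PROVISION series discussed
    in this thread, the filesystem could forward the same guarantee
    down to a dm-thin device beneath it.
    """
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        # equivalent to fallocate(fd, 0, 0, nbytes): allocate space,
        # extending the file size, without punching or zeroing ranges
        os.posix_fallocate(fd, 0, nbytes)
        return os.fstat(fd).st_size
    finally:
        os.close(fd)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        print(provision(os.path.join(d, "demo"), 1 << 20))  # 1048576
```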
Yes, the end result of both is that you have a block shared between
multiple indexes that needs COW on the next overwrite, but the two
operations that get to that point are very different...

> but that would entail tracking
> the state of the original request and possibly a provision request
> flag (REQ_PROVISION_DEDUP_ON_SNAPSHOT) or an inverse flag
> (REQ_PROVISION_NODEDUP). Possibly too convoluted...

Let's not try to add everyone's favourite pony to this interface
before we've even got it off the ground.

It's the simple precision of the API, the lack of cross-layer
communication requirements and the ability to implement and optimise
the independent layers independently that makes this a very appealing
solution. We need to start with getting the simple stuff working and
prove the concept. Then once we can observe the behaviour of a working
system we can start working on optimising individual layers for
efficiency and performance....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com