From: Christoph Hellwig <hch@lst.de>
To: Mike Snitzer <snitzer@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Theodore Ts'o <tytso@mit.edu>,
dm-devel@lists.linux.dev, fstests@vger.kernel.org,
linux-ext4@vger.kernel.org, regressions@lists.linux.dev,
linux-block@vger.kernel.org
Subject: Re: dm: use queue_limits_set
Date: Mon, 20 May 2024 17:44:25 +0200
Message-ID: <20240520154425.GB1104@lst.de>
In-Reply-To: <ZktuojMrQWH9MQJO@kernel.org>
On Mon, May 20, 2024 at 11:39:14AM -0400, Mike Snitzer wrote:
> That's fair. My criticism was more about having to fix up DM targets
> to cope with the new normal of max_discard_sectors being set as a
> function of max_hw_discard_sectors and max_user_discard_sectors.
>
> With stacked devices in particular it is _very_ hard for the user to
> know whether their exerting control over a max discard limit is correct.
The user forcing a limit is always very sketchy, which is why I'm
not a fan of it.
> Yeah, but my concern is that if a user sets a value that is too low
> it'll break targets like DM thinp (which Ted reported). So forcibly
> setting both to indirectly set the required max_discard_sectors seems
> necessary.
dm-thin requiring a minimum discard size is a rather odd requirement.
Is this just a debug assert, or is there a real technical reason
for it?  If so we can introduce a knob to force a minimum size or
disallow setting the value from userspace entirely.
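
To sketch what I mean by a knob (min_discard_sectors below is a made-up
name, not an existing queue_limits member), limit validation could
enforce a driver-advertised floor before deriving the effective limit:

	/*
	 * Hypothetical sketch only: "min_discard_sectors" does not exist in
	 * struct queue_limits today.  A driver like dm-thin would set it to
	 * the smallest discard it can service, and validation would reject
	 * (or clamp) a user-requested limit below that floor before the
	 * effective max_discard_sectors is derived from the hw and user
	 * values.
	 */
	if (lim->min_discard_sectors && lim->max_user_discard_sectors &&
	    lim->max_user_discard_sectors < lim->min_discard_sectors)
		return -EINVAL;	/* or clamp up to the floor instead */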
> diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
> index 4793ad2aa1f7..c196f39579af 100644
> --- a/drivers/md/dm-thin.c
> +++ b/drivers/md/dm-thin.c
> @@ -4497,7 +4499,8 @@ static void thin_io_hints(struct dm_target *ti, struct queue_limits *limits)
>
> if (pool->pf.discard_enabled) {
> limits->discard_granularity = pool->sectors_per_block << SECTOR_SHIFT;
> - limits->max_discard_sectors = pool->sectors_per_block * BIO_PRISON_MAX_RANGE;
> + limits->max_hw_discard_sectors = limits->max_user_discard_sectors =
> + pool->sectors_per_block * BIO_PRISON_MAX_RANGE;
> }
Drivers really have no business setting max_user_discard_sectors;
the whole point of the field is to separate device/driver capabilities
from user policy.  So if dm-thin really has no way of handling
smaller discards, we need to ensure they can't be set.
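
Concretely, for the hunk above I'd expect something like this instead
(sketch only; the rest of thin_io_hints stays untouched, and I'm
assuming the block layer keeps deriving the effective
max_discard_sectors from the hw and user values during limits
validation):

	if (pool->pf.discard_enabled) {
		/* Advertise the capability only; user policy stays out of the driver. */
		limits->discard_granularity = pool->sectors_per_block << SECTOR_SHIFT;
		limits->max_hw_discard_sectors =
			pool->sectors_per_block * BIO_PRISON_MAX_RANGE;
	}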
I'm also kinda curious what actually sets a user limit in Ted's case
as that feels weird.