From: Jan Kara <jack@suse.cz>
To: Daniel Gomez <da.gomez@samsung.com>
Cc: Hugh Dickins <hughd@google.com>,
"viro@zeniv.linux.org.uk" <viro@zeniv.linux.org.uk>,
"brauner@kernel.org" <brauner@kernel.org>,
"jack@suse.cz" <jack@suse.cz>,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"dagmcr@gmail.com" <dagmcr@gmail.com>,
"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"willy@infradead.org" <willy@infradead.org>,
"hch@infradead.org" <hch@infradead.org>,
"mcgrof@kernel.org" <mcgrof@kernel.org>,
Pankaj Raghav <p.raghav@samsung.com>,
"gost.dev@samsung.com" <gost.dev@samsung.com>
Subject: Re: [RFC PATCH 0/9] shmem: fix llseek in hugepages
Date: Tue, 20 Feb 2024 13:39:05 +0100
Message-ID: <20240220123905.qdjn2x3dtryklibl@quack3>
In-Reply-To: <r3ws3x36uaiv6ycuk23nvpe2cn2oyzkk56af2bjlczfzmkfmuv@72otrsbffped>
On Tue 20-02-24 10:26:48, Daniel Gomez wrote:
> On Mon, Feb 19, 2024 at 02:15:47AM -0800, Hugh Dickins wrote:
> I'm uncertain when we may want to be more elastic. In the case of XFS with iomap
> and support for large folios, for instance, we are 'less' elastic than here. So,
> what exactly is the rationale behind wanting shmem to be 'more elastic'?
Well, but if you allocate space in larger chunks - as is the case with
ext4's bigalloc feature - you will be just as 'elastic' as tmpfs with
large folio support... So it is simply the granularity of allocation of
the underlying space that matters here. And for tmpfs the underlying
space happens to be the page cache.
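
As an aside, this granularity is easy to observe from userspace by
comparing a file's apparent size with the space actually allocated to
it. A minimal sketch (not part of the series; the path below is only an
example - point it at a file on tmpfs mounted with huge=always, or on
ext4 with bigalloc, to compare):

#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
	struct stat st;
	const char *path = argc > 1 ? argv[1] : "/dev/shm/testfile";

	if (stat(path, &st)) {
		perror("stat");
		return 1;
	}
	/* st_blocks is counted in 512-byte units by stat(2). */
	printf("apparent size: %lld bytes\n", (long long)st.st_size);
	printf("allocated:     %lld bytes\n", (long long)st.st_blocks * 512);
	return 0;
}

On a filesystem that allocates in large chunks, 'allocated' ends up
rounded well past the apparent size for small files.
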
> If we ever move shmem to large folios [1], and we use them in an opportunistic way,
> then we are going to be more elastic in the default path.
>
> [1] https://lore.kernel.org/all/20230919135536.2165715-1-da.gomez@samsung.com
>
> In addition, I think that having this block granularity can benefit quota
> support and the reclaim path. For example, in the generic/100 fstest, around
> 26M of data is reported as 1G of used disk space when using tmpfs with huge pages.
And I'd argue this is a desirable thing. If 1G worth of pages is attached
to the inode, then quota should account for 1G of usage even though you've
written just 26MB of data to the file. Quota is about constraining used
resources, not about "how much did I write to the file".
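
To put rough numbers on it: assuming the usual 2MiB PMD-sized huge
pages, a file holding only a few KiB of data still has a full 2MiB
folio attached to its inode, so some 512 such files already pin 1GiB
of page cache - and that 1GiB, not the few MiB actually written, is
the resource consumption that quota (and st_blocks) should reflect.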
Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
Thread overview: 16+ messages

  2024-02-09 14:29 ` [RFC PATCH 0/9] shmem: fix llseek in hugepages Daniel Gomez
  2024-02-09 14:29   ` [RFC PATCH 3/9] shmem: move folio zero operation to write_begin() Daniel Gomez
  2024-02-09 14:29   ` [RFC PATCH 2/9] shmem: add per-block uptodate tracking for hugepages Daniel Gomez
  2024-02-09 14:29   ` [RFC PATCH 1/9] splice: don't check for uptodate if partially uptodate is impl Daniel Gomez
  2024-02-09 14:29   ` [RFC PATCH 5/9] shmem: clear_highpage() if block is not uptodate Daniel Gomez
  2024-02-09 14:29   ` [RFC PATCH 4/9] shmem: exit shmem_get_folio_gfp() if block is uptodate Daniel Gomez
  2024-02-09 14:29   ` [RFC PATCH 6/9] shmem: set folio uptodate when reclaim Daniel Gomez
  2024-02-09 14:29   ` [RFC PATCH 7/9] shmem: check if a block is uptodate before splice into pipe Daniel Gomez
  2024-02-09 14:29   ` [RFC PATCH 9/9] shmem: enable per-block uptodate Daniel Gomez
  2024-02-09 14:29   ` [RFC PATCH 8/9] shmem: clear uptodate blocks after PUNCH_HOLE Daniel Gomez
  2024-02-14 19:49   ` [RFC PATCH 0/9] shmem: fix llseek in hugepages Daniel Gomez
  2024-02-19 10:15     ` Hugh Dickins
  2024-02-20 10:26       ` Daniel Gomez
  2024-02-20 12:39         ` Jan Kara [this message]
  2024-02-27 11:42           ` Daniel Gomez
  2024-02-28 15:50             ` Daniel Gomez