From: "Darrick J. Wong" <djwong@kernel.org>
To: Xu Yang <xu.yang_2@nxp.com>
Cc: brauner@kernel.org, willy@infradead.org, hch@lst.de,
linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
jun.li@nxp.com
Subject: Re: [PATCH v3] iomap: avoid redundant fault_in_iov_iter_readable() judgement when using larger chunks
Date: Mon, 20 May 2024 08:08:23 -0700
Message-ID: <20240520150823.GA25518@frogsfrogsfrogs>
In-Reply-To: <20240520105525.2176322-1-xu.yang_2@nxp.com>
On Mon, May 20, 2024 at 06:55:25PM +0800, Xu Yang wrote:
> Since commit 5d8edfb900d5 ("iomap: Copy larger chunks from userspace"),
> iomap tries to copy chunks larger than PAGE_SIZE from userspace.
> However, if the mapping doesn't support large folios, only a single
> page (at most 4KB) is created, and 4KB of data is written to the
> pagecache each time; the next 4KB is then handled in the next
> iteration. This causes a potential write performance problem.
>
> If the chunk is 2MB, a total of 512 pages must be handled in the end.
> During this period, fault_in_iov_iter_readable() is called to check
> that the iov_iter is readable. Since only 4KB is handled each time,
> the address ranges below are checked over and over again:
>
> start        end
> ----------   -------
> buf,         buf+2MB
> buf+4KB,     buf+2MB
> buf+8KB,     buf+2MB
> ...
> buf+2044KB,  buf+2MB
>
> Obviously the checked size is wrong, since only 4KB is actually
> handled each time. Summing those windows, roughly
> 512 * (2MB + 4KB) / 2 ≈ 513MB of user address range is re-checked in
> order to write a single 2MB chunk. So derive a chunk size that matches
> the mapping's maximum folio size, which lets iomap work well in the
> non-large-folio case too.
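>
> To illustrate, here is a simplified sketch of the pre-patch
> iomap_write_iter() loop (not the exact upstream code): the fault-in
> window is derived from the fixed 2MB chunk, while the copy itself is
> capped by the 4KB folio:
>
> 	size_t chunk = PAGE_SIZE << MAX_PAGECACHE_ORDER;	/* 2MB */
>
> 	do {
> 		size_t offset = pos & (chunk - 1);
> 		size_t bytes = min(chunk - offset, iov_iter_count(i));
>
> 		/* pre-faults up to 2MB of user memory on every pass */
> 		if (unlikely(fault_in_iov_iter_readable(i, bytes) == bytes)) {
> 			status = -EFAULT;
> 			break;
> 		}
>
> 		/* folio comes from iomap_write_begin(): one 4KB page here */
> 		if (bytes > folio_size(folio) - offset_in_folio(folio, pos))
> 			bytes = folio_size(folio) - offset_in_folio(folio, pos);
>
> 		copied = copy_folio_from_iter_atomic(folio,
> 				offset_in_folio(folio, pos), bytes, i);
> 		pos += copied;
> 	} while (iov_iter_count(i));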
>
> With this change, the write speed stays stable as the block size
> grows. Tested on an ARM64 device.
>
> Before:
>
> - dd if=/dev/zero of=/dev/sda bs=400K count=10485 (334 MB/s)
> - dd if=/dev/zero of=/dev/sda bs=800K count=5242 (278 MB/s)
> - dd if=/dev/zero of=/dev/sda bs=1600K count=2621 (204 MB/s)
> - dd if=/dev/zero of=/dev/sda bs=2200K count=1906 (170 MB/s)
> - dd if=/dev/zero of=/dev/sda bs=3000K count=1398 (150 MB/s)
> - dd if=/dev/zero of=/dev/sda bs=4500K count=932 (139 MB/s)
>
> After:
>
> - dd if=/dev/zero of=/dev/sda bs=400K count=10485 (339 MB/s)
> - dd if=/dev/zero of=/dev/sda bs=800K count=5242 (330 MB/s)
> - dd if=/dev/zero of=/dev/sda bs=1600K count=2621 (332 MB/s)
> - dd if=/dev/zero of=/dev/sda bs=2200K count=1906 (333 MB/s)
> - dd if=/dev/zero of=/dev/sda bs=3000K count=1398 (333 MB/s)
> - dd if=/dev/zero of=/dev/sda bs=4500K count=932 (333 MB/s)
>
> Fixes: 5d8edfb900d5 ("iomap: Copy larger chunks from userspace")
> Cc: stable@vger.kernel.org
> Signed-off-by: Xu Yang <xu.yang_2@nxp.com>
>
> ---
> Changes in v2:
> - fix address space description in message
> Changes in v3:
> - adjust 'chunk' and add mapping_max_folio_size() to the header file
> as suggested by Matthew
> - add write performance results in commit message
> ---
> fs/iomap/buffered-io.c | 2 +-
> include/linux/pagemap.h | 37 ++++++++++++++++++++++++-------------
> 2 files changed, 25 insertions(+), 14 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 41c8f0c68ef5..c5802a459334 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -898,11 +898,11 @@ static bool iomap_write_end(struct iomap_iter *iter, loff_t pos, size_t len,
> static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
> {
> loff_t length = iomap_length(iter);
> - size_t chunk = PAGE_SIZE << MAX_PAGECACHE_ORDER;
> loff_t pos = iter->pos;
> ssize_t total_written = 0;
> long status = 0;
> struct address_space *mapping = iter->inode->i_mapping;
> + size_t chunk = mapping_max_folio_size(mapping);
> unsigned int bdp_flags = (iter->flags & IOMAP_NOWAIT) ? BDP_ASYNC : 0;
>
> do {
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index c5e33e2ca48a..6be8e22360f1 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -346,6 +346,19 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
> m->gfp_mask = mask;
> }
>
> +/*
> + * There are some parts of the kernel which assume that PMD entries
> + * are exactly HPAGE_PMD_ORDER. Those should be fixed, but until then,
> + * limit the maximum allocation order to PMD size. I'm not aware of any
> + * assumptions about maximum order if THP are disabled, but 8 seems like
> + * a good order (that's 1MB if you're using 4kB pages)
> + */
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +#define MAX_PAGECACHE_ORDER HPAGE_PMD_ORDER
> +#else
> +#define MAX_PAGECACHE_ORDER 8
> +#endif
> +
> /**
> * mapping_set_large_folios() - Indicate the file supports large folios.
> * @mapping: The file.
> @@ -372,6 +385,17 @@ static inline bool mapping_large_folio_support(struct address_space *mapping)
> test_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
> }
>
> +/*
> + * Get max folio size in case of supporting large folio, otherwise return
> + * PAGE_SIZE.
Minor quibble -- the comment doesn't need to restate what the function
does because we can see that in the code below.
/* Return the maximum folio size for this pagecache mapping, in bytes. */
With that fixed,
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
--D
> + */
> +static inline size_t mapping_max_folio_size(struct address_space *mapping)
> +{
> + if (mapping_large_folio_support(mapping))
> + return PAGE_SIZE << MAX_PAGECACHE_ORDER;
> + return PAGE_SIZE;
> +}
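
To make the effect concrete, a rough sketch of the resulting chunk
sizes (hypothetical values, assuming 4kB pages with THP enabled, so
HPAGE_PMD_ORDER == 9):

	size_t chunk = mapping_max_folio_size(mapping);

	/*
	 * AS_LARGE_FOLIO_SUPPORT set:   chunk == 4096 << 9 == 2MB (as before)
	 * AS_LARGE_FOLIO_SUPPORT clear: chunk == PAGE_SIZE == 4KB, matching
	 * the amount actually copied per pass, so the fault-in window no
	 * longer covers the whole remaining buffer.
	 */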
> +
> static inline int filemap_nr_thps(struct address_space *mapping)
> {
> #ifdef CONFIG_READ_ONLY_THP_FOR_FS
> @@ -530,19 +554,6 @@ static inline void *detach_page_private(struct page *page)
> return folio_detach_private(page_folio(page));
> }
>
> -/*
> - * There are some parts of the kernel which assume that PMD entries
> - * are exactly HPAGE_PMD_ORDER. Those should be fixed, but until then,
> - * limit the maximum allocation order to PMD size. I'm not aware of any
> - * assumptions about maximum order if THP are disabled, but 8 seems like
> - * a good order (that's 1MB if you're using 4kB pages)
> - */
> -#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -#define MAX_PAGECACHE_ORDER HPAGE_PMD_ORDER
> -#else
> -#define MAX_PAGECACHE_ORDER 8
> -#endif
> -
> #ifdef CONFIG_NUMA
> struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order);
> #else
> --
> 2.34.1
>
>