linux-fsdevel.vger.kernel.org archive mirror
From: Matthew Wilcox <willy@infradead.org>
To: "Yin, Fengwei" <fengwei.yin@intel.com>
Cc: "Darrick J. Wong" <djwong@kernel.org>,
	linux-fsdevel@vger.kernel.org, linux-xfs@vger.kernel.org,
	Wang Yugui <wangyugui@e16-tech.com>,
	Dave Chinner <david@fromorbit.com>,
	Christoph Hellwig <hch@infradead.org>,
	Al Viro <viro@zeniv.linux.org.uk>
Subject: Re: [PATCH v2 7/7] iomap: Copy larger chunks from userspace
Date: Tue, 6 Jun 2023 19:07:53 +0100	[thread overview]
Message-ID: <ZH91+QWd3k8a2x/Z@casper.infradead.org> (raw)
In-Reply-To: <d47f280e-9e98-ffd2-1386-097fc8dc11b5@intel.com>

On Mon, Jun 05, 2023 at 04:25:22PM +0800, Yin, Fengwei wrote:
> On 6/5/2023 6:11 AM, Matthew Wilcox wrote:
> > On Sun, Jun 04, 2023 at 11:29:52AM -0700, Darrick J. Wong wrote:
> >> On Fri, Jun 02, 2023 at 11:24:44PM +0100, Matthew Wilcox (Oracle) wrote:
> >>> -		copied = copy_page_from_iter_atomic(page, offset, bytes, i);
> >>> +		copied = copy_page_from_iter_atomic(&folio->page, offset, bytes, i);
> >>
> >> I think I've gotten lost in the weeds.  Does copy_page_from_iter_atomic
> >> actually know how to deal with a multipage folio?  AFAICT it takes a
> >> page, kmaps it, and copies @bytes starting at @offset in the page.  If
> >> a caller feeds it a multipage folio, does that all work correctly?  Or
> >> will the pagecache split multipage folios as needed to make it work
> >> right?
> > 
> > It's a smidgen inefficient, but it does work.  First, it calls
> > page_copy_sane() to check that offset & n fit within the compound page
> > (ie this all predates folios).
> > 
> > ... Oh.  copy_page_from_iter() handles this correctly.
> > copy_page_from_iter_atomic() doesn't.  I'll have to fix this
> > first.  Looks like Al fixed copy_page_from_iter() in c03f05f183cd
> > and didn't fix copy_page_from_iter_atomic().
> > 
> >> If we create a 64k folio at pos 0 and then want to write a byte at pos
> >> 40k, does __filemap_get_folio break up the 64k folio so that the folio
> >> returned by iomap_get_folio starts at 40k?  Or can the iter code handle
> >> jumping ten pages into a 16-page folio and I just can't see it?
> > 
> > Well ... it handles it fine unless it's highmem.  p is kaddr + offset,
> > so if offset is 40k, it works correctly on !highmem.
> So would it be better to have separate implementations for !highmem and
> highmem? For !highmem we wouldn't need kmap_local_page()/kunmap_local(),
> and the chunk size per copy wouldn't be limited to PAGE_SIZE. Thanks.

No, that's not needed; we can handle that just fine.  Maybe this can
use kmap_local_page() instead of kmap_atomic().  Al, what do you think?
I haven't tested this yet; need to figure out a qemu config with highmem ...

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 960223ed9199..d3d6a0789625 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -857,24 +857,36 @@ size_t iov_iter_zero(size_t bytes, struct iov_iter *i)
 }
 EXPORT_SYMBOL(iov_iter_zero);
 
-size_t copy_page_from_iter_atomic(struct page *page, unsigned offset, size_t bytes,
-				  struct iov_iter *i)
+size_t copy_page_from_iter_atomic(struct page *page, unsigned offset,
+		size_t bytes, struct iov_iter *i)
 {
-	char *kaddr = kmap_atomic(page), *p = kaddr + offset;
-	if (!page_copy_sane(page, offset, bytes)) {
-		kunmap_atomic(kaddr);
+	size_t n = bytes, copied = 0;
+
+	if (!page_copy_sane(page, offset, bytes))
 		return 0;
-	}
-	if (WARN_ON_ONCE(!i->data_source)) {
-		kunmap_atomic(kaddr);
+	if (WARN_ON_ONCE(!i->data_source))
 		return 0;
+
+	page += offset / PAGE_SIZE;
+	offset %= PAGE_SIZE;
+	if (PageHighMem(page))
+		n = min_t(size_t, bytes, PAGE_SIZE - offset);
+	while (1) {
+		char *kaddr = kmap_atomic(page) + offset;
+		iterate_and_advance(i, n, base, len, off,
+			copyin(kaddr + off, base, len),
+			memcpy_from_iter(i, kaddr + off, base, len)
+		)
+		kunmap_atomic(kaddr);
+		copied += n;
+		if (!PageHighMem(page) || copied == bytes || n == 0)
+			break;
+		offset += n;
+		page += offset / PAGE_SIZE;
+		offset %= PAGE_SIZE;
+		n = min_t(size_t, bytes - copied, PAGE_SIZE);
 	}
-	iterate_and_advance(i, bytes, base, len, off,
-		copyin(p + off, base, len),
-		memcpy_from_iter(i, p + off, base, len)
-	)
-	kunmap_atomic(kaddr);
-	return bytes;
+	return copied;
 }
 EXPORT_SYMBOL(copy_page_from_iter_atomic);
 

Thread overview: 39+ messages
2023-06-02 22:24 [PATCH v2 0/7] Create large folios in iomap buffered write path Matthew Wilcox (Oracle)
2023-06-02 22:24 ` [PATCH v2 1/7] iomap: Remove large folio handling in iomap_invalidate_folio() Matthew Wilcox (Oracle)
2023-06-04 17:58   ` Darrick J. Wong
2023-06-05  7:11   ` Christoph Hellwig
2023-06-02 22:24 ` [PATCH v2 2/7] doc: Correct the description of ->release_folio Matthew Wilcox (Oracle)
2023-06-04 17:55   ` Darrick J. Wong
2023-06-04 20:10     ` Matthew Wilcox
2023-06-04 20:33       ` Darrick J. Wong
2023-06-05 13:11         ` Matthew Wilcox
2023-06-05 15:07           ` Darrick J. Wong
2023-06-05  7:12   ` Christoph Hellwig
2023-06-02 22:24 ` [PATCH v2 3/7] iomap: Remove unnecessary test from iomap_release_folio() Matthew Wilcox (Oracle)
2023-06-04 18:01   ` Darrick J. Wong
2023-06-04 21:39     ` Matthew Wilcox
2023-06-05 21:10       ` Ritesh Harjani
2023-06-05  7:13   ` Christoph Hellwig
2023-06-02 22:24 ` [PATCH v2 4/7] filemap: Add fgp_t typedef Matthew Wilcox (Oracle)
2023-06-04 18:02   ` Darrick J. Wong
2023-06-05  7:14   ` Christoph Hellwig
2023-06-02 22:24 ` [PATCH v2 5/7] filemap: Allow __filemap_get_folio to allocate large folios Matthew Wilcox (Oracle)
2023-06-04 18:09   ` Darrick J. Wong
2023-06-04 21:48     ` Matthew Wilcox
2023-06-05 15:21       ` Darrick J. Wong
2023-06-05  7:16   ` Christoph Hellwig
2023-06-02 22:24 ` [PATCH v2 6/7] iomap: Create large folios in the buffered write path Matthew Wilcox (Oracle)
2023-06-04 18:10   ` Darrick J. Wong
2023-06-05  7:16   ` Christoph Hellwig
2023-06-02 22:24 ` [PATCH v2 7/7] iomap: Copy larger chunks from userspace Matthew Wilcox (Oracle)
2023-06-04 18:29   ` Darrick J. Wong
2023-06-04 22:11     ` Matthew Wilcox
2023-06-05  8:25       ` Yin, Fengwei
2023-06-06 18:07         ` Matthew Wilcox [this message]
2023-06-07  2:21           ` Yin Fengwei
2023-06-07  5:33             ` Yin, Fengwei
2023-06-07 15:55             ` Matthew Wilcox
2023-06-08  1:22               ` Yin Fengwei
2023-06-07  6:40           ` Yin Fengwei
2023-06-07 15:56             ` Matthew Wilcox
2023-06-04  0:19 ` [PATCH v2 0/7] Create large folios in iomap buffered write path Wang Yugui
