From: Matthew Wilcox <willy@infradead.org>
To: Hannes Reinecke <hare@suse.de>
Cc: linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>,
Christoph Hellwig <hch@lst.de>,
Luis Chamberlain <mcgrof@kernel.org>
Subject: Re: [PATCH 2/7] brd: convert to folios
Date: Wed, 14 Jun 2023 14:45:30 +0100
Message-ID: <ZInEeq1lfDUxye58@casper.infradead.org>
In-Reply-To: <20230614114637.89759-3-hare@suse.de>
On Wed, Jun 14, 2023 at 01:46:32PM +0200, Hannes Reinecke wrote:
> /*
> - * Each block ramdisk device has a xarray brd_pages of pages that stores
> - * the pages containing the block device's contents. A brd page's ->index is
> - * its offset in PAGE_SIZE units. This is similar to, but in no way connected
> - * with, the kernel's pagecache or buffer cache (which sit above our block
> - * device).
> + * Each block ramdisk device has a xarray of folios that stores the folios
> + * containing the block device's contents. A brd folio's ->index is its offset
> + * in PAGE_SIZE units. This is similar to, but in no way connected with,
> + * the kernel's pagecache or buffer cache (which sit above our block device).
Having read my way to the end of the series, I can now circle back and
say this comment is wrong.  The folio->index is the folio's offset in
PAGE_SIZE units if the sector size is <= PAGE_SIZE; otherwise it's the
offset in sector-size units.  This is _different from_ the pagecache,
which uses PAGE_SIZE units and multi-index entries in the XArray.
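To spell the rule out, a hypothetical helper capturing the indexing
convention the series actually implements (the name and shape are mine,
not the series' code):

static pgoff_t brd_folio_index(unsigned int sector_size, sector_t sector)
{
	/* PAGE_SIZE units when each folio covers one or more sectors ... */
	if (sector_size <= PAGE_SIZE)
		return sector >> (PAGE_SHIFT - SECTOR_SHIFT);
	/* ... sector-size units when each sector spans multiple pages. */
	return sector >> (ilog2(sector_size) - SECTOR_SHIFT);
}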
> @@ -144,29 +143,29 @@ static int copy_to_brd_setup(struct brd_device *brd, sector_t sector, size_t n,
> static void copy_to_brd(struct brd_device *brd, const void *src,
> sector_t sector, size_t n)
> {
> - struct page *page;
> + struct folio *folio;
> void *dst;
> unsigned int offset = (sector & (PAGE_SECTORS-1)) << SECTOR_SHIFT;
> size_t copy;
>
> copy = min_t(size_t, n, PAGE_SIZE - offset);
> - page = brd_lookup_page(brd, sector);
> - BUG_ON(!page);
> + folio = brd_lookup_folio(brd, sector);
> + BUG_ON(!folio);
>
> - dst = kmap_atomic(page);
> - memcpy(dst + offset, src, copy);
> - kunmap_atomic(dst);
> + dst = kmap_local_folio(folio, offset);
> + memcpy(dst, src, copy);
> + kunmap_local(dst);
This should use memcpy_to_folio(), which doesn't exist yet.
Compile-tested patch incoming shortly ...
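For reference, a sketch of what such a helper might look like (purely
illustrative; the incoming patch is authoritative).  It copies one kmap
chunk at a time, because kmap_local_folio() only maps a single page of
a highmem folio:

static inline void memcpy_to_folio(struct folio *folio, size_t offset,
		const char *from, size_t len)
{
	VM_BUG_ON(offset + len > folio_size(folio));

	do {
		char *to = kmap_local_folio(folio, offset);
		size_t chunk = len;

		/* A highmem mapping is only valid to the end of this page. */
		if (folio_test_highmem(folio) &&
		    chunk > PAGE_SIZE - offset_in_page(offset))
			chunk = PAGE_SIZE - offset_in_page(offset);
		memcpy(to, from, chunk);
		kunmap_local(to);

		from += chunk;
		offset += chunk;
		len -= chunk;
	} while (len > 0);

	flush_dcache_folio(folio);
}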
> + folio = brd_lookup_folio(brd, sector);
> + if (folio) {
> + src = kmap_local_folio(folio, offset);
> + memcpy(dst, src, copy);
> + kunmap_local(src);
And this will need memcpy_from_folio(), patch for that incoming too.
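Again just a sketch of the shape I have in mind; it mirrors the
memcpy_to_folio() loop above, minus the dcache flush since here the
folio is the source rather than the destination:

static inline void memcpy_from_folio(char *to, struct folio *folio,
		size_t offset, size_t len)
{
	VM_BUG_ON(offset + len > folio_size(folio));

	do {
		const char *from = kmap_local_folio(folio, offset);
		size_t chunk = len;

		if (folio_test_highmem(folio) &&
		    chunk > PAGE_SIZE - offset_in_page(offset))
			chunk = PAGE_SIZE - offset_in_page(offset);
		memcpy(to, from, chunk);
		kunmap_local(from);

		to += chunk;
		offset += chunk;
		len -= chunk;
	} while (len > 0);
}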
> @@ -226,15 +225,15 @@ static int brd_do_bvec(struct brd_device *brd, struct page *page,
> goto out;
> }
>
> - mem = kmap_atomic(page);
> + mem = kmap_local_folio(folio, off);
> if (!op_is_write(opf)) {
> - copy_from_brd(mem + off, brd, sector, len);
> - flush_dcache_page(page);
> + copy_from_brd(mem, brd, sector, len);
> + flush_dcache_folio(folio);
Nngh. This will need to be a more complex loop. I don't think we can
do a simple abstraction here. Perhaps you can base it on the two
patches you're about to see?
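Very roughly, and glossing over sub-sector alignment, I mean something
of this shape (hypothetical; kmap_local_folio() maps only one page of a
highmem folio, so the copy has to be chunked at kmap granularity):

	while (len) {
		size_t chunk = len;
		void *mem;

		if (folio_test_highmem(folio) &&
		    chunk > PAGE_SIZE - offset_in_page(off))
			chunk = PAGE_SIZE - offset_in_page(off);

		mem = kmap_local_folio(folio, off);
		if (!op_is_write(opf))
			copy_from_brd(mem, brd, sector, chunk);
		else
			copy_to_brd(brd, mem, sector, chunk);
		kunmap_local(mem);

		off += chunk;
		sector += chunk >> SECTOR_SHIFT;
		len -= chunk;
	}
	if (!op_is_write(opf))
		flush_dcache_folio(folio);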
Thread overview: 40+ messages
2023-06-14 11:46 [PATCH 0/7] RFC: high-order folio support for I/O Hannes Reinecke
2023-06-14 11:46 ` [PATCH 1/7] brd: use XArray instead of radix-tree to index backing pages Hannes Reinecke
2023-06-14 12:45 ` Matthew Wilcox
2023-06-14 12:50 ` Pankaj Raghav
2023-06-14 13:03 ` Hannes Reinecke
2023-06-14 11:46 ` [PATCH 2/7] brd: convert to folios Hannes Reinecke
2023-06-14 13:45 ` Matthew Wilcox [this message]
2023-06-14 13:50 ` Hannes Reinecke
2023-06-14 11:46 ` [PATCH 3/7] brd: abstract page_size conventions Hannes Reinecke
2023-06-14 11:46 ` [PATCH 4/7] brd: make sector size configurable Hannes Reinecke
2023-06-14 12:55 ` Matthew Wilcox
2023-06-14 13:02 ` Hannes Reinecke
2023-06-15 2:17 ` Dave Chinner
2023-06-15 5:55 ` Christoph Hellwig
2023-06-15 6:33 ` Hannes Reinecke
2023-06-15 6:23 ` Hannes Reinecke
2023-06-14 11:46 ` [PATCH 5/7] brd: make logical sector size configurable Hannes Reinecke
2023-06-14 11:46 ` [PATCH 6/7] mm/filemap: allocate folios with mapping blocksize Hannes Reinecke
[not found] ` <CGME20230619080901eucas1p224e67aa31866d2ad8d259b2209c2db67@eucas1p2.samsung.com>
2023-06-19 8:08 ` Pankaj Raghav
2023-06-19 8:42 ` Hannes Reinecke
2023-06-19 22:57 ` Dave Chinner
2023-06-20 0:00 ` Matthew Wilcox
2023-06-20 5:57 ` Hannes Reinecke
2023-06-14 11:46 ` [PATCH 7/7] mm/readahead: align readahead down to mapping blocksize Hannes Reinecke
2023-06-14 13:17 ` [PATCH 0/7] RFC: high-order folio support for I/O Hannes Reinecke
2023-06-14 13:53 ` Matthew Wilcox
2023-06-14 15:06 ` Hannes Reinecke
2023-06-14 15:35 ` Hannes Reinecke
2023-06-14 17:46 ` Matthew Wilcox
2023-06-14 23:53 ` Dave Chinner
2023-06-15 6:21 ` Hannes Reinecke
2023-06-15 8:51 ` Dave Chinner
2023-06-16 16:06 ` Kent Overstreet
2023-06-15 3:44 ` Dave Chinner
2023-06-14 13:48 ` [PATCH 1/2] highmem: Add memcpy_to_folio() Matthew Wilcox (Oracle)
2023-06-14 18:38 ` kernel test robot
2023-06-14 19:30 ` kernel test robot
2023-06-15 5:58 ` Christoph Hellwig
2023-06-15 12:16 ` Matthew Wilcox
2023-06-14 13:48 ` [PATCH 2/2] highmem: Add memcpy_from_folio() Matthew Wilcox (Oracle)