* [PATCH 0/3] Use arch_make_folio_accessible() everywhere
@ 2023-09-15 17:28 Matthew Wilcox (Oracle)
From: Matthew Wilcox (Oracle) @ 2023-09-15 17:28 UTC (permalink / raw)
To: Andrew Morton
Cc: Matthew Wilcox (Oracle), linux-mm, Heiko Carstens, Vasily Gorbik,
Alexander Gordeev, Claudio Imbrenda, linux-s390, kvm
We introduced arch_make_folio_accessible() a couple of years
ago, and it's in use in the page writeback path. GUP still uses
arch_make_page_accessible(), which means that we can succeed in making
a single page of a folio accessible, then fail to make the rest of the
folio accessible when it comes time to do writeback and it's too late
to do anything about it. I'm not sure how much of a real problem this is.
Switching everything around to arch_make_folio_accessible() also lets
us switch the page flag to be per-folio instead of per-page, which is
a good step towards dynamically allocated folios.
Build-tested only.
Matthew Wilcox (Oracle) (3):
mm: Use arch_make_folio_accessible() in gup_pte_range()
mm: Convert follow_page_pte() to use a folio
s390: Convert arch_make_page_accessible() to
arch_make_folio_accessible()
arch/s390/include/asm/page.h | 5 ++--
arch/s390/kernel/uv.c | 46 +++++++++++++++++++++++-------------
arch/s390/mm/fault.c | 15 ++++++------
include/linux/mm.h | 20 ++--------------
mm/gup.c | 22 +++++++++--------
5 files changed, 54 insertions(+), 54 deletions(-)
--
2.40.1
^ permalink raw reply [flat|nested] 7+ messages in thread
* [PATCH 1/3] mm: Use arch_make_folio_accessible() in gup_pte_range()
From: Matthew Wilcox (Oracle) @ 2023-09-15 17:28 UTC (permalink / raw)
To: Andrew Morton
Cc: Matthew Wilcox (Oracle), linux-mm, Heiko Carstens, Vasily Gorbik,
Alexander Gordeev, Claudio Imbrenda, linux-s390, kvm
This function already uses folios, so convert the
arch_make_page_accessible() call to arch_make_folio_accessible().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/gup.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/gup.c b/mm/gup.c
index 2f8a2d89fde1..ab8a0ebc728e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2622,13 +2622,13 @@ static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
}
/*
- * We need to make the page accessible if and only if we are
+ * We need to make the folio accessible if and only if we are
* going to access its content (the FOLL_PIN case). Please
* see Documentation/core-api/pin_user_pages.rst for
* details.
*/
if (flags & FOLL_PIN) {
- ret = arch_make_page_accessible(page);
+ ret = arch_make_folio_accessible(folio);
if (ret) {
gup_put_folio(folio, 1, flags);
goto pte_unmap;
--
2.40.1
* [PATCH 2/3] mm: Convert follow_page_pte() to use a folio
From: Matthew Wilcox (Oracle) @ 2023-09-15 17:28 UTC (permalink / raw)
To: Andrew Morton
Cc: Matthew Wilcox (Oracle), linux-mm, Heiko Carstens, Vasily Gorbik,
Alexander Gordeev, Claudio Imbrenda, linux-s390, kvm
Remove uses of PageAnon(), unpin_user_page(), PageDirty(),
set_page_dirty() and mark_page_accessed(), all of which have a hidden
call to compound_head().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/gup.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/mm/gup.c b/mm/gup.c
index ab8a0ebc728e..ff1eaaba5720 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -582,6 +582,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
{
struct mm_struct *mm = vma->vm_mm;
struct page *page;
+ struct folio *folio;
spinlock_t *ptl;
pte_t *ptep, pte;
int ret;
@@ -644,7 +645,8 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
goto out;
}
- VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
+ folio = page_folio(page);
+ VM_BUG_ON_PAGE((flags & FOLL_PIN) && folio_test_anon(folio) &&
!PageAnonExclusive(page), page);
/* try_grab_page() does nothing unless FOLL_GET or FOLL_PIN is set. */
@@ -655,28 +657,28 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
}
/*
- * We need to make the page accessible if and only if we are going
+ * We need to make the folio accessible if and only if we are going
* to access its content (the FOLL_PIN case). Please see
* Documentation/core-api/pin_user_pages.rst for details.
*/
if (flags & FOLL_PIN) {
- ret = arch_make_page_accessible(page);
+ ret = arch_make_folio_accessible(folio);
if (ret) {
- unpin_user_page(page);
+ gup_put_folio(folio, 1, FOLL_PIN);
page = ERR_PTR(ret);
goto out;
}
}
if (flags & FOLL_TOUCH) {
if ((flags & FOLL_WRITE) &&
- !pte_dirty(pte) && !PageDirty(page))
- set_page_dirty(page);
+ !pte_dirty(pte) && !folio_test_dirty(folio))
+ folio_mark_dirty(folio);
/*
* pte_mkyoung() would be more correct here, but atomic care
* is needed to avoid losing the dirty bit: it is easier to use
- * mark_page_accessed().
+ * folio_mark_accessed().
*/
- mark_page_accessed(page);
+ folio_mark_accessed(folio);
}
out:
pte_unmap_unlock(ptep, ptl);
--
2.40.1
* [PATCH 3/3] s390: Convert arch_make_page_accessible() to arch_make_folio_accessible()
From: Matthew Wilcox (Oracle) @ 2023-09-15 17:28 UTC (permalink / raw)
To: Andrew Morton
Cc: Matthew Wilcox (Oracle), linux-mm, Heiko Carstens, Vasily Gorbik,
Alexander Gordeev, Claudio Imbrenda, linux-s390, kvm
With all users now using arch_make_folio_accessible(), move the loop
over each page from common code into the only implementation.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
arch/s390/include/asm/page.h | 5 ++--
arch/s390/kernel/uv.c | 46 +++++++++++++++++++++++-------------
arch/s390/mm/fault.c | 15 ++++++------
include/linux/mm.h | 20 ++--------------
4 files changed, 42 insertions(+), 44 deletions(-)
diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h
index cfec0743314e..4f1b7107f0d9 100644
--- a/arch/s390/include/asm/page.h
+++ b/arch/s390/include/asm/page.h
@@ -162,6 +162,7 @@ static inline int page_reset_referenced(unsigned long addr)
#define _PAGE_ACC_BITS 0xf0 /* HW access control bits */
struct page;
+struct folio;
void arch_free_page(struct page *page, int order);
void arch_alloc_page(struct page *page, int order);
void arch_set_page_dat(struct page *page, int order);
@@ -175,8 +176,8 @@ static inline int devmem_is_allowed(unsigned long pfn)
#define HAVE_ARCH_ALLOC_PAGE
#if IS_ENABLED(CONFIG_PGSTE)
-int arch_make_page_accessible(struct page *page);
-#define HAVE_ARCH_MAKE_PAGE_ACCESSIBLE
+int arch_make_folio_accessible(struct folio *folio);
+#define arch_make_folio_accessible arch_make_folio_accessible
#endif
#define __PAGE_OFFSET 0x0UL
diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
index fc07bc39e698..dadf29469b46 100644
--- a/arch/s390/kernel/uv.c
+++ b/arch/s390/kernel/uv.c
@@ -426,46 +426,58 @@ int gmap_destroy_page(struct gmap *gmap, unsigned long gaddr)
EXPORT_SYMBOL_GPL(gmap_destroy_page);
/*
- * To be called with the page locked or with an extra reference! This will
- * prevent gmap_make_secure from touching the page concurrently. Having 2
- * parallel make_page_accessible is fine, as the UV calls will become a
- * no-op if the page is already exported.
+ * To be called with the folio locked or with an extra reference! This will
+ * prevent gmap_make_secure from touching the folio concurrently. Having 2
+ * parallel make_folio_accessible is fine, as the UV calls will become a
+ * no-op if the folio is already exported.
+ *
+ * Returns 0 on success or negative errno.
*/
-int arch_make_page_accessible(struct page *page)
+int arch_make_folio_accessible(struct folio *folio)
{
- int rc = 0;
+ unsigned long i, nr = folio_nr_pages(folio);
+ unsigned long pfn = folio_pfn(folio);
+ int err = 0;
/* Hugepage cannot be protected, so nothing to do */
- if (PageHuge(page))
+ if (folio_test_hugetlb(folio))
return 0;
/*
* PG_arch_1 is used in 3 places:
* 1. for kernel page tables during early boot
* 2. for storage keys of huge pages and KVM
- * 3. As an indication that this page might be secure. This can
+ * 3. As an indication that this folio might be secure. This can
* overindicate, e.g. we set the bit before calling
* convert_to_secure.
* As secure pages are never huge, all 3 variants can co-exists.
*/
- if (!test_bit(PG_arch_1, &page->flags))
+ if (!test_bit(PG_arch_1, &folio->flags))
return 0;
- rc = uv_pin_shared(page_to_phys(page));
- if (!rc) {
- clear_bit(PG_arch_1, &page->flags);
+ for (i = 0; i < nr; i++) {
+ err = uv_pin_shared((pfn + i) * PAGE_SIZE);
+ if (err)
+ break;
+ }
+ if (!err) {
+ clear_bit(PG_arch_1, &folio->flags);
return 0;
}
- rc = uv_convert_from_secure(page_to_phys(page));
- if (!rc) {
- clear_bit(PG_arch_1, &page->flags);
+ for (i = 0; i < nr; i++) {
+ err = uv_convert_from_secure((pfn + i) * PAGE_SIZE);
+ if (err)
+ break;
+ }
+ if (!err) {
+ clear_bit(PG_arch_1, &folio->flags);
return 0;
}
- return rc;
+ return err;
}
-EXPORT_SYMBOL_GPL(arch_make_page_accessible);
+EXPORT_SYMBOL_GPL(arch_make_folio_accessible);
#endif
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index b678295931c3..ac707e5d58ab 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -588,6 +588,7 @@ void do_secure_storage_access(struct pt_regs *regs)
struct vm_area_struct *vma;
struct mm_struct *mm;
struct page *page;
+ struct folio *folio;
struct gmap *gmap;
int rc;
@@ -643,17 +644,17 @@ void do_secure_storage_access(struct pt_regs *regs)
mmap_read_unlock(mm);
break;
}
- if (arch_make_page_accessible(page))
+ folio = page_folio(page);
+ if (arch_make_folio_accessible(folio))
send_sig(SIGSEGV, current, 0);
- put_page(page);
+ folio_put(folio);
mmap_read_unlock(mm);
break;
case KERNEL_FAULT:
- page = phys_to_page(addr);
- if (unlikely(!try_get_page(page)))
- break;
- rc = arch_make_page_accessible(page);
- put_page(page);
+ folio = page_folio(phys_to_page(addr));
+ folio_get(folio);
+ rc = arch_make_folio_accessible(folio);
+ folio_put(folio);
if (rc)
BUG();
break;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index bf5d0b1b16f4..55d3e466d3cb 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2139,26 +2139,10 @@ static inline int folio_estimated_sharers(struct folio *folio)
return page_mapcount(folio_page(folio, 0));
}
-#ifndef HAVE_ARCH_MAKE_PAGE_ACCESSIBLE
-static inline int arch_make_page_accessible(struct page *page)
-{
- return 0;
-}
-#endif
-
-#ifndef HAVE_ARCH_MAKE_FOLIO_ACCESSIBLE
+#ifndef arch_make_folio_accessible
static inline int arch_make_folio_accessible(struct folio *folio)
{
- int ret;
- long i, nr = folio_nr_pages(folio);
-
- for (i = 0; i < nr; i++) {
- ret = arch_make_page_accessible(folio_page(folio, i));
- if (ret)
- break;
- }
-
- return ret;
+ return 0;
}
#endif
--
2.40.1
* Re: [PATCH 0/3] Use arch_make_folio_accessible() everywhere
From: Claudio Imbrenda @ 2023-09-15 17:54 UTC (permalink / raw)
To: Matthew Wilcox (Oracle)
Cc: Andrew Morton, linux-mm, Heiko Carstens, Vasily Gorbik,
Alexander Gordeev, linux-s390, kvm
On Fri, 15 Sep 2023 18:28:25 +0100
"Matthew Wilcox (Oracle)" <willy@infradead.org> wrote:
> We introduced arch_make_folio_accessible() a couple of years
> ago, and it's in use in the page writeback path. GUP still uses
> arch_make_page_accessible(), which means that we can succeed in making
> a single page of a folio accessible, then fail to make the rest of the
> folio accessible when it comes time to do writeback and it's too late
> to do anything about it. I'm not sure how much of a real problem this is.
>
> Switching everything around to arch_make_folio_accessible() also lets
> us switch the page flag to be per-folio instead of per-page, which is
> a good step towards dynamically allocated folios.
>
> Build-tested only.
>
> Matthew Wilcox (Oracle) (3):
> mm: Use arch_make_folio_accessible() in gup_pte_range()
> mm: Convert follow_page_pte() to use a folio
> s390: Convert arch_make_page_accessible() to
> arch_make_folio_accessible()
>
> arch/s390/include/asm/page.h | 5 ++--
> arch/s390/kernel/uv.c | 46 +++++++++++++++++++++++-------------
> arch/s390/mm/fault.c | 15 ++++++------
> include/linux/mm.h | 20 ++--------------
> mm/gup.c | 22 +++++++++--------
> 5 files changed, 54 insertions(+), 54 deletions(-)
>
if I understand correctly, this will as a matter of fact move the
security property from pages to folios.
this means that trying to access a page will (try to) make the whole
folio accessible, even though that might be counterproductive....
and there is no way to simply split a folio
I don't like this
there are also other reasons, but I don't have time to go into the
details on a Friday evening (will elaborate more on Monday)
* Re: [PATCH 0/3] Use arch_make_folio_accessible() everywhere
From: Matthew Wilcox @ 2023-09-15 18:17 UTC (permalink / raw)
To: Claudio Imbrenda
Cc: Andrew Morton, linux-mm, Heiko Carstens, Vasily Gorbik,
Alexander Gordeev, linux-s390, kvm
On Fri, Sep 15, 2023 at 07:54:50PM +0200, Claudio Imbrenda wrote:
> On Fri, 15 Sep 2023 18:28:25 +0100
> "Matthew Wilcox (Oracle)" <willy@infradead.org> wrote:
>
> > We introduced arch_make_folio_accessible() a couple of years
> > ago, and it's in use in the page writeback path. GUP still uses
> > arch_make_page_accessible(), which means that we can succeed in making
> > a single page of a folio accessible, then fail to make the rest of the
> > folio accessible when it comes time to do writeback and it's too late
> > to do anything about it. I'm not sure how much of a real problem this is.
> >
> > Switching everything around to arch_make_folio_accessible() also lets
> > us switch the page flag to be per-folio instead of per-page, which is
> > a good step towards dynamically allocated folios.
>
> if I understand correctly, this will as a matter of fact move the
> security property from pages to folios.
Correct.
> this means that trying to access a page will (try to) make the whole
> folio accessible, even though that might be counterproductive....
>
> and there is no way to simply split a folio
>
> I don't like this
As I said in the cover letter, we already make the entire folio
accessible in the writeback path. I suppose if you never write the
folio back, this is new ...
Anyway, looking forward to a more substantial discussion on Monday.
* Re: [PATCH 0/3] Use arch_make_folio_accessible() everywhere
From: Claudio Imbrenda @ 2023-09-18 11:01 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Andrew Morton, linux-mm, Heiko Carstens, Vasily Gorbik,
Alexander Gordeev, linux-s390, kvm
On Fri, 15 Sep 2023 19:17:51 +0100
Matthew Wilcox <willy@infradead.org> wrote:
> On Fri, Sep 15, 2023 at 07:54:50PM +0200, Claudio Imbrenda wrote:
> > On Fri, 15 Sep 2023 18:28:25 +0100
> > "Matthew Wilcox (Oracle)" <willy@infradead.org> wrote:
> >
> > > We introduced arch_make_folio_accessible() a couple of years
> > > ago, and it's in use in the page writeback path. GUP still uses
> > > arch_make_page_accessible(), which means that we can succeed in making
> > > a single page of a folio accessible, then fail to make the rest of the
> > > folio accessible when it comes time to do writeback and it's too late
> > > to do anything about it. I'm not sure how much of a real problem this is.
> > >
> > > Switching everything around to arch_make_folio_accessible() also lets
> > > us switch the page flag to be per-folio instead of per-page, which is
> > > a good step towards dynamically allocated folios.
> >
> > if I understand correctly, this will as a matter of fact move the
> > security property from pages to folios.
>
> Correct.
>
> > this means that trying to access a page will (try to) make the whole
> > folio accessible, even though that might be counterproductive....
> >
> > and there is no way to simply split a folio
> >
> > I don't like this
>
> As I said in the cover letter, we already make the entire folio
> accessible in the writeback path. I suppose if you never write the
> folio back, this is new ...
>
> Anyway, looking forward to a more substantial discussion on Monday.
this will be a wall of text, sorry
first some definitions:
* secure page
page belonging to a secure guest, accessible by the guest, not
accessible by the host
* shared page
page belonging to a secure guest, accessible by the guest and by the
host. the guest decides which pages to share
* pinned shared
the host can force a page to stay shared; when the guest wants to
unshare it, a vm-exit event happens. this allows the host to make sure
the page is allowed to become secure again before allowing the page to be
unshared
* exported
page with guest content, encrypted and integrity protected, no longer
accessible by the guest, but accessible by the host
* made accessible
a page is pinned shared, or, if that fails, exported.
now let's see how we use the arch-bit in struct page:
the arch-bit is used to track whether the page is secure or not.
the bit MUST be set whenever a page is secure, and MAY be set for
non-secure pages. in general it should not be set for non-secure pages,
for performance reasons. sometimes we have small windows where
non-secure pages can have the bit set.
sometimes the arch-bit is used to determine whether further processing
(e.g. export) is needed.
when a page transitions from non-secure to secure, we must make sure
that no I/O is possibly happening on it. we do this in 2 ways at the
same time:
* make sure the page is not in writeback, and if so wait until the
writeback is over
* make sure the page does not have any extra references except for the
mapping itself. this means no-one else is trying to use the page for
any other purpose, e.g. direct I/O.
>> If the page has extra references, we wait until they are dropped <<
each time a guest tries to touch an exported page, it gets imported.
it's important to track the security property of each page individually.
and this is the most important thing, and the root of all our issues:
>> we MUST NOT do any kind of I/O on secure pages <<
now let's see what happens for writeback. if a folio is in writeback,
all of it is going to be written back. so all of it needs to be made
accessible. once the I/O is over, the pages can be accessed again.
let's see what happens for other types of I/O (e.g. direct I/O)
currently, when we try to do direct I/O on a specific page,
pin_user_pages() will cause the pages to be made accessible. the
other neighbouring pages will stay secure and keep the arch bit set.
if the security property tracking gets moved to a whole folio, then you
can have a situation where the guest asks for I/O on a shared page, and
then tries to access a neighbouring page... which will hang until the
I/O is done, since the rest of the folio has now been exported.
even worse, virtio requires individual guest pages to be shared
with the host. the host needs to do pin_user_pages() on those pages to
make sure they stay there, since they will be accessed directly. the pin
stays as long as the corresponding virtio devices are configured (i.e.
until the guest shuts down or reboots). with your changes, the whole
folio will be made accessible, both the shared page (pin shared) and the
remaining pages in the folio (exported). when the guest tries to access
the remaining pages, it will fail, triggering an import in the host. the
host will then proceed to wait forever.....
if I understand correctly, we have no control over the size of the
folios, and no way to split folios, so there is no solution to this.