* [PATCH v5 23/38] s390: Implement the new page table range API
[not found] <20230710204339.3554919-1-willy@infradead.org>
@ 2023-07-10 20:43 ` Matthew Wilcox (Oracle)
2023-07-11 9:07 ` [PATCH v5 00/38] New " Christian Borntraeger
1 sibling, 0 replies; 12+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-07-10 20:43 UTC (permalink / raw)
To: Andrew Morton
Cc: Matthew Wilcox (Oracle), linux-arch, linux-mm, linux-kernel,
Gerald Schaefer, Mike Rapoport, Heiko Carstens, Vasily Gorbik,
Alexander Gordeev, Christian Borntraeger, Sven Schnelle,
linux-s390
Add set_ptes() and update_mmu_cache_range().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: linux-s390@vger.kernel.org
---
arch/s390/include/asm/pgtable.h | 33 ++++++++++++++++++++++++---------
1 file changed, 24 insertions(+), 9 deletions(-)
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index c55f3c3365af..02973c740a5b 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -47,6 +47,7 @@ static inline void update_page_count(int level, long count)
* tables contain all the necessary information.
*/
#define update_mmu_cache(vma, address, ptep) do { } while (0)
+#define update_mmu_cache_range(vmf, vma, addr, ptep, nr) do { } while (0)
#define update_mmu_cache_pmd(vma, address, ptep) do { } while (0)
/*
@@ -1316,20 +1317,34 @@ pgprot_t pgprot_writecombine(pgprot_t prot);
pgprot_t pgprot_writethrough(pgprot_t prot);
/*
- * Certain architectures need to do special things when PTEs
- * within a page table are directly modified. Thus, the following
- * hook is made available.
+ * Set multiple PTEs to consecutive pages with a single call. All PTEs
+ * are within the same folio, PMD and VMA.
*/
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
- pte_t *ptep, pte_t entry)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, pte_t entry, unsigned int nr)
{
if (pte_present(entry))
entry = clear_pte_bit(entry, __pgprot(_PAGE_UNUSED));
- if (mm_has_pgste(mm))
- ptep_set_pte_at(mm, addr, ptep, entry);
- else
- set_pte(ptep, entry);
+ if (mm_has_pgste(mm)) {
+ for (;;) {
+ ptep_set_pte_at(mm, addr, ptep, entry);
+ if (--nr == 0)
+ break;
+ ptep++;
+ entry = __pte(pte_val(entry) + PAGE_SIZE);
+ addr += PAGE_SIZE;
+ }
+ } else {
+ for (;;) {
+ set_pte(ptep, entry);
+ if (--nr == 0)
+ break;
+ ptep++;
+ entry = __pte(pte_val(entry) + PAGE_SIZE);
+ }
+ }
}
+#define set_ptes set_ptes
/*
* Conversion functions: convert a page and protection to a page entry,
--
2.39.2
* Re: [PATCH v5 00/38] New page table range API
[not found] <20230710204339.3554919-1-willy@infradead.org>
2023-07-10 20:43 ` [PATCH v5 23/38] s390: Implement the new page table range API Matthew Wilcox (Oracle)
@ 2023-07-11 9:07 ` Christian Borntraeger
2023-07-11 12:36 ` Matthew Wilcox
1 sibling, 1 reply; 12+ messages in thread
From: Christian Borntraeger @ 2023-07-11 9:07 UTC (permalink / raw)
To: Matthew Wilcox (Oracle), Andrew Morton, Claudio Imbrenda
Cc: linux-arch, linux-mm, linux-kernel, Gerald Schaefer, linux-s390
Am 10.07.23 um 22:43 schrieb Matthew Wilcox (Oracle):
> This patchset changes the API used by the MM to set up page table entries.
> The four APIs are:
> set_ptes(mm, addr, ptep, pte, nr)
> update_mmu_cache_range(vma, addr, ptep, nr)
> flush_dcache_folio(folio)
> flush_icache_pages(vma, page, nr)
>
> flush_dcache_folio() isn't technically new, but no architecture
> implemented it, so I've done that for them. The old APIs remain around
> but are mostly implemented by calling the new interfaces.
>
> The new APIs are based around setting up N page table entries at once.
> The N entries belong to the same PMD, the same folio and the same VMA,
> so ptep++ is a legitimate operation, and locking is taken care of for
> you. Some architectures can do a better job of it than just a loop,
> but I have hesitated to make too deep a change to architectures I don't
> understand well.
>
> One thing I have changed in every architecture is that PG_arch_1 is now a
> per-folio bit instead of a per-page bit. This was something that would
> have to happen eventually, and it makes sense to do it now rather than
> iterate over every page involved in a cache flush and figure out if it
> needs to happen.
I think we do use PG_arch_1 on s390 for our secure page handling, and
making this per-folio instead of per physical page really seems wrong;
it probably breaks this code.
Claudio, can you have a look?
* Re: [PATCH v5 00/38] New page table range API
2023-07-11 9:07 ` [PATCH v5 00/38] New " Christian Borntraeger
@ 2023-07-11 12:36 ` Matthew Wilcox
2023-07-11 15:24 ` Claudio Imbrenda
2023-07-13 10:42 ` Christian Borntraeger
0 siblings, 2 replies; 12+ messages in thread
From: Matthew Wilcox @ 2023-07-11 12:36 UTC (permalink / raw)
To: Christian Borntraeger
Cc: Andrew Morton, Claudio Imbrenda, linux-arch, linux-mm,
linux-kernel, Gerald Schaefer, linux-s390
On Tue, Jul 11, 2023 at 11:07:06AM +0200, Christian Borntraeger wrote:
> Am 10.07.23 um 22:43 schrieb Matthew Wilcox (Oracle):
> > This patchset changes the API used by the MM to set up page table entries.
> > The four APIs are:
> > set_ptes(mm, addr, ptep, pte, nr)
> > update_mmu_cache_range(vma, addr, ptep, nr)
> > flush_dcache_folio(folio)
> > flush_icache_pages(vma, page, nr)
> >
> > flush_dcache_folio() isn't technically new, but no architecture
> > implemented it, so I've done that for them. The old APIs remain around
> > but are mostly implemented by calling the new interfaces.
> >
> > The new APIs are based around setting up N page table entries at once.
> > The N entries belong to the same PMD, the same folio and the same VMA,
> > so ptep++ is a legitimate operation, and locking is taken care of for
> > you. Some architectures can do a better job of it than just a loop,
> > but I have hesitated to make too deep a change to architectures I don't
> > understand well.
> >
> > One thing I have changed in every architecture is that PG_arch_1 is now a
> > per-folio bit instead of a per-page bit. This was something that would
> > have to happen eventually, and it makes sense to do it now rather than
> > iterate over every page involved in a cache flush and figure out if it
> > needs to happen.
>
> I think we do use PG_arch_1 on s390 for our secure page handling and
> making this perf folio instead of physical page really seems wrong
> and it probably breaks this code.
Per-page flags are going away in the next few years, so you're going to
need a new design. s390 seems to do a lot of unusual things. I wish
you'd talk to the rest of us more.
* Re: [PATCH v5 00/38] New page table range API
2023-07-11 12:36 ` Matthew Wilcox
@ 2023-07-11 15:24 ` Claudio Imbrenda
2023-07-11 16:52 ` Andrew Morton
2023-07-12 5:29 ` Matthew Wilcox
2023-07-13 10:42 ` Christian Borntraeger
1 sibling, 2 replies; 12+ messages in thread
From: Claudio Imbrenda @ 2023-07-11 15:24 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Christian Borntraeger, Andrew Morton, linux-arch, linux-mm,
linux-kernel, Gerald Schaefer, linux-s390
On Tue, 11 Jul 2023 13:36:27 +0100
Matthew Wilcox <willy@infradead.org> wrote:
> On Tue, Jul 11, 2023 at 11:07:06AM +0200, Christian Borntraeger wrote:
> > Am 10.07.23 um 22:43 schrieb Matthew Wilcox (Oracle):
> > > This patchset changes the API used by the MM to set up page table entries.
> > > The four APIs are:
> > > set_ptes(mm, addr, ptep, pte, nr)
> > > update_mmu_cache_range(vma, addr, ptep, nr)
> > > flush_dcache_folio(folio)
> > > flush_icache_pages(vma, page, nr)
> > >
> > > flush_dcache_folio() isn't technically new, but no architecture
> > > implemented it, so I've done that for them. The old APIs remain around
> > > but are mostly implemented by calling the new interfaces.
> > >
> > > The new APIs are based around setting up N page table entries at once.
> > > The N entries belong to the same PMD, the same folio and the same VMA,
> > > so ptep++ is a legitimate operation, and locking is taken care of for
> > > you. Some architectures can do a better job of it than just a loop,
> > > but I have hesitated to make too deep a change to architectures I don't
> > > understand well.
> > >
> > > One thing I have changed in every architecture is that PG_arch_1 is now a
> > > per-folio bit instead of a per-page bit. This was something that would
> > > have to happen eventually, and it makes sense to do it now rather than
> > > iterate over every page involved in a cache flush and figure out if it
> > > needs to happen.
> >
> > I think we do use PG_arch_1 on s390 for our secure page handling and
> > making this perf folio instead of physical page really seems wrong
> > and it probably breaks this code.
>
> Per-page flags are going away in the next few years, so you're going to
For each 4k physical page frame, we need to keep track of whether it is
secure or not.
A bit in struct page seems the most logical choice. If that's not
possible anymore, what would you propose we do instead?
> need a new design. s390 seems to do a lot of unusual things. I wish
s390 is an unusual architecture. We are working on un-weirding our
code, but it takes time.
> you'd talk to the rest of us more.
* Re: [PATCH v5 00/38] New page table range API
2023-07-11 15:24 ` Claudio Imbrenda
@ 2023-07-11 16:52 ` Andrew Morton
2023-07-11 22:03 ` Matthew Wilcox
2023-07-12 5:29 ` Matthew Wilcox
1 sibling, 1 reply; 12+ messages in thread
From: Andrew Morton @ 2023-07-11 16:52 UTC (permalink / raw)
To: Claudio Imbrenda
Cc: Matthew Wilcox, Christian Borntraeger, linux-arch, linux-mm,
linux-kernel, Gerald Schaefer, linux-s390
On Tue, 11 Jul 2023 17:24:40 +0200 Claudio Imbrenda <imbrenda@linux.ibm.com> wrote:
> On Tue, 11 Jul 2023 13:36:27 +0100
> Matthew Wilcox <willy@infradead.org> wrote:
>
> > On Tue, Jul 11, 2023 at 11:07:06AM +0200, Christian Borntraeger wrote:
> > > Am 10.07.23 um 22:43 schrieb Matthew Wilcox (Oracle):
> > > > This patchset changes the API used by the MM to set up page table entries.
> > > > The four APIs are:
> > > > set_ptes(mm, addr, ptep, pte, nr)
> > > > update_mmu_cache_range(vma, addr, ptep, nr)
> > > > flush_dcache_folio(folio)
> > > > flush_icache_pages(vma, page, nr)
> > > >
> > > > flush_dcache_folio() isn't technically new, but no architecture
> > > > implemented it, so I've done that for them. The old APIs remain around
> > > > but are mostly implemented by calling the new interfaces.
> > > >
> > > > The new APIs are based around setting up N page table entries at once.
> > > > The N entries belong to the same PMD, the same folio and the same VMA,
> > > > so ptep++ is a legitimate operation, and locking is taken care of for
> > > > you. Some architectures can do a better job of it than just a loop,
> > > > but I have hesitated to make too deep a change to architectures I don't
> > > > understand well.
> > > >
> > > > One thing I have changed in every architecture is that PG_arch_1 is now a
> > > > per-folio bit instead of a per-page bit. This was something that would
> > > > have to happen eventually, and it makes sense to do it now rather than
> > > > iterate over every page involved in a cache flush and figure out if it
> > > > needs to happen.
> > >
> > > I think we do use PG_arch_1 on s390 for our secure page handling and
> > > making this perf folio instead of physical page really seems wrong
> > > and it probably breaks this code.
> >
> > Per-page flags are going away in the next few years, so you're going to
>
> For each 4k physical page frame, we need to keep track whether it is
> secure or not.
>
> A bit in struct page seems the most logical choice. If that's not
> possible anymore, how would you propose we should do?
>
> > need a new design. s390 seems to do a lot of unusual things. I wish
>
> s390 is an unusual architecture. we are working on un-weirding our
> code, but it takes time
>
This issue sounds fatal for this version of this patchset?
* Re: [PATCH v5 00/38] New page table range API
2023-07-11 16:52 ` Andrew Morton
@ 2023-07-11 22:03 ` Matthew Wilcox
0 siblings, 0 replies; 12+ messages in thread
From: Matthew Wilcox @ 2023-07-11 22:03 UTC (permalink / raw)
To: Andrew Morton
Cc: Claudio Imbrenda, Christian Borntraeger, linux-arch, linux-mm,
linux-kernel, Gerald Schaefer, linux-s390
On Tue, Jul 11, 2023 at 09:52:33AM -0700, Andrew Morton wrote:
> On Tue, 11 Jul 2023 17:24:40 +0200 Claudio Imbrenda <imbrenda@linux.ibm.com> wrote:
>
> > On Tue, 11 Jul 2023 13:36:27 +0100
> > Matthew Wilcox <willy@infradead.org> wrote:
> >
> > > On Tue, Jul 11, 2023 at 11:07:06AM +0200, Christian Borntraeger wrote:
> > > > Am 10.07.23 um 22:43 schrieb Matthew Wilcox (Oracle):
> > > > > This patchset changes the API used by the MM to set up page table entries.
> > > > > The four APIs are:
> > > > > set_ptes(mm, addr, ptep, pte, nr)
> > > > > update_mmu_cache_range(vma, addr, ptep, nr)
> > > > > flush_dcache_folio(folio)
> > > > > flush_icache_pages(vma, page, nr)
> > > > >
> > > > > flush_dcache_folio() isn't technically new, but no architecture
> > > > > implemented it, so I've done that for them. The old APIs remain around
> > > > > but are mostly implemented by calling the new interfaces.
> > > > >
> > > > > The new APIs are based around setting up N page table entries at once.
> > > > > The N entries belong to the same PMD, the same folio and the same VMA,
> > > > > so ptep++ is a legitimate operation, and locking is taken care of for
> > > > > you. Some architectures can do a better job of it than just a loop,
> > > > > but I have hesitated to make too deep a change to architectures I don't
> > > > > understand well.
> > > > >
> > > > > One thing I have changed in every architecture is that PG_arch_1 is now a
> > > > > per-folio bit instead of a per-page bit. This was something that would
> > > > > have to happen eventually, and it makes sense to do it now rather than
> > > > > iterate over every page involved in a cache flush and figure out if it
> > > > > needs to happen.
> > > >
> > > > I think we do use PG_arch_1 on s390 for our secure page handling and
> > > > making this perf folio instead of physical page really seems wrong
> > > > and it probably breaks this code.
> > >
> > > Per-page flags are going away in the next few years, so you're going to
> >
> > For each 4k physical page frame, we need to keep track whether it is
> > secure or not.
> >
> > A bit in struct page seems the most logical choice. If that's not
> > possible anymore, how would you propose we should do?
> >
> > > need a new design. s390 seems to do a lot of unusual things. I wish
> >
> > s390 is an unusual architecture. we are working on un-weirding our
> > code, but it takes time
> >
>
> This issue sounds fatal for this version of this patchset?
It's only declared as being per-folio in the cover letter to this
patchset. I haven't done anything that will prohibit s390 from using it
the way they do now. So it's not fatal, but it sounds like the
in_range() macro might be ...
* Re: [PATCH v5 00/38] New page table range API
2023-07-11 15:24 ` Claudio Imbrenda
2023-07-11 16:52 ` Andrew Morton
@ 2023-07-12 5:29 ` Matthew Wilcox
2023-07-12 8:35 ` Claudio Imbrenda
1 sibling, 1 reply; 12+ messages in thread
From: Matthew Wilcox @ 2023-07-12 5:29 UTC (permalink / raw)
To: Claudio Imbrenda
Cc: Christian Borntraeger, Andrew Morton, linux-arch, linux-mm,
linux-kernel, Gerald Schaefer, linux-s390
On Tue, Jul 11, 2023 at 05:24:40PM +0200, Claudio Imbrenda wrote:
> On Tue, 11 Jul 2023 13:36:27 +0100
> Matthew Wilcox <willy@infradead.org> wrote:
> > > I think we do use PG_arch_1 on s390 for our secure page handling and
> > > making this perf folio instead of physical page really seems wrong
> > > and it probably breaks this code.
> >
> > Per-page flags are going away in the next few years, so you're going to
>
> For each 4k physical page frame, we need to keep track whether it is
> secure or not.
Do you? Wouldn't it make more sense to track that per allocation instead
of per page? ie if we allocate a 16kB anon folio for a VMA, don't you
want the entire folio to be marked as secure vs insecure?
I don't really know what secure means in this context. I think it has
something to do with which of the VM or the hypervisor can access it, but
it feels like something new that I've never had properly explained to me.
> A bit in struct page seems the most logical choice. If that's not
> possible anymore, how would you propose we should do?
The plan is to shrink struct page down to a single pointer (which
includes a few tag bits to say what type that pointer is -- a page
table, anon mem, file mem, slab, etc). So there won't be any bits
available for something like "secure or not". You could use a side
structure if you really need to keep track on a per page basis.
* Re: [PATCH v5 00/38] New page table range API
2023-07-12 5:29 ` Matthew Wilcox
@ 2023-07-12 8:35 ` Claudio Imbrenda
0 siblings, 0 replies; 12+ messages in thread
From: Claudio Imbrenda @ 2023-07-12 8:35 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Christian Borntraeger, Andrew Morton, linux-arch, linux-mm,
linux-kernel, Gerald Schaefer, linux-s390
On Wed, 12 Jul 2023 06:29:21 +0100
Matthew Wilcox <willy@infradead.org> wrote:
> On Tue, Jul 11, 2023 at 05:24:40PM +0200, Claudio Imbrenda wrote:
> > On Tue, 11 Jul 2023 13:36:27 +0100
> > Matthew Wilcox <willy@infradead.org> wrote:
> > > > I think we do use PG_arch_1 on s390 for our secure page handling and
> > > > making this perf folio instead of physical page really seems wrong
> > > > and it probably breaks this code.
> > >
> > > Per-page flags are going away in the next few years, so you're going to
> >
> > For each 4k physical page frame, we need to keep track whether it is
> > secure or not.
>
> Do you? Wouldn't it make more sense to track that per allocation instead
no
> of per page? ie if we allocate a 16kB anon folio for a VMA, don't you
> want the entire folio to be marked as secure vs insecure?
If we allocate a 16k folio, it would initially be marked as
non-secure until the guest touches any of it; then only those 4k pages
that are needed get marked as secure.
The guest can also share the pages with the host, in which case the
individual 4k pages get marked as non-secure once I/O is attempted on
them (e.g. direct I/O).
Userspace (i.e. QEMU) can also try to look into the guest, causing
individual pages to be exported (securely encrypted and then marked as
non-secure) if they were secure and not shared.
I/O cannot trigger exports; it will just fail, and that should not
happen, because in some cases it can bring down the whole system. That
is one of the main reasons why we need to keep track of the state.
>
> I don't really know what secure means in this context. I think it has
> something to do with which of the VM or the hypervisor can access it, but
> it feels like something new that I've never had properly explained to me.
Secure means it belongs to a secure guest (confidential VM,
protected virtualisation, Secure Execution, there are many names...).
Hardware will prevent the host (or any other entity except for the
secure guest itself) from accessing those 4k physical page frames,
regardless of how the host might try. An exception will be presented
for any attempts.
I/O will not trigger any exception, and will instead just fail.
I hope this explains why we need to track the property for each 4k
physical page frame.
>
> > A bit in struct page seems the most logical choice. If that's not
> > possible anymore, how would you propose we should do?
>
> The plan is to shrink struct page down to a single pointer (which
interesting
> includes a few tag bits to say what type that pointer is -- a page
> table, anon mem, file mem, slab, etc). So there won't be any bits
> available for something like "secure or not". You could use a side
> structure if you really need to keep track on a per page basis.
I guess that's something we will need to work on
* Re: [PATCH v5 00/38] New page table range API
2023-07-11 12:36 ` Matthew Wilcox
2023-07-11 15:24 ` Claudio Imbrenda
@ 2023-07-13 10:42 ` Christian Borntraeger
2023-07-13 13:42 ` Matthew Wilcox
1 sibling, 1 reply; 12+ messages in thread
From: Christian Borntraeger @ 2023-07-13 10:42 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Andrew Morton, Claudio Imbrenda, linux-arch, linux-mm,
linux-kernel, Gerald Schaefer, linux-s390
Am 11.07.23 um 14:36 schrieb Matthew Wilcox:
> On Tue, Jul 11, 2023 at 11:07:06AM +0200, Christian Borntraeger wrote:
>> Am 10.07.23 um 22:43 schrieb Matthew Wilcox (Oracle):
>>> This patchset changes the API used by the MM to set up page table entries.
>>> The four APIs are:
>>> set_ptes(mm, addr, ptep, pte, nr)
>>> update_mmu_cache_range(vma, addr, ptep, nr)
>>> flush_dcache_folio(folio)
>>> flush_icache_pages(vma, page, nr)
>>>
>>> flush_dcache_folio() isn't technically new, but no architecture
>>> implemented it, so I've done that for them. The old APIs remain around
>>> but are mostly implemented by calling the new interfaces.
>>>
>>> The new APIs are based around setting up N page table entries at once.
>>> The N entries belong to the same PMD, the same folio and the same VMA,
>>> so ptep++ is a legitimate operation, and locking is taken care of for
>>> you. Some architectures can do a better job of it than just a loop,
>>> but I have hesitated to make too deep a change to architectures I don't
>>> understand well.
>>>
>>> One thing I have changed in every architecture is that PG_arch_1 is now a
>>> per-folio bit instead of a per-page bit. This was something that would
>>> have to happen eventually, and it makes sense to do it now rather than
>>> iterate over every page involved in a cache flush and figure out if it
>>> needs to happen.
>>
>> I think we do use PG_arch_1 on s390 for our secure page handling and
>> making this perf folio instead of physical page really seems wrong
>> and it probably breaks this code.
>
> Per-page flags are going away in the next few years, so you're going to
> need a new design. s390 seems to do a lot of unusual things. I wish
> you'd talk to the rest of us more.
I understand your point from a logical point of view, but a 4k page frame
is also a hardware-defined memory region. And I think not only for us.
How do you want to implement hardware poisoning, for example?
Marking the whole folio with PG_hwpoison seems wrong.
* Re: [PATCH v5 00/38] New page table range API
2023-07-13 10:42 ` Christian Borntraeger
@ 2023-07-13 13:42 ` Matthew Wilcox
2023-07-13 20:27 ` Christian Borntraeger
0 siblings, 1 reply; 12+ messages in thread
From: Matthew Wilcox @ 2023-07-13 13:42 UTC (permalink / raw)
To: Christian Borntraeger
Cc: Andrew Morton, Claudio Imbrenda, linux-arch, linux-mm,
linux-kernel, Gerald Schaefer, linux-s390
On Thu, Jul 13, 2023 at 12:42:44PM +0200, Christian Borntraeger wrote:
>
>
> Am 11.07.23 um 14:36 schrieb Matthew Wilcox:
> > On Tue, Jul 11, 2023 at 11:07:06AM +0200, Christian Borntraeger wrote:
> > > Am 10.07.23 um 22:43 schrieb Matthew Wilcox (Oracle):
> > > > This patchset changes the API used by the MM to set up page table entries.
> > > > The four APIs are:
> > > > set_ptes(mm, addr, ptep, pte, nr)
> > > > update_mmu_cache_range(vma, addr, ptep, nr)
> > > > flush_dcache_folio(folio)
> > > > flush_icache_pages(vma, page, nr)
> > > >
> > > > flush_dcache_folio() isn't technically new, but no architecture
> > > > implemented it, so I've done that for them. The old APIs remain around
> > > > but are mostly implemented by calling the new interfaces.
> > > >
> > > > The new APIs are based around setting up N page table entries at once.
> > > > The N entries belong to the same PMD, the same folio and the same VMA,
> > > > so ptep++ is a legitimate operation, and locking is taken care of for
> > > > you. Some architectures can do a better job of it than just a loop,
> > > > but I have hesitated to make too deep a change to architectures I don't
> > > > understand well.
> > > >
> > > > One thing I have changed in every architecture is that PG_arch_1 is now a
> > > > per-folio bit instead of a per-page bit. This was something that would
> > > > have to happen eventually, and it makes sense to do it now rather than
> > > > iterate over every page involved in a cache flush and figure out if it
> > > > needs to happen.
> > >
> > > I think we do use PG_arch_1 on s390 for our secure page handling and
> > > making this perf folio instead of physical page really seems wrong
> > > and it probably breaks this code.
> >
> > Per-page flags are going away in the next few years, so you're going to
> > need a new design. s390 seems to do a lot of unusual things. I wish
> > you'd talk to the rest of us more.
>
> I understand you point from a logical point of view, but a 4k page frame
> is also a hardware defined memory region. And I think not only for us.
> How do you want to implement hardware poisoning for example?
> Marking the whole folio with PG_hwpoison seems wrong.
For hardware poison, we can't use the page for any other purpose any more.
So one of the 16 types of pointer is for hardware poison. That doesn't
seem like it's a solution that could work for secure/insecure pages?
But what I'm really wondering is why you need to transition pages
between secure/insecure on a 4kB boundary. What's the downside to doing
it on a 16kB or 64kB boundary, or whatever size has been allocated?
* Re: [PATCH v5 00/38] New page table range API
2023-07-13 13:42 ` Matthew Wilcox
@ 2023-07-13 20:27 ` Christian Borntraeger
2023-07-13 21:22 ` Matthew Wilcox
0 siblings, 1 reply; 12+ messages in thread
From: Christian Borntraeger @ 2023-07-13 20:27 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Andrew Morton, Claudio Imbrenda, linux-arch, linux-mm,
linux-kernel, Gerald Schaefer, linux-s390
Am 13.07.23 um 15:42 schrieb Matthew Wilcox:
> On Thu, Jul 13, 2023 at 12:42:44PM +0200, Christian Borntraeger wrote:
>>
>>
>> Am 11.07.23 um 14:36 schrieb Matthew Wilcox:
>>> On Tue, Jul 11, 2023 at 11:07:06AM +0200, Christian Borntraeger wrote:
>>>> Am 10.07.23 um 22:43 schrieb Matthew Wilcox (Oracle):
>>>>> This patchset changes the API used by the MM to set up page table entries.
>>>>> The four APIs are:
>>>>> set_ptes(mm, addr, ptep, pte, nr)
>>>>> update_mmu_cache_range(vma, addr, ptep, nr)
>>>>> flush_dcache_folio(folio)
>>>>> flush_icache_pages(vma, page, nr)
>>>>>
>>>>> flush_dcache_folio() isn't technically new, but no architecture
>>>>> implemented it, so I've done that for them. The old APIs remain around
>>>>> but are mostly implemented by calling the new interfaces.
>>>>>
>>>>> The new APIs are based around setting up N page table entries at once.
>>>>> The N entries belong to the same PMD, the same folio and the same VMA,
>>>>> so ptep++ is a legitimate operation, and locking is taken care of for
>>>>> you. Some architectures can do a better job of it than just a loop,
>>>>> but I have hesitated to make too deep a change to architectures I don't
>>>>> understand well.
>>>>>
>>>>> One thing I have changed in every architecture is that PG_arch_1 is now a
>>>>> per-folio bit instead of a per-page bit. This was something that would
>>>>> have to happen eventually, and it makes sense to do it now rather than
>>>>> iterate over every page involved in a cache flush and figure out if it
>>>>> needs to happen.
>>>>
>>>> I think we do use PG_arch_1 on s390 for our secure page handling and
>>>> making this perf folio instead of physical page really seems wrong
>>>> and it probably breaks this code.
>>>
>>> Per-page flags are going away in the next few years, so you're going to
>>> need a new design. s390 seems to do a lot of unusual things. I wish
>>> you'd talk to the rest of us more.
>>
>> I understand you point from a logical point of view, but a 4k page frame
>> is also a hardware defined memory region. And I think not only for us.
>> How do you want to implement hardware poisoning for example?
>> Marking the whole folio with PG_hwpoison seems wrong.
>
> For hardware poison, we can't use the page for any other purpose any more.
> So one of the 16 types of pointer is for hardware poison. That doesn't
> seem like it's a solution that could work for secure/insecure pages?
>
> But what I'm really wondering is why you need to transition pages
> between secure/insecure on a 4kB boundary. What's the downside to doing
> it on a 16kB or 64kB boundary, or whatever size has been allocated?
Exporting and importing more pages will be more expensive, but I assume that
we would then also use the larger chunks (e.g. for paging). The more interesting
problem is that the guest can make a page shared/non-shared at a 4kB granularity.
Stupid question: can folios be split into folio, single page, folio when needed?
* Re: [PATCH v5 00/38] New page table range API
2023-07-13 20:27 ` Christian Borntraeger
@ 2023-07-13 21:22 ` Matthew Wilcox
0 siblings, 0 replies; 12+ messages in thread
From: Matthew Wilcox @ 2023-07-13 21:22 UTC (permalink / raw)
To: Christian Borntraeger
Cc: Andrew Morton, Claudio Imbrenda, linux-arch, linux-mm,
linux-kernel, Gerald Schaefer, linux-s390
On Thu, Jul 13, 2023 at 10:27:21PM +0200, Christian Borntraeger wrote:
>
>
> Am 13.07.23 um 15:42 schrieb Matthew Wilcox:
> > On Thu, Jul 13, 2023 at 12:42:44PM +0200, Christian Borntraeger wrote:
> > >
> > >
> > > Am 11.07.23 um 14:36 schrieb Matthew Wilcox:
> > > > On Tue, Jul 11, 2023 at 11:07:06AM +0200, Christian Borntraeger wrote:
> > > > > Am 10.07.23 um 22:43 schrieb Matthew Wilcox (Oracle):
> > > > > > This patchset changes the API used by the MM to set up page table entries.
> > > > > > The four APIs are:
> > > > > > set_ptes(mm, addr, ptep, pte, nr)
> > > > > > update_mmu_cache_range(vma, addr, ptep, nr)
> > > > > > flush_dcache_folio(folio)
> > > > > > flush_icache_pages(vma, page, nr)
> > > > > >
> > > > > > flush_dcache_folio() isn't technically new, but no architecture
> > > > > > implemented it, so I've done that for them. The old APIs remain around
> > > > > > but are mostly implemented by calling the new interfaces.
> > > > > >
> > > > > > The new APIs are based around setting up N page table entries at once.
> > > > > > The N entries belong to the same PMD, the same folio and the same VMA,
> > > > > > so ptep++ is a legitimate operation, and locking is taken care of for
> > > > > > you. Some architectures can do a better job of it than just a loop,
> > > > > > but I have hesitated to make too deep a change to architectures I don't
> > > > > > understand well.
> > > > > >
> > > > > > One thing I have changed in every architecture is that PG_arch_1 is now a
> > > > > > per-folio bit instead of a per-page bit. This was something that would
> > > > > > have to happen eventually, and it makes sense to do it now rather than
> > > > > > iterate over every page involved in a cache flush and figure out if it
> > > > > > needs to happen.
> > > > >
> > > > > I think we do use PG_arch_1 on s390 for our secure page handling and
> > > > > making this perf folio instead of physical page really seems wrong
> > > > > and it probably breaks this code.
> > > >
> > > > Per-page flags are going away in the next few years, so you're going to
> > > > need a new design. s390 seems to do a lot of unusual things. I wish
> > > > you'd talk to the rest of us more.
> > >
> > > I understand you point from a logical point of view, but a 4k page frame
> > > is also a hardware defined memory region. And I think not only for us.
> > > How do you want to implement hardware poisoning for example?
> > > Marking the whole folio with PG_hwpoison seems wrong.
> >
> > For hardware poison, we can't use the page for any other purpose any more.
> > So one of the 16 types of pointer is for hardware poison. That doesn't
> > seem like it's a solution that could work for secure/insecure pages?
> >
> > But what I'm really wondering is why you need to transition pages
> > between secure/insecure on a 4kB boundary. What's the downside to doing
> > it on a 16kB or 64kB boundary, or whatever size has been allocated?
>
> The export and import for more pages will be more expensive, but I assume that
> we would then also use the larger chunks (e.g. for paging). The more interesting
> problem is that the guest can make a page shared/non-shared on a 4kb granularity.
>
> Stupid question: can folios be split into folio,single page,folio when needed?
If that's a stupid question, you're going to find the answer utterly
moronic ...
Yes, we have split_folio() today. However, it can fail if somebody else
has a reference to it, and if it does succeed, we don't really have a
join_folio() operation (we have khugepaged which walks around looking
for small folios it can replace with large folios, but that's not really
what you want).
In the MM of, let's say, 2025, I do intend to support what we might
call a hole in a folio, precisely for hwpoison and it's beginning to
sound a bit like it might work for you too. So you'd do something like
...
Allocate a 256MB folio for your VM (probably one of many allocations
you'd do to give your VM some memory). That sets 65536 page pointers
to the same value. Then you "secure" all 256MB of it so the
VM can use it all. Then the VM wants the host to read/write a 16kB
chunk of that, so you allocate a "struct insecure_mem" and set four
of the page pointers to point to that instead (it probably contains
a copy of the original page pointer). We'd mark the folio as containing
a hole so that the MM knows something unusual is going on. When you're
done reading/writing the memory, you re-secure it, set the page pointers
back to point to the original folio and free the struct insecure_mem.
Would something like that work for you? Details TBD, of course.