* [PATCH kernel] powerpc/mm_iommu: Allow pinning large regions
From: Alexey Kardashevskiy @ 2019-04-02 4:31 UTC
To: linuxppc-dev; +Cc: Alexey Kardashevskiy, Aneesh Kumar K.V, David Gibson
When called with vmas_arg==NULL, get_user_pages_longterm() allocates
an array of nr_pages * 8 bytes, which can easily exceed the maximum
allocation order; for example, registering memory for a 256GB guest
does this and fails in __alloc_pages_nodemask().

This adds a loop over chunks of entries so that each temporary array
allocation fits within the max order limit.
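
For a rough feel for the numbers, here is a standalone sketch; the
64K page size and MAX_ORDER value are assumptions for a common
powerpc64 configuration, not values taken from this patch:

	#include <stdio.h>

	int main(void)
	{
		unsigned long page_shift = 16;	/* assumed: 64K pages */
		unsigned long max_order = 9;	/* assumed MAX_ORDER */
		unsigned long max_alloc = 1UL << (page_shift + max_order - 1);
		unsigned long entries = (256UL << 30) >> page_shift;
		unsigned long ptr_size = 8;	/* sizeof(struct vm_area_struct *) on 64-bit */

		printf("max single alloc:  %lu MB\n", max_alloc >> 20);           /* 16 MB */
		printf("vmas array:        %lu MB\n", (entries * ptr_size) >> 20); /* 32 MB: too big */
		printf("entries per chunk: %lu\n", max_alloc / ptr_size);         /* 2097152 */
		return 0;
	}
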
Fixes: 678e174c4c16 ("powerpc/mm/iommu: allow migration of cma allocated pages during mm_iommu_do_alloc", 2019-03-05)
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
arch/powerpc/mm/mmu_context_iommu.c | 25 +++++++++++++++++++++----
1 file changed, 21 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/mm/mmu_context_iommu.c b/arch/powerpc/mm/mmu_context_iommu.c
index 36a826e23d45..e058064b013c 100644
--- a/arch/powerpc/mm/mmu_context_iommu.c
+++ b/arch/powerpc/mm/mmu_context_iommu.c
@@ -131,6 +131,7 @@ long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
 	unsigned int pageshift, mem_pageshift;
 	struct page **hpages;
 	phys_addr_t *hpas;
+	unsigned long entry, chunk, pinned;
 
 	mutex_lock(&mem_list_mutex);
 	if (mm_iommu_find(mm, ua, entries)) {
@@ -152,13 +153,29 @@ long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
 	}
 
 	down_read(&mm->mmap_sem);
-	ret = get_user_pages_longterm(ua, entries, FOLL_WRITE, hpages, NULL);
+	chunk = (1UL << (PAGE_SHIFT + MAX_ORDER - 1)) /
+			sizeof(struct vm_area_struct *);
+	chunk = min(chunk, entries);
+	for (entry = 0, pinned = 0; entry < entries; entry += chunk) {
+		unsigned long n = min(entries - entry, chunk);
+
+		ret = get_user_pages_longterm(ua + (entry << PAGE_SHIFT), n,
+				FOLL_WRITE, hpages + entry, NULL);
+		if (ret == n) {
+			pinned += n;
+			continue;
+		}
+		if (ret >= 0)
+			pinned += ret;
+		break;
+	}
 	up_read(&mm->mmap_sem);
-	if (ret != entries) {
+	if (pinned != entries) {
 		/* free the reference taken */
-		for (i = 0; i < ret; i++)
+		for (i = 0; i < pinned; i++)
 			put_page(hpages[i]);
-		ret = -EFAULT;
+		if (!ret)
+			ret = -EFAULT;
 		goto cleanup_exit;
 	}
 
--
2.17.1
* Re: [PATCH kernel] powerpc/mm_iommu: Allow pinning large regions
From: David Gibson @ 2019-04-08 3:58 UTC
To: Alexey Kardashevskiy; +Cc: Aneesh Kumar K.V, linuxppc-dev
On Tue, Apr 02, 2019 at 03:31:01PM +1100, Alexey Kardashevskiy wrote:
> When called with vmas_arg==NULL, get_user_pages_longterm() allocates
> an array of nr_pages * 8 bytes, which can easily exceed the maximum
> allocation order; for example, registering memory for a 256GB guest
> does this and fails in __alloc_pages_nodemask().
>
> This adds a loop over chunks of entries so that each temporary array
> allocation fits within the max order limit.
>
> Fixes: 678e174c4c16 ("powerpc/mm/iommu: allow migration of cma allocated pages during mm_iommu_do_alloc", 2019-03-05)
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> ---
> arch/powerpc/mm/mmu_context_iommu.c | 25 +++++++++++++++++++++----
> 1 file changed, 21 insertions(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/mm/mmu_context_iommu.c b/arch/powerpc/mm/mmu_context_iommu.c
> index 36a826e23d45..e058064b013c 100644
> --- a/arch/powerpc/mm/mmu_context_iommu.c
> +++ b/arch/powerpc/mm/mmu_context_iommu.c
> @@ -131,6 +131,7 @@ long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
>  	unsigned int pageshift, mem_pageshift;
>  	struct page **hpages;
>  	phys_addr_t *hpas;
> +	unsigned long entry, chunk, pinned;
>  
>  	mutex_lock(&mem_list_mutex);
>  	if (mm_iommu_find(mm, ua, entries)) {
> @@ -152,13 +153,29 @@ long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
>  	}
>  
>  	down_read(&mm->mmap_sem);
> -	ret = get_user_pages_longterm(ua, entries, FOLL_WRITE, hpages, NULL);
> +	chunk = (1UL << (PAGE_SHIFT + MAX_ORDER - 1)) /
> +			sizeof(struct vm_area_struct *);
> +	chunk = min(chunk, entries);
I think this is redundant with..
> +	for (entry = 0, pinned = 0; entry < entries; entry += chunk) {
> +		unsigned long n = min(entries - entry, chunk);
.. this: on the first iteration n = min(entries - 0, chunk), which
already clamps to entries, so the earlier min() never changes the
result.
But otherwise LGTM.
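
A quick standalone sketch (not from the thread; plain C with a local
min() helper is assumed) of why the pre-clamp is a no-op:

	#include <stdio.h>

	static unsigned long min_ul(unsigned long a, unsigned long b)
	{
		return a < b ? a : b;
	}

	int main(void)
	{
		unsigned long entries = 3, chunk = 8, entry;

		/* chunk > entries: even without pre-clamping chunk to
		 * min(8, 3), the loop's per-iteration min() computes
		 * n = 3 on its only pass. */
		for (entry = 0; entry < entries; entry += chunk)
			printf("n = %lu\n", min_ul(entries - entry, chunk));
		return 0;
	}
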
> +
> +		ret = get_user_pages_longterm(ua + (entry << PAGE_SHIFT), n,
> +				FOLL_WRITE, hpages + entry, NULL);
> +		if (ret == n) {
> +			pinned += n;
> +			continue;
> +		}
> +		if (ret >= 0)
> +			pinned += ret;
> +		break;
> +	}
>  	up_read(&mm->mmap_sem);
> -	if (ret != entries) {
> +	if (pinned != entries) {
>  		/* free the reference taken */
> -		for (i = 0; i < ret; i++)
> +		for (i = 0; i < pinned; i++)
>  			put_page(hpages[i]);
> -		ret = -EFAULT;
> +		if (!ret)
> +			ret = -EFAULT;
>  		goto cleanup_exit;
>  	}
> 
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson