linuxppc-dev.lists.ozlabs.org archive mirror
From: David Gibson <david@gibson.dropbear.id.au>
To: Alexey Kardashevskiy <aik@ozlabs.ru>
Cc: linuxppc-dev@lists.ozlabs.org
Subject: Re: [PATCH kernel v2] powerpc/powernv/ioda: Fix locked_vm counting for memory used by IOMMU tables
Date: Wed, 13 Feb 2019 15:48:35 +1100	[thread overview]
Message-ID: <20190213044835.GJ1884@umbus.fritz.box> (raw)
In-Reply-To: <20190213033818.51452-1-aik@ozlabs.ru>


On Wed, Feb 13, 2019 at 02:38:18PM +1100, Alexey Kardashevskiy wrote:
> We store 2 multilevel tables in iommu_table - one for the hardware and
> one with the corresponding userspace addresses. Before allocating
> the tables, the iommu_table_group_ops::get_table_size() hook returns
> the combined size of the two, and the VFIO SPAPR TCE IOMMU driver adjusts
> the locked_vm counter correctly. When the table is actually allocated,
> the amount of allocated memory is stored in iommu_table::it_allocated_size
> and used to decrement the locked_vm counter when we release the memory
> used by the table; .get_table_size() and .create_table() calculate it
> independently but the result is expected to be the same.
> 
> However the allocator does not add the userspace table size to
> .it_allocated_size, so when we destroy the table because of a VFIO PCI
> unplug (i.e. the VFIO container is gone but the userspace keeps running),
> we decrement locked_vm by just half of the memory we are releasing.
> 
> To make things worse, since we enabled on-demain allocation of

s/demain/demand/

> indirect levels, it_allocated_size contains only the amount of memory
> actually allocated at table creation time, which can be just a fraction.
> This is not a problem when incrementing locked_vm (as the
> get_table_size() value is used) but it is when decrementing.
> 
> As a result, we leak locked_vm and may not be able to allocate more
> IOMMU tables after a few iterations of hotplug/unplug.
> 
> This sets it_allocated_size in the pnv_pci_ioda2_ops::create_table()
> hook to what pnv_pci_ioda2_get_table_size() returns so from now on
> we have a single place which calculates the maximum memory a table can
> occupy. The original meaning of it_allocated_size is somewhat lost now
> though.
> 
> We keep it_allocated_size here, and we do not call get_table_size()
> from vfio_iommu_spapr_tce.c when decrementing locked_vm, as we may have
> multiple IOMMU groups per container and, even though they are all
> supposed to have the same get_table_size() implementation, there is
> a small chance of failure or confusion.
> 
> Fixes: 090bad39b ("powerpc/powernv: Add indirect levels to it_userspace")
> Fixes: a68bd1267 ("powerpc/powernv/ioda: Allocate indirect TCE levels on demand")
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>

Apart from the typo above,

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

> ---
> Changes:
> v2:
> * this is reworked "[PATCH kernel] powerpc/powernv/ioda: Store correct amount of memory used for table"
> 
> ---
>  arch/powerpc/platforms/powernv/pci-ioda-tce.c | 1 -
>  arch/powerpc/platforms/powernv/pci-ioda.c     | 7 ++++++-
>  2 files changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/platforms/powernv/pci-ioda-tce.c b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
> index 697449a..e28f03e 100644
> --- a/arch/powerpc/platforms/powernv/pci-ioda-tce.c
> +++ b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
> @@ -313,7 +313,6 @@ long pnv_pci_ioda2_table_alloc_pages(int nid, __u64 bus_offset,
>  			page_shift);
>  	tbl->it_level_size = 1ULL << (level_shift - 3);
>  	tbl->it_indirect_levels = levels - 1;
> -	tbl->it_allocated_size = total_allocated;
>  	tbl->it_userspace = uas;
>  	tbl->it_nid = nid;
>  
> diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
> index 7db3119..d415739 100644
> --- a/arch/powerpc/platforms/powernv/pci-ioda.c
> +++ b/arch/powerpc/platforms/powernv/pci-ioda.c
> @@ -2592,8 +2592,13 @@ static long pnv_pci_ioda2_create_table_userspace(
>  		int num, __u32 page_shift, __u64 window_size, __u32 levels,
>  		struct iommu_table **ptbl)
>  {
> -	return pnv_pci_ioda2_create_table(table_group,
> +	long ret = pnv_pci_ioda2_create_table(table_group,
>  			num, page_shift, window_size, levels, true, ptbl);
> +
> +	if (!ret)
> +		(*ptbl)->it_allocated_size = pnv_pci_ioda2_get_table_size(
> +				page_shift, window_size, levels);
> +	return ret;
>  }
>  
>  static void pnv_ioda2_take_ownership(struct iommu_table_group *table_group)

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



Thread overview: 3+ messages
2019-02-13  3:38 [PATCH kernel v2] powerpc/powernv/ioda: Fix locked_vm counting for memory used by IOMMU tables Alexey Kardashevskiy
2019-02-13  4:48 ` David Gibson [this message]
2019-02-28  9:21 ` [kernel, " Michael Ellerman
