From: Frederic Barrat <fbarrat@linux.ibm.com>
To: Alexey Kardashevskiy <aik@ozlabs.ru>, linuxppc-dev@lists.ozlabs.org
Cc: kvm-ppc@vger.kernel.org
Subject: Re: [PATCH kernel] powerpc/iommu: Annotate nested lock for lockdep
Date: Thu, 18 Feb 2021 13:59:43 +0100
Message-ID: <49b1f5cb-107c-296f-c339-13e627a73d6d@linux.ibm.com>
In-Reply-To: <20210216032000.21642-1-aik@ozlabs.ru>



On 16/02/2021 04:20, Alexey Kardashevskiy wrote:
> The IOMMU table is divided into pools for concurrent mappings and each
> pool has a separate spinlock. When taking the ownership of an IOMMU group
> to pass through a device to a VM, we lock these spinlocks which triggers
> a false positive warning in lockdep (below).
> 
> This fixes it by annotating the large pool's spinlock as a nest lock.
> 
> ===
> WARNING: possible recursive locking detected
> 5.11.0-le_syzkaller_a+fstn1 #100 Not tainted
> --------------------------------------------
> qemu-system-ppc/4129 is trying to acquire lock:
> c0000000119bddb0 (&(p->lock)/1){....}-{2:2}, at: iommu_take_ownership+0xac/0x1e0
> 
> but task is already holding lock:
> c0000000119bdd30 (&(p->lock)/1){....}-{2:2}, at: iommu_take_ownership+0xac/0x1e0
> 
> other info that might help us debug this:
>   Possible unsafe locking scenario:
> 
>         CPU0
>         ----
>    lock(&(p->lock)/1);
>    lock(&(p->lock)/1);
> ===
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> ---
>   arch/powerpc/kernel/iommu.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
> index 557a09dd5b2f..2ee642a6731a 100644
> --- a/arch/powerpc/kernel/iommu.c
> +++ b/arch/powerpc/kernel/iommu.c
> @@ -1089,7 +1089,7 @@ int iommu_take_ownership(struct iommu_table *tbl)
>   
>   	spin_lock_irqsave(&tbl->large_pool.lock, flags);
>   	for (i = 0; i < tbl->nr_pools; i++)
> -		spin_lock(&tbl->pools[i].lock);
> +		spin_lock_nest_lock(&tbl->pools[i].lock, &tbl->large_pool.lock);


We have the same pattern in iommu_release_ownership(), so it should have 
the same problem there.
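
For reference, a minimal sketch of the same annotation applied there, 
assuming iommu_release_ownership() takes the pool locks under the large 
pool lock in the same way (untested, just to illustrate the idea):

	spin_lock_irqsave(&tbl->large_pool.lock, flags);
	for (i = 0; i < tbl->nr_pools; i++)
		spin_lock_nest_lock(&tbl->pools[i].lock,
				    &tbl->large_pool.lock);

	/* ... give the table back to the kernel ... */

	for (i = 0; i < tbl->nr_pools; i++)
		spin_unlock(&tbl->pools[i].lock);
	spin_unlock_irqrestore(&tbl->large_pool.lock, flags);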

But as I understand it, we're hacking our way around lockdep here, since 
conceptually those locks are independent. I was wondering why annotating 
only against the large pool lock seems to fix it. That loop can take 
several locks (up to 4 with the current config). However, if the DMA 
window is less than 1GB, we would only have one pool, so it would make 
sense for lockdep to stop complaining. Is that what happened? In that 
case, this patch doesn't really fix it. Or I'm missing something :-)

   Fred



>   	iommu_table_release_pages(tbl);
>   
> 

Thread overview: 5+ messages
2021-02-16  3:20 [PATCH kernel] powerpc/iommu: Annotate nested lock for lockdep Alexey Kardashevskiy
2021-02-18 12:59 ` Frederic Barrat [this message]
2021-02-20  3:49   ` Alexey Kardashevskiy
2021-02-22  9:35     ` Alexey Kardashevskiy
2021-02-23  5:13   ` Alexey Kardashevskiy
