From: Frederic Barrat <fbarrat@linux.ibm.com>
To: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>,
	Frederic Barrat <fbarrat@linux.vnet.ibm.com>,
	linuxppc-dev@lists.ozlabs.org, mpe@ellerman.id.au
Cc: ldufour@linux.vnet.ibm.com, clombard@linux.vnet.ibm.com,
	andrew.donnellan@au1.ibm.com, vaibhav@linux.vnet.ibm.com
Subject: Re: [PATCH] cxl: Fix possible deadlock when processing page faults from cxllib
Date: Wed, 4 Apr 2018 12:58:59 +0200	[thread overview]
Message-ID: <c0b7931a-4bae-f8d0-0da5-04cf2a18f8c5@linux.ibm.com> (raw)
In-Reply-To: <8e099a01-ba11-d26f-07ab-bf0fa6e387c1@linux.ibm.com>



On 03/04/2018 at 17:31, Aneesh Kumar K.V wrote:
> On 04/03/2018 08:10 PM, Aneesh Kumar K.V wrote:
>> On 04/03/2018 03:13 PM, Frederic Barrat wrote:
>>> cxllib_handle_fault() is called by an external driver when it needs to
>>> have the host process page faults for a buffer which may cover several
>>> pages. Currently the function holds the mm->mmap_sem semaphore with
>>> read access while iterating over the buffer, since it could span
>>> several VMAs. When calling a lower-level function to handle the page
>>> fault for a single page, the semaphore is accessed again in read
>>> mode. That is wrong and can lead to deadlocks if a writer tries to
>>> sneak in while a buffer of several pages is being processed.
>>>
>>> The fix is to release the semaphore once cxllib_handle_fault() has got the
>>> information it needs from the current vma. The address space/VMAs
>>> could evolve while we iterate over the full buffer, but in the
>>> unlikely case where we miss a page, the driver will raise a new page
>>> fault when retrying.
>>>
>>> Fixes: 3ced8d730063 ("cxl: Export library to support IBM XSL")
>>> Cc: stable@vger.kernel.org # 4.13+
>>> Signed-off-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
>>> ---
>>>   drivers/misc/cxl/cxllib.c | 85 ++++++++++++++++++++++++++++++-----------------
>>>   1 file changed, 55 insertions(+), 30 deletions(-)
>>>
>>> diff --git a/drivers/misc/cxl/cxllib.c b/drivers/misc/cxl/cxllib.c
>>> index 30ccba436b3b..55cd35d1a9cc 100644
>>> --- a/drivers/misc/cxl/cxllib.c
>>> +++ b/drivers/misc/cxl/cxllib.c
>>> @@ -208,49 +208,74 @@ int cxllib_get_PE_attributes(struct task_struct *task,
>>>   }
>>>   EXPORT_SYMBOL_GPL(cxllib_get_PE_attributes);
>>> -int cxllib_handle_fault(struct mm_struct *mm, u64 addr, u64 size, u64 flags)
>>> +static int get_vma_info(struct mm_struct *mm, u64 addr,
>>> +            u64 *vma_start, u64 *vma_end,
>>> +            unsigned long *page_size)
>>>   {
>>> -    int rc;
>>> -    u64 dar;
>>>       struct vm_area_struct *vma = NULL;
>>> -    unsigned long page_size;
>>> -
>>> -    if (mm == NULL)
>>> -        return -EFAULT;
>>> +    int rc = 0;
>>>       down_read(&mm->mmap_sem);
>>>       vma = find_vma(mm, addr);
>>>       if (!vma) {
>>> -        pr_err("Can't find vma for addr %016llx\n", addr);
>>>           rc = -EFAULT;
>>>           goto out;
>>>       }
>>> -    /* get the size of the pages allocated */
>>> -    page_size = vma_kernel_pagesize(vma);
>>> -
>>> -    for (dar = (addr & ~(page_size - 1)); dar < (addr + size); dar += page_size) {
>>> -        if (dar < vma->vm_start || dar >= vma->vm_end) {
>>> -            vma = find_vma(mm, addr);
>>> -            if (!vma) {
>>> -                pr_err("Can't find vma for addr %016llx\n", addr);
>>> -                rc = -EFAULT;
>>> -                goto out;
>>> -            }
>>> -            /* get the size of the pages allocated */
>>> -            page_size = vma_kernel_pagesize(vma);
>>> +    *page_size = vma_kernel_pagesize(vma);
>>> +    *vma_start = vma->vm_start;
>>> +    *vma_end = vma->vm_end;
>>> +out:
>>> +    up_read(&mm->mmap_sem);
>>> +    return rc;
>>> +}
>>> +
>>> +int cxllib_handle_fault(struct mm_struct *mm, u64 addr, u64 size, u64 flags)
>>> +{
>>> +    int rc;
>>> +    u64 dar, vma_start, vma_end;
>>> +    unsigned long page_size;
>>> +
>>> +    if (mm == NULL)
>>> +        return -EFAULT;
>>> +
>>> +    /*
>>> +     * The buffer we have to process can extend over several pages
>>> +     * and may also cover several VMAs.
>>> +     * We iterate over all the pages. The page size could vary
>>> +     * between VMAs.
>>> +     */
>>> +    rc = get_vma_info(mm, addr, &vma_start, &vma_end, &page_size);
>>> +    if (rc)
>>> +        return rc;
>>> +
>>> +    for (dar = (addr & ~(page_size - 1)); dar < (addr + size);
>>> +         dar += page_size) {
>>> +        if (dar < vma_start || dar >= vma_end) {
>>
>>
>> IIUC, we are fetching the vma to get just the page_size with which it 
>> is mapped? Can't we iterate with PAGE_SIZE? Considering hugetlb page 
>> size will be larger than PAGE_SIZE, we might call into 
>> cxl_handle_mm_fault multiple times for a hugetlb page. Does that cause 
>> any issue? Also can cxl be used with hugetlb mappings?
>>
> Can you also try to use a helper like the one below? That will clarify the
> need for find_vma there.
> 
> static int __cxllib_handle_fault(struct mm_struct *mm, unsigned long start, unsigned long end,
>                                  unsigned long mapping_psize, u64 flags)
> {
>      int rc;
>      unsigned long dar;
> 
>      for (dar = start; dar < end; dar += mapping_psize) {
>          rc = cxl_handle_mm_fault(mm, flags, dar);
>          if (rc) {
>              rc = -EFAULT;
>              goto out;
>          }
>      }
>      rc = 0;
> out:
>      return rc;
> }


I'm struggling to make good use of it. I see your point: it makes
touching all the pages of a given VMA easy (same page size).
But I'm given a buffer which can span several VMAs, with a potentially
varying page size. So I'd now need an outer loop iterating over all the
VMAs and calling the helper for the relevant subset of each VMA. IMHO,
that doesn't make the code any easier, i.e. what is gained in the helper
is lost in the outer VMA loop.
The current code, with a single loop and a varying increment based on
the page size, doesn't look that bad to me.
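
For reference, stripped of the diff markers, the patched function boils down
roughly to the following (sketch only, comments and some details trimmed;
get_vma_info() is the small helper introduced by the patch above):

	int cxllib_handle_fault(struct mm_struct *mm, u64 addr, u64 size, u64 flags)
	{
		int rc;
		u64 dar, vma_start, vma_end;
		unsigned long page_size;

		if (mm == NULL)
			return -EFAULT;

		/* Boundaries and page size of the first VMA; mmap_sem is dropped on return */
		rc = get_vma_info(mm, addr, &vma_start, &vma_end, &page_size);
		if (rc)
			return rc;

		for (dar = (addr & ~(page_size - 1)); dar < (addr + size);
		     dar += page_size) {
			if (dar < vma_start || dar >= vma_end) {
				/*
				 * We've left the known VMA (or it changed under us,
				 * since mmap_sem is not held while iterating):
				 * refresh the boundaries and page size.
				 */
				rc = get_vma_info(mm, dar, &vma_start, &vma_end,
						  &page_size);
				if (rc)
					return rc;
			}

			/* cxl_handle_mm_fault() takes mmap_sem in read mode itself */
			rc = cxl_handle_mm_fault(mm, flags, dar);
			if (rc)
				return -EFAULT;
		}
		return 0;
	}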

   Fred


Thread overview: 7+ messages
2018-04-03  9:43 [PATCH] cxl: Fix possible deadlock when processing page faults from cxllib Frederic Barrat
2018-04-03  9:56 ` Laurent Dufour
2018-04-03 11:43 ` Vaibhav Jain
2018-04-03 14:40 ` Aneesh Kumar K.V
2018-04-03 15:31   ` Aneesh Kumar K.V
2018-04-04 10:58     ` Frederic Barrat [this message]
2018-04-03 16:40   ` Frederic Barrat
