From: Frederic Barrat <fbarrat@linux.ibm.com>
To: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>,
	Frederic Barrat <fbarrat@linux.vnet.ibm.com>,
	linuxppc-dev@lists.ozlabs.org, mpe@ellerman.id.au
Cc: ldufour@linux.vnet.ibm.com, clombard@linux.vnet.ibm.com,
	andrew.donnellan@au1.ibm.com, vaibhav@linux.vnet.ibm.com
Subject: Re: [PATCH] cxl: Fix possible deadlock when processing page faults from cxllib
Date: Tue, 3 Apr 2018 18:40:15 +0200
Message-ID: <4893c7a6-b8d8-14e9-21e9-e9a059477613@linux.ibm.com>
In-Reply-To: <606e7509-5c37-b38e-9b6c-a60a5694c652@linux.ibm.com>



On 03/04/2018 at 16:40, Aneesh Kumar K.V wrote:
> On 04/03/2018 03:13 PM, Frederic Barrat wrote:
>> cxllib_handle_fault() is called by an external driver when it needs the
>> host to process page faults for a buffer which may cover several
>> pages. Currently the function holds the mm->mmap_sem semaphore with
>> read access while iterating over the buffer, since the buffer could
>> span several VMAs. When calling a lower-level function to handle the
>> page fault for a single page, the semaphore is acquired again in read
>> mode. That is wrong and can lead to deadlocks if a writer tries to
>> sneak in while a buffer of several pages is being processed.
>>
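To make the deadlock concrete, here is a minimal sketch of the locking
pattern described above; the names and loop shape are illustrative only,
not the exact code being removed by the patch:

    down_read(&mm->mmap_sem);       /* held across the whole buffer */
    for (dar = start; dar < start + size; dar += page_size) {
            /*
             * If another task calls down_write(&mm->mmap_sem) at this
             * point, it queues behind our read lock and blocks any new
             * readers (readers don't overtake a queued writer).
             *
             * cxl_handle_mm_fault() then takes mmap_sem for reading
             * again, so it blocks behind that writer, while the writer
             * waits for our outer read lock to be released: deadlock.
             */
            cxl_handle_mm_fault(mm, flags, dar);
    }
    up_read(&mm->mmap_sem);
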
>> The fix is to release the semaphore once cxllib_handle_fault() has the
>> information it needs from the current VMA. The address space/VMAs may
>> change while we iterate over the full buffer, but in the unlikely case
>> where we miss a page, the driver will simply raise a new page fault
>> when retrying the access.
>>
>> Fixes: 3ced8d730063 ("cxl: Export library to support IBM XSL")
>> Cc: stable@vger.kernel.org # 4.13+
>> Signed-off-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
>> ---
>>   drivers/misc/cxl/cxllib.c | 85 ++++++++++++++++++++++++++++++-----------------
>>   1 file changed, 55 insertions(+), 30 deletions(-)
>>
>> diff --git a/drivers/misc/cxl/cxllib.c b/drivers/misc/cxl/cxllib.c
>> index 30ccba436b3b..55cd35d1a9cc 100644
>> --- a/drivers/misc/cxl/cxllib.c
>> +++ b/drivers/misc/cxl/cxllib.c
>> @@ -208,49 +208,74 @@ int cxllib_get_PE_attributes(struct task_struct *task,
>>   }
>>   EXPORT_SYMBOL_GPL(cxllib_get_PE_attributes);
>> -int cxllib_handle_fault(struct mm_struct *mm, u64 addr, u64 size, u64 flags)
>> +static int get_vma_info(struct mm_struct *mm, u64 addr,
>> +            u64 *vma_start, u64 *vma_end,
>> +            unsigned long *page_size)
>>   {
>> -    int rc;
>> -    u64 dar;
>>       struct vm_area_struct *vma = NULL;
>> -    unsigned long page_size;
>> -
>> -    if (mm == NULL)
>> -        return -EFAULT;
>> +    int rc = 0;
>>       down_read(&mm->mmap_sem);
>>       vma = find_vma(mm, addr);
>>       if (!vma) {
>> -        pr_err("Can't find vma for addr %016llx\n", addr);
>>           rc = -EFAULT;
>>           goto out;
>>       }
>> -    /* get the size of the pages allocated */
>> -    page_size = vma_kernel_pagesize(vma);
>> -
>> -    for (dar = (addr & ~(page_size - 1)); dar < (addr + size); dar += page_size) {
>> -        if (dar < vma->vm_start || dar >= vma->vm_end) {
>> -            vma = find_vma(mm, addr);
>> -            if (!vma) {
>> -                pr_err("Can't find vma for addr %016llx\n", addr);
>> -                rc = -EFAULT;
>> -                goto out;
>> -            }
>> -            /* get the size of the pages allocated */
>> -            page_size = vma_kernel_pagesize(vma);
>> +    *page_size = vma_kernel_pagesize(vma);
>> +    *vma_start = vma->vm_start;
>> +    *vma_end = vma->vm_end;
>> +out:
>> +    up_read(&mm->mmap_sem);
>> +    return rc;
>> +}
>> +
>> +int cxllib_handle_fault(struct mm_struct *mm, u64 addr, u64 size, u64 flags)
>> +{
>> +    int rc;
>> +    u64 dar, vma_start, vma_end;
>> +    unsigned long page_size;
>> +
>> +    if (mm == NULL)
>> +        return -EFAULT;
>> +
>> +    /*
>> +     * The buffer we have to process can extend over several pages
>> +     * and may also cover several VMAs.
>> +     * We iterate over all the pages. The page size could vary
>> +     * between VMAs.
>> +     */
>> +    rc = get_vma_info(mm, addr, &vma_start, &vma_end, &page_size);
>> +    if (rc)
>> +        return rc;
>> +
>> +    for (dar = (addr & ~(page_size - 1)); dar < (addr + size);
>> +         dar += page_size) {
>> +        if (dar < vma_start || dar >= vma_end) {
> 
> 
> IIUC, we are fetching the vma to get just the page_size with which it is 
> mapped? Can't we iterate with PAGE_SIZE? Considering hugetlb page size 
> will be larger than PAGE_SIZE, we might call into cxl_handle_mm_fault 
> multiple times for a hugetlb page. Does that cause any issue? Also can 
> cxl be used with hugetlb mappings?

I discussed it with Aneesh, but for the record:
- huge pages could be used, cxl has no control over it.
- incrementing the loop by PAGE_SIZE when a VMA is backed by huge pages
would be a waste, as only the first call to cxl_handle_mm_fault() would be
useful (see the sketch below).
- having to account for several VMAs, and potentially several page sizes,
makes it more complicated. An idea is to check with Mellanox whether we can
reduce the scope, in case the caller can rule out some cases. It's too late
for CORAL, but something we can look into for the future/upstream.
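
As a rough illustration of the second point, with sizes that are assumed
ppc64 values (64K base pages, 16M huge pages) rather than anything taken
from the patch:

    page_size = vma_kernel_pagesize(vma);  /* 16M for the hugetlb VMA      */
    /* buffer covering one 16M huge page:                                  */
    /* stepping by page_size: 16M / 16M = 1 call to cxl_handle_mm_fault()  */
    /* stepping by PAGE_SIZE: 16M / 64K = 256 calls, 255 of which fault on
     * a translation that the very first call already installed            */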

   Fred



>> +            /*
>> +             * We don't hold the mm->mmap_sem semaphore
>> +             * while iterating, since the semaphore is
>> +             * required by one of the lower-level page
>> +             * fault processing functions and it could
>> +             * create a deadlock.
>> +             *
>> +             * It means the VMAs can be altered between 2
>> +             * loop iterations and we could theoretically
>> +             * miss a page (however unlikely). But that's
>> +             * not really a problem, as the driver will
>> +             * retry access, get another page fault on the
>> +             * missing page and call us again.
>> +             */
>> +            rc = get_vma_info(mm, dar, &vma_start, &vma_end,
>> +                    &page_size);
>> +            if (rc)
>> +                return rc;
>>           }
>>           rc = cxl_handle_mm_fault(mm, flags, dar);
>> -        if (rc) {
>> -            pr_err("cxl_handle_mm_fault failed %d", rc);
>> -            rc = -EFAULT;
>> -            goto out;
>> -        }
>> +        if (rc)
>> +            return -EFAULT;
>>       }
>> -    rc = 0;
>> -out:
>> -    up_read(&mm->mmap_sem);
>> -    return rc;
>> +    return 0;
>>   }
>>   EXPORT_SYMBOL_GPL(cxllib_handle_fault);
>>
> 
> -aneesh
> 

