From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1758473AbbCER6a (ORCPT );
	Thu, 5 Mar 2015 12:58:30 -0500
Received: from mx2.parallels.com ([199.115.105.18]:53360 "EHLO mx2.parallels.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751786AbbCER61 (ORCPT );
	Thu, 5 Mar 2015 12:58:27 -0500
Message-ID: <54F89927.2090409@parallels.com>
Date: Thu, 5 Mar 2015 20:57:59 +0300
From: Pavel Emelyanov
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0
MIME-Version: 1.0
To: Andrea Arcangeli, Android Kernel Team
CC: "Kirill A. Shutemov", Sanidhya Kashyap, Linus Torvalds,
 Andres Lagar-Cavilla, Dave Hansen, Paolo Bonzini, Rik van Riel,
 Mel Gorman, Andy Lutomirski, Andrew Morton, Sasha Levin, Hugh Dickins,
 Peter Feiner, "Dr. David Alan Gilbert", Christopher Covington,
 Johannes Weiner, Robert Love, Dmitry Adamushko, Neil Brown, Mike Hommey,
 Taras Glek, Jan Kara, KOSAKI Motohiro, Michel Lespinasse, Minchan Kim,
 Keith Packard, "Huangpeng (Peter)", Anthony Liguori, Stefan Hajnoczi,
 Wenchao Xia, Andrew Jones, Juan Quintela
Subject: Re: [PATCH 10/21] userfaultfd: add new syscall to provide memory externalization
References: <1425575884-2574-1-git-send-email-aarcange@redhat.com>
 <1425575884-2574-11-git-send-email-aarcange@redhat.com>
In-Reply-To: <1425575884-2574-11-git-send-email-aarcange@redhat.com>
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit
X-Originating-IP: [89.169.95.100]
X-ClientProxiedBy: US-EXCH.sw.swsoft.com (10.255.249.47) To US-EXCH.sw.swsoft.com (10.255.249.47)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

> +int handle_userfault(struct vm_area_struct *vma, unsigned long address,
> +		     unsigned int flags, unsigned long reason)
> +{
> +	struct mm_struct *mm = vma->vm_mm;
> +	struct userfaultfd_ctx *ctx;
> +	struct userfaultfd_wait_queue uwq;
> +
> +	BUG_ON(!rwsem_is_locked(&mm->mmap_sem));
> +
> +	ctx = vma->vm_userfaultfd_ctx.ctx;
> +	if (!ctx)
> +		return VM_FAULT_SIGBUS;
> +
> +	BUG_ON(ctx->mm != mm);
> +
> +	VM_BUG_ON(reason & ~(VM_UFFD_MISSING|VM_UFFD_WP));
> +	VM_BUG_ON(!(reason & VM_UFFD_MISSING) ^ !!(reason & VM_UFFD_WP));
> +
> +	/*
> +	 * If it's already released don't get it. This avoids to loop
> +	 * in __get_user_pages if userfaultfd_release waits on the
> +	 * caller of handle_userfault to release the mmap_sem.
> +	 */
> +	if (unlikely(ACCESS_ONCE(ctx->released)))
> +		return VM_FAULT_SIGBUS;
> +
> +	/* check that we can return VM_FAULT_RETRY */
> +	if (unlikely(!(flags & FAULT_FLAG_ALLOW_RETRY))) {
> +		/*
> +		 * Validate the invariant that nowait must allow retry
> +		 * to be sure not to return SIGBUS erroneously on
> +		 * nowait invocations.
> +		 */
> +		BUG_ON(flags & FAULT_FLAG_RETRY_NOWAIT);
> +#ifdef CONFIG_DEBUG_VM
> +		if (printk_ratelimit()) {
> +			printk(KERN_WARNING
> +			       "FAULT_FLAG_ALLOW_RETRY missing %x\n", flags);
> +			dump_stack();
> +		}
> +#endif
> +		return VM_FAULT_SIGBUS;
> +	}
> +
> +	/*
> +	 * Handle nowait, not much to do other than tell it to retry
> +	 * and wait.
> +	 */
> +	if (flags & FAULT_FLAG_RETRY_NOWAIT)
> +		return VM_FAULT_RETRY;
> +
> +	/* take the reference before dropping the mmap_sem */
> +	userfaultfd_ctx_get(ctx);
> +
> +	/* be gentle and immediately relinquish the mmap_sem */
> +	up_read(&mm->mmap_sem);
> +
> +	init_waitqueue_func_entry(&uwq.wq, userfaultfd_wake_function);
> +	uwq.wq.private = current;
> +	uwq.address = userfault_address(address, flags, reason);

Since we report only the virtual address of the fault, this makes life difficult for a task monitoring the address space of some other task. Like this: let's assume a task creates a userfaultfd, activates it, registers several VMAs with it, and then sends the ufd descriptor to another task.
If the first task later remaps those VMAs and starts touching pages, the monitor will receive fault addresses from which it cannot work out which VMA each request came from.

Thanks,
Pavel