From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Peter Xu, Andrew Morton, Linus Torvalds, Gerald Schaefer,
	Andrea Arcangeli, Will Deacon, Michael Ellerman
Subject: [PATCH 26/26] mm/gup: Remove task_struct pointer for all gup code
Date: Fri, 26 Jun 2020 18:36:48 -0400
Message-Id: <20200626223648.200249-1-peterx@redhat.com>
In-Reply-To: <20200626223130.199227-1-peterx@redhat.com>
References: <20200626223130.199227-1-peterx@redhat.com>

After the cleanup of page fault accounting, gup does not need to pass
task_struct around any more.  Remove that parameter in the whole gup stack.
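The shape of the change can be illustrated with a standalone sketch (plain userspace C with stand-in types, not kernel code; the names `mm_struct` and `fixup_user_fault_sketch` here are illustrative stand-ins): the task_struct argument, which every caller passed as either `current` or `NULL`, is dropped, and the remaining logic is keyed on the mm alone.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in type; the real kernel structure is far richer. */
struct mm_struct {
	int faults_resolved;
};

/*
 * New-style signature after the series: no task_struct argument.
 * With per-task fault accounting moved into the fault handler itself,
 * the mm alone identifies the target address space.
 */
static int fixup_user_fault_sketch(struct mm_struct *mm, unsigned long address,
				   unsigned int fault_flags, int *unlocked)
{
	if (mm == NULL)
		return -14;	/* -EFAULT-style error */
	(void)address;
	(void)fault_flags;
	if (unlocked)
		*unlocked = 0;	/* the lock was not dropped in this sketch */
	mm->faults_resolved++;	/* accounting keyed on the mm, not a task */
	return 0;
}
```

Call sites that previously had to pass a dummy `NULL` task (kvm async page faults, uprobes) simply drop that argument, which is the bulk of the diff below.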
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/arc/kernel/process.c                   |  2 +-
 arch/s390/kvm/interrupt.c                   |  2 +-
 arch/s390/kvm/kvm-s390.c                    |  2 +-
 arch/s390/kvm/priv.c                        |  8 +-
 arch/s390/mm/gmap.c                         |  4 +-
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c |  2 +-
 drivers/infiniband/core/umem_odp.c          |  2 +-
 drivers/vfio/vfio_iommu_type1.c             |  2 +-
 fs/exec.c                                   |  2 +-
 include/linux/mm.h                          |  9 +--
 kernel/events/uprobes.c                     |  6 +-
 kernel/futex.c                              |  2 +-
 mm/gup.c                                    | 90 +++++++++------------
 mm/memory.c                                 |  2 +-
 mm/process_vm_access.c                      |  2 +-
 security/tomoyo/domain.c                    |  2 +-
 virt/kvm/async_pf.c                         |  2 +-
 virt/kvm/kvm_main.c                         |  2 +-
 18 files changed, 63 insertions(+), 80 deletions(-)

diff --git a/arch/arc/kernel/process.c b/arch/arc/kernel/process.c
index 315528f04bc1..2aad79ffc7f8 100644
--- a/arch/arc/kernel/process.c
+++ b/arch/arc/kernel/process.c
@@ -91,7 +91,7 @@ SYSCALL_DEFINE3(arc_usr_cmpxchg, int *, uaddr, int, expected, int, new)
 		goto fail;
 
 	down_read(&current->mm->mmap_sem);
-	ret = fixup_user_fault(current, current->mm, (unsigned long) uaddr,
+	ret = fixup_user_fault(current->mm, (unsigned long) uaddr,
			       FAULT_FLAG_WRITE, NULL);
 	up_read(&current->mm->mmap_sem);
 
diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
index bfb481134994..7f4c5895aabd 100644
--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -2768,7 +2768,7 @@ static struct page *get_map_page(struct kvm *kvm, u64 uaddr)
 	struct page *page = NULL;
 
 	down_read(&kvm->mm->mmap_sem);
-	get_user_pages_remote(NULL, kvm->mm, uaddr, 1, FOLL_WRITE,
+	get_user_pages_remote(kvm->mm, uaddr, 1, FOLL_WRITE,
			      &page, NULL, NULL);
 	up_read(&kvm->mm->mmap_sem);
 	return page;
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index d05bb040fd42..12fa299986f8 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -1892,7 +1892,7 @@ static long kvm_s390_set_skeys(struct kvm *kvm, struct kvm_s390_skeys *args)
 
 		r = set_guest_storage_key(current->mm, hva, keys[i], 0);
 		if (r) {
-			r = fixup_user_fault(current, current->mm, hva,
+			r = fixup_user_fault(current->mm, hva,
					     FAULT_FLAG_WRITE, &unlocked);
 			if (r)
 				break;
diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
index 893893642415..45b7d5df72d7 100644
--- a/arch/s390/kvm/priv.c
+++ b/arch/s390/kvm/priv.c
@@ -274,7 +274,7 @@ static int handle_iske(struct kvm_vcpu *vcpu)
 	rc = get_guest_storage_key(current->mm, vmaddr, &key);
 
 	if (rc) {
-		rc = fixup_user_fault(current, current->mm, vmaddr,
+		rc = fixup_user_fault(current->mm, vmaddr,
				      FAULT_FLAG_WRITE, &unlocked);
 		if (!rc) {
 			up_read(&current->mm->mmap_sem);
@@ -320,7 +320,7 @@ static int handle_rrbe(struct kvm_vcpu *vcpu)
 	down_read(&current->mm->mmap_sem);
 	rc = reset_guest_reference_bit(current->mm, vmaddr);
 	if (rc < 0) {
-		rc = fixup_user_fault(current, current->mm, vmaddr,
+		rc = fixup_user_fault(current->mm, vmaddr,
				      FAULT_FLAG_WRITE, &unlocked);
 		if (!rc) {
 			up_read(&current->mm->mmap_sem);
@@ -391,7 +391,7 @@ static int handle_sske(struct kvm_vcpu *vcpu)
						m3 & SSKE_MC);
 
 	if (rc < 0) {
-		rc = fixup_user_fault(current, current->mm, vmaddr,
+		rc = fixup_user_fault(current->mm, vmaddr,
				      FAULT_FLAG_WRITE, &unlocked);
 		rc = !rc ? -EAGAIN : rc;
 	}
@@ -1095,7 +1095,7 @@ static int handle_pfmf(struct kvm_vcpu *vcpu)
 			rc = cond_set_guest_storage_key(current->mm, vmaddr,
							key, NULL, nq, mr, mc);
 			if (rc < 0) {
-				rc = fixup_user_fault(current, current->mm, vmaddr,
+				rc = fixup_user_fault(current->mm, vmaddr,
						      FAULT_FLAG_WRITE, &unlocked);
 				rc = !rc ? -EAGAIN : rc;
 			}
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 1a95d8809cc3..0faf4f5f3fd4 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -649,7 +649,7 @@ int gmap_fault(struct gmap *gmap, unsigned long gaddr,
 		rc = vmaddr;
 		goto out_up;
 	}
-	if (fixup_user_fault(current, gmap->mm, vmaddr, fault_flags,
+	if (fixup_user_fault(gmap->mm, vmaddr, fault_flags,
			     &unlocked)) {
 		rc = -EFAULT;
 		goto out_up;
@@ -879,7 +879,7 @@ static int gmap_pte_op_fixup(struct gmap *gmap, unsigned long gaddr,
 
 	BUG_ON(gmap_is_shadow(gmap));
 	fault_flags = (prot == PROT_WRITE) ? FAULT_FLAG_WRITE : 0;
-	if (fixup_user_fault(current, mm, vmaddr, fault_flags, &unlocked))
+	if (fixup_user_fault(mm, vmaddr, fault_flags, &unlocked))
 		return -EFAULT;
 	if (unlocked)
 		/* lost mmap_sem, caller has to retry __gmap_translate */
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 7ffd7afeb7a5..e87fa79c18d5 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -472,7 +472,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
					locked = 1;
				}
				ret = get_user_pages_remote
-					(work->task, mm,
+					(mm,
					 obj->userptr.ptr + pinned * PAGE_SIZE,
					 npages - pinned,
					 flags,
diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 3b1e627d9a8d..73b1a01b7339 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -437,7 +437,7 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt,
 		 * complex (and doesn't gain us much performance in most use
		 * cases).
		 */
-		npages = get_user_pages_remote(owning_process, owning_mm,
+		npages = get_user_pages_remote(owning_mm,
				user_virt, gup_num_pages,
				flags, local_page_list, NULL, NULL);
 		up_read(&owning_mm->mmap_sem);
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index cc1d64765ce7..d77b34d6ee19 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -329,7 +329,7 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
 		flags |= FOLL_WRITE;
 
 	down_read(&mm->mmap_sem);
-	ret = pin_user_pages_remote(NULL, mm, vaddr, 1, flags | FOLL_LONGTERM,
+	ret = pin_user_pages_remote(mm, vaddr, 1, flags | FOLL_LONGTERM,
				    page, NULL, NULL);
 	if (ret == 1) {
 		*pfn = page_to_pfn(page[0]);
diff --git a/fs/exec.c b/fs/exec.c
index 2c465119affc..f3f87911f3d0 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -213,7 +213,7 @@ static struct page *get_arg_page(struct linux_binprm *bprm, unsigned long pos,
 	 * We are doing an exec().  'current' is the process
	 * doing the exec and bprm->mm is the new process's mm.
	 */
-	ret = get_user_pages_remote(current, bprm->mm, pos, 1, gup_flags,
+	ret = get_user_pages_remote(bprm->mm, pos, 1, gup_flags,
			&page, NULL, NULL);
 	if (ret <= 0)
 		return NULL;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 46bee4044ac1..5e347ffb049f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1655,7 +1655,7 @@ int invalidate_inode_page(struct page *page);
 extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
				  unsigned long address, unsigned int flags,
				  struct pt_regs *regs);
-extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
+extern int fixup_user_fault(struct mm_struct *mm,
			    unsigned long address, unsigned int fault_flags,
			    bool *unlocked);
 void unmap_mapping_pages(struct address_space *mapping,
@@ -1671,8 +1671,7 @@ static inline vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
 	BUG();
 	return VM_FAULT_SIGBUS;
 }
-static inline int fixup_user_fault(struct task_struct *tsk,
-		struct mm_struct *mm, unsigned long address,
+static inline int fixup_user_fault(struct mm_struct *mm, unsigned long address,
		unsigned int fault_flags, bool *unlocked)
 {
 	/* should never happen if there's no MMU */
@@ -1698,11 +1697,11 @@ extern int access_remote_vm(struct mm_struct *mm, unsigned long addr,
 extern int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm,
			      unsigned long addr, void *buf, int len,
			      unsigned int gup_flags);
 
-long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+long get_user_pages_remote(struct mm_struct *mm,
			   unsigned long start, unsigned long nr_pages,
			   unsigned int gup_flags, struct page **pages,
			   struct vm_area_struct **vmas, int *locked);
-long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+long pin_user_pages_remote(struct mm_struct *mm,
			   unsigned long start, unsigned long nr_pages,
			   unsigned int gup_flags, struct page **pages,
			   struct vm_area_struct **vmas, int *locked);
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index ece7e13f6e4a..b7c9ad7e7d54 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -382,7 +382,7 @@ __update_ref_ctr(struct mm_struct *mm, unsigned long vaddr, short d)
 	if (!vaddr || !d)
 		return -EINVAL;
 
-	ret = get_user_pages_remote(NULL, mm, vaddr, 1,
+	ret = get_user_pages_remote(mm, vaddr, 1,
				    FOLL_WRITE, &page, &vma, NULL);
 	if (unlikely(ret <= 0)) {
 		/*
@@ -483,7 +483,7 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 	if (is_register)
 		gup_flags |= FOLL_SPLIT_PMD;
 	/* Read the page with vaddr into memory */
-	ret = get_user_pages_remote(NULL, mm, vaddr, 1, gup_flags,
+	ret = get_user_pages_remote(mm, vaddr, 1, gup_flags,
				    &old_page, &vma, NULL);
 	if (ret <= 0)
 		return ret;
@@ -2027,7 +2027,7 @@ static int is_trap_at_addr(struct mm_struct *mm, unsigned long vaddr)
 	 * but we treat this as a 'remote' access since it is
	 * essentially a kernel access to the memory.
	 */
-	result = get_user_pages_remote(NULL, mm, vaddr, 1, FOLL_FORCE, &page,
+	result = get_user_pages_remote(mm, vaddr, 1, FOLL_FORCE, &page,
				       NULL, NULL);
 	if (result < 0)
 		return result;
diff --git a/kernel/futex.c b/kernel/futex.c
index b59532862bc0..1466b4322491 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -696,7 +696,7 @@ static int fault_in_user_writeable(u32 __user *uaddr)
 	int ret;
 
 	down_read(&mm->mmap_sem);
-	ret = fixup_user_fault(current, mm, (unsigned long)uaddr,
+	ret = fixup_user_fault(mm, (unsigned long)uaddr,
			       FAULT_FLAG_WRITE, NULL);
 	up_read(&mm->mmap_sem);
 
diff --git a/mm/gup.c b/mm/gup.c
index 17b4d0c45a6b..b8eb02673c10 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -851,7 +851,7 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address,
  * does not include FOLL_NOWAIT, the mmap_sem may be released.  If it
 * is, *@locked will be set to 0 and -EBUSY returned.
 */
-static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
+static int faultin_page(struct vm_area_struct *vma,
		unsigned long address, unsigned int *flags, int *locked)
 {
 	unsigned int fault_flags = 0;
@@ -954,7 +954,6 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 
 /**
  * __get_user_pages() - pin user pages in memory
- * @tsk:	task_struct of target task
  * @mm:		mm_struct of target mm
  * @start:	starting user address
  * @nr_pages:	number of pages from start to pin
@@ -1012,7 +1011,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
  * instead of __get_user_pages. __get_user_pages should be used only if
 * you need some special @gup_flags.
 */
-static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
+static long __get_user_pages(struct mm_struct *mm,
		unsigned long start, unsigned long nr_pages,
		unsigned int gup_flags, struct page **pages,
		struct vm_area_struct **vmas, int *locked)
@@ -1088,8 +1087,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 
 		page = follow_page_mask(vma, start, foll_flags, &ctx);
 		if (!page) {
-			ret = faultin_page(tsk, vma, start, &foll_flags,
-					   locked);
+			ret = faultin_page(vma, start, &foll_flags, locked);
 			switch (ret) {
 			case 0:
 				goto retry;
@@ -1163,8 +1161,6 @@ static bool vma_permits_fault(struct vm_area_struct *vma,
 
 /*
  * fixup_user_fault() - manually resolve a user page fault
- * @tsk:	the task_struct to use for page fault accounting, or
- *		NULL if faults are not to be recorded.
  * @mm:		mm_struct of target mm
  * @address:	user address
  * @fault_flags:flags to pass down to handle_mm_fault()
@@ -1191,7 +1187,7 @@ static bool vma_permits_fault(struct vm_area_struct *vma,
  * This function will not return with an unlocked mmap_sem. So it has not the
 * same semantics wrt the @mm->mmap_sem as does filemap_fault().
 */
-int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
+int fixup_user_fault(struct mm_struct *mm,
		     unsigned long address, unsigned int fault_flags,
		     bool *unlocked)
 {
@@ -1236,8 +1232,7 @@ int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
 }
 EXPORT_SYMBOL_GPL(fixup_user_fault);
 
-static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
-						struct mm_struct *mm,
+static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
						unsigned long start,
						unsigned long nr_pages,
						struct page **pages,
@@ -1270,7 +1265,7 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
 	pages_done = 0;
 	lock_dropped = false;
 	for (;;) {
-		ret = __get_user_pages(tsk, mm, start, nr_pages, flags, pages,
+		ret = __get_user_pages(mm, start, nr_pages, flags, pages,
				       vmas, locked);
 		if (!locked)
 			/* VM_FAULT_RETRY couldn't trigger, bypass */
@@ -1330,7 +1325,7 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
 		}
 
 		*locked = 1;
-		ret = __get_user_pages(tsk, mm, start, 1, flags | FOLL_TRIED,
+		ret = __get_user_pages(mm, start, 1, flags | FOLL_TRIED,
				       pages, NULL, locked);
 		if (!*locked) {
 			/* Continue to retry until we succeeded */
@@ -1416,7 +1411,7 @@ long populate_vma_page_range(struct vm_area_struct *vma,
 	 * We made sure addr is within a VMA, so the following will
	 * not result in a stack expansion that recurses back here.
	 */
-	return __get_user_pages(current, mm, start, nr_pages, gup_flags,
+	return __get_user_pages(mm, start, nr_pages, gup_flags,
				NULL, NULL, locked);
 }
 
@@ -1500,7 +1495,7 @@ struct page *get_dump_page(unsigned long addr)
 	struct vm_area_struct *vma;
 	struct page *page;
 
-	if (__get_user_pages(current, current->mm, addr, 1,
+	if (__get_user_pages(current->mm, addr, 1,
			     FOLL_FORCE | FOLL_DUMP | FOLL_GET, &page, &vma,
			     NULL) < 1)
 		return NULL;
@@ -1509,8 +1504,7 @@ struct page *get_dump_page(unsigned long addr)
 }
 #endif /* CONFIG_ELF_CORE */
 #else /* CONFIG_MMU */
-static long __get_user_pages_locked(struct task_struct *tsk,
-		struct mm_struct *mm, unsigned long start,
+static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
		unsigned long nr_pages, struct page **pages,
		struct vm_area_struct **vmas, int *locked,
		unsigned int foll_flags)
@@ -1626,8 +1620,7 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
 	return __alloc_pages_node(nid, gfp_mask, 0);
 }
 
-static long check_and_migrate_cma_pages(struct task_struct *tsk,
-					struct mm_struct *mm,
+static long check_and_migrate_cma_pages(struct mm_struct *mm,
					unsigned long start,
					unsigned long nr_pages,
					struct page **pages,
@@ -1701,7 +1694,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 		 * again migrating any new CMA pages which we failed to isolate
		 * earlier.
		 */
-		ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
+		ret = __get_user_pages_locked(mm, start, nr_pages,
						   pages, vmas, NULL,
						   gup_flags);
 
@@ -1715,8 +1708,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 	return ret;
 }
 #else
-static long check_and_migrate_cma_pages(struct task_struct *tsk,
-					struct mm_struct *mm,
+static long check_and_migrate_cma_pages(struct mm_struct *mm,
					unsigned long start,
					unsigned long nr_pages,
					struct page **pages,
@@ -1731,8 +1723,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
  * __gup_longterm_locked() is a wrapper for __get_user_pages_locked which
 * allows us to process the FOLL_LONGTERM flag.
 */
-static long __gup_longterm_locked(struct task_struct *tsk,
-				  struct mm_struct *mm,
+static long __gup_longterm_locked(struct mm_struct *mm,
				  unsigned long start,
				  unsigned long nr_pages,
				  struct page **pages,
@@ -1757,7 +1748,7 @@ static long __gup_longterm_locked(struct task_struct *tsk,
 		flags = memalloc_nocma_save();
 	}
 
-	rc = __get_user_pages_locked(tsk, mm, start, nr_pages, pages,
+	rc = __get_user_pages_locked(mm, start, nr_pages, pages,
				     vmas_tmp, NULL, gup_flags);
 
 	if (gup_flags & FOLL_LONGTERM) {
@@ -1772,7 +1763,7 @@ static long __gup_longterm_locked(struct task_struct *tsk,
 			goto out;
 		}
 
-		rc = check_and_migrate_cma_pages(tsk, mm, start, rc, pages,
+		rc = check_and_migrate_cma_pages(mm, start, rc, pages,
						 vmas_tmp, gup_flags);
 	}
 
@@ -1782,22 +1773,20 @@ static long __gup_longterm_locked(struct task_struct *tsk,
 	return rc;
 }
 #else /* !CONFIG_FS_DAX && !CONFIG_CMA */
-static __always_inline long __gup_longterm_locked(struct task_struct *tsk,
-						  struct mm_struct *mm,
+static __always_inline long __gup_longterm_locked(struct mm_struct *mm,
						  unsigned long start,
						  unsigned long nr_pages,
						  struct page **pages,
						  struct vm_area_struct **vmas,
						  unsigned int flags)
 {
-	return __get_user_pages_locked(tsk, mm, start, nr_pages, pages, vmas,
+	return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
				       NULL, flags);
 }
 #endif /* CONFIG_FS_DAX || CONFIG_CMA */
 
 #ifdef CONFIG_MMU
-static long __get_user_pages_remote(struct task_struct *tsk,
-				    struct mm_struct *mm,
+static long __get_user_pages_remote(struct mm_struct *mm,
				    unsigned long start, unsigned long nr_pages,
				    unsigned int gup_flags, struct page **pages,
				    struct vm_area_struct **vmas, int *locked)
@@ -1816,20 +1805,18 @@ static long __get_user_pages_remote(struct task_struct *tsk,
 		 * This will check the vmas (even if our vmas arg is NULL)
		 * and return -ENOTSUPP if DAX isn't allowed in this case:
		 */
-		return __gup_longterm_locked(tsk, mm, start, nr_pages, pages,
+		return __gup_longterm_locked(mm, start, nr_pages, pages,
					     vmas, gup_flags | FOLL_TOUCH |
					     FOLL_REMOTE);
 	}
 
-	return __get_user_pages_locked(tsk, mm, start, nr_pages, pages, vmas,
+	return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
				       locked,
				       gup_flags | FOLL_TOUCH | FOLL_REMOTE);
 }
 
 /*
  * get_user_pages_remote() - pin user pages in memory
- * @tsk:	the task_struct to use for page fault accounting, or
- *		NULL if faults are not to be recorded.
  * @mm:		mm_struct of target mm
  * @start:	starting user address
  * @nr_pages:	number of pages from start to pin
@@ -1888,7 +1875,7 @@ static long __get_user_pages_remote(struct task_struct *tsk,
  * should use get_user_pages because it cannot pass
 * FAULT_FLAG_ALLOW_RETRY to handle_mm_fault.
 */
-long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+long get_user_pages_remote(struct mm_struct *mm,
		unsigned long start, unsigned long nr_pages,
		unsigned int gup_flags, struct page **pages,
		struct vm_area_struct **vmas, int *locked)
@@ -1900,13 +1887,13 @@ long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
 		return -EINVAL;
 
-	return __get_user_pages_remote(tsk, mm, start, nr_pages, gup_flags,
+	return __get_user_pages_remote(mm, start, nr_pages, gup_flags,
				       pages, vmas, locked);
 }
 EXPORT_SYMBOL(get_user_pages_remote);
 
 #else /* CONFIG_MMU */
-long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+long get_user_pages_remote(struct mm_struct *mm,
			   unsigned long start, unsigned long nr_pages,
			   unsigned int gup_flags, struct page **pages,
			   struct vm_area_struct **vmas, int *locked)
@@ -1914,8 +1901,7 @@ long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 	return 0;
 }
 
-static long __get_user_pages_remote(struct task_struct *tsk,
-				    struct mm_struct *mm,
+static long __get_user_pages_remote(struct mm_struct *mm,
				    unsigned long start, unsigned long nr_pages,
				    unsigned int gup_flags, struct page **pages,
				    struct vm_area_struct **vmas, int *locked)
@@ -1942,7 +1928,7 @@ long get_user_pages(unsigned long start, unsigned long nr_pages,
 	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
 		return -EINVAL;
 
-	return __gup_longterm_locked(current, current->mm, start, nr_pages,
+	return __gup_longterm_locked(current->mm, start, nr_pages,
				     pages, vmas, gup_flags | FOLL_TOUCH);
 }
 EXPORT_SYMBOL(get_user_pages);
@@ -1956,7 +1942,7 @@ EXPORT_SYMBOL(get_user_pages);
  *
 *      down_read(&mm->mmap_sem);
 *      do_something()
- *      get_user_pages(tsk, mm, ..., pages, NULL);
+ *      get_user_pages(mm, ..., pages, NULL);
 *      up_read(&mm->mmap_sem);
 *
 *  to:
@@ -1964,7 +1950,7 @@ EXPORT_SYMBOL(get_user_pages);
  *      int locked = 1;
 *      down_read(&mm->mmap_sem);
 *      do_something()
- *      get_user_pages_locked(tsk, mm, ..., pages, &locked);
+ *      get_user_pages_locked(mm, ..., pages, &locked);
 *      if (locked)
 *          up_read(&mm->mmap_sem);
 */
@@ -1981,7 +1967,7 @@ long get_user_pages_locked(unsigned long start, unsigned long nr_pages,
 	if (WARN_ON_ONCE(gup_flags & FOLL_LONGTERM))
 		return -EINVAL;
 
-	return __get_user_pages_locked(current, current->mm, start, nr_pages,
+	return __get_user_pages_locked(current->mm, start, nr_pages,
				       pages, NULL, locked,
				       gup_flags | FOLL_TOUCH);
 }
@@ -1991,12 +1977,12 @@ EXPORT_SYMBOL(get_user_pages_locked);
  * get_user_pages_unlocked() is suitable to replace the form:
 *
 *      down_read(&mm->mmap_sem);
- *      get_user_pages(tsk, mm, ..., pages, NULL);
+ *      get_user_pages(mm, ..., pages, NULL);
 *      up_read(&mm->mmap_sem);
 *
 *  with:
 *
- *      get_user_pages_unlocked(tsk, mm, ..., pages);
+ *      get_user_pages_unlocked(mm, ..., pages);
 *
 * It is functionally equivalent to get_user_pages_fast so
 * get_user_pages_fast should be used instead if specific gup_flags
@@ -2019,7 +2005,7 @@ long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 		return -EINVAL;
 
 	down_read(&mm->mmap_sem);
-	ret = __get_user_pages_locked(current, mm, start, nr_pages, pages, NULL,
+	ret = __get_user_pages_locked(mm, start, nr_pages, pages, NULL,
				      &locked, gup_flags | FOLL_TOUCH);
 	if (locked)
 		up_read(&mm->mmap_sem);
@@ -2720,7 +2706,7 @@ static int __gup_longterm_unlocked(unsigned long start, int nr_pages,
 	 */
 	if (gup_flags & FOLL_LONGTERM) {
 		down_read(&current->mm->mmap_sem);
-		ret = __gup_longterm_locked(current, current->mm,
+		ret = __gup_longterm_locked(current->mm,
					    start, nr_pages,
					    pages, NULL, gup_flags);
 		up_read(&current->mm->mmap_sem);
@@ -2850,10 +2836,8 @@ int pin_user_pages_fast(unsigned long start, int nr_pages,
 EXPORT_SYMBOL_GPL(pin_user_pages_fast);
 
 /**
- * pin_user_pages_remote() - pin pages of a remote process (task != current)
+ * pin_user_pages_remote() - pin pages of a remote process
  *
- * @tsk:	the task_struct to use for page fault accounting, or
- *		NULL if faults are not to be recorded.
  * @mm:		mm_struct of target mm
  * @start:	starting user address
  * @nr_pages:	number of pages from start to pin
@@ -2877,7 +2861,7 @@ EXPORT_SYMBOL_GPL(pin_user_pages_fast);
  * This is intended for Case 1 (DIO) in Documentation/vm/pin_user_pages.rst. It
 * is NOT intended for Case 2 (RDMA: long-term pins).
 */
-long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+long pin_user_pages_remote(struct mm_struct *mm,
			   unsigned long start, unsigned long nr_pages,
			   unsigned int gup_flags, struct page **pages,
			   struct vm_area_struct **vmas, int *locked)
@@ -2887,7 +2871,7 @@ long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 		return -EINVAL;
 
 	gup_flags |= FOLL_PIN;
-	return __get_user_pages_remote(tsk, mm, start, nr_pages, gup_flags,
+	return __get_user_pages_remote(mm, start, nr_pages, gup_flags,
				       pages, vmas, locked);
 }
 EXPORT_SYMBOL(pin_user_pages_remote);
@@ -2922,7 +2906,7 @@ long pin_user_pages(unsigned long start, unsigned long nr_pages,
 		return -EINVAL;
 
 	gup_flags |= FOLL_PIN;
-	return __gup_longterm_locked(current, current->mm, start, nr_pages,
+	return __gup_longterm_locked(current->mm, start, nr_pages,
				     pages, vmas, gup_flags);
 }
 EXPORT_SYMBOL(pin_user_pages);
diff --git a/mm/memory.c b/mm/memory.c
index 0b3c747cd2b3..65576e3b382f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4739,7 +4739,7 @@ int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm,
 		void *maddr;
 		struct page *page = NULL;
 
-		ret = get_user_pages_remote(tsk, mm, addr, 1,
+		ret = get_user_pages_remote(mm, addr, 1,
				gup_flags, &page, &vma, NULL);
 		if (ret <= 0) {
 #ifndef CONFIG_HAVE_IOREMAP_PROT
diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c
index 74e957e302fe..5523464d0ab5 100644
--- a/mm/process_vm_access.c
+++ b/mm/process_vm_access.c
@@ -105,7 +105,7 @@ static int process_vm_rw_single_vec(unsigned long addr,
 		 * current/current->mm
		 */
		down_read(&mm->mmap_sem);
-		pinned_pages = pin_user_pages_remote(task, mm, pa, pinned_pages,
+		pinned_pages = pin_user_pages_remote(mm, pa, pinned_pages,
						     flags, process_pages,
						     NULL, &locked);
 		if (locked)
diff --git a/security/tomoyo/domain.c b/security/tomoyo/domain.c
index 7869d6a9980b..afe5e68ede77 100644
--- a/security/tomoyo/domain.c
+++ b/security/tomoyo/domain.c
@@ -914,7 +914,7 @@ bool tomoyo_dump_page(struct linux_binprm *bprm, unsigned long pos,
 	 * (represented by bprm).  'current' is the process doing
	 * the execve().
	 */
-	if (get_user_pages_remote(current, bprm->mm, pos, 1,
+	if (get_user_pages_remote(bprm->mm, pos, 1,
				  FOLL_FORCE, &page, NULL, NULL) <= 0)
 		return false;
 #else
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 15e5b037f92d..73098e18baaf 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -60,7 +60,7 @@ static void async_pf_execute(struct work_struct *work)
 	 * access remotely.
	 */
	down_read(&mm->mmap_sem);
-	get_user_pages_remote(NULL, mm, addr, 1, FOLL_WRITE, NULL, NULL,
+	get_user_pages_remote(mm, addr, 1, FOLL_WRITE, NULL, NULL,
			      &locked);
 	if (locked)
 		up_read(&mm->mmap_sem);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 731c1e517716..3e1b2ec4ec96 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1829,7 +1829,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 		 * not call the fault handler, so do it here.
		 */
		bool unlocked = false;
-		r = fixup_user_fault(current, current->mm, addr,
+		r = fixup_user_fault(current->mm, addr,
				     (write_fault ? FAULT_FLAG_WRITE : 0),
				     &unlocked);
 		if (unlocked)
-- 
2.26.2