Date: Fri, 6 Sep 2024 10:00:26 +0100
Subject: Re: [PATCH v2 1/2] mm: Abstract THP allocation
To: Dev Jain <dev.jain@arm.com>, akpm@linux-foundation.org, david@redhat.com,
 willy@infradead.org, kirill.shutemov@linux.intel.com
Cc: anshuman.khandual@arm.com, catalin.marinas@arm.com, cl@gentwo.org,
 vbabka@suse.cz, mhocko@suse.com, apopple@nvidia.com, dave.hansen@linux.intel.com,
 will@kernel.org, baohua@kernel.org, jack@suse.cz, mark.rutland@arm.com,
 hughd@google.com, aneesh.kumar@kernel.org,
 yang@os.amperecomputing.com, peterx@redhat.com, ioworker0@gmail.com,
 jglisse@google.com, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
References: <20240904100923.290042-1-dev.jain@arm.com>
 <20240904100923.290042-2-dev.jain@arm.com>
 <336ce914-43dc-4613-a339-1a33f16f71ad@arm.com>
 <10aea3c3-42be-4f82-8961-75d5142a9653@arm.com>
 <60f1577f-314d-4ac9-85b3-068111594845@arm.com>
From: Ryan Roberts <ryan.roberts@arm.com>

On 06/09/2024 09:45, Dev Jain wrote:
>
> On 9/6/24 13:58, Ryan Roberts wrote:
>> On 06/09/2024 06:42, Dev Jain wrote:
>>> On 9/5/24 16:38, Ryan Roberts wrote:
>>>> On 04/09/2024 11:09, Dev Jain wrote:
>>>>> In preparation for the second patch, abstract away the THP allocation
>>>>> logic present in the create_huge_pmd() path, which corresponds to the
>>>>> faulting case when no page is present.
>>>>>
>>>>> There should be no functional change as a result of applying
>>>>> this patch.
>>>>>
>>>>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>>>>> ---
>>>>>    mm/huge_memory.c | 110 +++++++++++++++++++++++++++++------------------
>>>>>    1 file changed, 67 insertions(+), 43 deletions(-)
>>>>>
>>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>>> index 67c86a5d64a6..58125fbcc532 100644
>>>>> --- a/mm/huge_memory.c
>>>>> +++ b/mm/huge_memory.c
>>>>> @@ -943,47 +943,89 @@ unsigned long thp_get_unmapped_area(struct file *filp,
>>>>> unsigned long addr,
>>>>>    }
>>>>>    EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
>>>>>    -static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
>>>>> -            struct page *page, gfp_t gfp)
>>>>> +static vm_fault_t thp_fault_alloc(gfp_t gfp, int order, struct
>>>>> vm_area_struct *vma,
>>>> Is there a reason for specifying order as a parameter? Previously it was
>>>> hardcoded to HPAGE_PMD_ORDER. But now, thp_fault_alloc() and
>>>> __thp_fault_success_stats() both take order and map_pmd_thp() is still
>>>> implicitly mapping a PMD-sized block. Unless there is a reason you need this
>>>> parameter in the next patch (I don't think there is?) I suggest simplifying.
>>> If I am not wrong, thp_fault_alloc() and __thp_fault_success_stats()
>>> will remain the same in case of mTHP?
>> No, it's a bit different for smaller-than-pmd THP - that's handled in
>> alloc_anon_folio() in memory.c. It deals with fallback to smaller orders (inc
>> order-0). Potentially alloc_anon_folio() could be refactored to use your new
>> functions in future but I'm sure there would be a number of subtleties to
>> consider and in any case you would want to make that a separate change. The
>> order param would be added as part of that future change.
>
> Okay.
>
>>
>>> I chose to pass order so that these
>>> functions can be used by others in the future.
>> It's not valuable to have an internal interface with no users. My advice is to
>> keep it simple and only extend it when a user pops up.
>
> Makes sense. In that case, should I call it pmd_thp_fault_alloc() to enforce that
> this is not a non-PMD THP?

Yeah, that works for me.

>
>>
>>> Therefore, these two functions
>>> can be generically used in the future while map_pmd_thp() (as the name suggests)
>>> maps only a PMD-mappable THP.
>>>
>>>>> +                  unsigned long haddr, struct folio **foliop,
>>>> FWIW, I agree with Kirill's suggestion to just return folio* and drop the
>>>> output
>>>> param.
>>> As I replied to Kirill, I do not think that is a good idea. If you do a git
>>> grep on
>>> the tree for "foliop", you will find several places where that is being used,
>>> for
>>> example, check out alloc_charge_folio() in mm/khugepaged.c. The author
>>> intends to
>>> do the stat computation and setting *foliop in the function itself, so that the
>>> caller is only concerned with the return value.
>> By having 2 return params, you open up the possibility of returning an invalid
>> combination (e.g. ret==FALLBACK, folio!=NULL). You avoid that by having a single
>> return param, which is either NULL (failure) or a valid pointer.
>
> Okay, I agree with this in a generic sense.
>
>>
>>> Also, if we return the folio instead of VM_FAULT_FALLBACK from
>>> thp_fault_alloc(),
>>> then you will have two "if (unlikely(!folio))" branches, the first to do the
>>> stat
>>> computation in the function, and the second branch in the caller to set ret =
>>> VM_FAULT_FALLBACK
>>> and then goto out/release.
>> There are already 2 conditionals, that doesn't change.
>>
>>> And, it is already inconsistent to break away the stat computation and the
>>> return value
>>> setting, when the stat computation name is of the form
>>> "count_vm_event(THP_FAULT_FALLBACK)"
>>> and friends, at which point it would just be better to open code the function.
>> I don't understand this.
>>
>>> If I am missing something and your suggestion can be implemented neatly, please
>> This looks cleaner/simpler to my eye (also removing the order parameter):
>>
>> static struct folio *thp_fault_alloc(gfp_t gfp, struct vm_area_struct *vma,
>>                   unsigned long haddr, unsigned long addr)
>> {
>>     const int order = HPAGE_PMD_ORDER;
>>     struct folio *folio = vma_alloc_folio(gfp, order, vma, haddr, true);
>>
>>     if (unlikely(!folio)) {
>>         count_vm_event(THP_FAULT_FALLBACK);
>>         count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
>>         goto out;
>>     }
>>
>>     VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>>     if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
>>         folio_put(folio);
>>         count_vm_event(THP_FAULT_FALLBACK);
>>         count_vm_event(THP_FAULT_FALLBACK_CHARGE);
>>         count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
>>         count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
>>         goto out;
>>     }
>>     folio_throttle_swaprate(folio, gfp);
>>
>>     folio_zero_user(folio, addr);
>>     /*
>>      * The memory barrier inside __folio_mark_uptodate makes sure that
>>      * folio_zero_user writes become visible before the set_pmd_at()
>>      * write.
>>      */
>>     __folio_mark_uptodate(folio);
>> out:
>>     return folio;
>> }
>>
>>
>> static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>> {
>>     vm_fault_t ret = VM_FAULT_FALLBACK;
>>     ...
>>
>>     folio = thp_fault_alloc(gfp, vma, haddr, vmf->address);
>>     if (unlikely(!folio))
>>         goto release;
>>     ...
>> }
>>
>
> Thanks for your help. I will switch to this, just that, ret should still
> be initialized to zero in case we spot the pmd changes after taking the lock.
> So I'll set it in the case of !folio.

ret = thp_fault_alloc() will have already set it back to zero so there isn't a
bug. But if you prefer to explicitly set it in the "if" to make the code
clearer to read, that's fine by me.
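As an illustrative aside (not part of the patch): below is a minimal,
self-contained userspace C sketch of the two calling conventions discussed
above. The alloc_buf()/alloc_buf_out() names are made up for the example and
are not the kernel API; the point is only that a single pointer return (NULL
on failure) cannot express the inconsistent status/object pairs that a status
code plus output parameter can.

/*
 * Toy illustration only (userspace C, hypothetical names; not mm/ code).
 * Style A returns a status *and* writes through an output parameter, so a
 * caller can observe inconsistent pairs (e.g. failure status alongside a
 * stale pointer). Style B returns a single pointer: NULL means failure,
 * anything else is valid, so no inconsistent state is representable.
 */
#include <stdio.h>
#include <stdlib.h>

struct buf { size_t size; };

/* Style A: status code + output parameter. */
static int alloc_buf_out(size_t size, struct buf **out)
{
	*out = malloc(sizeof(**out));
	if (!*out)
		return -1;
	(*out)->size = size;
	return 0;
}

/* Style B: single pointer return; NULL on failure. */
static struct buf *alloc_buf(size_t size)
{
	struct buf *b = malloc(sizeof(*b));

	if (b)
		b->size = size;
	return b;
}

int main(void)
{
	struct buf *a = NULL;
	int ret = alloc_buf_out(64, &a);	/* caller must check ret AND a */
	struct buf *b = alloc_buf(64);		/* caller only checks b */

	printf("style A: ret=%d a=%p\n", ret, (void *)a);
	printf("style B: b=%p\n", (void *)b);
	free(a);
	free(b);
	return 0;
}

With style B the contract is simply "check the pointer", which is the property
thp_fault_alloc() gets by returning the folio directly.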
Thanks,
Ryan

>
>>
>>>> Thanks,
>>>> Ryan
>>>>
>>>>> +                  unsigned long addr)
>>>>>    {
>>>>> -    struct vm_area_struct *vma = vmf->vma;
>>>>> -    struct folio *folio = page_folio(page);
>>>>> -    pgtable_t pgtable;
>>>>> -    unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
>>>>> -    vm_fault_t ret = 0;
>>>>> +    struct folio *folio = vma_alloc_folio(gfp, order, vma, haddr, true);
>>>>>    -    VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>>>>> +    *foliop = folio;
>>>>> +    if (unlikely(!folio)) {
>>>>> +        count_vm_event(THP_FAULT_FALLBACK);
>>>>> +        count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
>>>>> +        return VM_FAULT_FALLBACK;
>>>>> +    }
>>>>>    +    VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>>>>>        if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
>>>>>            folio_put(folio);
>>>>>            count_vm_event(THP_FAULT_FALLBACK);
>>>>>            count_vm_event(THP_FAULT_FALLBACK_CHARGE);
>>>>> -        count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK);
>>>>> -        count_mthp_stat(HPAGE_PMD_ORDER,
>>>>> MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
>>>>> +        count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
>>>>> +        count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
>>>>>            return VM_FAULT_FALLBACK;
>>>>>        }
>>>>>        folio_throttle_swaprate(folio, gfp);
>>>>>    -    pgtable = pte_alloc_one(vma->vm_mm);
>>>>> -    if (unlikely(!pgtable)) {
>>>>> -        ret = VM_FAULT_OOM;
>>>>> -        goto release;
>>>>> -    }
>>>>> -
>>>>> -    folio_zero_user(folio, vmf->address);
>>>>> +    folio_zero_user(folio, addr);
>>>>>        /*
>>>>>         * The memory barrier inside __folio_mark_uptodate makes sure that
>>>>>         * folio_zero_user writes become visible before the set_pmd_at()
>>>>>         * write.
>>>>>         */
>>>>>        __folio_mark_uptodate(folio);
>>>>> +    return 0;
>>>>> +}
>>>>> +
>>>>> +static void __thp_fault_success_stats(struct vm_area_struct *vma, int order)
>>>>> +{
>>>>> +    count_vm_event(THP_FAULT_ALLOC);
>>>>> +    count_mthp_stat(order, MTHP_STAT_ANON_FAULT_ALLOC);
>>>>> +    count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
>>>>> +}
>>>>> +
>>>>> +static void map_pmd_thp(struct folio *folio, struct vm_fault *vmf,
>>>>> +            struct vm_area_struct *vma, unsigned long haddr,
>>>>> +            pgtable_t pgtable)
>>>>> +{
>>>>> +    pmd_t entry;
>>>>> +
>>>>> +    entry = mk_huge_pmd(&folio->page, vma->vm_page_prot);
>>>>> +    entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
>>>>> +    folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
>>>>> +    folio_add_lru_vma(folio, vma);
>>>>> +    pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
>>>>> +    set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
>>>>> +    update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
>>>>> +    add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
>>>>> +    mm_inc_nr_ptes(vma->vm_mm);
>>>>> +}
>>>>> +
>>>>> +static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>>>>> +{
>>>>> +    struct vm_area_struct *vma = vmf->vma;
>>>>> +    struct folio *folio = NULL;
>>>>> +    pgtable_t pgtable;
>>>>> +    unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
>>>>> +    vm_fault_t ret = 0;
>>>>> +    gfp_t gfp = vma_thp_gfp_mask(vma);
>>>>> +
>>>>> +    pgtable = pte_alloc_one(vma->vm_mm);
>>>>> +    if (unlikely(!pgtable)) {
>>>>> +        ret = VM_FAULT_OOM;
>>>>> +        goto release;
>>>>> +    }
>>>>> +
>>>>> +    ret = thp_fault_alloc(gfp, HPAGE_PMD_ORDER, vma, haddr, &folio,
>>>>> +                  vmf->address);
>>>>> +    if (ret)
>>>>> +        goto release;
>>>>>          vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
>>>>> +
>>>>>        if (unlikely(!pmd_none(*vmf->pmd))) {
>>>>>            goto unlock_release;
>>>>>        } else {
>>>>> -        pmd_t entry;
>>>>> -
>>>>>            ret = check_stable_address_space(vma->vm_mm);
>>>>>            if (ret)
>>>>>                goto unlock_release;
>>>>> @@ -997,20 +1039,9 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct
>>>>> vm_fault *vmf,
>>>>>                VM_BUG_ON(ret & VM_FAULT_FALLBACK);
>>>>>                return ret;
>>>>>            }
>>>>> -
>>>>> -        entry = mk_huge_pmd(page, vma->vm_page_prot);
>>>>> -        entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
>>>>> -        folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
>>>>> -        folio_add_lru_vma(folio, vma);
>>>>> -        pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
>>>>> -        set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
>>>>> -        update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
>>>>> -        add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
>>>>> -        mm_inc_nr_ptes(vma->vm_mm);
>>>>> +        map_pmd_thp(folio, vmf, vma, haddr, pgtable);
>>>>>            spin_unlock(vmf->ptl);
>>>>> -        count_vm_event(THP_FAULT_ALLOC);
>>>>> -        count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
>>>>> -        count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
>>>>> +        __thp_fault_success_stats(vma, HPAGE_PMD_ORDER);
>>>>>        }
>>>>>          return 0;
>>>>> @@ -1019,7 +1050,8 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct
>>>>> vm_fault *vmf,
>>>>>    release:
>>>>>        if (pgtable)
>>>>>            pte_free(vma->vm_mm, pgtable);
>>>>> -    folio_put(folio);
>>>>> +    if (folio)
>>>>> +        folio_put(folio);
>>>>>        return ret;
>>>>>      }
>>>>> @@ -1077,8 +1109,6 @@ static void set_huge_zero_folio(pgtable_t pgtable,
>>>>> struct mm_struct *mm,
>>>>>    vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>>>>>    {
>>>>>        struct vm_area_struct *vma = vmf->vma;
>>>>> -    gfp_t gfp;
>>>>> -    struct folio *folio;
>>>>>        unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
>>>>>        vm_fault_t ret;
>>>>>    @@ -1129,14 +1159,8 @@ vm_fault_t do_huge_pmd_anonymous_page(struct
>>>>> vm_fault *vmf)
>>>>>            }
>>>>>            return ret;
>>>>>        }
>>>>> -    gfp = vma_thp_gfp_mask(vma);
>>>>> -    folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, vma, haddr, true);
>>>>> -    if (unlikely(!folio)) {
>>>>> -        count_vm_event(THP_FAULT_FALLBACK);
>>>>> -        count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK);
>>>>> -        return VM_FAULT_FALLBACK;
>>>>> -    }
>>>>> -    return __do_huge_pmd_anonymous_page(vmf, &folio->page, gfp);
>>>>> +
>>>>> +    return __do_huge_pmd_anonymous_page(vmf);
>>>>>    }
>>>>>      static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long
>>>>> addr,
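
As a closing illustration (again not from the patch): a small, compilable
userspace C program that mirrors the unwind structure of the refactored
__do_huge_pmd_anonymous_page() above - allocate the page table, then the
folio, take the lock, and bail out through unlock_release/release - using
hypothetical stand-ins (malloc, a pthread mutex, made-up error codes) for the
kernel primitives.

/*
 * Toy mirror of the control flow discussed in this thread; hypothetical
 * names and userspace stand-ins, not kernel code.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t ptl = PTHREAD_MUTEX_INITIALIZER;

#define FAULT_OOM	1	/* stand-in for VM_FAULT_OOM */
#define FAULT_FALLBACK	2	/* stand-in for VM_FAULT_FALLBACK */

static int toy_huge_fault(int pmd_already_populated)
{
	int ret = 0;
	void *pgtable = NULL;
	void *folio = NULL;

	pgtable = malloc(64);			/* ~ pte_alloc_one() */
	if (!pgtable) {
		ret = FAULT_OOM;
		goto release;
	}

	folio = malloc(4096);			/* ~ thp_fault_alloc() */
	if (!folio) {
		ret = FAULT_FALLBACK;
		goto release;
	}

	pthread_mutex_lock(&ptl);		/* ~ pmd_lock() */
	if (pmd_already_populated)		/* ~ !pmd_none(*vmf->pmd) */
		goto unlock_release;

	/* ~ map_pmd_thp(), spin_unlock(), __thp_fault_success_stats() */
	pthread_mutex_unlock(&ptl);
	/* Toy cleanup only: the real code keeps the folio mapped and the
	 * pgtable deposited rather than freeing them here. */
	free(folio);
	free(pgtable);
	return 0;

unlock_release:
	pthread_mutex_unlock(&ptl);
release:
	free(pgtable);				/* free(NULL) is a no-op */
	free(folio);
	return ret;
}

int main(void)
{
	printf("pmd empty:     %d\n", toy_huge_fault(0));
	printf("pmd populated: %d\n", toy_huge_fault(1));
	return 0;
}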