From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
To: linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com, linux@arm.linux.org.uk, linux-arch@vger.kernel.org, linux-mm@kvack.org
Cc: will.deacon@arm.com, gary.robertson@linaro.org, christoffer.dall@linaro.org, peterz@infradead.org, anders.roxell@linaro.org, akpm@linux-foundation.org, dann.frazier@canonical.com, mark.rutland@arm.com, mgorman@suse.de, hughd@google.com, Steve Capper <steve.capper@linaro.org>
Subject: Re: [PATCH V4 1/6] mm: Introduce a general RCU get_user_pages_fast.
Date: Mon, 13 Oct 2014 11:52:26 +0530
Message-ID: <87a9501obh.fsf@linux.vnet.ibm.com>
In-Reply-To: <1411740233-28038-2-git-send-email-steve.capper@linaro.org>

Steve Capper <steve.capper@linaro.org> writes:

.....

> +
> +static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
> +		unsigned long end, int write, struct page **pages, int *nr)
> +{
> +	struct page *head, *page, *tail;
> +	int refs;
> +
> +	if (write && !pmd_write(orig))
> +		return 0;
> +
> +	refs = 0;
> +	head = pmd_page(orig);
> +	page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
> +	tail = page;
> +	do {
> +		VM_BUG_ON_PAGE(compound_head(page) != head, page);
> +		pages[*nr] = page;
> +		(*nr)++;
> +		page++;
> +		refs++;
> +	} while (addr += PAGE_SIZE, addr != end);
> +
> +	if (!page_cache_add_speculative(head, refs)) {
> +		*nr -= refs;
> +		return 0;
> +	}
> +
> +	if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
> +		*nr -= refs;
> +		while (refs--)
> +			put_page(head);
> +		return 0;
> +	}
> +
> +	/*
> +	 * Any tail pages need their mapcount reference taken before we
> +	 * return. (This allows the THP code to bump their ref count when
> +	 * they are split into base pages).
> +	 */
> +	while (refs--) {
> +		if (PageTail(tail))
> +			get_huge_page_tail(tail);
> +		tail++;
> +	}
> +
> +	return 1;
> +}
> +

.....
> +static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
> +		int write, struct page **pages, int *nr)
> +{
> +	unsigned long next;
> +	pmd_t *pmdp;
> +
> +	pmdp = pmd_offset(&pud, addr);
> +	do {
> +		pmd_t pmd = ACCESS_ONCE(*pmdp);
> +
> +		next = pmd_addr_end(addr, end);
> +		if (pmd_none(pmd) || pmd_trans_splitting(pmd))
> +			return 0;
> +
> +		if (unlikely(pmd_trans_huge(pmd) || pmd_huge(pmd))) {

We don't check _PAGE_PRESENT here.

> +			/*
> +			 * NUMA hinting faults need to be handled in the GUP
> +			 * slowpath for accounting purposes and so that they
> +			 * can be serialised against THP migration.
> +			 */
> +			if (pmd_numa(pmd))
> +				return 0;
> +
> +			if (!gup_huge_pmd(pmd, pmdp, addr, next, write,
> +					pages, nr))
> +				return 0;
> +
> +		} else if (!gup_pte_range(pmd, addr, next, write, pages, nr))
> +			return 0;
> +	} while (pmdp++, addr = next, addr != end);
> +
> +	return 1;
> +}
> +

-aneesh