From: Balbir Singh <bsingharora@gmail.com>
To: john.hubbard@gmail.com
Cc: Matthew Wilcox <willy@infradead.org>,
Michal Hocko <mhocko@kernel.org>,
Christopher Lameter <cl@linux.com>,
Jason Gunthorpe <jgg@ziepe.ca>,
Dan Williams <dan.j.williams@intel.com>, Jan Kara <jack@suse.cz>,
linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
LKML <linux-kernel@vger.kernel.org>,
linux-rdma <linux-rdma@vger.kernel.org>,
linux-fsdevel@vger.kernel.org, John Hubbard <jhubbard@nvidia.com>
Subject: Re: [PATCH 4/6] mm: introduce page->dma_pinned_flags, _count
Date: Fri, 12 Oct 2018 21:56:12 +1100 [thread overview]
Message-ID: <20181012105612.GK8537@350D> (raw)
In-Reply-To: <20181012060014.10242-5-jhubbard@nvidia.com>
On Thu, Oct 11, 2018 at 11:00:12PM -0700, john.hubbard@gmail.com wrote:
> From: John Hubbard <jhubbard@nvidia.com>
>
> Add two struct page fields that, combined, are unioned with
> struct page->lru. There is no change in the size of
> struct page. These new fields are for type safety and clarity.
>
> Also add page flag accessors to test, set and clear the new
> page->dma_pinned_flags field.
>
> The page->dma_pinned_count field will be used in upcoming
> patches.
>
> Signed-off-by: John Hubbard <jhubbard@nvidia.com>
> ---
> include/linux/mm_types.h | 22 +++++++++++++-----
> include/linux/page-flags.h | 47 ++++++++++++++++++++++++++++++++++++++
> 2 files changed, 63 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 5ed8f6292a53..017ab82e36ca 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -78,12 +78,22 @@ struct page {
> */
> union {
> struct { /* Page cache and anonymous pages */
> - /**
> - * @lru: Pageout list, eg. active_list protected by
> - * zone_lru_lock. Sometimes used as a generic list
> - * by the page owner.
> - */
> - struct list_head lru;
> + union {
> + /**
> + * @lru: Pageout list, eg. active_list protected
> + * by zone_lru_lock. Sometimes used as a
> + * generic list by the page owner.
> + */
> + struct list_head lru;
> + /* Used by get_user_pages*(). Pages may not be
> + * on an LRU while these dma_pinned_* fields
> + * are in use.
> + */
> + struct {
> + unsigned long dma_pinned_flags;
> + atomic_t dma_pinned_count;
> + };
> + };
> /* See page-flags.h for PAGE_MAPPING_FLAGS */
> struct address_space *mapping;
> pgoff_t index; /* Our offset within mapping. */
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 74bee8cecf4c..81ed52c3caae 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -425,6 +425,53 @@ static __always_inline int __PageMovable(struct page *page)
> PAGE_MAPPING_MOVABLE;
> }
>
> +/*
> + * Because page->dma_pinned_flags is unioned with page->lru, any page that
> + * uses these flags must NOT be on an LRU. That's partly enforced by
> + * ClearPageDmaPinned, which gives the page back to LRU.
> + *
> + * PageDmaPinned also corresponds to PageTail (the 0th bit in the first union
> + * of struct page), and this flag is checked without knowing whether it is a
> + * tail page or a PageDmaPinned page. Therefore, start the flags at bit 1 (0x2),
> + * rather than bit 0.
> + */
> +#define PAGE_DMA_PINNED 0x2
> +#define PAGE_DMA_PINNED_FLAGS (PAGE_DMA_PINNED)
> +
This is really subtle. Will additional changes to compound_head() need to
coordinate with these flags? Also, doesn't this bit need to be unique across
all structs in the union? I guess that is guaranteed by the fact that
page == compound_head(page), as per your assertion, but I've forgotten why
that is true. Could you please add some commentary on that?
> +/*
> + * Because these flags are read outside of a lock, ensure visibility between
> + * different threads, by using READ|WRITE_ONCE.
> + */
> +static __always_inline int PageDmaPinnedFlags(struct page *page)
> +{
> + VM_BUG_ON(page != compound_head(page));
> + return (READ_ONCE(page->dma_pinned_flags) & PAGE_DMA_PINNED_FLAGS) != 0;
> +}
> +
> +static __always_inline int PageDmaPinned(struct page *page)
> +{
> + VM_BUG_ON(page != compound_head(page));
> + return (READ_ONCE(page->dma_pinned_flags) & PAGE_DMA_PINNED) != 0;
> +}
> +
> +static __always_inline void SetPageDmaPinned(struct page *page)
> +{
> + VM_BUG_ON(page != compound_head(page));
VM_BUG_ON(!list_empty(&page->lru));
> + WRITE_ONCE(page->dma_pinned_flags, PAGE_DMA_PINNED);
> +}
> +
> +static __always_inline void ClearPageDmaPinned(struct page *page)
> +{
> + VM_BUG_ON(page != compound_head(page));
> + VM_BUG_ON_PAGE(!PageDmaPinnedFlags(page), page);
> +
> + /* This does a WRITE_ONCE to the lru.next, which is also the
> + * page->dma_pinned_flags field. So in addition to restoring page->lru,
> + * this provides visibility to other threads.
> + */
> + INIT_LIST_HEAD(&page->lru);
This assumes certain things about the layout of list_head. Why not clear the
flag with an explicit initialization, rather than relying on INIT_LIST_HEAD()
to overwrite it?
> +}
> +
> #ifdef CONFIG_KSM
> /*
> * A KSM page is one of those write-protected "shared pages" or "merged pages"
> --
> 2.19.1
>