linux-mm.kvack.org archive mirror
From: Matthew Wilcox <willy@infradead.org>
To: Mike Rapoport <rppt@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH] docs/mm: Physical Memory: add "Memory map" section
Date: Wed, 6 Sep 2023 13:21:02 +0100	[thread overview]
Message-ID: <ZPhurt9P7hnsVvua@casper.infradead.org> (raw)
In-Reply-To: <20230906074210.3051751-1-rppt@kernel.org>

On Wed, Sep 06, 2023 at 10:42:10AM +0300, Mike Rapoport wrote:
> +The basic memory descriptor is called :ref:`struct page <Pages>` and it is
> +essentially a union of several structures, each representing a page frame
> +metadata for a paricular usage.

"each representing page frame metadata".  And "particular".

>  Folios
> -======
> +------
>  
> -.. admonition:: Stub
> +`struct folio` represents a physically, virtually and logically contiguous
> +set of bytes. It is a power-of-two in size, and it is aligned to that same
> +power-of-two. It is at least as large as ``PAGE_SIZE``. If it is in the
> +page cache, it is at a file offset which is a multiple of that
> +power-of-two. It may be mapped into userspace at an address which is at an
> +arbitrary page offset, but its kernel virtual address is aligned to its
> +size.
>  
> -   This section is incomplete. Please list and describe the appropriate fields.
> +`struct folio` occupies several consecutive entries in the memory map and
> +has the following fields:
> +
> +``flags``
> +  Identical to the page flags.
> +
> +``lru``
> +  Least Recently Used list; tracks how recently this folio was used.
> +
> +``mlock_count``
> +  Number of times this folio has been pinned by mlock().
> +
> +``mapping``
> +  The file this page belongs to. Can be pagecache or swapcache. For
> +  anonymous memory this refers to the `struct anon_vma`.
> +
> +``index``
> +  Offset within the file, in units of pages. For anonymous memory, this is
> +  the index from the beginning of the mmap.
> +
> +``private``
> +  Filesystem per-folio data (see folio_attach_private()). Used for
> +  ``swp_entry_t`` if the folio is in the swap cache
> +  (i.e. folio_test_swapcache() is true).
> +
> +``_mapcount``
> +  Do not access this member directly. Use folio_mapcount() to find out how
> +  many times this folio is mapped by userspace.
> +
> +``_refcount``
> +  Do not access this member directly. Use folio_ref_count() to find how
> +  many references there are to this folio.
> +
> +``memcg_data``
> +  Memory Control Group data.
> +
> +``_folio_dtor``
> +  Which destructor to use for this folio.
> +
> +``_folio_order``
> +  The allocation order of a folio. Do not use directly, call folio_order().
> +
> +``_entire_mapcount``
> +  How many times the entire folio is mapped as a single unit (for example
> +  by a PMD or PUD entry). Does not include PTE-mapped subpages. This might
> +  be useful for debugging, but to find out how many times the folio is
> +  mapped look at folio_mapcount() or page_mapcount() or total_mapcount()
> +  instead.
> +  Do not use directly, call folio_entire_mapcount().
> +
> +``_nr_pages_mapped``
> +  The total number of times the folio is mapped.
> +  Do not use directly, call folio_mapcount().
> +
> +``_pincount``
> +  Used to track pinning of the folio for DMA.
> +  Do not use directly, call folio_maybe_dma_pinned().
> +
> +``_folio_nr_pages``
> +  The number of pages in the folio.
> +  Do not use directly, call folio_nr_pages().
> +
> +``_hugetlb_subpool``
> +  HugeTLB subpool the folio belongs to.
> +  Do not use directly, use accessor in ``include/linux/hugetlb.h``.
> +
> +``_hugetlb_cgroup``
> +  Memory Control Group data for a HugeTLB folio.
> +  Do not use directly, use accessor in ``include/linux/hugetlb_cgroup.h``.
> +
> +``_hugetlb_cgroup_rsvd``
> +  Memory Control Group data for a HugeTLB folio.
> +  Do not use directly, use accessor in ``include/linux/hugetlb_cgroup.h``.
> +
> +``_hugetlb_hwpoison``
> +  List of failed (hwpoisoned) pages for a HugeTLB folio.
> +  Do not use directly, call raw_hwp_list_head().
> +
> +``_deferred_list``
> +  Folios to be split under memory pressure.

I don't understand why you've done all this instead of linking to the
kernel-doc I wrote.



Thread overview: 8+ messages
2023-09-06  7:42 [PATCH] docs/mm: Physical Memory: add "Memory map" section Mike Rapoport
2023-09-06 12:21 ` Matthew Wilcox [this message]
2023-09-06 12:52   ` Mike Rapoport
2023-09-06 14:09     ` Matthew Wilcox
2023-09-06 14:41 ` Jonathan Corbet
2023-09-06 15:04   ` Matthew Wilcox
2023-09-06 15:24     ` Jonathan Corbet
2023-09-07 14:20   ` Mike Rapoport
