Date: Wed, 6 Sep 2023 13:21:02 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Mike Rapoport
Cc: Jonathan Corbet, Andrew Morton, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH] docs/mm: Physical Memory: add "Memory map" section
In-Reply-To: <20230906074210.3051751-1-rppt@kernel.org>
References: <20230906074210.3051751-1-rppt@kernel.org>

On Wed, Sep 06, 2023 at 10:42:10AM +0300, Mike Rapoport wrote:
> +The basic memory descriptor is called :ref:`struct page ` and it is
> +essentially a union of several structures, each representing a page frame
> +metadata for a paricular usage.

"each representing page frame metadata".  And "particular".

>  Folios
> -======
> +------
> 
> -.. admonition:: Stub
> +`struct folio` represents a physically, virtually and logically contiguous
> +set of bytes. It is a power-of-two in size, and it is aligned to that same
> +power-of-two. It is at least as large as ``PAGE_SIZE``. If it is in the
> +page cache, it is at a file offset which is a multiple of that
> +power-of-two. It may be mapped into userspace at an address which is at an
> +arbitrary page offset, but its kernel virtual address is aligned to its
> +size.
> 
> -   This section is incomplete. Please list and describe the appropriate fields.
> +`struct folio` occupies several consecutive entries in the memory map and
> +has the following fields:
> +
> +``flags``
> +  Identical to the page flags.
> +
> +``lru``
> +  Least Recently Used list; tracks how recently this folio was used.
> +
> +``mlock_count``
> +  Number of times this folio has been pinned by mlock().
> +
> +``mapping``
> +  The file this page belongs to. Can be pagecache or swapcahe. For
> +  anonymous memory refers to the `struct anon_vma`.
> +
> +``index``
> +  Offset within the file, in units of pages. For anonymous memory, this is
> +  the index from the beginning of the mmap.
> +
> +``private``
> +  Filesystem per-folio data (see folio_attach_private()). Used for
> +  ``swp_entry_t`` if folio is in the swap cache
> +  (i.e. folio_test_swapcache() is true)
> +
> +``_mapcount``
> +  Do not access this member directly. Use folio_mapcount() to find out how
> +  many times this folio is mapped by userspace.
> +
> +``_refcount``
> +  Do not access this member directly. Use folio_ref_count() to find how
> +  many references there are to this folio.
> +
> +``memcg_data``
> +  Memory Control Group data.
> +
> +``_folio_dtor``
> +  Which destructor to use for this folio.
> +
> +``_folio_order``
> +  The allocation order of a folio. Do not use directly, call folio_order().
> +
> +``_entire_mapcount``
> +  How many times the entire folio is mapped as a single unit (for example
> +  by a PMD or PUD entry). Does not include PTE-mapped subpages. This might
> +  be useful for debugging, but to find out how many times the folio is
> +  mapped look at folio_mapcount() or page_mapcount() or total_mapcount()
> +  instead.
> +  Do not use directly, call folio_entire_mapcount().
> +
> +``_nr_pages_mapped``
> +  The total number of times the folio is mapped.
> +  Do not use directly, call folio_mapcount().
> +
> +``_pincount``
> +  Used to track pinning of the folio for DMA.
> +  Do not use directly, call folio_maybe_dma_pinned().
> +
> +``_folio_nr_pages``
> +  The number of pages in the folio.
> +  Do not use directly, call folio_nr_pages().
> +
> +``_hugetlb_subpool``
> +  HugeTLB subpool the folio beongs to.
> +  Do not use directly, use accessor in ``include/linux/hugetlb.h``.
> +
> +``_hugetlb_cgroup``
> +  Memory Control Group data for a HugeTLB folio.
> +  Do not use directly, use accessor in ``include/linux/hugetlb_cgroup.h``.
> +
> +``_hugetlb_cgroup_rsvd``
> +  Memory Control Group data for a HugeTLB folio.
> +  Do not use directly, use accessor in ``include/linux/hugetlb_cgroup.h``.
> +
> +``_hugetlb_hwpoison``
> +  List of failed (hwpoisoned) pages for a HugeTLB folio.
> +  Do not use directly, call raw_hwp_list_head().
> +
> +``_deferred_list``
> +  Folios to be split under memory pressure.

I don't understand why you've done all this instead of linking to the
kernel-doc I wrote.
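
For reference, a rough sketch of how the accessors named in the quoted text
are meant to be used instead of poking at the underscore-prefixed fields.
The dump_folio_state() helper below is made up purely for illustration; it
is not part of the patch under review or of the existing kernel-doc.

#include <linux/mm.h>
#include <linux/pagemap.h>

/* Hypothetical helper, for illustration only. */
static void dump_folio_state(struct folio *folio)
{
	/* Always go through the accessors, never the raw fields. */
	pr_info("order %u, %ld pages, %zu bytes\n",
		folio_order(folio), folio_nr_pages(folio), folio_size(folio));
	pr_info("mapped %d time(s), %d reference(s)\n",
		folio_mapcount(folio), folio_ref_count(folio));

	if (folio_test_swapcache(folio))
		pr_info("folio is in the swap cache\n");
	if (folio_maybe_dma_pinned(folio))
		pr_info("folio may be pinned for DMA\n");
}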