From: James Houghton
To: Peter Xu
Cc: Mike Kravetz, Muchun Song, David Hildenbrand, David Rientjes,
	Axel Rasmussen, Mina Almasry, "Zach O'Keefe", Manish Mishra,
	Naoya Horiguchi, "Dr. David Alan Gilbert", "Matthew Wilcox (Oracle)",
	Vlastimil Babka, Baolin Wang, Miaohe Lin, Yang Shi, Andrew Morton,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 21/46] hugetlb: use struct hugetlb_pte for walk_hugetlb_range
Date: Thu, 12 Jan 2023 11:45:40 -0500
References: <20230105101844.1893104-1-jthoughton@google.com>
	<20230105101844.1893104-22-jthoughton@google.com>

On Thu, Jan 12, 2023 at 10:29 AM Peter Xu wrote:
>
> On Thu, Jan 12, 2023 at 09:06:48AM -0500, James Houghton wrote:
> > On Wed, Jan 11, 2023 at 5:58 PM Peter Xu wrote:
> > >
> > > James,
> > >
> > > On Thu, Jan 05, 2023 at 10:18:19AM +0000, James Houghton wrote:
> > > > @@ -751,9 +761,9 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
> > > >  		int mapcount = page_mapcount(page);
> > > >
> > > >  		if (mapcount >= 2)
> > > > -			mss->shared_hugetlb += huge_page_size(hstate_vma(vma));
> > > > +			mss->shared_hugetlb += hugetlb_pte_size(hpte);
> > > >  		else
> > > > -			mss->private_hugetlb += huge_page_size(hstate_vma(vma));
> > > > +			mss->private_hugetlb += hugetlb_pte_size(hpte);
> > > >  	}
> > > >  	return 0;
> > >
> > > One thing interesting I found with hgm right now is mostly everything will
> > > be counted as "shared" here, I think it's because mapcount is accounted
> > > always to the huge page even if mapped in smaller sizes, so page_mapcount()
> > > to a small page should be huge too because the head page mapcount should be
> > > huge.  I'm curious the reasons behind the mapcount decision.
> > >
> > > For example, would that risk overflow with head_compound_mapcount?  One 1G
> > > page mapping all 4K takes 0.25M counts, while the limit should be 2G for
> > > atomic_t.  Looks like it's possible.
> >
> > The original mapcount approach was "if the hstate-level PTE is
> > present, increment the compound_mapcount by 1" (basically "if any of
> > the hugepage is mapped, increment the compound_mapcount by 1"), but
> > this was painful to implement,
>
> Any more info here on why it was painful?  What is the major blocker?

The original approach was implemented in RFC v1, but the implementation
was broken: the way refcount was handled was wrong; it was incremented
once for each new page table mapping. (How? find_lock_page(), called
once per hugetlb_no_page/UFFDIO_CONTINUE, would increment refcount and
we wouldn't drop it, and in __unmap_hugepage_range(), the mmu_gather
bits would decrement the refcount once per mapping.) At the time, I
figured handling mapcount AND refcount correctly in the original
approach would be quite complex, so I switched to the new one. Roughly,
getting the original approach right would mean:

1. In places that already change the mapcount, check that we're
installing the hstate-level PTE, not a high-granularity PTE. Adjust
mapcount AND refcount appropriately.

2. In the HGM walking bits, report to the caller whether we made the
hstate-level PTE present. (hugetlb_[pmd,pte]_alloc is the source of
truth.) Need to keep track of this until we figure out which page we're
allocating PTEs for, then change mapcount/refcount appropriately.

3. In unmapping bits, change mmu_gather/tlb bits to drop refcount only
once per hugepage. (This is probably the hardest of these three things
to get right.) A rough, untested sketch of (1)+(2) is inlined a bit
further down.

>
> > so I changed it to what it is now (each new PT mapping increments the
> > compound_mapcount by 1). I think you're right, there is absolutely an
> > overflow risk. :( I'm not sure what the best solution is. I could just go
> > back to the old approach.
>
> No rush on that; let's discuss it thoroughly before doing anything.  We
> have more context than when it was discussed initially in the calls, so
> maybe a good time to revisit.
> >
> > Regarding when things are accounted in private_hugetlb vs.
> > shared_hugetlb, HGM shouldn't change that, because it only applies to
> > shared mappings (right now anyway). It seems like "private_hugetlb"
> > can include cases where the page is shared but has only one mapping,
> > in which case HGM will change it from "private" to "shared".
>
> The two fields are not defined against VM_SHARED, it seems.  At least not
> with current code base.
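
(Coming back to (1) and (2) in the list above for a second: here is a
very rough, untested sketch of the accounting side. `new_hstate_pte` is
a made-up name for whatever the walking code would end up reporting,
i.e. "this walk just made the hstate-level PTE present".)

	/* new_hstate_pte is a placeholder flag, see above. */
	if (new_hstate_pte) {
		/* First time any part of this hugepage is mapped here. */
		page_dup_file_rmap(page, true);	/* compound mapcount +1 */
		/* Keep the find_lock_page() reference: refcount +1, once. */
	} else {
		/* Later high-granularity mappings add no mapcount/refcount, */
		/* so drop the extra reference we took. */
		put_page(page);
	}
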
>
> Let me quote the code again just to be clear:
>
> 	int mapcount = page_mapcount(page);    <------------- [1]
>
> 	if (mapcount >= 2)
> 		mss->shared_hugetlb += hugetlb_pte_size(hpte);
> 	else
> 		mss->private_hugetlb += hugetlb_pte_size(hpte);
>
> 	smaps_hugetlb_hgm_account(mss, hpte);
>
> So that information (for some reason) is only relevant to how many mapcount
> is there.  If we have one 1G page and only two 4K mapped, with the existing
> logic we should see 8K private_hugetlb while in fact I think it should be
> 8K shared_hugetlb due to page_mapcount() taking account of both 4K mappings
> (as they all goes back to the head).
>
> I have no idea whether violating that will be a problem or not, I suppose
> at least it needs justification if it will be violated, or hopefully it can
> be kept as-is.

Agreed that this is a problem. I'm not sure what should be done here. It
seems like the current upstream implementation is incorrect (surely
MAP_SHARED with only one mapping should still be shared_hugetlb, not
private_hugetlb); the check should really be
`if (vma->vm_flags & VM_MAYSHARE)` instead of `mapcount >= 2`. If that
change can be taken, we don't have a problem here.

>
> >
> > >
> > > Btw, are the small page* pointers still needed in the latest HGM design?
> > > Is there code taking care of disabling of hugetlb vmemmap optimization for
> > > HGM?  Or maybe it's not needed anymore for the current design?
> > The hugetlb vmemmap optimization can still be used with HGM, so there
> > is no code to disable it. We don't need small page* pointers either,
> > except for when we're dealing with mapping subpages, like in
> > hugetlb_no_page. Everything else can handle the hugetlb page as a
> > folio.
> >
> > I hope we can keep compatibility with the vmemmap optimization while
> > solving the mapcount overflow risk.
> Yeh that'll be perfect if it works.  But afaiu even with your current
> design (ignoring all the issues on either smaps accounting or overflow
> risks), we already referenced the small pages, aren't we?  See:
>
> static inline int page_mapcount(struct page *page)
> {
> 	int mapcount = atomic_read(&page->_mapcount) + 1;    <-------- here
>
> 	if (likely(!PageCompound(page)))
> 		return mapcount;
> 	page = compound_head(page);
> 	return head_compound_mapcount(page) + mapcount;
> }
>
> Even if we assume small page->_mapcount should always be zero in this case,
> we may need to take special care of hugetlb pages in page_mapcount() to not
> reference it at all.  Or I think it's reading random values and some days
> it can be non-zero.

IIUC, it's ok to read from all the hugetlb subpage structs, you just
can't *write* to them (except the first few). The first page of page
structs is mapped RW; all the others are mapped RO to a single physical
page.

>
> The other approach is probably using the thp approach.  After Hugh's rework
> on the thp accounting I assumed it would be even cleaner (at least no
> DoubleMap complexity anymore.. even though I can't say I fully digested the
> whole history of that).  It's all about what's the major challenges of
> using the same approach there with thp.  You may have more knowledge on
> that aspect so I'd be willing to know.

I need to take a closer look at Hugh's approach to see if we can do it
the same way. (I wonder if the 1G THP series has some ideas too.) A
really simple solution could be just to prevent userspace from doing
MADV_SPLIT (or, if we are enabling HGM due to hwpoison, ignore the
poison) if it could result in a mapcount overflow.
For 1G pages, userspace would need 8192 mappings to overflow
mapcount/refcount: each fully mapped 1G page adds 1G/4K = 262144 to the
head page's counts, and 8192 * 262144 = 2^31, which is already past
INT_MAX.
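
Going back to the smaps accounting for a second, the change I have in
mind (untested, on top of this series) is just:

 	if (page) {
-		int mapcount = page_mapcount(page);
-
-		if (mapcount >= 2)
+		if (vma->vm_flags & VM_MAYSHARE)
 			mss->shared_hugetlb += hugetlb_pte_size(hpte);
 		else
 			mss->private_hugetlb += hugetlb_pte_size(hpte);
 	}

i.e. key the shared/private split off the VMA instead of the mapcount,
which also sidesteps the head-mapcount question for smaps.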