From: Yu Zhao
Date: Wed, 10 Jul 2024 11:12:01 -0600
Subject: Re: [PATCH v3 0/3] A Solution to Re-enable hugetlb vmemmap optimize
To: Catalin Marinas
Cc: Nanyong Sun, will@kernel.org, mike.kravetz@oracle.com, muchun.song@linux.dev, akpm@linux-foundation.org, anshuman.khandual@arm.com, willy@infradead.org, wangkefeng.wang@huawei.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org

On Wed, Jul 10, 2024 at 10:51 AM Catalin Marinas wrote:
>
> On Fri, Jul 05, 2024 at 11:41:34AM -0600, Yu Zhao wrote:
> > On Fri, Jul 5, 2024 at 9:49 AM Catalin Marinas wrote:
> > > If I did the maths right, for a 2MB hugetlb page, we have about 8
> > > vmemmap pages (32K). Once we split a 2MB vmemmap range,
> >
> > Correct.
> >
> > > whatever else needs to be touched in this range won't require a
> > > stop_machine().
> >
> > There might be some misunderstandings here.
> >
> > To do HVO:
> > 1. we split a PMD into 512 PTEs;
> > 2. for every 8 PTEs:
> >    2a. we allocate an order-0 page for PTE #0;
> >    2b. we remap PTE #0 *RW* to this page;
> >    2c. we remap PTEs #1-7 *RO* to this page;
> >    2d. we free the original order-3 page.
>
> Thanks. I now remember why we reverted such support in 060a2c92d1b6
> ("arm64: mm: hugetlb: Disable HUGETLB_PAGE_OPTIMIZE_VMEMMAP"). The main
> problem is that point 2c also changes the output address of the PTE
> (and the content of the page slightly).
> The architecture requires a break-before-make in such a scenario, though
> it would have been nice if it was more specific on what could go wrong.
>
> We can do point 1 safely if we have FEAT_BBM level 2. For point 2, I
> assume these 8 vmemmap pages may be accessed and that's why we can't do
> a break-before-make safely.

Correct.

> I was wondering whether we could make the PTEs RO first and then change
> the output address, but we have another rule that the content of the
> page should be the same. I don't think entries 1-7 are identical to
> entry 0 (though we could ask the architects for clarification here).
> Also, can we guarantee that nothing writes to entry 0 while we would do
> such remapping?

Yes, it's already guaranteed.

> We know entries 1-7 won't be written as we mapped them as RO, but entry
> 0 contains the head page. Maybe it's ok to map it RO temporarily until
> the newly allocated hugetlb page is returned.

We can do that. I don't understand how this could elide BBM, though.
After the above, we would still need to:
3. remap entry 0 from RO to RW, mapping the `struct page` page that will
   be shared with entries 1-7;
4. remap entries 1-7 from their respective `struct page` pages to that of
   entry 0, while they remain RO.

> If we could get the above to work, it would be a lot simpler than
> thinking of stop_machine() or other locks to wait for such remapping.

Steps 3/4 would not require BBM somehow?

> > To do de-HVO:
> > 1. for every 8 PTEs:
> >    1a. we allocate 7 order-0 pages;
> >    1b. we remap PTEs #1-7 *RW* to those pages, respectively.
>
> Similar problem in 1b, changing the output address. Here we could force
> the content to be the same

I don't follow the "content to be the same" part. After HVO, we have:

  Entry 0 -> `struct page` page A, RW
  Entry 1 -> `struct page` page A, RO
  ...
  Entry 7 -> `struct page` page A, RO

To de-HVO, we need to make them:

  Entry 0 -> `struct page` page A, RW
  Entry 1 -> `struct page` page B, RW
  ...
  Entry 7 -> `struct page` page H, RW

I assume "the same content" means PTE_0 == PTE_1/.../7?

> and remap PTEs 1-7 RO first to the new page, turn them RW afterwards,
> and it's all compliant with the architecture (even without FEAT_BBM).

It'd be great if we could do that, though I don't fully understand it at
the moment.