Date: Fri, 3 Apr 2026 19:08:40 +0300
From: Mike Rapoport
To: "Lorenzo Stoakes (Oracle)"
Cc: Andrew Morton, "Liam R. Howlett", Vlastimil Babka, Jann Horn,
	Pedro Falcato, David Hildenbrand, Suren Baghdasaryan, Michal Hocko,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] tools/testing/selftests: add merge test for partial msealed range
References: <20260331073627.50010-1-ljs@kernel.org>
In-Reply-To: <20260331073627.50010-1-ljs@kernel.org>

On Tue, Mar 31, 2026 at 08:36:27AM +0100, Lorenzo Stoakes (Oracle) wrote:
> Commit 2697dd8ae721 ("mm/mseal: update VMA end correctly on merge") fixed
> an issue in the loop which iterates through VMAs applying mseal, which was
> triggered by mseal()'ing a range of VMAs where the second was mseal()'d and
> the first mergeable with it, once mseal()'d.
>
> Add a regression test to assert that this behaviour is correct. We place it
> in the merge selftests as this is strictly an issue with merging (via a
> vma_modify() invocation).
>
> It also asserts that mseal()'d ranges are correctly merged, as you'd expect.
>
> The test is implemented such that it is skipped if mseal() is not
> available on the system.
>
> Signed-off-by: Lorenzo Stoakes (Oracle)
> ---
> v2:
> * Added tools/ based header so __NR_mseal should always be available.
> * However, for completeness, also check to see if defined, and assume
>   ENOSYS if not.
> * Thanks to Mike for reporting issues in his build on this test!
Unfortunately it's not the only one :)

The header inclusion actually causes handle_uprobe_upon_merged_vma() to
fail because of a mismatch between the definition of __NR_perf_event_open
in the newly included header and the correct definition from
/usr/include/unistd.h.

With the patch below handle_uprobe_upon_merged_vma() passes again, and I
think the include is not needed for the mseal test either.

diff --git a/tools/testing/selftests/mm/merge.c b/tools/testing/selftests/mm/merge.c
index efcb100fd865..519e5ac02db7 100644
--- a/tools/testing/selftests/mm/merge.c
+++ b/tools/testing/selftests/mm/merge.c
@@ -2,7 +2,6 @@

 #define _GNU_SOURCE
 #include "kselftest_harness.h"
-#include
 #include
 #include
 #include

> v1:
> https://lore.kernel.org/all/20260330135011.107036-1-ljs@kernel.org/
>
>  tools/testing/selftests/mm/merge.c | 89 ++++++++++++++++++++++++++++++
>  1 file changed, 89 insertions(+)
>
> diff --git a/tools/testing/selftests/mm/merge.c b/tools/testing/selftests/mm/merge.c
> index 10b686102b79..f73803b3679a 100644
> --- a/tools/testing/selftests/mm/merge.c
> +++ b/tools/testing/selftests/mm/merge.c
> @@ -2,6 +2,7 @@
>
>  #define _GNU_SOURCE
>  #include "kselftest_harness.h"
> +#include
>  #include
>  #include
>  #include
> @@ -48,6 +49,19 @@ static pid_t do_fork(struct procmap_fd *procmap)
>  	return 0;
>  }
>
> +#ifdef __NR_mseal
> +static int sys_mseal(void *ptr, size_t len, unsigned long flags)
> +{
> +	return syscall(__NR_mseal, (unsigned long)ptr, len, flags);
> +}
> +#else
> +static int sys_mseal(void *ptr, size_t len, unsigned long flags)
> +{
> +	errno = ENOSYS;
> +	return -1;
> +}
> +#endif
> +
>  FIXTURE_SETUP(merge)
>  {
>  	self->page_size = psize();
> @@ -1217,6 +1231,81 @@ TEST_F(merge, mremap_correct_placed_faulted)
>  	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr + 15 * page_size);
>  }
>
> +TEST_F(merge, merge_vmas_with_mseal)
> +{
> +	unsigned int page_size = self->page_size;
> +	struct procmap_fd *procmap = &self->procmap;
> +	char *ptr, *ptr2, *ptr3;
> +	/* We need our own as cannot munmap() once sealed. */
> +	char *carveout;
> +
> +	/* Invalid mseal() call to see if implemented. */
> +	ASSERT_EQ(sys_mseal(NULL, 0, ~0UL), -1);
> +	if (errno == ENOSYS)
> +		SKIP(return, "mseal not supported, skipping.");
> +
> +	/* Map carveout. */
> +	carveout = mmap(NULL, 17 * page_size, PROT_NONE,
> +			MAP_PRIVATE | MAP_ANON, -1, 0);
> +	ASSERT_NE(carveout, MAP_FAILED);
> +
> +	/*
> +	 * Map 3 separate VMAs:
> +	 *
> +	 * |-----------|-----------|-----------|
> +	 * |    RW     |    RWE    |    RO     |
> +	 * |-----------|-----------|-----------|
> +	 * ptr         ptr2        ptr3
> +	 */
> +	ptr = mmap(&carveout[page_size], 5 * page_size, PROT_READ | PROT_WRITE,
> +		   MAP_ANON | MAP_PRIVATE | MAP_FIXED, -1, 0);
> +	ASSERT_NE(ptr, MAP_FAILED);
> +	ptr2 = mmap(&carveout[page_size * 6], 5 * page_size,
> +		    PROT_READ | PROT_WRITE | PROT_EXEC,
> +		    MAP_ANON | MAP_PRIVATE | MAP_FIXED, -1, 0);
> +	ASSERT_NE(ptr2, MAP_FAILED);
> +	ptr3 = mmap(&carveout[page_size * 11], 5 * page_size, PROT_READ,
> +		    MAP_ANON | MAP_PRIVATE | MAP_FIXED, -1, 0);
> +	ASSERT_NE(ptr3, MAP_FAILED);
> +
> +	/*
> +	 * mseal the second VMA:
> +	 *
> +	 * |-----------|-----------|-----------|
> +	 * |    RW     |   RWES    |    RO     |
> +	 * |-----------|-----------|-----------|
> +	 * ptr         ptr2        ptr3
> +	 */
> +	ASSERT_EQ(sys_mseal(ptr2, 5 * page_size, 0), 0);
> +
> +	/* Make first VMA mergeable upon mseal. */
> +	ASSERT_EQ(mprotect(ptr, 5 * page_size,
> +			   PROT_READ | PROT_WRITE | PROT_EXEC), 0);
> +	/*
> +	 * At this point we have:
> +	 *
> +	 * |-----------|-----------|-----------|
> +	 * |    RWE    |   RWES    |    RO     |
> +	 * |-----------|-----------|-----------|
> +	 * ptr         ptr2        ptr3
> +	 *
> +	 * Now mseal all of the VMAs.
> +	 */
> +	ASSERT_EQ(sys_mseal(ptr, 15 * page_size, 0), 0);
> +
> +	/*
> +	 * We should end up with:
> +	 *
> +	 * |-----------------------|-----------|
> +	 * |         RWES          |    ROS    |
> +	 * |-----------------------|-----------|
> +	 * ptr                     ptr3
> +	 */
> +	ASSERT_TRUE(find_vma_procmap(procmap, ptr));
> +	ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr);
> +	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr + 10 * page_size);
> +}
> +
>  TEST_F(merge_with_fork, mremap_faulted_to_unfaulted_prev)
>  {
>  	struct procmap_fd *procmap = &self->procmap;
> --
> 2.53.0
>

--
Sincerely yours,
Mike.