From mboxrd@z Thu Jan 1 00:00:00 1970
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton
Cc: Catalin Marinas, Will Deacon, Ard Biesheuvel, Huacai Chen, WANG Xuerui,
	Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	"Christophe Leroy (CS GROUP)", Paul Walmsley, Palmer Dabbelt,
	Albert Ou, Alexandre Ghiti, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev, Christian Borntraeger, Sven Schnelle,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	x86@kernel.org, "H. Peter Anvin", Herbert Xu, Dan Williams,
	Chris Mason, David Sterba, Arnd Bergmann, Song Liu, Yu Kuai, Li Nan,
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	loongarch@lists.linux.dev, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-crypto@vger.kernel.org, linux-btrfs@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 17/17] raid6_kunit: randomize buffer alignment
Date: Tue, 24 Mar 2026 07:40:52 +0100
Message-ID: <20260324064115.3217136-18-hch@lst.de>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20260324064115.3217136-1-hch@lst.de>
References: <20260324064115.3217136-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add random alignment to the buffers to test the case where they are not
page aligned, and move the buffers to the end of the allocation so that
they sit right next to the vmalloc guard page.  This does not cover the
recovery buffers, as recovery requires page alignment.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 lib/raid/raid6/tests/raid6_kunit.c | 41 +++++++++++++++++++++++++-----
 1 file changed, 35 insertions(+), 6 deletions(-)

diff --git a/lib/raid/raid6/tests/raid6_kunit.c b/lib/raid/raid6/tests/raid6_kunit.c
index a0a473643e91..d2fd4a9b74d4 100644
--- a/lib/raid/raid6/tests/raid6_kunit.c
+++ b/lib/raid/raid6/tests/raid6_kunit.c
@@ -20,6 +20,7 @@ MODULE_IMPORT_NS("EXPORTED_FOR_KUNIT_TESTING");
 static struct rnd_state rng;
 
 static void *test_buffers[RAID6_KUNIT_MAX_BUFFERS];
+static void *aligned_buffers[RAID6_KUNIT_MAX_BUFFERS];
 static void *test_recov_buffers[RAID6_KUNIT_MAX_FAILURES];
 static size_t test_buflen;
 
@@ -43,6 +44,14 @@ static unsigned int random_length(unsigned int max_length)
 	return (rand32() % (max_length + 1)) & ~511;
 }
 
+/* Generate a random alignment that is a multiple of 64. */
+static unsigned int random_alignment(unsigned int max_alignment)
+{
+	if (max_alignment == 0)
+		return 0;
+	return (rand32() % (max_alignment + 1)) & ~63;
+}
+
 static void makedata(int start, int stop)
 {
 	int i;
@@ -73,7 +82,7 @@ static void test_recover_one(struct kunit *test, unsigned int nr_buffers,
 	for (i = 0; i < RAID6_KUNIT_MAX_FAILURES; i++)
 		memset(test_recov_buffers[i], 0xf0, test_buflen);
 
-	memcpy(dataptrs, test_buffers, sizeof(dataptrs));
+	memcpy(dataptrs, aligned_buffers, sizeof(dataptrs));
 	dataptrs[faila] = test_recov_buffers[0];
 	dataptrs[failb] = test_recov_buffers[1];
 
@@ -95,13 +104,13 @@ static void test_recover_one(struct kunit *test, unsigned int nr_buffers,
 		ta->recov->data2(nr_buffers, len, faila, failb, dataptrs);
 	}
 
-	KUNIT_EXPECT_MEMEQ_MSG(test, test_buffers[faila], test_recov_buffers[0],
+	KUNIT_EXPECT_MEMEQ_MSG(test, aligned_buffers[faila], dataptrs[faila],
 			len,
 			"faila miscompared: %3d[%c] buffers %u len %u (failb=%3d[%c])\n",
 			faila, member_type(nr_buffers, faila), nr_buffers, len,
 			failb, member_type(nr_buffers, failb));
-	KUNIT_EXPECT_MEMEQ_MSG(test, test_buffers[failb], test_recov_buffers[1],
+	KUNIT_EXPECT_MEMEQ_MSG(test, aligned_buffers[failb], dataptrs[failb],
 			len,
 			"failb miscompared: %3d[%c] buffers %u len %u (faila=%3d[%c])\n",
 			failb, member_type(nr_buffers, failb),
@@ -140,9 +149,9 @@ static void test_rmw_one(struct kunit *test, unsigned int nr_buffers,
 {
 	const struct test_args *ta = test->param_value;
 
-	ta->gen->xor_syndrome(nr_buffers, p1, p2, len, test_buffers);
+	ta->gen->xor_syndrome(nr_buffers, p1, p2, len, aligned_buffers);
 	makedata(p1, p2);
-	ta->gen->xor_syndrome(nr_buffers, p1, p2, len, test_buffers);
+	ta->gen->xor_syndrome(nr_buffers, p1, p2, len, aligned_buffers);
 
 	test_recover(test, nr_buffers, len);
 }
@@ -168,13 +177,33 @@ static void raid6_test_one(struct kunit *test)
 	unsigned int nr_buffers =
 		(rand32() % (RAID6_KUNIT_MAX_BUFFERS - 2)) + 3;
 	unsigned int len = random_length(RAID6_KUNIT_MAX_BYTES);
+	unsigned int max_alignment;
+	int i;
 
 	/* Nuke syndromes */
 	memset(test_buffers[nr_buffers - 2], 0xee, test_buflen);
 	memset(test_buffers[nr_buffers - 1], 0xee, test_buflen);
 
+	/*
+	 * If we're not using the entire buffer size, inject random alignment
+	 * into the buffers.
+	 */
+	max_alignment = RAID6_KUNIT_MAX_BYTES - len;
+	if (rand32() % 2 == 0) {
+		/* Use random alignments mod 64 */
+		for (i = 0; i < nr_buffers; i++)
+			aligned_buffers[i] = test_buffers[i] +
+					random_alignment(max_alignment);
+	} else {
+		/* Go up to the guard page, to catch buffer overreads */
+		unsigned int align = test_buflen - len;
+
+		for (i = 0; i < nr_buffers; i++)
+			aligned_buffers[i] = test_buffers[i] + align;
+	}
+
 	/* Generate assumed good syndrome */
-	ta->gen->gen_syndrome(nr_buffers, len, test_buffers);
+	ta->gen->gen_syndrome(nr_buffers, len, aligned_buffers);
 
 	test_recover(test, nr_buffers, len);
-- 
2.47.3