From: Christoph Hellwig
To: Andrew Morton
Cc: Catalin Marinas, Will Deacon, Ard Biesheuvel, Huacai Chen,
    WANG Xuerui, Madhavan Srinivasan, Michael Ellerman,
    Nicholas Piggin, "Christophe Leroy (CS GROUP)", Paul Walmsley,
    Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Heiko Carstens,
    Vasily Gorbik, Alexander Gordeev, Christian Borntraeger,
    Sven Schnelle, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen, x86@kernel.org, "H. Peter Anvin", Herbert Xu,
    Dan Williams, Chris Mason, David Sterba, Arnd Bergmann,
    Song Liu, Yu Kuai, Li Nan, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, loongarch@lists.linux.dev,
    linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
    linux-s390@vger.kernel.org, linux-crypto@vger.kernel.org,
    linux-btrfs@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-raid@vger.kernel.org
Subject: [PATCH 19/19] raid6_kunit: randomize buffer alignment
Date: Tue, 12 May 2026 07:20:59 +0200
Message-ID: <20260512052230.2947683-20-hch@lst.de>
In-Reply-To: <20260512052230.2947683-1-hch@lst.de>
References: <20260512052230.2947683-1-hch@lst.de>

Add random alignment to the test buffers to cover the case where they
are not page aligned, and alternatively move the buffers to the end of
their allocation so that they sit directly in front of the vmalloc
guard page.  The recovery buffers are excluded from this, as recovery
requires page alignment.
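As a rough sketch of what the placement ends up doing (illustrative
only, not part of the patch: place_data() is a hypothetical helper,
while rand32(), test_buflen and RAID6_KUNIT_MAX_BYTES refer to the
existing test code):

	/*
	 * Each test buffer is test_buflen bytes of vmalloc memory, so
	 * the page following it is an unmapped guard page.  Either
	 * offset the data by a random multiple of 64 within the slack
	 * left by a shorter length, or push it to the very end of the
	 * allocation so that any overread of [p, p + len) faults on
	 * the guard page.
	 */
	static void *place_data(void *buf, size_t len, bool at_end)
	{
		if (at_end)
			return buf + (test_buflen - len);
		return buf + ((rand32() %
				(RAID6_KUNIT_MAX_BYTES - len + 1)) & ~63);
	}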
Signed-off-by: Christoph Hellwig
---
 lib/raid/raid6/tests/raid6_kunit.c | 41 +++++++++++++++++++++++++-----
 1 file changed, 35 insertions(+), 6 deletions(-)

diff --git a/lib/raid/raid6/tests/raid6_kunit.c b/lib/raid/raid6/tests/raid6_kunit.c
index d6ac777dcaee..7b45c7be36fc 100644
--- a/lib/raid/raid6/tests/raid6_kunit.c
+++ b/lib/raid/raid6/tests/raid6_kunit.c
@@ -21,6 +21,7 @@ MODULE_IMPORT_NS("EXPORTED_FOR_KUNIT_TESTING");
 static struct rnd_state rng;
 
 static void *test_buffers[RAID6_KUNIT_MAX_BUFFERS];
+static void *aligned_buffers[RAID6_KUNIT_MAX_BUFFERS];
 static void *test_recov_buffers[RAID6_KUNIT_MAX_FAILURES];
 static size_t test_buflen;
 
@@ -50,6 +51,14 @@ static unsigned int random_nr_buffers(void)
 		RAID6_MIN_DISKS;
 }
 
+/* Generate a random alignment that is a multiple of 64. */
+static unsigned int random_alignment(unsigned int max_alignment)
+{
+	if (max_alignment == 0)
+		return 0;
+	return (rand32() % (max_alignment + 1)) & ~63;
+}
+
 static void makedata(int start, int stop)
 {
 	int i;
@@ -80,7 +89,7 @@ static void test_recover_one(struct kunit *test, unsigned int nr_buffers,
 	for (i = 0; i < RAID6_KUNIT_MAX_FAILURES; i++)
 		memset(test_recov_buffers[i], 0xf0, test_buflen);
 
-	memcpy(dataptrs, test_buffers, sizeof(dataptrs));
+	memcpy(dataptrs, aligned_buffers, sizeof(dataptrs));
 	dataptrs[faila] = test_recov_buffers[0];
 	dataptrs[failb] = test_recov_buffers[1];
 
@@ -102,13 +111,13 @@ static void test_recover_one(struct kunit *test, unsigned int nr_buffers,
 		ta->recov->data2(nr_buffers, len, faila, failb, dataptrs);
 	}
 
-	KUNIT_EXPECT_MEMEQ_MSG(test, test_buffers[faila], test_recov_buffers[0],
+	KUNIT_EXPECT_MEMEQ_MSG(test, aligned_buffers[faila], dataptrs[faila],
 			len,
 			"faila miscompared: %3d[%c] buffers %u len %u (failb=%3d[%c])\n",
 			faila, member_type(nr_buffers, faila), nr_buffers, len,
 			failb, member_type(nr_buffers, failb));
-	KUNIT_EXPECT_MEMEQ_MSG(test, test_buffers[failb], test_recov_buffers[1],
+	KUNIT_EXPECT_MEMEQ_MSG(test, aligned_buffers[failb], dataptrs[failb],
 			len,
 			"failb miscompared: %3d[%c] buffers %u len %u (faila=%3d[%c])\n",
 			failb, member_type(nr_buffers, failb),
 			nr_buffers, len, faila, member_type(nr_buffers, faila));
@@ -152,9 +161,9 @@ static void test_rmw_one(struct kunit *test, unsigned int nr_buffers,
 {
 	const struct test_args *ta = test->param_value;
 
-	ta->gen->xor_syndrome(nr_buffers, p1, p2, len, test_buffers);
+	ta->gen->xor_syndrome(nr_buffers, p1, p2, len, aligned_buffers);
 	makedata(p1, p2);
-	ta->gen->xor_syndrome(nr_buffers, p1, p2, len, test_buffers);
+	ta->gen->xor_syndrome(nr_buffers, p1, p2, len, aligned_buffers);
 
 	test_recover(test, nr_buffers, len);
 }
@@ -178,13 +187,33 @@ static void raid6_test_one(struct kunit *test)
 {
 	const struct test_args *ta = test->param_value;
 	unsigned int nr_buffers = random_nr_buffers();
 	unsigned int len = random_length(RAID6_KUNIT_MAX_BYTES);
+	unsigned int max_alignment;
+	int i;
 
 	/* Nuke syndromes */
 	memset(test_buffers[nr_buffers - 2], 0xee, test_buflen);
 	memset(test_buffers[nr_buffers - 1], 0xee, test_buflen);
 
+	/*
+	 * If we're not using the entire buffer size, inject random alignment
+	 * into the buffers.
+	 */
+	max_alignment = RAID6_KUNIT_MAX_BYTES - len;
+	if (rand32() % 2 == 0) {
+		/* Use random alignments that are a multiple of 64 */
+		for (i = 0; i < nr_buffers; i++)
+			aligned_buffers[i] = test_buffers[i] +
+					random_alignment(max_alignment);
+	} else {
+		/* Go up to the guard page, to catch buffer overreads */
+		unsigned int align = test_buflen - len;
+
+		for (i = 0; i < nr_buffers; i++)
+			aligned_buffers[i] = test_buffers[i] + align;
+	}
+
 	/* Generate assumed good syndrome */
-	ta->gen->gen_syndrome(nr_buffers, len, test_buffers);
+	ta->gen->gen_syndrome(nr_buffers, len, aligned_buffers);
 
 	test_recover(test, nr_buffers, len);

-- 
2.53.0