From: Christoph Hellwig
To: Andrew Morton
Cc: Catalin Marinas, Will Deacon, Ard Biesheuvel, Huacai Chen, WANG Xuerui, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin, "Christophe Leroy (CS GROUP)", Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Christian Borntraeger, Sven Schnelle, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", Herbert Xu, Dan Williams, Chris Mason, David Sterba, Arnd Bergmann, Song Liu, Yu Kuai, Li Nan, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, loongarch@lists.linux.dev, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-crypto@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-arch@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 19/19] raid6_kunit: randomize buffer alignment
Date: Tue, 12 May 2026 07:20:59 +0200
Message-ID: <20260512052230.2947683-20-hch@lst.de>
In-Reply-To: <20260512052230.2947683-1-hch@lst.de>
References: <20260512052230.2947683-1-hch@lst.de>

Add random offsets to the test buffers to exercise the case where they
are not page aligned, and move the buffers to the end of their
allocation so that they sit directly in front of the vmalloc guard
page, which catches buffer overreads.  The recovery buffers are
excluded from this, as recovery requires page alignment.
Signed-off-by: Christoph Hellwig
---
 lib/raid/raid6/tests/raid6_kunit.c | 41 +++++++++++++++++++++++++-----
 1 file changed, 35 insertions(+), 6 deletions(-)

diff --git a/lib/raid/raid6/tests/raid6_kunit.c b/lib/raid/raid6/tests/raid6_kunit.c
index d6ac777dcaee..7b45c7be36fc 100644
--- a/lib/raid/raid6/tests/raid6_kunit.c
+++ b/lib/raid/raid6/tests/raid6_kunit.c
@@ -21,6 +21,7 @@ MODULE_IMPORT_NS("EXPORTED_FOR_KUNIT_TESTING");
 
 static struct rnd_state rng;
 static void *test_buffers[RAID6_KUNIT_MAX_BUFFERS];
+static void *aligned_buffers[RAID6_KUNIT_MAX_BUFFERS];
 static void *test_recov_buffers[RAID6_KUNIT_MAX_FAILURES];
 static size_t test_buflen;
 
@@ -50,6 +51,14 @@ static unsigned int random_nr_buffers(void)
 		RAID6_MIN_DISKS;
 }
 
+/* Generate a random alignment that is a multiple of 64. */
+static unsigned int random_alignment(unsigned int max_alignment)
+{
+	if (max_alignment == 0)
+		return 0;
+	return (rand32() % (max_alignment + 1)) & ~63;
+}
+
 static void makedata(int start, int stop)
 {
 	int i;
@@ -80,7 +89,7 @@ static void test_recover_one(struct kunit *test, unsigned int nr_buffers,
 	for (i = 0; i < RAID6_KUNIT_MAX_FAILURES; i++)
 		memset(test_recov_buffers[i], 0xf0, test_buflen);
 
-	memcpy(dataptrs, test_buffers, sizeof(dataptrs));
+	memcpy(dataptrs, aligned_buffers, sizeof(dataptrs));
 	dataptrs[faila] = test_recov_buffers[0];
 	dataptrs[failb] = test_recov_buffers[1];
 
@@ -102,13 +111,13 @@ static void test_recover_one(struct kunit *test, unsigned int nr_buffers,
 		ta->recov->data2(nr_buffers, len, faila, failb, dataptrs);
 	}
 
-	KUNIT_EXPECT_MEMEQ_MSG(test, test_buffers[faila], test_recov_buffers[0],
+	KUNIT_EXPECT_MEMEQ_MSG(test, aligned_buffers[faila], dataptrs[faila],
 			len,
 			"faila miscompared: %3d[%c] buffers %u len %u (failb=%3d[%c])\n",
 			faila, member_type(nr_buffers, faila), nr_buffers, len,
 			failb, member_type(nr_buffers, failb));
-	KUNIT_EXPECT_MEMEQ_MSG(test, test_buffers[failb], test_recov_buffers[1],
+	KUNIT_EXPECT_MEMEQ_MSG(test, aligned_buffers[failb], dataptrs[failb],
 			len,
 			"failb miscompared: %3d[%c] buffers %u len %u (faila=%3d[%c])\n",
 			failb, member_type(nr_buffers, failb),
@@ -152,9 +161,9 @@ static void test_rmw_one(struct kunit *test, unsigned int nr_buffers,
 {
 	const struct test_args *ta = test->param_value;
 
-	ta->gen->xor_syndrome(nr_buffers, p1, p2, len, test_buffers);
+	ta->gen->xor_syndrome(nr_buffers, p1, p2, len, aligned_buffers);
 	makedata(p1, p2);
-	ta->gen->xor_syndrome(nr_buffers, p1, p2, len, test_buffers);
+	ta->gen->xor_syndrome(nr_buffers, p1, p2, len, aligned_buffers);
 	test_recover(test, nr_buffers, len);
 }
 
@@ -178,13 +187,33 @@ static void raid6_test_one(struct kunit *test)
 	const struct test_args *ta = test->param_value;
 	unsigned int nr_buffers = random_nr_buffers();
 	unsigned int len = random_length(RAID6_KUNIT_MAX_BYTES);
+	unsigned int max_alignment;
+	int i;
 
 	/* Nuke syndromes */
 	memset(test_buffers[nr_buffers - 2], 0xee, test_buflen);
 	memset(test_buffers[nr_buffers - 1], 0xee, test_buflen);
 
+	/*
+	 * If we're not using the entire buffer size, inject a random
+	 * alignment into the buffers.
+	 */
+	max_alignment = RAID6_KUNIT_MAX_BYTES - len;
+	if (rand32() % 2 == 0) {
+		/* Use random alignments that are multiples of 64 */
+		for (i = 0; i < nr_buffers; i++)
+			aligned_buffers[i] = test_buffers[i] +
+				random_alignment(max_alignment);
+	} else {
+		/* Go up to the guard page, to catch buffer overreads */
+		unsigned int align = test_buflen - len;
+
+		for (i = 0; i < nr_buffers; i++)
+			aligned_buffers[i] = test_buffers[i] + align;
+	}
+
 	/* Generate assumed good syndrome */
-	ta->gen->gen_syndrome(nr_buffers, len, test_buffers);
+	ta->gen->gen_syndrome(nr_buffers, len, aligned_buffers);
 	test_recover(test, nr_buffers, len);
-- 
2.53.0