From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 14 Nov 2025 14:58:47 -0800
To: 
mm-commits@vger.kernel.org, xiubli@redhat.com, visitorckw@gmail.com,
 tytso@mit.edu, Slava.Dubeyko@ibm.com, sagi@grimberg.me, kbusch@kernel.org,
 jaegeuk@kernel.org, idryomov@gmail.com, home7438072@gmail.com, hch@lst.de,
 ebiggers@kernel.org, david.laight.linux@gmail.com, axboe@kernel.dk,
 409411716@gms.tku.edu.tw, akpm@linux-foundation.org
From: Andrew Morton
Subject: + lib-base64-rework-encode-decode-for-speed-and-stricter-validation.patch added to mm-nonmm-unstable branch
Message-Id: <20251114225847.BC5A8C4CEF8@smtp.kernel.org>
Precedence: bulk
X-Mailing-List: mm-commits@vger.kernel.org

The patch titled
     Subject: lib/base64: rework encode/decode for speed and stricter validation
has been added to the -mm mm-nonmm-unstable branch.  Its filename is
     lib-base64-rework-encode-decode-for-speed-and-stricter-validation.patch

This patch will shortly appear at
  https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/lib-base64-rework-encode-decode-for-speed-and-stricter-validation.patch

This patch will later appear in the mm-nonmm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next via the mm-everything branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated
there every 2-3 working days

------------------------------------------------------
From: Guan-Chun Wu <409411716@gms.tku.edu.tw>
Subject: lib/base64: rework encode/decode for speed and stricter validation
Date: Fri, 14 Nov 2025 14:01:32 +0800

The old base64 implementation relied on a bit-accumulator loop, which was
slow for larger inputs and too permissive in
validation.  It would accept extra '=', missing '=', or even '=' appearing
in the middle of the input, allowing malformed strings to pass.

This patch reworks the internals to improve performance and enforce
stricter validation.

Changes:

- Encoder:
  * Process input in 3-byte blocks, mapping 24 bits into four 6-bit
    symbols, avoiding bit-by-bit shifting and reducing loop iterations.
  * Handle the final 1-2 leftover bytes explicitly and emit '=' only
    when requested.

- Decoder:
  * Based on the reverse lookup tables from the previous patch, decode
    input in 4-character groups.
  * Each group is looked up directly, converted into numeric values, and
    combined into 3 output bytes.
  * Explicitly handle padded and unpadded forms:
    - With padding: input length must be a multiple of 4, and '=' is
      allowed only in the last two positions.  Reject stray or early '='.
    - Without padding: validate tail lengths (2 or 3 chars) and require
      unused low bits to be zero.
  * Removed the bit-accumulator style loop to reduce loop iterations.

Performance (x86_64, Intel Core i7-10700 @ 2.90GHz, avg over 1000 runs,
KUnit):

Encode:
  64B:  ~90ns   -> ~32ns  (~2.8x)
  1KB:  ~1332ns -> ~510ns (~2.6x)

Decode:
  64B:  ~1530ns  -> ~35ns  (~43.7x)
  1KB:  ~27726ns -> ~530ns (~52.3x)

Link: https://lkml.kernel.org/r/20251114060132.89279-1-409411716@gms.tku.edu.tw
Co-developed-by: Kuan-Wei Chiu
Signed-off-by: Kuan-Wei Chiu
Co-developed-by: Yu-Sheng Huang
Signed-off-by: Yu-Sheng Huang
Signed-off-by: Guan-Chun Wu <409411716@gms.tku.edu.tw>
Reviewed-by: David Laight
Cc: Christoph Hellwig
Cc: Eric Biggers
Cc: Ilya Dryomov
Cc: Jaegeuk Kim
Cc: Jens Axboe
Cc: Keith Busch
Cc: Sagi Grimberg
Cc: "Theodore Y.
 Ts'o"
Cc: Viacheslav Dubeyko
Cc: Xiubo Li
Signed-off-by: Andrew Morton
---

 lib/base64.c |  109 ++++++++++++++++++++++++++++++------------------
 1 file changed, 68 insertions(+), 41 deletions(-)

--- a/lib/base64.c~lib-base64-rework-encode-decode-for-speed-and-stricter-validation
+++ a/lib/base64.c
@@ -79,28 +79,38 @@ static const s8 base64_rev_maps[][256] =
 int base64_encode(const u8 *src, int srclen, char *dst, bool padding,
 		  enum base64_variant variant)
 {
 	u32 ac = 0;
-	int bits = 0;
-	int i;
 	char *cp = dst;
 	const char *base64_table = base64_tables[variant];
 
-	for (i = 0; i < srclen; i++) {
-		ac = (ac << 8) | src[i];
-		bits += 8;
-		do {
-			bits -= 6;
-			*cp++ = base64_table[(ac >> bits) & 0x3f];
-		} while (bits >= 6);
-	}
-	if (bits) {
-		*cp++ = base64_table[(ac << (6 - bits)) & 0x3f];
-		bits -= 6;
+	while (srclen >= 3) {
+		ac = (u32)src[0] << 16 | (u32)src[1] << 8 | (u32)src[2];
+		*cp++ = base64_table[ac >> 18];
+		*cp++ = base64_table[(ac >> 12) & 0x3f];
+		*cp++ = base64_table[(ac >> 6) & 0x3f];
+		*cp++ = base64_table[ac & 0x3f];
+
+		src += 3;
+		srclen -= 3;
 	}
-	if (padding) {
-		while (bits < 0) {
+
+	switch (srclen) {
+	case 2:
+		ac = (u32)src[0] << 16 | (u32)src[1] << 8;
+		*cp++ = base64_table[ac >> 18];
+		*cp++ = base64_table[(ac >> 12) & 0x3f];
+		*cp++ = base64_table[(ac >> 6) & 0x3f];
+		if (padding)
+			*cp++ = '=';
+		break;
+	case 1:
+		ac = (u32)src[0] << 16;
+		*cp++ = base64_table[ac >> 18];
+		*cp++ = base64_table[(ac >> 12) & 0x3f];
+		if (padding) {
+			*cp++ = '=';
 			*cp++ = '=';
-			bits += 2;
 		}
+		break;
 	}
 	return cp - dst;
 }
@@ -116,41 +126,58 @@ EXPORT_SYMBOL_GPL(base64_encode);
  *
  * Decodes a string using the selected Base64 variant.
  *
- * This implementation hasn't been optimized for performance.
- *
  * Return: the length of the resulting decoded binary data in bytes,
  * or -1 if the string isn't a valid Base64 string.
  */
 int base64_decode(const char *src, int srclen, u8 *dst, bool padding,
 		  enum base64_variant variant)
 {
-	u32 ac = 0;
-	int bits = 0;
-	int i;
 	u8 *bp = dst;
-	s8 ch;
+	s8 input[4];
+	s32 val;
+	const u8 *s = (const u8 *)src;
+	const s8 *base64_rev_tables = base64_rev_maps[variant];
 
-	for (i = 0; i < srclen; i++) {
-		if (padding) {
-			if (src[i] == '=') {
-				ac = (ac << 6);
-				bits += 6;
-				if (bits >= 8)
-					bits -= 8;
-				continue;
-			}
-		}
-		ch = base64_rev_maps[variant][(u8)src[i]];
-		if (ch == -1)
-			return -1;
-		ac = (ac << 6) | ch;
-		bits += 6;
-		if (bits >= 8) {
-			bits -= 8;
-			*bp++ = (u8)(ac >> bits);
+	while (srclen >= 4) {
+		input[0] = base64_rev_tables[s[0]];
+		input[1] = base64_rev_tables[s[1]];
+		input[2] = base64_rev_tables[s[2]];
+		input[3] = base64_rev_tables[s[3]];
+
+		val = input[0] << 18 | input[1] << 12 | input[2] << 6 | input[3];
+
+		if (unlikely(val < 0)) {
+			if (!padding || srclen != 4 || s[3] != '=')
+				return -1;
+			padding = 0;
+			srclen = s[2] == '=' ? 2 : 3;
+			break;
 		}
+
+		*bp++ = val >> 16;
+		*bp++ = val >> 8;
+		*bp++ = val;
+
+		s += 4;
+		srclen -= 4;
 	}
-	if (ac & ((1 << bits) - 1))
+
+	if (likely(!srclen))
+		return bp - dst;
+	if (padding || srclen == 1)
 		return -1;
+
+	val = (base64_rev_tables[s[0]] << 12) | (base64_rev_tables[s[1]] << 6);
+	*bp++ = val >> 10;
+
+	if (srclen == 2) {
+		if (val & 0x800003ff)
+			return -1;
+	} else {
+		val |= base64_rev_tables[s[2]];
+		if (val & 0x80000003)
+			return -1;
+		*bp++ = val >> 2;
+	}
 	return bp - dst;
 }
 EXPORT_SYMBOL_GPL(base64_decode);
_

Patches currently in -mm which might be from 409411716@gms.tku.edu.tw are

lib-base64-rework-encode-decode-for-speed-and-stricter-validation.patch
lib-add-kunit-tests-for-base64-encoding-decoding.patch
fscrypt-replace-local-base64url-helpers-with-lib-base64.patch
ceph-replace-local-base64-helpers-with-lib-base64.patch