+ lib-base64-rework-encode-decode-for-speed-and-stricter-validation.patch added to mm-nonmm-unstable branch
From: Andrew Morton @ 2025-11-01 4:10 UTC
To: mm-commits, xiubli, visitorckw, tytso, sagi, kbusch, jaegeuk,
idryomov, home7438072, hch, ebiggers, axboe, 409411716, akpm
The patch titled
Subject: lib/base64: rework encode/decode for speed and stricter validation
has been added to the -mm mm-nonmm-unstable branch. Its filename is
lib-base64-rework-encode-decode-for-speed-and-stricter-validation.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/lib-base64-rework-encode-decode-for-speed-and-stricter-validation.patch
This patch will later appear in the mm-nonmm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Guan-Chun Wu <409411716@gms.tku.edu.tw>
Subject: lib/base64: rework encode/decode for speed and stricter validation
Date: Wed, 29 Oct 2025 18:21:00 +0800
The old base64 implementation relied on a bit-accumulator loop, which was
slow for larger inputs and too permissive in validation. It would accept
extra '=', missing '=', or even '=' appearing in the middle of the input,
allowing malformed strings to pass. This patch reworks the internals to
improve performance and enforce stricter validation.
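
For example (a sketch, assuming a BASE64_STD variant enumerator whose
actual name is defined by the earlier patch in this series), all of the
following were accepted by the old decoder with padding enabled and now
return -1:

	u8 buf[16];

	base64_decode("QQ", 2, buf, true, BASE64_STD);     /* missing "==" */
	base64_decode("QQ====", 6, buf, true, BASE64_STD); /* extra '=' */
	base64_decode("Q=Q=", 4, buf, true, BASE64_STD);   /* '=' mid-input */
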
Changes:

- Encoder:
  * Process input in 3-byte blocks, mapping 24 bits into four 6-bit
    symbols, avoiding bit-by-bit shifting and reducing loop iterations.
  * Handle the final 1-2 leftover bytes explicitly and emit '=' only
    when requested.

- Decoder:
  * Based on the reverse lookup tables from the previous patch, decode
    input in 4-character groups.
  * Each group is looked up directly, converted into numeric values, and
    combined into 3 output bytes.
  * Explicitly handle padded and unpadded forms:
    - With padding: input length must be a multiple of 4, and '=' is
      allowed only in the last two positions. Reject stray or early '='.
    - Without padding: validate tail lengths (2 or 3 chars) and require
      unused low bits to be zero (see the worked example after this
      list).
  * Remove the bit-accumulator-style loop to reduce loop iterations.
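
As a worked example of the unpadded-tail rule above: for the 2-char
tail "Zg" ('Z' -> 25, 'g' -> 32), val = 25 << 12 | 32 << 6 = 0x19800,
the output byte is val >> 10 = 0x66 ('f'), and the unused low bits
(val & 0x3ff) are all zero, so "Zg" decodes to "f". The tail "Zh"
('h' -> 33) leaves bit 6 set and is rejected.
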
Performance (x86_64, Intel Core i7-10700 @ 2.90GHz, avg over 1000 runs,
KUnit):

  Encode:
    64B:  ~90ns    -> ~32ns   (~2.8x)
    1KB:  ~1332ns  -> ~510ns  (~2.6x)

  Decode:
    64B:  ~1530ns  -> ~35ns   (~43.7x)
    1KB:  ~27726ns -> ~530ns  (~52.3x)
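
The numbers above come from a KUnit benchmark; a minimal sketch of the
timing loop (the shape here is an assumption, not the actual test added
later in this series; BASE64_STD as before is a stand-in name) could
look like:

	u8 in[1024];
	char out[1368];	/* 4 * DIV_ROUND_UP(1024, 3) output chars */
	u64 t0, avg_ns;
	int i;

	t0 = ktime_get_ns();
	for (i = 0; i < 1000; i++)
		base64_encode(in, sizeof(in), out, true, BASE64_STD);
	avg_ns = (ktime_get_ns() - t0) / 1000;
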
Link: https://lkml.kernel.org/r/20251029102100.543446-1-409411716@gms.tku.edu.tw
Co-developed-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Co-developed-by: Yu-Sheng Huang <home7438072@gmail.com>
Signed-off-by: Yu-Sheng Huang <home7438072@gmail.com>
Signed-off-by: Guan-Chun Wu <409411716@gms.tku.edu.tw>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: Ilya Dryomov <idryomov@gmail.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Keith Busch <kbusch@kernel.org>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Ted Ts'o <tytso@mit.edu>
Cc: Xiubo Li <xiubli@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
lib/base64.c | 110 ++++++++++++++++++++++++++++++-------------------
1 file changed, 69 insertions(+), 41 deletions(-)
--- a/lib/base64.c~lib-base64-rework-encode-decode-for-speed-and-stricter-validation
+++ a/lib/base64.c
@@ -51,28 +51,38 @@ static const s8 base64_rev_maps[][256] =
 int base64_encode(const u8 *src, int srclen, char *dst, bool padding, enum base64_variant variant)
 {
 	u32 ac = 0;
-	int bits = 0;
-	int i;
 	char *cp = dst;
 	const char *base64_table = base64_tables[variant];
 
-	for (i = 0; i < srclen; i++) {
-		ac = (ac << 8) | src[i];
-		bits += 8;
-		do {
-			bits -= 6;
-			*cp++ = base64_table[(ac >> bits) & 0x3f];
-		} while (bits >= 6);
-	}
-	if (bits) {
-		*cp++ = base64_table[(ac << (6 - bits)) & 0x3f];
-		bits -= 6;
+	while (srclen >= 3) {
+		ac = (u32)src[0] << 16 | (u32)src[1] << 8 | (u32)src[2];
+		*cp++ = base64_table[ac >> 18];
+		*cp++ = base64_table[(ac >> 12) & 0x3f];
+		*cp++ = base64_table[(ac >> 6) & 0x3f];
+		*cp++ = base64_table[ac & 0x3f];
+
+		src += 3;
+		srclen -= 3;
 	}
-	if (padding) {
-		while (bits < 0) {
+
+	switch (srclen) {
+	case 2:
+		ac = (u32)src[0] << 16 | (u32)src[1] << 8;
+		*cp++ = base64_table[ac >> 18];
+		*cp++ = base64_table[(ac >> 12) & 0x3f];
+		*cp++ = base64_table[(ac >> 6) & 0x3f];
+		if (padding)
+			*cp++ = '=';
+		break;
+	case 1:
+		ac = (u32)src[0] << 16;
+		*cp++ = base64_table[ac >> 18];
+		*cp++ = base64_table[(ac >> 12) & 0x3f];
+		if (padding) {
+			*cp++ = '=';
 			*cp++ = '=';
-			bits += 2;
 		}
+		break;
 	}
 	return cp - dst;
 }
@@ -88,41 +98,59 @@ EXPORT_SYMBOL_GPL(base64_encode);
  *
  * Decodes a string using the selected Base64 variant.
  *
- * This implementation hasn't been optimized for performance.
- *
  * Return: the length of the resulting decoded binary data in bytes,
  * or -1 if the string isn't a valid Base64 string.
  */
 int base64_decode(const char *src, int srclen, u8 *dst, bool padding, enum base64_variant variant)
 {
-	u32 ac = 0;
-	int bits = 0;
-	int i;
 	u8 *bp = dst;
-	s8 ch;
+	s8 input[4];
+	s32 val;
+	const u8 *s = (const u8 *)src;
+	const s8 *base64_rev_tables = base64_rev_maps[variant];
 
-	for (i = 0; i < srclen; i++) {
-		if (padding) {
-			if (src[i] == '=') {
-				ac = (ac << 6);
-				bits += 6;
-				if (bits >= 8)
-					bits -= 8;
-				continue;
-			}
-		}
-		ch = base64_rev_maps[variant][(u8)src[i]];
-		if (ch == -1)
-			return -1;
-		ac = (ac << 6) | ch;
-		bits += 6;
-		if (bits >= 8) {
-			bits -= 8;
-			*bp++ = (u8)(ac >> bits);
+	while (srclen >= 4) {
+		input[0] = base64_rev_tables[s[0]];
+		input[1] = base64_rev_tables[s[1]];
+		input[2] = base64_rev_tables[s[2]];
+		input[3] = base64_rev_tables[s[3]];
+
+		val = input[0] << 18 | input[1] << 12 | input[2] << 6 | input[3];
+
+		if (unlikely(val < 0)) {
+			if (!padding || srclen != 4 || s[3] != '=')
+				return -1;
+			padding = 0;
+			srclen = s[2] == '=' ? 2 : 3;
+			break;
 		}
+
+		*bp++ = val >> 16;
+		*bp++ = val >> 8;
+		*bp++ = val;
+
+		s += 4;
+		srclen -= 4;
 	}
-	if (ac & ((1 << bits) - 1))
+
+	if (likely(!srclen))
+		return bp - dst;
+	if (padding || srclen == 1)
 		return -1;
+
+	val = (base64_rev_tables[s[0]] << 12) | (base64_rev_tables[s[1]] << 6);
+	*bp++ = val >> 10;
+
+	if (srclen == 2) {
+		if (val & 0x800003ff)
+			return -1;
+	} else {
+		val |= base64_rev_tables[s[2]];
+		if (val & 0x80000003)
+			return -1;
+		*bp++ = val >> 2;
+	}
 	return bp - dst;
 }
 EXPORT_SYMBOL_GPL(base64_decode);
+
_
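
A quick round-trip sketch of the reworked API (BASE64_STD again stands
in for the variant enumerator defined earlier in the series):

	char enc[4];
	u8 dec[3];
	int elen, dlen;

	elen = base64_encode((const u8 *)"foo", 3, enc, true, BASE64_STD);
	/* elen == 4; enc now holds "Zm9v" (no NUL terminator is written) */
	dlen = base64_decode(enc, elen, dec, true, BASE64_STD);
	/* dlen == 3; dec now holds "foo" */
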
Patches currently in -mm which might be from 409411716@gms.tku.edu.tw are
lib-base64-rework-encode-decode-for-speed-and-stricter-validation.patch
lib-add-kunit-tests-for-base64-encoding-decoding.patch
fscrypt-replace-local-base64url-helpers-with-lib-base64.patch
ceph-replace-local-base64-helpers-with-lib-base64.patch
+ lib-base64-rework-encode-decode-for-speed-and-stricter-validation.patch added to mm-nonmm-unstable branch
From: Andrew Morton @ 2025-11-14 22:58 UTC
To: mm-commits, xiubli, visitorckw, tytso, Slava.Dubeyko, sagi,
kbusch, jaegeuk, idryomov, home7438072, hch, ebiggers,
david.laight.linux, axboe, 409411716, akpm
The patch titled
Subject: lib/base64: rework encode/decode for speed and stricter validation
has been added to the -mm mm-nonmm-unstable branch. Its filename is
lib-base64-rework-encode-decode-for-speed-and-stricter-validation.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/lib-base64-rework-encode-decode-for-speed-and-stricter-validation.patch
This patch will later appear in the mm-nonmm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Guan-Chun Wu <409411716@gms.tku.edu.tw>
Subject: lib/base64: rework encode/decode for speed and stricter validation
Date: Fri, 14 Nov 2025 14:01:32 +0800
The old base64 implementation relied on a bit-accumulator loop, which was
slow for larger inputs and too permissive in validation. It would accept
extra '=', missing '=', or even '=' appearing in the middle of the input,
allowing malformed strings to pass. This patch reworks the internals to
improve performance and enforce stricter validation.
Changes:

- Encoder:
  * Process input in 3-byte blocks, mapping 24 bits into four 6-bit
    symbols, avoiding bit-by-bit shifting and reducing loop iterations.
  * Handle the final 1-2 leftover bytes explicitly and emit '=' only
    when requested.

- Decoder:
  * Based on the reverse lookup tables from the previous patch, decode
    input in 4-character groups.
  * Each group is looked up directly, converted into numeric values, and
    combined into 3 output bytes.
  * Explicitly handle padded and unpadded forms:
    - With padding: input length must be a multiple of 4, and '=' is
      allowed only in the last two positions. Reject stray or early '='.
    - Without padding: validate tail lengths (2 or 3 chars) and require
      unused low bits to be zero.
  * Remove the bit-accumulator-style loop to reduce loop iterations.
Performance (x86_64, Intel Core i7-10700 @ 2.90GHz, avg over 1000 runs,
KUnit):

  Encode:
    64B:  ~90ns    -> ~32ns   (~2.8x)
    1KB:  ~1332ns  -> ~510ns  (~2.6x)

  Decode:
    64B:  ~1530ns  -> ~35ns   (~43.7x)
    1KB:  ~27726ns -> ~530ns  (~52.3x)
Link: https://lkml.kernel.org/r/20251114060132.89279-1-409411716@gms.tku.edu.tw
Co-developed-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Co-developed-by: Yu-Sheng Huang <home7438072@gmail.com>
Signed-off-by: Yu-Sheng Huang <home7438072@gmail.com>
Signed-off-by: Guan-Chun Wu <409411716@gms.tku.edu.tw>
Reviewed-by: David Laight <david.laight.linux@gmail.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: Ilya Dryomov <idryomov@gmail.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Keith Busch <kbusch@kernel.org>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: "Theodore Y. Ts'o" <tytso@mit.edu>
Cc: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
Cc: Xiubo Li <xiubli@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
lib/base64.c | 109 ++++++++++++++++++++++++++++++-------------------
1 file changed, 68 insertions(+), 41 deletions(-)
--- a/lib/base64.c~lib-base64-rework-encode-decode-for-speed-and-stricter-validation
+++ a/lib/base64.c
@@ -79,28 +79,38 @@ static const s8 base64_rev_maps[][256] =
 int base64_encode(const u8 *src, int srclen, char *dst, bool padding, enum base64_variant variant)
 {
 	u32 ac = 0;
-	int bits = 0;
-	int i;
 	char *cp = dst;
 	const char *base64_table = base64_tables[variant];
 
-	for (i = 0; i < srclen; i++) {
-		ac = (ac << 8) | src[i];
-		bits += 8;
-		do {
-			bits -= 6;
-			*cp++ = base64_table[(ac >> bits) & 0x3f];
-		} while (bits >= 6);
-	}
-	if (bits) {
-		*cp++ = base64_table[(ac << (6 - bits)) & 0x3f];
-		bits -= 6;
+	while (srclen >= 3) {
+		ac = (u32)src[0] << 16 | (u32)src[1] << 8 | (u32)src[2];
+		*cp++ = base64_table[ac >> 18];
+		*cp++ = base64_table[(ac >> 12) & 0x3f];
+		*cp++ = base64_table[(ac >> 6) & 0x3f];
+		*cp++ = base64_table[ac & 0x3f];
+
+		src += 3;
+		srclen -= 3;
 	}
-	if (padding) {
-		while (bits < 0) {
+
+	switch (srclen) {
+	case 2:
+		ac = (u32)src[0] << 16 | (u32)src[1] << 8;
+		*cp++ = base64_table[ac >> 18];
+		*cp++ = base64_table[(ac >> 12) & 0x3f];
+		*cp++ = base64_table[(ac >> 6) & 0x3f];
+		if (padding)
+			*cp++ = '=';
+		break;
+	case 1:
+		ac = (u32)src[0] << 16;
+		*cp++ = base64_table[ac >> 18];
+		*cp++ = base64_table[(ac >> 12) & 0x3f];
+		if (padding) {
+			*cp++ = '=';
 			*cp++ = '=';
-			bits += 2;
 		}
+		break;
 	}
 	return cp - dst;
 }
@@ -116,41 +126,58 @@ EXPORT_SYMBOL_GPL(base64_encode);
  *
  * Decodes a string using the selected Base64 variant.
  *
- * This implementation hasn't been optimized for performance.
- *
  * Return: the length of the resulting decoded binary data in bytes,
  * or -1 if the string isn't a valid Base64 string.
  */
 int base64_decode(const char *src, int srclen, u8 *dst, bool padding, enum base64_variant variant)
 {
-	u32 ac = 0;
-	int bits = 0;
-	int i;
 	u8 *bp = dst;
-	s8 ch;
+	s8 input[4];
+	s32 val;
+	const u8 *s = (const u8 *)src;
+	const s8 *base64_rev_tables = base64_rev_maps[variant];
 
-	for (i = 0; i < srclen; i++) {
-		if (padding) {
-			if (src[i] == '=') {
-				ac = (ac << 6);
-				bits += 6;
-				if (bits >= 8)
-					bits -= 8;
-				continue;
-			}
-		}
-		ch = base64_rev_maps[variant][(u8)src[i]];
-		if (ch == -1)
-			return -1;
-		ac = (ac << 6) | ch;
-		bits += 6;
-		if (bits >= 8) {
-			bits -= 8;
-			*bp++ = (u8)(ac >> bits);
+	while (srclen >= 4) {
+		input[0] = base64_rev_tables[s[0]];
+		input[1] = base64_rev_tables[s[1]];
+		input[2] = base64_rev_tables[s[2]];
+		input[3] = base64_rev_tables[s[3]];
+
+		val = input[0] << 18 | input[1] << 12 | input[2] << 6 | input[3];
+
+		if (unlikely(val < 0)) {
+			if (!padding || srclen != 4 || s[3] != '=')
+				return -1;
+			padding = 0;
+			srclen = s[2] == '=' ? 2 : 3;
+			break;
 		}
+
+		*bp++ = val >> 16;
+		*bp++ = val >> 8;
+		*bp++ = val;
+
+		s += 4;
+		srclen -= 4;
 	}
-	if (ac & ((1 << bits) - 1))
+
+	if (likely(!srclen))
+		return bp - dst;
+	if (padding || srclen == 1)
 		return -1;
+
+	val = (base64_rev_tables[s[0]] << 12) | (base64_rev_tables[s[1]] << 6);
+	*bp++ = val >> 10;
+
+	if (srclen == 2) {
+		if (val & 0x800003ff)
+			return -1;
+	} else {
+		val |= base64_rev_tables[s[2]];
+		if (val & 0x80000003)
+			return -1;
+		*bp++ = val >> 2;
+	}
 	return bp - dst;
 }
 EXPORT_SYMBOL_GPL(base64_decode);
_
Patches currently in -mm which might be from 409411716@gms.tku.edu.tw are
lib-base64-rework-encode-decode-for-speed-and-stricter-validation.patch
lib-add-kunit-tests-for-base64-encoding-decoding.patch
fscrypt-replace-local-base64url-helpers-with-lib-base64.patch
ceph-replace-local-base64-helpers-with-lib-base64.patch