public inbox for openembedded-core@lists.openembedded.org
* [OE-core][whinlatter v2 00/16] Patch review
@ 2026-03-06  7:22 Yoann Congal
  2026-03-06  7:22 ` [OE-core][whinlatter v2 01/16] python3-pip: Backport fix CVE-2026-1703 Yoann Congal
                   ` (15 more replies)
  0 siblings, 16 replies; 17+ messages in thread
From: Yoann Congal @ 2026-03-06  7:22 UTC (permalink / raw)
  To: openembedded-core

This is an updated patch review request with added patches.

v1: https://lore.kernel.org/openembedded-core/cover.1772700454.git.yoann.congal@smile.fr/T/#u
v1->v2: added patches:
* python3-urllib3: patch CVE-2025-66471
* lz4: Remove a reference to the rejected CVE-2025-62813
* avahi: Remove a reference to the rejected CVE-2021-36217
* create-pull-request: Keep commit hash to be pulled in cover email

Please review this set of changes for whinlatter and have comments back
by end of day Monday, March 9.

Passed a-full on autobuilder:
https://autobuilder.yoctoproject.org/valkyrie/#/builders/29/builds/3334

The following changes since commit 45cba1329d541fdc5857d6df2624b34c91133f7a:

  build-appliance-image: Update to whinlatter head revisions (2026-02-27 17:46:44 +0000)

are available in the Git repository at:

  https://git.openembedded.org/openembedded-core-contrib stable/whinlatter-nut
  https://git.openembedded.org/openembedded-core-contrib/log/?h=stable/whinlatter-nut

for you to fetch changes up to cfc0e446a1ad57e710d2c82914211f9bcdc4a752:

  python3-urllib3: patch CVE-2025-66471 (2026-03-06 00:48:43 +0100)

----------------------------------------------------------------

Adarsh Jagadish Kamini (1):
  python3-pip: Backport fix CVE-2026-1703

Ankur Tyagi (1):
  wireless-regdb: upgrade 2025.10.07 -> 2026.02.04

Antonin Godard (1):
  python3: skip flaky test_default_timeout test

Benjamin Robin (Schneider Electric) (2):
  avahi: Remove a reference to the rejected CVE-2021-36217
  lz4: Remove a reference to the rejected CVE-2025-62813

Hugo SIMELIERE (2):
  zlib: Fix CVE-2026-27171
  harfbuzz: Fix CVE-2026-22693

Paul Barker (1):
  create-pull-request: Keep commit hash to be pulled in cover email

Peter Marko (4):
  linux-yocto: apply cve-exclusions also to rt and tiny recipe variants
  cve-exclusions: set status for 5 CVEs
  ffmpeg: set status for CVE-2025-12343
  python3-urllib3: patch CVE-2025-66471

Shaik Moin (1):
  gdk-pixbuf: Fix CVE-2025-6199

Vijay Anusuri (1):
  gnutls: Fix CVE-2025-14831

Yoann Congal (2):
  README: Add whinlatter subject-prefix to git-send-email suggestion
  b4-config: add send-prefixes for whinlatter

 .b4-config                                    |   1 +
 README.OE-Core.md                             |   2 +-
 .../avahi/files/local-ping.patch              |   1 -
 .../zlib/zlib/CVE-2026-27171.patch            |  63 ++
 meta/recipes-core/zlib/zlib_1.3.1.bb          |   1 +
 .../python/python3-pip/CVE-2026-1703.patch    |  41 +
 .../python/python3-pip_25.2.bb                |   4 +-
 .../python3-urllib3/CVE-2025-66471.patch      | 926 ++++++++++++++++++
 .../python/python3-urllib3_2.5.0.bb           |   1 +
 ...kip-flaky-test_default_timeout-tests.patch |  49 +
 .../python/python3_3.13.11.bb                 |   1 +
 .../gdk-pixbuf/gdk-pixbuf/CVE-2025-6199.patch |  36 +
 .../gdk-pixbuf/gdk-pixbuf_2.42.12.bb          |   1 +
 .../harfbuzz/files/CVE-2026-22693.patch       |  33 +
 .../harfbuzz/harfbuzz_11.4.5.bb               |   4 +-
 meta/recipes-kernel/linux/cve-exclusion.inc   |  16 +
 .../linux/linux-yocto-rt_6.12.bb              |   1 +
 .../linux/linux-yocto-rt_6.16.bb              |   1 +
 .../linux/linux-yocto-tiny_6.12.bb            |   1 +
 .../linux/linux-yocto-tiny_6.16.bb            |   1 +
 ....10.07.bb => wireless-regdb_2026.02.04.bb} |   2 +-
 meta/recipes-multimedia/ffmpeg/ffmpeg_8.0.bb  |   1 +
 .../gnutls/gnutls/CVE-2025-14831-1.patch      | 119 +++
 .../gnutls/gnutls/CVE-2025-14831-10.patch     | 424 ++++++++
 .../gnutls/gnutls/CVE-2025-14831-2.patch      |  66 ++
 .../gnutls/gnutls/CVE-2025-14831-3.patch      |  30 +
 .../gnutls/gnutls/CVE-2025-14831-4.patch      |  45 +
 .../gnutls/gnutls/CVE-2025-14831-5.patch      | 205 ++++
 .../gnutls/gnutls/CVE-2025-14831-6.patch      | 505 ++++++++++
 .../gnutls/gnutls/CVE-2025-14831-7.patch      | 124 +++
 .../gnutls/gnutls/CVE-2025-14831-8.patch      | 155 +++
 .../gnutls/gnutls/CVE-2025-14831-9.patch      | 110 +++
 meta/recipes-support/gnutls/gnutls_3.8.10.bb  |  10 +
 ...13.patch => fix-null-error-handling.patch} |   1 -
 meta/recipes-support/lz4/lz4_1.10.0.bb        |   2 +-
 scripts/create-pull-request                   |   2 +-
 36 files changed, 2977 insertions(+), 8 deletions(-)
 create mode 100644 meta/recipes-core/zlib/zlib/CVE-2026-27171.patch
 create mode 100644 meta/recipes-devtools/python/python3-pip/CVE-2026-1703.patch
 create mode 100644 meta/recipes-devtools/python/python3-urllib3/CVE-2025-66471.patch
 create mode 100644 meta/recipes-devtools/python/python3/0001-Skip-flaky-test_default_timeout-tests.patch
 create mode 100644 meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf/CVE-2025-6199.patch
 create mode 100644 meta/recipes-graphics/harfbuzz/files/CVE-2026-22693.patch
 rename meta/recipes-kernel/wireless-regdb/{wireless-regdb_2025.10.07.bb => wireless-regdb_2026.02.04.bb} (94%)
 create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-1.patch
 create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-10.patch
 create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-2.patch
 create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-3.patch
 create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-4.patch
 create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-5.patch
 create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-6.patch
 create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-7.patch
 create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-8.patch
 create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-9.patch
 rename meta/recipes-support/lz4/lz4/{CVE-2025-62813.patch => fix-null-error-handling.patch} (99%)



^ permalink raw reply	[flat|nested] 17+ messages in thread

* [OE-core][whinlatter v2 01/16] python3-pip: Backport fix CVE-2026-1703
  2026-03-06  7:22 [OE-core][whinlatter v2 00/16] Patch review Yoann Congal
@ 2026-03-06  7:22 ` Yoann Congal
  2026-03-06  7:22 ` [OE-core][whinlatter v2 02/16] wireless-regdb: upgrade 2025.10.07 -> 2026.02.04 Yoann Congal
                   ` (14 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Yoann Congal @ 2026-03-06  7:22 UTC (permalink / raw)
  To: openembedded-core

From: Adarsh Jagadish Kamini <adarsh.jagadish.kamini@est.tech>

Include the patch linked in the NVD report: https://github.com/pypa/pip/commit/8e227a9be4faa9594e05d02ca05a413a2a4e7735

Signed-off-by: Adarsh Jagadish Kamini <adarsh.jagadish.kamini@est.tech>
Signed-off-by: Yoann Congal <yoann.congal@smile.fr>
---
 .../python/python3-pip/CVE-2026-1703.patch    | 41 +++++++++++++++++++
 .../python/python3-pip_25.2.bb                |  4 +-
 2 files changed, 44 insertions(+), 1 deletion(-)
 create mode 100644 meta/recipes-devtools/python/python3-pip/CVE-2026-1703.patch

diff --git a/meta/recipes-devtools/python/python3-pip/CVE-2026-1703.patch b/meta/recipes-devtools/python/python3-pip/CVE-2026-1703.patch
new file mode 100644
index 00000000000..826f483ea21
--- /dev/null
+++ b/meta/recipes-devtools/python/python3-pip/CVE-2026-1703.patch
@@ -0,0 +1,41 @@
+From abce61e230c47598ce836157d075608595216a4c Mon Sep 17 00:00:00 2001
+From: Damian Shaw <damian.peter.shaw@gmail.com>
+Date: Fri, 30 Jan 2026 16:27:57 -0500
+Subject: [PATCH v4] Merge pull request #13777 from sethmlarson/commonpath
+
+Use os.path.commonpath() instead of commonprefix()
+
+CVE: CVE-2026-1703
+
+Upstream-Status: Backport [https://github.com/pypa/pip/commit/8e227a9be4faa9594e05d02ca05a413a2a4e7735]
+
+Signed-off-by: Adarsh Jagadish Kamini <adarsh.jagadish.kamini@est.tech>
+---
+ news/+1ee322a1.bugfix.rst            | 1 +
+ src/pip/_internal/utils/unpacking.py | 2 +-
+ 2 files changed, 2 insertions(+), 1 deletion(-)
+ create mode 100644 news/+1ee322a1.bugfix.rst
+
+diff --git a/news/+1ee322a1.bugfix.rst b/news/+1ee322a1.bugfix.rst
+new file mode 100644
+index 000000000..edb1b320c
+--- /dev/null
++++ b/news/+1ee322a1.bugfix.rst
+@@ -0,0 +1 @@
++Use a path-segment prefix comparison, not char-by-char.
+diff --git a/src/pip/_internal/utils/unpacking.py b/src/pip/_internal/utils/unpacking.py
+index 0ad3129ac..7cb3de3c4 100644
+--- a/src/pip/_internal/utils/unpacking.py
++++ b/src/pip/_internal/utils/unpacking.py
+@@ -83,7 +83,7 @@ def is_within_directory(directory: str, target: str) -> bool:
+     abs_directory = os.path.abspath(directory)
+     abs_target = os.path.abspath(target)
+ 
+-    prefix = os.path.commonprefix([abs_directory, abs_target])
++    prefix = os.path.commonpath([abs_directory, abs_target])
+     return prefix == abs_directory
+ 
+ 
+-- 
+2.34.1
+
diff --git a/meta/recipes-devtools/python/python3-pip_25.2.bb b/meta/recipes-devtools/python/python3-pip_25.2.bb
index 350092d9ad0..496eff1f15d 100644
--- a/meta/recipes-devtools/python/python3-pip_25.2.bb
+++ b/meta/recipes-devtools/python/python3-pip_25.2.bb
@@ -24,7 +24,9 @@ LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=63ec52baf95163b597008bb46db68030 \
 
 inherit pypi python_setuptools_build_meta
 
-SRC_URI += "file://no_shebang_mangling.patch"
+SRC_URI += "file://no_shebang_mangling.patch \
+            file://CVE-2026-1703.patch \
+            "
 
 SRC_URI[sha256sum] = "578283f006390f85bb6282dffb876454593d637f5d1be494b5202ce4877e71f2"
 


^ permalink raw reply related	[flat|nested] 17+ messages in thread
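As background on the one-line change in the patch above: `os.path.commonprefix()` compares strings character by character, so a sibling path that merely shares a name prefix passes the containment check, whereas `os.path.commonpath()` compares whole path segments. A minimal sketch of both checks (the function bodies mirror pip's `is_within_directory`; the wrapper names and example paths are illustrative, not pip's API):

```python
import os.path

def is_within_directory_unsafe(directory: str, target: str) -> bool:
    # pip's pre-fix check: character-by-character common prefix
    abs_directory = os.path.abspath(directory)
    abs_target = os.path.abspath(target)
    prefix = os.path.commonprefix([abs_directory, abs_target])
    return prefix == abs_directory

def is_within_directory_fixed(directory: str, target: str) -> bool:
    # post-fix check: segment-aware common path
    abs_directory = os.path.abspath(directory)
    abs_target = os.path.abspath(target)
    prefix = os.path.commonpath([abs_directory, abs_target])
    return prefix == abs_directory

# "/tmp/foobar" is NOT inside "/tmp/foo", but shares a character prefix
print(is_within_directory_unsafe("/tmp/foo", "/tmp/foobar"))  # True (wrong)
print(is_within_directory_fixed("/tmp/foo", "/tmp/foobar"))   # False (correct)
```

With the unsafe variant, an archive member resolving to `/tmp/foobar` is treated as inside the extraction directory `/tmp/foo`, which is the path-traversal window the CVE describes.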

* [OE-core][whinlatter v2 02/16] wireless-regdb: upgrade 2025.10.07 -> 2026.02.04
  2026-03-06  7:22 [OE-core][whinlatter v2 00/16] Patch review Yoann Congal
  2026-03-06  7:22 ` [OE-core][whinlatter v2 01/16] python3-pip: Backport fix CVE-2026-1703 Yoann Congal
@ 2026-03-06  7:22 ` Yoann Congal
  2026-03-06  7:22 ` [OE-core][whinlatter v2 03/16] gdk-pixbuf: Fix CVE-2025-6199 Yoann Congal
                   ` (13 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Yoann Congal @ 2026-03-06  7:22 UTC (permalink / raw)
  To: openembedded-core

From: Ankur Tyagi <ankur.tyagi85@gmail.com>

Signed-off-by: Ankur Tyagi <ankur.tyagi85@gmail.com>
Signed-off-by: Yoann Congal <yoann.congal@smile.fr>
---
 ...ireless-regdb_2025.10.07.bb => wireless-regdb_2026.02.04.bb} | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
 rename meta/recipes-kernel/wireless-regdb/{wireless-regdb_2025.10.07.bb => wireless-regdb_2026.02.04.bb} (94%)

diff --git a/meta/recipes-kernel/wireless-regdb/wireless-regdb_2025.10.07.bb b/meta/recipes-kernel/wireless-regdb/wireless-regdb_2026.02.04.bb
similarity index 94%
rename from meta/recipes-kernel/wireless-regdb/wireless-regdb_2025.10.07.bb
rename to meta/recipes-kernel/wireless-regdb/wireless-regdb_2026.02.04.bb
index 68ae3b0464c..2f7c8160434 100644
--- a/meta/recipes-kernel/wireless-regdb/wireless-regdb_2025.10.07.bb
+++ b/meta/recipes-kernel/wireless-regdb/wireless-regdb_2026.02.04.bb
@@ -5,7 +5,7 @@ LICENSE = "ISC"
 LIC_FILES_CHKSUM = "file://LICENSE;md5=07c4f6dea3845b02a18dc00c8c87699c"
 
 SRC_URI = "https://www.kernel.org/pub/software/network/${BPN}/${BP}.tar.xz"
-SRC_URI[sha256sum] = "d4c872a44154604c869f5851f7d21d818d492835d370af7f58de8847973801c3"
+SRC_URI[sha256sum] = "0ff48a5cd9e9cfe8e815a24e023734919e9a3b7ad2f039243ad121cf5aabf6c6"
 
 inherit bin_package allarch
 


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [OE-core][whinlatter v2 03/16] gdk-pixbuf: Fix CVE-2025-6199
  2026-03-06  7:22 [OE-core][whinlatter v2 00/16] Patch review Yoann Congal
  2026-03-06  7:22 ` [OE-core][whinlatter v2 01/16] python3-pip: Backport fix CVE-2026-1703 Yoann Congal
  2026-03-06  7:22 ` [OE-core][whinlatter v2 02/16] wireless-regdb: upgrade 2025.10.07 -> 2026.02.04 Yoann Congal
@ 2026-03-06  7:22 ` Yoann Congal
  2026-03-06  7:22 ` [OE-core][whinlatter v2 04/16] linux-yocto: apply cve-exclusions also to rt and tiny recipe variants Yoann Congal
                   ` (12 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Yoann Congal @ 2026-03-06  7:22 UTC (permalink / raw)
  To: openembedded-core

From: Shaik Moin <careers.myinfo@gmail.com>

Backport the fix for CVE-2025-6199 by adding CVE-2025-6199.patch.

Reference: the patch applied in Ubuntu and Debian -> [https://gitlab.gnome.org/GNOME/gdk-pixbuf/-/commit/c4986342b241cdc075259565f3fa7a7597d32a32]

Signed-off-by: Shaik Moin <moins@kpit.com>
[YC: Link to Debian security tracker: https://security-tracker.debian.org/tracker/CVE-2025-6199 ]
Signed-off-by: Yoann Congal <yoann.congal@smile.fr>
---
 .../gdk-pixbuf/gdk-pixbuf/CVE-2025-6199.patch | 36 +++++++++++++++++++
 .../gdk-pixbuf/gdk-pixbuf_2.42.12.bb          |  1 +
 2 files changed, 37 insertions(+)
 create mode 100644 meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf/CVE-2025-6199.patch

diff --git a/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf/CVE-2025-6199.patch b/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf/CVE-2025-6199.patch
new file mode 100644
index 00000000000..1952e3ceaf5
--- /dev/null
+++ b/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf/CVE-2025-6199.patch
@@ -0,0 +1,36 @@
+From 140200be0b4d5355aab76a6fd474e17d117045ca Mon Sep 17 00:00:00 2001
+From: lumi <lumi@suwi.moe>
+Date: Sat, 7 Jun 2025 22:27:06 +0200
+Subject: [PATCH] lzw: Fix reporting of bytes written in decoder
+
+When the LZW decoder encounters an invalid code, it stops
+processing the image and returns the whole buffer size.
+It should return the amount of bytes written, instead.
+
+Fixes #257
+
+CVE: CVE-2025-6199
+
+Upstream-Status: Backport [https://gitlab.gnome.org/GNOME/gdk-pixbuf/-/commit/c4986342b241cdc075259565f3fa7a7597d32a32]
+
+Signed-off-by: Shaik Moin <moins@kpit.com>
+---
+ gdk-pixbuf/lzw.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/gdk-pixbuf/lzw.c b/gdk-pixbuf/lzw.c
+index 15293560b..4f3dd8beb 100644
+--- a/gdk-pixbuf/lzw.c
++++ b/gdk-pixbuf/lzw.c
+@@ -208,7 +208,7 @@ lzw_decoder_feed (LZWDecoder *self,
+                                 /* Invalid code received - just stop here */
+                                 if (self->code >= self->code_table_size) {
+                                         self->last_code = self->eoi_code;
+-                                        return output_length;
++                                        return n_written;
+                                 }
+ 
+                                 /* Convert codeword into indexes */
+-- 
+2.34.1
+
diff --git a/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf_2.42.12.bb b/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf_2.42.12.bb
index 98993cc07d7..f22dc2cd915 100644
--- a/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf_2.42.12.bb
+++ b/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf_2.42.12.bb
@@ -22,6 +22,7 @@ SRC_URI += "\
            file://run-ptest \
            file://fatal-loader.patch \
            file://0001-meson.build-allow-a-subset-of-tests-in-cross-compile.patch \
+           file://CVE-2025-6199.patch \
            "
 
 GIR_MESON_OPTION = 'introspection'


^ permalink raw reply related	[flat|nested] 17+ messages in thread
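The gdk-pixbuf change above is a one-word fix, but the failure mode is easy to miss: on an invalid LZW code the decoder bailed out reporting the full output buffer length, so callers treated never-written (uninitialized) bytes as decoded pixel data. A toy decoder loop showing the corrected accounting (hypothetical names and a stand-in validity rule; not the gdk-pixbuf API):

```python
def decode_into(buffer_size: int, codes: list) -> int:
    """Return the number of output bytes actually produced.

    INVALID marks a code the decoder cannot resolve; on hitting one we
    stop and report n_written (the fix), instead of buffer_size (the
    bug, which let callers read the uninitialized buffer tail as data).
    """
    INVALID = 4096  # stand-in for self->code >= self->code_table_size
    n_written = 0
    for code in codes:
        if code >= INVALID:
            return n_written      # fixed: bytes actually written
        n_written += 1            # pretend each valid code emits one byte
        if n_written == buffer_size:
            break
    return n_written
```

Calling `decode_into(8, [1, 2, 9999, 3])` stops at the invalid code and reports 2 bytes, not 8.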

* [OE-core][whinlatter v2 04/16] linux-yocto: apply cve-exclusions also to rt and tiny recipe variants
  2026-03-06  7:22 [OE-core][whinlatter v2 00/16] Patch review Yoann Congal
                   ` (2 preceding siblings ...)
  2026-03-06  7:22 ` [OE-core][whinlatter v2 03/16] gdk-pixbuf: Fix CVE-2025-6199 Yoann Congal
@ 2026-03-06  7:22 ` Yoann Congal
  2026-03-06  7:22 ` [OE-core][whinlatter v2 05/16] cve-exclusions: set status for 5 CVEs Yoann Congal
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Yoann Congal @ 2026-03-06  7:22 UTC (permalink / raw)
  To: openembedded-core

From: Peter Marko <peter.marko@siemens.com>

The version is the same as the base kernel; only the configuration differs.
There is no reason not to apply the exclusions to all variants.

This commit is the equivalent of master commit
77fb0331ecc0cb9eff6a711c2a7889f2f6bdda92

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Yoann Congal <yoann.congal@smile.fr>
---
 meta/recipes-kernel/linux/linux-yocto-rt_6.12.bb   | 1 +
 meta/recipes-kernel/linux/linux-yocto-rt_6.16.bb   | 1 +
 meta/recipes-kernel/linux/linux-yocto-tiny_6.12.bb | 1 +
 meta/recipes-kernel/linux/linux-yocto-tiny_6.16.bb | 1 +
 4 files changed, 4 insertions(+)

diff --git a/meta/recipes-kernel/linux/linux-yocto-rt_6.12.bb b/meta/recipes-kernel/linux/linux-yocto-rt_6.12.bb
index e720629b14d..5816902a7ff 100644
--- a/meta/recipes-kernel/linux/linux-yocto-rt_6.12.bb
+++ b/meta/recipes-kernel/linux/linux-yocto-rt_6.12.bb
@@ -3,6 +3,7 @@ KBRANCH ?= "v6.12/standard/preempt-rt/base"
 require recipes-kernel/linux/linux-yocto.inc
 
 # CVE exclusions
+include recipes-kernel/linux/cve-exclusion.inc
 include recipes-kernel/linux/cve-exclusion_6.12.inc
 
 # Skip processing of this recipe if it is not explicitly specified as the
diff --git a/meta/recipes-kernel/linux/linux-yocto-rt_6.16.bb b/meta/recipes-kernel/linux/linux-yocto-rt_6.16.bb
index d1f7e76501d..2b54315670b 100644
--- a/meta/recipes-kernel/linux/linux-yocto-rt_6.16.bb
+++ b/meta/recipes-kernel/linux/linux-yocto-rt_6.16.bb
@@ -3,6 +3,7 @@ KBRANCH ?= "v6.16/standard/preempt-rt/base"
 require recipes-kernel/linux/linux-yocto.inc
 
 # CVE exclusions
+include recipes-kernel/linux/cve-exclusion.inc
 include recipes-kernel/linux/cve-exclusion_6.16.inc
 
 # Skip processing of this recipe if it is not explicitly specified as the
diff --git a/meta/recipes-kernel/linux/linux-yocto-tiny_6.12.bb b/meta/recipes-kernel/linux/linux-yocto-tiny_6.12.bb
index 164557eaa0a..5828ff986bd 100644
--- a/meta/recipes-kernel/linux/linux-yocto-tiny_6.12.bb
+++ b/meta/recipes-kernel/linux/linux-yocto-tiny_6.12.bb
@@ -6,6 +6,7 @@ KCONFIG_MODE = "--allnoconfig"
 require recipes-kernel/linux/linux-yocto.inc
 
 # CVE exclusions
+include recipes-kernel/linux/cve-exclusion.inc
 include recipes-kernel/linux/cve-exclusion_6.12.inc
 
 LINUX_VERSION ?= "6.12.69"
diff --git a/meta/recipes-kernel/linux/linux-yocto-tiny_6.16.bb b/meta/recipes-kernel/linux/linux-yocto-tiny_6.16.bb
index ef904adad48..92c26d42e3d 100644
--- a/meta/recipes-kernel/linux/linux-yocto-tiny_6.16.bb
+++ b/meta/recipes-kernel/linux/linux-yocto-tiny_6.16.bb
@@ -6,6 +6,7 @@ KCONFIG_MODE = "--allnoconfig"
 require recipes-kernel/linux/linux-yocto.inc
 
 # CVE exclusions
+include recipes-kernel/linux/cve-exclusion.inc
 include recipes-kernel/linux/cve-exclusion_6.16.inc
 
 LINUX_VERSION ?= "6.16.11"


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [OE-core][whinlatter v2 05/16] cve-exclusions: set status for 5 CVEs
  2026-03-06  7:22 [OE-core][whinlatter v2 00/16] Patch review Yoann Congal
                   ` (3 preceding siblings ...)
  2026-03-06  7:22 ` [OE-core][whinlatter v2 04/16] linux-yocto: apply cve-exclusions also to rt and tiny recipe variants Yoann Congal
@ 2026-03-06  7:22 ` Yoann Congal
  2026-03-06  7:22 ` [OE-core][whinlatter v2 06/16] ffmpeg: set status for CVE-2025-12343 Yoann Congal
                   ` (10 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Yoann Congal @ 2026-03-06  7:22 UTC (permalink / raw)
  To: openembedded-core

From: Peter Marko <peter.marko@siemens.com>

Reuse work of Debian researchers and set status for fixed CVEs
accordingly.
These are not tracked by kernel itself, so generated exclusions won't
help here.

* https://security-tracker.debian.org/tracker/CVE-2022-38096
* https://security-tracker.debian.org/tracker/CVE-2023-39176
* https://security-tracker.debian.org/tracker/CVE-2023-39179
* https://security-tracker.debian.org/tracker/CVE-2023-39180
* https://security-tracker.debian.org/tracker/CVE-2023-6535

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Mathieu Dubois-Briand <mathieu.dubois-briand@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

(From OE-Core rev: 699dbbdf3ab2693bae8a7e0425e2519250fdfec4)

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Yoann Congal <yoann.congal@smile.fr>
---
 meta/recipes-kernel/linux/cve-exclusion.inc | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/meta/recipes-kernel/linux/cve-exclusion.inc b/meta/recipes-kernel/linux/cve-exclusion.inc
index 80c76433ef2..7d68a9bbaac 100644
--- a/meta/recipes-kernel/linux/cve-exclusion.inc
+++ b/meta/recipes-kernel/linux/cve-exclusion.inc
@@ -157,3 +157,19 @@ CVE_STATUS[CVE-2023-7042] = "fixed-version: Fixed from 6.9rc1"
 
 #Fix https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=7315dc1e122c85ffdfc8defffbb8f8b616c2eb1a
 CVE_STATUS[CVE-2024-0193] = "fixed-version: Fixed from 6.7"
+
+# Fix https://git.kernel.org/linus/517621b7060096e48e42f545fa6646fc00252eac
+CVE_STATUS[CVE-2022-38096] = "fixed-version: Fixed from 6.9"
+
+# Fix https://git.kernel.org/linus/5aa4fda5aa9c2a5a7bac67b4a12b089ab81fee3c
+# Fix https://git.kernel.org/linus/79ed288cef201f1f212dfb934bcaac75572fb8f6
+CVE_STATUS[CVE-2023-39176] = "fixed-version: Fixed from 6.5"
+
+# Fix https://git.kernel.org/linus/e202a1e8634b186da38cbbff85382ea2b9e297cf
+CVE_STATUS[CVE-2023-39179] = "fixed-version: Fixed from 6.5"
+CVE_STATUS[CVE-2023-39180] = "fixed-version: Fixed from 6.5"
+
+# Fix https://git.kernel.org/linus/efa56305908ba20de2104f1b8508c6a7401833be
+# Fix https://git.kernel.org/linus/0849a5441358cef02586fb2d60f707c0db195628
+# Fix https://git.kernel.org/linus/9a1abc24850eb759e36a2f8869161c3b7254c904
+CVE_STATUS[CVE-2023-6535] = "fixed-version: Fixed from 6.8"


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [OE-core][whinlatter v2 06/16] ffmpeg: set status for CVE-2025-12343
  2026-03-06  7:22 [OE-core][whinlatter v2 00/16] Patch review Yoann Congal
                   ` (4 preceding siblings ...)
  2026-03-06  7:22 ` [OE-core][whinlatter v2 05/16] cve-exclusions: set status for 5 CVEs Yoann Congal
@ 2026-03-06  7:22 ` Yoann Congal
  2026-03-06  7:22 ` [OE-core][whinlatter v2 07/16] zlib: Fix CVE-2026-27171 Yoann Congal
                   ` (9 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Yoann Congal @ 2026-03-06  7:22 UTC (permalink / raw)
  To: openembedded-core

From: Peter Marko <peter.marko@siemens.com>

Per [1], the patch for this CVE is [2].
It is equivalent to [3], which is included in n8.0.

[1] https://security-tracker.debian.org/tracker/CVE-2025-12343
[2] https://git.ffmpeg.org/gitweb/ffmpeg.git/commit/b8d5f65b9e89d893f27cf00799dbc15fc0ca2f8e
[3] https://git.ffmpeg.org/gitweb/ffmpeg.git/commit/263e819aa45cd3c48bf6887be02b4ec504c02048

Signed-off-by: Peter Marko <peter.marko@siemens.com>
[YC: commit 8536c8b9e9093ac2d7d82c49e61e1c1cded5d1e0 upstream]
Signed-off-by: Yoann Congal <yoann.congal@smile.fr>
---
 meta/recipes-multimedia/ffmpeg/ffmpeg_8.0.bb | 1 +
 1 file changed, 1 insertion(+)

diff --git a/meta/recipes-multimedia/ffmpeg/ffmpeg_8.0.bb b/meta/recipes-multimedia/ffmpeg/ffmpeg_8.0.bb
index 1e59bfa33fe..0970575b3ca 100644
--- a/meta/recipes-multimedia/ffmpeg/ffmpeg_8.0.bb
+++ b/meta/recipes-multimedia/ffmpeg/ffmpeg_8.0.bb
@@ -177,3 +177,4 @@ CVE_STATUS_WRONG_CPE[status] = "fixed-version: these CVEs are fixed in used vers
 
 CVE_STATUS[CVE-2025-25468] = "fixed-version: these CVEs are fixed since v8.0"
 CVE_STATUS[CVE-2025-25469] = "fixed-version: these CVEs are fixed since v8.0"
+CVE_STATUS[CVE-2025-12343] = "fixed-version: this CVE is fixed since v8.0"


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [OE-core][whinlatter v2 07/16] zlib: Fix CVE-2026-27171
  2026-03-06  7:22 [OE-core][whinlatter v2 00/16] Patch review Yoann Congal
                   ` (5 preceding siblings ...)
  2026-03-06  7:22 ` [OE-core][whinlatter v2 06/16] ffmpeg: set status for CVE-2025-12343 Yoann Congal
@ 2026-03-06  7:22 ` Yoann Congal
  2026-03-06  7:22 ` [OE-core][whinlatter v2 08/16] harfbuzz: Fix CVE-2026-22693 Yoann Congal
                   ` (8 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Yoann Congal @ 2026-03-06  7:22 UTC (permalink / raw)
  To: openembedded-core

From: Hugo SIMELIERE <hsimeliere.opensource@witekio.com>

Pick the patch from [1], also mentioned in [2].

[1] https://github.com/madler/zlib/issues/904
[2] https://security-tracker.debian.org/tracker/CVE-2026-27171

Signed-off-by: Bruno VERNAY <bruno.vernay@se.com>
Signed-off-by: Hugo SIMELIERE <hsimeliere.opensource@witekio.com>
Signed-off-by: Yoann Congal <yoann.congal@smile.fr>
---
 .../zlib/zlib/CVE-2026-27171.patch            | 63 +++++++++++++++++++
 meta/recipes-core/zlib/zlib_1.3.1.bb          |  1 +
 2 files changed, 64 insertions(+)
 create mode 100644 meta/recipes-core/zlib/zlib/CVE-2026-27171.patch

diff --git a/meta/recipes-core/zlib/zlib/CVE-2026-27171.patch b/meta/recipes-core/zlib/zlib/CVE-2026-27171.patch
new file mode 100644
index 00000000000..e6a8a3eac5f
--- /dev/null
+++ b/meta/recipes-core/zlib/zlib/CVE-2026-27171.patch
@@ -0,0 +1,63 @@
+From f234bdf5c0f94b681312452fcd5e36968221fa04 Mon Sep 17 00:00:00 2001
+From: Mark Adler <git@madler.net>
+Date: Sun, 21 Dec 2025 18:17:56 -0800
+Subject: [PATCH] Check for negative lengths in crc32_combine functions.
+
+Though zlib.h says that len2 must be non-negative, this avoids the
+possibility of an accidental infinite loop.
+
+Upstream-Status: Backport [https://github.com/madler/zlib/commit/ba829a458576d1ff0f26fc7230c6de816d1f6a77]
+CVE: CVE-2026-27171
+
+Signed-off-by: Hugo SIMELIERE <hsimeliere.opensource@witekio.com>
+---
+ crc32.c | 4 ++++
+ zlib.h  | 4 ++--
+ 2 files changed, 6 insertions(+), 2 deletions(-)
+
+diff --git a/crc32.c b/crc32.c
+index 6c38f5c..33d8c79 100644
+--- a/crc32.c
++++ b/crc32.c
+@@ -1019,6 +1019,8 @@ unsigned long ZEXPORT crc32(unsigned long crc, const unsigned char FAR *buf,
+ 
+ /* ========================================================================= */
+ uLong ZEXPORT crc32_combine64(uLong crc1, uLong crc2, z_off64_t len2) {
++    if (len2 < 0)
++        return 0;
+ #ifdef DYNAMIC_CRC_TABLE
+     once(&made, make_crc_table);
+ #endif /* DYNAMIC_CRC_TABLE */
+@@ -1032,6 +1034,8 @@ uLong ZEXPORT crc32_combine(uLong crc1, uLong crc2, z_off_t len2) {
+ 
+ /* ========================================================================= */
+ uLong ZEXPORT crc32_combine_gen64(z_off64_t len2) {
++    if (len2 < 0)
++        return 0;
+ #ifdef DYNAMIC_CRC_TABLE
+     once(&made, make_crc_table);
+ #endif /* DYNAMIC_CRC_TABLE */
+diff --git a/zlib.h b/zlib.h
+index 8d4b932..8c7f8ac 100644
+--- a/zlib.h
++++ b/zlib.h
+@@ -1758,14 +1758,14 @@ ZEXTERN uLong ZEXPORT crc32_combine(uLong crc1, uLong crc2, z_off_t len2);
+    seq1 and seq2 with lengths len1 and len2, CRC-32 check values were
+    calculated for each, crc1 and crc2.  crc32_combine() returns the CRC-32
+    check value of seq1 and seq2 concatenated, requiring only crc1, crc2, and
+-   len2. len2 must be non-negative.
++   len2. len2 must be non-negative, otherwise zero is returned.
+ */
+ 
+ /*
+ ZEXTERN uLong ZEXPORT crc32_combine_gen(z_off_t len2);
+ 
+      Return the operator corresponding to length len2, to be used with
+-   crc32_combine_op(). len2 must be non-negative.
++   crc32_combine_op(). len2 must be non-negative, otherwise zero is returned.
+ */
+ 
+ ZEXTERN uLong ZEXPORT crc32_combine_op(uLong crc1, uLong crc2, uLong op);
+-- 
+2.43.0
+
diff --git a/meta/recipes-core/zlib/zlib_1.3.1.bb b/meta/recipes-core/zlib/zlib_1.3.1.bb
index ef831421216..892467a1fbd 100644
--- a/meta/recipes-core/zlib/zlib_1.3.1.bb
+++ b/meta/recipes-core/zlib/zlib_1.3.1.bb
@@ -10,6 +10,7 @@ LIC_FILES_CHKSUM = "file://zlib.h;beginline=6;endline=23;md5=5377232268e952e9ef6
 SRC_URI = "https://zlib.net/${BP}.tar.gz \
            file://0001-configure-Pass-LDFLAGS-to-link-tests.patch \
            file://run-ptest \
+           file://CVE-2026-27171.patch \
            "
 UPSTREAM_CHECK_URI = "http://zlib.net/"
 


^ permalink raw reply related	[flat|nested] 17+ messages in thread
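For the zlib guard above: the combine helpers reduce `len2` one bit at a time (`len2 >>= 1` until it reaches zero), and in C the right shift of a negative `z_off64_t` is an arithmetic shift that never reaches zero, hence the "accidental infinite loop" the commit message mentions. A sketch of that loop shape with the backported guard (illustrative only; in zlib the reduction happens while building the combine operator, and the CRC math itself is omitted here):

```python
def shift_loop_steps(len2: int) -> int:
    """Count iterations of a 'while (len2) len2 >>= 1' reduction.

    Python's >> is also arithmetic (-1 >> 1 == -1), so without the
    guard a negative len2 would loop forever here too.
    """
    if len2 < 0:
        return 0        # the backported check: bail out, returning 0
    steps = 0
    while len2:
        len2 >>= 1
        steps += 1
    return steps
```

A non-negative length terminates in about log2(len2) steps (e.g. 8 takes 4 iterations), while the guard turns a negative length into an immediate, well-defined zero result.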

* [OE-core][whinlatter v2 08/16] harfbuzz: Fix CVE-2026-22693
  2026-03-06  7:22 [OE-core][whinlatter v2 00/16] Patch review Yoann Congal
                   ` (6 preceding siblings ...)
  2026-03-06  7:22 ` [OE-core][whinlatter v2 07/16] zlib: Fix CVE-2026-27171 Yoann Congal
@ 2026-03-06  7:22 ` Yoann Congal
  2026-03-06  7:22 ` [OE-core][whinlatter v2 09/16] gnutls: Fix CVE-2025-14831 Yoann Congal
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Yoann Congal @ 2026-03-06  7:22 UTC (permalink / raw)
  To: openembedded-core

From: Hugo SIMELIERE <hsimeliere.opensource@witekio.com>

Pick the patch mentioned in the NVD report [1].

[1] https://nvd.nist.gov/vuln/detail/CVE-2026-22693

Signed-off-by: Bruno VERNAY <bruno.vernay@se.com>
Signed-off-by: Hugo SIMELIERE <hsimeliere.opensource@witekio.com>
Signed-off-by: Yoann Congal <yoann.congal@smile.fr>
---
 .../harfbuzz/files/CVE-2026-22693.patch       | 33 +++++++++++++++++++
 .../harfbuzz/harfbuzz_11.4.5.bb               |  4 ++-
 2 files changed, 36 insertions(+), 1 deletion(-)
 create mode 100644 meta/recipes-graphics/harfbuzz/files/CVE-2026-22693.patch

diff --git a/meta/recipes-graphics/harfbuzz/files/CVE-2026-22693.patch b/meta/recipes-graphics/harfbuzz/files/CVE-2026-22693.patch
new file mode 100644
index 00000000000..bf821bb63ae
--- /dev/null
+++ b/meta/recipes-graphics/harfbuzz/files/CVE-2026-22693.patch
@@ -0,0 +1,33 @@
+From 21c880d1154a5bcef2ef68c1687d286820a274ee Mon Sep 17 00:00:00 2001
+From: Behdad Esfahbod <behdad@behdad.org>
+Date: Fri, 9 Jan 2026 04:54:42 -0700
+Subject: [PATCH] [cmap] malloc fail test (#5710)
+
+Fixes https://github.com/harfbuzz/harfbuzz/security/advisories/GHSA-xvjr-f2r9-c7ww
+
+Upstream-Status: Backport [https://github.com/harfbuzz/harfbuzz/commit/1265ff8d990284f04d8768f35b0e20ae5f60daae]
+CVE: CVE-2026-22693
+
+Signed-off-by: Hugo SIMELIERE <hsimeliere.opensource@witekio.com>
+---
+ src/hb-ot-cmap-table.hh | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/src/hb-ot-cmap-table.hh b/src/hb-ot-cmap-table.hh
+index 294b2b60d..95a436b54 100644
+--- a/src/hb-ot-cmap-table.hh
++++ b/src/hb-ot-cmap-table.hh
+@@ -1679,6 +1679,10 @@ struct SubtableUnicodesCache {
+   {
+     SubtableUnicodesCache* cache =
+         (SubtableUnicodesCache*) hb_malloc (sizeof(SubtableUnicodesCache));
++
++    if (unlikely (!cache))
++      return nullptr;
++
+     new (cache) SubtableUnicodesCache (source_table);
+     return cache;
+   }
+-- 
+2.43.0
+
diff --git a/meta/recipes-graphics/harfbuzz/harfbuzz_11.4.5.bb b/meta/recipes-graphics/harfbuzz/harfbuzz_11.4.5.bb
index 9e0e42b7174..2364dd7efdc 100644
--- a/meta/recipes-graphics/harfbuzz/harfbuzz_11.4.5.bb
+++ b/meta/recipes-graphics/harfbuzz/harfbuzz_11.4.5.bb
@@ -8,7 +8,9 @@ LIC_FILES_CHKSUM = "file://COPYING;md5=b98429b8e8e3c2a67cfef01e99e4893d \
                     file://src/hb-ucd.cc;beginline=1;endline=15;md5=29d4dcb6410429195df67efe3382d8bc \
                     "
 
-SRC_URI = "${GITHUB_BASE_URI}/download/${PV}/${BPN}-${PV}.tar.xz"
+SRC_URI = "${GITHUB_BASE_URI}/download/${PV}/${BPN}-${PV}.tar.xz \
+           file://CVE-2026-22693.patch \
+           "
 SRC_URI[sha256sum] = "0f052eb4ab01d8bae98ba971c954becb32be57d7250f18af343b1d27892e03fa"
 
 DEPENDS += "glib-2.0-native"


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [OE-core][whinlatter v2 09/16] gnutls: Fix CVE-2025-14831
  2026-03-06  7:22 [OE-core][whinlatter v2 00/16] Patch review Yoann Congal
                   ` (7 preceding siblings ...)
  2026-03-06  7:22 ` [OE-core][whinlatter v2 08/16] harfbuzz: Fix CVE-2026-22693 Yoann Congal
@ 2026-03-06  7:22 ` Yoann Congal
  2026-03-06  7:22 ` [OE-core][whinlatter v2 10/16] README: Add whinlatter subject-prefix to git-send-email suggestion Yoann Congal
                   ` (6 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Yoann Congal @ 2026-03-06  7:22 UTC (permalink / raw)
  To: openembedded-core

From: Vijay Anusuri <vanusuri@mvista.com>

Picked the commits that mention this CVE, per [1].

[1] https://ubuntu.com/security/CVE-2025-14831
[2] https://security-tracker.debian.org/tracker/CVE-2025-14831
[3] https://gitlab.com/gnutls/gnutls/-/issues/1773

Backported
https://gitlab.com/gnutls/gnutls/-/commit/6e118a4dfe820ce62fc77130b89188bcd8fbcaad
so that the CVE patches apply cleanly.

Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
Signed-off-by: Yoann Congal <yoann.congal@smile.fr>
---
 .../gnutls/gnutls/CVE-2025-14831-1.patch      | 119 +++++
 .../gnutls/gnutls/CVE-2025-14831-10.patch     | 424 +++++++++++++++
 .../gnutls/gnutls/CVE-2025-14831-2.patch      |  66 +++
 .../gnutls/gnutls/CVE-2025-14831-3.patch      |  30 ++
 .../gnutls/gnutls/CVE-2025-14831-4.patch      |  45 ++
 .../gnutls/gnutls/CVE-2025-14831-5.patch      | 205 +++++++
 .../gnutls/gnutls/CVE-2025-14831-6.patch      | 505 ++++++++++++++++++
 .../gnutls/gnutls/CVE-2025-14831-7.patch      | 124 +++++
 .../gnutls/gnutls/CVE-2025-14831-8.patch      | 155 ++++++
 .../gnutls/gnutls/CVE-2025-14831-9.patch      | 110 ++++
 meta/recipes-support/gnutls/gnutls_3.8.10.bb  |  10 +
 11 files changed, 1793 insertions(+)
 create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-1.patch
 create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-10.patch
 create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-2.patch
 create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-3.patch
 create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-4.patch
 create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-5.patch
 create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-6.patch
 create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-7.patch
 create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-8.patch
 create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-9.patch

diff --git a/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-1.patch b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-1.patch
new file mode 100644
index 00000000000..1bfa771043c
--- /dev/null
+++ b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-1.patch
@@ -0,0 +1,119 @@
+From 6e118a4dfe820ce62fc77130b89188bcd8fbcaad Mon Sep 17 00:00:00 2001
+From: chenjianhu <chenjianhu@kylinos.cn>
+Date: Fri, 1 Aug 2025 17:18:23 +0800
+Subject: [PATCH] x509: fix incorrect handling in name constraints merging
+
+As mentioned in commit ca573d65 ("x509: Fix asymmetry in name
+constraints intersection", 2016-07-29), the
+_gnutls_name_constraints_intersect function exhibited an
+asymmetry in name constraints intersection behavior, specifically
+manifested as:
+1. Nodes of unique types in PERMITTED (absent in PERMITTED2) were
+   preserved
+2. Nodes of unique types in PERMITTED2 (absent in PERMITTED) were
+   discarded
+
+A 'used' flag was introduced, where if a node from PERMITTED2 was
+   not used for the intersection, it would be copied to PERMITTED.
+
+However, an unresolved edge case persisted:
+- When 'removed.size > 0', the 'used' flag was unconditionally set
+to 1
+- This prevented copying of PERMITTED2 nodes with unique types
+
+Signed-off-by: chenjianhu <chenjianhu@kylinos.cn>
+Modified-by: Daiki Ueno <ueno@gnu.org>
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/6e118a4dfe820ce62fc77130b89188bcd8fbcaad]
+CVE: CVE-2025-14831 #Backport to apply CVE patches cleanly
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/x509/name_constraints.c    |  5 +++-
+ tests/name-constraints-merge.c | 55 ++++++++++++++++++++++++++++++++++
+ 2 files changed, 59 insertions(+), 1 deletion(-)
+
+diff --git a/lib/x509/name_constraints.c b/lib/x509/name_constraints.c
+index 3c6e306303..2be6a2aaa6 100644
+--- a/lib/x509/name_constraints.c
++++ b/lib/x509/name_constraints.c
+@@ -414,7 +414,10 @@ static int name_constraints_node_list_intersect(
+ 				gnutls_assert();
+ 				goto cleanup;
+ 			}
+-			used = 1;
++
++			if (t->type == t2->type)
++				used = 1;
++
+ 			// if intersection is not empty
+ 			if (tmp !=
+ 			    NULL) { // intersection for this type is not empty
+diff --git a/tests/name-constraints-merge.c b/tests/name-constraints-merge.c
+index 03b3243cc7..70376aaa74 100644
+--- a/tests/name-constraints-merge.c
++++ b/tests/name-constraints-merge.c
+@@ -418,6 +418,61 @@ void doit(void)
+ 	gnutls_x509_name_constraints_deinit(nc1);
+ 	gnutls_x509_name_constraints_deinit(nc2);
+ 
++	/* 5: variant of suite 0: after moving rfc822Name (ccc.com)
++	 * from NC1 to NC2, dNSName (xxx.ccc.com) should still be
++	 * rejected.
++	 *
++	 * NC1: permitted DNS org
++	 *      permitted DNS ccc.com
++	 * NC2: permitted DNS org
++	 *      permitted email ccc.com
++	 *      permitted DNS aaa.bbb.ccc.com
++	 */
++	suite = 5;
++
++	ret = gnutls_x509_name_constraints_init(&nc1);
++	check_for_error(ret);
++
++	ret = gnutls_x509_name_constraints_init(&nc2);
++	check_for_error(ret);
++
++	set_name("org", &name);
++	ret = gnutls_x509_name_constraints_add_permitted(
++		nc1, GNUTLS_SAN_DNSNAME, &name);
++	check_for_error(ret);
++
++	set_name("ccc.com", &name);
++	ret = gnutls_x509_name_constraints_add_permitted(
++		nc1, GNUTLS_SAN_DNSNAME, &name);
++	check_for_error(ret);
++
++	set_name("org", &name);
++	ret = gnutls_x509_name_constraints_add_permitted(
++		nc2, GNUTLS_SAN_DNSNAME, &name);
++	check_for_error(ret);
++
++	set_name("ccc.com", &name);
++	ret = gnutls_x509_name_constraints_add_permitted(
++		nc2, GNUTLS_SAN_RFC822NAME, &name);
++	check_for_error(ret);
++
++	set_name("aaa.bbb.ccc.com", &name);
++	ret = gnutls_x509_name_constraints_add_permitted(
++		nc2, GNUTLS_SAN_DNSNAME, &name);
++	check_for_error(ret);
++
++	ret = _gnutls_x509_name_constraints_merge(nc1, nc2);
++	check_for_error(ret);
++
++	/* check intersection of permitted */
++	set_name("xxx.ccc.com", &name);
++	ret = gnutls_x509_name_constraints_check(nc1, GNUTLS_SAN_DNSNAME,
++						 &name);
++	check_test_result(suite, ret, NAME_REJECTED, &name);
++
++	gnutls_x509_name_constraints_deinit(nc1);
++	gnutls_x509_name_constraints_deinit(nc2);
++
+ 	/* Test footer */
+ 
+ 	if (debug)
+-- 
+GitLab
+
diff --git a/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-10.patch b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-10.patch
new file mode 100644
index 00000000000..5507719ac01
--- /dev/null
+++ b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-10.patch
@@ -0,0 +1,424 @@
+From d6054f0016db05fb5c82177ddbd0a4e8331059a1 Mon Sep 17 00:00:00 2001
+From: Alexander Sosedkin <asosedkin@redhat.com>
+Date: Wed, 4 Feb 2026 20:03:49 +0100
+Subject: [PATCH] x509/name_constraints: name_constraints_node_list_intersect
+ over sorted
+
+Fixes: #1773
+Fixes: GNUTLS-SA-2026-02-09-2
+Fixes: CVE-2025-14831
+
+Signed-off-by: Alexander Sosedkin <asosedkin@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/d6054f0016db05fb5c82177ddbd0a4e8331059a1]
+CVE: CVE-2025-14831
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/x509/name_constraints.c | 350 ++++++++++++++----------------------
+ 1 file changed, 135 insertions(+), 215 deletions(-)
+
+diff --git a/lib/x509/name_constraints.c b/lib/x509/name_constraints.c
+index 1d78d1b..04722bd 100644
+--- a/lib/x509/name_constraints.c
++++ b/lib/x509/name_constraints.c
+@@ -446,13 +446,6 @@ name_constraints_node_add_copy(gnutls_x509_name_constraints_t nc,
+ 					     src->name.data, src->name.size);
+ }
+ 
+-// for documentation see the implementation
+-static int name_constraints_intersect_nodes(
+-	gnutls_x509_name_constraints_t nc,
+-	const struct name_constraints_node_st *node1,
+-	const struct name_constraints_node_st *node2,
+-	struct name_constraints_node_st **intersection);
+-
+ /*-
+  * _gnutls_x509_name_constraints_is_empty:
+  * @nc: name constraints structure
+@@ -716,132 +709,143 @@ typedef char assert_ipaddr[(GNUTLS_SAN_IPADDRESS <= GNUTLS_SAN_MAX) ? 1 : -1];
+ static int name_constraints_node_list_intersect(
+ 	gnutls_x509_name_constraints_t nc,
+ 	struct name_constraints_node_list_st *permitted,
+-	const struct name_constraints_node_list_st *permitted2,
++	struct name_constraints_node_list_st *permitted2,
+ 	struct name_constraints_node_list_st *excluded)
+ {
+-	struct name_constraints_node_st *tmp;
+-	int ret, type, used;
+-	struct name_constraints_node_list_st removed = { .data = NULL,
+-							 .size = 0,
+-							 .capacity = 0 };
++	struct name_constraints_node_st *nc1, *nc2;
++	struct name_constraints_node_list_st result = { 0 };
++	struct name_constraints_node_list_st unsupp2 = { 0 };
++	enum name_constraint_relation rel;
++	unsigned type;
++	int ret = GNUTLS_E_SUCCESS;
++	size_t i, j, p1_unsupp = 0, p2_unsupp = 0;
++	type_bitmask_t universal_exclude_needed = 0;
++	type_bitmask_t types_in_p1 = 0, types_in_p2 = 0;
+ 	static const unsigned char universal_ip[32] = { 0 };
+ 
+-	/* bitmask to see if we need to add universal excluded constraints
+-	 * (see phase 3 for details) */
+-	type_bitmask_t types_with_empty_intersection = 0;
+-
+ 	if (permitted->size == 0 || permitted2->size == 0)
+-		return 0;
++		return GNUTLS_E_SUCCESS;
+ 
+-	/* Phase 1
+-	 * For each name in PERMITTED, if a PERMITTED2 does not contain a name
+-	 * with the same type, move the original name to REMOVED.
+-	 * Do this also for node of unknown type (not DNS, email, IP) */
+-	for (size_t i = 0; i < permitted->size;) {
+-		struct name_constraints_node_st *t = permitted->data[i];
+-		const struct name_constraints_node_st *found = NULL;
+-
+-		for (size_t j = 0; j < permitted2->size; j++) {
+-			const struct name_constraints_node_st *t2 =
+-				permitted2->data[j];
+-			if (t->type == t2->type) {
+-				// check bounds (we will use 't->type' as index)
+-				if (t->type > GNUTLS_SAN_MAX || t->type == 0) {
+-					gnutls_assert();
+-					ret = GNUTLS_E_INTERNAL_ERROR;
+-					goto cleanup;
+-				}
+-				// note the possibility of empty intersection for this type
+-				// if we add something to the intersection in phase 2,
+-				// we will reset this flag back to 0 then
+-				type_bitmask_set(types_with_empty_intersection,
+-						 t->type);
+-				found = t2;
+-				break;
+-			}
+-		}
++	/* make sorted views of the arrays */
++	ret = ensure_sorted(permitted);
++	if (ret < 0) {
++		gnutls_assert();
++		goto cleanup;
++	}
++	ret = ensure_sorted(permitted2);
++	if (ret < 0) {
++		gnutls_assert();
++		goto cleanup;
++	}
+ 
+-		if (found != NULL && is_supported_type(t->type)) {
+-			/* move node from PERMITTED to REMOVED */
+-			ret = name_constraints_node_list_add(&removed, t);
+-			if (ret < 0) {
+-				gnutls_assert();
+-				goto cleanup;
+-			}
+-			/* remove node by swapping */
+-			if (i < permitted->size - 1)
+-				permitted->data[i] =
+-					permitted->data[permitted->size - 1];
+-			permitted->size--;
+-			permitted->dirty = true;
+-			continue;
++	/* deal with the leading unsupported types first: count, then union */
++	while (p1_unsupp < permitted->size &&
++	       !is_supported_type(permitted->sorted_view[p1_unsupp]->type))
++		p1_unsupp++;
++	while (p2_unsupp < permitted2->size &&
++	       !is_supported_type(permitted2->sorted_view[p2_unsupp]->type))
++		p2_unsupp++;
++	if (p1_unsupp) { /* copy p1 unsupported type pointers into result */
++		result.data = gnutls_calloc(
++			p1_unsupp, sizeof(struct name_constraints_node_st *));
++		if (!result.data) {
++			ret = GNUTLS_E_MEMORY_ERROR;
++			gnutls_assert();
++			goto cleanup;
++		}
++		memcpy(result.data, permitted->sorted_view,
++		       p1_unsupp * sizeof(struct name_constraints_node_st *));
++		result.size = result.capacity = p1_unsupp;
++		result.dirty = true;
++	}
++	if (p2_unsupp) { /* union will make deep copies from p2 */
++		unsupp2.data = permitted2->sorted_view; /* so, just alias */
++		unsupp2.size = unsupp2.capacity = p2_unsupp;
++		unsupp2.dirty = false; /* we know it's sorted */
++		unsupp2.sorted_view = permitted2->sorted_view;
++		ret = name_constraints_node_list_union(nc, &result, &unsupp2);
++		if (ret < 0) {
++			gnutls_assert();
++			goto cleanup;
+ 		}
+-		i++;
+ 	}
+ 
+-	/* Phase 2
+-	 * iterate through all combinations from PERMITTED2 and PERMITTED
+-	 * and create intersections of nodes with same type */
+-	for (size_t i = 0; i < permitted2->size; i++) {
+-		const struct name_constraints_node_st *t2 = permitted2->data[i];
+-
+-		// current PERMITTED2 node has not yet been used for any intersection
+-		// (and is not in REMOVED either)
+-		used = 0;
+-		for (size_t j = 0; j < removed.size; j++) {
+-			const struct name_constraints_node_st *t =
+-				removed.data[j];
+-			// save intersection of name constraints into tmp
+-			ret = name_constraints_intersect_nodes(nc, t, t2, &tmp);
+-			if (ret < 0) {
+-				gnutls_assert();
+-				goto cleanup;
+-			}
++	/* with that out of the way, pre-compute the supported types we have */
++	for (i = p1_unsupp; i < permitted->size; i++) {
++		type = permitted->sorted_view[i]->type;
++		if (type < 1 || type > GNUTLS_SAN_MAX) {
++			ret = gnutls_assert_val(GNUTLS_E_INTERNAL_ERROR);
++			goto cleanup;
++		}
++		type_bitmask_set(types_in_p1, type);
++	}
++	for (j = p2_unsupp; j < permitted2->size; j++) {
++		type = permitted2->sorted_view[j]->type;
++		if (type < 1 || type > GNUTLS_SAN_MAX) {
++			ret = gnutls_assert_val(GNUTLS_E_INTERNAL_ERROR);
++			goto cleanup;
++		}
++		type_bitmask_set(types_in_p2, type);
++	}
++	/* universal excludes might be needed for types intersecting to empty */
++	universal_exclude_needed = types_in_p1 & types_in_p2;
++
++	/* go through supported type NCs and intersect in a single pass */
++	i = p1_unsupp;
++	j = p2_unsupp;
++	while (i < permitted->size || j < permitted2->size) {
++		nc1 = (i < permitted->size) ? permitted->sorted_view[i] : NULL;
++		nc2 = (j < permitted2->size) ? permitted2->sorted_view[j] :
++					       NULL;
++		rel = compare_name_constraint_nodes(nc1, nc2);
+ 
+-			if (t->type == t2->type)
+-				used = 1;
+-
+-			// if intersection is not empty
+-			if (tmp !=
+-			    NULL) { // intersection for this type is not empty
+-				// check bounds
+-				if (tmp->type > GNUTLS_SAN_MAX ||
+-				    tmp->type == 0) {
+-					gnutls_free(tmp);
+-					return gnutls_assert_val(
+-						GNUTLS_E_INTERNAL_ERROR);
+-				}
+-				// we will not add universal excluded constraint for this type
+-				type_bitmask_clr(types_with_empty_intersection,
+-						 tmp->type);
+-				// add intersection node to PERMITTED
+-				ret = name_constraints_node_list_add(permitted,
+-								     tmp);
+-				if (ret < 0) {
+-					gnutls_assert();
+-					goto cleanup;
+-				}
+-			}
++		switch (rel) {
++		case NC_SORTS_BEFORE:
++			assert(nc1 != NULL); /* comparator-guaranteed */
++			/* if nothing to intersect with, shallow-copy nc1 */
++			if (!type_bitmask_in(types_in_p2, nc1->type))
++				ret = name_constraints_node_list_add(&result,
++								     nc1);
++			i++; /* otherwise skip nc1 */
++			break;
++		case NC_SORTS_AFTER:
++			assert(nc2 != NULL); /* comparator-guaranteed */
++			/* if nothing to intersect with, deep-copy nc2 */
++			if (!type_bitmask_in(types_in_p1, nc2->type))
++				ret = name_constraints_node_add_copy(
++					nc, &result, nc2);
++			j++; /* otherwise skip nc2 */
++			break;
++		case NC_INCLUDED_BY: /* add nc1, shallow-copy */
++			assert(nc1 != NULL && nc2 != NULL); /* comparator */
++			type_bitmask_clr(universal_exclude_needed, nc1->type);
++			ret = name_constraints_node_list_add(&result, nc1);
++			i++;
++			break;
++		case NC_INCLUDES: /* pick nc2, deep-copy */
++			assert(nc1 != NULL && nc2 != NULL); /* comparator */
++			type_bitmask_clr(universal_exclude_needed, nc2->type);
++			ret = name_constraints_node_add_copy(nc, &result, nc2);
++			j++;
++			break;
++		case NC_EQUAL: /* pick whichever: nc1, shallow-copy */
++			assert(nc1 != NULL && nc2 != NULL); /* loop condition */
++			type_bitmask_clr(universal_exclude_needed, nc1->type);
++			ret = name_constraints_node_list_add(&result, nc1);
++			i++;
++			j++;
++			break;
+ 		}
+-		// if the node from PERMITTED2 was not used for intersection, copy it to DEST
+-		// Beware: also copies nodes other than DNS, email, IP,
+-		//       since their counterpart may have been moved in phase 1.
+-		if (!used) {
+-			ret = name_constraints_node_add_copy(nc, permitted, t2);
+-			if (ret < 0) {
+-				gnutls_assert();
+-				goto cleanup;
+-			}
++		if (ret < 0) {
++			gnutls_assert();
++			goto cleanup;
+ 		}
+ 	}
+ 
+-	/* Phase 3
+-	 * For each type: If we have empty permitted name constraints now
+-	 * and we didn't have at the beginning, we have to add a new
+-	 * excluded constraint with universal wildcard
+-	 * (since the intersection of permitted is now empty). */
++	/* finishing touch: add universal excluded constraints for types where
++	 * both lists had constraints, but all intersections ended up empty */
+ 	for (type = 1; type <= GNUTLS_SAN_MAX; type++) {
+-		if (!type_bitmask_in(types_with_empty_intersection, type))
++		if (!type_bitmask_in(universal_exclude_needed, type))
+ 			continue;
+ 		_gnutls_hard_log(
+ 			"Adding universal excluded name constraint for type %d.\n",
+@@ -874,14 +878,24 @@ static int name_constraints_node_list_intersect(
+ 				goto cleanup;
+ 			}
+ 			break;
+-		default: // do nothing, at least one node was already moved in phase 1
+-			break;
++		default: /* unsupported type; should be unreacheable */
++			ret = gnutls_assert_val(GNUTLS_E_INTERNAL_ERROR);
++			goto cleanup;
+ 		}
+ 	}
+-	ret = GNUTLS_E_SUCCESS;
+ 
++	gnutls_free(permitted->data);
++	gnutls_free(permitted->sorted_view);
++	permitted->data = result.data;
++	permitted->sorted_view = NULL;
++	permitted->size = result.size;
++	permitted->capacity = result.capacity;
++	permitted->dirty = true;
++
++	result.data = NULL;
++	ret = GNUTLS_E_SUCCESS;
+ cleanup:
+-	gnutls_free(removed.data);
++	name_constraints_node_list_clear(&result);
+ 	return ret;
+ }
+ 
+@@ -1257,100 +1271,6 @@ static unsigned email_matches(const gnutls_datum_t *name,
+ 	return rel == NC_EQUAL || rel == NC_INCLUDED_BY;
+ }
+ 
+-/*-
+- * name_constraints_intersect_nodes:
+- * @nc1: name constraints node 1
+- * @nc2: name constraints node 2
+- * @_intersection: newly allocated node with intersected constraints,
+- *		 NULL if the intersection is empty
+- *
+- * Inspect 2 name constraints nodes (of possibly different types) and allocate
+- * a new node with intersection of given constraints.
+- *
+- * Returns: On success, %GNUTLS_E_SUCCESS (0) is returned, otherwise a negative error value.
+- -*/
+-static int name_constraints_intersect_nodes(
+-	gnutls_x509_name_constraints_t nc,
+-	const struct name_constraints_node_st *node1,
+-	const struct name_constraints_node_st *node2,
+-	struct name_constraints_node_st **_intersection)
+-{
+-	// presume empty intersection
+-	struct name_constraints_node_st *intersection = NULL;
+-	const struct name_constraints_node_st *to_copy = NULL;
+-	enum name_constraint_relation rel;
+-
+-	*_intersection = NULL;
+-
+-	if (node1->type != node2->type) {
+-		return GNUTLS_E_SUCCESS;
+-	}
+-	switch (node1->type) {
+-	case GNUTLS_SAN_DNSNAME:
+-		rel = compare_dns_names(&node1->name, &node2->name);
+-		switch (rel) {
+-		case NC_EQUAL: // equal means doesn't matter which one
+-		case NC_INCLUDES: // node2 is more specific
+-			to_copy = node2;
+-			break;
+-		case NC_INCLUDED_BY: // node1 is more specific
+-			to_copy = node1;
+-			break;
+-		case NC_SORTS_BEFORE: // no intersection
+-		case NC_SORTS_AFTER: // no intersection
+-			return GNUTLS_E_SUCCESS;
+-		}
+-		break;
+-	case GNUTLS_SAN_RFC822NAME:
+-		rel = compare_emails(&node1->name, &node2->name);
+-		switch (rel) {
+-		case NC_EQUAL: // equal means doesn't matter which one
+-		case NC_INCLUDES: // node2 is more specific
+-			to_copy = node2;
+-			break;
+-		case NC_INCLUDED_BY: // node1 is more specific
+-			to_copy = node1;
+-			break;
+-		case NC_SORTS_BEFORE: // no intersection
+-		case NC_SORTS_AFTER: // no intersection
+-			return GNUTLS_E_SUCCESS;
+-		}
+-		break;
+-	case GNUTLS_SAN_IPADDRESS:
+-		rel = compare_ip_ncs(&node1->name, &node2->name);
+-		switch (rel) {
+-		case NC_EQUAL: // equal means doesn't matter which one
+-		case NC_INCLUDES: // node2 is more specific
+-			to_copy = node2;
+-			break;
+-		case NC_INCLUDED_BY: // node1 is more specific
+-			to_copy = node1;
+-			break;
+-		case NC_SORTS_BEFORE: // no intersection
+-		case NC_SORTS_AFTER: // no intersection
+-			return GNUTLS_E_SUCCESS;
+-		}
+-		break;
+-	default:
+-		// for other types, we don't know how to do the intersection, assume empty
+-		return GNUTLS_E_SUCCESS;
+-	}
+-
+-	// copy existing node if applicable
+-	if (to_copy != NULL) {
+-		*_intersection = name_constraints_node_new(nc, to_copy->type,
+-							   to_copy->name.data,
+-							   to_copy->name.size);
+-		if (*_intersection == NULL)
+-			return gnutls_assert_val(GNUTLS_E_MEMORY_ERROR);
+-		intersection = *_intersection;
+-
+-		assert(intersection->name.data != NULL);
+-	}
+-
+-	return GNUTLS_E_SUCCESS;
+-}
+-
+ /*
+  * Returns: true if the certification is acceptable, and false otherwise.
+  */
+-- 
+2.43.0
+
diff --git a/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-2.patch b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-2.patch
new file mode 100644
index 00000000000..9066098a01f
--- /dev/null
+++ b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-2.patch
@@ -0,0 +1,66 @@
+From 0b2377dfccd99be641bf3f1a0de9f0dc8dc0d4b1 Mon Sep 17 00:00:00 2001
+From: Alexander Sosedkin <asosedkin@redhat.com>
+Date: Mon, 26 Jan 2026 19:02:27 +0100
+Subject: [PATCH] x509/name_constraints: use actual zeroes in universal exclude
+ IP NC
+
+Signed-off-by: Alexander Sosedkin <asosedkin@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/0b2377dfccd99be641bf3f1a0de9f0dc8dc0d4b1]
+CVE: CVE-2025-14831
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/x509/name_constraints.c | 9 +++++----
+ 1 file changed, 5 insertions(+), 4 deletions(-)
+
+diff --git a/lib/x509/name_constraints.c b/lib/x509/name_constraints.c
+index 2be6a2aaa6..d07482e3c9 100644
+--- a/lib/x509/name_constraints.c
++++ b/lib/x509/name_constraints.c
+@@ -61,7 +61,7 @@ struct gnutls_name_constraints_st {
+ 
+ static struct name_constraints_node_st *
+ name_constraints_node_new(gnutls_x509_name_constraints_t nc, unsigned type,
+-			  unsigned char *data, unsigned int size);
++			  const unsigned char *data, unsigned int size);
+ 
+ static int
+ name_constraints_node_list_add(struct name_constraints_node_list_st *list,
+@@ -285,7 +285,7 @@ static void name_constraints_node_free(struct name_constraints_node_st *node)
+  -*/
+ static struct name_constraints_node_st *
+ name_constraints_node_new(gnutls_x509_name_constraints_t nc, unsigned type,
+-			  unsigned char *data, unsigned int size)
++			  const unsigned char *data, unsigned int size)
+ {
+ 	struct name_constraints_node_st *tmp;
+ 	int ret;
+@@ -339,6 +339,7 @@ static int name_constraints_node_list_intersect(
+ 	struct name_constraints_node_list_st removed = { .data = NULL,
+ 							 .size = 0,
+ 							 .capacity = 0 };
++	static const unsigned char universal_ip[32] = { 0 };
+ 
+ 	/* temporary array to see, if we need to add universal excluded constraints
+ 	 * (see phase 3 for details)
+@@ -474,7 +475,7 @@ static int name_constraints_node_list_intersect(
+ 		case GNUTLS_SAN_IPADDRESS:
+ 			// add universal restricted range for IPv4
+ 			tmp = name_constraints_node_new(
+-				nc, GNUTLS_SAN_IPADDRESS, NULL, 8);
++				nc, GNUTLS_SAN_IPADDRESS, universal_ip, 8);
+ 			if (tmp == NULL) {
+ 				gnutls_assert();
+ 				ret = GNUTLS_E_MEMORY_ERROR;
+@@ -487,7 +488,7 @@ static int name_constraints_node_list_intersect(
+ 			}
+ 			// add universal restricted range for IPv6
+ 			tmp = name_constraints_node_new(
+-				nc, GNUTLS_SAN_IPADDRESS, NULL, 32);
++				nc, GNUTLS_SAN_IPADDRESS, universal_ip, 32);
+ 			if (tmp == NULL) {
+ 				gnutls_assert();
+ 				ret = GNUTLS_E_MEMORY_ERROR;
+-- 
+GitLab
+
diff --git a/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-3.patch b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-3.patch
new file mode 100644
index 00000000000..0d340325541
--- /dev/null
+++ b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-3.patch
@@ -0,0 +1,30 @@
+From 85d6348a30c74d4ee3710e0f4652f634eaad6914 Mon Sep 17 00:00:00 2001
+From: Alexander Sosedkin <asosedkin@redhat.com>
+Date: Mon, 26 Jan 2026 19:10:58 +0100
+Subject: [PATCH] tests/name-constraints-ip: stop swallowing errors...
+
+... now when it started to pass
+
+Signed-off-by: Alexander Sosedkin <asosedkin@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/85d6348a30c74d4ee3710e0f4652f634eaad6914]
+CVE: CVE-2025-14831
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ tests/name-constraints-ip.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/tests/name-constraints-ip.c b/tests/name-constraints-ip.c
+index 7a196088dc..a0cf172b7f 100644
+--- a/tests/name-constraints-ip.c
++++ b/tests/name-constraints-ip.c
+@@ -772,5 +772,5 @@ int main(int argc, char **argv)
+ 		cmocka_unit_test_setup_teardown(
+ 			check_ipv4v6_single_constraint_each, setup, teardown)
+ 	};
+-	cmocka_run_group_tests(tests, NULL, NULL);
++	return cmocka_run_group_tests(tests, NULL, NULL);
+ }
+-- 
+GitLab
+
diff --git a/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-4.patch b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-4.patch
new file mode 100644
index 00000000000..ed4a7da3c7a
--- /dev/null
+++ b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-4.patch
@@ -0,0 +1,45 @@
+From c28475413f82e1f34295d5c039f0c0a4ca2ee526 Mon Sep 17 00:00:00 2001
+From: Alexander Sosedkin <asosedkin@redhat.com>
+Date: Mon, 26 Jan 2026 20:14:33 +0100
+Subject: [PATCH] x509/name_constraints: reject some malformed domain names
+
+Signed-off-by: Alexander Sosedkin <asosedkin@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/c28475413f82e1f34295d5c039f0c0a4ca2ee526]
+CVE: CVE-2025-14831
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/x509/name_constraints.c | 17 +++++++++++++++++
+ 1 file changed, 17 insertions(+)
+
+diff --git a/lib/x509/name_constraints.c b/lib/x509/name_constraints.c
+index d07482e3c9..9783d92851 100644
+--- a/lib/x509/name_constraints.c
++++ b/lib/x509/name_constraints.c
+@@ -159,6 +159,23 @@ static int validate_name_constraints_node(gnutls_x509_subject_alt_name_t type,
+ 			return gnutls_assert_val(GNUTLS_E_MALFORMED_CIDR);
+ 	}
+ 
++	/* Validate DNS names and email addresses for malformed input */
++	if (type == GNUTLS_SAN_DNSNAME || type == GNUTLS_SAN_RFC822NAME) {
++		unsigned int i;
++		if (name->size == 0)
++			return GNUTLS_E_SUCCESS;
++
++		/* reject names with consecutive dots... */
++		for (i = 0; i + 1 < name->size; i++) {
++			if (name->data[i] == '.' && name->data[i + 1] == '.')
++				return gnutls_assert_val(
++					GNUTLS_E_ILLEGAL_PARAMETER);
++		}
++		/* ... or names consisting exclusively of dots */
++		if (name->size == 1 && name->data[0] == '.')
++			return gnutls_assert_val(GNUTLS_E_ILLEGAL_PARAMETER);
++	}
++
+ 	return GNUTLS_E_SUCCESS;
+ }
+ 
+-- 
+GitLab
+
diff --git a/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-5.patch b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-5.patch
new file mode 100644
index 00000000000..514ee3c70fc
--- /dev/null
+++ b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-5.patch
@@ -0,0 +1,205 @@
+From 6db7da7fcfe230f445b1edbb56e2a8346120c891 Mon Sep 17 00:00:00 2001
+From: Alexander Sosedkin <asosedkin@redhat.com>
+Date: Thu, 5 Feb 2026 13:22:10 +0100
+Subject: [PATCH] x509/name_constraints: name_constraints_node_add_{new,copy}
+
+Signed-off-by: Alexander Sosedkin <asosedkin@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/6db7da7fcfe230f445b1edbb56e2a8346120c891]
+CVE: CVE-2025-14831
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/x509/name_constraints.c | 112 ++++++++++++++++--------------------
+ 1 file changed, 51 insertions(+), 61 deletions(-)
+
+diff --git a/lib/x509/name_constraints.c b/lib/x509/name_constraints.c
+index 9783d92851..81035eef8f 100644
+--- a/lib/x509/name_constraints.c
++++ b/lib/x509/name_constraints.c
+@@ -86,6 +86,38 @@ name_constraints_node_list_add(struct name_constraints_node_list_st *list,
+ 	return 0;
+ }
+ 
++static int
++name_constraints_node_add_new(gnutls_x509_name_constraints_t nc,
++			      struct name_constraints_node_list_st *list,
++			      unsigned type, const unsigned char *data,
++			      unsigned int size)
++{
++	struct name_constraints_node_st *node;
++	int ret;
++	node = name_constraints_node_new(nc, type, data, size);
++	if (node == NULL) {
++		gnutls_assert();
++		return GNUTLS_E_MEMORY_ERROR;
++	}
++	ret = name_constraints_node_list_add(list, node);
++	if (ret < 0) {
++		gnutls_assert();
++		return ret;
++	}
++	return GNUTLS_E_SUCCESS;
++}
++
++static int
++name_constraints_node_add_copy(gnutls_x509_name_constraints_t nc,
++			       struct name_constraints_node_list_st *dest,
++			       const struct name_constraints_node_st *src)
++{
++	if (!src)
++		return gnutls_assert_val(GNUTLS_E_INTERNAL_ERROR);
++	return name_constraints_node_add_new(nc, dest, src->type,
++					     src->name.data, src->name.size);
++}
++
+ // for documentation see the implementation
+ static int name_constraints_intersect_nodes(
+ 	gnutls_x509_name_constraints_t nc,
+@@ -188,7 +220,6 @@ static int extract_name_constraints(gnutls_x509_name_constraints_t nc,
+ 	unsigned indx;
+ 	gnutls_datum_t tmp = { NULL, 0 };
+ 	unsigned int type;
+-	struct name_constraints_node_st *node;
+ 
+ 	for (indx = 1;; indx++) {
+ 		snprintf(tmpstr, sizeof(tmpstr), "%s.?%u.base", vstr, indx);
+@@ -231,15 +262,9 @@ static int extract_name_constraints(gnutls_x509_name_constraints_t nc,
+ 			goto cleanup;
+ 		}
+ 
+-		node = name_constraints_node_new(nc, type, tmp.data, tmp.size);
++		ret = name_constraints_node_add_new(nc, nodes, type, tmp.data,
++						    tmp.size);
+ 		_gnutls_free_datum(&tmp);
+-		if (node == NULL) {
+-			gnutls_assert();
+-			ret = GNUTLS_E_MEMORY_ERROR;
+-			goto cleanup;
+-		}
+-
+-		ret = name_constraints_node_list_add(nodes, node);
+ 		if (ret < 0) {
+ 			gnutls_assert();
+ 			goto cleanup;
+@@ -462,14 +487,7 @@ static int name_constraints_node_list_intersect(
+ 		// Beware: also copies nodes other than DNS, email, IP,
+ 		//       since their counterpart may have been moved in phase 1.
+ 		if (!used) {
+-			tmp = name_constraints_node_new(
+-				nc, t2->type, t2->name.data, t2->name.size);
+-			if (tmp == NULL) {
+-				gnutls_assert();
+-				ret = GNUTLS_E_MEMORY_ERROR;
+-				goto cleanup;
+-			}
+-			ret = name_constraints_node_list_add(permitted, tmp);
++			ret = name_constraints_node_add_copy(nc, permitted, t2);
+ 			if (ret < 0) {
+ 				gnutls_assert();
+ 				goto cleanup;
+@@ -491,27 +509,17 @@ static int name_constraints_node_list_intersect(
+ 		switch (type) {
+ 		case GNUTLS_SAN_IPADDRESS:
+ 			// add universal restricted range for IPv4
+-			tmp = name_constraints_node_new(
+-				nc, GNUTLS_SAN_IPADDRESS, universal_ip, 8);
+-			if (tmp == NULL) {
+-				gnutls_assert();
+-				ret = GNUTLS_E_MEMORY_ERROR;
+-				goto cleanup;
+-			}
+-			ret = name_constraints_node_list_add(excluded, tmp);
++			ret = name_constraints_node_add_new(
++				nc, excluded, GNUTLS_SAN_IPADDRESS,
++				universal_ip, 8);
+ 			if (ret < 0) {
+ 				gnutls_assert();
+ 				goto cleanup;
+ 			}
+ 			// add universal restricted range for IPv6
+-			tmp = name_constraints_node_new(
+-				nc, GNUTLS_SAN_IPADDRESS, universal_ip, 32);
+-			if (tmp == NULL) {
+-				gnutls_assert();
+-				ret = GNUTLS_E_MEMORY_ERROR;
+-				goto cleanup;
+-			}
+-			ret = name_constraints_node_list_add(excluded, tmp);
++			ret = name_constraints_node_add_new(
++				nc, excluded, GNUTLS_SAN_IPADDRESS,
++				universal_ip, 32);
+ 			if (ret < 0) {
+ 				gnutls_assert();
+ 				goto cleanup;
+@@ -519,13 +527,8 @@ static int name_constraints_node_list_intersect(
+ 			break;
+ 		case GNUTLS_SAN_DNSNAME:
+ 		case GNUTLS_SAN_RFC822NAME:
+-			tmp = name_constraints_node_new(nc, type, NULL, 0);
+-			if (tmp == NULL) {
+-				gnutls_assert();
+-				ret = GNUTLS_E_MEMORY_ERROR;
+-				goto cleanup;
+-			}
+-			ret = name_constraints_node_list_add(excluded, tmp);
++			ret = name_constraints_node_add_new(nc, excluded, type,
++							    NULL, 0);
+ 			if (ret < 0) {
+ 				gnutls_assert();
+ 				goto cleanup;
+@@ -547,20 +550,13 @@ static int name_constraints_node_list_concat(
+ 	struct name_constraints_node_list_st *nodes,
+ 	const struct name_constraints_node_list_st *nodes2)
+ {
+-	for (size_t i = 0; i < nodes2->size; i++) {
+-		const struct name_constraints_node_st *node = nodes2->data[i];
+-		struct name_constraints_node_st *tmp;
+-		int ret;
++	int ret;
+ 
+-		tmp = name_constraints_node_new(nc, node->type, node->name.data,
+-						node->name.size);
+-		if (tmp == NULL) {
+-			return gnutls_assert_val(GNUTLS_E_MEMORY_ERROR);
+-		}
+-		ret = name_constraints_node_list_add(nodes, tmp);
++	for (size_t i = 0; i < nodes2->size; i++) {
++		ret = name_constraints_node_add_copy(nc, nodes,
++						     nodes2->data[i]);
+ 		if (ret < 0) {
+-			name_constraints_node_free(tmp);
+-			return gnutls_assert_val(GNUTLS_E_MEMORY_ERROR);
++			return gnutls_assert_val(ret);
+ 		}
+ 	}
+ 
+@@ -690,7 +686,6 @@ static int name_constraints_add(gnutls_x509_name_constraints_t nc,
+ 				gnutls_x509_subject_alt_name_t type,
+ 				const gnutls_datum_t *name, unsigned permitted)
+ {
+-	struct name_constraints_node_st *tmp;
+ 	struct name_constraints_node_list_st *nodes;
+ 	int ret;
+ 
+@@ -700,15 +695,10 @@ static int name_constraints_add(gnutls_x509_name_constraints_t nc,
+ 
+ 	nodes = permitted ? &nc->permitted : &nc->excluded;
+ 
+-	tmp = name_constraints_node_new(nc, type, name->data, name->size);
+-	if (tmp == NULL)
+-		return gnutls_assert_val(GNUTLS_E_MEMORY_ERROR);
+-
+-	ret = name_constraints_node_list_add(nodes, tmp);
+-	if (ret < 0) {
+-		name_constraints_node_free(tmp);
++	ret = name_constraints_node_add_new(nc, nodes, type, name->data,
++					    name->size);
++	if (ret < 0)
+ 		return gnutls_assert_val(ret);
+-	}
+ 
+ 	return 0;
+ }
+-- 
+GitLab
+
diff --git a/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-6.patch b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-6.patch
new file mode 100644
index 00000000000..6970c6ccfe4
--- /dev/null
+++ b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-6.patch
@@ -0,0 +1,505 @@
+From 094accd3ebec17ead6c391757eaa18763b72d83f Mon Sep 17 00:00:00 2001
+From: Alexander Sosedkin <asosedkin@redhat.com>
+Date: Mon, 26 Jan 2026 20:16:36 +0100
+Subject: [PATCH] x509/name_constraints: introduce a rich comparator
+
+These are preparatory changes before implementing N * log N intersection
+over sorted lists of constraints.
+
+Signed-off-by: Alexander Sosedkin <asosedkin@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/094accd3ebec17ead6c391757eaa18763b72d83f]
+CVE: CVE-2025-14831
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/x509/name_constraints.c | 411 ++++++++++++++++++++++++++++--------
+ 1 file changed, 320 insertions(+), 91 deletions(-)
+
+diff --git a/lib/x509/name_constraints.c b/lib/x509/name_constraints.c
+index 81035eef8f..b5d732d0c5 100644
+--- a/lib/x509/name_constraints.c
++++ b/lib/x509/name_constraints.c
+@@ -39,6 +39,9 @@
+ #include "ip.h"
+ #include "ip-in-cidr.h"
+ #include "intprops.h"
++#include "minmax.h"
++
++#include <string.h>
+ 
+ #define MAX_NC_CHECKS (1 << 20)
+ 
+@@ -63,6 +66,282 @@ static struct name_constraints_node_st *
+ name_constraints_node_new(gnutls_x509_name_constraints_t nc, unsigned type,
+ 			  const unsigned char *data, unsigned int size);
+ 
++/* An enum for "rich" comparisons that not only let us sort name constraints,
++ * children-before-parent, but also subsume them during intersection. */
++enum name_constraint_relation {
++	NC_SORTS_BEFORE = -2, /* unrelated constraints */
++	NC_INCLUDED_BY = -1, /* nc1 is included by nc2 / children sort first */
++	NC_EQUAL = 0, /* exact match */
++	NC_INCLUDES = 1, /* nc1 includes nc2 / parents sort last */
++	NC_SORTS_AFTER = 2 /* unrelated constraints */
++};
++
++/* A helper to compare just a pair of strings with this rich comparison */
++static enum name_constraint_relation
++compare_strings(const void *n1, size_t n1_len, const void *n2, size_t n2_len)
++{
++	int r = memcmp(n1, n2, MIN(n1_len, n2_len));
++	if (r < 0)
++		return NC_SORTS_BEFORE;
++	if (r > 0)
++		return NC_SORTS_AFTER;
++	if (n1_len < n2_len)
++		return NC_SORTS_BEFORE;
++	if (n1_len > n2_len)
++		return NC_SORTS_AFTER;
++	return NC_EQUAL;
++}
++
++/* Rich-compare DNS names. Example order/relationships:
++ * z.x.a INCLUDED_BY x.a BEFORE y.a INCLUDED_BY a BEFORE x.b BEFORE y.b */
++static enum name_constraint_relation compare_dns_names(const gnutls_datum_t *n1,
++						       const gnutls_datum_t *n2)
++{
++	enum name_constraint_relation rel;
++	unsigned int i, j, i_end, j_end;
++
++	/* start from the end of each name */
++	i = i_end = n1->size;
++	j = j_end = n2->size;
++
++	/* skip the trailing dots for the comparison */
++	while (i && n1->data[i - 1] == '.')
++		i_end = i = i - 1;
++	while (j && n2->data[j - 1] == '.')
++		j_end = j = j - 1;
++
++	while (1) {
++		// rewind back to beginning or an after-dot position
++		while (i && n1->data[i - 1] != '.')
++			i--;
++		while (j && n2->data[j - 1] != '.')
++			j--;
++
++		rel = compare_strings(&n1->data[i], i_end - i, &n2->data[j],
++				      j_end - j);
++		if (rel == NC_SORTS_BEFORE) /* x.a BEFORE y.a */
++			return NC_SORTS_BEFORE;
++		if (rel == NC_SORTS_AFTER) /* y.a AFTER x.a */
++			return NC_SORTS_AFTER;
++		if (!i && j) /* x.a INCLUDES z.x.a */
++			return NC_INCLUDES;
++		if (i && !j) /* z.x.a INCLUDED_BY x.a */
++			return NC_INCLUDED_BY;
++
++		if (!i && !j) /* r == 0, we ran out of components to compare */
++			return NC_EQUAL;
++		/* r == 0, i && j: step back past a dot and keep comparing */
++		i_end = i = i - 1;
++		j_end = j = j - 1;
++
++		/* support for non-standard ".gr INCLUDES example.gr" [1] */
++		if (!i && j) /* .a INCLUDES x.a */
++			return NC_INCLUDES;
++		if (i && !j) /* x.a INCLUDED_BY .a */
++			return NC_INCLUDED_BY;
++	}
++}
++/* [1] https://mailarchive.ietf.org/arch/msg/saag/Bw6PtreW0G7aEG7SikfzKHES4VA */
++
++/* Rich-compare email name constraints. Example order/relationships:
++ * z@x.a INCLUDED_BY x.a BEFORE y.a INCLUDED_BY a BEFORE x@b BEFORE y@b */
++static enum name_constraint_relation compare_emails(const gnutls_datum_t *n1,
++						    const gnutls_datum_t *n2)
++{
++	enum name_constraint_relation domains_rel;
++	unsigned int i, j, i_end, j_end;
++	gnutls_datum_t d1, d2; /* borrow from n1 and n2 */
++
++	/* start from the end of each name */
++	i = i_end = n1->size;
++	j = j_end = n2->size;
++
++	/* rewind to @s to look for domains */
++	while (i && n1->data[i - 1] != '@')
++		i--;
++	d1.size = i_end - i;
++	d1.data = &n1->data[i];
++	while (j && n2->data[j - 1] != '@')
++		j--;
++	d2.size = j_end - j;
++	d2.data = &n2->data[j];
++
++	domains_rel = compare_dns_names(&d1, &d2);
++
++	/* email constraint semantics differ from DNS
++	 * DNS: x.a INCLUDED_BY a
++	 * Email: x.a INCLUDED_BY .a BEFORE a */
++	if (domains_rel == NC_INCLUDED_BY || domains_rel == NC_INCLUDES) {
++		bool d1_has_dot = (d1.size > 0 && d1.data[0] == '.');
++		bool d2_has_dot = (d2.size > 0 && d2.data[0] == '.');
++		/* a constraint without a dot is exact, excluding subdomains */
++		if (!d2_has_dot && domains_rel == NC_INCLUDED_BY)
++			domains_rel = NC_SORTS_BEFORE; /* x.a BEFORE a */
++		if (!d1_has_dot && domains_rel == NC_INCLUDES)
++			domains_rel = NC_SORTS_AFTER; /* a AFTER x.a */
++	}
++
++	if (!i && !j) { /* both are domains-only */
++		return domains_rel;
++	} else if (i && !j) { /* n1 is email, n2 is domain */
++		switch (domains_rel) {
++		case NC_SORTS_AFTER:
++			return NC_SORTS_AFTER;
++		case NC_SORTS_BEFORE:
++			return NC_SORTS_BEFORE;
++		case NC_INCLUDES: /* n2 is more specific, a@x.a AFTER z.x.a */
++			return NC_SORTS_AFTER;
++		case NC_EQUAL: /* subdomains match, z@x.a INCLUDED_BY x.a */
++		case NC_INCLUDED_BY: /* n1 is more specific */
++			return NC_INCLUDED_BY;
++		}
++	} else if (!i && j) { /* n1 is domain, n2 is email */
++		switch (domains_rel) {
++		case NC_SORTS_AFTER:
++			return NC_SORTS_AFTER;
++		case NC_SORTS_BEFORE:
++			return NC_SORTS_BEFORE;
++		case NC_INCLUDES: /* n2 is more specific, a AFTER z@x.a */
++			return NC_SORTS_AFTER;
++		case NC_EQUAL: /* subdomains match, x.a INCLUDES z@x.a */
++			return NC_INCLUDES;
++		case NC_INCLUDED_BY: /* n1 is more specific, x.a BEFORE z@a */
++			return NC_SORTS_BEFORE;
++		}
++	} else if (i && j) { /* both are emails */
++		switch (domains_rel) {
++		case NC_SORTS_AFTER:
++			return NC_SORTS_AFTER;
++		case NC_SORTS_BEFORE:
++			return NC_SORTS_BEFORE;
++		case NC_INCLUDES: // n2 is more specific
++			return NC_SORTS_AFTER;
++		case NC_INCLUDED_BY: // n1 is more specific
++			return NC_SORTS_BEFORE;
++		case NC_EQUAL: // only case when we need to look before the @
++			break; // see below for readability
++		}
++	}
++
++	/* i && j, both are emails, domain names match, compare up to @ */
++	return compare_strings(n1->data, i - 1, n2->data, j - 1);
++}
++
++/* Rich-compare IP address constraints. Example order/relationships:
++ * 10.0.0.0/24 INCLUDED_BY 10.0.0.0/16 BEFORE 1::1/128 INCLUDED_BY 1::1/127 */
++static enum name_constraint_relation compare_ip_ncs(const gnutls_datum_t *n1,
++						    const gnutls_datum_t *n2)
++{
++	unsigned int len, i;
++	int r;
++	const unsigned char *ip1, *ip2, *mask1, *mask2;
++	unsigned char masked11[16], masked22[16], masked12[16], masked21[16];
++
++	if (n1->size < n2->size)
++		return NC_SORTS_BEFORE;
++	if (n1->size > n2->size)
++		return NC_SORTS_AFTER;
++	len = n1->size / 2; /* 4 for IPv4, 16 for IPv6 */
++
++	/* data is a concatenation of prefix and mask */
++	ip1 = n1->data;
++	ip2 = n2->data;
++	mask1 = n1->data + len;
++	mask2 = n2->data + len;
++	for (i = 0; i < len; i++) {
++		masked11[i] = ip1[i] & mask1[i];
++		masked22[i] = ip2[i] & mask2[i];
++		masked12[i] = ip1[i] & mask2[i];
++		masked21[i] = ip2[i] & mask1[i];
++	}
++
++	r = memcmp(mask1, mask2, len);
++	if (r < 0 && !memcmp(masked11, masked21, len)) /* prefix1 < prefix2 */
++		return NC_INCLUDES; /* ip1 & mask1 == ip2 & mask1 */
++	if (r > 0 && !memcmp(masked12, masked22, len)) /* prefix1 > prefix2 */
++		return NC_INCLUDED_BY; /* ip1 & mask2 == ip2 & mask2 */
++
++	r = memcmp(masked11, masked22, len);
++	if (r < 0)
++		return NC_SORTS_BEFORE;
++	else if (r > 0)
++		return NC_SORTS_AFTER;
++	return NC_EQUAL;
++}
++
++static inline bool is_supported_type(unsigned type)
++{
++	return type == GNUTLS_SAN_DNSNAME || type == GNUTLS_SAN_RFC822NAME ||
++	       type == GNUTLS_SAN_IPADDRESS;
++}
++
++/* Universal comparison for name constraint nodes.
++ * Unsupported types sort before supported types to allow early handling.
++ * NULL represents end-of-list and sorts after everything else. */
++static enum name_constraint_relation
++compare_name_constraint_nodes(const struct name_constraints_node_st *n1,
++			      const struct name_constraints_node_st *n2)
++{
++	bool n1_supported, n2_supported;
++
++	if (!n1 && !n2)
++		return NC_EQUAL;
++	if (!n1)
++		return NC_SORTS_AFTER;
++	if (!n2)
++		return NC_SORTS_BEFORE;
++
++	n1_supported = is_supported_type(n1->type);
++	n2_supported = is_supported_type(n2->type);
++
++	/* unsupported types bubble up (sort first). intersect relies on this */
++	if (!n1_supported && n2_supported)
++		return NC_SORTS_BEFORE;
++	if (n1_supported && !n2_supported)
++		return NC_SORTS_AFTER;
++
++	/* next, sort by type */
++	if (n1->type < n2->type)
++		return NC_SORTS_BEFORE;
++	if (n1->type > n2->type)
++		return NC_SORTS_AFTER;
++
++	/* now look deeper */
++	switch (n1->type) {
++	case GNUTLS_SAN_DNSNAME:
++		return compare_dns_names(&n1->name, &n2->name);
++	case GNUTLS_SAN_RFC822NAME:
++		return compare_emails(&n1->name, &n2->name);
++	case GNUTLS_SAN_IPADDRESS:
++		return compare_ip_ncs(&n1->name, &n2->name);
++	default:
++		/* unsupported types: stable lexicographic order */
++		return compare_strings(n1->name.data, n1->name.size,
++				       n2->name.data, n2->name.size);
++	}
++}
++
++/* qsort-compatible wrapper */
++static int compare_name_constraint_nodes_qsort(const void *a, const void *b)
++{
++	const struct name_constraints_node_st *const *n1 = a;
++	const struct name_constraints_node_st *const *n2 = b;
++	enum name_constraint_relation rel;
++
++	rel = compare_name_constraint_nodes(*n1, *n2);
++	switch (rel) {
++	case NC_SORTS_BEFORE:
++	case NC_INCLUDED_BY:
++		return -1;
++	case NC_SORTS_AFTER:
++	case NC_INCLUDES:
++		return 1;
++	case NC_EQUAL:
++	default:
++		return 0;
++	}
++}
++
+ static int
+ name_constraints_node_list_add(struct name_constraints_node_list_st *list,
+ 			       struct name_constraints_node_st *node)
+@@ -420,9 +699,7 @@ static int name_constraints_node_list_intersect(
+ 			}
+ 		}
+ 
+-		if (found != NULL && (t->type == GNUTLS_SAN_DNSNAME ||
+-				      t->type == GNUTLS_SAN_RFC822NAME ||
+-				      t->type == GNUTLS_SAN_IPADDRESS)) {
++		if (found != NULL && is_supported_type(t->type)) {
+ 			/* move node from PERMITTED to REMOVED */
+ 			ret = name_constraints_node_list_add(&removed, t);
+ 			if (ret < 0) {
+@@ -827,61 +1104,14 @@ cleanup:
+ 	return ret;
+ }
+ 
+-static unsigned ends_with(const gnutls_datum_t *str,
+-			  const gnutls_datum_t *suffix)
+-{
+-	unsigned char *tree;
+-	unsigned int treelen;
+-
+-	if (suffix->size >= str->size)
+-		return 0;
+-
+-	tree = suffix->data;
+-	treelen = suffix->size;
+-	if ((treelen > 0) && (tree[0] == '.')) {
+-		tree++;
+-		treelen--;
+-	}
+-
+-	if (memcmp(str->data + str->size - treelen, tree, treelen) == 0 &&
+-	    str->data[str->size - treelen - 1] == '.')
+-		return 1; /* match */
+-
+-	return 0;
+-}
+-
+-static unsigned email_ends_with(const gnutls_datum_t *str,
+-				const gnutls_datum_t *suffix)
+-{
+-	if (suffix->size >= str->size) {
+-		return 0;
+-	}
+-
+-	if (suffix->size > 0 && memcmp(str->data + str->size - suffix->size,
+-				       suffix->data, suffix->size) != 0) {
+-		return 0;
+-	}
+-
+-	if (suffix->size > 1 && suffix->data[0] == '.') { /* .domain.com */
+-		return 1; /* match */
+-	} else if (str->data[str->size - suffix->size - 1] == '@') {
+-		return 1; /* match */
+-	}
+-
+-	return 0;
+-}
+-
+ static unsigned dnsname_matches(const gnutls_datum_t *name,
+ 				const gnutls_datum_t *suffix)
+ {
+ 	_gnutls_hard_log("matching %.*s with DNS constraint %.*s\n", name->size,
+ 			 name->data, suffix->size, suffix->data);
+ 
+-	if (suffix->size == name->size &&
+-	    memcmp(suffix->data, name->data, suffix->size) == 0)
+-		return 1; /* match */
+-
+-	return ends_with(name, suffix);
++	enum name_constraint_relation rel = compare_dns_names(name, suffix);
++	return rel == NC_EQUAL || rel == NC_INCLUDED_BY;
+ }
+ 
+ static unsigned email_matches(const gnutls_datum_t *name,
+@@ -890,11 +1120,8 @@ static unsigned email_matches(const gnutls_datum_t *name,
+ 	_gnutls_hard_log("matching %.*s with e-mail constraint %.*s\n",
+ 			 name->size, name->data, suffix->size, suffix->data);
+ 
+-	if (suffix->size == name->size &&
+-	    memcmp(suffix->data, name->data, suffix->size) == 0)
+-		return 1; /* match */
+-
+-	return email_ends_with(name, suffix);
++	enum name_constraint_relation rel = compare_emails(name, suffix);
++	return rel == NC_EQUAL || rel == NC_INCLUDED_BY;
+ }
+ 
+ /*-
+@@ -918,8 +1145,7 @@ static int name_constraints_intersect_nodes(
+ 	// presume empty intersection
+ 	struct name_constraints_node_st *intersection = NULL;
+ 	const struct name_constraints_node_st *to_copy = NULL;
+-	unsigned iplength = 0;
+-	unsigned byte;
++	enum name_constraint_relation rel;
+ 
+ 	*_intersection = NULL;
+ 
+@@ -928,32 +1154,49 @@ static int name_constraints_intersect_nodes(
+ 	}
+ 	switch (node1->type) {
+ 	case GNUTLS_SAN_DNSNAME:
+-		if (!dnsname_matches(&node2->name, &node1->name))
++		rel = compare_dns_names(&node1->name, &node2->name);
++		switch (rel) {
++		case NC_EQUAL: // equal means doesn't matter which one
++		case NC_INCLUDES: // node2 is more specific
++			to_copy = node2;
++			break;
++		case NC_INCLUDED_BY: // node1 is more specific
++			to_copy = node1;
++			break;
++		case NC_SORTS_BEFORE: // no intersection
++		case NC_SORTS_AFTER: // no intersection
+ 			return GNUTLS_E_SUCCESS;
+-		to_copy = node2;
++		}
+ 		break;
+ 	case GNUTLS_SAN_RFC822NAME:
+-		if (!email_matches(&node2->name, &node1->name))
++		rel = compare_emails(&node1->name, &node2->name);
++		switch (rel) {
++		case NC_EQUAL: // equal means doesn't matter which one
++		case NC_INCLUDES: // node2 is more specific
++			to_copy = node2;
++			break;
++		case NC_INCLUDED_BY: // node1 is more specific
++			to_copy = node1;
++			break;
++		case NC_SORTS_BEFORE: // no intersection
++		case NC_SORTS_AFTER: // no intersection
+ 			return GNUTLS_E_SUCCESS;
+-		to_copy = node2;
++		}
+ 		break;
+ 	case GNUTLS_SAN_IPADDRESS:
+-		if (node1->name.size != node2->name.size)
++		rel = compare_ip_ncs(&node1->name, &node2->name);
++		switch (rel) {
++		case NC_EQUAL: // equal means doesn't matter which one
++		case NC_INCLUDES: // node2 is more specific
++			to_copy = node2;
++			break;
++		case NC_INCLUDED_BY: // node1 is more specific
++			to_copy = node1;
++			break;
++		case NC_SORTS_BEFORE: // no intersection
++		case NC_SORTS_AFTER: // no intersection
+ 			return GNUTLS_E_SUCCESS;
+-		iplength = node1->name.size / 2;
+-		for (byte = 0; byte < iplength; byte++) {
+-			if (((node1->name.data[byte] ^
+-			      node2->name.data[byte]) // XOR of addresses
+-			     & node1->name.data[byte +
+-						iplength] // AND mask from nc1
+-			     & node2->name.data[byte +
+-						iplength]) // AND mask from nc2
+-			    != 0) {
+-				// CIDRS do not intersect
+-				return GNUTLS_E_SUCCESS;
+-			}
+ 		}
+-		to_copy = node2;
+ 		break;
+ 	default:
+ 		// for other types, we don't know how to do the intersection, assume empty
+@@ -970,20 +1213,6 @@ static int name_constraints_intersect_nodes(
+ 		intersection = *_intersection;
+ 
+ 		assert(intersection->name.data != NULL);
+-
+-		if (intersection->type == GNUTLS_SAN_IPADDRESS) {
+-			// make sure both IP addresses are correctly masked
+-			_gnutls_mask_ip(intersection->name.data,
+-					intersection->name.data + iplength,
+-					iplength);
+-			_gnutls_mask_ip(node1->name.data,
+-					node1->name.data + iplength, iplength);
+-			// update intersection, if necessary (we already know one is subset of other)
+-			for (byte = 0; byte < 2 * iplength; byte++) {
+-				intersection->name.data[byte] |=
+-					node1->name.data[byte];
+-			}
+-		}
+ 	}
+ 
+ 	return GNUTLS_E_SUCCESS;
+-- 
+GitLab
+
diff --git a/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-7.patch b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-7.patch
new file mode 100644
index 00000000000..8dfda528619
--- /dev/null
+++ b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-7.patch
@@ -0,0 +1,124 @@
+From bc62fbb946085527b4b1c02f337dd10c68c54690 Mon Sep 17 00:00:00 2001
+From: Alexander Sosedkin <asosedkin@redhat.com>
+Date: Wed, 4 Feb 2026 09:09:46 +0100
+Subject: [PATCH] x509/name_constraints: add sorted_view in preparation...
+
+... for actually using it later for performance gains.
+
+Signed-off-by: Alexander Sosedkin <asosedkin@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/bc62fbb946085527b4b1c02f337dd10c68c54690]
+CVE: CVE-2025-14831
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/x509/name_constraints.c | 62 ++++++++++++++++++++++++++++++-------
+ 1 file changed, 51 insertions(+), 11 deletions(-)
+
+diff --git a/lib/x509/name_constraints.c b/lib/x509/name_constraints.c
+index b5d732d0c5..41f30d13b9 100644
+--- a/lib/x509/name_constraints.c
++++ b/lib/x509/name_constraints.c
+@@ -54,6 +54,9 @@ struct name_constraints_node_list_st {
+ 	struct name_constraints_node_st **data;
+ 	size_t size;
+ 	size_t capacity;
++	/* sorted-on-demand view, valid only when dirty == false */
++	bool dirty;
++	struct name_constraints_node_st **sorted_view;
+ };
+ 
+ struct gnutls_name_constraints_st {
+@@ -342,6 +345,37 @@ static int compare_name_constraint_nodes_qsort(const void *a, const void *b)
+ 	}
+ }
+ 
++/* Bring the sorted view up to date with the list data; clear the dirty flag. */
++static int ensure_sorted(struct name_constraints_node_list_st *list)
++{
++	struct name_constraints_node_st **new_data;
++
++	if (!list->dirty)
++		return GNUTLS_E_SUCCESS;
++	if (!list->size) {
++		list->dirty = false;
++		return GNUTLS_E_SUCCESS;
++	}
++
++	/* reallocate sorted view to match current size */
++	new_data =
++		_gnutls_reallocarray(list->sorted_view, list->size,
++				     sizeof(struct name_constraints_node_st *));
++	if (!new_data)
++		return gnutls_assert_val(GNUTLS_E_MEMORY_ERROR);
++	list->sorted_view = new_data;
++
++	/* copy pointers and sort in-place */
++	memcpy(list->sorted_view, list->data,
++	       list->size * sizeof(struct name_constraints_node_st *));
++	qsort(list->sorted_view, list->size,
++	      sizeof(struct name_constraints_node_st *),
++	      compare_name_constraint_nodes_qsort);
++
++	list->dirty = false;
++	return GNUTLS_E_SUCCESS;
++}
++
+ static int
+ name_constraints_node_list_add(struct name_constraints_node_list_st *list,
+ 			       struct name_constraints_node_st *node)
+@@ -361,10 +395,23 @@ name_constraints_node_list_add(struct name_constraints_node_list_st *list,
+ 		list->capacity = new_capacity;
+ 		list->data = new_data;
+ 	}
++	list->dirty = true;
+ 	list->data[list->size++] = node;
+ 	return 0;
+ }
+ 
++static void
++name_constraints_node_list_clear(struct name_constraints_node_list_st *list)
++{
++	gnutls_free(list->data);
++	gnutls_free(list->sorted_view);
++	list->data = NULL;
++	list->sorted_view = NULL;
++	list->capacity = 0;
++	list->size = 0;
++	list->dirty = false;
++}
++
+ static int
+ name_constraints_node_add_new(gnutls_x509_name_constraints_t nc,
+ 			      struct name_constraints_node_list_st *list,
+@@ -711,6 +758,7 @@ static int name_constraints_node_list_intersect(
+ 				permitted->data[i] =
+ 					permitted->data[permitted->size - 1];
+ 			permitted->size--;
++			permitted->dirty = true;
+ 			continue;
+ 		}
+ 		i++;
+@@ -908,17 +956,9 @@ void _gnutls_x509_name_constraints_clear(gnutls_x509_name_constraints_t nc)
+ 		struct name_constraints_node_st *node = nc->nodes.data[i];
+ 		name_constraints_node_free(node);
+ 	}
+-	gnutls_free(nc->nodes.data);
+-	nc->nodes.capacity = 0;
+-	nc->nodes.size = 0;
+-
+-	gnutls_free(nc->permitted.data);
+-	nc->permitted.capacity = 0;
+-	nc->permitted.size = 0;
+-
+-	gnutls_free(nc->excluded.data);
+-	nc->excluded.capacity = 0;
+-	nc->excluded.size = 0;
++	name_constraints_node_list_clear(&nc->nodes);
++	name_constraints_node_list_clear(&nc->permitted);
++	name_constraints_node_list_clear(&nc->excluded);
+ }
+ 
+ /**
+-- 
+GitLab
+
diff --git a/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-8.patch b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-8.patch
new file mode 100644
index 00000000000..28d86ec6273
--- /dev/null
+++ b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-8.patch
@@ -0,0 +1,155 @@
+From 80db5e90fa18d3e34bb91dd027bdf76d31e93dcd Mon Sep 17 00:00:00 2001
+From: Alexander Sosedkin <asosedkin@redhat.com>
+Date: Wed, 4 Feb 2026 13:30:08 +0100
+Subject: [PATCH] x509/name_constraints: implement
+ name_constraints_node_list_union
+
+Signed-off-by: Alexander Sosedkin <asosedkin@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/80db5e90fa18d3e34bb91dd027bdf76d31e93dcd]
+CVE: CVE-2025-14831
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/x509/name_constraints.c | 98 ++++++++++++++++++++++++++++++++-----
+ 1 file changed, 86 insertions(+), 12 deletions(-)
+
+diff --git a/lib/x509/name_constraints.c b/lib/x509/name_constraints.c
+index 41f30d13b9..de20dd8ef4 100644
+--- a/lib/x509/name_constraints.c
++++ b/lib/x509/name_constraints.c
+@@ -41,6 +41,7 @@
+ #include "intprops.h"
+ #include "minmax.h"
+ 
++#include <assert.h>
+ #include <string.h>
+ 
+ #define MAX_NC_CHECKS (1 << 20)
+@@ -870,22 +871,95 @@ cleanup:
+ 	return ret;
+ }
+ 
+-static int name_constraints_node_list_concat(
+-	gnutls_x509_name_constraints_t nc,
+-	struct name_constraints_node_list_st *nodes,
+-	const struct name_constraints_node_list_st *nodes2)
++static int
++name_constraints_node_list_union(gnutls_x509_name_constraints_t nc,
++				 struct name_constraints_node_list_st *nodes,
++				 struct name_constraints_node_list_st *nodes2)
+ {
+ 	int ret;
++	size_t i = 0, j = 0;
++	struct name_constraints_node_st *nc1;
++	const struct name_constraints_node_st *nc2;
++	enum name_constraint_relation rel;
++	struct name_constraints_node_list_st result = { 0 };
++
++	if (nodes2->size == 0) /* nothing to do */
++		return GNUTLS_E_SUCCESS;
++
++	ret = ensure_sorted(nodes);
++	if (ret < 0) {
++		gnutls_assert();
++		goto cleanup;
++	}
++	ret = ensure_sorted(nodes2);
++	if (ret < 0) {
++		gnutls_assert();
++		goto cleanup;
++	}
++
++	/* traverse both lists in a single pass and merge them w/o duplicates */
++	while (i < nodes->size || j < nodes2->size) {
++		nc1 = (i < nodes->size) ? nodes->sorted_view[i] : NULL;
++		nc2 = (j < nodes2->size) ? nodes2->sorted_view[j] : NULL;
+ 
+-	for (size_t i = 0; i < nodes2->size; i++) {
+-		ret = name_constraints_node_add_copy(nc, nodes,
+-						     nodes2->data[i]);
++		rel = compare_name_constraint_nodes(nc1, nc2);
++		switch (rel) {
++		case NC_SORTS_BEFORE:
++			assert(nc1 != NULL); /* comparator-guaranteed */
++			ret = name_constraints_node_list_add(&result, nc1);
++			i++;
++			break;
++		case NC_SORTS_AFTER:
++			assert(nc2 != NULL); /* comparator-guaranteed */
++			ret = name_constraints_node_add_copy(nc, &result, nc2);
++			j++;
++			break;
++		case NC_INCLUDES: /* nc1 is broader, shallow-copy it */
++			assert(nc1 != NULL && nc2 != NULL); /* comparator */
++			ret = name_constraints_node_list_add(&result, nc1);
++			i++;
++			j++;
++			break;
++		case NC_INCLUDED_BY: /* nc2 is broader, deep-copy it */
++			assert(nc1 != NULL && nc2 != NULL); /* comparator */
++			ret = name_constraints_node_add_copy(nc, &result, nc2);
++			i++;
++			j++;
++			break;
++		case NC_EQUAL:
++			assert(nc1 != NULL && nc2 != NULL); /* loop condition */
++			ret = name_constraints_node_list_add(&result, nc1);
++			i++;
++			j++;
++			break;
++		}
+ 		if (ret < 0) {
+-			return gnutls_assert_val(ret);
++			gnutls_assert();
++			goto cleanup;
+ 		}
+ 	}
+ 
+-	return 0;
++	gnutls_free(nodes->data);
++	gnutls_free(nodes->sorted_view);
++	nodes->data = result.data;
++	nodes->sorted_view = NULL;
++	nodes->size = result.size;
++	nodes->capacity = result.capacity;
++	nodes->dirty = true;
++	/* since we know it's sorted, populate sorted_view almost for free */
++	nodes->sorted_view = gnutls_calloc(
++		nodes->size, sizeof(struct name_constraints_node_st *));
++	if (!nodes->sorted_view)
++		return GNUTLS_E_SUCCESS; /* we tried, no harm done */
++	memcpy(nodes->sorted_view, nodes->data,
++	       nodes->size * sizeof(struct name_constraints_node_st *));
++	nodes->dirty = false;
++
++	result.data = NULL;
++	return GNUTLS_E_SUCCESS;
++cleanup:
++	name_constraints_node_list_clear(&result);
++	return gnutls_assert_val(ret);
+ }
+ 
+ /**
+@@ -1026,7 +1100,7 @@ static int name_constraints_add(gnutls_x509_name_constraints_t nc,
+  * @nc2: The name constraints to be merged with
+  *
+  * This function will merge the provided name constraints structures
+- * as per RFC5280 p6.1.4. That is, the excluded constraints will be appended,
++ * as per RFC5280 p6.1.4. That is, the excluded constraints will be unioned,
+  * and permitted will be intersected. The intersection assumes that @nc
+  * is the root CA constraints.
+  *
+@@ -1048,8 +1122,8 @@ int _gnutls_x509_name_constraints_merge(gnutls_x509_name_constraints_t nc,
+ 		return ret;
+ 	}
+ 
+-	ret = name_constraints_node_list_concat(nc, &nc->excluded,
+-						&nc2->excluded);
++	ret = name_constraints_node_list_union(nc, &nc->excluded,
++					       &nc2->excluded);
+ 	if (ret < 0) {
+ 		gnutls_assert();
+ 		return ret;
+-- 
+GitLab
+
diff --git a/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-9.patch b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-9.patch
new file mode 100644
index 00000000000..ed6be93c54b
--- /dev/null
+++ b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-9.patch
@@ -0,0 +1,110 @@
+From d0ac999620c8c0aeb6939e1e92d884ca8e40b759 Mon Sep 17 00:00:00 2001
+From: Alexander Sosedkin <asosedkin@redhat.com>
+Date: Wed, 4 Feb 2026 18:31:37 +0100
+Subject: [PATCH] x509/name_constraints: make types_with_empty_intersection a
+ bitmask
+
+Signed-off-by: Alexander Sosedkin <asosedkin@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/d0ac999620c8c0aeb6939e1e92d884ca8e40b759]
+CVE: CVE-2025-14831
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/x509/name_constraints.c | 39 +++++++++++++++++++++++++++----------
+ 1 file changed, 29 insertions(+), 10 deletions(-)
+
+diff --git a/lib/x509/name_constraints.c b/lib/x509/name_constraints.c
+index de20dd8ef4..1d78d1bc50 100644
+--- a/lib/x509/name_constraints.c
++++ b/lib/x509/name_constraints.c
+@@ -275,6 +275,7 @@ static enum name_constraint_relation compare_ip_ncs(const gnutls_datum_t *n1,
+ 
+ static inline bool is_supported_type(unsigned type)
+ {
++	/* all of these should be under GNUTLS_SAN_MAX (intersect bitmasks) */
+ 	return type == GNUTLS_SAN_DNSNAME || type == GNUTLS_SAN_RFC822NAME ||
+ 	       type == GNUTLS_SAN_IPADDRESS;
+ }
+@@ -683,6 +684,21 @@ name_constraints_node_new(gnutls_x509_name_constraints_t nc, unsigned type,
+ 	return tmp;
+ }
+ 
++static int
++name_constraints_node_list_union(gnutls_x509_name_constraints_t nc,
++				 struct name_constraints_node_list_st *nodes,
++				 struct name_constraints_node_list_st *nodes2);
++
++#define type_bitmask_t uint8_t /* increase if GNUTLS_SAN_MAX grows */
++#define type_bitmask_set(mask, t) ((mask) |= (1u << (t)))
++#define type_bitmask_clr(mask, t) ((mask) &= ~(1u << (t)))
++#define type_bitmask_in(mask, t) ((mask) & (1u << (t)))
++/* C99-compatible compile-time assertions; gnutls_int.h undefines verify */
++typedef char assert_san_max[(GNUTLS_SAN_MAX < 8) ? 1 : -1];
++typedef char assert_dnsname[(GNUTLS_SAN_DNSNAME <= GNUTLS_SAN_MAX) ? 1 : -1];
++typedef char assert_rfc822[(GNUTLS_SAN_RFC822NAME <= GNUTLS_SAN_MAX) ? 1 : -1];
++typedef char assert_ipaddr[(GNUTLS_SAN_IPADDRESS <= GNUTLS_SAN_MAX) ? 1 : -1];
++
+ /*-
+  * @brief name_constraints_node_list_intersect:
+  * @nc: %gnutls_x509_name_constraints_t
+@@ -710,12 +726,9 @@ static int name_constraints_node_list_intersect(
+ 							 .capacity = 0 };
+ 	static const unsigned char universal_ip[32] = { 0 };
+ 
+-	/* temporary array to see, if we need to add universal excluded constraints
+-	 * (see phase 3 for details)
+-	 * indexed directly by (gnutls_x509_subject_alt_name_t enum - 1) */
+-	unsigned char types_with_empty_intersection[GNUTLS_SAN_MAX];
+-	memset(types_with_empty_intersection, 0,
+-	       sizeof(types_with_empty_intersection));
++	/* bitmask to see if we need to add universal excluded constraints
++	 * (see phase 3 for details) */
++	type_bitmask_t types_with_empty_intersection = 0;
+ 
+ 	if (permitted->size == 0 || permitted2->size == 0)
+ 		return 0;
+@@ -741,7 +754,8 @@ static int name_constraints_node_list_intersect(
+ 				// note the possibility of empty intersection for this type
+ 				// if we add something to the intersection in phase 2,
+ 				// we will reset this flag back to 0 then
+-				types_with_empty_intersection[t->type - 1] = 1;
++				type_bitmask_set(types_with_empty_intersection,
++						 t->type);
+ 				found = t2;
+ 				break;
+ 			}
+@@ -798,8 +812,8 @@ static int name_constraints_node_list_intersect(
+ 						GNUTLS_E_INTERNAL_ERROR);
+ 				}
+ 				// we will not add universal excluded constraint for this type
+-				types_with_empty_intersection[tmp->type - 1] =
+-					0;
++				type_bitmask_clr(types_with_empty_intersection,
++						 tmp->type);
+ 				// add intersection node to PERMITTED
+ 				ret = name_constraints_node_list_add(permitted,
+ 								     tmp);
+@@ -827,7 +841,7 @@ static int name_constraints_node_list_intersect(
+ 	 * excluded constraint with universal wildcard
+ 	 * (since the intersection of permitted is now empty). */
+ 	for (type = 1; type <= GNUTLS_SAN_MAX; type++) {
+-		if (types_with_empty_intersection[type - 1] == 0)
++		if (!type_bitmask_in(types_with_empty_intersection, type))
+ 			continue;
+ 		_gnutls_hard_log(
+ 			"Adding universal excluded name constraint for type %d.\n",
+@@ -871,6 +885,11 @@ cleanup:
+ 	return ret;
+ }
+ 
++#undef type_bitmask_t
++#undef type_bitmask_set
++#undef type_bitmask_clr
++#undef type_bitmask_in
++
+ static int
+ name_constraints_node_list_union(gnutls_x509_name_constraints_t nc,
+ 				 struct name_constraints_node_list_st *nodes,
+-- 
+GitLab
+
diff --git a/meta/recipes-support/gnutls/gnutls_3.8.10.bb b/meta/recipes-support/gnutls/gnutls_3.8.10.bb
index 5430ee20856..ce9fdce3f46 100644
--- a/meta/recipes-support/gnutls/gnutls_3.8.10.bb
+++ b/meta/recipes-support/gnutls/gnutls_3.8.10.bb
@@ -24,6 +24,16 @@ SRC_URI = "https://www.gnupg.org/ftp/gcrypt/gnutls/v${SHRT_VER}/gnutls-${PV}.tar
            file://run-ptest \
            file://Add-ptest-support.patch \
            file://CVE-2025-9820.patch \
+           file://CVE-2025-14831-1.patch \
+           file://CVE-2025-14831-2.patch \
+           file://CVE-2025-14831-3.patch \
+           file://CVE-2025-14831-4.patch \
+           file://CVE-2025-14831-5.patch \
+           file://CVE-2025-14831-6.patch \
+           file://CVE-2025-14831-7.patch \
+           file://CVE-2025-14831-8.patch \
+           file://CVE-2025-14831-9.patch \
+           file://CVE-2025-14831-10.patch \
            "
 
 SRC_URI[sha256sum] = "db7fab7cce791e7727ebbef2334301c821d79a550ec55c9ef096b610b03eb6b7"


^ permalink raw reply related	[flat|nested] 17+ messages in thread
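As an editorial aside, the phase 1/2/3 flow of the bitmask helpers introduced by this patch can be sketched in a few lines. This is a standalone illustration only: the SAN type values below are stand-ins, not the real gnutls enum values, and the helper names merely echo the patch's macros.

```python
# Illustrative model of the patch's type_bitmask_* helpers; the SAN
# type values are stand-ins, not the real gnutls enum values.
SAN_DNSNAME, SAN_RFC822NAME = 1, 2
SAN_MAX = 7
assert SAN_MAX < 8  # mirrors the patch: a uint8_t holds flags for types 1..7

def bitmask_set(mask, t):
    return mask | (1 << t)

def bitmask_clr(mask, t):
    return mask & ~(1 << t)

def bitmask_in(mask, t):
    return bool(mask & (1 << t))

# Phase 1: note an empty intersection for two types.
mask = 0
mask = bitmask_set(mask, SAN_DNSNAME)
mask = bitmask_set(mask, SAN_RFC822NAME)

# Phase 2: an intersection node was found for DNSNAME, so reset its flag.
mask = bitmask_clr(mask, SAN_DNSNAME)

# Phase 3: only types still flagged get a universal excluded constraint.
flagged = [t for t in range(1, SAN_MAX + 1) if bitmask_in(mask, t)]
print(flagged)  # [2], i.e. only SAN_RFC822NAME
```

The single byte replaces the previous `unsigned char[GNUTLS_SAN_MAX]` scratch array while keeping the same per-type set/clear/test semantics.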

* [OE-core][whinlatter v2 10/16] README: Add whinlatter subject-prefix to git-send-email suggestion
  2026-03-06  7:22 [OE-core][whinlatter v2 00/16] Patch review Yoann Congal
                   ` (8 preceding siblings ...)
  2026-03-06  7:22 ` [OE-core][whinlatter v2 09/16] gnutls: Fix CVE-2025-14831 Yoann Congal
@ 2026-03-06  7:22 ` Yoann Congal
  2026-03-06  7:22 ` [OE-core][whinlatter v2 11/16] b4-config: add send-prefixes for whinlatter Yoann Congal
                   ` (5 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Yoann Congal @ 2026-03-06  7:22 UTC (permalink / raw)
  To: openembedded-core

From: Yoann Congal <yoann.congal@smile.fr>

This should help new users format their first stable-branch patches correctly.

Signed-off-by: Yoann Congal <yoann.congal@smile.fr>
---
 README.OE-Core.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.OE-Core.md b/README.OE-Core.md
index 7187fb94be3..35a9866db7b 100644
--- a/README.OE-Core.md
+++ b/README.OE-Core.md
@@ -22,7 +22,7 @@ As a quick guide, patches should be sent to openembedded-core@lists.openembedded
 The git command to do that would be:
 
 ```
-git send-email -M -1 --to openembedded-core@lists.openembedded.org
+git send-email -M -1 --to openembedded-core@lists.openembedded.org --subject-prefix='whinlatter][PATCH'
 ```
 
 Mailing list:


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [OE-core][whinlatter v2 11/16] b4-config: add send-prefixes for whinlatter
  2026-03-06  7:22 [OE-core][whinlatter v2 00/16] Patch review Yoann Congal
                   ` (9 preceding siblings ...)
  2026-03-06  7:22 ` [OE-core][whinlatter v2 10/16] README: Add whinlatter subject-prefix to git-send-email suggestion Yoann Congal
@ 2026-03-06  7:22 ` Yoann Congal
  2026-03-06  7:22 ` [OE-core][whinlatter v2 12/16] python3: skip flaky test_default_timeout test Yoann Congal
                   ` (4 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Yoann Congal @ 2026-03-06  7:22 UTC (permalink / raw)
  To: openembedded-core

From: Yoann Congal <yoann.congal@smile.fr>

This should help new users format their first stable-branch patches correctly.

Reviewed-by: Quentin Schulz <quentin.schulz@cherry.de>
Signed-off-by: Yoann Congal <yoann.congal@smile.fr>
---
 .b4-config | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.b4-config b/.b4-config
index 41094218531..82d6c756e5e 100644
--- a/.b4-config
+++ b/.b4-config
@@ -1,3 +1,4 @@
 [b4]
   send-series-to = openembedded-core@lists.openembedded.org
   prep-pre-flight-checks = disable-needs-auto-to-cc, disable-needs-checking
+  send-prefixes = whinlatter


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [OE-core][whinlatter v2 12/16] python3: skip flaky test_default_timeout test
  2026-03-06  7:22 [OE-core][whinlatter v2 00/16] Patch review Yoann Congal
                   ` (10 preceding siblings ...)
  2026-03-06  7:22 ` [OE-core][whinlatter v2 11/16] b4-config: add send-prefixes for whinlatter Yoann Congal
@ 2026-03-06  7:22 ` Yoann Congal
  2026-03-06  7:22 ` [OE-core][whinlatter v2 13/16] create-pull-request: Keep commit hash to be pulled in cover email Yoann Congal
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Yoann Congal @ 2026-03-06  7:22 UTC (permalink / raw)
  To: openembedded-core

From: Antonin Godard <antonin.godard@bootlin.com>

We have been observing intermittent issues with this test on the Autobuilder:
https://bugzilla.yoctoproject.org/show_bug.cgi?id=15885

This is probably due to the Autobuilder being heavily loaded at the time
of the test.

The logs on the ticket above do not allow us to know which of the two
"test_default_timeout" tests is failing, so disable both with
self.skipTest().

Excerpt from `ptest-runner python3`:

  test_default_timeout (test.test_multiprocessing_fork.test_manager.WithManagerTestBarrier.test_default_timeout)
  SKIP: Test the barrier's default timeout 'skip flaky timeout test'

[YOCTO #15885]

Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 9382dee2a59a359f38e03179d01fce47c2d86372)
Signed-off-by: Yoann Congal <yoann.congal@smile.fr>
---
 ...kip-flaky-test_default_timeout-tests.patch | 49 +++++++++++++++++++
 .../python/python3_3.13.11.bb                 |  1 +
 2 files changed, 50 insertions(+)
 create mode 100644 meta/recipes-devtools/python/python3/0001-Skip-flaky-test_default_timeout-tests.patch

diff --git a/meta/recipes-devtools/python/python3/0001-Skip-flaky-test_default_timeout-tests.patch b/meta/recipes-devtools/python/python3/0001-Skip-flaky-test_default_timeout-tests.patch
new file mode 100644
index 00000000000..4e1bd833b46
--- /dev/null
+++ b/meta/recipes-devtools/python/python3/0001-Skip-flaky-test_default_timeout-tests.patch
@@ -0,0 +1,49 @@
+From baf2dda48e51fcb17a716e52cc5c4e162a6bb7d3 Mon Sep 17 00:00:00 2001
+From: Antonin Godard <antonin.godard@bootlin.com>
+Date: Mon, 19 Jan 2026 11:38:36 +0100
+Subject: [PATCH] Skip flaky test_default_timeout tests
+
+We have been observing issues with this test on the Autobuilder:
+https://bugzilla.yoctoproject.org/show_bug.cgi?id=15885
+
+The logs on the ticket above do not allow us to know which of the two
+"test_default_timeout" tests is failing, so disable both with
+self.skipTest().
+
+Excerpt from `ptest-runner python3`:
+
+  test_default_timeout (test.test_multiprocessing_fork.test_manager.WithManagerTestBarrier.test_default_timeout)
+  SKIP: Test the barrier's default timeout 'skip flaky timeout test'
+
+Upstream-Status: Inappropriate [OE specific, but might be related to https://github.com/python/cpython/issues/129266#issuecomment-2613058866]
+
+Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
+---
+ Lib/test/_test_multiprocessing.py | 1 +
+ Lib/test/lock_tests.py            | 1 +
+ 2 files changed, 2 insertions(+)
+
+diff --git a/Lib/test/_test_multiprocessing.py b/Lib/test/_test_multiprocessing.py
+index e8064e7..9ab1f9e 100644
+--- a/Lib/test/_test_multiprocessing.py
++++ b/Lib/test/_test_multiprocessing.py
+@@ -2259,6 +2259,7 @@ class _TestBarrier(BaseTestCase):
+         """
+         Test the barrier's default timeout
+         """
++        self.skipTest('skip flaky timeout test')
+         barrier = self.Barrier(self.N, timeout=0.5)
+         results = self.DummyList()
+         self.run_threads(self._test_default_timeout_f, (barrier, results))
+diff --git a/Lib/test/lock_tests.py b/Lib/test/lock_tests.py
+index 8c8f890..43e9b90 100644
+--- a/Lib/test/lock_tests.py
++++ b/Lib/test/lock_tests.py
+@@ -1165,6 +1165,7 @@ class BarrierTests(BaseTestCase):
+         """
+         Test the barrier's default timeout
+         """
++        self.skipTest('skip flaky timeout test')
+         timeout = 0.100
+         barrier = self.barriertype(2, timeout=timeout)
+         def f():
diff --git a/meta/recipes-devtools/python/python3_3.13.11.bb b/meta/recipes-devtools/python/python3_3.13.11.bb
index 2fcfd4aba19..2bc2389b7e4 100644
--- a/meta/recipes-devtools/python/python3_3.13.11.bb
+++ b/meta/recipes-devtools/python/python3_3.13.11.bb
@@ -30,6 +30,7 @@ SRC_URI = "http://www.python.org/ftp/python/${PV}/Python-${PV}.tar.xz \
            file://0001-test_active_children-skip-problematic-test.patch \
            file://0001-test_readline-skip-limited-history-test.patch \
            file://0001-Generate-data-for-OpenSSL-3.4-and-add-it-to-multissl.patch \
+           file://0001-Skip-flaky-test_default_timeout-tests.patch \
            "
 SRC_URI:append:class-native = " \
            file://0001-Lib-sysconfig.py-use-prefix-value-from-build-configu.patch \


^ permalink raw reply related	[flat|nested] 17+ messages in thread
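As an editorial aside, the `self.skipTest()` mechanism the backport relies on can be sketched with a minimal, self-contained unittest case; the class name below is an illustrative stand-in for the patched barrier tests.

```python
import unittest

class BarrierTimeoutSketch(unittest.TestCase):
    """Illustrative stand-in for the patched barrier tests."""

    def test_default_timeout(self):
        # Mirrors the backport: bail out before the timing-sensitive
        # body runs, so a heavily loaded host cannot fail the test.
        self.skipTest('skip flaky timeout test')
        self.fail('never reached')  # nothing after skipTest() executes

suite = unittest.TestLoader().loadTestsFromTestCase(BarrierTimeoutSketch)
result = unittest.TextTestRunner(verbosity=0).run(suite)
# skipTest() raises unittest.SkipTest, so the run counts one skip
# rather than one failure.
print(result.wasSuccessful(), len(result.skipped))  # True 1
```

This is why ptest-runner reports the test as `SKIP: ... 'skip flaky timeout test'` instead of a failure.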

* [OE-core][whinlatter v2 13/16] create-pull-request: Keep commit hash to be pulled in cover email
  2026-03-06  7:22 [OE-core][whinlatter v2 00/16] Patch review Yoann Congal
                   ` (11 preceding siblings ...)
  2026-03-06  7:22 ` [OE-core][whinlatter v2 12/16] python3: skip flaky test_default_timeout test Yoann Congal
@ 2026-03-06  7:22 ` Yoann Congal
  2026-03-06  7:22 ` [OE-core][whinlatter v2 14/16] avahi: Remove a reference to the rejected CVE-2021-36217 Yoann Congal
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Yoann Congal @ 2026-03-06  7:22 UTC (permalink / raw)
  To: openembedded-core

From: Paul Barker <paul@pbarker.dev>

The cover email mangling in create-pull-request was cutting off the
actual commit hash to be pulled, making it difficult to verify that the
changes a maintainer merges exactly match those intended by the pull
request author.

The extra lines we want to include are, for example, from a recent
whinlatter stable branch PR:

    for you to fetch changes up to 6c4c6d39ea3202d756acc13f8ce81b114a468541:

      cups: upgrade from 2.4.14 to 2.4.15 (2025-12-29 09:49:31 -0800)

Signed-off-by: Paul Barker <paul@pbarker.dev>
Signed-off-by: Mathieu Dubois-Briand <mathieu.dubois-briand@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit c78f5ae4a5ba3675b78cc226feb7b9fbbfd8da19)
Signed-off-by: Yoann Congal <yoann.congal@smile.fr>
---
 scripts/create-pull-request | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/create-pull-request b/scripts/create-pull-request
index 885105fab3d..5c4414ecd5f 100755
--- a/scripts/create-pull-request
+++ b/scripts/create-pull-request
@@ -219,7 +219,7 @@ fi
 
 # The cover letter already has a diffstat, remove it from the pull-msg
 # before inserting it.
-sed -n "0,\#$REMOTE_URL# p" "$PM" | sed -i "/BLURB HERE/ r /dev/stdin" "$CL"
+sed -n "0,\#^----------------------------------------------------------------# p" "$PM" | sed -i "/BLURB HERE/ r /dev/stdin" "$CL"
 rm "$PM"
 
 # If this is an RFC, make that clear in the cover letter


^ permalink raw reply related	[flat|nested] 17+ messages in thread
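As an editorial aside, the new sed address can be exercised against a synthetic pull-request message. Everything below is an illustrative stand-in for the script's `$PM` temporary file; the `0,addr2` address form is a GNU sed extension, so the sketch assumes GNU sed is on PATH.

```python
# Exercise the new sed expression on a fake "git request-pull" message.
import os
import subprocess
import tempfile

RULE = '-' * 64  # the separator line git request-pull prints
message = '\n'.join([
    'for you to fetch changes up to cfc0e446a1ad57e710d2c82914211f9bcdc4a752:',
    '',
    '  python3-urllib3: patch CVE-2025-66471 (2026-03-06 00:48:43 +0100)',
    '',
    RULE,
    ' scripts/create-pull-request | 2 +-',
    ' 1 file changed, 1 insertion(+), 1 deletion(-)',
    '',
])
with tempfile.NamedTemporaryFile('w', delete=False) as pm:
    pm.write(message)

# \#regexp# is sed's alternate address delimiter; "0,\#...# p" prints
# from the start up to and including the first line matching the rule.
# The "fetch changes up to <hash>" lines therefore survive, while the
# diffstat below the rule (already present in the cover letter) is cut.
out = subprocess.run(
    ['sed', '-n', f'0,\\#^{RULE}# p', pm.name],
    capture_output=True, text=True, check=True,
).stdout
os.unlink(pm.name)
print('cfc0e446a1ad' in out, 'create-pull-request' in out)  # True False
```

Matching the fixed separator instead of `$REMOTE_URL` is what lets the commit hash line through to the cover email.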

* [OE-core][whinlatter v2 14/16] avahi: Remove a reference to the rejected CVE-2021-36217
  2026-03-06  7:22 [OE-core][whinlatter v2 00/16] Patch review Yoann Congal
                   ` (12 preceding siblings ...)
  2026-03-06  7:22 ` [OE-core][whinlatter v2 13/16] create-pull-request: Keep commit hash to be pulled in cover email Yoann Congal
@ 2026-03-06  7:22 ` Yoann Congal
  2026-03-06  7:22 ` [OE-core][whinlatter v2 15/16] lz4: Remove a reference to the rejected CVE-2025-62813 Yoann Congal
  2026-03-06  7:22 ` [OE-core][whinlatter v2 16/16] python3-urllib3: patch CVE-2025-66471 Yoann Congal
  15 siblings, 0 replies; 17+ messages in thread
From: Yoann Congal @ 2026-03-06  7:22 UTC (permalink / raw)
  To: openembedded-core

From: Benjamin Robin (Schneider Electric) <benjamin.robin@bootlin.com>

CVE-2021-36217 has been rejected and should no longer be referenced.
CVE-2021-36217 is a duplicate of CVE-2021-3502 which is already
referenced in the local-ping.patch.

The CVE database indicates the following reason:
  ConsultIDs: CVE-2021-3502. Reason: This candidate is a duplicate of
  CVE-2021-3502. Notes: All CVE users should reference CVE-2021-3502
  instead of this candidate. All references and descriptions in this
  candidate have been removed to prevent accidental usage.

Signed-off-by: Benjamin Robin (Schneider Electric) <benjamin.robin@bootlin.com>
Signed-off-by: Mathieu Dubois-Briand <mathieu.dubois-briand@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit bf41240132e2efa6b46aab46290eed9c53e312e9)
Signed-off-by: Yoann Congal <yoann.congal@smile.fr>
---
 meta/recipes-connectivity/avahi/files/local-ping.patch | 1 -
 1 file changed, 1 deletion(-)

diff --git a/meta/recipes-connectivity/avahi/files/local-ping.patch b/meta/recipes-connectivity/avahi/files/local-ping.patch
index 29c192d296e..8f102815df0 100644
--- a/meta/recipes-connectivity/avahi/files/local-ping.patch
+++ b/meta/recipes-connectivity/avahi/files/local-ping.patch
@@ -1,4 +1,3 @@
-CVE: CVE-2021-36217
 CVE: CVE-2021-3502
 Upstream-Status: Backport
 Signed-off-by: Ross Burton <ross.burton@arm.com>


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [OE-core][whinlatter v2 15/16] lz4: Remove a reference to the rejected CVE-2025-62813
  2026-03-06  7:22 [OE-core][whinlatter v2 00/16] Patch review Yoann Congal
                   ` (13 preceding siblings ...)
  2026-03-06  7:22 ` [OE-core][whinlatter v2 14/16] avahi: Remove a reference to the rejected CVE-2021-36217 Yoann Congal
@ 2026-03-06  7:22 ` Yoann Congal
  2026-03-06  7:22 ` [OE-core][whinlatter v2 16/16] python3-urllib3: patch CVE-2025-66471 Yoann Congal
  15 siblings, 0 replies; 17+ messages in thread
From: Yoann Congal @ 2026-03-06  7:22 UTC (permalink / raw)
  To: openembedded-core

From: Benjamin Robin (Schneider Electric) <benjamin.robin@bootlin.com>

CVE-2025-62813 has been rejected, so it should no longer be referenced.
Keep the patch itself, but without referencing the CVE identifier.

The CVE database indicates the following reason:
  This candidate was withdrawn by its CNA. Further investigation
  showed that it was not a security issue.

Signed-off-by: Benjamin Robin (Schneider Electric) <benjamin.robin@bootlin.com>
Signed-off-by: Mathieu Dubois-Briand <mathieu.dubois-briand@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 9c840a69b62a5fdffb3679a44d68dd5630b2916c)
Signed-off-by: Yoann Congal <yoann.congal@smile.fr>
---
 .../lz4/{CVE-2025-62813.patch => fix-null-error-handling.patch} | 1 -
 meta/recipes-support/lz4/lz4_1.10.0.bb                          | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)
 rename meta/recipes-support/lz4/lz4/{CVE-2025-62813.patch => fix-null-error-handling.patch} (99%)

diff --git a/meta/recipes-support/lz4/lz4/CVE-2025-62813.patch b/meta/recipes-support/lz4/lz4/fix-null-error-handling.patch
similarity index 99%
rename from meta/recipes-support/lz4/lz4/CVE-2025-62813.patch
rename to meta/recipes-support/lz4/lz4/fix-null-error-handling.patch
index 4fa0373ff77..1527cc75912 100644
--- a/meta/recipes-support/lz4/lz4/CVE-2025-62813.patch
+++ b/meta/recipes-support/lz4/lz4/fix-null-error-handling.patch
@@ -4,7 +4,6 @@ Date: Mon, 31 Mar 2025 20:48:52 +0200
 Subject: [PATCH] fix(null) : improve error handlings when passing a null
  pointer to some functions from lz4frame
 
-CVE: CVE-2025-62813
 Upstream-Status: Backport [https://github.com/lz4/lz4/commit/f64efec011c058bd70348576438abac222fe6c82]
 Signed-off-by: Peter Marko <peter.marko@siemens.com>
 ---
diff --git a/meta/recipes-support/lz4/lz4_1.10.0.bb b/meta/recipes-support/lz4/lz4_1.10.0.bb
index f2a86036b56..fae5796c2b9 100644
--- a/meta/recipes-support/lz4/lz4_1.10.0.bb
+++ b/meta/recipes-support/lz4/lz4_1.10.0.bb
@@ -15,7 +15,7 @@ SRCREV = "ebb370ca83af193212df4dcbadcc5d87bc0de2f0"
 SRC_URI = "git://github.com/lz4/lz4.git;branch=release;protocol=https \
            file://reproducibility.patch \
            file://run-ptest \
-           file://CVE-2025-62813.patch \
+           file://fix-null-error-handling.patch \
 "
 UPSTREAM_CHECK_GITTAGREGEX = "v(?P<pver>.*)"
 


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [OE-core][whinlatter v2 16/16] python3-urllib3: patch CVE-2025-66471
  2026-03-06  7:22 [OE-core][whinlatter v2 00/16] Patch review Yoann Congal
                   ` (14 preceding siblings ...)
  2026-03-06  7:22 ` [OE-core][whinlatter v2 15/16] lz4: Remove a reference to the rejected CVE-2025-62813 Yoann Congal
@ 2026-03-06  7:22 ` Yoann Congal
  15 siblings, 0 replies; 17+ messages in thread
From: Yoann Congal @ 2026-03-06  7:22 UTC (permalink / raw)
  To: openembedded-core

From: Peter Marko <peter.marko@siemens.com>

Pick patch per [1].

[1] https://nvd.nist.gov/vuln/detail/CVE-2025-66471

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Yoann Congal <yoann.congal@smile.fr>
---
 .../python3-urllib3/CVE-2025-66471.patch      | 926 ++++++++++++++++++
 .../python/python3-urllib3_2.5.0.bb           |   1 +
 2 files changed, 927 insertions(+)
 create mode 100644 meta/recipes-devtools/python/python3-urllib3/CVE-2025-66471.patch

diff --git a/meta/recipes-devtools/python/python3-urllib3/CVE-2025-66471.patch b/meta/recipes-devtools/python/python3-urllib3/CVE-2025-66471.patch
new file mode 100644
index 00000000000..15f601843b8
--- /dev/null
+++ b/meta/recipes-devtools/python/python3-urllib3/CVE-2025-66471.patch
@@ -0,0 +1,926 @@
+From c19571de34c47de3a766541b041637ba5f716ed7 Mon Sep 17 00:00:00 2001
+From: Illia Volochii <illia.volochii@gmail.com>
+Date: Fri, 5 Dec 2025 16:40:41 +0200
+Subject: [PATCH] Merge commit from fork
+
+* Prevent decompression bomb for zstd in Python 3.14
+
+* Add experimental `decompress_iter` for Brotli
+
+* Update changes for Brotli
+
+* Add `GzipDecoder.decompress_iter`
+
+* Test https://github.com/python-hyper/brotlicffi/pull/207
+
+* Pin Brotli
+
+* Add `decompress_iter` to all decoders and make tests pass
+
+* Pin brotlicffi to an official release
+
+* Revert changes to response.py
+
+* Add `max_length` parameter to all `decompress` methods
+
+* Fix the `test_brotlipy` session
+
+* Unset `_data` on gzip error
+
+* Add a test for memory usage
+
+* Test more methods
+
+* Fix the test for `stream`
+
+* Cover more lines with tests
+
+* Add more coverage
+
+* Make `read1` a bit more efficient
+
+* Fix PyPy tests for Brotli
+
+* Revert an unnecessarily moved check
+
+* Add some comments
+
+* Leave just one `self._obj.decompress` call in `GzipDecoder`
+
+* Refactor test params
+
+* Test reads with all data already in the decompressor
+
+* Prevent needless copying of data decoded with `max_length`
+
+* Rename the changed test
+
+* Note that responses of unknown length should be streamed too
+
+* Add a changelog entry
+
+* Avoid returning a memory view from `BytesQueueBuffer`
+
+* Add one more note to the changelog entry
+
+CVE: CVE-2025-66471
+Upstream-Status: Backport [https://github.com/urllib3/urllib3/commit/c19571de34c47de3a766541b041637ba5f716ed7]
+Signed-off-by: Peter Marko <peter.marko@siemens.com>
+---
+ CHANGES.rst             |  15 +++
+ docs/advanced-usage.rst |   3 +-
+ docs/user-guide.rst     |   4 +-
+ pyproject.toml          |   5 +-
+ src/urllib3/response.py | 278 ++++++++++++++++++++++++++++++++++------
+ test/test_response.py   | 269 +++++++++++++++++++++++++++++++++++++-
+ 6 files changed, 525 insertions(+), 49 deletions(-)
+
+diff --git a/CHANGES.rst b/CHANGES.rst
+index add194eb..345476f3 100644
+--- a/CHANGES.rst
++++ b/CHANGES.rst
+@@ -10,6 +10,21 @@ Bugfixes
+
+ TODO: add other entries.
+
++- Fixed a security issue where streaming API could improperly handle highly
++  compressed HTTP content ("decompression bombs") leading to excessive resource
++  consumption even when a small amount of data was requested. Reading small
++  chunks of compressed data is safer and much more efficient now.
++
++.. caution::
++  - If urllib3 is not installed with the optional `urllib3[brotli]` extra, but
++    your environment contains a Brotli/brotlicffi/brotlipy package anyway, make
++    sure to upgrade it to at least Brotli 1.2.0 or brotlicffi 1.2.0.0 to
++    benefit from the security fixes and avoid warnings. Prefer using
++    `urllib3[brotli]` to install a compatible Brotli package automatically.
++
++  - If you use custom decompressors, please make sure to update them to
++    respect the changed API of ``urllib3.response.ContentDecoder``.
++
+
+ 2.5.0 (2025-06-18)
+ ==================
+diff --git a/docs/advanced-usage.rst b/docs/advanced-usage.rst
+index ff773662..3ab4fcf3 100644
+--- a/docs/advanced-usage.rst
++++ b/docs/advanced-usage.rst
+@@ -66,7 +66,8 @@ When using ``preload_content=True`` (the default setting) the
+ response body will be read immediately into memory and the HTTP connection
+ will be released back into the pool without manual intervention.
+ 
+-However, when dealing with large responses it's often better to stream the response
++However, when dealing with responses of large or unknown length,
++it's often better to stream the response
+ content using ``preload_content=False``. Setting ``preload_content`` to ``False`` means
+ that urllib3 will only read from the socket when data is requested.
+ 
+diff --git a/docs/user-guide.rst b/docs/user-guide.rst
+index 5c78c8af..1d9d0bbd 100644
+--- a/docs/user-guide.rst
++++ b/docs/user-guide.rst
+@@ -145,8 +145,8 @@ to a byte string representing the response content:
+     print(resp.data)
+     # b"\xaa\xa5H?\x95\xe9\x9b\x11"
+ 
+-.. note:: For larger responses, it's sometimes better to :ref:`stream <stream>`
+-    the response.
++.. note:: For responses of large or unknown length, it's sometimes better to
++    :ref:`stream <stream>` the response.
+ 
+ Using io Wrappers with Response Content
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+diff --git a/pyproject.toml b/pyproject.toml
+index c9aa6d13..45538a6e 100644
+--- a/pyproject.toml
++++ b/pyproject.toml
+@@ -41,8 +41,8 @@ dynamic = ["version"]
+ 
+ [project.optional-dependencies]
+ brotli = [
+-  "brotli>=1.0.9; platform_python_implementation == 'CPython'",
+-  "brotlicffi>=0.8.0; platform_python_implementation != 'CPython'"
++  "brotli>=1.2.0; platform_python_implementation == 'CPython'",
++  "brotlicffi>=1.2.0.0; platform_python_implementation != 'CPython'"
+ ]
+ # Once we drop support for Python 3.13 this extra can be removed.
+ # We'll need a deprecation period for the 'zstandard' module support
+@@ -160,6 +160,7 @@ filterwarnings = [
+     '''default:ssl\.PROTOCOL_TLSv1_1 is deprecated:DeprecationWarning''',
+     '''default:ssl\.PROTOCOL_TLSv1_2 is deprecated:DeprecationWarning''',
+     '''default:ssl NPN is deprecated, use ALPN instead:DeprecationWarning''',
++    '''default:Brotli >= 1.2.0 is required to prevent decompression bombs\.:urllib3.exceptions.DependencyWarning''',
+     # https://github.com/SeleniumHQ/selenium/issues/13328
+     '''default:unclosed file <_io\.BufferedWriter name='/dev/null'>:ResourceWarning''',
+     # https://github.com/SeleniumHQ/selenium/issues/14686
+diff --git a/src/urllib3/response.py b/src/urllib3/response.py
+index 3df98184..4ba42136 100644
+--- a/src/urllib3/response.py
++++ b/src/urllib3/response.py
+@@ -33,6 +33,7 @@ from .connection import BaseSSLError, HTTPConnection, HTTPException
+ from .exceptions import (
+     BodyNotHttplibCompatible,
+     DecodeError,
++    DependencyWarning,
+     HTTPError,
+     IncompleteRead,
+     InvalidChunkLength,
+@@ -52,7 +53,11 @@ log = logging.getLogger(__name__)
+ 
+ 
+ class ContentDecoder:
+-    def decompress(self, data: bytes) -> bytes:
++    def decompress(self, data: bytes, max_length: int = -1) -> bytes:
++        raise NotImplementedError()
++
++    @property
++    def has_unconsumed_tail(self) -> bool:
+         raise NotImplementedError()
+ 
+     def flush(self) -> bytes:
+@@ -62,30 +67,57 @@ class ContentDecoder:
+ class DeflateDecoder(ContentDecoder):
+     def __init__(self) -> None:
+         self._first_try = True
+-        self._data = b""
++        self._first_try_data = b""
++        self._unfed_data = b""
+         self._obj = zlib.decompressobj()
+ 
+-    def decompress(self, data: bytes) -> bytes:
+-        if not data:
++    def decompress(self, data: bytes, max_length: int = -1) -> bytes:
++        data = self._unfed_data + data
++        self._unfed_data = b""
++        if not data and not self._obj.unconsumed_tail:
+             return data
++        original_max_length = max_length
++        if original_max_length < 0:
++            max_length = 0
++        elif original_max_length == 0:
++            # We should not pass 0 to the zlib decompressor because 0 is
++            # the default value that will make zlib decompress without a
++            # length limit.
++            # Data should be stored for subsequent calls.
++            self._unfed_data = data
++            return b""
+ 
++        # Subsequent calls always reuse `self._obj`. zlib requires
++        # passing the unconsumed tail if decompression is to continue.
+         if not self._first_try:
+-            return self._obj.decompress(data)
++            return self._obj.decompress(
++                self._obj.unconsumed_tail + data, max_length=max_length
++            )
+ 
+-        self._data += data
++        # First call tries with RFC 1950 ZLIB format.
++        self._first_try_data += data
+         try:
+-            decompressed = self._obj.decompress(data)
++            decompressed = self._obj.decompress(data, max_length=max_length)
+             if decompressed:
+                 self._first_try = False
+-                self._data = None  # type: ignore[assignment]
++                self._first_try_data = b""
+             return decompressed
++        # On failure, it falls back to RFC 1951 DEFLATE format.
+         except zlib.error:
+             self._first_try = False
+             self._obj = zlib.decompressobj(-zlib.MAX_WBITS)
+             try:
+-                return self.decompress(self._data)
++                return self.decompress(
++                    self._first_try_data, max_length=original_max_length
++                )
+             finally:
+-                self._data = None  # type: ignore[assignment]
++                self._first_try_data = b""
++
++    @property
++    def has_unconsumed_tail(self) -> bool:
++        return bool(self._unfed_data) or (
++            bool(self._obj.unconsumed_tail) and not self._first_try
++        )
+ 
+     def flush(self) -> bytes:
+         return self._obj.flush()
+@@ -101,27 +133,61 @@ class GzipDecoder(ContentDecoder):
+     def __init__(self) -> None:
+         self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS)
+         self._state = GzipDecoderState.FIRST_MEMBER
++        self._unconsumed_tail = b""
+ 
+-    def decompress(self, data: bytes) -> bytes:
++    def decompress(self, data: bytes, max_length: int = -1) -> bytes:
+         ret = bytearray()
+-        if self._state == GzipDecoderState.SWALLOW_DATA or not data:
++        if self._state == GzipDecoderState.SWALLOW_DATA:
+             return bytes(ret)
++
++        if max_length == 0:
++            # We should not pass 0 to the zlib decompressor because 0 is
++            # the default value that will make zlib decompress without a
++            # length limit.
++            # Data should be stored for subsequent calls.
++            self._unconsumed_tail += data
++            return b""
++
++        # zlib requires passing the unconsumed tail to the subsequent
++        # call if decompression is to continue.
++        data = self._unconsumed_tail + data
++        if not data and self._obj.eof:
++            return bytes(ret)
++
+         while True:
+             try:
+-                ret += self._obj.decompress(data)
++                ret += self._obj.decompress(
++                    data, max_length=max(max_length - len(ret), 0)
++                )
+             except zlib.error:
+                 previous_state = self._state
+                 # Ignore data after the first error
+                 self._state = GzipDecoderState.SWALLOW_DATA
++                self._unconsumed_tail = b""
+                 if previous_state == GzipDecoderState.OTHER_MEMBERS:
+                     # Allow trailing garbage acceptable in other gzip clients
+                     return bytes(ret)
+                 raise
+-            data = self._obj.unused_data
++
++            self._unconsumed_tail = data = (
++                self._obj.unconsumed_tail or self._obj.unused_data
++            )
++            if max_length > 0 and len(ret) >= max_length:
++                break
++
+             if not data:
+                 return bytes(ret)
+-            self._state = GzipDecoderState.OTHER_MEMBERS
+-            self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS)
++            # When the end of a gzip member is reached, a new decompressor
++            # must be created for unused (possibly future) data.
++            if self._obj.eof:
++                self._state = GzipDecoderState.OTHER_MEMBERS
++                self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS)
++
++        return bytes(ret)
++
++    @property
++    def has_unconsumed_tail(self) -> bool:
++        return bool(self._unconsumed_tail)
+ 
+     def flush(self) -> bytes:
+         return self._obj.flush()
+@@ -136,9 +202,35 @@ if brotli is not None:
+         def __init__(self) -> None:
+             self._obj = brotli.Decompressor()
+             if hasattr(self._obj, "decompress"):
+-                setattr(self, "decompress", self._obj.decompress)
++                setattr(self, "_decompress", self._obj.decompress)
+             else:
+-                setattr(self, "decompress", self._obj.process)
++                setattr(self, "_decompress", self._obj.process)
++
++        # Requires Brotli >= 1.2.0 for `output_buffer_limit`.
++        def _decompress(self, data: bytes, output_buffer_limit: int = -1) -> bytes:
++            raise NotImplementedError()
++
++        def decompress(self, data: bytes, max_length: int = -1) -> bytes:
++            try:
++                if max_length > 0:
++                    return self._decompress(data, output_buffer_limit=max_length)
++                else:
++                    return self._decompress(data)
++            except TypeError:
++                # Fallback for Brotli/brotlicffi/brotlipy versions without
++                # the `output_buffer_limit` parameter.
++                warnings.warn(
++                    "Brotli >= 1.2.0 is required to prevent decompression bombs.",
++                    DependencyWarning,
++                )
++                return self._decompress(data)
++
++        @property
++        def has_unconsumed_tail(self) -> bool:
++            try:
++                return not self._obj.can_accept_more_data()
++            except AttributeError:
++                return False
+ 
+         def flush(self) -> bytes:
+             if hasattr(self._obj, "flush"):
+@@ -156,16 +248,46 @@ try:
+         def __init__(self) -> None:
+             self._obj = zstd.ZstdDecompressor()
+ 
+-        def decompress(self, data: bytes) -> bytes:
+-            if not data:
++        def decompress(self, data: bytes, max_length: int = -1) -> bytes:
++            if not data and not self.has_unconsumed_tail:
+                 return b""
+-            data_parts = [self._obj.decompress(data)]
+-            while self._obj.eof and self._obj.unused_data:
+-                unused_data = self._obj.unused_data
++            if self._obj.eof:
++                data = self._obj.unused_data + data
+                 self._obj = zstd.ZstdDecompressor()
+-                data_parts.append(self._obj.decompress(unused_data))
++            part = self._obj.decompress(data, max_length=max_length)
++            length = len(part)
++            data_parts = [part]
++            # Every loop iteration is supposed to read data from a separate frame.
++            # The loop breaks when:
++            #   - enough data is read;
++            #   - no more unused data is available;
++            #   - end of the last read frame has not been reached (i.e.,
++            #     more data has to be fed).
++            while (
++                self._obj.eof
++                and self._obj.unused_data
++                and (max_length < 0 or length < max_length)
++            ):
++                unused_data = self._obj.unused_data
++                if not self._obj.needs_input:
++                    self._obj = zstd.ZstdDecompressor()
++                part = self._obj.decompress(
++                    unused_data,
++                    max_length=(max_length - length) if max_length > 0 else -1,
++                )
++                if part_length := len(part):
++                    data_parts.append(part)
++                    length += part_length
++                elif self._obj.needs_input:
++                    break
+             return b"".join(data_parts)
+ 
++        @property
++        def has_unconsumed_tail(self) -> bool:
++            return not (self._obj.needs_input or self._obj.eof) or bool(
++                self._obj.unused_data
++            )
++
+         def flush(self) -> bytes:
+             if not self._obj.eof:
+                 raise DecodeError("Zstandard data is incomplete")
+@@ -236,10 +358,35 @@ class MultiDecoder(ContentDecoder):
+     def flush(self) -> bytes:
+         return self._decoders[0].flush()
+ 
+-    def decompress(self, data: bytes) -> bytes:
+-        for d in reversed(self._decoders):
+-            data = d.decompress(data)
+-        return data
++    def decompress(self, data: bytes, max_length: int = -1) -> bytes:
++        if max_length <= 0:
++            for d in reversed(self._decoders):
++                data = d.decompress(data)
++            return data
++
++        ret = bytearray()
++        # Every while loop iteration goes through all decoders once.
++        # It exits when enough data is read or no more data can be read.
++        # It is possible that the while loop iteration does not produce
++        # any data because we retrieve up to `max_length` from every
++        # decoder, and the amount of bytes may be insufficient for the
++        # next decoder to produce enough/any output.
++        while True:
++            any_data = False
++            for d in reversed(self._decoders):
++                data = d.decompress(data, max_length=max_length - len(ret))
++                if data:
++                    any_data = True
++                # We should not break when no data is returned because
++                # next decoders may produce data even with empty input.
++            ret += data
++            if not any_data or len(ret) >= max_length:
++                return bytes(ret)
++            data = b""
++
++    @property
++    def has_unconsumed_tail(self) -> bool:
++        return any(d.has_unconsumed_tail for d in self._decoders)
+ 
+ 
+ def _get_decoder(mode: str) -> ContentDecoder:
+@@ -272,9 +419,6 @@ class BytesQueueBuffer:
+ 
+      * self.buffer, which contains the full data
+      * the largest chunk that we will copy in get()
+-
+-    The worst case scenario is a single chunk, in which case we'll make a full copy of
+-    the data inside get().
+     """
+ 
+     def __init__(self) -> None:
+@@ -296,6 +440,10 @@ class BytesQueueBuffer:
+         elif n < 0:
+             raise ValueError("n should be > 0")
+ 
++        if len(self.buffer[0]) == n and isinstance(self.buffer[0], bytes):
++            self._size -= n
++            return self.buffer.popleft()
++
+         fetched = 0
+         ret = io.BytesIO()
+         while fetched < n:
+@@ -502,7 +650,11 @@ class BaseHTTPResponse(io.IOBase):
+                     self._decoder = _get_decoder(content_encoding)
+ 
+     def _decode(
+-        self, data: bytes, decode_content: bool | None, flush_decoder: bool
++        self,
++        data: bytes,
++        decode_content: bool | None,
++        flush_decoder: bool,
++        max_length: int | None = None,
+     ) -> bytes:
+         """
+         Decode the data passed in and potentially flush the decoder.
+@@ -515,9 +667,12 @@ class BaseHTTPResponse(io.IOBase):
+                 )
+             return data
+ 
++        if max_length is None or flush_decoder:
++            max_length = -1
++
+         try:
+             if self._decoder:
+-                data = self._decoder.decompress(data)
++                data = self._decoder.decompress(data, max_length=max_length)
+                 self._has_decoded_content = True
+         except self.DECODER_ERROR_CLASSES as e:
+             content_encoding = self.headers.get("content-encoding", "").lower()
+@@ -988,6 +1143,14 @@ class HTTPResponse(BaseHTTPResponse):
+         elif amt is not None:
+             cache_content = False
+ 
++            if self._decoder and self._decoder.has_unconsumed_tail:
++                decoded_data = self._decode(
++                    b"",
++                    decode_content,
++                    flush_decoder=False,
++                    max_length=amt - len(self._decoded_buffer),
++                )
++                self._decoded_buffer.put(decoded_data)
+             if len(self._decoded_buffer) >= amt:
+                 return self._decoded_buffer.get(amt)
+ 
+@@ -995,7 +1158,11 @@ class HTTPResponse(BaseHTTPResponse):
+ 
+         flush_decoder = amt is None or (amt != 0 and not data)
+ 
+-        if not data and len(self._decoded_buffer) == 0:
++        if (
++            not data
++            and len(self._decoded_buffer) == 0
++            and not (self._decoder and self._decoder.has_unconsumed_tail)
++        ):
+             return data
+ 
+         if amt is None:
+@@ -1012,7 +1179,12 @@ class HTTPResponse(BaseHTTPResponse):
+                     )
+                 return data
+ 
+-            decoded_data = self._decode(data, decode_content, flush_decoder)
++            decoded_data = self._decode(
++                data,
++                decode_content,
++                flush_decoder,
++                max_length=amt - len(self._decoded_buffer),
++            )
+             self._decoded_buffer.put(decoded_data)
+ 
+             while len(self._decoded_buffer) < amt and data:
+@@ -1020,7 +1192,12 @@ class HTTPResponse(BaseHTTPResponse):
+                 # For example, the GZ file header takes 10 bytes, we don't want to read
+                 # it one byte at a time
+                 data = self._raw_read(amt)
+-                decoded_data = self._decode(data, decode_content, flush_decoder)
++                decoded_data = self._decode(
++                    data,
++                    decode_content,
++                    flush_decoder,
++                    max_length=amt - len(self._decoded_buffer),
++                )
+                 self._decoded_buffer.put(decoded_data)
+             data = self._decoded_buffer.get(amt)
+ 
+@@ -1055,6 +1232,20 @@ class HTTPResponse(BaseHTTPResponse):
+                     "Calling read1(decode_content=False) is not supported after "
+                     "read1(decode_content=True) was called."
+                 )
++            if (
++                self._decoder
++                and self._decoder.has_unconsumed_tail
++                and (amt is None or len(self._decoded_buffer) < amt)
++            ):
++                decoded_data = self._decode(
++                    b"",
++                    decode_content,
++                    flush_decoder=False,
++                    max_length=(
++                        amt - len(self._decoded_buffer) if amt is not None else None
++                    ),
++                )
++                self._decoded_buffer.put(decoded_data)
+             if len(self._decoded_buffer) > 0:
+                 if amt is None:
+                     return self._decoded_buffer.get_all()
+@@ -1070,7 +1261,9 @@ class HTTPResponse(BaseHTTPResponse):
+         self._init_decoder()
+         while True:
+             flush_decoder = not data
+-            decoded_data = self._decode(data, decode_content, flush_decoder)
++            decoded_data = self._decode(
++                data, decode_content, flush_decoder, max_length=amt
++            )
+             self._decoded_buffer.put(decoded_data)
+             if decoded_data or flush_decoder:
+                 break
+@@ -1101,7 +1294,11 @@ class HTTPResponse(BaseHTTPResponse):
+         if self.chunked and self.supports_chunked_reads():
+             yield from self.read_chunked(amt, decode_content=decode_content)
+         else:
+-            while not is_fp_closed(self._fp) or len(self._decoded_buffer) > 0:
++            while (
++                not is_fp_closed(self._fp)
++                or len(self._decoded_buffer) > 0
++                or (self._decoder and self._decoder.has_unconsumed_tail)
++            ):
+                 data = self.read(amt=amt, decode_content=decode_content)
+ 
+                 if data:
+@@ -1264,7 +1461,10 @@ class HTTPResponse(BaseHTTPResponse):
+                     break
+                 chunk = self._handle_chunk(amt)
+                 decoded = self._decode(
+-                    chunk, decode_content=decode_content, flush_decoder=False
++                    chunk,
++                    decode_content=decode_content,
++                    flush_decoder=False,
++                    max_length=amt,
+                 )
+                 if decoded:
+                     yield decoded
+diff --git a/test/test_response.py b/test/test_response.py
+index c97fdff0..9592fdd9 100644
+--- a/test/test_response.py
++++ b/test/test_response.py
+@@ -1,6 +1,7 @@
+ from __future__ import annotations
+ 
+ import contextlib
++import gzip
+ import http.client as httplib
+ import socket
+ import ssl
+@@ -43,6 +44,26 @@ def zstd_compress(data: bytes) -> bytes:
+     return zstd.compress(data)  # type: ignore[no-any-return]
+ 
+ 
++def deflate2_compress(data: bytes) -> bytes:
++    compressor = zlib.compressobj(6, zlib.DEFLATED, -zlib.MAX_WBITS)
++    return compressor.compress(data) + compressor.flush()
++
++
++if brotli:
++    try:
++        brotli.Decompressor().process(b"", output_buffer_limit=1024)
++        _brotli_gte_1_2_0_available = True
++    except (AttributeError, TypeError):
++        _brotli_gte_1_2_0_available = False
++else:
++    _brotli_gte_1_2_0_available = False
++try:
++    zstd_compress(b"")
++    _zstd_available = True
++except ModuleNotFoundError:
++    _zstd_available = False
++
++
+ class TestBytesQueueBuffer:
+     def test_single_chunk(self) -> None:
+         buffer = BytesQueueBuffer()
+@@ -118,12 +139,19 @@ class TestBytesQueueBuffer:
+ 
+         assert len(get_func(buffer)) == 10 * 2**20
+ 
++    @pytest.mark.parametrize(
++        "get_func",
++        (lambda b: b.get(len(b)), lambda b: b.get_all()),
++        ids=("get", "get_all"),
++    )
+     @pytest.mark.limit_memory("10.01 MB", current_thread_only=True)
+-    def test_get_all_memory_usage_single_chunk(self) -> None:
++    def test_memory_usage_single_chunk(
++        self, get_func: typing.Callable[[BytesQueueBuffer], bytes]
++    ) -> None:
+         buffer = BytesQueueBuffer()
+         chunk = bytes(10 * 2**20)  # 10 MiB
+         buffer.put(chunk)
+-        assert buffer.get_all() is chunk
++        assert get_func(buffer) is chunk
+ 
+ 
+ # A known random (i.e, not-too-compressible) payload generated with:
+@@ -426,7 +454,26 @@ class TestResponse:
+         assert r.data == b"foo"
+ 
+     @onlyZstd()
+-    def test_decode_multiframe_zstd(self) -> None:
++    @pytest.mark.parametrize(
++        "read_amt",
++        (
++            # Read all data at once.
++            None,
++            # Read one byte at a time, data of frames will be returned
++            # separately.
++            1,
++            # Read two bytes at a time, the second read should return
++            # data from both frames.
++            2,
++            # Read three bytes at a time, the whole frames will be
++            # returned separately in two calls.
++            3,
++            # Read four bytes at a time, the first read should return
++            # data from the first frame and a part of the second frame.
++            4,
++        ),
++    )
++    def test_decode_multiframe_zstd(self, read_amt: int | None) -> None:
+         data = (
+             # Zstandard frame
+             zstd_compress(b"foo")
+@@ -441,8 +488,57 @@ class TestResponse:
+         )
+ 
+         fp = BytesIO(data)
+-        r = HTTPResponse(fp, headers={"content-encoding": "zstd"})
+-        assert r.data == b"foobar"
++        result = bytearray()
++        r = HTTPResponse(
++            fp, headers={"content-encoding": "zstd"}, preload_content=False
++        )
++        total_length = 6
++        while len(result) < total_length:
++            chunk = r.read(read_amt, decode_content=True)
++            if read_amt is None:
++                assert len(chunk) == total_length
++            else:
++                assert len(chunk) == min(read_amt, total_length - len(result))
++            result += chunk
++        assert bytes(result) == b"foobar"
++
++    @onlyZstd()
++    def test_decode_multiframe_zstd_with_max_length_close_to_compressed_data_size(
++        self,
++    ) -> None:
++        """
++        Test decoding when the first read from the socket returns all
++        the compressed frames, but then it has to be decompressed in a
++        couple of read calls.
++        """
++        data = (
++            # Zstandard frame
++            zstd_compress(b"x" * 1024)
++            # skippable frame (must be ignored)
++            + bytes.fromhex(
++                "50 2A 4D 18"  # Magic_Number (little-endian)
++                "07 00 00 00"  # Frame_Size (little-endian)
++                "00 00 00 00 00 00 00"  # User_Data
++            )
++            # Zstandard frame
++            + zstd_compress(b"y" * 1024)
++        )
++
++        fp = BytesIO(data)
++        r = HTTPResponse(
++            fp, headers={"content-encoding": "zstd"}, preload_content=False
++        )
++        # Read the whole first frame.
++        assert r.read(1024) == b"x" * 1024
++        assert len(r._decoded_buffer) == 0
++        # Read the whole second frame in two reads.
++        assert r.read(512) == b"y" * 512
++        assert len(r._decoded_buffer) == 0
++        assert r.read(512) == b"y" * 512
++        assert len(r._decoded_buffer) == 0
++        # Ensure no more data is left.
++        assert r.read() == b""
++        assert len(r._decoded_buffer) == 0
+ 
+     @onlyZstd()
+     def test_chunked_decoding_zstd(self) -> None:
+@@ -535,6 +631,169 @@ class TestResponse:
+             decoded_data += part
+         assert decoded_data == data
+ 
++    _test_compressor_params: list[
++        tuple[str, tuple[str, typing.Callable[[bytes], bytes]] | None]
++    ] = [
++        ("deflate1", ("deflate", zlib.compress)),
++        ("deflate2", ("deflate", deflate2_compress)),
++        ("gzip", ("gzip", gzip.compress)),
++    ]
++    if _brotli_gte_1_2_0_available:
++        _test_compressor_params.append(("brotli", ("br", brotli.compress)))
++    else:
++        _test_compressor_params.append(("brotli", None))
++    if _zstd_available:
++        _test_compressor_params.append(("zstd", ("zstd", zstd_compress)))
++    else:
++        _test_compressor_params.append(("zstd", None))
++
++    @pytest.mark.parametrize("read_method", ("read", "read1"))
++    @pytest.mark.parametrize(
++        "data",
++        [d[1] for d in _test_compressor_params],
++        ids=[d[0] for d in _test_compressor_params],
++    )
++    def test_read_with_all_data_already_in_decompressor(
++        self,
++        request: pytest.FixtureRequest,
++        read_method: str,
++        data: tuple[str, typing.Callable[[bytes], bytes]] | None,
++    ) -> None:
++        if data is None:
++            pytest.skip(f"Proper {request.node.callspec.id} decoder is not available")
++        original_data = b"bar" * 1000
++        name, compress_func = data
++        compressed_data = compress_func(original_data)
++        fp = mock.Mock(read=mock.Mock(return_value=b""))
++        r = HTTPResponse(fp, headers={"content-encoding": name}, preload_content=False)
++        # Put all data in the decompressor's buffer.
++        r._init_decoder()
++        assert r._decoder is not None  # for mypy
++        decoded = r._decoder.decompress(compressed_data, max_length=0)
++        if name == "br":
++            # It's known that some Brotli libraries do not respect
++            # `max_length`.
++            r._decoded_buffer.put(decoded)
++        else:
++            assert decoded == b""
++        # Read the data via `HTTPResponse`.
++        read = getattr(r, read_method)
++        assert read(0) == b""
++        assert read(2500) == original_data[:2500]
++        assert read(500) == original_data[2500:]
++        assert read(0) == b""
++        assert read() == b""
++
++    @pytest.mark.parametrize(
++        "delta",
++        (
++            0,  # First read from socket returns all compressed data.
++            -1,  # First read from socket returns all but one byte of compressed data.
++        ),
++    )
++    @pytest.mark.parametrize("read_method", ("read", "read1"))
++    @pytest.mark.parametrize(
++        "data",
++        [d[1] for d in _test_compressor_params],
++        ids=[d[0] for d in _test_compressor_params],
++    )
++    def test_decode_with_max_length_close_to_compressed_data_size(
++        self,
++        request: pytest.FixtureRequest,
++        delta: int,
++        read_method: str,
++        data: tuple[str, typing.Callable[[bytes], bytes]] | None,
++    ) -> None:
++        """
++        Test decoding when the first read from the socket returns all or
++        almost all the compressed data, but then it has to be
++        decompressed in a couple of read calls.
++        """
++        if data is None:
++            pytest.skip(f"Proper {request.node.callspec.id} decoder is not available")
++
++        original_data = b"foo" * 1000
++        name, compress_func = data
++        compressed_data = compress_func(original_data)
++        fp = BytesIO(compressed_data)
++        r = HTTPResponse(fp, headers={"content-encoding": name}, preload_content=False)
++        initial_limit = len(compressed_data) + delta
++        read = getattr(r, read_method)
++        initial_chunk = read(amt=initial_limit, decode_content=True)
++        assert len(initial_chunk) == initial_limit
++        assert (
++            len(read(amt=len(original_data), decode_content=True))
++            == len(original_data) - initial_limit
++        )
++
++    # Prepare 50 MB of compressed data outside of the test measuring
++    # memory usage.
++    _test_memory_usage_decode_with_max_length_params: list[
++        tuple[str, tuple[str, bytes] | None]
++    ] = [
++        (
++            params[0],
++            (params[1][0], params[1][1](b"A" * (50 * 2**20))) if params[1] else None,
++        )
++        for params in _test_compressor_params
++    ]
++
++    @pytest.mark.parametrize(
++        "data",
++        [d[1] for d in _test_memory_usage_decode_with_max_length_params],
++        ids=[d[0] for d in _test_memory_usage_decode_with_max_length_params],
++    )
++    @pytest.mark.parametrize("read_method", ("read", "read1", "read_chunked", "stream"))
++    # Decoders consume different amounts of memory during decompression.
++    # We set the 10 MB limit to ensure that the whole decompressed data
++    # is not stored unnecessarily.
++    #
++    # FYI, the following consumption was observed for the test with
++    # `read` on CPython 3.14.0:
++    #   - deflate: 2.3 MiB
++    #   - deflate2: 2.1 MiB
++    #   - gzip: 2.1 MiB
++    #   - brotli:
++    #     - brotli v1.2.0: 9 MiB
++    #     - brotlicffi v1.2.0.0: 6 MiB
++    #     - brotlipy v0.7.0: 105.8 MiB
++    #   - zstd: 4.5 MiB
++    @pytest.mark.limit_memory("10 MB", current_thread_only=True)
++    def test_memory_usage_decode_with_max_length(
++        self,
++        request: pytest.FixtureRequest,
++        read_method: str,
++        data: tuple[str, bytes] | None,
++    ) -> None:
++        if data is None:
++            pytest.skip(f"Proper {request.node.callspec.id} decoder is not available")
++
++        name, compressed_data = data
++        limit = 1024 * 1024  # 1 MiB
++        if read_method in ("read_chunked", "stream"):
++            httplib_r = httplib.HTTPResponse(MockSock)  # type: ignore[arg-type]
++            httplib_r.fp = MockChunkedEncodingResponse([compressed_data])  # type: ignore[assignment]
++            r = HTTPResponse(
++                httplib_r,
++                preload_content=False,
++                headers={"transfer-encoding": "chunked", "content-encoding": name},
++            )
++            next(getattr(r, read_method)(amt=limit, decode_content=True))
++        else:
++            fp = BytesIO(compressed_data)
++            r = HTTPResponse(
++                fp, headers={"content-encoding": name}, preload_content=False
++            )
++            getattr(r, read_method)(amt=limit, decode_content=True)
++
++        # Check that the internal decoded buffer is empty unless brotli
++        # is used.
++        # Google's brotli library does not fully respect the output
++        # buffer limit: https://github.com/google/brotli/issues/1396
++        # And unmaintained brotlipy cannot limit the output buffer size.
++        if name != "br" or brotli.__name__ == "brotlicffi":
++            assert len(r._decoded_buffer) == 0
++
+     def test_multi_decoding_deflate_deflate(self) -> None:
+         data = zlib.compress(zlib.compress(b"foo"))
+ 
diff --git a/meta/recipes-devtools/python/python3-urllib3_2.5.0.bb b/meta/recipes-devtools/python/python3-urllib3_2.5.0.bb
index 7892fc0874e..e277a192e53 100644
--- a/meta/recipes-devtools/python/python3-urllib3_2.5.0.bb
+++ b/meta/recipes-devtools/python/python3-urllib3_2.5.0.bb
@@ -10,6 +10,7 @@ inherit pypi python_hatchling
 SRC_URI += "\
     file://CVE-2025-66418.patch \
     file://CVE-2026-21441.patch \
+    file://CVE-2025-66471.patch \
 "
 
 DEPENDS += "python3-hatch-vcs-native"

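For reviewers less familiar with the bounded-decompression API this backport builds on: the patch caps each `decompress()` call via `max_length` so a hostile response cannot expand into memory all at once. A small standalone sketch (illustrative only, not part of the patch) of the underlying `zlib.decompressobj` behavior — output is capped per call and leftover compressed input is parked in `unconsumed_tail`:

```python
import zlib

# Highly compressible payload: a tiny compressed blob expands to 100 KB.
payload = b"A" * 100_000
compressed = zlib.compress(payload)

d = zlib.decompressobj()
chunks = []
data = compressed
while not d.eof:
    # At most 1 KiB of output per call; input that could not be
    # processed under that cap is retained in d.unconsumed_tail.
    chunk = d.decompress(data, 1024)
    chunks.append(chunk)
    data = d.unconsumed_tail  # may be b"" while output is still pending

assert b"".join(chunks) == payload
```

The loop terminates when `d.eof` is set, i.e. the end of the stream has been both read and fully emitted; this mirrors the bounded read loop the patched `HTTPResponse.read()` performs against `_decoded_buffer`.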

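Similarly, the gzip hunk above recreates the decompressor whenever `eof` is reached with `unused_data` pending, because one gzip stream may contain several concatenated members. A minimal illustration of that stdlib behavior (assumptions: plain CPython, no urllib3 involved):

```python
import gzip
import zlib

# Two concatenated gzip members in one stream.
blob = gzip.compress(b"foo") + gzip.compress(b"bar")

d = zlib.decompressobj(16 + zlib.MAX_WBITS)  # wbits offset selects gzip framing
out = d.decompress(blob)
# First member fully decoded; the second member's bytes are in unused_data.
assert d.eof and d.unused_data
tail = d.unused_data
d = zlib.decompressobj(16 + zlib.MAX_WBITS)  # fresh decoder for the next member
out += d.decompress(tail)

assert out == b"foobar"
```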

Thread overview: 17+ messages
2026-03-06  7:22 [OE-core][whinlatter v2 00/16] Patch review Yoann Congal
2026-03-06  7:22 ` [OE-core][whinlatter v2 01/16] python3-pip: Backport fix CVE-2026-1703 Yoann Congal
2026-03-06  7:22 ` [OE-core][whinlatter v2 02/16] wireless-regdb: upgrade 2025.10.07 -> 2026.02.04 Yoann Congal
2026-03-06  7:22 ` [OE-core][whinlatter v2 03/16] gdk-pixbuf: Fix CVE-2025-6199 Yoann Congal
2026-03-06  7:22 ` [OE-core][whinlatter v2 04/16] linux-yocto: apply cve-exclusions also to rt and tiny recipe variants Yoann Congal
2026-03-06  7:22 ` [OE-core][whinlatter v2 05/16] cve-exclusions: set status for 5 CVEs Yoann Congal
2026-03-06  7:22 ` [OE-core][whinlatter v2 06/16] ffmpeg: set status for CVE-2025-12343 Yoann Congal
2026-03-06  7:22 ` [OE-core][whinlatter v2 07/16] zlib: Fix CVE-2026-27171 Yoann Congal
2026-03-06  7:22 ` [OE-core][whinlatter v2 08/16] harfbuzz: Fix CVE-2026-22693 Yoann Congal
2026-03-06  7:22 ` [OE-core][whinlatter v2 09/16] gnutls: Fix CVE-2025-14831 Yoann Congal
2026-03-06  7:22 ` [OE-core][whinlatter v2 10/16] README: Add whinlatter subject-prefix to git-send-email suggestion Yoann Congal
2026-03-06  7:22 ` [OE-core][whinlatter v2 11/16] b4-config: add send-prefixes for whinlatter Yoann Congal
2026-03-06  7:22 ` [OE-core][whinlatter v2 12/16] python3: skip flaky test_default_timeout test Yoann Congal
2026-03-06  7:22 ` [OE-core][whinlatter v2 13/16] create-pull-request: Keep commit hash to be pulled in cover email Yoann Congal
2026-03-06  7:22 ` [OE-core][whinlatter v2 14/16] avahi: Remove a reference to the rejected CVE-2021-36217 Yoann Congal
2026-03-06  7:22 ` [OE-core][whinlatter v2 15/16] lz4: Remove a reference to the rejected CVE-2025-62813 Yoann Congal
2026-03-06  7:22 ` [OE-core][whinlatter v2 16/16] python3-urllib3: patch CVE-2025-66471 Yoann Congal
