* [OE-core][whinlatter v2 09/16] gnutls: Fix CVE-2025-14831
From: Yoann Congal @ 2026-03-06 7:22 UTC
To: openembedded-core
From: Vijay Anusuri <vanusuri@mvista.com>
Picked the commits that mention this CVE per [1].
[1] https://ubuntu.com/security/CVE-2025-14831
[2] https://security-tracker.debian.org/tracker/CVE-2025-14831
[3] https://gitlab.com/gnutls/gnutls/-/issues/1773
Backported
https://gitlab.com/gnutls/gnutls/-/commit/6e118a4dfe820ce62fc77130b89188bcd8fbcaad
first so that the subsequent CVE patches apply cleanly.
Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
Signed-off-by: Yoann Congal <yoann.congal@smile.fr>
---
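Reviewer note: for the intuition behind the first patch (the 'used' flag fix), here is a hypothetical, heavily simplified model in plain Python. It is not the GnuTLS API: constraints are modeled as (type, suffix) tuples and `intersect`/`merge_permitted` are made-up helpers, but they mirror the phase-1/phase-2 structure of the pre-rewrite `name_constraints_node_list_intersect` closely enough to show how the unconditional `used = 1` dropped a PERMITTED2 node whose type has no counterpart in PERMITTED.

```python
# Simplified model (NOT the GnuTLS API) of merging two permitted
# name-constraint lists.  A constraint is a (type, suffix) tuple.

def intersect(n1, n2):
    """Intersection of two same-type suffix constraints: the more
    specific suffix wins; None means the intersection is empty."""
    t1, v1 = n1
    t2, v2 = n2
    if t1 != t2:
        return None
    if v1 == v2 or v1.endswith("." + v2):
        return n1                      # n1 is more specific (or equal)
    if v2.endswith("." + v1):
        return n2                      # n2 is more specific
    return None

def merge_permitted(p1, p2, buggy=False):
    """Phase 1: move p1 nodes whose type also occurs in p2 to REMOVED.
    Phase 2: intersect every p2 node against REMOVED.  With buggy=True,
    'used' is set whenever REMOVED is non-empty (the pre-fix behavior),
    which silently drops p2 nodes of a type absent from p1."""
    types2 = {t for t, _ in p2}
    removed = [n for n in p1 if n[0] in types2]          # phase 1
    result = [n for n in p1 if n[0] not in types2]
    for n2 in p2:                                        # phase 2
        used = False
        for n1 in removed:
            if buggy:
                used = True            # pre-fix: set unconditionally
            elif n1[0] == n2[0]:
                used = True            # post-fix: only on a type match
            hit = intersect(n1, n2)
            if hit is not None:
                result.append(hit)
        if not used:
            result.append(n2)          # unique-type PERMITTED2 node survives
    return result

# Inputs mirroring test suite 5 added by the first patch:
p1 = [("dns", "org"), ("dns", "ccc.com")]
p2 = [("dns", "org"), ("email", "ccc.com"), ("dns", "aaa.bbb.ccc.com")]
```

With `buggy=True` the `("email", "ccc.com")` constraint from the second list vanishes from the merge result; with the fix it is preserved.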
.../gnutls/gnutls/CVE-2025-14831-1.patch | 119 +++++
.../gnutls/gnutls/CVE-2025-14831-10.patch | 424 +++++++++++++++
.../gnutls/gnutls/CVE-2025-14831-2.patch | 66 +++
.../gnutls/gnutls/CVE-2025-14831-3.patch | 30 ++
.../gnutls/gnutls/CVE-2025-14831-4.patch | 45 ++
.../gnutls/gnutls/CVE-2025-14831-5.patch | 205 +++++++
.../gnutls/gnutls/CVE-2025-14831-6.patch | 505 ++++++++++++++++++
.../gnutls/gnutls/CVE-2025-14831-7.patch | 124 +++++
.../gnutls/gnutls/CVE-2025-14831-8.patch | 155 ++++++
.../gnutls/gnutls/CVE-2025-14831-9.patch | 110 ++++
meta/recipes-support/gnutls/gnutls_3.8.10.bb | 10 +
11 files changed, 1793 insertions(+)
create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-1.patch
create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-10.patch
create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-2.patch
create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-3.patch
create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-4.patch
create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-5.patch
create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-6.patch
create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-7.patch
create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-8.patch
create mode 100644 meta/recipes-support/gnutls/gnutls/CVE-2025-14831-9.patch
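Reviewer note: the final patch (CVE-2025-14831-10) replaces the quadratic phase-1/phase-2 merge with a single linear pass over two sorted constraint lists. Here is a hypothetical, single-type sketch of that idea in plain Python (not the GnuTLS API; only dNSName suffixes, and `compare`/`intersect_sorted` are invented names): a comparator reports containment or sort order, and the merge keeps the more specific node whenever one side includes the other.

```python
# Simplified sketch (NOT the GnuTLS API) of a single-pass intersection
# over two sorted permitted lists, modeled on the patch's comparator
# outcomes (EQUAL / INCLUDES / INCLUDED_BY / sorts before / sorts after).

EQUAL, INCLUDES, INCLUDED_BY, BEFORE, AFTER = range(5)

def compare(a, b):
    """Relate two DNS suffix constraints: containment first, then the
    label-reversed order used for sorting (so a suffix sorts adjacent
    to the names it contains)."""
    if a == b:
        return EQUAL
    if b.endswith("." + a):
        return INCLUDES          # a is the broader suffix
    if a.endswith("." + b):
        return INCLUDED_BY       # b is the broader suffix
    return BEFORE if a[::-1] < b[::-1] else AFTER

def intersect_sorted(p1, p2):
    """One O(n+m) merge pass instead of nested loops."""
    p1 = sorted(p1, key=lambda s: s[::-1])
    p2 = sorted(p2, key=lambda s: s[::-1])
    out, i, j = [], 0, 0
    while i < len(p1) and j < len(p2):
        rel = compare(p1[i], p2[j])
        if rel == EQUAL:
            out.append(p1[i]); i += 1; j += 1
        elif rel == INCLUDED_BY:           # p1[i] is more specific
            out.append(p1[i]); i += 1
        elif rel == INCLUDES:              # p2[j] is more specific
            out.append(p2[j]); j += 1
        elif rel == BEFORE:
            i += 1                         # no counterpart: empty intersection
        else:
            j += 1
    return out
```

On the DNS names from test suite 5, intersecting {org, ccc.com} with {org, aaa.bbb.ccc.com} leaves {org, aaa.bbb.ccc.com}, so xxx.ccc.com is no longer permitted, matching the NAME_REJECTED expectation.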
diff --git a/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-1.patch b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-1.patch
new file mode 100644
index 00000000000..1bfa771043c
--- /dev/null
+++ b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-1.patch
@@ -0,0 +1,119 @@
+From 6e118a4dfe820ce62fc77130b89188bcd8fbcaad Mon Sep 17 00:00:00 2001
+From: chenjianhu <chenjianhu@kylinos.cn>
+Date: Fri, 1 Aug 2025 17:18:23 +0800
+Subject: [PATCH] x509: fix incorrect handling in name constraints merging
+
+As mentioned in commit ca573d65 ("x509: Fix asymmetry in name
+constraints intersection", 2016-07-29), the
+_gnutls_name_constraints_intersect function exhibited an
+asymmetry in name constraints intersection behavior, specifically
+manifested as:
+1. Nodes of unique types in PERMITTED (absent in PERMITTED2) were
+ preserved
+2. Nodes of unique types in PERMITTED2 (absent in PERMITTED) were
+ discarded
+
+A 'used' flag was introduced, where if a node from PERMITTED2 was
+ not used for the intersection, it would be copied to PERMITTED.
+
+However, an unresolved edge case persisted:
+- When 'removed.size > 0', the 'used' flag was unconditionally set
+to 1
+- This prevented copying of PERMITTED2 nodes with unique types
+
+Signed-off-by: chenjianhu <chenjianhu@kylinos.cn>
+Modified-by: Daiki Ueno <ueno@gnu.org>
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/6e118a4dfe820ce62fc77130b89188bcd8fbcaad]
+CVE: CVE-2025-14831 #Backport to apply CVE patches cleanly
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/x509/name_constraints.c | 5 +++-
+ tests/name-constraints-merge.c | 55 ++++++++++++++++++++++++++++++++++
+ 2 files changed, 59 insertions(+), 1 deletion(-)
+
+diff --git a/lib/x509/name_constraints.c b/lib/x509/name_constraints.c
+index 3c6e306303..2be6a2aaa6 100644
+--- a/lib/x509/name_constraints.c
++++ b/lib/x509/name_constraints.c
+@@ -414,7 +414,10 @@ static int name_constraints_node_list_intersect(
+ gnutls_assert();
+ goto cleanup;
+ }
+- used = 1;
++
++ if (t->type == t2->type)
++ used = 1;
++
+ // if intersection is not empty
+ if (tmp !=
+ NULL) { // intersection for this type is not empty
+diff --git a/tests/name-constraints-merge.c b/tests/name-constraints-merge.c
+index 03b3243cc7..70376aaa74 100644
+--- a/tests/name-constraints-merge.c
++++ b/tests/name-constraints-merge.c
+@@ -418,6 +418,61 @@ void doit(void)
+ gnutls_x509_name_constraints_deinit(nc1);
+ gnutls_x509_name_constraints_deinit(nc2);
+
++ /* 5: variant of suite 0: after moving rfc822Name (ccc.com)
++ * from NC1 to NC2, dNSName (xxx.ccc.com) should still be
++ * rejected.
++ *
++ * NC1: permitted DNS org
++ * permitted DNS ccc.com
++ * NC2: permitted DNS org
++ * permitted email ccc.com
++ * permitted DNS aaa.bbb.ccc.com
++ */
++ suite = 5;
++
++ ret = gnutls_x509_name_constraints_init(&nc1);
++ check_for_error(ret);
++
++ ret = gnutls_x509_name_constraints_init(&nc2);
++ check_for_error(ret);
++
++ set_name("org", &name);
++ ret = gnutls_x509_name_constraints_add_permitted(
++ nc1, GNUTLS_SAN_DNSNAME, &name);
++ check_for_error(ret);
++
++ set_name("ccc.com", &name);
++ ret = gnutls_x509_name_constraints_add_permitted(
++ nc1, GNUTLS_SAN_DNSNAME, &name);
++ check_for_error(ret);
++
++ set_name("org", &name);
++ ret = gnutls_x509_name_constraints_add_permitted(
++ nc2, GNUTLS_SAN_DNSNAME, &name);
++ check_for_error(ret);
++
++ set_name("ccc.com", &name);
++ ret = gnutls_x509_name_constraints_add_permitted(
++ nc2, GNUTLS_SAN_RFC822NAME, &name);
++ check_for_error(ret);
++
++ set_name("aaa.bbb.ccc.com", &name);
++ ret = gnutls_x509_name_constraints_add_permitted(
++ nc2, GNUTLS_SAN_DNSNAME, &name);
++ check_for_error(ret);
++
++ ret = _gnutls_x509_name_constraints_merge(nc1, nc2);
++ check_for_error(ret);
++
++ /* check intersection of permitted */
++ set_name("xxx.ccc.com", &name);
++ ret = gnutls_x509_name_constraints_check(nc1, GNUTLS_SAN_DNSNAME,
++ &name);
++ check_test_result(suite, ret, NAME_REJECTED, &name);
++
++ gnutls_x509_name_constraints_deinit(nc1);
++ gnutls_x509_name_constraints_deinit(nc2);
++
+ /* Test footer */
+
+ if (debug)
+--
+GitLab
+
diff --git a/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-10.patch b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-10.patch
new file mode 100644
index 00000000000..5507719ac01
--- /dev/null
+++ b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-10.patch
@@ -0,0 +1,424 @@
+From d6054f0016db05fb5c82177ddbd0a4e8331059a1 Mon Sep 17 00:00:00 2001
+From: Alexander Sosedkin <asosedkin@redhat.com>
+Date: Wed, 4 Feb 2026 20:03:49 +0100
+Subject: [PATCH] x509/name_constraints: name_constraints_node_list_intersect
+ over sorted
+
+Fixes: #1773
+Fixes: GNUTLS-SA-2026-02-09-2
+Fixes: CVE-2025-14831
+
+Signed-off-by: Alexander Sosedkin <asosedkin@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/d6054f0016db05fb5c82177ddbd0a4e8331059a1]
+CVE: CVE-2025-14831
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/x509/name_constraints.c | 350 ++++++++++++++----------------------
+ 1 file changed, 135 insertions(+), 215 deletions(-)
+
+diff --git a/lib/x509/name_constraints.c b/lib/x509/name_constraints.c
+index 1d78d1b..04722bd 100644
+--- a/lib/x509/name_constraints.c
++++ b/lib/x509/name_constraints.c
+@@ -446,13 +446,6 @@ name_constraints_node_add_copy(gnutls_x509_name_constraints_t nc,
+ src->name.data, src->name.size);
+ }
+
+-// for documentation see the implementation
+-static int name_constraints_intersect_nodes(
+- gnutls_x509_name_constraints_t nc,
+- const struct name_constraints_node_st *node1,
+- const struct name_constraints_node_st *node2,
+- struct name_constraints_node_st **intersection);
+-
+ /*-
+ * _gnutls_x509_name_constraints_is_empty:
+ * @nc: name constraints structure
+@@ -716,132 +709,143 @@ typedef char assert_ipaddr[(GNUTLS_SAN_IPADDRESS <= GNUTLS_SAN_MAX) ? 1 : -1];
+ static int name_constraints_node_list_intersect(
+ gnutls_x509_name_constraints_t nc,
+ struct name_constraints_node_list_st *permitted,
+- const struct name_constraints_node_list_st *permitted2,
++ struct name_constraints_node_list_st *permitted2,
+ struct name_constraints_node_list_st *excluded)
+ {
+- struct name_constraints_node_st *tmp;
+- int ret, type, used;
+- struct name_constraints_node_list_st removed = { .data = NULL,
+- .size = 0,
+- .capacity = 0 };
++ struct name_constraints_node_st *nc1, *nc2;
++ struct name_constraints_node_list_st result = { 0 };
++ struct name_constraints_node_list_st unsupp2 = { 0 };
++ enum name_constraint_relation rel;
++ unsigned type;
++ int ret = GNUTLS_E_SUCCESS;
++ size_t i, j, p1_unsupp = 0, p2_unsupp = 0;
++ type_bitmask_t universal_exclude_needed = 0;
++ type_bitmask_t types_in_p1 = 0, types_in_p2 = 0;
+ static const unsigned char universal_ip[32] = { 0 };
+
+- /* bitmask to see if we need to add universal excluded constraints
+- * (see phase 3 for details) */
+- type_bitmask_t types_with_empty_intersection = 0;
+-
+ if (permitted->size == 0 || permitted2->size == 0)
+- return 0;
++ return GNUTLS_E_SUCCESS;
+
+- /* Phase 1
+- * For each name in PERMITTED, if a PERMITTED2 does not contain a name
+- * with the same type, move the original name to REMOVED.
+- * Do this also for node of unknown type (not DNS, email, IP) */
+- for (size_t i = 0; i < permitted->size;) {
+- struct name_constraints_node_st *t = permitted->data[i];
+- const struct name_constraints_node_st *found = NULL;
+-
+- for (size_t j = 0; j < permitted2->size; j++) {
+- const struct name_constraints_node_st *t2 =
+- permitted2->data[j];
+- if (t->type == t2->type) {
+- // check bounds (we will use 't->type' as index)
+- if (t->type > GNUTLS_SAN_MAX || t->type == 0) {
+- gnutls_assert();
+- ret = GNUTLS_E_INTERNAL_ERROR;
+- goto cleanup;
+- }
+- // note the possibility of empty intersection for this type
+- // if we add something to the intersection in phase 2,
+- // we will reset this flag back to 0 then
+- type_bitmask_set(types_with_empty_intersection,
+- t->type);
+- found = t2;
+- break;
+- }
+- }
++ /* make sorted views of the arrays */
++ ret = ensure_sorted(permitted);
++ if (ret < 0) {
++ gnutls_assert();
++ goto cleanup;
++ }
++ ret = ensure_sorted(permitted2);
++ if (ret < 0) {
++ gnutls_assert();
++ goto cleanup;
++ }
+
+- if (found != NULL && is_supported_type(t->type)) {
+- /* move node from PERMITTED to REMOVED */
+- ret = name_constraints_node_list_add(&removed, t);
+- if (ret < 0) {
+- gnutls_assert();
+- goto cleanup;
+- }
+- /* remove node by swapping */
+- if (i < permitted->size - 1)
+- permitted->data[i] =
+- permitted->data[permitted->size - 1];
+- permitted->size--;
+- permitted->dirty = true;
+- continue;
++ /* deal with the leading unsupported types first: count, then union */
++ while (p1_unsupp < permitted->size &&
++ !is_supported_type(permitted->sorted_view[p1_unsupp]->type))
++ p1_unsupp++;
++ while (p2_unsupp < permitted2->size &&
++ !is_supported_type(permitted2->sorted_view[p2_unsupp]->type))
++ p2_unsupp++;
++ if (p1_unsupp) { /* copy p1 unsupported type pointers into result */
++ result.data = gnutls_calloc(
++ p1_unsupp, sizeof(struct name_constraints_node_st *));
++ if (!result.data) {
++ ret = GNUTLS_E_MEMORY_ERROR;
++ gnutls_assert();
++ goto cleanup;
++ }
++ memcpy(result.data, permitted->sorted_view,
++ p1_unsupp * sizeof(struct name_constraints_node_st *));
++ result.size = result.capacity = p1_unsupp;
++ result.dirty = true;
++ }
++ if (p2_unsupp) { /* union will make deep copies from p2 */
++ unsupp2.data = permitted2->sorted_view; /* so, just alias */
++ unsupp2.size = unsupp2.capacity = p2_unsupp;
++ unsupp2.dirty = false; /* we know it's sorted */
++ unsupp2.sorted_view = permitted2->sorted_view;
++ ret = name_constraints_node_list_union(nc, &result, &unsupp2);
++ if (ret < 0) {
++ gnutls_assert();
++ goto cleanup;
+ }
+- i++;
+ }
+
+- /* Phase 2
+- * iterate through all combinations from PERMITTED2 and PERMITTED
+- * and create intersections of nodes with same type */
+- for (size_t i = 0; i < permitted2->size; i++) {
+- const struct name_constraints_node_st *t2 = permitted2->data[i];
+-
+- // current PERMITTED2 node has not yet been used for any intersection
+- // (and is not in REMOVED either)
+- used = 0;
+- for (size_t j = 0; j < removed.size; j++) {
+- const struct name_constraints_node_st *t =
+- removed.data[j];
+- // save intersection of name constraints into tmp
+- ret = name_constraints_intersect_nodes(nc, t, t2, &tmp);
+- if (ret < 0) {
+- gnutls_assert();
+- goto cleanup;
+- }
++ /* with that out of the way, pre-compute the supported types we have */
++ for (i = p1_unsupp; i < permitted->size; i++) {
++ type = permitted->sorted_view[i]->type;
++ if (type < 1 || type > GNUTLS_SAN_MAX) {
++ ret = gnutls_assert_val(GNUTLS_E_INTERNAL_ERROR);
++ goto cleanup;
++ }
++ type_bitmask_set(types_in_p1, type);
++ }
++ for (j = p2_unsupp; j < permitted2->size; j++) {
++ type = permitted2->sorted_view[j]->type;
++ if (type < 1 || type > GNUTLS_SAN_MAX) {
++ ret = gnutls_assert_val(GNUTLS_E_INTERNAL_ERROR);
++ goto cleanup;
++ }
++ type_bitmask_set(types_in_p2, type);
++ }
++ /* universal excludes might be needed for types intersecting to empty */
++ universal_exclude_needed = types_in_p1 & types_in_p2;
++
++ /* go through supported type NCs and intersect in a single pass */
++ i = p1_unsupp;
++ j = p2_unsupp;
++ while (i < permitted->size || j < permitted2->size) {
++ nc1 = (i < permitted->size) ? permitted->sorted_view[i] : NULL;
++ nc2 = (j < permitted2->size) ? permitted2->sorted_view[j] :
++ NULL;
++ rel = compare_name_constraint_nodes(nc1, nc2);
+
+- if (t->type == t2->type)
+- used = 1;
+-
+- // if intersection is not empty
+- if (tmp !=
+- NULL) { // intersection for this type is not empty
+- // check bounds
+- if (tmp->type > GNUTLS_SAN_MAX ||
+- tmp->type == 0) {
+- gnutls_free(tmp);
+- return gnutls_assert_val(
+- GNUTLS_E_INTERNAL_ERROR);
+- }
+- // we will not add universal excluded constraint for this type
+- type_bitmask_clr(types_with_empty_intersection,
+- tmp->type);
+- // add intersection node to PERMITTED
+- ret = name_constraints_node_list_add(permitted,
+- tmp);
+- if (ret < 0) {
+- gnutls_assert();
+- goto cleanup;
+- }
+- }
++ switch (rel) {
++ case NC_SORTS_BEFORE:
++ assert(nc1 != NULL); /* comparator-guaranteed */
++ /* if nothing to intersect with, shallow-copy nc1 */
++ if (!type_bitmask_in(types_in_p2, nc1->type))
++ ret = name_constraints_node_list_add(&result,
++ nc1);
++ i++; /* otherwise skip nc1 */
++ break;
++ case NC_SORTS_AFTER:
++ assert(nc2 != NULL); /* comparator-guaranteed */
++ /* if nothing to intersect with, deep-copy nc2 */
++ if (!type_bitmask_in(types_in_p1, nc2->type))
++ ret = name_constraints_node_add_copy(
++ nc, &result, nc2);
++ j++; /* otherwise skip nc2 */
++ break;
++ case NC_INCLUDED_BY: /* add nc1, shallow-copy */
++ assert(nc1 != NULL && nc2 != NULL); /* comparator */
++ type_bitmask_clr(universal_exclude_needed, nc1->type);
++ ret = name_constraints_node_list_add(&result, nc1);
++ i++;
++ break;
++ case NC_INCLUDES: /* pick nc2, deep-copy */
++ assert(nc1 != NULL && nc2 != NULL); /* comparator */
++ type_bitmask_clr(universal_exclude_needed, nc2->type);
++ ret = name_constraints_node_add_copy(nc, &result, nc2);
++ j++;
++ break;
++ case NC_EQUAL: /* pick whichever: nc1, shallow-copy */
++ assert(nc1 != NULL && nc2 != NULL); /* loop condition */
++ type_bitmask_clr(universal_exclude_needed, nc1->type);
++ ret = name_constraints_node_list_add(&result, nc1);
++ i++;
++ j++;
++ break;
+ }
+- // if the node from PERMITTED2 was not used for intersection, copy it to DEST
+- // Beware: also copies nodes other than DNS, email, IP,
+- // since their counterpart may have been moved in phase 1.
+- if (!used) {
+- ret = name_constraints_node_add_copy(nc, permitted, t2);
+- if (ret < 0) {
+- gnutls_assert();
+- goto cleanup;
+- }
++ if (ret < 0) {
++ gnutls_assert();
++ goto cleanup;
+ }
+ }
+
+- /* Phase 3
+- * For each type: If we have empty permitted name constraints now
+- * and we didn't have at the beginning, we have to add a new
+- * excluded constraint with universal wildcard
+- * (since the intersection of permitted is now empty). */
++ /* finishing touch: add universal excluded constraints for types where
++ * both lists had constraints, but all intersections ended up empty */
+ for (type = 1; type <= GNUTLS_SAN_MAX; type++) {
+- if (!type_bitmask_in(types_with_empty_intersection, type))
++ if (!type_bitmask_in(universal_exclude_needed, type))
+ continue;
+ _gnutls_hard_log(
+ "Adding universal excluded name constraint for type %d.\n",
+@@ -874,14 +878,24 @@ static int name_constraints_node_list_intersect(
+ goto cleanup;
+ }
+ break;
+- default: // do nothing, at least one node was already moved in phase 1
+- break;
++ default: /* unsupported type; should be unreacheable */
++ ret = gnutls_assert_val(GNUTLS_E_INTERNAL_ERROR);
++ goto cleanup;
+ }
+ }
+- ret = GNUTLS_E_SUCCESS;
+
++ gnutls_free(permitted->data);
++ gnutls_free(permitted->sorted_view);
++ permitted->data = result.data;
++ permitted->sorted_view = NULL;
++ permitted->size = result.size;
++ permitted->capacity = result.capacity;
++ permitted->dirty = true;
++
++ result.data = NULL;
++ ret = GNUTLS_E_SUCCESS;
+ cleanup:
+- gnutls_free(removed.data);
++ name_constraints_node_list_clear(&result);
+ return ret;
+ }
+
+@@ -1257,100 +1271,6 @@ static unsigned email_matches(const gnutls_datum_t *name,
+ return rel == NC_EQUAL || rel == NC_INCLUDED_BY;
+ }
+
+-/*-
+- * name_constraints_intersect_nodes:
+- * @nc1: name constraints node 1
+- * @nc2: name constraints node 2
+- * @_intersection: newly allocated node with intersected constraints,
+- * NULL if the intersection is empty
+- *
+- * Inspect 2 name constraints nodes (of possibly different types) and allocate
+- * a new node with intersection of given constraints.
+- *
+- * Returns: On success, %GNUTLS_E_SUCCESS (0) is returned, otherwise a negative error value.
+- -*/
+-static int name_constraints_intersect_nodes(
+- gnutls_x509_name_constraints_t nc,
+- const struct name_constraints_node_st *node1,
+- const struct name_constraints_node_st *node2,
+- struct name_constraints_node_st **_intersection)
+-{
+- // presume empty intersection
+- struct name_constraints_node_st *intersection = NULL;
+- const struct name_constraints_node_st *to_copy = NULL;
+- enum name_constraint_relation rel;
+-
+- *_intersection = NULL;
+-
+- if (node1->type != node2->type) {
+- return GNUTLS_E_SUCCESS;
+- }
+- switch (node1->type) {
+- case GNUTLS_SAN_DNSNAME:
+- rel = compare_dns_names(&node1->name, &node2->name);
+- switch (rel) {
+- case NC_EQUAL: // equal means doesn't matter which one
+- case NC_INCLUDES: // node2 is more specific
+- to_copy = node2;
+- break;
+- case NC_INCLUDED_BY: // node1 is more specific
+- to_copy = node1;
+- break;
+- case NC_SORTS_BEFORE: // no intersection
+- case NC_SORTS_AFTER: // no intersection
+- return GNUTLS_E_SUCCESS;
+- }
+- break;
+- case GNUTLS_SAN_RFC822NAME:
+- rel = compare_emails(&node1->name, &node2->name);
+- switch (rel) {
+- case NC_EQUAL: // equal means doesn't matter which one
+- case NC_INCLUDES: // node2 is more specific
+- to_copy = node2;
+- break;
+- case NC_INCLUDED_BY: // node1 is more specific
+- to_copy = node1;
+- break;
+- case NC_SORTS_BEFORE: // no intersection
+- case NC_SORTS_AFTER: // no intersection
+- return GNUTLS_E_SUCCESS;
+- }
+- break;
+- case GNUTLS_SAN_IPADDRESS:
+- rel = compare_ip_ncs(&node1->name, &node2->name);
+- switch (rel) {
+- case NC_EQUAL: // equal means doesn't matter which one
+- case NC_INCLUDES: // node2 is more specific
+- to_copy = node2;
+- break;
+- case NC_INCLUDED_BY: // node1 is more specific
+- to_copy = node1;
+- break;
+- case NC_SORTS_BEFORE: // no intersection
+- case NC_SORTS_AFTER: // no intersection
+- return GNUTLS_E_SUCCESS;
+- }
+- break;
+- default:
+- // for other types, we don't know how to do the intersection, assume empty
+- return GNUTLS_E_SUCCESS;
+- }
+-
+- // copy existing node if applicable
+- if (to_copy != NULL) {
+- *_intersection = name_constraints_node_new(nc, to_copy->type,
+- to_copy->name.data,
+- to_copy->name.size);
+- if (*_intersection == NULL)
+- return gnutls_assert_val(GNUTLS_E_MEMORY_ERROR);
+- intersection = *_intersection;
+-
+- assert(intersection->name.data != NULL);
+- }
+-
+- return GNUTLS_E_SUCCESS;
+-}
+-
+ /*
+ * Returns: true if the certification is acceptable, and false otherwise.
+ */
+--
+2.43.0
+
diff --git a/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-2.patch b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-2.patch
new file mode 100644
index 00000000000..9066098a01f
--- /dev/null
+++ b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-2.patch
@@ -0,0 +1,66 @@
+From 0b2377dfccd99be641bf3f1a0de9f0dc8dc0d4b1 Mon Sep 17 00:00:00 2001
+From: Alexander Sosedkin <asosedkin@redhat.com>
+Date: Mon, 26 Jan 2026 19:02:27 +0100
+Subject: [PATCH] x509/name_constraints: use actual zeroes in universal exclude
+ IP NC
+
+Signed-off-by: Alexander Sosedkin <asosedkin@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/0b2377dfccd99be641bf3f1a0de9f0dc8dc0d4b1]
+CVE: CVE-2025-14831
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/x509/name_constraints.c | 9 +++++----
+ 1 file changed, 5 insertions(+), 4 deletions(-)
+
+diff --git a/lib/x509/name_constraints.c b/lib/x509/name_constraints.c
+index 2be6a2aaa6..d07482e3c9 100644
+--- a/lib/x509/name_constraints.c
++++ b/lib/x509/name_constraints.c
+@@ -61,7 +61,7 @@ struct gnutls_name_constraints_st {
+
+ static struct name_constraints_node_st *
+ name_constraints_node_new(gnutls_x509_name_constraints_t nc, unsigned type,
+- unsigned char *data, unsigned int size);
++ const unsigned char *data, unsigned int size);
+
+ static int
+ name_constraints_node_list_add(struct name_constraints_node_list_st *list,
+@@ -285,7 +285,7 @@ static void name_constraints_node_free(struct name_constraints_node_st *node)
+ -*/
+ static struct name_constraints_node_st *
+ name_constraints_node_new(gnutls_x509_name_constraints_t nc, unsigned type,
+- unsigned char *data, unsigned int size)
++ const unsigned char *data, unsigned int size)
+ {
+ struct name_constraints_node_st *tmp;
+ int ret;
+@@ -339,6 +339,7 @@ static int name_constraints_node_list_intersect(
+ struct name_constraints_node_list_st removed = { .data = NULL,
+ .size = 0,
+ .capacity = 0 };
++ static const unsigned char universal_ip[32] = { 0 };
+
+ /* temporary array to see, if we need to add universal excluded constraints
+ * (see phase 3 for details)
+@@ -474,7 +475,7 @@ static int name_constraints_node_list_intersect(
+ case GNUTLS_SAN_IPADDRESS:
+ // add universal restricted range for IPv4
+ tmp = name_constraints_node_new(
+- nc, GNUTLS_SAN_IPADDRESS, NULL, 8);
++ nc, GNUTLS_SAN_IPADDRESS, universal_ip, 8);
+ if (tmp == NULL) {
+ gnutls_assert();
+ ret = GNUTLS_E_MEMORY_ERROR;
+@@ -487,7 +488,7 @@ static int name_constraints_node_list_intersect(
+ }
+ // add universal restricted range for IPv6
+ tmp = name_constraints_node_new(
+- nc, GNUTLS_SAN_IPADDRESS, NULL, 32);
++ nc, GNUTLS_SAN_IPADDRESS, universal_ip, 32);
+ if (tmp == NULL) {
+ gnutls_assert();
+ ret = GNUTLS_E_MEMORY_ERROR;
+--
+GitLab
+
diff --git a/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-3.patch b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-3.patch
new file mode 100644
index 00000000000..0d340325541
--- /dev/null
+++ b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-3.patch
@@ -0,0 +1,30 @@
+From 85d6348a30c74d4ee3710e0f4652f634eaad6914 Mon Sep 17 00:00:00 2001
+From: Alexander Sosedkin <asosedkin@redhat.com>
+Date: Mon, 26 Jan 2026 19:10:58 +0100
+Subject: [PATCH] tests/name-constraints-ip: stop swallowing errors...
+
+... now when it started to pass
+
+Signed-off-by: Alexander Sosedkin <asosedkin@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/85d6348a30c74d4ee3710e0f4652f634eaad6914]
+CVE: CVE-2025-14831
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ tests/name-constraints-ip.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/tests/name-constraints-ip.c b/tests/name-constraints-ip.c
+index 7a196088dc..a0cf172b7f 100644
+--- a/tests/name-constraints-ip.c
++++ b/tests/name-constraints-ip.c
+@@ -772,5 +772,5 @@ int main(int argc, char **argv)
+ cmocka_unit_test_setup_teardown(
+ check_ipv4v6_single_constraint_each, setup, teardown)
+ };
+- cmocka_run_group_tests(tests, NULL, NULL);
++ return cmocka_run_group_tests(tests, NULL, NULL);
+ }
+--
+GitLab
+
diff --git a/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-4.patch b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-4.patch
new file mode 100644
index 00000000000..ed4a7da3c7a
--- /dev/null
+++ b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-4.patch
@@ -0,0 +1,45 @@
+From c28475413f82e1f34295d5c039f0c0a4ca2ee526 Mon Sep 17 00:00:00 2001
+From: Alexander Sosedkin <asosedkin@redhat.com>
+Date: Mon, 26 Jan 2026 20:14:33 +0100
+Subject: [PATCH] x509/name_constraints: reject some malformed domain names
+
+Signed-off-by: Alexander Sosedkin <asosedkin@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/c28475413f82e1f34295d5c039f0c0a4ca2ee526]
+CVE: CVE-2025-14831
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/x509/name_constraints.c | 17 +++++++++++++++++
+ 1 file changed, 17 insertions(+)
+
+diff --git a/lib/x509/name_constraints.c b/lib/x509/name_constraints.c
+index d07482e3c9..9783d92851 100644
+--- a/lib/x509/name_constraints.c
++++ b/lib/x509/name_constraints.c
+@@ -159,6 +159,23 @@ static int validate_name_constraints_node(gnutls_x509_subject_alt_name_t type,
+ return gnutls_assert_val(GNUTLS_E_MALFORMED_CIDR);
+ }
+
++ /* Validate DNS names and email addresses for malformed input */
++ if (type == GNUTLS_SAN_DNSNAME || type == GNUTLS_SAN_RFC822NAME) {
++ unsigned int i;
++ if (name->size == 0)
++ return GNUTLS_E_SUCCESS;
++
++ /* reject names with consecutive dots... */
++ for (i = 0; i + 1 < name->size; i++) {
++ if (name->data[i] == '.' && name->data[i + 1] == '.')
++ return gnutls_assert_val(
++ GNUTLS_E_ILLEGAL_PARAMETER);
++ }
++ /* ... or names consisting exclusively of dots */
++ if (name->size == 1 && name->data[0] == '.')
++ return gnutls_assert_val(GNUTLS_E_ILLEGAL_PARAMETER);
++ }
++
+ return GNUTLS_E_SUCCESS;
+ }
+
+--
+GitLab
+
diff --git a/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-5.patch b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-5.patch
new file mode 100644
index 00000000000..514ee3c70fc
--- /dev/null
+++ b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-5.patch
@@ -0,0 +1,205 @@
+From 6db7da7fcfe230f445b1edbb56e2a8346120c891 Mon Sep 17 00:00:00 2001
+From: Alexander Sosedkin <asosedkin@redhat.com>
+Date: Thu, 5 Feb 2026 13:22:10 +0100
+Subject: [PATCH] x509/name_constraints: name_constraints_node_add_{new,copy}
+
+Signed-off-by: Alexander Sosedkin <asosedkin@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/6db7da7fcfe230f445b1edbb56e2a8346120c891]
+CVE: CVE-2025-14831
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/x509/name_constraints.c | 112 ++++++++++++++++--------------------
+ 1 file changed, 51 insertions(+), 61 deletions(-)
+
+diff --git a/lib/x509/name_constraints.c b/lib/x509/name_constraints.c
+index 9783d92851..81035eef8f 100644
+--- a/lib/x509/name_constraints.c
++++ b/lib/x509/name_constraints.c
+@@ -86,6 +86,38 @@ name_constraints_node_list_add(struct name_constraints_node_list_st *list,
+ return 0;
+ }
+
++static int
++name_constraints_node_add_new(gnutls_x509_name_constraints_t nc,
++ struct name_constraints_node_list_st *list,
++ unsigned type, const unsigned char *data,
++ unsigned int size)
++{
++ struct name_constraints_node_st *node;
++ int ret;
++ node = name_constraints_node_new(nc, type, data, size);
++ if (node == NULL) {
++ gnutls_assert();
++ return GNUTLS_E_MEMORY_ERROR;
++ }
++ ret = name_constraints_node_list_add(list, node);
++ if (ret < 0) {
++ gnutls_assert();
++ return ret;
++ }
++ return GNUTLS_E_SUCCESS;
++}
++
++static int
++name_constraints_node_add_copy(gnutls_x509_name_constraints_t nc,
++ struct name_constraints_node_list_st *dest,
++ const struct name_constraints_node_st *src)
++{
++ if (!src)
++ return gnutls_assert_val(GNUTLS_E_INTERNAL_ERROR);
++ return name_constraints_node_add_new(nc, dest, src->type,
++ src->name.data, src->name.size);
++}
++
+ // for documentation see the implementation
+ static int name_constraints_intersect_nodes(
+ gnutls_x509_name_constraints_t nc,
+@@ -188,7 +220,6 @@ static int extract_name_constraints(gnutls_x509_name_constraints_t nc,
+ unsigned indx;
+ gnutls_datum_t tmp = { NULL, 0 };
+ unsigned int type;
+- struct name_constraints_node_st *node;
+
+ for (indx = 1;; indx++) {
+ snprintf(tmpstr, sizeof(tmpstr), "%s.?%u.base", vstr, indx);
+@@ -231,15 +262,9 @@ static int extract_name_constraints(gnutls_x509_name_constraints_t nc,
+ goto cleanup;
+ }
+
+- node = name_constraints_node_new(nc, type, tmp.data, tmp.size);
++ ret = name_constraints_node_add_new(nc, nodes, type, tmp.data,
++ tmp.size);
+ _gnutls_free_datum(&tmp);
+- if (node == NULL) {
+- gnutls_assert();
+- ret = GNUTLS_E_MEMORY_ERROR;
+- goto cleanup;
+- }
+-
+- ret = name_constraints_node_list_add(nodes, node);
+ if (ret < 0) {
+ gnutls_assert();
+ goto cleanup;
+@@ -462,14 +487,7 @@ static int name_constraints_node_list_intersect(
+ // Beware: also copies nodes other than DNS, email, IP,
+ // since their counterpart may have been moved in phase 1.
+ if (!used) {
+- tmp = name_constraints_node_new(
+- nc, t2->type, t2->name.data, t2->name.size);
+- if (tmp == NULL) {
+- gnutls_assert();
+- ret = GNUTLS_E_MEMORY_ERROR;
+- goto cleanup;
+- }
+- ret = name_constraints_node_list_add(permitted, tmp);
++ ret = name_constraints_node_add_copy(nc, permitted, t2);
+ if (ret < 0) {
+ gnutls_assert();
+ goto cleanup;
+@@ -491,27 +509,17 @@ static int name_constraints_node_list_intersect(
+ switch (type) {
+ case GNUTLS_SAN_IPADDRESS:
+ // add universal restricted range for IPv4
+- tmp = name_constraints_node_new(
+- nc, GNUTLS_SAN_IPADDRESS, universal_ip, 8);
+- if (tmp == NULL) {
+- gnutls_assert();
+- ret = GNUTLS_E_MEMORY_ERROR;
+- goto cleanup;
+- }
+- ret = name_constraints_node_list_add(excluded, tmp);
++ ret = name_constraints_node_add_new(
++ nc, excluded, GNUTLS_SAN_IPADDRESS,
++ universal_ip, 8);
+ if (ret < 0) {
+ gnutls_assert();
+ goto cleanup;
+ }
+ // add universal restricted range for IPv6
+- tmp = name_constraints_node_new(
+- nc, GNUTLS_SAN_IPADDRESS, universal_ip, 32);
+- if (tmp == NULL) {
+- gnutls_assert();
+- ret = GNUTLS_E_MEMORY_ERROR;
+- goto cleanup;
+- }
+- ret = name_constraints_node_list_add(excluded, tmp);
++ ret = name_constraints_node_add_new(
++ nc, excluded, GNUTLS_SAN_IPADDRESS,
++ universal_ip, 32);
+ if (ret < 0) {
+ gnutls_assert();
+ goto cleanup;
+@@ -519,13 +527,8 @@ static int name_constraints_node_list_intersect(
+ break;
+ case GNUTLS_SAN_DNSNAME:
+ case GNUTLS_SAN_RFC822NAME:
+- tmp = name_constraints_node_new(nc, type, NULL, 0);
+- if (tmp == NULL) {
+- gnutls_assert();
+- ret = GNUTLS_E_MEMORY_ERROR;
+- goto cleanup;
+- }
+- ret = name_constraints_node_list_add(excluded, tmp);
++ ret = name_constraints_node_add_new(nc, excluded, type,
++ NULL, 0);
+ if (ret < 0) {
+ gnutls_assert();
+ goto cleanup;
+@@ -547,20 +550,13 @@ static int name_constraints_node_list_concat(
+ struct name_constraints_node_list_st *nodes,
+ const struct name_constraints_node_list_st *nodes2)
+ {
+- for (size_t i = 0; i < nodes2->size; i++) {
+- const struct name_constraints_node_st *node = nodes2->data[i];
+- struct name_constraints_node_st *tmp;
+- int ret;
++ int ret;
+
+- tmp = name_constraints_node_new(nc, node->type, node->name.data,
+- node->name.size);
+- if (tmp == NULL) {
+- return gnutls_assert_val(GNUTLS_E_MEMORY_ERROR);
+- }
+- ret = name_constraints_node_list_add(nodes, tmp);
++ for (size_t i = 0; i < nodes2->size; i++) {
++ ret = name_constraints_node_add_copy(nc, nodes,
++ nodes2->data[i]);
+ if (ret < 0) {
+- name_constraints_node_free(tmp);
+- return gnutls_assert_val(GNUTLS_E_MEMORY_ERROR);
++ return gnutls_assert_val(ret);
+ }
+ }
+
+@@ -690,7 +686,6 @@ static int name_constraints_add(gnutls_x509_name_constraints_t nc,
+ gnutls_x509_subject_alt_name_t type,
+ const gnutls_datum_t *name, unsigned permitted)
+ {
+- struct name_constraints_node_st *tmp;
+ struct name_constraints_node_list_st *nodes;
+ int ret;
+
+@@ -700,15 +695,10 @@ static int name_constraints_add(gnutls_x509_name_constraints_t nc,
+
+ nodes = permitted ? &nc->permitted : &nc->excluded;
+
+- tmp = name_constraints_node_new(nc, type, name->data, name->size);
+- if (tmp == NULL)
+- return gnutls_assert_val(GNUTLS_E_MEMORY_ERROR);
+-
+- ret = name_constraints_node_list_add(nodes, tmp);
+- if (ret < 0) {
+- name_constraints_node_free(tmp);
++ ret = name_constraints_node_add_new(nc, nodes, type, name->data,
++ name->size);
++ if (ret < 0)
+ return gnutls_assert_val(ret);
+- }
+
+ return 0;
+ }
+--
+GitLab
+
diff --git a/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-6.patch b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-6.patch
new file mode 100644
index 00000000000..6970c6ccfe4
--- /dev/null
+++ b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-6.patch
@@ -0,0 +1,505 @@
+From 094accd3ebec17ead6c391757eaa18763b72d83f Mon Sep 17 00:00:00 2001
+From: Alexander Sosedkin <asosedkin@redhat.com>
+Date: Mon, 26 Jan 2026 20:16:36 +0100
+Subject: [PATCH] x509/name_constraints: introduce a rich comparator
+
+These are preparatory changes before implementing N * log N intersection
+over sorted lists of constraints.
+
+Signed-off-by: Alexander Sosedkin <asosedkin@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/094accd3ebec17ead6c391757eaa18763b72d83f]
+CVE: CVE-2025-14831
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/x509/name_constraints.c | 411 ++++++++++++++++++++++++++++--------
+ 1 file changed, 320 insertions(+), 91 deletions(-)
+
+diff --git a/lib/x509/name_constraints.c b/lib/x509/name_constraints.c
+index 81035eef8f..b5d732d0c5 100644
+--- a/lib/x509/name_constraints.c
++++ b/lib/x509/name_constraints.c
+@@ -39,6 +39,9 @@
+ #include "ip.h"
+ #include "ip-in-cidr.h"
+ #include "intprops.h"
++#include "minmax.h"
++
++#include <string.h>
+
+ #define MAX_NC_CHECKS (1 << 20)
+
+@@ -63,6 +66,282 @@ static struct name_constraints_node_st *
+ name_constraints_node_new(gnutls_x509_name_constraints_t nc, unsigned type,
+ const unsigned char *data, unsigned int size);
+
++/* An enum for "rich" comparisons that not only let us sort name constraints,
++ * children-before-parent, but also subsume them during intersection. */
++enum name_constraint_relation {
++ NC_SORTS_BEFORE = -2, /* unrelated constraints */
++ NC_INCLUDED_BY = -1, /* nc1 is included by nc2 / children sort first */
++ NC_EQUAL = 0, /* exact match */
++ NC_INCLUDES = 1, /* nc1 includes nc2 / parents sort last */
++ NC_SORTS_AFTER = 2 /* unrelated constraints */
++};
++
++/* A helper to compare just a pair of strings with this rich comparison */
++static enum name_constraint_relation
++compare_strings(const void *n1, size_t n1_len, const void *n2, size_t n2_len)
++{
++ int r = memcmp(n1, n2, MIN(n1_len, n2_len));
++ if (r < 0)
++ return NC_SORTS_BEFORE;
++ if (r > 0)
++ return NC_SORTS_AFTER;
++ if (n1_len < n2_len)
++ return NC_SORTS_BEFORE;
++ if (n1_len > n2_len)
++ return NC_SORTS_AFTER;
++ return NC_EQUAL;
++}
++
++/* Rich-compare DNS names. Example order/relationships:
++ * z.x.a INCLUDED_BY x.a BEFORE y.a INCLUDED_BY a BEFORE x.b BEFORE y.b */
++static enum name_constraint_relation compare_dns_names(const gnutls_datum_t *n1,
++ const gnutls_datum_t *n2)
++{
++ enum name_constraint_relation rel;
++ unsigned int i, j, i_end, j_end;
++
++ /* start from the end of each name */
++ i = i_end = n1->size;
++ j = j_end = n2->size;
++
++ /* skip the trailing dots for the comparison */
++ while (i && n1->data[i - 1] == '.')
++ i_end = i = i - 1;
++ while (j && n2->data[j - 1] == '.')
++ j_end = j = j - 1;
++
++ while (1) {
++ // rewind back to beginning or an after-dot position
++ while (i && n1->data[i - 1] != '.')
++ i--;
++ while (j && n2->data[j - 1] != '.')
++ j--;
++
++ rel = compare_strings(&n1->data[i], i_end - i, &n2->data[j],
++ j_end - j);
++ if (rel == NC_SORTS_BEFORE) /* x.a BEFORE y.a */
++ return NC_SORTS_BEFORE;
++ if (rel == NC_SORTS_AFTER) /* y.a AFTER x.a */
++ return NC_SORTS_AFTER;
++ if (!i && j) /* x.a INCLUDES z.x.a */
++ return NC_INCLUDES;
++ if (i && !j) /* z.x.a INCLUDED_BY x.a */
++ return NC_INCLUDED_BY;
++
++ if (!i && !j) /* r == 0, we ran out of components to compare */
++ return NC_EQUAL;
++ /* r == 0, i && j: step back past a dot and keep comparing */
++ i_end = i = i - 1;
++ j_end = j = j - 1;
++
++ /* support for non-standard ".gr INCLUDES example.gr" [1] */
++ if (!i && j) /* .a INCLUDES x.a */
++ return NC_INCLUDES;
++ if (i && !j) /* x.a INCLUDED_BY .a */
++ return NC_INCLUDED_BY;
++ }
++}
++/* [1] https://mailarchive.ietf.org/arch/msg/saag/Bw6PtreW0G7aEG7SikfzKHES4VA */
++
++/* Rich-compare email name constraints. Example order/relationships:
++ * z@x.a INCLUDED_BY x.a BEFORE y.a INCLUDED_BY a BEFORE x@b BEFORE y@b */
++static enum name_constraint_relation compare_emails(const gnutls_datum_t *n1,
++ const gnutls_datum_t *n2)
++{
++ enum name_constraint_relation domains_rel;
++ unsigned int i, j, i_end, j_end;
++ gnutls_datum_t d1, d2; /* borrow from n1 and n2 */
++
++ /* start from the end of each name */
++ i = i_end = n1->size;
++ j = j_end = n2->size;
++
++ /* rewind to @s to look for domains */
++ while (i && n1->data[i - 1] != '@')
++ i--;
++ d1.size = i_end - i;
++ d1.data = &n1->data[i];
++ while (j && n2->data[j - 1] != '@')
++ j--;
++ d2.size = j_end - j;
++ d2.data = &n2->data[j];
++
++ domains_rel = compare_dns_names(&d1, &d2);
++
++ /* email constraint semantics differ from DNS
++ * DNS: x.a INCLUDED_BY a
++ * Email: x.a INCLUDED_BY .a BEFORE a */
++ if (domains_rel == NC_INCLUDED_BY || domains_rel == NC_INCLUDES) {
++ bool d1_has_dot = (d1.size > 0 && d1.data[0] == '.');
++ bool d2_has_dot = (d2.size > 0 && d2.data[0] == '.');
++ /* a constraint without a dot is exact, excluding subdomains */
++ if (!d2_has_dot && domains_rel == NC_INCLUDED_BY)
++ domains_rel = NC_SORTS_BEFORE; /* x.a BEFORE a */
++ if (!d1_has_dot && domains_rel == NC_INCLUDES)
++ domains_rel = NC_SORTS_AFTER; /* a AFTER x.a */
++ }
++
++ if (!i && !j) { /* both are domains-only */
++ return domains_rel;
++ } else if (i && !j) { /* n1 is email, n2 is domain */
++ switch (domains_rel) {
++ case NC_SORTS_AFTER:
++ return NC_SORTS_AFTER;
++ case NC_SORTS_BEFORE:
++ return NC_SORTS_BEFORE;
++ case NC_INCLUDES: /* n2 is more specific, a@x.a AFTER z.x.a */
++ return NC_SORTS_AFTER;
++ case NC_EQUAL: /* subdomains match, z@x.a INCLUDED_BY x.a */
++ case NC_INCLUDED_BY: /* n1 is more specific */
++ return NC_INCLUDED_BY;
++ }
++ } else if (!i && j) { /* n1 is domain, n2 is email */
++ switch (domains_rel) {
++ case NC_SORTS_AFTER:
++ return NC_SORTS_AFTER;
++ case NC_SORTS_BEFORE:
++ return NC_SORTS_BEFORE;
++ case NC_INCLUDES: /* n2 is more specific, a AFTER z@x.a */
++ return NC_SORTS_AFTER;
++ case NC_EQUAL: /* subdomains match, x.a INCLUDES z@x.a */
++ return NC_INCLUDES;
++ case NC_INCLUDED_BY: /* n1 is more specific, x.a BEFORE z@a */
++ return NC_SORTS_BEFORE;
++ }
++ } else if (i && j) { /* both are emails */
++ switch (domains_rel) {
++ case NC_SORTS_AFTER:
++ return NC_SORTS_AFTER;
++ case NC_SORTS_BEFORE:
++ return NC_SORTS_BEFORE;
++ case NC_INCLUDES: // n2 is more specific
++ return NC_SORTS_AFTER;
++ case NC_INCLUDED_BY: // n1 is more specific
++ return NC_SORTS_BEFORE;
++ case NC_EQUAL: // only case when we need to look before the @
++ break; // see below for readability
++ }
++ }
++
++ /* i && j, both are emails, domain names match, compare up to @ */
++ return compare_strings(n1->data, i - 1, n2->data, j - 1);
++}
++
++/* Rich-compare IP address constraints. Example order/relationships:
++ * 10.0.0.0/24 INCLUDED_BY 10.0.0.0/16 BEFORE 1::1/128 INCLUDED_BY 1::1/127 */
++static enum name_constraint_relation compare_ip_ncs(const gnutls_datum_t *n1,
++ const gnutls_datum_t *n2)
++{
++ unsigned int len, i;
++ int r;
++ const unsigned char *ip1, *ip2, *mask1, *mask2;
++ unsigned char masked11[16], masked22[16], masked12[16], masked21[16];
++
++ if (n1->size < n2->size)
++ return NC_SORTS_BEFORE;
++ if (n1->size > n2->size)
++ return NC_SORTS_AFTER;
++ len = n1->size / 2; /* 4 for IPv4, 16 for IPv6 */
++
++ /* data is a concatenation of prefix and mask */
++ ip1 = n1->data;
++ ip2 = n2->data;
++ mask1 = n1->data + len;
++ mask2 = n2->data + len;
++ for (i = 0; i < len; i++) {
++ masked11[i] = ip1[i] & mask1[i];
++ masked22[i] = ip2[i] & mask2[i];
++ masked12[i] = ip1[i] & mask2[i];
++ masked21[i] = ip2[i] & mask1[i];
++ }
++
++ r = memcmp(mask1, mask2, len);
++ if (r < 0 && !memcmp(masked11, masked21, len)) /* prefix1 < prefix2 */
++ return NC_INCLUDES; /* ip1 & mask1 == ip2 & mask1 */
++ if (r > 0 && !memcmp(masked12, masked22, len)) /* prefix1 > prefix2 */
++ return NC_INCLUDED_BY; /* ip1 & mask2 == ip2 & mask2 */
++
++ r = memcmp(masked11, masked22, len);
++ if (r < 0)
++ return NC_SORTS_BEFORE;
++ else if (r > 0)
++ return NC_SORTS_AFTER;
++ return NC_EQUAL;
++}
++
++static inline bool is_supported_type(unsigned type)
++{
++ return type == GNUTLS_SAN_DNSNAME || type == GNUTLS_SAN_RFC822NAME ||
++ type == GNUTLS_SAN_IPADDRESS;
++}
++
++/* Universal comparison for name constraint nodes.
++ * Unsupported types sort before supported types to allow early handling.
++ * NULL represents end-of-list and sorts after everything else. */
++static enum name_constraint_relation
++compare_name_constraint_nodes(const struct name_constraints_node_st *n1,
++ const struct name_constraints_node_st *n2)
++{
++ bool n1_supported, n2_supported;
++
++ if (!n1 && !n2)
++ return NC_EQUAL;
++ if (!n1)
++ return NC_SORTS_AFTER;
++ if (!n2)
++ return NC_SORTS_BEFORE;
++
++ n1_supported = is_supported_type(n1->type);
++ n2_supported = is_supported_type(n2->type);
++
++ /* unsupported types bubble up (sort first). intersect relies on this */
++ if (!n1_supported && n2_supported)
++ return NC_SORTS_BEFORE;
++ if (n1_supported && !n2_supported)
++ return NC_SORTS_AFTER;
++
++ /* next, sort by type */
++ if (n1->type < n2->type)
++ return NC_SORTS_BEFORE;
++ if (n1->type > n2->type)
++ return NC_SORTS_AFTER;
++
++ /* now look deeper */
++ switch (n1->type) {
++ case GNUTLS_SAN_DNSNAME:
++ return compare_dns_names(&n1->name, &n2->name);
++ case GNUTLS_SAN_RFC822NAME:
++ return compare_emails(&n1->name, &n2->name);
++ case GNUTLS_SAN_IPADDRESS:
++ return compare_ip_ncs(&n1->name, &n2->name);
++ default:
++ /* unsupported types: stable lexicographic order */
++ return compare_strings(n1->name.data, n1->name.size,
++ n2->name.data, n2->name.size);
++ }
++}
++
++/* qsort-compatible wrapper */
++static int compare_name_constraint_nodes_qsort(const void *a, const void *b)
++{
++ const struct name_constraints_node_st *const *n1 = a;
++ const struct name_constraints_node_st *const *n2 = b;
++ enum name_constraint_relation rel;
++
++ rel = compare_name_constraint_nodes(*n1, *n2);
++ switch (rel) {
++ case NC_SORTS_BEFORE:
++ case NC_INCLUDED_BY:
++ return -1;
++ case NC_SORTS_AFTER:
++ case NC_INCLUDES:
++ return 1;
++ case NC_EQUAL:
++ default:
++ return 0;
++ }
++}
++
+ static int
+ name_constraints_node_list_add(struct name_constraints_node_list_st *list,
+ struct name_constraints_node_st *node)
+@@ -420,9 +699,7 @@ static int name_constraints_node_list_intersect(
+ }
+ }
+
+- if (found != NULL && (t->type == GNUTLS_SAN_DNSNAME ||
+- t->type == GNUTLS_SAN_RFC822NAME ||
+- t->type == GNUTLS_SAN_IPADDRESS)) {
++ if (found != NULL && is_supported_type(t->type)) {
+ /* move node from PERMITTED to REMOVED */
+ ret = name_constraints_node_list_add(&removed, t);
+ if (ret < 0) {
+@@ -827,61 +1104,14 @@ cleanup:
+ return ret;
+ }
+
+-static unsigned ends_with(const gnutls_datum_t *str,
+- const gnutls_datum_t *suffix)
+-{
+- unsigned char *tree;
+- unsigned int treelen;
+-
+- if (suffix->size >= str->size)
+- return 0;
+-
+- tree = suffix->data;
+- treelen = suffix->size;
+- if ((treelen > 0) && (tree[0] == '.')) {
+- tree++;
+- treelen--;
+- }
+-
+- if (memcmp(str->data + str->size - treelen, tree, treelen) == 0 &&
+- str->data[str->size - treelen - 1] == '.')
+- return 1; /* match */
+-
+- return 0;
+-}
+-
+-static unsigned email_ends_with(const gnutls_datum_t *str,
+- const gnutls_datum_t *suffix)
+-{
+- if (suffix->size >= str->size) {
+- return 0;
+- }
+-
+- if (suffix->size > 0 && memcmp(str->data + str->size - suffix->size,
+- suffix->data, suffix->size) != 0) {
+- return 0;
+- }
+-
+- if (suffix->size > 1 && suffix->data[0] == '.') { /* .domain.com */
+- return 1; /* match */
+- } else if (str->data[str->size - suffix->size - 1] == '@') {
+- return 1; /* match */
+- }
+-
+- return 0;
+-}
+-
+ static unsigned dnsname_matches(const gnutls_datum_t *name,
+ const gnutls_datum_t *suffix)
+ {
+ _gnutls_hard_log("matching %.*s with DNS constraint %.*s\n", name->size,
+ name->data, suffix->size, suffix->data);
+
+- if (suffix->size == name->size &&
+- memcmp(suffix->data, name->data, suffix->size) == 0)
+- return 1; /* match */
+-
+- return ends_with(name, suffix);
++ enum name_constraint_relation rel = compare_dns_names(name, suffix);
++ return rel == NC_EQUAL || rel == NC_INCLUDED_BY;
+ }
+
+ static unsigned email_matches(const gnutls_datum_t *name,
+@@ -890,11 +1120,8 @@ static unsigned email_matches(const gnutls_datum_t *name,
+ _gnutls_hard_log("matching %.*s with e-mail constraint %.*s\n",
+ name->size, name->data, suffix->size, suffix->data);
+
+- if (suffix->size == name->size &&
+- memcmp(suffix->data, name->data, suffix->size) == 0)
+- return 1; /* match */
+-
+- return email_ends_with(name, suffix);
++ enum name_constraint_relation rel = compare_emails(name, suffix);
++ return rel == NC_EQUAL || rel == NC_INCLUDED_BY;
+ }
+
+ /*-
+@@ -918,8 +1145,7 @@ static int name_constraints_intersect_nodes(
+ // presume empty intersection
+ struct name_constraints_node_st *intersection = NULL;
+ const struct name_constraints_node_st *to_copy = NULL;
+- unsigned iplength = 0;
+- unsigned byte;
++ enum name_constraint_relation rel;
+
+ *_intersection = NULL;
+
+@@ -928,32 +1154,49 @@ static int name_constraints_intersect_nodes(
+ }
+ switch (node1->type) {
+ case GNUTLS_SAN_DNSNAME:
+- if (!dnsname_matches(&node2->name, &node1->name))
++ rel = compare_dns_names(&node1->name, &node2->name);
++ switch (rel) {
++ case NC_EQUAL: // equal means doesn't matter which one
++ case NC_INCLUDES: // node2 is more specific
++ to_copy = node2;
++ break;
++ case NC_INCLUDED_BY: // node1 is more specific
++ to_copy = node1;
++ break;
++ case NC_SORTS_BEFORE: // no intersection
++ case NC_SORTS_AFTER: // no intersection
+ return GNUTLS_E_SUCCESS;
+- to_copy = node2;
++ }
+ break;
+ case GNUTLS_SAN_RFC822NAME:
+- if (!email_matches(&node2->name, &node1->name))
++ rel = compare_emails(&node1->name, &node2->name);
++ switch (rel) {
++ case NC_EQUAL: // equal means doesn't matter which one
++ case NC_INCLUDES: // node2 is more specific
++ to_copy = node2;
++ break;
++ case NC_INCLUDED_BY: // node1 is more specific
++ to_copy = node1;
++ break;
++ case NC_SORTS_BEFORE: // no intersection
++ case NC_SORTS_AFTER: // no intersection
+ return GNUTLS_E_SUCCESS;
+- to_copy = node2;
++ }
+ break;
+ case GNUTLS_SAN_IPADDRESS:
+- if (node1->name.size != node2->name.size)
++ rel = compare_ip_ncs(&node1->name, &node2->name);
++ switch (rel) {
++ case NC_EQUAL: // equal means doesn't matter which one
++ case NC_INCLUDES: // node2 is more specific
++ to_copy = node2;
++ break;
++ case NC_INCLUDED_BY: // node1 is more specific
++ to_copy = node1;
++ break;
++ case NC_SORTS_BEFORE: // no intersection
++ case NC_SORTS_AFTER: // no intersection
+ return GNUTLS_E_SUCCESS;
+- iplength = node1->name.size / 2;
+- for (byte = 0; byte < iplength; byte++) {
+- if (((node1->name.data[byte] ^
+- node2->name.data[byte]) // XOR of addresses
+- & node1->name.data[byte +
+- iplength] // AND mask from nc1
+- & node2->name.data[byte +
+- iplength]) // AND mask from nc2
+- != 0) {
+- // CIDRS do not intersect
+- return GNUTLS_E_SUCCESS;
+- }
+ }
+- to_copy = node2;
+ break;
+ default:
+ // for other types, we don't know how to do the intersection, assume empty
+@@ -970,20 +1213,6 @@ static int name_constraints_intersect_nodes(
+ intersection = *_intersection;
+
+ assert(intersection->name.data != NULL);
+-
+- if (intersection->type == GNUTLS_SAN_IPADDRESS) {
+- // make sure both IP addresses are correctly masked
+- _gnutls_mask_ip(intersection->name.data,
+- intersection->name.data + iplength,
+- iplength);
+- _gnutls_mask_ip(node1->name.data,
+- node1->name.data + iplength, iplength);
+- // update intersection, if necessary (we already know one is subset of other)
+- for (byte = 0; byte < 2 * iplength; byte++) {
+- intersection->name.data[byte] |=
+- node1->name.data[byte];
+- }
+- }
+ }
+
+ return GNUTLS_E_SUCCESS;
+--
+GitLab
+
diff --git a/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-7.patch b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-7.patch
new file mode 100644
index 00000000000..8dfda528619
--- /dev/null
+++ b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-7.patch
@@ -0,0 +1,124 @@
+From bc62fbb946085527b4b1c02f337dd10c68c54690 Mon Sep 17 00:00:00 2001
+From: Alexander Sosedkin <asosedkin@redhat.com>
+Date: Wed, 4 Feb 2026 09:09:46 +0100
+Subject: [PATCH] x509/name_constraints: add sorted_view in preparation...
+
+... for actually using it later for performance gains.
+
+Signed-off-by: Alexander Sosedkin <asosedkin@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/bc62fbb946085527b4b1c02f337dd10c68c54690]
+CVE: CVE-2025-14831
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/x509/name_constraints.c | 62 ++++++++++++++++++++++++++++++-------
+ 1 file changed, 51 insertions(+), 11 deletions(-)
+
+diff --git a/lib/x509/name_constraints.c b/lib/x509/name_constraints.c
+index b5d732d0c5..41f30d13b9 100644
+--- a/lib/x509/name_constraints.c
++++ b/lib/x509/name_constraints.c
+@@ -54,6 +54,9 @@ struct name_constraints_node_list_st {
+ struct name_constraints_node_st **data;
+ size_t size;
+ size_t capacity;
++ /* sorted-on-demand view, valid only when dirty == false */
++ bool dirty;
++ struct name_constraints_node_st **sorted_view;
+ };
+
+ struct gnutls_name_constraints_st {
+@@ -342,6 +345,37 @@ static int compare_name_constraint_nodes_qsort(const void *a, const void *b)
+ }
+ }
+
++/* Bring the sorted view up to date with the list data; clear the dirty flag. */
++static int ensure_sorted(struct name_constraints_node_list_st *list)
++{
++ struct name_constraints_node_st **new_data;
++
++ if (!list->dirty)
++ return GNUTLS_E_SUCCESS;
++ if (!list->size) {
++ list->dirty = false;
++ return GNUTLS_E_SUCCESS;
++ }
++
++ /* reallocate sorted view to match current size */
++ new_data =
++ _gnutls_reallocarray(list->sorted_view, list->size,
++ sizeof(struct name_constraints_node_st *));
++ if (!new_data)
++ return gnutls_assert_val(GNUTLS_E_MEMORY_ERROR);
++ list->sorted_view = new_data;
++
++ /* copy pointers and sort in-place */
++ memcpy(list->sorted_view, list->data,
++ list->size * sizeof(struct name_constraints_node_st *));
++ qsort(list->sorted_view, list->size,
++ sizeof(struct name_constraints_node_st *),
++ compare_name_constraint_nodes_qsort);
++
++ list->dirty = false;
++ return GNUTLS_E_SUCCESS;
++}
++
+ static int
+ name_constraints_node_list_add(struct name_constraints_node_list_st *list,
+ struct name_constraints_node_st *node)
+@@ -361,10 +395,23 @@ name_constraints_node_list_add(struct name_constraints_node_list_st *list,
+ list->capacity = new_capacity;
+ list->data = new_data;
+ }
++ list->dirty = true;
+ list->data[list->size++] = node;
+ return 0;
+ }
+
++static void
++name_constraints_node_list_clear(struct name_constraints_node_list_st *list)
++{
++ gnutls_free(list->data);
++ gnutls_free(list->sorted_view);
++ list->data = NULL;
++ list->sorted_view = NULL;
++ list->capacity = 0;
++ list->size = 0;
++ list->dirty = false;
++}
++
+ static int
+ name_constraints_node_add_new(gnutls_x509_name_constraints_t nc,
+ struct name_constraints_node_list_st *list,
+@@ -711,6 +758,7 @@ static int name_constraints_node_list_intersect(
+ permitted->data[i] =
+ permitted->data[permitted->size - 1];
+ permitted->size--;
++ permitted->dirty = true;
+ continue;
+ }
+ i++;
+@@ -908,17 +956,9 @@ void _gnutls_x509_name_constraints_clear(gnutls_x509_name_constraints_t nc)
+ struct name_constraints_node_st *node = nc->nodes.data[i];
+ name_constraints_node_free(node);
+ }
+- gnutls_free(nc->nodes.data);
+- nc->nodes.capacity = 0;
+- nc->nodes.size = 0;
+-
+- gnutls_free(nc->permitted.data);
+- nc->permitted.capacity = 0;
+- nc->permitted.size = 0;
+-
+- gnutls_free(nc->excluded.data);
+- nc->excluded.capacity = 0;
+- nc->excluded.size = 0;
++ name_constraints_node_list_clear(&nc->nodes);
++ name_constraints_node_list_clear(&nc->permitted);
++ name_constraints_node_list_clear(&nc->excluded);
+ }
+
+ /**
+--
+GitLab
+
diff --git a/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-8.patch b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-8.patch
new file mode 100644
index 00000000000..28d86ec6273
--- /dev/null
+++ b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-8.patch
@@ -0,0 +1,155 @@
+From 80db5e90fa18d3e34bb91dd027bdf76d31e93dcd Mon Sep 17 00:00:00 2001
+From: Alexander Sosedkin <asosedkin@redhat.com>
+Date: Wed, 4 Feb 2026 13:30:08 +0100
+Subject: [PATCH] x509/name_constraints: implement
+ name_constraints_node_list_union
+
+Signed-off-by: Alexander Sosedkin <asosedkin@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/80db5e90fa18d3e34bb91dd027bdf76d31e93dcd]
+CVE: CVE-2025-14831
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/x509/name_constraints.c | 98 ++++++++++++++++++++++++++++++++-----
+ 1 file changed, 86 insertions(+), 12 deletions(-)
+
+diff --git a/lib/x509/name_constraints.c b/lib/x509/name_constraints.c
+index 41f30d13b9..de20dd8ef4 100644
+--- a/lib/x509/name_constraints.c
++++ b/lib/x509/name_constraints.c
+@@ -41,6 +41,7 @@
+ #include "intprops.h"
+ #include "minmax.h"
+
++#include <assert.h>
+ #include <string.h>
+
+ #define MAX_NC_CHECKS (1 << 20)
+@@ -870,22 +871,95 @@ cleanup:
+ return ret;
+ }
+
+-static int name_constraints_node_list_concat(
+- gnutls_x509_name_constraints_t nc,
+- struct name_constraints_node_list_st *nodes,
+- const struct name_constraints_node_list_st *nodes2)
++static int
++name_constraints_node_list_union(gnutls_x509_name_constraints_t nc,
++ struct name_constraints_node_list_st *nodes,
++ struct name_constraints_node_list_st *nodes2)
+ {
+ int ret;
++ size_t i = 0, j = 0;
++ struct name_constraints_node_st *nc1;
++ const struct name_constraints_node_st *nc2;
++ enum name_constraint_relation rel;
++ struct name_constraints_node_list_st result = { 0 };
++
++ if (nodes2->size == 0) /* nothing to do */
++ return GNUTLS_E_SUCCESS;
++
++ ret = ensure_sorted(nodes);
++ if (ret < 0) {
++ gnutls_assert();
++ goto cleanup;
++ }
++ ret = ensure_sorted(nodes2);
++ if (ret < 0) {
++ gnutls_assert();
++ goto cleanup;
++ }
++
++ /* traverse both lists in a single pass and merge them w/o duplicates */
++ while (i < nodes->size || j < nodes2->size) {
++ nc1 = (i < nodes->size) ? nodes->sorted_view[i] : NULL;
++ nc2 = (j < nodes2->size) ? nodes2->sorted_view[j] : NULL;
+
+- for (size_t i = 0; i < nodes2->size; i++) {
+- ret = name_constraints_node_add_copy(nc, nodes,
+- nodes2->data[i]);
++ rel = compare_name_constraint_nodes(nc1, nc2);
++ switch (rel) {
++ case NC_SORTS_BEFORE:
++ assert(nc1 != NULL); /* comparator-guaranteed */
++ ret = name_constraints_node_list_add(&result, nc1);
++ i++;
++ break;
++ case NC_SORTS_AFTER:
++ assert(nc2 != NULL); /* comparator-guaranteed */
++ ret = name_constraints_node_add_copy(nc, &result, nc2);
++ j++;
++ break;
++ case NC_INCLUDES: /* nc1 is broader, shallow-copy it */
++ assert(nc1 != NULL && nc2 != NULL); /* comparator */
++ ret = name_constraints_node_list_add(&result, nc1);
++ i++;
++ j++;
++ break;
++ case NC_INCLUDED_BY: /* nc2 is broader, deep-copy it */
++ assert(nc1 != NULL && nc2 != NULL); /* comparator */
++ ret = name_constraints_node_add_copy(nc, &result, nc2);
++ i++;
++ j++;
++ break;
++ case NC_EQUAL:
++ assert(nc1 != NULL && nc2 != NULL); /* loop condition */
++ ret = name_constraints_node_list_add(&result, nc1);
++ i++;
++ j++;
++ break;
++ }
+ if (ret < 0) {
+- return gnutls_assert_val(ret);
++ gnutls_assert();
++ goto cleanup;
+ }
+ }
+
+- return 0;
++ gnutls_free(nodes->data);
++ gnutls_free(nodes->sorted_view);
++ nodes->data = result.data;
++ nodes->sorted_view = NULL;
++ nodes->size = result.size;
++ nodes->capacity = result.capacity;
++ nodes->dirty = true;
++ /* since we know it's sorted, populate sorted_view almost for free */
++ nodes->sorted_view = gnutls_calloc(
++ nodes->size, sizeof(struct name_constraints_node_st *));
++ if (!nodes->sorted_view)
++ return GNUTLS_E_SUCCESS; /* we tried, no harm done */
++ memcpy(nodes->sorted_view, nodes->data,
++ nodes->size * sizeof(struct name_constraints_node_st *));
++ nodes->dirty = false;
++
++ result.data = NULL;
++ return GNUTLS_E_SUCCESS;
++cleanup:
++ name_constraints_node_list_clear(&result);
++ return gnutls_assert_val(ret);
+ }
+
+ /**
+@@ -1026,7 +1100,7 @@ static int name_constraints_add(gnutls_x509_name_constraints_t nc,
+ * @nc2: The name constraints to be merged with
+ *
+ * This function will merge the provided name constraints structures
+- * as per RFC5280 p6.1.4. That is, the excluded constraints will be appended,
++ * as per RFC5280 p6.1.4. That is, the excluded constraints will be unioned,
+ * and permitted will be intersected. The intersection assumes that @nc
+ * is the root CA constraints.
+ *
+@@ -1048,8 +1122,8 @@ int _gnutls_x509_name_constraints_merge(gnutls_x509_name_constraints_t nc,
+ return ret;
+ }
+
+- ret = name_constraints_node_list_concat(nc, &nc->excluded,
+- &nc2->excluded);
++ ret = name_constraints_node_list_union(nc, &nc->excluded,
++ &nc2->excluded);
+ if (ret < 0) {
+ gnutls_assert();
+ return ret;
+--
+GitLab
+
diff --git a/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-9.patch b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-9.patch
new file mode 100644
index 00000000000..ed6be93c54b
--- /dev/null
+++ b/meta/recipes-support/gnutls/gnutls/CVE-2025-14831-9.patch
@@ -0,0 +1,110 @@
+From d0ac999620c8c0aeb6939e1e92d884ca8e40b759 Mon Sep 17 00:00:00 2001
+From: Alexander Sosedkin <asosedkin@redhat.com>
+Date: Wed, 4 Feb 2026 18:31:37 +0100
+Subject: [PATCH] x509/name_constraints: make types_with_empty_intersection a
+ bitmask
+
+Signed-off-by: Alexander Sosedkin <asosedkin@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/d0ac999620c8c0aeb6939e1e92d884ca8e40b759]
+CVE: CVE-2025-14831
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/x509/name_constraints.c | 39 +++++++++++++++++++++++++++----------
+ 1 file changed, 29 insertions(+), 10 deletions(-)
+
+diff --git a/lib/x509/name_constraints.c b/lib/x509/name_constraints.c
+index de20dd8ef4..1d78d1bc50 100644
+--- a/lib/x509/name_constraints.c
++++ b/lib/x509/name_constraints.c
+@@ -275,6 +275,7 @@ static enum name_constraint_relation compare_ip_ncs(const gnutls_datum_t *n1,
+
+ static inline bool is_supported_type(unsigned type)
+ {
++ /* all of these should be under GNUTLS_SAN_MAX (intersect bitmasks) */
+ return type == GNUTLS_SAN_DNSNAME || type == GNUTLS_SAN_RFC822NAME ||
+ type == GNUTLS_SAN_IPADDRESS;
+ }
+@@ -683,6 +684,21 @@ name_constraints_node_new(gnutls_x509_name_constraints_t nc, unsigned type,
+ return tmp;
+ }
+
++static int
++name_constraints_node_list_union(gnutls_x509_name_constraints_t nc,
++ struct name_constraints_node_list_st *nodes,
++ struct name_constraints_node_list_st *nodes2);
++
++#define type_bitmask_t uint8_t /* increase if GNUTLS_SAN_MAX grows */
++#define type_bitmask_set(mask, t) ((mask) |= (1u << (t)))
++#define type_bitmask_clr(mask, t) ((mask) &= ~(1u << (t)))
++#define type_bitmask_in(mask, t) ((mask) & (1u << (t)))
++/* C99-compatible compile-time assertions; gnutls_int.h undefines verify */
++typedef char assert_san_max[(GNUTLS_SAN_MAX < 8) ? 1 : -1];
++typedef char assert_dnsname[(GNUTLS_SAN_DNSNAME <= GNUTLS_SAN_MAX) ? 1 : -1];
++typedef char assert_rfc822[(GNUTLS_SAN_RFC822NAME <= GNUTLS_SAN_MAX) ? 1 : -1];
++typedef char assert_ipaddr[(GNUTLS_SAN_IPADDRESS <= GNUTLS_SAN_MAX) ? 1 : -1];
++
+ /*-
+ * @brief name_constraints_node_list_intersect:
+ * @nc: %gnutls_x509_name_constraints_t
+@@ -710,12 +726,9 @@ static int name_constraints_node_list_intersect(
+ .capacity = 0 };
+ static const unsigned char universal_ip[32] = { 0 };
+
+- /* temporary array to see, if we need to add universal excluded constraints
+- * (see phase 3 for details)
+- * indexed directly by (gnutls_x509_subject_alt_name_t enum - 1) */
+- unsigned char types_with_empty_intersection[GNUTLS_SAN_MAX];
+- memset(types_with_empty_intersection, 0,
+- sizeof(types_with_empty_intersection));
++ /* bitmask to see if we need to add universal excluded constraints
++ * (see phase 3 for details) */
++ type_bitmask_t types_with_empty_intersection = 0;
+
+ if (permitted->size == 0 || permitted2->size == 0)
+ return 0;
+@@ -741,7 +754,8 @@ static int name_constraints_node_list_intersect(
+ // note the possibility of empty intersection for this type
+ // if we add something to the intersection in phase 2,
+ // we will reset this flag back to 0 then
+- types_with_empty_intersection[t->type - 1] = 1;
++ type_bitmask_set(types_with_empty_intersection,
++ t->type);
+ found = t2;
+ break;
+ }
+@@ -798,8 +812,8 @@ static int name_constraints_node_list_intersect(
+ GNUTLS_E_INTERNAL_ERROR);
+ }
+ // we will not add universal excluded constraint for this type
+- types_with_empty_intersection[tmp->type - 1] =
+- 0;
++ type_bitmask_clr(types_with_empty_intersection,
++ tmp->type);
+ // add intersection node to PERMITTED
+ ret = name_constraints_node_list_add(permitted,
+ tmp);
+@@ -827,7 +841,7 @@ static int name_constraints_node_list_intersect(
+ * excluded constraint with universal wildcard
+ * (since the intersection of permitted is now empty). */
+ for (type = 1; type <= GNUTLS_SAN_MAX; type++) {
+- if (types_with_empty_intersection[type - 1] == 0)
++ if (!type_bitmask_in(types_with_empty_intersection, type))
+ continue;
+ _gnutls_hard_log(
+ "Adding universal excluded name constraint for type %d.\n",
+@@ -871,6 +885,11 @@ cleanup:
+ return ret;
+ }
+
++#undef type_bitmask_t
++#undef type_bitmask_set
++#undef type_bitmask_clr
++#undef type_bitmask_in
++
+ static int
+ name_constraints_node_list_union(gnutls_x509_name_constraints_t nc,
+ struct name_constraints_node_list_st *nodes,
+--
+GitLab
+
diff --git a/meta/recipes-support/gnutls/gnutls_3.8.10.bb b/meta/recipes-support/gnutls/gnutls_3.8.10.bb
index 5430ee20856..ce9fdce3f46 100644
--- a/meta/recipes-support/gnutls/gnutls_3.8.10.bb
+++ b/meta/recipes-support/gnutls/gnutls_3.8.10.bb
@@ -24,6 +24,16 @@ SRC_URI = "https://www.gnupg.org/ftp/gcrypt/gnutls/v${SHRT_VER}/gnutls-${PV}.tar
file://run-ptest \
file://Add-ptest-support.patch \
file://CVE-2025-9820.patch \
+ file://CVE-2025-14831-1.patch \
+ file://CVE-2025-14831-2.patch \
+ file://CVE-2025-14831-3.patch \
+ file://CVE-2025-14831-4.patch \
+ file://CVE-2025-14831-5.patch \
+ file://CVE-2025-14831-6.patch \
+ file://CVE-2025-14831-7.patch \
+ file://CVE-2025-14831-8.patch \
+ file://CVE-2025-14831-9.patch \
+ file://CVE-2025-14831-10.patch \
"
SRC_URI[sha256sum] = "db7fab7cce791e7727ebbef2334301c821d79a550ec55c9ef096b610b03eb6b7"
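For reference, the type-bitmask change visible in the hunks above (replacing the per-type boolean array `types_with_empty_intersection[type - 1]` with bit operations) can be sketched in isolation. This is a simplified illustration only: the real `type_bitmask_*` macros are defined earlier in the backported gnutls patch and are not shown in this excerpt, so the definitions below are assumptions that merely match the usage seen in the hunks.

```python
# Illustrative stand-ins for the gnutls type_bitmask_* macros.
# SAN types are 1-based (1..GNUTLS_SAN_MAX), so bit (type - 1) is used,
# mirroring the old array indexing types_with_empty_intersection[type - 1].

def type_bitmask_set(mask: int, san_type: int) -> int:
    """Mark this SAN type as having an (so far) empty intersection."""
    return mask | (1 << (san_type - 1))

def type_bitmask_clr(mask: int, san_type: int) -> int:
    """Clear the flag once something is added to the intersection."""
    return mask & ~(1 << (san_type - 1))

def type_bitmask_in(mask: int, san_type: int) -> bool:
    """Test whether a universal excluded constraint is still needed."""
    return bool(mask & (1 << (san_type - 1)))

mask = 0
mask = type_bitmask_set(mask, 3)
assert type_bitmask_in(mask, 3)
assert not type_bitmask_in(mask, 1)
mask = type_bitmask_clr(mask, 3)
assert not type_bitmask_in(mask, 3)
```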
* [OE-core][whinlatter v2 16/16] python3-urllib3: patch CVE-2025-66471
2026-03-06 7:22 [OE-core][whinlatter v2 00/16] Patch review Yoann Congal
` (14 preceding siblings ...)
2026-03-06 7:22 ` [OE-core][whinlatter v2 15/16] lz4: Remove a reference to the rejected CVE-2025-62813 Yoann Congal
@ 2026-03-06 7:22 ` Yoann Congal
15 siblings, 0 replies; 17+ messages in thread
From: Yoann Congal @ 2026-03-06 7:22 UTC (permalink / raw)
To: openembedded-core
From: Peter Marko <peter.marko@siemens.com>
Picked the patch per [1].
[1] https://nvd.nist.gov/vuln/detail/CVE-2025-66471
Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Yoann Congal <yoann.congal@smile.fr>
---
.../python3-urllib3/CVE-2025-66471.patch | 926 ++++++++++++++++++
.../python/python3-urllib3_2.5.0.bb | 1 +
2 files changed, 927 insertions(+)
create mode 100644 meta/recipes-devtools/python/python3-urllib3/CVE-2025-66471.patch
diff --git a/meta/recipes-devtools/python/python3-urllib3/CVE-2025-66471.patch b/meta/recipes-devtools/python/python3-urllib3/CVE-2025-66471.patch
new file mode 100644
index 00000000000..15f601843b8
--- /dev/null
+++ b/meta/recipes-devtools/python/python3-urllib3/CVE-2025-66471.patch
@@ -0,0 +1,926 @@
+From c19571de34c47de3a766541b041637ba5f716ed7 Mon Sep 17 00:00:00 2001
+From: Illia Volochii <illia.volochii@gmail.com>
+Date: Fri, 5 Dec 2025 16:40:41 +0200
+Subject: [PATCH] Merge commit from fork
+
+* Prevent decompression bomb for zstd in Python 3.14
+
+* Add experimental `decompress_iter` for Brotli
+
+* Update changes for Brotli
+
+* Add `GzipDecoder.decompress_iter`
+
+* Test https://github.com/python-hyper/brotlicffi/pull/207
+
+* Pin Brotli
+
+* Add `decompress_iter` to all decoders and make tests pass
+
+* Pin brotlicffi to an official release
+
+* Revert changes to response.py
+
+* Add `max_length` parameter to all `decompress` methods
+
+* Fix the `test_brotlipy` session
+
+* Unset `_data` on gzip error
+
+* Add a test for memory usage
+
+* Test more methods
+
+* Fix the test for `stream`
+
+* Cover more lines with tests
+
+* Add more coverage
+
+* Make `read1` a bit more efficient
+
+* Fix PyPy tests for Brotli
+
+* Revert an unnecessarily moved check
+
+* Add some comments
+
+* Leave just one `self._obj.decompress` call in `GzipDecoder`
+
+* Refactor test params
+
+* Test reads with all data already in the decompressor
+
+* Prevent needless copying of data decoded with `max_length`
+
+* Rename the changed test
+
+* Note that responses of unknown length should be streamed too
+
+* Add a changelog entry
+
+* Avoid returning a memory view from `BytesQueueBuffer`
+
+* Add one more note to the changelog entry
+
+CVE: CVE-2025-66471
+Upstream-Status: Backport [https://github.com/urllib3/urllib3/commit/c19571de34c47de3a766541b041637ba5f716ed7]
+Signed-off-by: Peter Marko <peter.marko@siemens.com>
+---
+ CHANGES.rst | 15 +++
+ docs/advanced-usage.rst | 3 +-
+ docs/user-guide.rst | 4 +-
+ pyproject.toml | 5 +-
+ src/urllib3/response.py | 278 ++++++++++++++++++++++++++++++++++------
+ test/test_response.py | 269 +++++++++++++++++++++++++++++++++++++-
+ 6 files changed, 525 insertions(+), 49 deletions(-)
+
+diff --git a/CHANGES.rst b/CHANGES.rst
+index add194eb..345476f3 100644
+--- a/CHANGES.rst
++++ b/CHANGES.rst
+@@ -10,6 +10,21 @@ Bugfixes
+
+ TODO: add other entries.
+
++- Fixed a security issue where streaming API could improperly handle highly
++ compressed HTTP content ("decompression bombs") leading to excessive resource
++ consumption even when a small amount of data was requested. Reading small
++ chunks of compressed data is safer and much more efficient now.
++
++.. caution::
++ - If urllib3 is not installed with the optional `urllib3[brotli]` extra, but
++ your environment contains a Brotli/brotlicffi/brotlipy package anyway, make
++ sure to upgrade it to at least Brotli 1.2.0 or brotlicffi 1.2.0.0 to
++ benefit from the security fixes and avoid warnings. Prefer using
++ `urllib3[brotli]` to install a compatible Brotli package automatically.
++
++ - If you use custom decompressors, please make sure to update them to
++ respect the changed API of ``urllib3.response.ContentDecoder``.
++
+
+ 2.5.0 (2025-06-18)
+ ==================
+diff --git a/docs/advanced-usage.rst b/docs/advanced-usage.rst
+index ff773662..3ab4fcf3 100644
+--- a/docs/advanced-usage.rst
++++ b/docs/advanced-usage.rst
+@@ -66,7 +66,8 @@ When using ``preload_content=True`` (the default setting) the
+ response body will be read immediately into memory and the HTTP connection
+ will be released back into the pool without manual intervention.
+
+-However, when dealing with large responses it's often better to stream the response
++However, when dealing with responses of large or unknown length,
++it's often better to stream the response
+ content using ``preload_content=False``. Setting ``preload_content`` to ``False`` means
+ that urllib3 will only read from the socket when data is requested.
+
+diff --git a/docs/user-guide.rst b/docs/user-guide.rst
+index 5c78c8af..1d9d0bbd 100644
+--- a/docs/user-guide.rst
++++ b/docs/user-guide.rst
+@@ -145,8 +145,8 @@ to a byte string representing the response content:
+ print(resp.data)
+ # b"\xaa\xa5H?\x95\xe9\x9b\x11"
+
+-.. note:: For larger responses, it's sometimes better to :ref:`stream <stream>`
+- the response.
++.. note:: For responses of large or unknown length, it's sometimes better to
++ :ref:`stream <stream>` the response.
+
+ Using io Wrappers with Response Content
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+diff --git a/pyproject.toml b/pyproject.toml
+index c9aa6d13..45538a6e 100644
+--- a/pyproject.toml
++++ b/pyproject.toml
+@@ -41,8 +41,8 @@ dynamic = ["version"]
+
+ [project.optional-dependencies]
+ brotli = [
+- "brotli>=1.0.9; platform_python_implementation == 'CPython'",
+- "brotlicffi>=0.8.0; platform_python_implementation != 'CPython'"
++ "brotli>=1.2.0; platform_python_implementation == 'CPython'",
++ "brotlicffi>=1.2.0.0; platform_python_implementation != 'CPython'"
+ ]
+ # Once we drop support for Python 3.13 this extra can be removed.
+ # We'll need a deprecation period for the 'zstandard' module support
+@@ -160,6 +160,7 @@ filterwarnings = [
+ '''default:ssl\.PROTOCOL_TLSv1_1 is deprecated:DeprecationWarning''',
+ '''default:ssl\.PROTOCOL_TLSv1_2 is deprecated:DeprecationWarning''',
+ '''default:ssl NPN is deprecated, use ALPN instead:DeprecationWarning''',
++ '''default:Brotli >= 1.2.0 is required to prevent decompression bombs\.:urllib3.exceptions.DependencyWarning''',
+ # https://github.com/SeleniumHQ/selenium/issues/13328
+ '''default:unclosed file <_io\.BufferedWriter name='/dev/null'>:ResourceWarning''',
+ # https://github.com/SeleniumHQ/selenium/issues/14686
+diff --git a/src/urllib3/response.py b/src/urllib3/response.py
+index 3df98184..4ba42136 100644
+--- a/src/urllib3/response.py
++++ b/src/urllib3/response.py
+@@ -33,6 +33,7 @@ from .connection import BaseSSLError, HTTPConnection, HTTPException
+ from .exceptions import (
+ BodyNotHttplibCompatible,
+ DecodeError,
++ DependencyWarning,
+ HTTPError,
+ IncompleteRead,
+ InvalidChunkLength,
+@@ -52,7 +53,11 @@ log = logging.getLogger(__name__)
+
+
+ class ContentDecoder:
+- def decompress(self, data: bytes) -> bytes:
++ def decompress(self, data: bytes, max_length: int = -1) -> bytes:
++ raise NotImplementedError()
++
++ @property
++ def has_unconsumed_tail(self) -> bool:
+ raise NotImplementedError()
+
+ def flush(self) -> bytes:
+@@ -62,30 +67,57 @@ class ContentDecoder:
+ class DeflateDecoder(ContentDecoder):
+ def __init__(self) -> None:
+ self._first_try = True
+- self._data = b""
++ self._first_try_data = b""
++ self._unfed_data = b""
+ self._obj = zlib.decompressobj()
+
+- def decompress(self, data: bytes) -> bytes:
+- if not data:
++ def decompress(self, data: bytes, max_length: int = -1) -> bytes:
++ data = self._unfed_data + data
++ self._unfed_data = b""
++ if not data and not self._obj.unconsumed_tail:
+ return data
++ original_max_length = max_length
++ if original_max_length < 0:
++ max_length = 0
++ elif original_max_length == 0:
++ # We should not pass 0 to the zlib decompressor because 0 is
++ # the default value that will make zlib decompress without a
++ # length limit.
++ # Data should be stored for subsequent calls.
++ self._unfed_data = data
++ return b""
+
++ # Subsequent calls always reuse `self._obj`. zlib requires
++ # passing the unconsumed tail if decompression is to continue.
+ if not self._first_try:
+- return self._obj.decompress(data)
++ return self._obj.decompress(
++ self._obj.unconsumed_tail + data, max_length=max_length
++ )
+
+- self._data += data
++ # First call tries with RFC 1950 ZLIB format.
++ self._first_try_data += data
+ try:
+- decompressed = self._obj.decompress(data)
++ decompressed = self._obj.decompress(data, max_length=max_length)
+ if decompressed:
+ self._first_try = False
+- self._data = None # type: ignore[assignment]
++ self._first_try_data = b""
+ return decompressed
++ # On failure, it falls back to RFC 1951 DEFLATE format.
+ except zlib.error:
+ self._first_try = False
+ self._obj = zlib.decompressobj(-zlib.MAX_WBITS)
+ try:
+- return self.decompress(self._data)
++ return self.decompress(
++ self._first_try_data, max_length=original_max_length
++ )
+ finally:
+- self._data = None # type: ignore[assignment]
++ self._first_try_data = b""
++
++ @property
++ def has_unconsumed_tail(self) -> bool:
++ return bool(self._unfed_data) or (
++ bool(self._obj.unconsumed_tail) and not self._first_try
++ )
+
+ def flush(self) -> bytes:
+ return self._obj.flush()
+@@ -101,27 +133,61 @@ class GzipDecoder(ContentDecoder):
+ def __init__(self) -> None:
+ self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS)
+ self._state = GzipDecoderState.FIRST_MEMBER
++ self._unconsumed_tail = b""
+
+- def decompress(self, data: bytes) -> bytes:
++ def decompress(self, data: bytes, max_length: int = -1) -> bytes:
+ ret = bytearray()
+- if self._state == GzipDecoderState.SWALLOW_DATA or not data:
++ if self._state == GzipDecoderState.SWALLOW_DATA:
+ return bytes(ret)
++
++ if max_length == 0:
++ # We should not pass 0 to the zlib decompressor because 0 is
++ # the default value that will make zlib decompress without a
++ # length limit.
++ # Data should be stored for subsequent calls.
++ self._unconsumed_tail += data
++ return b""
++
++ # zlib requires passing the unconsumed tail to the subsequent
++ # call if decompression is to continue.
++ data = self._unconsumed_tail + data
++ if not data and self._obj.eof:
++ return bytes(ret)
++
+ while True:
+ try:
+- ret += self._obj.decompress(data)
++ ret += self._obj.decompress(
++ data, max_length=max(max_length - len(ret), 0)
++ )
+ except zlib.error:
+ previous_state = self._state
+ # Ignore data after the first error
+ self._state = GzipDecoderState.SWALLOW_DATA
++ self._unconsumed_tail = b""
+ if previous_state == GzipDecoderState.OTHER_MEMBERS:
+ # Allow trailing garbage acceptable in other gzip clients
+ return bytes(ret)
+ raise
+- data = self._obj.unused_data
++
++ self._unconsumed_tail = data = (
++ self._obj.unconsumed_tail or self._obj.unused_data
++ )
++ if max_length > 0 and len(ret) >= max_length:
++ break
++
+ if not data:
+ return bytes(ret)
+- self._state = GzipDecoderState.OTHER_MEMBERS
+- self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS)
++ # When the end of a gzip member is reached, a new decompressor
++ # must be created for unused (possibly future) data.
++ if self._obj.eof:
++ self._state = GzipDecoderState.OTHER_MEMBERS
++ self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS)
++
++ return bytes(ret)
++
++ @property
++ def has_unconsumed_tail(self) -> bool:
++ return bool(self._unconsumed_tail)
+
+ def flush(self) -> bytes:
+ return self._obj.flush()
+@@ -136,9 +202,35 @@ if brotli is not None:
+ def __init__(self) -> None:
+ self._obj = brotli.Decompressor()
+ if hasattr(self._obj, "decompress"):
+- setattr(self, "decompress", self._obj.decompress)
++ setattr(self, "_decompress", self._obj.decompress)
+ else:
+- setattr(self, "decompress", self._obj.process)
++ setattr(self, "_decompress", self._obj.process)
++
++ # Requires Brotli >= 1.2.0 for `output_buffer_limit`.
++ def _decompress(self, data: bytes, output_buffer_limit: int = -1) -> bytes:
++ raise NotImplementedError()
++
++ def decompress(self, data: bytes, max_length: int = -1) -> bytes:
++ try:
++ if max_length > 0:
++ return self._decompress(data, output_buffer_limit=max_length)
++ else:
++ return self._decompress(data)
++ except TypeError:
++ # Fallback for Brotli/brotlicffi/brotlipy versions without
++ # the `output_buffer_limit` parameter.
++ warnings.warn(
++ "Brotli >= 1.2.0 is required to prevent decompression bombs.",
++ DependencyWarning,
++ )
++ return self._decompress(data)
++
++ @property
++ def has_unconsumed_tail(self) -> bool:
++ try:
++ return not self._obj.can_accept_more_data()
++ except AttributeError:
++ return False
+
+ def flush(self) -> bytes:
+ if hasattr(self._obj, "flush"):
+@@ -156,16 +248,46 @@ try:
+ def __init__(self) -> None:
+ self._obj = zstd.ZstdDecompressor()
+
+- def decompress(self, data: bytes) -> bytes:
+- if not data:
++ def decompress(self, data: bytes, max_length: int = -1) -> bytes:
++ if not data and not self.has_unconsumed_tail:
+ return b""
+- data_parts = [self._obj.decompress(data)]
+- while self._obj.eof and self._obj.unused_data:
+- unused_data = self._obj.unused_data
++ if self._obj.eof:
++ data = self._obj.unused_data + data
+ self._obj = zstd.ZstdDecompressor()
+- data_parts.append(self._obj.decompress(unused_data))
++ part = self._obj.decompress(data, max_length=max_length)
++ length = len(part)
++ data_parts = [part]
++ # Every loop iteration is supposed to read data from a separate frame.
++ # The loop breaks when:
++ # - enough data is read;
++ # - no more unused data is available;
++ # - end of the last read frame has not been reached (i.e.,
++ # more data has to be fed).
++ while (
++ self._obj.eof
++ and self._obj.unused_data
++ and (max_length < 0 or length < max_length)
++ ):
++ unused_data = self._obj.unused_data
++ if not self._obj.needs_input:
++ self._obj = zstd.ZstdDecompressor()
++ part = self._obj.decompress(
++ unused_data,
++ max_length=(max_length - length) if max_length > 0 else -1,
++ )
++ if part_length := len(part):
++ data_parts.append(part)
++ length += part_length
++ elif self._obj.needs_input:
++ break
+ return b"".join(data_parts)
+
++ @property
++ def has_unconsumed_tail(self) -> bool:
++ return not (self._obj.needs_input or self._obj.eof) or bool(
++ self._obj.unused_data
++ )
++
+ def flush(self) -> bytes:
+ if not self._obj.eof:
+ raise DecodeError("Zstandard data is incomplete")
+@@ -236,10 +358,35 @@ class MultiDecoder(ContentDecoder):
+ def flush(self) -> bytes:
+ return self._decoders[0].flush()
+
+- def decompress(self, data: bytes) -> bytes:
+- for d in reversed(self._decoders):
+- data = d.decompress(data)
+- return data
++ def decompress(self, data: bytes, max_length: int = -1) -> bytes:
++ if max_length <= 0:
++ for d in reversed(self._decoders):
++ data = d.decompress(data)
++ return data
++
++ ret = bytearray()
++ # Every while loop iteration goes through all decoders once.
++ # It exits when enough data is read or no more data can be read.
++ # It is possible that the while loop iteration does not produce
++ # any data because we retrieve up to `max_length` from every
++ # decoder, and the amount of bytes may be insufficient for the
++ # next decoder to produce enough/any output.
++ while True:
++ any_data = False
++ for d in reversed(self._decoders):
++ data = d.decompress(data, max_length=max_length - len(ret))
++ if data:
++ any_data = True
++ # We should not break when no data is returned because
++ # next decoders may produce data even with empty input.
++ ret += data
++ if not any_data or len(ret) >= max_length:
++ return bytes(ret)
++ data = b""
++
++ @property
++ def has_unconsumed_tail(self) -> bool:
++ return any(d.has_unconsumed_tail for d in self._decoders)
+
+
+ def _get_decoder(mode: str) -> ContentDecoder:
+@@ -272,9 +419,6 @@ class BytesQueueBuffer:
+
+ * self.buffer, which contains the full data
+ * the largest chunk that we will copy in get()
+-
+- The worst case scenario is a single chunk, in which case we'll make a full copy of
+- the data inside get().
+ """
+
+ def __init__(self) -> None:
+@@ -296,6 +440,10 @@ class BytesQueueBuffer:
+ elif n < 0:
+ raise ValueError("n should be > 0")
+
++ if len(self.buffer[0]) == n and isinstance(self.buffer[0], bytes):
++ self._size -= n
++ return self.buffer.popleft()
++
+ fetched = 0
+ ret = io.BytesIO()
+ while fetched < n:
+@@ -502,7 +650,11 @@ class BaseHTTPResponse(io.IOBase):
+ self._decoder = _get_decoder(content_encoding)
+
+ def _decode(
+- self, data: bytes, decode_content: bool | None, flush_decoder: bool
++ self,
++ data: bytes,
++ decode_content: bool | None,
++ flush_decoder: bool,
++ max_length: int | None = None,
+ ) -> bytes:
+ """
+ Decode the data passed in and potentially flush the decoder.
+@@ -515,9 +667,12 @@ class BaseHTTPResponse(io.IOBase):
+ )
+ return data
+
++ if max_length is None or flush_decoder:
++ max_length = -1
++
+ try:
+ if self._decoder:
+- data = self._decoder.decompress(data)
++ data = self._decoder.decompress(data, max_length=max_length)
+ self._has_decoded_content = True
+ except self.DECODER_ERROR_CLASSES as e:
+ content_encoding = self.headers.get("content-encoding", "").lower()
+@@ -988,6 +1143,14 @@ class HTTPResponse(BaseHTTPResponse):
+ elif amt is not None:
+ cache_content = False
+
++ if self._decoder and self._decoder.has_unconsumed_tail:
++ decoded_data = self._decode(
++ b"",
++ decode_content,
++ flush_decoder=False,
++ max_length=amt - len(self._decoded_buffer),
++ )
++ self._decoded_buffer.put(decoded_data)
+ if len(self._decoded_buffer) >= amt:
+ return self._decoded_buffer.get(amt)
+
+@@ -995,7 +1158,11 @@ class HTTPResponse(BaseHTTPResponse):
+
+ flush_decoder = amt is None or (amt != 0 and not data)
+
+- if not data and len(self._decoded_buffer) == 0:
++ if (
++ not data
++ and len(self._decoded_buffer) == 0
++ and not (self._decoder and self._decoder.has_unconsumed_tail)
++ ):
+ return data
+
+ if amt is None:
+@@ -1012,7 +1179,12 @@ class HTTPResponse(BaseHTTPResponse):
+ )
+ return data
+
+- decoded_data = self._decode(data, decode_content, flush_decoder)
++ decoded_data = self._decode(
++ data,
++ decode_content,
++ flush_decoder,
++ max_length=amt - len(self._decoded_buffer),
++ )
+ self._decoded_buffer.put(decoded_data)
+
+ while len(self._decoded_buffer) < amt and data:
+@@ -1020,7 +1192,12 @@ class HTTPResponse(BaseHTTPResponse):
+ # For example, the GZ file header takes 10 bytes, we don't want to read
+ # it one byte at a time
+ data = self._raw_read(amt)
+- decoded_data = self._decode(data, decode_content, flush_decoder)
++ decoded_data = self._decode(
++ data,
++ decode_content,
++ flush_decoder,
++ max_length=amt - len(self._decoded_buffer),
++ )
+ self._decoded_buffer.put(decoded_data)
+ data = self._decoded_buffer.get(amt)
+
+@@ -1055,6 +1232,20 @@ class HTTPResponse(BaseHTTPResponse):
+ "Calling read1(decode_content=False) is not supported after "
+ "read1(decode_content=True) was called."
+ )
++ if (
++ self._decoder
++ and self._decoder.has_unconsumed_tail
++ and (amt is None or len(self._decoded_buffer) < amt)
++ ):
++ decoded_data = self._decode(
++ b"",
++ decode_content,
++ flush_decoder=False,
++ max_length=(
++ amt - len(self._decoded_buffer) if amt is not None else None
++ ),
++ )
++ self._decoded_buffer.put(decoded_data)
+ if len(self._decoded_buffer) > 0:
+ if amt is None:
+ return self._decoded_buffer.get_all()
+@@ -1070,7 +1261,9 @@ class HTTPResponse(BaseHTTPResponse):
+ self._init_decoder()
+ while True:
+ flush_decoder = not data
+- decoded_data = self._decode(data, decode_content, flush_decoder)
++ decoded_data = self._decode(
++ data, decode_content, flush_decoder, max_length=amt
++ )
+ self._decoded_buffer.put(decoded_data)
+ if decoded_data or flush_decoder:
+ break
+@@ -1101,7 +1294,11 @@ class HTTPResponse(BaseHTTPResponse):
+ if self.chunked and self.supports_chunked_reads():
+ yield from self.read_chunked(amt, decode_content=decode_content)
+ else:
+- while not is_fp_closed(self._fp) or len(self._decoded_buffer) > 0:
++ while (
++ not is_fp_closed(self._fp)
++ or len(self._decoded_buffer) > 0
++ or (self._decoder and self._decoder.has_unconsumed_tail)
++ ):
+ data = self.read(amt=amt, decode_content=decode_content)
+
+ if data:
+@@ -1264,7 +1461,10 @@ class HTTPResponse(BaseHTTPResponse):
+ break
+ chunk = self._handle_chunk(amt)
+ decoded = self._decode(
+- chunk, decode_content=decode_content, flush_decoder=False
++ chunk,
++ decode_content=decode_content,
++ flush_decoder=False,
++ max_length=amt,
+ )
+ if decoded:
+ yield decoded
+diff --git a/test/test_response.py b/test/test_response.py
+index c97fdff0..9592fdd9 100644
+--- a/test/test_response.py
++++ b/test/test_response.py
+@@ -1,6 +1,7 @@
+ from __future__ import annotations
+
+ import contextlib
++import gzip
+ import http.client as httplib
+ import socket
+ import ssl
+@@ -43,6 +44,26 @@ def zstd_compress(data: bytes) -> bytes:
+ return zstd.compress(data) # type: ignore[no-any-return]
+
+
++def deflate2_compress(data: bytes) -> bytes:
++ compressor = zlib.compressobj(6, zlib.DEFLATED, -zlib.MAX_WBITS)
++ return compressor.compress(data) + compressor.flush()
++
++
++if brotli:
++ try:
++ brotli.Decompressor().process(b"", output_buffer_limit=1024)
++ _brotli_gte_1_2_0_available = True
++ except (AttributeError, TypeError):
++ _brotli_gte_1_2_0_available = False
++else:
++ _brotli_gte_1_2_0_available = False
++try:
++ zstd_compress(b"")
++ _zstd_available = True
++except ModuleNotFoundError:
++ _zstd_available = False
++
++
+ class TestBytesQueueBuffer:
+ def test_single_chunk(self) -> None:
+ buffer = BytesQueueBuffer()
+@@ -118,12 +139,19 @@ class TestBytesQueueBuffer:
+
+ assert len(get_func(buffer)) == 10 * 2**20
+
++ @pytest.mark.parametrize(
++ "get_func",
++ (lambda b: b.get(len(b)), lambda b: b.get_all()),
++ ids=("get", "get_all"),
++ )
+ @pytest.mark.limit_memory("10.01 MB", current_thread_only=True)
+- def test_get_all_memory_usage_single_chunk(self) -> None:
++ def test_memory_usage_single_chunk(
++ self, get_func: typing.Callable[[BytesQueueBuffer], bytes]
++ ) -> None:
+ buffer = BytesQueueBuffer()
+ chunk = bytes(10 * 2**20) # 10 MiB
+ buffer.put(chunk)
+- assert buffer.get_all() is chunk
++ assert get_func(buffer) is chunk
+
+
+ # A known random (i.e, not-too-compressible) payload generated with:
+@@ -426,7 +454,26 @@ class TestResponse:
+ assert r.data == b"foo"
+
+ @onlyZstd()
+- def test_decode_multiframe_zstd(self) -> None:
++ @pytest.mark.parametrize(
++ "read_amt",
++ (
++ # Read all data at once.
++ None,
++ # Read one byte at a time, data of frames will be returned
++ # separately.
++ 1,
++ # Read two bytes at a time, the second read should return
++ # data from both frames.
++ 2,
++ # Read three bytes at a time, the whole frames will be
++ # returned separately in two calls.
++ 3,
++ # Read four bytes at a time, the first read should return
++ # data from the first frame and a part of the second frame.
++ 4,
++ ),
++ )
++ def test_decode_multiframe_zstd(self, read_amt: int | None) -> None:
+ data = (
+ # Zstandard frame
+ zstd_compress(b"foo")
+@@ -441,8 +488,57 @@ class TestResponse:
+ )
+
+ fp = BytesIO(data)
+- r = HTTPResponse(fp, headers={"content-encoding": "zstd"})
+- assert r.data == b"foobar"
++ result = bytearray()
++ r = HTTPResponse(
++ fp, headers={"content-encoding": "zstd"}, preload_content=False
++ )
++ total_length = 6
++ while len(result) < total_length:
++ chunk = r.read(read_amt, decode_content=True)
++ if read_amt is None:
++ assert len(chunk) == total_length
++ else:
++ assert len(chunk) == min(read_amt, total_length - len(result))
++ result += chunk
++ assert bytes(result) == b"foobar"
++
++ @onlyZstd()
++ def test_decode_multiframe_zstd_with_max_length_close_to_compressed_data_size(
++ self,
++ ) -> None:
++ """
++ Test decoding when the first read from the socket returns all
++ the compressed frames, but then it has to be decompressed in a
++ couple of read calls.
++ """
++ data = (
++ # Zstandard frame
++ zstd_compress(b"x" * 1024)
++ # skippable frame (must be ignored)
++ + bytes.fromhex(
++ "50 2A 4D 18" # Magic_Number (little-endian)
++ "07 00 00 00" # Frame_Size (little-endian)
++ "00 00 00 00 00 00 00" # User_Data
++ )
++ # Zstandard frame
++ + zstd_compress(b"y" * 1024)
++ )
++
++ fp = BytesIO(data)
++ r = HTTPResponse(
++ fp, headers={"content-encoding": "zstd"}, preload_content=False
++ )
++ # Read the whole first frame.
++ assert r.read(1024) == b"x" * 1024
++ assert len(r._decoded_buffer) == 0
++ # Read the whole second frame in two reads.
++ assert r.read(512) == b"y" * 512
++ assert len(r._decoded_buffer) == 0
++ assert r.read(512) == b"y" * 512
++ assert len(r._decoded_buffer) == 0
++ # Ensure no more data is left.
++ assert r.read() == b""
++ assert len(r._decoded_buffer) == 0
+
+ @onlyZstd()
+ def test_chunked_decoding_zstd(self) -> None:
+@@ -535,6 +631,169 @@ class TestResponse:
+ decoded_data += part
+ assert decoded_data == data
+
++ _test_compressor_params: list[
++ tuple[str, tuple[str, typing.Callable[[bytes], bytes]] | None]
++ ] = [
++ ("deflate1", ("deflate", zlib.compress)),
++ ("deflate2", ("deflate", deflate2_compress)),
++ ("gzip", ("gzip", gzip.compress)),
++ ]
++ if _brotli_gte_1_2_0_available:
++ _test_compressor_params.append(("brotli", ("br", brotli.compress)))
++ else:
++ _test_compressor_params.append(("brotli", None))
++ if _zstd_available:
++ _test_compressor_params.append(("zstd", ("zstd", zstd_compress)))
++ else:
++ _test_compressor_params.append(("zstd", None))
++
++ @pytest.mark.parametrize("read_method", ("read", "read1"))
++ @pytest.mark.parametrize(
++ "data",
++ [d[1] for d in _test_compressor_params],
++ ids=[d[0] for d in _test_compressor_params],
++ )
++ def test_read_with_all_data_already_in_decompressor(
++ self,
++ request: pytest.FixtureRequest,
++ read_method: str,
++ data: tuple[str, typing.Callable[[bytes], bytes]] | None,
++ ) -> None:
++ if data is None:
++ pytest.skip(f"Proper {request.node.callspec.id} decoder is not available")
++ original_data = b"bar" * 1000
++ name, compress_func = data
++ compressed_data = compress_func(original_data)
++ fp = mock.Mock(read=mock.Mock(return_value=b""))
++ r = HTTPResponse(fp, headers={"content-encoding": name}, preload_content=False)
++ # Put all data in the decompressor's buffer.
++ r._init_decoder()
++ assert r._decoder is not None # for mypy
++ decoded = r._decoder.decompress(compressed_data, max_length=0)
++ if name == "br":
++ # It's known that some Brotli libraries do not respect
++ # `max_length`.
++ r._decoded_buffer.put(decoded)
++ else:
++ assert decoded == b""
++ # Read the data via `HTTPResponse`.
++ read = getattr(r, read_method)
++ assert read(0) == b""
++ assert read(2500) == original_data[:2500]
++ assert read(500) == original_data[2500:]
++ assert read(0) == b""
++ assert read() == b""
++
++ @pytest.mark.parametrize(
++ "delta",
++ (
++ 0, # First read from socket returns all compressed data.
++ -1, # First read from socket returns all but one byte of compressed data.
++ ),
++ )
++ @pytest.mark.parametrize("read_method", ("read", "read1"))
++ @pytest.mark.parametrize(
++ "data",
++ [d[1] for d in _test_compressor_params],
++ ids=[d[0] for d in _test_compressor_params],
++ )
++ def test_decode_with_max_length_close_to_compressed_data_size(
++ self,
++ request: pytest.FixtureRequest,
++ delta: int,
++ read_method: str,
++ data: tuple[str, typing.Callable[[bytes], bytes]] | None,
++ ) -> None:
++ """
++ Test decoding when the first read from the socket returns all or
++ almost all the compressed data, but then it has to be
++ decompressed in a couple of read calls.
++ """
++ if data is None:
++ pytest.skip(f"Proper {request.node.callspec.id} decoder is not available")
++
++ original_data = b"foo" * 1000
++ name, compress_func = data
++ compressed_data = compress_func(original_data)
++ fp = BytesIO(compressed_data)
++ r = HTTPResponse(fp, headers={"content-encoding": name}, preload_content=False)
++ initial_limit = len(compressed_data) + delta
++ read = getattr(r, read_method)
++ initial_chunk = read(amt=initial_limit, decode_content=True)
++ assert len(initial_chunk) == initial_limit
++ assert (
++ len(read(amt=len(original_data), decode_content=True))
++ == len(original_data) - initial_limit
++ )
++
++ # Prepare 50 MB of compressed data outside of the test measuring
++ # memory usage.
++ _test_memory_usage_decode_with_max_length_params: list[
++ tuple[str, tuple[str, bytes] | None]
++ ] = [
++ (
++ params[0],
++ (params[1][0], params[1][1](b"A" * (50 * 2**20))) if params[1] else None,
++ )
++ for params in _test_compressor_params
++ ]
++
++ @pytest.mark.parametrize(
++ "data",
++ [d[1] for d in _test_memory_usage_decode_with_max_length_params],
++ ids=[d[0] for d in _test_memory_usage_decode_with_max_length_params],
++ )
++ @pytest.mark.parametrize("read_method", ("read", "read1", "read_chunked", "stream"))
++ # Decoders consume different amounts of memory during decompression.
++ # We set the 10 MB limit to ensure that the whole decompressed data
++ # is not stored unnecessarily.
++ #
++ # FYI, the following consumption was observed for the test with
++ # `read` on CPython 3.14.0:
++ # - deflate: 2.3 MiB
++ # - deflate2: 2.1 MiB
++ # - gzip: 2.1 MiB
++ # - brotli:
++ # - brotli v1.2.0: 9 MiB
++ # - brotlicffi v1.2.0.0: 6 MiB
++ # - brotlipy v0.7.0: 105.8 MiB
++ # - zstd: 4.5 MiB
++ @pytest.mark.limit_memory("10 MB", current_thread_only=True)
++ def test_memory_usage_decode_with_max_length(
++ self,
++ request: pytest.FixtureRequest,
++ read_method: str,
++ data: tuple[str, bytes] | None,
++ ) -> None:
++ if data is None:
++ pytest.skip(f"Proper {request.node.callspec.id} decoder is not available")
++
++ name, compressed_data = data
++ limit = 1024 * 1024 # 1 MiB
++ if read_method in ("read_chunked", "stream"):
++ httplib_r = httplib.HTTPResponse(MockSock) # type: ignore[arg-type]
++ httplib_r.fp = MockChunkedEncodingResponse([compressed_data]) # type: ignore[assignment]
++ r = HTTPResponse(
++ httplib_r,
++ preload_content=False,
++ headers={"transfer-encoding": "chunked", "content-encoding": name},
++ )
++ next(getattr(r, read_method)(amt=limit, decode_content=True))
++ else:
++ fp = BytesIO(compressed_data)
++ r = HTTPResponse(
++ fp, headers={"content-encoding": name}, preload_content=False
++ )
++ getattr(r, read_method)(amt=limit, decode_content=True)
++
++ # Check that the internal decoded buffer is empty unless brotli
++ # is used.
++ # Google's brotli library does not fully respect the output
++ # buffer limit: https://github.com/google/brotli/issues/1396
++ # And unmaintained brotlipy cannot limit the output buffer size.
++ if name != "br" or brotli.__name__ == "brotlicffi":
++ assert len(r._decoded_buffer) == 0
++
+ def test_multi_decoding_deflate_deflate(self) -> None:
+ data = zlib.compress(zlib.compress(b"foo"))
+
diff --git a/meta/recipes-devtools/python/python3-urllib3_2.5.0.bb b/meta/recipes-devtools/python/python3-urllib3_2.5.0.bb
index 7892fc0874e..e277a192e53 100644
--- a/meta/recipes-devtools/python/python3-urllib3_2.5.0.bb
+++ b/meta/recipes-devtools/python/python3-urllib3_2.5.0.bb
@@ -10,6 +10,7 @@ inherit pypi python_hatchling
SRC_URI += "\
file://CVE-2025-66418.patch \
file://CVE-2026-21441.patch \
+ file://CVE-2025-66471.patch \
"
DEPENDS += "python3-hatch-vcs-native"
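The core defense in the urllib3 backport above is bounding each decompression step so a "decompression bomb" never expands fully in memory at once. The zlib side of that pattern (as used by the patched `DeflateDecoder`/`GzipDecoder`) can be shown with the standard library alone. This is a minimal standalone sketch, not urllib3's actual code: `zlib.Decompress.decompress` accepts a `max_length` cap, and input it could not consume is retained in `unconsumed_tail` for the next call.

```python
import zlib

# ~10 MB of zeros compresses to a few KB -- a simple "bomb" input.
bomb = zlib.compress(b"\x00" * 10_000_000)

d = zlib.decompressobj()
# Cap output at 64 KiB per call instead of inflating everything at once.
out = d.decompress(bomb, 65536)
assert len(out) == 65536
# Compressed input that was not consumed is kept on the object and must
# be fed back in on the next call (exactly what the patch does).
assert d.unconsumed_tail
more = d.decompress(d.unconsumed_tail, 65536)
assert len(more) == 65536
```

Note that a `max_length` of 0 means "unlimited" to zlib, which is why the patched decoders special-case `max_length == 0` and buffer the input instead of passing 0 through.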