From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
patches@lists.linux.dev, Will Deacon <will@kernel.org>,
Mark Rutland <mark.rutland@arm.com>,
James Morse <james.morse@arm.com>,
Mark Brown <broonie@kernel.org>,
Catalin Marinas <catalin.marinas@arm.com>
Subject: [PATCH 6.18 15/55] arm64: errata: Work around early CME DVMSync acknowledgement
Date: Fri, 24 Apr 2026 15:30:54 +0200
Message-ID: <20260424132433.296939425@linuxfoundation.org>
In-Reply-To: <20260424132430.006424517@linuxfoundation.org>
6.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Catalin Marinas <catalin.marinas@arm.com>
commit 0baba94a9779c13c857f6efc55807e6a45b1d4e4 upstream.
C1-Pro acknowledges DVMSync messages before completing the SME/CME
memory accesses. Work around this by issuing an IPI to the affected CPUs
if they are running in EL0 with SME enabled.
Note that we avoid the local DSB in the IPI handler as the kernel runs
with SCTLR_EL1.IESB=1. This is sufficient to complete SME memory
accesses at EL0 on taking an exception to EL1. On the return to user
path, no barrier is necessary either. See the comment in
sme_set_active() and the more detailed explanation in the link below.
To avoid a potential IPI flood from malicious applications (e.g.
madvise(MADV_PAGEOUT) in a tight loop), track where a process is active
via mm_cpumask() and only interrupt those CPUs.
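
In effect, the broadcast DVM-based completion is backstopped by a
targeted IPI: after the DSB ISH that would normally finish the remote
invalidation, only the CPUs currently recorded in mm_cpumask() are
interrupted. A condensed sketch of that flow, folding together the
sme_dvmsync()/sme_do_dvmsync() helpers added below (the batched
arch_tlbflush_unmap_batch variant and the memory-ordering comments are
omitted here):

	/* after the TLBI ...IS; DSB ISH pair for this mm's range */
	if (alternative_has_cap_unlikely(ARM64_WORKAROUND_4193714) &&
	    !cpumask_empty(mm_cpumask(mm))) {
		preempt_disable();
		/*
		 * An empty IPI is sufficient: with SCTLR_EL1.IESB=1,
		 * taking the exception completes the outstanding SME
		 * memory accesses on the target CPU.
		 */
		smp_call_function_many(mm_cpumask(mm), sme_dvmsync_ipi,
				       NULL, true);
		preempt_enable();
	}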
Link: https://lore.kernel.org/r/ablEXwhfKyJW1i7l@J2N7QTR9R3
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Reviewed-by: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 Documentation/arch/arm64/silicon-errata.rst |  2 +
 arch/arm64/Kconfig                          | 12 ++++
 arch/arm64/include/asm/cpucaps.h            |  2 +
 arch/arm64/include/asm/fpsimd.h             | 21 +++++++
 arch/arm64/include/asm/tlbbatch.h           | 10 ++-
 arch/arm64/include/asm/tlbflush.h           | 72 ++++++++++++++++++++++++-
 arch/arm64/kernel/cpu_errata.c              | 30 ++++++++++
 arch/arm64/kernel/entry-common.c            |  3 +
 arch/arm64/kernel/fpsimd.c                  | 79 ++++++++++++++++++++++++++++
 arch/arm64/kernel/process.c                 | 36 ++++++++++++
 arch/arm64/tools/cpucaps                    |  1 +
 11 files changed, 264 insertions(+), 4 deletions(-)
--- a/Documentation/arch/arm64/silicon-errata.rst
+++ b/Documentation/arch/arm64/silicon-errata.rst
@@ -202,6 +202,8 @@ stable kernels.
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Neoverse-V3AE | #3312417 | ARM64_ERRATUM_3194386 |
+----------------+-----------------+-----------------+-----------------------------+
+| ARM | C1-Pro | #4193714 | ARM64_ERRATUM_4193714 |
++----------------+-----------------+-----------------+-----------------------------+
| ARM | MMU-500 | #841119,826419 | ARM_SMMU_MMU_500_CPRE_ERRATA|
| | | #562869,1047329 | |
+----------------+-----------------+-----------------+-----------------------------+
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1154,6 +1154,18 @@ config ARM64_ERRATUM_3194386
If unsure, say Y.
+config ARM64_ERRATUM_4193714
+ bool "C1-Pro: 4193714: SME DVMSync early acknowledgement"
+ depends on ARM64_SME
+ default y
+ help
+ Enable workaround for C1-Pro acknowledging the DVMSync before
+ the SME memory accesses are complete. This will cause TLB
+ maintenance for processes using SME to also issue an IPI to
+ the affected CPUs.
+
+ If unsure, say Y.
+
config CAVIUM_ERRATUM_22375
bool "Cavium erratum 22375, 24313"
default y
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -66,6 +66,8 @@ cpucap_is_possible(const unsigned int ca
return IS_ENABLED(CONFIG_ARM64_WORKAROUND_REPEAT_TLBI);
case ARM64_WORKAROUND_SPECULATIVE_SSBS:
return IS_ENABLED(CONFIG_ARM64_ERRATUM_3194386);
+ case ARM64_WORKAROUND_4193714:
+ return IS_ENABLED(CONFIG_ARM64_ERRATUM_4193714);
case ARM64_MPAM:
/*
* KVM MPAM support doesn't rely on the host kernel supporting MPAM.
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -428,6 +428,24 @@ static inline size_t sme_state_size(stru
return __sme_state_size(task_get_sme_vl(task));
}
+void sme_enable_dvmsync(void);
+void sme_set_active(void);
+void sme_clear_active(void);
+
+static inline void sme_enter_from_user_mode(void)
+{
+ if (alternative_has_cap_unlikely(ARM64_WORKAROUND_4193714) &&
+ test_thread_flag(TIF_SME))
+ sme_clear_active();
+}
+
+static inline void sme_exit_to_user_mode(void)
+{
+ if (alternative_has_cap_unlikely(ARM64_WORKAROUND_4193714) &&
+ test_thread_flag(TIF_SME))
+ sme_set_active();
+}
+
#else
static inline void sme_user_disable(void) { BUILD_BUG(); }
@@ -456,6 +474,9 @@ static inline size_t sme_state_size(stru
return 0;
}
+static inline void sme_enter_from_user_mode(void) { }
+static inline void sme_exit_to_user_mode(void) { }
+
#endif /* ! CONFIG_ARM64_SME */
/* For use by EFI runtime services calls only */
--- a/arch/arm64/include/asm/tlbbatch.h
+++ b/arch/arm64/include/asm/tlbbatch.h
@@ -2,11 +2,17 @@
#ifndef _ARCH_ARM64_TLBBATCH_H
#define _ARCH_ARM64_TLBBATCH_H
+#include <linux/cpumask.h>
+
struct arch_tlbflush_unmap_batch {
+#ifdef CONFIG_ARM64_ERRATUM_4193714
/*
- * For arm64, HW can do tlb shootdown, so we don't
- * need to record cpumask for sending IPI
+ * Track CPUs that need SME DVMSync on completion of this batch.
+ * Otherwise, the arm64 HW can do tlb shootdown, so we don't need to
+ * record cpumask for sending IPI
*/
+ cpumask_var_t cpumask;
+#endif
};
#endif /* _ARCH_ARM64_TLBBATCH_H */
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -80,6 +80,71 @@ static inline unsigned long get_trans_gr
}
}
+#ifdef CONFIG_ARM64_ERRATUM_4193714
+
+void sme_do_dvmsync(const struct cpumask *mask);
+
+static inline void sme_dvmsync(struct mm_struct *mm)
+{
+ if (!alternative_has_cap_unlikely(ARM64_WORKAROUND_4193714))
+ return;
+
+ sme_do_dvmsync(mm_cpumask(mm));
+}
+
+static inline void sme_dvmsync_add_pending(struct arch_tlbflush_unmap_batch *batch,
+ struct mm_struct *mm)
+{
+ if (!alternative_has_cap_unlikely(ARM64_WORKAROUND_4193714))
+ return;
+
+ /*
+ * Order the mm_cpumask() read after the hardware DVMSync.
+ */
+ dsb(ish);
+ if (cpumask_empty(mm_cpumask(mm)))
+ return;
+
+ /*
+ * Allocate the batch cpumask on first use. Fall back to an immediate
+ * IPI for this mm in case of failure.
+ */
+ if (!cpumask_available(batch->cpumask) &&
+ !zalloc_cpumask_var(&batch->cpumask, GFP_ATOMIC)) {
+ sme_do_dvmsync(mm_cpumask(mm));
+ return;
+ }
+
+ cpumask_or(batch->cpumask, batch->cpumask, mm_cpumask(mm));
+}
+
+static inline void sme_dvmsync_batch(struct arch_tlbflush_unmap_batch *batch)
+{
+ if (!alternative_has_cap_unlikely(ARM64_WORKAROUND_4193714))
+ return;
+
+ if (!cpumask_available(batch->cpumask))
+ return;
+
+ sme_do_dvmsync(batch->cpumask);
+ cpumask_clear(batch->cpumask);
+}
+
+#else
+
+static inline void sme_dvmsync(struct mm_struct *mm)
+{
+}
+static inline void sme_dvmsync_add_pending(struct arch_tlbflush_unmap_batch *batch,
+ struct mm_struct *mm)
+{
+}
+static inline void sme_dvmsync_batch(struct arch_tlbflush_unmap_batch *batch)
+{
+}
+
+#endif /* CONFIG_ARM64_ERRATUM_4193714 */
+
/*
* Level-based TLBI operations.
*
@@ -189,12 +254,14 @@ static inline void __tlbi_sync_s1ish(str
{
dsb(ish);
__repeat_tlbi_sync(vale1is, 0);
+ sme_dvmsync(mm);
}
-static inline void __tlbi_sync_s1ish_batch(void)
+static inline void __tlbi_sync_s1ish_batch(struct arch_tlbflush_unmap_batch *batch)
{
dsb(ish);
__repeat_tlbi_sync(vale1is, 0);
+ sme_dvmsync_batch(batch);
}
static inline void __tlbi_sync_s1ish_kernel(void)
@@ -357,7 +424,7 @@ static inline bool arch_tlbbatch_should_
*/
static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
{
- __tlbi_sync_s1ish_batch();
+ __tlbi_sync_s1ish_batch(batch);
}
/*
@@ -546,6 +613,7 @@ static inline void arch_tlbbatch_add_pen
struct mm_struct *mm, unsigned long start, unsigned long end)
{
__flush_tlb_range_nosync(mm, start, end, PAGE_SIZE, true, 3);
+ sme_dvmsync_add_pending(batch, mm);
}
#endif
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -11,6 +11,7 @@
#include <asm/cpu.h>
#include <asm/cputype.h>
#include <asm/cpufeature.h>
+#include <asm/fpsimd.h>
#include <asm/kvm_asm.h>
#include <asm/smp_plat.h>
@@ -551,6 +552,23 @@ static const struct midr_range erratum_s
};
#endif
+#ifdef CONFIG_ARM64_ERRATUM_4193714
+static bool has_sme_dvmsync_erratum(const struct arm64_cpu_capabilities *entry,
+ int scope)
+{
+ if (!id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1)))
+ return false;
+
+ return is_affected_midr_range(entry, scope);
+}
+
+static void cpu_enable_sme_dvmsync(const struct arm64_cpu_capabilities *__unused)
+{
+ if (this_cpu_has_cap(ARM64_WORKAROUND_4193714))
+ sme_enable_dvmsync();
+}
+#endif
+
#ifdef CONFIG_AMPERE_ERRATUM_AC03_CPU_38
static const struct midr_range erratum_ac03_cpu_38_list[] = {
MIDR_ALL_VERSIONS(MIDR_AMPERE1),
@@ -870,6 +888,18 @@ const struct arm64_cpu_capabilities arm6
ERRATA_MIDR_RANGE_LIST(erratum_spec_ssbs_list),
},
#endif
+#ifdef CONFIG_ARM64_ERRATUM_4193714
+ {
+ .desc = "C1-Pro SME DVMSync early acknowledgement",
+ .capability = ARM64_WORKAROUND_4193714,
+ .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+ .matches = has_sme_dvmsync_erratum,
+ .cpu_enable = cpu_enable_sme_dvmsync,
+ /* C1-Pro r0p0 - r1p2 (the latter only when REVIDR_EL1[0]==0) */
+ .midr_range = MIDR_RANGE(MIDR_C1_PRO, 0, 0, 1, 2),
+ MIDR_FIXED(MIDR_CPU_VAR_REV(1, 2), BIT(0)),
+ },
+#endif
#ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD
{
.desc = "ARM errata 2966298, 3117295",
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -21,6 +21,7 @@
#include <asm/daifflags.h>
#include <asm/esr.h>
#include <asm/exception.h>
+#include <asm/fpsimd.h>
#include <asm/irq_regs.h>
#include <asm/kprobes.h>
#include <asm/mmu.h>
@@ -84,6 +85,7 @@ static __always_inline void __enter_from
{
enter_from_user_mode(regs);
mte_disable_tco_entry(current);
+ sme_enter_from_user_mode();
}
static __always_inline void arm64_enter_from_user_mode(struct pt_regs *regs)
@@ -102,6 +104,7 @@ static __always_inline void arm64_exit_t
local_irq_disable();
exit_to_user_mode_prepare(regs);
local_daif_mask();
+ sme_exit_to_user_mode();
mte_check_tfsr_exit();
exit_to_user_mode();
}
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -15,6 +15,7 @@
#include <linux/compiler.h>
#include <linux/cpu.h>
#include <linux/cpu_pm.h>
+#include <linux/cpumask.h>
#include <linux/ctype.h>
#include <linux/kernel.h>
#include <linux/linkage.h>
@@ -28,6 +29,7 @@
#include <linux/sched/task_stack.h>
#include <linux/signal.h>
#include <linux/slab.h>
+#include <linux/smp.h>
#include <linux/stddef.h>
#include <linux/sysctl.h>
#include <linux/swab.h>
@@ -1384,6 +1386,83 @@ void do_sve_acc(unsigned long esr, struc
put_cpu_fpsimd_context();
}
+#ifdef CONFIG_ARM64_ERRATUM_4193714
+
+/*
+ * SME/CME erratum handling.
+ */
+static cpumask_t sme_dvmsync_cpus;
+
+/*
+ * These helpers are only called from non-preemptible contexts, so
+ * smp_processor_id() is safe here.
+ */
+void sme_set_active(void)
+{
+ unsigned int cpu = smp_processor_id();
+
+ if (!cpumask_test_cpu(cpu, &sme_dvmsync_cpus))
+ return;
+
+ cpumask_set_cpu(cpu, mm_cpumask(current->mm));
+
+ /*
+ * A subsequent (post ERET) SME access may use a stale address
+ * translation. On C1-Pro, a TLBI+DSB on a different CPU will wait for
+ * the completion of cpumask_set_cpu() above as it appears in program
+ * order before the SME access. The post-TLBI+DSB read of mm_cpumask()
+ * will lead to the IPI being issued.
+ *
+ * https://lore.kernel.org/r/ablEXwhfKyJW1i7l@J2N7QTR9R3
+ */
+}
+
+void sme_clear_active(void)
+{
+ unsigned int cpu = smp_processor_id();
+
+ if (!cpumask_test_cpu(cpu, &sme_dvmsync_cpus))
+ return;
+
+ /*
+ * With SCTLR_EL1.IESB enabled, the SME memory transactions are
+ * completed on entering EL1.
+ */
+ cpumask_clear_cpu(cpu, mm_cpumask(current->mm));
+}
+
+static void sme_dvmsync_ipi(void *unused)
+{
+ /*
+ * With SCTLR_EL1.IESB on, taking an exception is sufficient to ensure
+ * the completion of the SME memory accesses, so no need for an
+ * explicit DSB.
+ */
+}
+
+void sme_do_dvmsync(const struct cpumask *mask)
+{
+ /*
+ * This is called from the TLB maintenance functions after the DSB ISH
+ * to send the hardware DVMSync message. If this CPU sees the mask as
+ * empty, the remote CPU executing sme_set_active() would have seen
+ * the DVMSync and no IPI required.
+ */
+ if (cpumask_empty(mask))
+ return;
+
+ preempt_disable();
+ smp_call_function_many(mask, sme_dvmsync_ipi, NULL, true);
+ preempt_enable();
+}
+
+void sme_enable_dvmsync(void)
+{
+ cpumask_set_cpu(smp_processor_id(), &sme_dvmsync_cpus);
+}
+
+#endif /* CONFIG_ARM64_ERRATUM_4193714 */
+
/*
* Trapped SME access
*
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -26,6 +26,7 @@
#include <linux/reboot.h>
#include <linux/interrupt.h>
#include <linux/init.h>
+#include <linux/cpumask.h>
#include <linux/cpu.h>
#include <linux/elfcore.h>
#include <linux/pm.h>
@@ -339,8 +340,41 @@ void flush_thread(void)
flush_gcs();
}
+#ifdef CONFIG_ARM64_ERRATUM_4193714
+
+static void arch_dup_tlbbatch_mask(struct task_struct *dst)
+{
+ /*
+ * Clear the inherited cpumask with memset() to cover both cases where
+ * cpumask_var_t is a pointer or an array. It will be allocated lazily
+ * in sme_dvmsync_add_pending() if CPUMASK_OFFSTACK=y.
+ */
+ if (alternative_has_cap_unlikely(ARM64_WORKAROUND_4193714))
+ memset(&dst->tlb_ubc.arch.cpumask, 0,
+ sizeof(dst->tlb_ubc.arch.cpumask));
+}
+
+static void arch_release_tlbbatch_mask(struct task_struct *tsk)
+{
+ if (alternative_has_cap_unlikely(ARM64_WORKAROUND_4193714))
+ free_cpumask_var(tsk->tlb_ubc.arch.cpumask);
+}
+
+#else
+
+static void arch_dup_tlbbatch_mask(struct task_struct *dst)
+{
+}
+
+static void arch_release_tlbbatch_mask(struct task_struct *tsk)
+{
+}
+
+#endif /* CONFIG_ARM64_ERRATUM_4193714 */
+
void arch_release_task_struct(struct task_struct *tsk)
{
+ arch_release_tlbbatch_mask(tsk);
fpsimd_release_task(tsk);
}
@@ -356,6 +390,8 @@ int arch_dup_task_struct(struct task_str
*dst = *src;
+ arch_dup_tlbbatch_mask(dst);
+
/*
* Drop stale reference to src's sve_state and convert dst to
* non-streaming FPSIMD mode.
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -101,6 +101,7 @@ WORKAROUND_2077057
WORKAROUND_2457168
WORKAROUND_2645198
WORKAROUND_2658417
+WORKAROUND_4193714
WORKAROUND_AMPERE_AC03_CPU_38
WORKAROUND_AMPERE_AC04_CPU_23
WORKAROUND_TRBE_OVERWRITE_FILL_MODE