Linux Documentation
* [PATCH v2 0/3] Documentation: security-bugs: new updates covering triage and AI
@ 2026-05-03 11:35 Willy Tarreau
  2026-05-03 11:35 ` [PATCH v2 1/3] Documentation: security-bugs: do not systematically Cc the security team Willy Tarreau
                   ` (2 more replies)
  0 siblings, 3 replies; 25+ messages in thread
From: Willy Tarreau @ 2026-05-03 11:35 UTC (permalink / raw)
  To: greg
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel, Willy Tarreau

This series tries to translate recent discussions on the security list
about how to better handle reports. It details:
  - when not to Cc: the security list
  - what classes of bugs do not need to be handled privately
  - minimum requirements for AI-assisted reports

As usual, this is probably perfectible, but it can already help in the
short term since we can point reporters to it. So, barring any strong
disagreement, it seems better to proceed in small incremental improvements
and observe the effects.

Thanks!
Willy

---
v2:
  - fixes for issues reported by Randy
  - Greg's ack on the AI part
  - reworded the "when to Cc" part based on Greg's feedback
    (Greg I didn't take your original ack since the wording changed)
  - split the threat model into its own document as per Greg's suggestion

---
Willy Tarreau (3):
  Documentation: security-bugs: do not systematically Cc the security
    team
  Documentation: security-bugs: explain what is and is not a security
    bug
  Documentation: security-bugs: clarify requirements for AI-assisted
    reports

 Documentation/process/index.rst         |   1 +
 Documentation/process/security-bugs.rst |  93 +++++++++-
 Documentation/process/threat-model.rst  | 231 ++++++++++++++++++++++++
 3 files changed, 324 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/process/threat-model.rst

-- 
2.52.0


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v2 1/3] Documentation: security-bugs: do not systematically Cc the security team
  2026-05-03 11:35 [PATCH v2 0/3] Documentation: security-bugs: new updates covering triage and AI Willy Tarreau
@ 2026-05-03 11:35 ` Willy Tarreau
  2026-05-05 14:10   ` Leon Romanovsky
  2026-05-08 15:31   ` Greg KH
  2026-05-03 11:35 ` [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug Willy Tarreau
  2026-05-03 11:35 ` [PATCH v2 3/3] Documentation: security-bugs: clarify requirements for AI-assisted reports Willy Tarreau
  2 siblings, 2 replies; 25+ messages in thread
From: Willy Tarreau @ 2026-05-03 11:35 UTC (permalink / raw)
  To: greg
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel, Willy Tarreau, Greg KH

With the increase of automated reports, the security team is dealing
with way more messages than really needed. The reporting process works
well with most teams so there is no need to systematically involve the
security team in reports.

Let's suggest keeping it for small recipient lists and new reporters
only. This should continue to cover the risk of lost messages while
reducing the volume from prolific reporters.

Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
---
 Documentation/process/security-bugs.rst | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/Documentation/process/security-bugs.rst b/Documentation/process/security-bugs.rst
index 27b028e858610..6dc525858125e 100644
--- a/Documentation/process/security-bugs.rst
+++ b/Documentation/process/security-bugs.rst
@@ -148,7 +148,15 @@ run additional tests.  Reports where the reporter does not respond promptly
 or cannot effectively discuss their findings may be abandoned if the
 communication does not quickly improve.
 
-The report must be sent to maintainers, with the security team in ``Cc:``.
+The report must be sent to maintainers.  If there are two or fewer
+recipients in your message, you must also always Cc: the Linux kernel
+security team, who will ensure the message is delivered to the proper
+people, and will be able to assist small maintainer teams with processes
+they may not be familiar with.  For larger teams, Cc: the Linux kernel
+security team for your first few reports or when seeking specific help,
+such as when resending a message which got no response within a week.
+Once you have become comfortable with the process for a few reports, it is
+no longer necessary to Cc: the security list when sending to large teams.
 The Linux kernel security team can be contacted by email at
 <security@kernel.org>.  This is a private list of security officers
 who will help verify the bug report and assist developers working on a fix.
-- 
2.52.0



* [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-03 11:35 [PATCH v2 0/3] Documentation: security-bugs: new updates covering triage and AI Willy Tarreau
  2026-05-03 11:35 ` [PATCH v2 1/3] Documentation: security-bugs: do not systematically Cc the security team Willy Tarreau
@ 2026-05-03 11:35 ` Willy Tarreau
  2026-05-05 14:10   ` Leon Romanovsky
                     ` (2 more replies)
  2026-05-03 11:35 ` [PATCH v2 3/3] Documentation: security-bugs: clarify requirements for AI-assisted reports Willy Tarreau
  2 siblings, 3 replies; 25+ messages in thread
From: Willy Tarreau @ 2026-05-03 11:35 UTC (permalink / raw)
  To: greg
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel, Willy Tarreau, Greg KH

The use of automated tools to find bugs in random locations of the kernel
has induced a rise in security reports, even though most of them should
just be reported as regular bugs. This patch is an attempt at drawing a
line between what qualifies as a security bug and what does not, hoping
to improve the situation and ease the decision on the reporter's side.

It moves the details to a new file, threat-model.rst, which enumerates
various classes of issues that are and are not security bugs. This should
make it easier to update the file with various subsystem-specific rules
without having to revisit the security bug reporting guide.

Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Leon Romanovsky <leon@kernel.org>
Suggested-by: Leon Romanovsky <leon@kernel.org>
Suggested-by: Greg KH <gregkh@linuxfoundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
---
 Documentation/process/index.rst         |   1 +
 Documentation/process/security-bugs.rst |  28 +++
 Documentation/process/threat-model.rst  | 231 ++++++++++++++++++++++++
 3 files changed, 260 insertions(+)
 create mode 100644 Documentation/process/threat-model.rst

diff --git a/Documentation/process/index.rst b/Documentation/process/index.rst
index dbd6ea16aca70..aa7c959a52b87 100644
--- a/Documentation/process/index.rst
+++ b/Documentation/process/index.rst
@@ -86,6 +86,7 @@ regressions and security problems.
    debugging/index
    handling-regressions
    security-bugs
+   threat-model
    cve
    embargoed-hardware-issues
 
diff --git a/Documentation/process/security-bugs.rst b/Documentation/process/security-bugs.rst
index 6dc525858125e..3b44464dd9ba7 100644
--- a/Documentation/process/security-bugs.rst
+++ b/Documentation/process/security-bugs.rst
@@ -66,6 +66,34 @@ In addition, the following information are highly desirable:
     the issue appear. It is useful to share them, as they can be helpful to
     keep end users protected during the time it takes them to apply the fix.
 
+What qualifies as a security bug
+--------------------------------
+
+It is important that most bugs are handled publicly so as to involve the widest
+possible audience and find the best solution.  By nature, bugs that are handled
+in closed discussions between a small set of participants are less likely to
+produce the best possible fix (e.g., risk of missing valid use cases, limited
+testing abilities).
+
+It turns out that the majority of the bugs reported via the security team are
+just regular bugs that have been improperly qualified as security bugs due to
+ignorance or misunderstanding of the Linux kernel's threat model described in
+Documentation/process/threat-model.rst, and ought to have been sent through
+the normal channels described in Documentation/admin-guide/reporting-issues.rst
+instead.
+
+The security list exists for urgent bugs that grant an attacker a capability
+they are not supposed to have on a correctly configured production system, and
+can be easily exploited, representing an imminent threat to many users.  Before
+reporting, consider whether the issue actually crosses a trust boundary on such
+a system.
+
+If you are unsure whether an issue qualifies, err on the side of reporting
+privately: the security team would rather triage a borderline report than miss
+a real vulnerability.  Reporting ordinary bugs to the security list, however,
+does not make them move faster and consumes triage capacity that other reports
+need.
+
 Identifying contacts
 --------------------
 
diff --git a/Documentation/process/threat-model.rst b/Documentation/process/threat-model.rst
new file mode 100644
index 0000000000000..8cd46483cd8b5
--- /dev/null
+++ b/Documentation/process/threat-model.rst
@@ -0,0 +1,231 @@
+.. _threatmodel:
+
+The Linux Kernel threat model
+=============================
+
+There are a lot of assumptions regarding what the kernel protects against and
+what it does not protect against. These assumptions tend to cause confusion for
+bug reports (:doc:`security-related ones <security-bugs>` vs
+:doc:`non-security ones <../admin-guide/reporting-issues>`), and can complicate
+security enforcement when the responsibilities for some boundaries are not clear
+between the kernel, distros, administrators and users.
+
+This document tries to clarify the responsibilities of the kernel in this
+domain.
+
+The kernel's responsibilities
+-----------------------------
+
+The kernel abstracts access to local hardware resources and to remote systems
+in a way that allows multiple local users to get a fair share of the available
+resources granted to them, and, when the underlying hardware permits, to assign
+a level of confidentiality to their communications and to the data they are
+processing or storing.
+
+The kernel assumes that the underlying hardware behaves according to its
+specifications. This includes the integrity of the CPU's instruction set, the
+transparency of the branch prediction unit and the cache units, the consistency
+of the Memory Management Unit (MMU), the isolation of DMA-capable peripherals
+(e.g., via IOMMU), state transitions in controllers, ranges of values read from
+registers, the respect of documented hardware limitations, etc.
+
+When hardware fails to maintain its specified isolation (e.g., CPU bugs,
+side-channels, hardware response to unexpected inputs), the kernel will usually
+attempt to implement reasonable mitigations. These are best-effort measures
+intended to reduce the attack surface or elevate the cost of an attack within
+the limits of the hardware's facilities; they do not constitute a
+kernel-provided safety guarantee.
+
+Users always perform their activities under the authority of an administrator
+who is able to grant or deny various types of permissions that may affect how
+users benefit from available resources, or the level of confidentiality of
+their activities. Administrators may also delegate all or part of their own
+permissions to some users, notably but not only via capabilities. All this
+is performed via configuration (sysctl, file-system permissions, etc.).
+
+The Linux Kernel applies a certain collection of default settings that match
+its threat model. Distros have their own threat model and will come with their
+own configuration presets, that the administrator may have to adjust to better
+suit their expectations (relax or restrict).
+
+By default, the Linux Kernel guarantees the following protections when running
+on common processors featuring privilege levels and memory management units:
+
+* **User-based isolation**: an unprivileged user may restrict access to their
+  own data from other unprivileged users running on the same system. This
+  includes:
+
+  * stored data, via file system permissions
+  * in-memory data (pages are not accessible by default to other users)
+  * process activity (ptrace is not permitted to other users)
+  * inter-process communication (other users may not observe data exchanged via
+    UNIX domain sockets or other IPC mechanisms).
+  * network communications within the same or with other systems
+
+* **Capability-based protection**:
+
+  * users not having the ``CAP_SYS_ADMIN`` capability may not alter the
+    kernel's configuration, memory, or state, change other users' view of the
+    file system layout, grant any user capabilities they do not have, nor
+    affect the system's availability (shutdown, reboot, panic, hang, or making
+    the system unresponsive via unbounded resource exhaustion).
+  * users not having the ``CAP_NET_ADMIN`` capability may not alter the network
+    configuration, nor intercept or spoof network communications from other
+    users or systems.
+  * users not having ``CAP_SYS_PTRACE`` may not observe other users' processes
+    activities.
+
+When ``CONFIG_USER_NS`` is set, the kernel also permits unprivileged users to
+create their own user namespace in which they have all capabilities, but with a
+number of restrictions (they may not perform actions that have impacts on the
+initial user namespace, such as changing time, loading modules or mounting
+block devices). Please refer to ``user_namespaces(7)`` for more details; the
+possibilities of user namespaces are not covered in this document.
+
+The kernel also offers a lot of troubleshooting and debugging facilities, which
+can constitute attack vectors when placed in the wrong hands. While some of
+them are designed to be accessible to regular local users at low risk (e.g.
+kernel logs via ``/proc/kmsg``), some expose enough information to represent a
+risk in most environments, so the decision to expose them (perf events, traces)
+falls under the administrator's responsibility, and others are not
+designed to be accessed by non-privileged users (e.g. debugfs). Access to these
+facilities by a user who has been explicitly granted permission by an
+administrator does not constitute a security breach.
+
+Bugs that permit violating the principles above constitute security breaches.
+However, bugs that enable such a violation only after another one has already
+been achieved are merely weaknesses. The kernel applies a number of
+self-protection measures whose purpose is to avoid crossing a security boundary
+when certain classes of bugs are found, but a failure of these extra
+protections does not constitute a vulnerability on its own.
+
+What does not constitute a security bug
+---------------------------------------
+
+In the Linux kernel's threat model, the following classes of problems are
+**NOT** considered Linux Kernel security bugs. However, when it is believed
+that the kernel could do better, they should be reported so that they can be
+reviewed and fixed where reasonably possible, but they will be handled like
+any regular bug:
+
+* **Configuration**:
+
+  * outdated kernels and particularly end-of-life branches are out of the scope
+    of the kernel's threat model: administrators are responsible for keeping
+    their system up to date. For a bug to qualify as a security bug, it must be
+    demonstrated that it affects actively maintained versions.
+
+  * build-level: changes to the kernel configuration that are explicitly
+    documented as lowering the security level (e.g. ``CONFIG_NOMMU``), or
+    targeted at developers only.
+
+  * OS-level: changes to command line parameters, sysctls, filesystem
+    permissions, user capabilities, or exposure of privileged interfaces that
+    explicitly increase exposure, either by offering non-default access to
+    unprivileged users or by reducing the kernel's ability to enforce some
+    protections or mitigations. Example: write access to procfs or debugfs.
+
+  * issues triggered only when using features intended for development or
+    debugging (e.g., lockdep, KASAN, fault-injection): these features are known
+    to introduce overhead and potential instability and are not intended for
+    production use.
+
+  * loading of explicitly insecure/broken/staging modules, and generally the
+    use of any subsystem marked as experimental or not intended for production
+    use.
+
+  * running out-of-tree modules or unofficial kernel forks; these should be
+    reported to the relevant vendor.
+
+* **Excess of initial privileges**:
+
+  * actions performed by a user already possessing the privileges required to
+    perform that action or modify that state (e.g. ``CAP_SYS_ADMIN``,
+    ``CAP_NET_ADMIN``, ``CAP_SYS_RAWIO``, ``CAP_SYS_MODULE`` with no further
+    boundary being crossed).
+
+  * actions performed in a user namespace without permitting anything in the
+    initial namespace that was not already permitted to the same user there.
+
+  * anything performed by the root user in the initial namespace (e.g. kernel
+    oops when writing to a privileged device).
+
+* **Out of production use**:
+
+  This covers theoretical/probabilistic attacks that rely on laboratory
+  conditions with zero system noise, or those requiring an unrealistic number
+  of attempts (e.g., billions of trials) that would be detected by standard
+  system monitoring long before success, such as:
+
+  * prediction of random numbers that only works in a totally silent
+    environment (such as IP ID, TCP ports or sequence numbers that can only be
+    guessed in a lab).
+
+  * activity observation and information leaks based on probabilistic
+    approaches that are prone to measurement noise and not realistically
+    reproducible on a production system.
+
+  * issues that can only be triggered by heavy attacks (e.g. brute force) whose
+    impact on the system makes it unlikely or impossible to remain undetected
+    before they succeed (e.g. consuming all memory before succeeding).
+
+  * problems seen only under development simulators, emulators, or combinations
+    that do not exist on real systems at the time of reporting (issues
+    involving tens of millions of threads, tens of thousands of CPUs,
+    unrealistic CPU frequencies, RAM sizes or disk capacities, network speeds).
+
+  * issues whose reproduction requires hardware modification or emulation,
+    including fake USB devices that pretend to be another device.
+
+  * issues that can only be triggered at a cost that is orders of magnitude
+    higher than the expected benefits (e.g. a fully functional keyboard
+    emulator built only to retrieve 7 uninitialized bytes in a structure, or a
+    brute-force method involving millions of connection attempts to guess a
+    port number).
+
+* **Hardening failures**:
+
+  * ability to bypass some of the kernel's hardening measures with no
+    demonstrable exploit path (e.g. ASLR bypass, event timing or probing with
+    no demonstrable consequence). These are just weaknesses, not
+    vulnerabilities.
+
+  * missing argument checks and failure to report certain errors with no
+    immediate consequence.
+
+* **Random information leaks**:
+
+  This concerns leaks of small pieces of data that happen to be present, that
+  cannot be chosen by the attacker, or that face access restrictions:
+
+  * structure padding reported by syscalls or other interfaces.
+
+  * identifiers, partial data, non-terminated strings reported in error
+    messages.
+
+  * leaks of kernel memory addresses/pointers do not constitute an immediately
+    exploitable vector and are not security bugs, though they must be reported
+    and fixed.
+
+* **Crafted file system images**:
+
+  * bugs triggered by mounting a corrupted or maliciously crafted file system
+    image are generally not security bugs, as the kernel assumes the underlying
+    storage media is under the administrator's control, unless the filesystem
+    driver is specifically documented as being hardened against untrusted media.
+
+  * issues that are resolved, mitigated, or detected by running a filesystem
+    consistency check (fsck) on the image prior to mounting.
+
+* **Physical access**:
+
+  Issues that require physical access to the machine, hardware modification, or
+  the use of specialized hardware (e.g., logic analyzers, DMA-attack tools over
+  PCI-E/Thunderbolt) are out of scope unless the system is explicitly
+  configured with technologies meant to defend against such attacks
+  (e.g. IOMMU).
+
+* **Functional and performance regressions**:
+
+  Any issue that can be mitigated by setting proper permissions and limits
+  does not qualify as a security bug.
-- 
2.52.0



* [PATCH v2 3/3] Documentation: security-bugs: clarify requirements for AI-assisted reports
  2026-05-03 11:35 [PATCH v2 0/3] Documentation: security-bugs: new updates covering triage and AI Willy Tarreau
  2026-05-03 11:35 ` [PATCH v2 1/3] Documentation: security-bugs: do not systematically Cc the security team Willy Tarreau
  2026-05-03 11:35 ` [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug Willy Tarreau
@ 2026-05-03 11:35 ` Willy Tarreau
  2026-05-05 14:09   ` Leon Romanovsky
  2 siblings, 1 reply; 25+ messages in thread
From: Willy Tarreau @ 2026-05-03 11:35 UTC (permalink / raw)
  To: greg
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel, Willy Tarreau, Greg KH

AI tools are increasingly used to assist in bug discovery. While these
tools can identify valid issues, reports that are submitted without
manual verification often lack context, contain speculative impact
assessments, or include unnecessary formatting. Such reports increase
triage effort, waste maintainers' time and may be ignored.

Reports where the reporter has verified the issue and the proposed fix
typically meet quality standards. This documentation outlines specific
requirements for length, formatting, and impact evaluation to reduce
the effort needed to deal with these reports.

Cc: Greg KH <gregkh@linuxfoundation.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
---
 Documentation/process/security-bugs.rst | 55 +++++++++++++++++++++++++
 1 file changed, 55 insertions(+)

diff --git a/Documentation/process/security-bugs.rst b/Documentation/process/security-bugs.rst
index 3b44464dd9ba7..bf62469a81266 100644
--- a/Documentation/process/security-bugs.rst
+++ b/Documentation/process/security-bugs.rst
@@ -159,6 +159,61 @@ the Linux kernel security team only.  Your message will be triaged, and you
 will receive instructions about whom to contact, if needed.  Your message may
 equally be forwarded as-is to the relevant maintainers.
 
+Responsible use of AI to find bugs
+----------------------------------
+
+A significant fraction of bug reports submitted to the security team are
+actually the result of code reviews assisted by AI tools. While this can be an
+efficient means to find bugs in rarely explored areas, it causes an overload on
+maintainers, who are sometimes forced to ignore such reports due to their poor
+quality or accuracy. As such, reporters must be particularly cautious about a
+number of points which tend to make these reports needlessly difficult to
+handle:
+
+  * **Length**: AI-generated reports tend to be excessively long, containing
+    multiple sections and superfluous detail. This makes it difficult to spot
+    important information such as affected files, versions, and impact. Please
+    ensure that a clear summary of the problem and all critical details are
+    presented first. Do not require triage engineers to scan multiple pages of
+    text. Configure your tools to produce concise, human-style reports.
+
+  * **Formatting**: Most AI-generated reports are littered with Markdown tags.
+    These decorations complicate the search for important information and do
+    not survive the quoting processes involved in forwarding or replying.
+    Please **always convert your report to plain text** without any formatting
+    decorations before sending it.
+
+  * **Impact Evaluation**: Many AI-generated reports lack an understanding of
+    the kernel's threat model and go to great lengths inventing theoretical
+    consequences. This adds noise and complicates triage. Please stick to
+    verifiable facts (e.g., "this bug permits any user to gain CAP_NET_ADMIN")
+    without enumerating speculative implications. Have your tool read this
+    documentation as part of the evaluation process.
+
+  * **Reproducer**: AI-based tools are often capable of generating reproducers.
+    Please always ensure your tool provides one and **test it thoroughly**. If
+    the reproducer does not work, or if the tool cannot produce one, the
+    validity of the report should be seriously questioned.
+
+  * **Propose a Fix**: Many AI tools are actually better at writing code than
+    evaluating it. Please ask your tool to propose a fix and **test it** before
+    reporting the problem. If the fix cannot be tested because it relies on
+    rare hardware or almost extinct network protocols, the issue is likely not
+    a security bug. In any case, if a fix is proposed, it must adhere to
+    Documentation/process/submitting-patches.rst and include a 'Fixes:' tag
+    designating the commit that introduced the bug.
+
+Failure to consider these points exposes your report to the risk of being
+ignored.
+
+Use common sense when evaluating the report. If the affected file has not been
+touched for more than one year and is maintained by a single individual, it is
+likely that usage has declined and exposed users are virtually non-existent
+(e.g., drivers for very old hardware, obsolete filesystems). In such cases,
+there is no need to consume a maintainer's time with an unimportant report. If
+the issue is clearly trivial and publicly discoverable, you should report it
+directly to the public mailing lists.
+
 Sending the report
 ------------------
 
-- 
2.52.0



* Re: [PATCH v2 3/3] Documentation: security-bugs: clarify requirements for AI-assisted reports
  2026-05-03 11:35 ` [PATCH v2 3/3] Documentation: security-bugs: clarify requirements for AI-assisted reports Willy Tarreau
@ 2026-05-05 14:09   ` Leon Romanovsky
  0 siblings, 0 replies; 25+ messages in thread
From: Leon Romanovsky @ 2026-05-05 14:09 UTC (permalink / raw)
  To: Willy Tarreau
  Cc: greg, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel, Greg KH

On Sun, May 03, 2026 at 01:35:06PM +0200, Willy Tarreau wrote:
> AI tools are increasingly used to assist in bug discovery. While these
> tools can identify valid issues, reports that are submitted without
> manual verification often lack context, contain speculative impact
> assessments, or include unnecessary formatting. Such reports increase
> triage effort, waste maintainers' time and may be ignored.
> 
> Reports where the reporter has verified the issue and the proposed fix
> typically meet quality standards. This documentation outlines specific
> requirements for length, formatting, and impact evaluation to reduce
> the effort needed to deal with these reports.
> 
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Signed-off-by: Willy Tarreau <w@1wt.eu>
> ---
>  Documentation/process/security-bugs.rst | 55 +++++++++++++++++++++++++
>  1 file changed, 55 insertions(+)
> 

Thanks,
Reviewed-by: Leon Romanovsky <leon@kernel.org>


* Re: [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-03 11:35 ` [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug Willy Tarreau
@ 2026-05-05 14:10   ` Leon Romanovsky
  2026-05-06 15:46   ` Linus Torvalds
  2026-05-08 20:52   ` Shuah Khan
  2 siblings, 0 replies; 25+ messages in thread
From: Leon Romanovsky @ 2026-05-05 14:10 UTC (permalink / raw)
  To: Willy Tarreau
  Cc: greg, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel, Greg KH

On Sun, May 03, 2026 at 01:35:05PM +0200, Willy Tarreau wrote:
> The use of automated tools to find bugs in random locations of the kernel
> induces a raise of security reports even if most of them should just be
> reported as regular bugs. This patch is an attempt at drawing a line
> between what qualifies as a security bug and what does not, hoping to
> improve the situation and ease decision on the reporter's side.
> 
> It defers the enumeration to a new file, threat-model.rst, that tries
> to enumerate various classes of issues that are and are not security
> bugs. This should permit to more easily update this file for various
> subsystem-specific rules without having to revisit the security bug
> reporting guide.
> 
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Cc: Leon Romanovsky <leon@kernel.org>
> Suggested-by: Leon Romanovsky <leon@kernel.org>
> Suggested-by: Greg KH <gregkh@linuxfoundation.org>
> Signed-off-by: Willy Tarreau <w@1wt.eu>
> ---
>  Documentation/process/index.rst         |   1 +
>  Documentation/process/security-bugs.rst |  28 +++
>  Documentation/process/threat-model.rst  | 231 ++++++++++++++++++++++++
>  3 files changed, 260 insertions(+)
>  create mode 100644 Documentation/process/threat-model.rst

Thanks,
Reviewed-by: Leon Romanovsky <leon@kernel.org>


* Re: [PATCH v2 1/3] Documentation: security-bugs: do not systematically Cc the security team
  2026-05-03 11:35 ` [PATCH v2 1/3] Documentation: security-bugs: do not systematically Cc the security team Willy Tarreau
@ 2026-05-05 14:10   ` Leon Romanovsky
  2026-05-08 15:31   ` Greg KH
  1 sibling, 0 replies; 25+ messages in thread
From: Leon Romanovsky @ 2026-05-05 14:10 UTC (permalink / raw)
  To: Willy Tarreau
  Cc: greg, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel, Greg KH

On Sun, May 03, 2026 at 01:35:04PM +0200, Willy Tarreau wrote:
> With the increase of automated reports, the security team is dealing
> with way more messages than really needed. The reporting process works
> well with most teams so there is no need to systematically involve the
> security team in reports.
> 
> Let's suggest to keep it for small lists of recipients and new reporters
> only. This should continue to cover the risk of lost messages while
> reducing the volume from prolific reporters.
> 
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Cc: Leon Romanovsky <leon@kernel.org>
> Signed-off-by: Willy Tarreau <w@1wt.eu>
> ---
>  Documentation/process/security-bugs.rst | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)

Thanks,
Reviewed-by: Leon Romanovsky <leon@kernel.org>


* Re: [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-03 11:35 ` [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug Willy Tarreau
  2026-05-05 14:10   ` Leon Romanovsky
@ 2026-05-06 15:46   ` Linus Torvalds
  2026-05-06 16:02     ` Willy Tarreau
  2026-05-08 15:35     ` Greg KH
  2026-05-08 20:52   ` Shuah Khan
  2 siblings, 2 replies; 25+ messages in thread
From: Linus Torvalds @ 2026-05-06 15:46 UTC (permalink / raw)
  To: Willy Tarreau
  Cc: greg, leon, security, Jonathan Corbet, skhan, workflows,
	linux-doc, linux-kernel, Greg KH

[ Coming back to this after a week of trying to clean up the disaster
that is my inbox after the merge window ]

On Sun, 3 May 2026 at 04:35, Willy Tarreau <w@1wt.eu> wrote:
>
> The use of automated tools to find bugs in random locations of the kernel
> induces a rise in security reports even if most of them should just be
> reported as regular bugs. This patch is an attempt at drawing a line
> between what qualifies as a security bug and what does not, hoping to
> improve the situation and ease decision on the reporter's side.

I actually think we may want to go further than this.

I think we should simply make it a rule that "a 'security' bug that is
found by AI is public".

Now, I may be influenced by that "my inbox is a disaster during the
merge window" thing, but I do think this is pretty fundamental: if
somebody finds a bug with more or less standard AI tools (ie we're not
talking magical special hardware and nation-state level efforts), then
that bug pretty much by definition IS NOT SECRET.

So why should we consider it special and have it be on the security list?

Yes, yes, I know - some people think that "security bugs are special".
And I've been on the record before calling that opinion special - in
the short bus sense.

Bugs are bugs. And not having them in public only makes them harder to
deal with.

Do we want to make bugs with potential security impact harder to deal
with? No. No, we really don't.

So I claim that the only reason for a security list is the non-public
nature of the bug and the whole "responsible disclosure" argument.

But that argument is complete and utter garbage in the face of some
mostly automated AI discovery (now, that argument is mostly a fiction
in the first place, but I am not going to argue with people who have
a vested interest in making their special patches "security bugs").

To recap - I think this "document the scope of security bugs" is good,
but I think we should go even further, and just document the fact that
anything found by regular AI tools should just always go to public
lists and is simply not special.

                Linus


* Re: [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-06 15:46   ` Linus Torvalds
@ 2026-05-06 16:02     ` Willy Tarreau
  2026-05-07  4:18       ` Willy Tarreau
  2026-05-07  7:07       ` Peter Zijlstra
  2026-05-08 15:35     ` Greg KH
  1 sibling, 2 replies; 25+ messages in thread
From: Willy Tarreau @ 2026-05-06 16:02 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: greg, leon, security, Jonathan Corbet, skhan, workflows,
	linux-doc, linux-kernel, Greg KH

Hi Linus,

On Wed, May 06, 2026 at 08:46:07AM -0700, Linus Torvalds wrote:
> [ Coming back to this after a week of trying to clean up the disaster
> that is my inbox after the merge window ]
> 
> On Sun, 3 May 2026 at 04:35, Willy Tarreau <w@1wt.eu> wrote:
> >
> > The use of automated tools to find bugs in random locations of the kernel
> > induces a rise in security reports even if most of them should just be
> > reported as regular bugs. This patch is an attempt at drawing a line
> > between what qualifies as a security bug and what does not, hoping to
> > improve the situation and ease decision on the reporter's side.
> 
> I actually think we may want to go further than this.
> 
> I think we should simply make it a rule that "a 'security' bug that is
> found by AI is public".

This would definitely help us a lot on sec@k.o, but...

> Now, I may be influenced by that "my inbox is a disaster during the
> merge window" thing, but I do think this is pretty fundamental: if
> somebody finds a bug with more or less standard AI tools (ie we're not
> talking magical special hardware and nation-state level efforts), then
> that bug pretty much by definition IS NOT SECRET.

I think it's only 99.9% true. I mean, I've used such tools myself to
find bugs that were not found otherwise and I know that:
  - interactions with the tools count a lot
  - luck counts even more

There remains a faint possibility that the reporter has worked a lot
with their tool to be able to find the problem, i.e. the user helped
the LLM and not the other way around. In that case the finding might
indeed not be public. But clearly, from what we've seen over the last
few weeks, the number of duplicates has exploded, with up to 3 reports
for the same issue within 2 days, so it's clear that they're not in
the category I mention above.

Maybe we should leave some rope for "if you are fairly confident that
the work you did is unlikely to have been replicated by anyone else,
then you can report it here", but I think we'll both agree that for now
most reporters really think they did something exceptional while we all
saw it was not the case (or they all do the same exceptional thing).

So I'm uneasy about that.

> So why should we consider it special and have it be on the security list?
> 
> Yes, yes, I know - some people think that "security bugs are special".
> And I've been on the record before calling that opinion special - in
> the short bus sense.
> 
> Bugs are bugs. And not having them in public only makes them harder to
> deal with.
> 
> Do we want to make bugs with potential security impact harder to deal
> with? No. No, we really don't.
> 
> So I claim that the only reason for a security list is the non-public
> nature of the bug and the whole "responsible disclosure" argument.

As you can probably guess, I totally agree with these points. I'm just
trying to leave the door open for the rare exceptions without having
to accept all the flood.

> But that argument is complete and utter garbage in the face of some
> mostly automated AI discovery (now, that argument is mostly a fiction
> in the first place, but I am not going to argue with people who have
> a vested interest in making their special patches "security bugs").
> 
> To recap - I think this "document the scope of security bugs" is good,

Thanks for the feedback.

> but I think we should go even further, and just document the fact that
> anything found by regular AI tools should just always go to public
> lists and is simply not special.

I'm fine with that but I'd like to add an "except..." though I don't know
how to phrase it. If you have any idea, we can write something for a
start and see how it goes. It looks like these tools are pretty good
at swallowing our doc updates to help reporters, so the good thing is
that we can now write instructions in process docs that are mostly
followed ;-)

Willy


* Re: [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-06 16:02     ` Willy Tarreau
@ 2026-05-07  4:18       ` Willy Tarreau
  2026-05-07  7:14         ` Peter Zijlstra
  2026-05-07  7:07       ` Peter Zijlstra
  1 sibling, 1 reply; 25+ messages in thread
From: Willy Tarreau @ 2026-05-07  4:18 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: greg, leon, security, Jonathan Corbet, skhan, workflows,
	linux-doc, linux-kernel, Greg KH

On Wed, May 06, 2026 at 06:02:15PM +0200, Willy Tarreau wrote:
> Hi Linus,
> 
> On Wed, May 06, 2026 at 08:46:07AM -0700, Linus Torvalds wrote:
> > [ Coming back to this after a week of trying to clean up the disaster
> > that is my inbox after the merge window ]
> > 
> > On Sun, 3 May 2026 at 04:35, Willy Tarreau <w@1wt.eu> wrote:
> > >
> > > The use of automated tools to find bugs in random locations of the kernel
> > > induces a rise in security reports even if most of them should just be
> > > reported as regular bugs. This patch is an attempt at drawing a line
> > > between what qualifies as a security bug and what does not, hoping to
> > > improve the situation and ease decision on the reporter's side.
> > 
> > I actually think we may want to go further than this.
> > 
> > I think we should simply make it a rule that "a 'security' bug that is
> > found by AI is public".
> 
> This would definitely help us a lot on sec@k.o, but...
> 
> > Now, I may be influenced by that "my inbox is a disaster during the
> > merge window" thing, but I do think this is pretty fundamental: if
> > somebody finds a bug with more or less standard AI tools (ie we're not
> > talking magical special hardware and nation-state level efforts), then
> > that bug pretty much by definition IS NOT SECRET.
> 
> I think it's only 99.9% true. I mean, I've used such tools myself to
> find bugs that were not found otherwise and I know that:
>   - interactions with the tools count a lot
>   - luck counts even more

Thinking more about it, there's still something that doesn't add up:

- people have always been looking for vulnerabilities, sometimes for
  fun, and often to proudly show a CVE on their resume; we've been
  dealing with that for many years.
- now they can do the same using AI with much less effort, but
  their approach still stems from actively searching for a vulnerability
- when they find something, they're certain it's a vulnerability
  because it's what they asked for (hence the threat model addition).
- if we tell them "don't report this to s@k.o" they will simply send
  their reports directly to the maintainers, who are even less accustomed
  to the process and will not benefit from the security team's experience
  in triaging nor its support in saying "no". And we all know how
  stressful a vulnerability report can be for a developer who instantly
  has to stop doing everything and start looking at it just in case it
  turns out to be valid.

For these reasons I'd rather propose that we say something along these
lines:

    Note that the security team will generally consider AI-assisted
    findings as public and will often ask you to repost your report
    to public lists.

Another point is that for many vulns there are two types of adversaries:
  - criminals
  - script kiddies

The former must be assumed to also have discovered the same vuln, possibly
earlier, and to be actively exploiting it. The latter, however, are just
going to use whatever published exploit to say "look mum, I'm root".
Public reports containing too many details will only make exploitation
easier for this group, and that's not good for users.

And we *know* that some reports contain working PoCs that need very little
modification. Passing them through s@k.o for triaging feels safer than
directing them to public lists with no early validation.

So in short, I think that:
  - AI reports should be considered public, but not necessarily well known
    yet
  - AI reports often contain repros that shouldn't be posted publicly
  - the wording of AI reports can be intimidating to developers not used
    to receiving these things

 -> the security team should remain the first filtering layer for this
    for new reporters even if it means continuing to see some noise.
    I think that instead it's the 3rd patch about the threat model that
    should help us receive less noise by explaining what is not a
    vulnerability.

I can rework that part a bit to reflect this.

Willy


* Re: [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-06 16:02     ` Willy Tarreau
  2026-05-07  4:18       ` Willy Tarreau
@ 2026-05-07  7:07       ` Peter Zijlstra
  2026-05-07 15:37         ` Linus Torvalds
  1 sibling, 1 reply; 25+ messages in thread
From: Peter Zijlstra @ 2026-05-07  7:07 UTC (permalink / raw)
  To: Willy Tarreau
  Cc: Linus Torvalds, greg, leon, security, Jonathan Corbet, skhan,
	workflows, linux-doc, linux-kernel, Greg KH

On Wed, May 06, 2026 at 06:02:15PM +0200, Willy Tarreau wrote:
> Hi Linus,
> 
> On Wed, May 06, 2026 at 08:46:07AM -0700, Linus Torvalds wrote:
> > [ Coming back to this after a week of trying to clean up the disaster
> > that is my inbox after the merge window ]
> > 
> > On Sun, 3 May 2026 at 04:35, Willy Tarreau <w@1wt.eu> wrote:
> > >
> > > The use of automated tools to find bugs in random locations of the kernel
> > > induces a rise in security reports even if most of them should just be
> > > reported as regular bugs. This patch is an attempt at drawing a line
> > > between what qualifies as a security bug and what does not, hoping to
> > > improve the situation and ease decision on the reporter's side.
> > 
> > I actually think we may want to go further than this.
> > 
> > I think we should simply make it a rule that "a 'security' bug that is
> > found by AI is public".
> 
> This would definitely help us a lot on sec@k.o, but...
> 
> > Now, I may be influenced by that "my inbox is a disaster during the
> > merge window" thing, but I do think this is pretty fundamental: if
> > somebody finds a bug with more or less standard AI tools (ie we're not
> > talking magical special hardware and nation-state level efforts), then
> > that bug pretty much by definition IS NOT SECRET.
> 
> I think it's only 99.9% true. I mean, I've used such tools myself to
> find bugs that were not found otherwise and I know that:
>   - interactions with the tools count a lot
>   - luck counts even more

Perhaps also note that including a reproducer for a crash in public is
fine, while including a full-blown exploit is not.

So perhaps that can serve as a guide; if they went and put in the effort
of making a full exploit (with or without LLM aid), keep it on security,
otherwise do the public thing.

And yes, I realize this too might be a very thin/short rope.


* Re: [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-07  4:18       ` Willy Tarreau
@ 2026-05-07  7:14         ` Peter Zijlstra
  0 siblings, 0 replies; 25+ messages in thread
From: Peter Zijlstra @ 2026-05-07  7:14 UTC (permalink / raw)
  To: Willy Tarreau
  Cc: Linus Torvalds, greg, leon, security, Jonathan Corbet, skhan,
	workflows, linux-doc, linux-kernel, Greg KH

On Thu, May 07, 2026 at 06:18:27AM +0200, Willy Tarreau wrote:

> Another point is that for many vulns there are two types of adversaries:
>   - criminals
>   - script kiddies
> 
> The former must be assumed to also have discovered the same vuln, possibly
> earlier, and to be actively exploiting it. The latter, however, are just
> going to use whatever published exploit to say "look mum, I'm root".
> Public reports containing too many details will only make exploitation
> easier for this group, and that's not good for users.
> 
> And we *know* that some reports contain working PoCs that need very little
> modification. Passing them through s@k.o for triaging feels safer than
> directing them to public lists with no early validation.
> 
> So in short, I think that:
>   - AI reports should be considered public, but not necessarily well known
>     yet
>   - AI reports often contain repros that shouldn't be posted publicly

So, I think a targeted repro that exposes just the initial bug is in
most cases useful and shouldn't be held back. Full-blown exploits, on the
other hand, should definitely be kept off the public list.

Most times, it still takes skill to get from the former to the latter,
although I suppose with LLMs this gap is shrinking too.

>   - the wording of AI reports can be intimidating to developers not
>     used to receiving these things
> 
>  -> the security team should remain the first filtering layer for this
>     for new reporters even if it means continuing to see some noise.
>     I think that instead it's the 3rd patch about the threat model that
>     should help us receive less noise by explaining what is not a
>     vulnerability.
> 
> I can rework that part a bit to reflect this.

Yes, I think that covers my earlier point well. And yes, AI babble should
be sanitized, both for brevity and to strip the parts explaining how to
do the rest of the exploit :-)




* Re: [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-07  7:07       ` Peter Zijlstra
@ 2026-05-07 15:37         ` Linus Torvalds
  2026-05-07 15:48           ` Willy Tarreau
  0 siblings, 1 reply; 25+ messages in thread
From: Linus Torvalds @ 2026-05-07 15:37 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Willy Tarreau, greg, leon, security, Jonathan Corbet, skhan,
	workflows, linux-doc, linux-kernel, Greg KH

On Thu, 7 May 2026 at 00:07, Peter Zijlstra <peterz@infradead.org> wrote:
>
> Perhaps also note that including a reproducer for a crash in public is
> fine, while including a full-blown exploit is not.
>
> So perhaps that can serve as a guide

That would be a good rule, I think - and I like how it has the
advantage of being very explicit and black-and-white, rather than some
"I think my bug is so important that it should be sent to the speshul
super-sikret list".

Because we all think we are special. Our mothers told us so, and even
the AI bots are typically explicitly told to act as experts. So they
think they are special too.

              Linus


* Re: [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-07 15:37         ` Linus Torvalds
@ 2026-05-07 15:48           ` Willy Tarreau
  0 siblings, 0 replies; 25+ messages in thread
From: Willy Tarreau @ 2026-05-07 15:48 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Peter Zijlstra, greg, leon, security, Jonathan Corbet, skhan,
	workflows, linux-doc, linux-kernel, Greg KH

On Thu, May 07, 2026 at 08:37:29AM -0700, Linus Torvalds wrote:
> On Thu, 7 May 2026 at 00:07, Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > Perhaps also note that including a reproducer for a crash in public is
> > fine, while including a full-blown exploit is not.
> >
> > So perhaps that can serve as a guide
> 
> That would be a good rule, I think - and I like how it has the
> advantage of being very explicit and black-and-white, rather than some
> "I think my bug is so important that it should be sent to the speshul
> super-sikret list".
> 
> Because we all think we are special. Our mothers told us so, and even
> the AI bots are typically explicitly told to act as experts. So they
> think they are special too.

These points correspond to what I mentioned in my second message a few
hours ago, but I want to protect maintainers against the flood of crap
they're not necessarily used to. I think that the balance I proposed
could work as it more or less covers this. When you have time to look
at it I'd be glad to have your opinion/criticism (sorry if it's a bit
long but the topic is far from trivial).

willy


* Re: [PATCH v2 1/3] Documentation: security-bugs: do not systematically Cc the security team
  2026-05-03 11:35 ` [PATCH v2 1/3] Documentation: security-bugs: do not systematically Cc the security team Willy Tarreau
  2026-05-05 14:10   ` Leon Romanovsky
@ 2026-05-08 15:31   ` Greg KH
  1 sibling, 0 replies; 25+ messages in thread
From: Greg KH @ 2026-05-08 15:31 UTC (permalink / raw)
  To: Willy Tarreau
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel

On Sun, May 03, 2026 at 01:35:04PM +0200, Willy Tarreau wrote:
> With the increase of automated reports, the security team is dealing
> with way more messages than really needed. The reporting process works
> well with most teams so there is no need to systematically involve the
> security team in reports.
> 
> Let's suggest keeping it for small lists of recipients and new reporters
> only. This should continue to cover the risk of lost messages while
> reducing the volume from prolific reporters.
> 
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Cc: Leon Romanovsky <leon@kernel.org>
> Signed-off-by: Willy Tarreau <w@1wt.eu>
> ---
>  Documentation/process/security-bugs.rst | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/process/security-bugs.rst b/Documentation/process/security-bugs.rst
> index 27b028e858610..6dc525858125e 100644
> --- a/Documentation/process/security-bugs.rst
> +++ b/Documentation/process/security-bugs.rst
> @@ -148,7 +148,15 @@ run additional tests.  Reports where the reporter does not respond promptly
>  or cannot effectively discuss their findings may be abandoned if the
>  communication does not quickly improve.
>  
> -The report must be sent to maintainers, with the security team in ``Cc:``.
> +The report must be sent to maintainers.  If there are two or fewer
> +recipients in your message, you must also always Cc: the Linux kernel
> +security team who will ensure the message is delivered to the proper
> +people, and will be able to assist small maintainer teams with processes
> +they may not be familiar with.  For larger teams, Cc: the Linux kernel
> +security team for your first few reports or when seeking specific help,
> +such as when resending a message which got no response within a week.
> +Once you have become comfortable with the process for a few reports, it is
> +no longer necessary to Cc: the security list when sending to large teams.
>  The Linux kernel security team can be contacted by email at
>  <security@kernel.org>.  This is a private list of security officers
>  who will help verify the bug report and assist developers working on a fix.
> -- 
> 2.52.0
> 

Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


* Re: [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-06 15:46   ` Linus Torvalds
  2026-05-06 16:02     ` Willy Tarreau
@ 2026-05-08 15:35     ` Greg KH
  2026-05-08 15:54       ` Joshua Peisach
  2026-05-08 15:59       ` Willy Tarreau
  1 sibling, 2 replies; 25+ messages in thread
From: Greg KH @ 2026-05-08 15:35 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Willy Tarreau, leon, security, Jonathan Corbet, skhan, workflows,
	linux-doc, linux-kernel

On Wed, May 06, 2026 at 08:46:07AM -0700, Linus Torvalds wrote:
> [ Coming back to this after a week of trying to clean up the disaster
> that is my inbox after the merge window ]
> 
> On Sun, 3 May 2026 at 04:35, Willy Tarreau <w@1wt.eu> wrote:
> >
> > The use of automated tools to find bugs in random locations of the kernel
> > induces a rise in security reports even if most of them should just be
> > reported as regular bugs. This patch is an attempt at drawing a line
> > between what qualifies as a security bug and what does not, hoping to
> > improve the situation and ease decision on the reporter's side.
> 
> I actually think we may want to go further than this.
> 
> I think we should simply make it a rule that "a 'security' bug that is
> found by AI is public".
> 
> Now, I may be influenced by that "my inbox is a disaster during the
> merge window" thing, but I do think this is pretty fundamental: if
> somebody finds a bug with more or less standard AI tools (ie we're not
> talking magical special hardware and nation-state level efforts), then
> that bug pretty much by definition IS NOT SECRET.

After the past 2 weeks, and the past 2 months, I am going to violently
agree with you here.  We've seen so many "duplicate" bug reports it's
not funny.  All of the modern LLMs are feeding the output back into the
model for future runs, which makes the data totally public.  Even if
not, the output is being monitored by external companies at the very
least.

> So why should we consider it special and have it be on the security list?

I don't think we should anymore.

Yes, having a full reproducer in public is not good, but we should start
redirecting the general "this is a bug" reports to public lists more.
That's the only way we are going to handle this influx, as our "normal"
bug workflow works very well, especially when a report comes with a fix,
which these LLM tools can provide very easily.

So if this could be reworded somehow to reflect that, maybe?

But the "what is and is not a security bug" part is a good thing overall.
We need a solid definition of our threat model, if for no other reason
than to keep me from having to write over and over "Once a driver is
bound to the kernel, we trust the hardware"...

thanks,

greg k-h


* Re: [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-08 15:35     ` Greg KH
@ 2026-05-08 15:54       ` Joshua Peisach
  2026-05-08 16:07         ` Willy Tarreau
  2026-05-08 15:59       ` Willy Tarreau
  1 sibling, 1 reply; 25+ messages in thread
From: Joshua Peisach @ 2026-05-08 15:54 UTC (permalink / raw)
  To: Greg KH, Linus Torvalds
  Cc: Willy Tarreau, leon, security, Jonathan Corbet, skhan, workflows,
	linux-doc, linux-kernel

On Fri May 8, 2026 at 11:35 AM EDT, Greg KH wrote:
> On Wed, May 06, 2026 at 08:46:07AM -0700, Linus Torvalds wrote:
>> [ Coming back to this after a week of trying to clean up the disaster
>> that is my inbox after the merge window ]
>> 
>> On Sun, 3 May 2026 at 04:35, Willy Tarreau <w@1wt.eu> wrote:
>> >
>> > The use of automated tools to find bugs in random locations of the kernel
>> > induces a rise in security reports even if most of them should just be
>> > reported as regular bugs. This patch is an attempt at drawing a line
>> > between what qualifies as a security bug and what does not, hoping to
>> > improve the situation and ease decision on the reporter's side.
>> 
>> I actually think we may want to go further than this.
>> 
>> I think we should simply make it a rule that "a 'security' bug that is
>> found by AI is public".

Whether my opinion carries any weight or not, I feel it should be said here:

Yes, *in theory* the bug is public. Anyone can find it. But just like bugs
sitting in open source code repositories, anyone can find them if they try.

The only difference is that an LLM makes them more apparent and noticeable
to people, if you ask it to.

The choice to then decide "therefore we can disclose it immediately" is, in
my opinion, not great, because you are then bringing a bug that nobody, or
at most relatively few people (even in small circles), knew about to the
attention of a broader audience.

Take Dirty Frag - even though the embargo is said to have been broken, and
all parties agreed to release the disclosure, it was put on GitHub. Of course,
information that is public is public. But putting it on GitHub and then
buying the domain dirtyfrag.io makes it easy to bring attention to the bug
that was disclosed **with no patch or CVE.**

Even if the mitigation is "just disable the module", I still think that by
giving up the embargo entirely, we are creating more attention, and more
opportunity for exploitation. Even if it's a PoC and not an exploit for
malicious purposes.


>> 
>> Now, I may be influenced by that "my inbox is a disaster during the
>> merge window" thing, but I do think this is pretty fundamental: if
>> somebody finds a bug with more or less standard AI tools (ie we're not
>> talking magical special hardware and nation-state level efforts), then
>> that bug pretty much by definition IS NOT SECRET.
>

Yes. I agree. But in theory that person did not need to use AI to find the
bug, so by that logic, the bug was already known about.

> After the past 2 weeks, and the past 2 months, I am going to violently
> agree with you here.  We've seen so many "duplicate" bug reports it's
> not funny.  All of the modern LLMs are feeding the output back into the
> model for future runs, which makes the data totally public.  Even if
> not, the output is being monitored by external companies at the very
> least.
>

I think that's more "irresponsible disclosure" - maybe there is some way
that LLM emails can be filtered?

And again, yes, the data is being trained on. But **you have to look for
it.** It is still a needle in a haystack, but it's not a black hole
absorbing said haystack.

>> So why should we consider it special and have it be on the security list?
>
> I don't think we should anymore.
>
> Yes, having a full reproducer in public is not good, but the general
> "this is a bug" comments we should start redirecting to public lists
> more.  That's the only way we are going to handle this influx as our
> "normal" bug workflow works very well, especially when it comes with a
> fix, as these LLM tools can provide very easily.
>

Could this at least be temporary? There are only a finite number of bugs
that can exist in a codebase.

-Josh


* Re: [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-08 15:35     ` Greg KH
  2026-05-08 15:54       ` Joshua Peisach
@ 2026-05-08 15:59       ` Willy Tarreau
  2026-05-08 16:39         ` Willy Tarreau
  1 sibling, 1 reply; 25+ messages in thread
From: Willy Tarreau @ 2026-05-08 15:59 UTC (permalink / raw)
  To: Greg KH
  Cc: Linus Torvalds, leon, security, Jonathan Corbet, skhan, workflows,
	linux-doc, linux-kernel

On Fri, May 08, 2026 at 05:35:39PM +0200, Greg KH wrote:
> On Wed, May 06, 2026 at 08:46:07AM -0700, Linus Torvalds wrote:
> > [ Coming back to this after a week of trying to clean up the disaster
> > that is my inbox after the merge window ]
> > 
> > On Sun, 3 May 2026 at 04:35, Willy Tarreau <w@1wt.eu> wrote:
> > >
> > > The use of automated tools to find bugs in random locations of the kernel
> > > induces a rise in security reports even if most of them should just be
> > > reported as regular bugs. This patch is an attempt at drawing a line
> > > between what qualifies as a security bug and what does not, hoping to
> > > improve the situation and ease decision on the reporter's side.
> > 
> > I actually think we may want to go further than this.
> > 
> > I think we should simply make it a rule that "a 'security' bug that is
> > found by AI is public".
> > 
> > Now, I may be influenced by that "my inbox is a disaster during the
> > merge window" thing, but I do think this is pretty fundamental: if
> > somebody finds a bug with more or less standard AI tools (ie we're not
> > talking magical special hardware and nation-state level efforts), then
> > that bug pretty much by definition IS NOT SECRET.
> 
> After the past 2 weeks, and the past 2 months, I am going to violently
> agree with you here.  We've seen so many "duplicate" bug reports it's
> not funny.  All of the modern LLMs are feeding the output back into the
> model for future runs, which makes the data totally public.  Even if
> not, the output is being monitored by external companies at the very
> least.
> 
> > So why should we consider it special and have it be on the security list?
> 
> I don't think we should anymore.
> 
> Yes, having a full reproducer in public is not good, but the general
> "this is a bug" comments we should start redirecting to public lists
> more.  That's the only way we are going to handle this influx as our
> "normal" bug workflow works very well, especially when it comes with a
> fix, as these LLM tools can provide very easily.
> 
> So if this could be reworded somehow to reflect that, maybe?

What I'm trying to do is to make sure the reports don't flood just to
maintainers (some of whom never got a report, and getting an intimidating
one written by an LLM can be really painful). And in parallel we're trying
to limit public reports for non-AI. So I think the split point revolves
to:
  - all bugs (AI and non-AI) affecting the threat model are security bugs,
    but AI reports must be considered public as others will find them in
    parallel (and we do know that pretty well now).
  - if non-AI, send to maintainers and Cc: security, send all repros 
    you can share
  - if AI,  the report must be considered public so send to maintainers
    and Cc: public lists AND always LKML, and never security@, and do
    not send the repros publicly.

=> this reinforces the role of security@ to be for triage, coordination
   and assitance to maintainers so that they're never left to themselves
   (i.e. private bugs=maint+s@k.o; public bugs=maint+public list).
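
The split above is essentially a small decision procedure; here is a purely
illustrative sketch (the function and destination names are my own, not part
of any proposed document or tool):

```python
def report_destinations(ai_assisted: bool, security_relevant: bool) -> set[str]:
    """Illustrative only: where to send a kernel bug report under the
    split proposed above (not an official policy or API)."""
    dests = {"maintainers"}          # maintainers always get the report
    if not security_relevant:
        # Ordinary bug: use the normal public reporting channels.
        dests.add("public-list")
    elif ai_assisted:
        # AI-found issues are treated as already public: Cc the relevant
        # public lists and always LKML, never security@; keep any
        # reproducer private until a maintainer asks for it.
        dests.update({"public-list", "lkml"})
    else:
        # Privately found issues: Cc security@ for triage, coordination
        # and assistance to maintainers.
        dests.add("security@")
    return dests

# Example: an AI-assisted report of a threat-model-relevant bug.
print(sorted(report_destinations(ai_assisted=True, security_relevant=True)))
```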

Also, I'll add "for AI, please see the points below" (the 3rd patch with
all the rules).

There remains a gray zone with the repros from AI tools (since they're
good at writing them). These should be sent to maintainers only (no need
to involve s@k.o), but that requires a second message.

> But the "what is and is not a security bug" is a good thing overall.  We
> need a solid definition of our threat model if for no other reason to
> keep me from having to write over and over "Once a driver is bound to
> the kernel, we trust the hardware"...

Over the last two weeks I felt like you needed a macro on your keyboard
that would post a link to that doc in lore!

Thanks,
Willy

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-08 15:54       ` Joshua Peisach
@ 2026-05-08 16:07         ` Willy Tarreau
  0 siblings, 0 replies; 25+ messages in thread
From: Willy Tarreau @ 2026-05-08 16:07 UTC (permalink / raw)
  To: Joshua Peisach
  Cc: Greg KH, Linus Torvalds, leon, security, Jonathan Corbet, skhan,
	workflows, linux-doc, linux-kernel

On Fri, May 08, 2026 at 11:54:32AM -0400, Joshua Peisach wrote:
> On Fri May 8, 2026 at 11:35 AM EDT, Greg KH wrote:
> > On Wed, May 06, 2026 at 08:46:07AM -0700, Linus Torvalds wrote:
> > > [ Coming back to this after a week of trying to clean up the disaster
> > > that is my inbox after the merge window ]
> > > 
> > > On Sun, 3 May 2026 at 04:35, Willy Tarreau <w@1wt.eu> wrote:
> > > >
> > > > The use of automated tools to find bugs in random locations of the kernel
> > > > induces a rise in security reports even if most of them should just be
> > > > reported as regular bugs. This patch is an attempt at drawing a line
> > > > between what qualifies as a security bug and what does not, hoping to
> > > > improve the situation and ease the decision on the reporter's side.
> > > 
> > > I actually think we may want to go further than this.
> > > 
> > > I think we should simply make it a rule that "a 'security' bug that is
> > > found by AI is public".
> 
> Whether or not my opinion matters, I feel it should be put in here:
> 
> Yes, *in theory* the bug is public. Anyone can find it. But just like bugs
> sitting in open source code repositories, anyone can find it if they try.
> 
> The only difference is that an LLM is making it more apparent and noticeable
> to people, if you ask it to.
> 
> The choice to then decide "therefore we can disclose it immediately", in my
> opinion, is not great. Because then you are bringing a bug that nobody, or
> at most relatively few people (even in small circles), knew about to the
> attention of a broader audience.

That secrecy is no longer achievable, please trust us. Last week we saw about
one duplicate every day, and some bugs had up to 2 duplicates. I tried myself
to ask my *local* LLM to find bugs of a certain class over the whole net
tree, and it found one of the recently published ones without me having to
give it any hint about this.

Really, these days LLMs can swallow huge amounts of data and correlate
complex patterns very easily over an immense context. You don't need to
ask them to analyze a patch anymore nor to work on this or that file.

An issue found by an LLM is just a proof that this issue CAN BE FOUND by 
an LLM, thus a good indication that someone else will find it, and very
likely that someone else might already be using it.

> Take Dirty Frag - even though the embargo is said to have been broken, and
> all parties agreed to release the disclosure, it was put on GitHub. Of course,
> information that is public, is public. But putting it on GitHub and then
> buying the domain dirtyfrag.io makes it easy to bring attention to the bug
> that was disclosed **with no patch or CVE.**

That's really not how it works, nor how it worked.

> Even if the mitigation is "just disable the module", I still think that by
> giving up the embargo entirely, we are creating more attention, and more
> opportunity for exploitation. Even if it's a PoC and not an exploit for
> malicious purposes.

What is important is that we insist on no longer sharing PoCs publicly.
This will slow down script kiddies (who do the most damage: they have no
business need for the bug, yet they cause harm using it). Criminals are
probably already playing with it and might have been for weeks or months.

> > After the past 2 weeks, and the past 2 months, I am going to violently
> > agree with you here.  We've seen so many "duplicate" bug reports it's
> > not funny.  All of the modern LLMs are feeding the output back into the
> > model for future runs, which makes the data totally public.  Even if
> > not, the output is being monitored by external companies at the very
> > least.
> > 
> 
> I think that's more "irresponsible disclosure" - maybe there is some way
> that LLM emails can be filtered?

No :-(  If you saw the flood we're receiving on s@k.o, it feels like
taking a shower under Niagara Falls. Sometimes we just say "trim this
and repost it, it's too long, we can't read it".

> And again, yes, the data is being trained on. But **you have to look for
> it.** It is still a needle in a haystack, but it's not a black hole
> absorbing said haystack.

No, really not at all. Not for the past month, at least.

> > > So why should we consider it special and have it be on the security list?
> > 
> > I don't think we should anymore.
> > 
> > Yes, having a full reproducer in public is not good, but the general
> > "this is a bug" comments we should start redirecting to public lists
> > more.  That's the only way we are going to handle this influx as our
> > "normal" bug workflow works very well, especially when it comes with a
> > fix, as these LLM tools can provide very easily.
> > 
> 
> Could this at least be temporary? There are only a finite number of bugs
> that can exist in a codebase.

We regularly update the doc as circumstances change, so we don't need to
settle that now. It looks like AI-based reporting tools do consume the doc,
and this will make their reports less painful to maintainers, which is
already a great thing.

Willy

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-08 15:59       ` Willy Tarreau
@ 2026-05-08 16:39         ` Willy Tarreau
  2026-05-09  6:39           ` Greg KH
  0 siblings, 1 reply; 25+ messages in thread
From: Willy Tarreau @ 2026-05-08 16:39 UTC (permalink / raw)
  To: Greg KH
  Cc: Linus Torvalds, leon, security, Jonathan Corbet, skhan, workflows,
	linux-doc, linux-kernel

Greg,

does this addition on top of the current patch address your concerns?

--- a/Documentation/process/security-bugs.rst
+++ b/Documentation/process/security-bugs.rst
@@ -88,6 +88,14 @@ can be easily exploited, representing an imminent threat to many users.  Before
 reporting, consider whether the issue actually crosses a trust boundary on such
 a system.

+**If you resorted to AI assistance to identify a bug, you must treat it as
+public**. While you may have valid reasons to believe it is not, the security
+team's experience shows that bugs discovered this way systematically surface
+simultaneously across multiple researchers, often on the same day. In this
+case, do not publicly share a reproducer, as this could cause unintended harm;
+just mention that one is available and maintainers might ask for it privately
+if they need it.
+
 If you are unsure whether an issue qualifies, err on the side of reporting
 privately: the security team would rather triage a borderline report than miss
 a real vulnerability.  Reporting ordinary bugs to the security list, however,
@@ -102,7 +110,7 @@ affected subsystem's maintainers and Cc: the Linux kernel security team.  Do
 not send it to a public list at this stage, unless you have good reasons to
 consider the issue as being public or trivial to discover (e.g. result of a
 widely available automated vulnerability scanning tool that can be repeated by
-anyone).
+anyone, or use of AI-based tools).

 If you're sending a report for issues affecting multiple parts in the kernel,
 even if they're fairly similar issues, please send individual messages (think

If so I can resend with it.

Thanks,
Willy

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-03 11:35 ` [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug Willy Tarreau
  2026-05-05 14:10   ` Leon Romanovsky
  2026-05-06 15:46   ` Linus Torvalds
@ 2026-05-08 20:52   ` Shuah Khan
  2026-05-09  4:48     ` Willy Tarreau
  2 siblings, 1 reply; 25+ messages in thread
From: Shuah Khan @ 2026-05-08 20:52 UTC (permalink / raw)
  To: Willy Tarreau, greg
  Cc: leon, security, Jonathan Corbet, workflows, linux-doc,
	linux-kernel, Greg KH, Shuah Khan

On 5/3/26 05:35, Willy Tarreau wrote:
> The use of automated tools to find bugs in random locations of the kernel
> induces a rise in security reports even if most of them should just be
> reported as regular bugs. This patch is an attempt at drawing a line
> between what qualifies as a security bug and what does not, hoping to
> improve the situation and ease the decision on the reporter's side.
> 
> It defers the enumeration to a new file, threat-model.rst, that tries
> to enumerate various classes of issues that are and are not security
> bugs. This should permit to more easily update this file for various
> subsystem-specific rules without having to revisit the security bug
> reporting guide.
> 
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Cc: Leon Romanovsky <leon@kernel.org>
> Suggested-by: Leon Romanovsky <leon@kernel.org>
> Suggested-by: Greg KH <gregkh@linuxfoundation.org>
> Signed-off-by: Willy Tarreau <w@1wt.eu>
> ---
>   Documentation/process/index.rst         |   1 +
>   Documentation/process/security-bugs.rst |  28 +++
>   Documentation/process/threat-model.rst  | 231 ++++++++++++++++++++++++
>   3 files changed, 260 insertions(+)
>   create mode 100644 Documentation/process/threat-model.rst
> 
> diff --git a/Documentation/process/index.rst b/Documentation/process/index.rst
> index dbd6ea16aca70..aa7c959a52b87 100644
> --- a/Documentation/process/index.rst
> +++ b/Documentation/process/index.rst
> @@ -86,6 +86,7 @@ regressions and security problems.
>      debugging/index
>      handling-regressions
>      security-bugs
> +   threat-model
>      cve
>      embargoed-hardware-issues
>   
> diff --git a/Documentation/process/security-bugs.rst b/Documentation/process/security-bugs.rst
> index 6dc525858125e..3b44464dd9ba7 100644
> --- a/Documentation/process/security-bugs.rst
> +++ b/Documentation/process/security-bugs.rst
> @@ -66,6 +66,34 @@ In addition, the following information are highly desirable:
>       the issue appear. It is useful to share them, as they can be helpful to
>       keep end users protected during the time it takes them to apply the fix.
>   
> +What qualifies as a security bug
> +--------------------------------
> +
> +It is important that most bugs are handled publicly so as to involve the widest
> +possible audience and find the best solution.  By nature, bugs that are handled
> +in closed discussions between a small set of participants are less likely to
> +produce the best possible fix (e.g., risk of missing valid use cases, limited
> +testing abilities).
> +
> +It turns out that the majority of the bugs reported via the security team are
> +just regular bugs that have been improperly qualified as security bugs due to
> +ignorance or misunderstanding of the Linux kernel's threat model described in

"lack of understanding" instead of ignorance?

> +Documentation/process/threat-model.rst, and ought to have been sent through
> +the normal channels described in Documentation/admin-guide/reporting-issues.rst
> +instead.
> +
> +The security list exists for urgent bugs that grant an attacker a capability
> +they are not supposed to have on a correctly configured production system, and
> +can be easily exploited, representing an imminent threat to many users.  Before
> +reporting, consider whether the issue actually crosses a trust boundary on such
> +a system.
> +
> +If you are unsure whether an issue qualifies, err on the side of reporting
> +privately: the security team would rather triage a borderline report than miss
> +a real vulnerability.  Reporting ordinary bugs to the security list, however,
> +does not make them move faster and consumes triage capacity that other reports
> +need.
> +
>   Identifying contacts
>   --------------------
>   
> diff --git a/Documentation/process/threat-model.rst b/Documentation/process/threat-model.rst
> new file mode 100644
> index 0000000000000..8cd46483cd8b5
> --- /dev/null
> +++ b/Documentation/process/threat-model.rst
> @@ -0,0 +1,231 @@
> +.. _threatmodel:
> +
> +The Linux Kernel threat model
> +=============================
> +
> +There are a lot of assumptions regarding what the kernel protects against and
> +what it does not protect against. These assumptions tend to cause confusion for

Could simply say "what it does not" or "what the kernel does and does not protect
against"

> +bug reports (:doc:`security-related ones <security-bugs>` vs
> +:doc:`non-security ones <../admin-guide/reporting-issues>`), and can complicate
> +security enforcement when the responsibilities for some boundaries are not clear
> +between the kernel, distros, administrators and users.
> +
> +This document tries to clarify the responsibilities of the kernel in this
> +domain.
> +
> +The kernel's responsibilities
> +-----------------------------
> +
> +The kernel abstracts access to local hardware resources and to remote systems
> +in a way that allows multiple local users to get a fair share of the available
> +resources granted to them, and, when the underlying hardware permits, to assign
> +a level of confidentiality to their communications and to the data they are
> +processing or storing.
> +
> +The kernel assumes that the underlying hardware behaves according to its
> +specifications. This includes the integrity of the CPU's instruction set, the
> +transparency of the branch prediction unit and the cache units, the consistency
> +of the Memory Management Unit (MMU), the isolation of DMA-capable peripherals
> +(e.g., via IOMMU), state transitions in controllers, ranges of values read from
> +registers, the respect of documented hardware limitations, etc.
> +
> +When hardware fails to maintain its specified isolation (e.g., CPU bugs,
> +side-channels, hardware response to unexpected inputs), the kernel will usually
> +attempt to implement reasonable mitigations. These are best-effort measures
> +intended to reduce the attack surface or elevate the cost of an attack within
> +the limits of the hardware's facilities; they do not constitute a
> +kernel-provided safety guarantee.
> +
> +Users always perform their activities under the authority of an administrator
> +who is able to grant or deny various types of permissions that may affect how
> +users benefit from available resources, or the level of confidentiality of
> +their activities. Administrators may also delegate all or part of their own
> +permissions to some users, particularly, but not only, via capabilities. All this
> +is performed via configuration (sysctl, file-system permissions etc).
> +
> +The Linux Kernel applies a certain collection of default settings that match
> +its threat model. Distros have their own threat model and will come with their
> +own configuration presets, that the administrator may have to adjust to better
> +suit their expectations (relax or restrict).
> +
> +By default, the Linux Kernel guarantees the following protections when running
> +on common processors featuring privilege levels and memory management units:
> +
> +* **User-based isolation**: an unprivileged user may restrict access to their
> +  own data from other unprivileged users running on the same system. This
> +  includes:
> +
> +  * stored data, via file system permissions
> +  * in-memory data (pages are not accessible by default to other users)
> +  * process activity (ptrace is not permitted to other users)
> +  * inter-process communication (other users may not observe data exchanged via
> +    UNIX domain sockets or other IPC mechanisms).
> +  * network communications within the same or with other systems
> +
> +* **Capability-based protection**:
> +
> +  * users not having the ``CAP_SYS_ADMIN`` capability may not alter the
> +    kernel's configuration, memory nor state, change other users' view of the
> +    file system layout, grant any user capabilities they do not have, nor
> +    affect the system's availability (shutdown, reboot, panic, hang, or making
> +    the system unresponsive via unbounded resource exhaustion).
> +  * users not having the ``CAP_NET_ADMIN`` capability may not alter the network
> +    configuration, intercept nor spoof network communications from other users
> +    nor systems.
> +  * users not having ``CAP_SYS_PTRACE`` may not observe other users' processes
> +    activities.
> +
> +When ``CONFIG_USER_NS`` is set, the kernel also permits unprivileged users to
> +create their own user namespace in which they have all capabilities, but with a
> +number of restrictions (they may not perform actions that have impacts on the
> +initial user namespace, such as changing time, loading modules or mounting
> +block devices). Please refer to ``user_namespaces(7)`` for more details; the
> +possibilities of user namespaces are not covered in this document.
> +
> +The kernel also offers a lot of troubleshooting and debugging facilities, which
> +can constitute attack vectors when placed in wrong hands. While some of them
> +are designed to be accessible to regular local users with a low risk (e.g.
> +kernel logs via ``/proc/kmsg``), some would expose enough information to
> +represent a risk in most places and the decision to expose them is under the
> +administrator's responsibility (perf events, traces), and others are not
> +designed to be accessed by non-privileged users (e.g. debugfs). Access to these
> +facilities by a user who has been explicitly granted permission by an
> +administrator does not constitute a security breach.
> +
> +Bugs that permit to violate the principles above constitute security breaches.
> +However, bugs that permit one violation only once another one was already
> +achieved are only weaknesses. The kernel applies a number of self-protection
> +measures whose purpose is to avoid crossing a security boundary when certain
> +classes of bugs are found, but a failure of these extra protections does not
> +constitute a vulnerability alone.
> +
> +What does not constitute a security bug
> +---------------------------------------
> +
> +In the Linux kernel's threat model, the following classes of problems are
> +**NOT** considered as Linux Kernel security bugs. However, when it is believed
> +that the kernel could do better, they should be reported, so that they can be
> +reviewed and fixed where reasonably possible, but they will be handled as any
> +regular bug:
> +
> +* **Configuration**:
> +
> +  * outdated kernels and particularly end-of-life branches are out of the scope
> +    of the kernel's threat model: administrators are responsible for keeping
> +    their system up to date. For a bug to qualify as a security bug, it must be
> +    demonstrated that it affects actively maintained versions.
> +
> +  * build-level: changes to the kernel configuration that are explicitly
> +    documented as lowering the security level (e.g. ``CONFIG_NOMMU``), or
> +    targeted at developers only.
> +
> +  * OS-level: changes to command line parameters, sysctls, filesystem
> +    permissions, user capabilities, exposure of privileged interfaces, that
> +    explicitly increase exposure by either offering non-default access to
> +    unprivileged users, or reducing the kernel's ability to enforce some
> +    protections or mitigations. Example: write access to procfs or debugfs.
> +
> +  * issues triggered only when using features intended for development or
> +    debugging (e.g., lockdep, KASAN, fault-injection): these features are known
> +    to introduce overhead and potential instability and are not intended for
> +    production use.

Can we call out features and tools (the ones in the kernel repo)?

sched_ext's Kconfig enables a few debug options including LOCKDEP:

tools/sched_ext/Kconfig:CONFIG_DEBUG_LOCKDEP=y

> +
> +  * loading of explicitly insecure/broken/staging modules, and generally
> +    using any subsystem marked as experimental or not intended for production
> +    use.
> +
> +  * running out-of-tree modules or unofficial kernel forks; these should be
> +    reported to the relevant vendor.
> +
> +* **Excess of initial privileges**:
> +
> +  * actions performed by a user already possessing the privileges required to
> +    perform that action or modify that state (e.g. ``CAP_SYS_ADMIN``,
> +    ``CAP_NET_ADMIN``, ``CAP_SYS_RAWIO``, ``CAP_SYS_MODULE`` with no further
> +    boundary being crossed).
> +
> +  * actions performed in user namespace without permitting anything in the
> +    initial namespace that was not already permitted to the same user there.

This was a bit hard to parse - examples might help here

> +
> +  * anything performed by the root user in the initial namespace (e.g. kernel
> +    oops when writing to a privileged device).
> +
> +* **Out of production use**:
> +
> +  This covers theoretical/probabilistic attacks that rely on laboratory
> +  conditions with zero system noise, or those requiring an unrealistic number
> +  of attempts (e.g., billions of trials) that would be detected by standard
> +  system monitoring long before success, such as:
> +
> +  * prediction of random numbers that only works in a totally silent
> +    environment (such as IP ID, TCP ports or sequence numbers that can only be
> +    guessed in a lab).
> +
> +  * activity observation and information leaks based on probabilistic
> +    approaches that are prone to measurement noise and not realistically
> +    reproducible on a production system.
> +
> +  * issues that can only be triggered by heavy attacks (e.g. brute force) whose
> +    impact on the system makes it unlikely or impossible to remain undetected
> +    before they succeed (e.g. consuming all memory before succeeding).
> +
> +  * problems seen only under development simulators, emulators, or combinations
> +    that do not exist on real systems at the time of reporting (issues
> +    involving tens of millions of threads, tens of thousands of CPUs,
> +    unrealistic CPU frequencies, RAM sizes or disk capacities, network speeds).
> +
> +  * issues whose reproduction requires hardware modification or emulation,
> +    including fake USB devices that pretend to be another one.
> +
> +  * as well as issues that can be triggered at a cost that is orders of
> +    magnitude higher than the expected benefits (e.g. fully functional keyboard
> +    emulator only to retrieve 7 uninitialized bytes in a structure, or
> +    brute-force method involving millions of connection attempts to guess a
> +    port number).

Can we add a section about problems found using experimental or tools
in development stage?

> +
> +* **Hardening failures**:
> +
> +  * ability to bypass some of the kernel's hardening measures with no
> +    demonstrable exploit path (e.g. ASLR bypass, events timing or probing with
> +    no demonstrable consequence). These are just weaknesses, not
> +    vulnerabilities.
> +
> +  * missing argument checks and failure to report certain errors with no
> +    immediate consequence.
> +
> +* **Random information leaks**:
> +
> +  This concerns information leaks of small data parts that happen to be there
> +  and that cannot be chosen by the attacker, or face access restrictions:
> +
> +  * structure padding reported by syscalls or other interfaces.
> +
> +  * identifiers, partial data, non-terminated strings reported in error
> +    messages.
> +
> +  * Leaks of kernel memory addresses/pointers do not constitute an immediately
> +    exploitable vector and are not security bugs, though they must be reported
> +    and fixed.
> +
> +* **Crafted file system images**:
> +
> +  * bugs triggered by mounting a corrupted or maliciously crafted file system
> +    image are generally not security bugs, as the kernel assumes the underlying
> +    storage media is under the administrator's control, unless the filesystem
> +    driver is specifically documented as being hardened against untrusted media.
> +
> +  * issues that are resolved, mitigated, or detected by running a filesystem
> +    consistency check (fsck) on the image prior to mounting.
> +
> +* **Physical access**:
> +
> +  Issues that require physical access to the machine, hardware modification, or
> +  the use of specialized hardware (e.g., logic analyzers, DMA-attack tools over
> +  PCI-E/Thunderbolt) are out of scope unless the system is explicitly
> +  configured with technologies meant to defend against such attacks
> +  (e.g. IOMMU).
> +
> +* **Functional and performance regressions**:
> +
> +  Any issue that can be mitigated by setting proper permissions and limits
> +  doesn't qualify as a security bug.

Reviewed-by: Shuah Khan <skhan@linuxfoundation.org>

thanks,
-- Shuah


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-08 20:52   ` Shuah Khan
@ 2026-05-09  4:48     ` Willy Tarreau
  2026-05-09 19:50       ` Shuah Khan
  0 siblings, 1 reply; 25+ messages in thread
From: Willy Tarreau @ 2026-05-09  4:48 UTC (permalink / raw)
  To: Shuah Khan
  Cc: greg, leon, security, Jonathan Corbet, workflows, linux-doc,
	linux-kernel, Greg KH

Hi Shuah,

On Fri, May 08, 2026 at 02:52:13PM -0600, Shuah Khan wrote:
> > +What qualifies as a security bug
> > +--------------------------------
> > +
> > +It is important that most bugs are handled publicly so as to involve the widest
> > +possible audience and find the best solution.  By nature, bugs that are handled
> > +in closed discussions between a small set of participants are less likely to
> > +produce the best possible fix (e.g., risk of missing valid use cases, limited
> > +testing abilities).
> > +
> > +It turns out that the majority of the bugs reported via the security team are
> > +just regular bugs that have been improperly qualified as security bugs due to
> > +ignorance or misunderstanding of the Linux kernel's threat model described in
> 
> "lack of understanding" instead of ignorance?

I already had "misunderstanding"; here I wanted to express the idea that
people could simply be unaware that this file exists (since it's new). Do you
think we shouldn't care about this and just keep "misunderstanding"?

(...)
> > +The Linux Kernel threat model
> > +=============================
> > +
> > +There are a lot of assumptions regarding what the kernel protects against and
> > +what it does not protect against. These assumptions tend to cause confusion for
> 
> Could simply say "what it does not" or "what the kernel does and does not protect
> against"

Ah OK good point, I'll rephrase it.

> > +* **Configuration**:
> > +
> > +  * outdated kernels and particularly end-of-life branches are out of the scope
> > +    of the kernel's threat model: administrators are responsible for keeping
> > +    their system up to date. For a bug to qualify as a security bug, it must be
> > +    demonstrated that it affects actively maintained versions.
> > +
> > +  * build-level: changes to the kernel configuration that are explicitly
> > +    documented as lowering the security level (e.g. ``CONFIG_NOMMU``), or
> > +    targeted at developers only.
> > +
> > +  * OS-level: changes to command line parameters, sysctls, filesystem
> > +    permissions, user capabilities, exposure of privileged interfaces, that
> > +    explicitly increase exposure by either offering non-default access to
> > +    unprivileged users, or reducing the kernel's ability to enforce some
> > +    protections or mitigations. Example: write access to procfs or debugfs.
> > +
> > +  * issues triggered only when using features intended for development or
> > +    debugging (e.g., lockdep, KASAN, fault-injection): these features are known
> > +    to introduce overhead and potential instability and are not intended for
> > +    production use.
> 
> Can we call out features and tools (the ones in kernel repo)

Sure!

> sched_ext's Kconfig enables
> a few debug options including LOCKDEP
> 
> tools/sched_ext/Kconfig:CONFIG_DEBUG_LOCKDEP=y

It's still there but maybe not visible enough, I should probably write
it in upper case:

   debugging (e.g., lockdep, KASAN, fault-injection):

> > +* **Excess of initial privileges**:
> > +
> > +  * actions performed by a user already possessing the privileges required to
> > +    perform that action or modify that state (e.g. ``CAP_SYS_ADMIN``,
> > +    ``CAP_NET_ADMIN``, ``CAP_SYS_RAWIO``, ``CAP_SYS_MODULE`` with no further
> > +    boundary being crossed).
> > +
> > +  * actions performed in user namespace without permitting anything in the
> > +    initial namespace that was not already permitted to the same user there.
> 
> This was a bit hard to parse - examples might help here

Yeah, when rereading it now, I fully agree. I think I should avoid the
double negation here and use a form such as:

  * actions performed in user namespace that do not bypass the restrictions
    imposed on the initial user.

If examples are still needed, I could possibly add: "(e.g. ptrace, signals,
FS or device access, system/network configuration, network binding)".

> > +  * anything performed by the root user in the initial namespace (e.g. kernel
> > +    oops when writing to a privileged device).
> > +
> > +* **Out of production use**:
> > +
> > +  This covers theoretical/probabilistic attacks that rely on laboratory
> > +  conditions with zero system noise, or those requiring an unrealistic number
> > +  of attempts (e.g., billions of trials) that would be detected by standard
> > +  system monitoring long before success, such as:
> > +
> > +  * prediction of random numbers that only works in a totally silent
> > +    environment (such as IP ID, TCP ports or sequence numbers that can only be
> > +    guessed in a lab).
> > +
> > +  * activity observation and information leaks based on probabilistic
> > +    approaches that are prone to measurement noise and not realistically
> > +    reproducible on a production system.
> > +
> > +  * issues that can only be triggered by heavy attacks (e.g. brute force) whose
> > +    impact on the system makes it unlikely or impossible to remain undetected
> > +    before they succeed (e.g. consuming all memory before succeeding).
> > +
> > +  * problems seen only under development simulators, emulators, or combinations
> > +    that do not exist on real systems at the time of reporting (issues
> > +    involving tens of millions of threads, tens of thousands of CPUs,
> > +    unrealistic CPU frequencies, RAM sizes or disk capacities, network speeds).
> > +
> > +  * issues whose reproduction requires hardware modification or emulation,
> > +    including fake USB devices that pretend to be another one.
> > +
> > +  * as well as issues that can be triggered at a cost that is orders of
> > +    magnitude higher than the expected benefits (e.g. fully functional keyboard
> > +    emulator only to retrieve 7 uninitialized bytes in a structure, or
> > +    brute-force method involving millions of connection attempts to guess a
> > +    port number).
> 
> Can we add a section about problems found using experimental tools or
> tools still in development?

You mean one more paragraph about CONFIG_EXPERIMENTAL ? Or what else do
you have in mind ? Do not hesitate to propose a paragraph if you have
anything in mind!

(...)

> Reviewed-by: Shuah Khan <skhan@linuxfoundation.org>
> 
> thanks,

Thank you!
Willy

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-08 16:39         ` Willy Tarreau
@ 2026-05-09  6:39           ` Greg KH
  2026-05-09  7:43             ` Willy Tarreau
  0 siblings, 1 reply; 25+ messages in thread
From: Greg KH @ 2026-05-09  6:39 UTC (permalink / raw)
  To: Willy Tarreau
  Cc: Linus Torvalds, leon, security, Jonathan Corbet, skhan, workflows,
	linux-doc, linux-kernel

On Fri, May 08, 2026 at 06:39:07PM +0200, Willy Tarreau wrote:
> Greg,
> 
> does this addition on top of the current patch address your concerns ?
> 
> --- a/Documentation/process/security-bugs.rst
> +++ b/Documentation/process/security-bugs.rst
> @@ -88,6 +88,14 @@ can be easily exploited, representing an imminent threat to many users.  Before
>  reporting, consider whether the issue actually crosses a trust boundary on such
>  a system.
> 
> +**If you resorted to AI assistance to identify a bug, you must treat it as
> +public**. While you may have valid reasons to believe it is not, the security
> +team's experience shows that bugs discovered this way systematically surface
> +simultaneously across multiple researchers, often on the same day. In this
> +case, do not publicly share a reproducer, as this could cause unintended harm;
> +just mention that one is available and maintainers might ask for it privately
> +if they need it.
> +
>  If you are unsure whether an issue qualifies, err on the side of reporting
>  privately: the security team would rather triage a borderline report than miss
>  a real vulnerability.  Reporting ordinary bugs to the security list, however,
> @@ -102,7 +110,7 @@ affected subsystem's maintainers and Cc: the Linux kernel security team.  Do
>  not send it to a public list at this stage, unless you have good reasons to
>  consider the issue as being public or trivial to discover (e.g. result of a
>  widely available automated vulnerability scanning tool that can be repeated by
> -anyone).
> +anyone, or use of AI-based tools).
> 
>  If you're sending a report for issues affecting multiple parts in the kernel,
>  even if they're fairly similar issues, please send individual messages (think
> 
> If so I can resend with it.

Looks good to me, thanks!

greg k-h

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-09  6:39           ` Greg KH
@ 2026-05-09  7:43             ` Willy Tarreau
  0 siblings, 0 replies; 25+ messages in thread
From: Willy Tarreau @ 2026-05-09  7:43 UTC (permalink / raw)
  To: Greg KH
  Cc: Linus Torvalds, leon, security, Jonathan Corbet, skhan, workflows,
	linux-doc, linux-kernel

On Sat, May 09, 2026 at 08:39:37AM +0200, Greg KH wrote:
> On Fri, May 08, 2026 at 06:39:07PM +0200, Willy Tarreau wrote:
> > Greg,
> > 
> > does this addition on top of the current patch address your concerns ?
> > 
> > --- a/Documentation/process/security-bugs.rst
> > +++ b/Documentation/process/security-bugs.rst
> > @@ -88,6 +88,14 @@ can be easily exploited, representing an imminent threat to many users.  Before
> >  reporting, consider whether the issue actually crosses a trust boundary on such
> >  a system.
> > 
> > +**If you resorted to AI assistance to identify a bug, you must treat it as
> > +public**. While you may have valid reasons to believe it is not, the security
> > +team's experience shows that bugs discovered this way systematically surface
> > +simultaneously across multiple researchers, often on the same day. In this
> > +case, do not publicly share a reproducer, as this could cause unintended harm;
> > +just mention that one is available and maintainers might ask for it privately
> > +if they need it.
> > +
> >  If you are unsure whether an issue qualifies, err on the side of reporting
> >  privately: the security team would rather triage a borderline report than miss
> >  a real vulnerability.  Reporting ordinary bugs to the security list, however,
> > @@ -102,7 +110,7 @@ affected subsystem's maintainers and Cc: the Linux kernel security team.  Do
> >  not send it to a public list at this stage, unless you have good reasons to
> >  consider the issue as being public or trivial to discover (e.g. result of a
> >  widely available automated vulnerability scanning tool that can be repeated by
> > -anyone).
> > +anyone, or use of AI-based tools).
> > 
> >  If you're sending a report for issues affecting multiple parts in the kernel,
> >  even if they're fairly similar issues, please send individual messages (think
> > 
> > If so I can resend with it.
> 
> Looks good to me, thanks!

Thank you. I'll integrate Shuah's comments and will send a v3. After
that I'll see if we can better split the public vs private part, because
I'm starting to find it complicated, but I don't want to postpone for
too long if having all of this can already help us.

Willy

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-09  4:48     ` Willy Tarreau
@ 2026-05-09 19:50       ` Shuah Khan
  0 siblings, 0 replies; 25+ messages in thread
From: Shuah Khan @ 2026-05-09 19:50 UTC (permalink / raw)
  To: Willy Tarreau
  Cc: greg, leon, security, Jonathan Corbet, workflows, linux-doc,
	linux-kernel, Greg KH, Shuah Khan

On 5/8/26 22:48, Willy Tarreau wrote:
> Hi Shuah,
> 
> On Fri, May 08, 2026 at 02:52:13PM -0600, Shuah Khan wrote:
>>> +What qualifies as a security bug
>>> +--------------------------------
>>> +
>>> +It is important that most bugs are handled publicly so as to involve the widest
>>> +possible audience and find the best solution.  By nature, bugs that are handled
>>> +in closed discussions between a small set of participants are less likely to
>>> +produce the best possible fix (e.g., risk of missing valid use cases, limited
>>> +testing abilities).
>>> +
>>> +It turns out that the majority of the bugs reported via the security team are
>>> +just regular bugs that have been improperly qualified as security bugs due to
>>> +ignorance or misunderstanding of the Linux kernel's threat model described in
>>
>> "lack of understanding" instead of ignorance?
> 
> I already had "misunderstanding", here I wanted to express the idea that
> people could simply ignore that this file exists (since it's new). Do you
> think we shouldn't care about this and just keep "misunderstanding" ?
> 
> (...)
>>> +The Linux Kernel threat model
>>> +=============================
>>> +
>>> +There are a lot of assumptions regarding what the kernel protects against and
>>> +what it does not protect against. These assumptions tend to cause confusion for
>>
>> Could simply say "what it does not" or "what the kernel does and does not protect
>> against"
> 
> Ah OK good point, I'll rephrase it.
> 
>>> +* **Configuration**:
>>> +
>>> +  * outdated kernels and particularly end-of-life branches are out of the scope
>>> +    of the kernel's threat model: administrators are responsible for keeping
>>> +    their system up to date. For a bug to qualify as a security bug, it must be
>>> +    demonstrated that it affects actively maintained versions.
>>> +
>>> +  * build-level: changes to the kernel configuration that are explicitly
>>> +    documented as lowering the security level (e.g. ``CONFIG_NOMMU``), or
>>> +    targeted at developers only.
>>> +
>>> +  * OS-level: changes to command line parameters, sysctls, filesystem
>>> +    permissions, user capabilities, exposure of privileged interfaces, that
>>> +    explicitly increase exposure by either offering non-default access to
>>> +    unprivileged users, or reduce the kernel's ability to enforce some
>>> +    protections or mitigations. Example: write access to procfs or debugfs.
>>> +
>>> +  * issues triggered only when using features intended for development or
>>> +    debugging (e.g., lockdep, KASAN, fault-injection): these features are known
>>> +    to introduce overhead and potential instability and are not intended for
>>> +    production use.
>>
>> Can we call out features and tools (the ones in kernel repo)
> 
> Sure!
> 
>> sched_ext's Kconfig enables
>> a few debug options including LOCKDEP
>>
>> tools/sched_ext/Kconfig:CONFIG_DEBUG_LOCKDEP=y
> 
> It's still there but maybe not visible enough, I should probably write
> it in upper case:
> 
>     debugging (e.g., lockdep, KASAN, fault-injection):
> 
>>> +* **Excess of initial privileges**:
>>> +
>>> +  * actions performed by a user already possessing the privileges required to
>>> +    perform that action or modify that state (e.g. ``CAP_SYS_ADMIN``,
>>> +    ``CAP_NET_ADMIN``, ``CAP_SYS_RAWIO``, ``CAP_SYS_MODULE`` with no further
>>> +    boundary being crossed).
>>> +
>>> +  * actions performed in user namespace without permitting anything in the
>>> +    initial namespace that was not already permitted to the same user there.
>>
>> This was a bit hard to parse - examples might help here
> 
> Yeah when rereading it now, I fully agree. I think I should avoid the
> double negation here and use a form such as;
> 
>    * actions performed in user namespace that do not bypass the restrictions
>      imposed on the initial user.
> 
> If examples are still needed, I could possibly add: "(e.g. ptrace, signals,
> FS or device access, system/network configuration, network binding)".
> 
>>> +  * anything performed by the root user in the initial namespace (e.g. kernel
>>> +    oops when writing to a privileged device).
>>> +
>>> +* **Out of production use**:
>>> +
>>> +  This covers theoretical/probabilistic attacks that rely on laboratory
>>> +  conditions with zero system noise, or those requiring an unrealistic number
>>> +  of attempts (e.g., billions of trials) that would be detected by standard
>>> +  system monitoring long before success, such as:
>>> +
>>> +  * prediction of random numbers that only works in a totally silent
>>> +    environment (such as IP ID, TCP ports or sequence numbers that can only be
>>> +    guessed in a lab).
>>> +
>>> +  * activity observation and information leaks based on probabilistic
>>> +    approaches that are prone to measurement noise and not realistically
>>> +    reproducible on a production system.
>>> +
>>> +  * issues that can only be triggered by heavy attacks (e.g. brute force) whose
>>> +    impact on the system makes it unlikely or impossible to remain undetected
>>> +    before they succeed (e.g. consuming all memory before succeeding).
>>> +
>>> +  * problems seen only under development simulators, emulators, or combinations
>>> +    that do not exist on real systems at the time of reporting (issues
>>> +    involving tens of millions of threads, tens of thousands of CPUs,
>>> +    unrealistic CPU frequencies, RAM sizes or disk capacities, network speeds).
>>> +
>>> +  * issues whose reproduction requires hardware modification or emulation,
>>> +    including fake USB devices that masquerade as a different device.
>>> +
>>> +  * as well as issues that can be triggered at a cost that is orders of
>>> +    magnitude higher than the expected benefits (e.g. fully functional keyboard
>>> +    emulator only to retrieve 7 uninitialized bytes in a structure, or
>>> +    brute-force method involving millions of connection attempts to guess a
>>> +    port number).
>>
>> Can we add a section about problems found using experimental tools or
>> tools still in development?
> 
> You mean one more paragraph about CONFIG_EXPERIMENTAL ? Or what else do
> you have in mind ? Do not hesitate to propose a paragraph if you have
> anything in mind!

This is what I have in mind:

issues found by closed-source static and dynamic checkers that are
still in development by individuals or research groups.

I see that you sent out v3 and we can add this later. My Reviewed-by
holds for your v3.

thanks,
-- Shuah



^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2026-05-09 19:50 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-05-03 11:35 [PATCH v2 0/3] Documentation: security-bugs: new updates covering triage and AI Willy Tarreau
2026-05-03 11:35 ` [PATCH v2 1/3] Documentation: security-bugs: do not systematically Cc the security team Willy Tarreau
2026-05-05 14:10   ` Leon Romanovsky
2026-05-08 15:31   ` Greg KH
2026-05-03 11:35 ` [PATCH v2 2/3] Documentation: security-bugs: explain what is and is not a security bug Willy Tarreau
2026-05-05 14:10   ` Leon Romanovsky
2026-05-06 15:46   ` Linus Torvalds
2026-05-06 16:02     ` Willy Tarreau
2026-05-07  4:18       ` Willy Tarreau
2026-05-07  7:14         ` Peter Zijlstra
2026-05-07  7:07       ` Peter Zijlstra
2026-05-07 15:37         ` Linus Torvalds
2026-05-07 15:48           ` Willy Tarreau
2026-05-08 15:35     ` Greg KH
2026-05-08 15:54       ` Joshua Peisach
2026-05-08 16:07         ` Willy Tarreau
2026-05-08 15:59       ` Willy Tarreau
2026-05-08 16:39         ` Willy Tarreau
2026-05-09  6:39           ` Greg KH
2026-05-09  7:43             ` Willy Tarreau
2026-05-08 20:52   ` Shuah Khan
2026-05-09  4:48     ` Willy Tarreau
2026-05-09 19:50       ` Shuah Khan
2026-05-03 11:35 ` [PATCH v2 3/3] Documentation: security-bugs: clarify requirements for AI-assisted reports Willy Tarreau
2026-05-05 14:09   ` Leon Romanovsky
