public inbox for linux-doc@vger.kernel.org
* [PATCH 0/3] Documentation: security-bugs: new updates covering triage and AI
@ 2026-04-26 16:39 Willy Tarreau
  2026-04-26 16:39 ` [PATCH 1/3] Documentation: security-bugs: do not systematically Cc the security team Willy Tarreau
                   ` (2 more replies)
  0 siblings, 3 replies; 25+ messages in thread
From: Willy Tarreau @ 2026-04-26 16:39 UTC (permalink / raw)
  To: greg
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel, Willy Tarreau

This series tries to translate into documentation the recent discussions
on the security list about how to better handle reports. It details:
  - when not to Cc: the security list
  - what classes of bugs do not need to be handled privately
  - minimum requirements for AI-assisted reports

As usual, this is probably perfectible, but it can already help in the
short term since we can point reporters to it. So, barring any strong
disagreement, it seems better to continue with small incremental
improvements and observe the effects.

Thanks!
Willy

---
Willy Tarreau (3):
  Documentation: security-bugs: do not systematically Cc the security
    team
  Documentation: security-bugs: explain what is and is not a security
    bug
  Documentation: security-bugs: clarify requirements for AI-assisted
    reports

 Documentation/process/security-bugs.rst | 131 ++++++++++++++++++++++--
 1 file changed, 120 insertions(+), 11 deletions(-)

-- 
2.52.0


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH 1/3] Documentation: security-bugs: do not systematically Cc the security team
  2026-04-26 16:39 [PATCH 0/3] Documentation: security-bugs: new updates covering triage and AI Willy Tarreau
@ 2026-04-26 16:39 ` Willy Tarreau
  2026-04-27 13:49   ` Greg KH
  2026-04-26 16:39 ` [PATCH 2/3] Documentation: security-bugs: explain what is and is not a security bug Willy Tarreau
  2026-04-26 16:39 ` [PATCH 3/3] Documentation: security-bugs: clarify requirements for AI-assisted reports Willy Tarreau
  2 siblings, 1 reply; 25+ messages in thread
From: Willy Tarreau @ 2026-04-26 16:39 UTC (permalink / raw)
  To: greg
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel, Willy Tarreau, Greg KH

With the increase of automated reports, the security team is dealing
with way more messages than really needed. The reporting process works
well with most teams so there is no need to systematically involve the
security team in reports.

Let's suggest keeping it for small lists of recipients, to cover the
risk of lost messages (spam, vacation, etc.) but to avoid it for larger
teams.

Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
---
 Documentation/process/security-bugs.rst | 26 ++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/Documentation/process/security-bugs.rst b/Documentation/process/security-bugs.rst
index 27b028e858610..a8a8fc724e8c8 100644
--- a/Documentation/process/security-bugs.rst
+++ b/Documentation/process/security-bugs.rst
@@ -70,11 +70,10 @@ Identifying contacts
 --------------------
 
 The most effective way to report a security bug is to send it directly to the
-affected subsystem's maintainers and Cc: the Linux kernel security team.  Do
-not send it to a public list at this stage, unless you have good reasons to
-consider the issue as being public or trivial to discover (e.g. result of a
-widely available automated vulnerability scanning tool that can be repeated by
-anyone).
+affected subsystem's maintainers.  Do not send it to a public list at this
+stage, unless you have good reasons to consider the issue as being public or
+trivial to discover (e.g. result of a widely available automated vulnerability
+scanning tool that can be repeated by anyone).
 
 If you're sending a report for issues affecting multiple parts in the kernel,
 even if they're fairly similar issues, please send individual messages (think
@@ -148,12 +147,17 @@ run additional tests.  Reports where the reporter does not respond promptly
 or cannot effectively discuss their findings may be abandoned if the
 communication does not quickly improve.
 
-The report must be sent to maintainers, with the security team in ``Cc:``.
-The Linux kernel security team can be contacted by email at
-<security@kernel.org>.  This is a private list of security officers
-who will help verify the bug report and assist developers working on a fix.
-It is possible that the security team will bring in extra help from area
-maintainers to understand and fix the security vulnerability.
+The report must be sent to maintainers.  If there are two or fewer recipients
+in your message, and only in this case, you can also Cc: the Linux kernel
+security team who will ensure the message is delivered to the proper people,
+and will be able to assist small maintainer teams with a process they are not
+necessarily familiar with.  For larger teams, please do not Cc: the Linux
+kernel security team, unless you're seeking specific help (e.g. when resending
+a message which got no response within a week).  The Linux kernel security team
+can be contacted by email at <security@kernel.org>.  This is a private list of
+security officers who will help verify the bug report and assist developers
+working on a fix.  It is possible that the security team will bring in extra
+help from area maintainers to understand and fix the security vulnerability.
 
 Please send **plain text** emails without attachments where possible.
 It is much harder to have a context-quoted discussion about a complex
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-04-26 16:39 [PATCH 0/3] Documentation: security-bugs: new updates covering triage and AI Willy Tarreau
  2026-04-26 16:39 ` [PATCH 1/3] Documentation: security-bugs: do not systematically Cc the security team Willy Tarreau
@ 2026-04-26 16:39 ` Willy Tarreau
  2026-04-26 19:33   ` Randy Dunlap
  2026-04-27 13:48   ` Greg KH
  2026-04-26 16:39 ` [PATCH 3/3] Documentation: security-bugs: clarify requirements for AI-assisted reports Willy Tarreau
  2 siblings, 2 replies; 25+ messages in thread
From: Willy Tarreau @ 2026-04-26 16:39 UTC (permalink / raw)
  To: greg
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel, Willy Tarreau, Greg KH

The use of automated tools to find bugs in random locations of the kernel
induces a rise in security reports, even though most of them should just be
reported as regular bugs. This patch is an attempt at drawing a line
between what qualifies as a security bug and what does not, hoping to
improve the situation.

Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Leon Romanovsky <leon@kernel.org>
Suggested-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
---

Leon, while we started this list before our discussion, I reused most of
your proposal which was more comprehensive, and merged our initial work
into it. I added you in Suggested-by: but I think that Co-developed-by:
would be more suitable. If so, for this you'll have to also sign-off the
patch. It's as you prefer, I personally don't care.

---
 Documentation/process/security-bugs.rst | 50 +++++++++++++++++++++++++
 1 file changed, 50 insertions(+)

diff --git a/Documentation/process/security-bugs.rst b/Documentation/process/security-bugs.rst
index a8a8fc724e8c8..7cc3a1970ca00 100644
--- a/Documentation/process/security-bugs.rst
+++ b/Documentation/process/security-bugs.rst
@@ -66,6 +66,56 @@ In addition, the following information are highly desirable:
     the issue appear. It is useful to share them, as they can be helpful to
     keep end users protected during the time it takes them to apply the fix.
 
+What qualifies as a security bug
+--------------------------------
+
+It is important that most bugs are handled publicly so as to involve the widest
+possible audience and find the best solution.  By nature, bugs that are handled
+in closed discussions between a small set of participants are less likely to
+produce the best possible fix (e.g., risk of missing valid use cases, limited
+testing abilities).
+
+It turns out that the majority of the bugs reported to the security team are
+just regular bugs that have been improperly qualified as security bugs due to a
+misunderstanding of the Linux kernel's threat model, and ought to have been
+sent through the normal channels described in
+'Documentation/admin-guide/reporting-issues.rst'.
+
+The security list exists for urgent bugs that grant an attacker a capability
+they are not supposed to have on a correctly configured production system, and
+can be easily exploited, representing an imminent threat to many users.  Before
+reporting, consider whether the issue actually crosses a trust boundary on such
+a system.
+
+In the Linux kernel's threat model, an issue is **not** a security bug, and
+should not be reported to the security list, when triggering it requires the
+reporter to first undermine the system they are attacking.  This includes, but
+is not limited to, behavior that only manifests after the administrator has
+explicitly enabled it (loading a module, setting a sysctl, writing to a debugfs
+knob, or otherwise using an interface documented as privileged or unsafe); bugs
+reachable only through root or CAP_SYS_ADMIN or CAP_NET_ADMIN on a machine the
+actor already fully controls, with no further privilege boundary being crossed;
+prediction of random numbers that only works in a totally silent environment
+(such as IP ID, TCP ports or sequence numbers that can only be guessed in a
+lab), issues that appear only in debug, lockdep, KASAN, fault-injection,
+CONFIG_NOMMU, or other developer-oriented kernel builds that are not intended
+for production use; problems seen only under development simulators, emulators,
+or fuzzing harnesses that present hardware or input states which cannot occur
+on real systems; bugs that require modified or emulated hardware; missing
+hardening or defence-in-depth suggestions with no demonstrable exploit path
+(including local ASLR bypass); mounting file systems that would be fixed or
+rejected by fsck; and bugs in out-of-tree modules or vendor forks, which should
+be reported to the relevant vendor.  Functional and performance regressions,
+and disagreements with documented kernel policy (for example, "root can load
+modules"), are likewise ordinary bugs or feature requests rather than security
+issues, and should be reported via the usual channels.
+
+If you are unsure whether an issue qualifies, err on the side of reporting
+privately: the security team would rather triage a borderline report than miss
+a real vulnerability.  Reporting ordinary bugs to the security list, however,
+does not make them move faster and consumes triage capacity that other reports
+need.
+
 Identifying contacts
 --------------------
 
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread
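One concrete way to apply the patch's capability criterion above ("bugs
reachable only through root or CAP_SYS_ADMIN or CAP_NET_ADMIN") is to check
which capabilities the reproducer's context already holds. A minimal sketch
on Linux, not part of the patch itself, decoding the CapEff mask from
/proc/self/status (bit numbers are from include/uapi/linux/capability.h):

```shell
# Decode the effective capability mask of the current context.
# CAP_NET_ADMIN is bit 12 and CAP_SYS_ADMIN is bit 21, per
# include/uapi/linux/capability.h.
capeff=$(awk '/^CapEff:/ { print $2 }' /proc/self/status)
echo "CapEff:        0x$capeff"
echo "CAP_NET_ADMIN: $(( (0x$capeff >> 12) & 1 ))"
echo "CAP_SYS_ADMIN: $(( (0x$capeff >> 21) & 1 ))"
```

If the environment needed to trigger the bug already has these bits set, no
further privilege boundary is being crossed and the report most likely falls
outside the security list's scope.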

* [PATCH 3/3] Documentation: security-bugs: clarify requirements for AI-assisted reports
  2026-04-26 16:39 [PATCH 0/3] Documentation: security-bugs: new updates covering triage and AI Willy Tarreau
  2026-04-26 16:39 ` [PATCH 1/3] Documentation: security-bugs: do not systematically Cc the security team Willy Tarreau
  2026-04-26 16:39 ` [PATCH 2/3] Documentation: security-bugs: explain what is and is not a security bug Willy Tarreau
@ 2026-04-26 16:39 ` Willy Tarreau
  2026-04-26 19:36   ` Randy Dunlap
  2026-04-27 13:50   ` Greg KH
  2 siblings, 2 replies; 25+ messages in thread
From: Willy Tarreau @ 2026-04-26 16:39 UTC (permalink / raw)
  To: greg
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel, Willy Tarreau, Greg KH

AI tools are increasingly used to assist in bug discovery. While these
tools can identify valid issues, reports that are submitted without
manual verification often lack context, contain speculative impact
assessments, or include unnecessary formatting. Such reports increase
triage effort, waste maintainers' time and may be ignored.

Reports where the reporter has verified the issue and the proposed fix
typically meet quality standards. This documentation outlines specific
requirements for length, formatting, and impact evaluation to reduce
the effort needed to deal with these reports.

Cc: Greg KH <gregkh@linuxfoundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
---
 Documentation/process/security-bugs.rst | 55 +++++++++++++++++++++++++
 1 file changed, 55 insertions(+)

diff --git a/Documentation/process/security-bugs.rst b/Documentation/process/security-bugs.rst
index 7cc3a1970ca00..803d8819694e7 100644
--- a/Documentation/process/security-bugs.rst
+++ b/Documentation/process/security-bugs.rst
@@ -180,6 +180,61 @@ the Linux kernel security team only.  Your message will be triaged, and you
 will receive instructions about whom to contact, if needed.  Your message may
 equally be forwarded as-is to the relevant maintainers.
 
+Responsible use of AI to find bugs
+----------------------------------
+
+A significant fraction of bug reports submitted to the security team are
+actually the result of code reviews assisted by AI tools. While this can be an
+efficient means to find bugs in rarely explored areas, it causes an overload on
+maintainers, who are sometimes forced to ignore such reports due to their poor
+quality or accuracy. As such, reporters must be particularly cautious about a
+number of points which tend to make these reports needlessly difficult to
+handle:
+
+  * **Length**: AI-generated reports tend to be excessively long, containing
+    multiple sections and excessive detail. This makes it difficult to spot
+    important information such as affected files, versions, and impact. Please
+    ensure that a clear summary of the problem and all critical details are
+    presented first. Do not require triage engineers to scan multiple pages of
+    text. Configure your tools to produce concise, human-style reports.
+
+  * **Formatting**: Most AI-generated reports are littered with Markdown tags.
+    These decorations complicate the search for important information and do
+    not survive the quoting processes involved in forwarding or replying.
+    Please **always convert your report to plain text** without any formatting
+    decorations before sending it.
+
+  * **Impact Evaluation**: Many AI-generated reports lack an understanding of
+    the kernel's threat model and go to great lengths inventing theoretical
+    consequences. This adds noise and complicates triage. Please stick to
+    verifiable facts (e.g., "this bug permits any user to gain CAP_NET_ADMIN")
+    without enumerating speculative implications. Have your tool read this
+    documentation as part of the evaluation process.
+
+  * **Reproducer**: AI-based tools are often capable of generating reproducers.
+    Please always ensure your tool provides one and **test it thoroughly**. If
+    the reproducer does not work, or if the tool cannot produce one, the
+    validity of the report should be seriously questioned.
+
+  * **Propose a Fix**: Many AI tools are actually better at writing code than
+    evaluating it. Please ask your tool to propose a fix and **test it** before
+    reporting the problem. If the fix cannot be tested because it relies on
+    rare hardware or almost extinct network protocols, the issue is likely not
+    a security bug. In any case, if a fix is proposed, it must adhere to
+    Documentation/process/submitting-patches.rst and include a 'Fixes:' tag
+    designating the commit that introduced the bug.
+
+Failure to consider these points exposes your report to the risk of being
+ignored.
+
+Use common sense when evaluating the report. If the affected file has not been
+touched for more than one year and is maintained by a single individual, it is
+likely that usage has declined and exposed users are virtually non-existent
+(e.g., drivers for very old hardware, obsolte filesystems). In such cases,
+there is no need to consume a maintainer's time with an unimportant report. If
+the issue is clearly trivial and publicly discoverable, you should report it
+directly to the public mailing lists.
+
 Sending the report
 ------------------
 
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread
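Two of the checks the patch above asks reporters to perform, adding a proper
'Fixes:' tag and applying the "file not touched for more than one year"
heuristic, map to simple git invocations. A sketch using a throwaway
repository (the file and commit subject are made up for the demo; in practice
you would run these inside the kernel tree against the real culprit commit
and file):

```shell
# Demonstrated in a throwaway repository; in a real report, run these
# inside the kernel tree.
tmp=$(mktemp -d) && cd "$tmp" && git init -q

# Illustrative culprit commit (name and subject are made up).
echo 'placeholder' > frobnicator.c
git add frobnicator.c
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m 'demo: add frobnicator support'

# 1) Produce a 'Fixes:' tag in the kernel's conventional format:
#    12-character abbreviated hash plus the quoted subject line.
git log -1 --abbrev=12 --format='Fixes: %h ("%s")'

# 2) Check when a given file was last touched, for the "not touched
#    for more than one year" heuristic mentioned in the patch.
git log -1 --date=short --format='%cd' -- frobnicator.c
```

The 'Fixes: %h ("%s")' format with --abbrev=12 matches the convention
described in Documentation/process/submitting-patches.rst.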

* Re: [PATCH 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-04-26 16:39 ` [PATCH 2/3] Documentation: security-bugs: explain what is and is not a security bug Willy Tarreau
@ 2026-04-26 19:33   ` Randy Dunlap
  2026-04-27 13:48   ` Greg KH
  1 sibling, 0 replies; 25+ messages in thread
From: Randy Dunlap @ 2026-04-26 19:33 UTC (permalink / raw)
  To: Willy Tarreau, greg
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel, Greg KH



On 4/26/26 9:39 AM, Willy Tarreau wrote:
> The use of automated tools to find bugs in random locations of the kernel
> induces a rise in security reports, even though most of them should just be
> reported as regular bugs. This patch is an attempt at drawing a line
> between what qualifies as a security bug and what does not, hoping to
> improve the situation.
> 
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Cc: Leon Romanovsky <leon@kernel.org>
> Suggested-by: Leon Romanovsky <leon@kernel.org>
> Signed-off-by: Willy Tarreau <w@1wt.eu>
> ---
> 
> Leon, while we started this list before our discussion, I reused most of
> your proposal which was more comprehensive, and merged our initial work
> into it. I added you in Suggested-by: but I think that Co-developed-by:
> would be more suitable. If so, for this you'll have to also sign-off the
> patch. It's as you prefer, I personally don't care.
> 
> ---
>  Documentation/process/security-bugs.rst | 50 +++++++++++++++++++++++++
>  1 file changed, 50 insertions(+)
> 
> diff --git a/Documentation/process/security-bugs.rst b/Documentation/process/security-bugs.rst
> index a8a8fc724e8c8..7cc3a1970ca00 100644
> --- a/Documentation/process/security-bugs.rst
> +++ b/Documentation/process/security-bugs.rst
> @@ -66,6 +66,56 @@ In addition, the following information are highly desirable:
>      the issue appear. It is useful to share them, as they can be helpful to
>      keep end users protected during the time it takes them to apply the fix.
>  
> +What qualifies as a security bug
> +--------------------------------
> +
> +It is important that most bugs are handled publicly so as to involve the widest
> +possible audience and find the best solution.  By nature, bugs that are handled
> +in closed discussions between a small set of participants are less likely to
> +produce the best possible fix (e.g., risk of missing valid use cases, limited
> +testing abilities).
> +
> +It turns out that the majority of the bugs reported to the security team are
> +just regular bugs that have been improperly qualified as security bugs due to a
> +misunderstanding of the Linux kernel's threat model, and ought to have been
> +sent through the normal channels described in
> +'Documentation/admin-guide/reporting-issues.rst'.

Remove the <'> marks and let automarkup handle the filename.

-- 
~Randy

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 3/3] Documentation: security-bugs: clarify requirements for AI-assisted reports
  2026-04-26 16:39 ` [PATCH 3/3] Documentation: security-bugs: clarify requirements for AI-assisted reports Willy Tarreau
@ 2026-04-26 19:36   ` Randy Dunlap
  2026-04-27  2:22     ` Willy Tarreau
  2026-04-27 13:50   ` Greg KH
  1 sibling, 1 reply; 25+ messages in thread
From: Randy Dunlap @ 2026-04-26 19:36 UTC (permalink / raw)
  To: Willy Tarreau, greg
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel, Greg KH



On 4/26/26 9:39 AM, Willy Tarreau wrote:
> +Use common sense when evaluating the report. If the affected file has not been
> +touched for more than one year and is maintained by a single individual, it is
> +likely that usage has declined and exposed users are virtually non-existent
> +(e.g., drivers for very old hardware, obsolte filesystems). In such cases,

                                         obsolete

> +there is no need to consume a maintainer's time with an unimportant report. If
> +the issue is clearly trivial and publicly discoverable, you should report it
> +directly to the public mailing lists.

-- 
~Randy


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 3/3] Documentation: security-bugs: clarify requirements for AI-assisted reports
  2026-04-26 19:36   ` Randy Dunlap
@ 2026-04-27  2:22     ` Willy Tarreau
  0 siblings, 0 replies; 25+ messages in thread
From: Willy Tarreau @ 2026-04-27  2:22 UTC (permalink / raw)
  To: Randy Dunlap
  Cc: greg, leon, security, Jonathan Corbet, skhan, workflows,
	linux-doc, linux-kernel, Greg KH

On Sun, Apr 26, 2026 at 12:36:28PM -0700, Randy Dunlap wrote:
> 
> 
> On 4/26/26 9:39 AM, Willy Tarreau wrote:
> > +Use common sense when evaluating the report. If the affected file has not been
> > +touched for more than one year and is maintained by a single individual, it is
> > +likely that usage has declined and exposed users are virtually non-existent
> > +(e.g., drivers for very old hardware, obsolte filesystems). In such cases,
> 
>                                          obsolete
(...)

Thank you Randy for your reviews! I'll apply the fixes and resend in a
few days if there are no more comments.

Willy

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-04-26 16:39 ` [PATCH 2/3] Documentation: security-bugs: explain what is and is not a security bug Willy Tarreau
  2026-04-26 19:33   ` Randy Dunlap
@ 2026-04-27 13:48   ` Greg KH
  2026-04-27 15:27     ` Willy Tarreau
  1 sibling, 1 reply; 25+ messages in thread
From: Greg KH @ 2026-04-27 13:48 UTC (permalink / raw)
  To: Willy Tarreau
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel

On Sun, Apr 26, 2026 at 06:39:13PM +0200, Willy Tarreau wrote:
> +In the Linux kernel's threat model, an issue is **not** a security bug, and
> +should not be reported to the security list, when triggering it requires the
> +reporter to first undermine the system they are attacking.  This includes, but
> +is not limited to, behavior that only manifests after the administrator has
> +explicitly enabled it (loading a module, setting a sysctl, writing to a debugfs
> +knob, or otherwise using an interface documented as privileged or unsafe); bugs
> +reachable only through root or CAP_SYS_ADMIN or CAP_NET_ADMIN on a machine the
> +actor already fully controls, with no further privilege boundary being crossed;
> +prediction of random numbers that only works in a totally silent environment
> +(such as IP ID, TCP ports or sequence numbers that can only be guessed in a
> +lab), issues that appear only in debug, lockdep, KASAN, fault-injection,
> +CONFIG_NOMMU, or other developer-oriented kernel builds that are not intended
> +for production use; problems seen only under development simulators, emulators,
> +or fuzzing harnesses that present hardware or input states which cannot occur
> +on real systems; bugs that require modified or emulated hardware; missing
> +hardening or defence-in-depth suggestions with no demonstrable exploit path
> +(including local ASLR bypass); mounting file systems that would be fixed or
> +rejected by fsck; and bugs in out-of-tree modules or vendor forks, which should
> +be reported to the relevant vendor.  Functional and performance regressions,
> +and disagreements with documented kernel policy (for example, "root can load
> +modules"), are likewise ordinary bugs or feature requests rather than security
> +issues, and should be reported via the usual channels.

This is a great list to start with, but perhaps we should put it in list
form so that it's easier to read?

Also, I can see this turning into a separate document eventually as
different subsystems should have a chance to weigh in on what they
consider the threat model to be (like what the IB subsystem does which I
don't think you listed above, or the USB subsystem.)

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 1/3] Documentation: security-bugs: do not systematically Cc the security team
  2026-04-26 16:39 ` [PATCH 1/3] Documentation: security-bugs: do not systematically Cc the security team Willy Tarreau
@ 2026-04-27 13:49   ` Greg KH
  2026-04-27 15:24     ` Willy Tarreau
  0 siblings, 1 reply; 25+ messages in thread
From: Greg KH @ 2026-04-27 13:49 UTC (permalink / raw)
  To: Willy Tarreau
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel

On Sun, Apr 26, 2026 at 06:39:12PM +0200, Willy Tarreau wrote:
> With the increase of automated reports, the security team is dealing
> with way more messages than really needed. The reporting process works
> well with most teams so there is no need to systematically involve the
> security team in reports.
> 
> Let's suggest keeping it for small lists of recipients, to cover the
> risk of lost messages (spam, vacation, etc.) but to avoid it for larger
> teams.
> 
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Cc: Leon Romanovsky <leon@kernel.org>
> Signed-off-by: Willy Tarreau <w@1wt.eu>

This is going to cut down on emails to us a bunch, which might be good,
or not, as now we'll not have a way to know what's going on overall.
But hey, let's try it and see what happens!

Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 3/3] Documentation: security-bugs: clarify requirements for AI-assisted reports
  2026-04-26 16:39 ` [PATCH 3/3] Documentation: security-bugs: clarify requirements for AI-assisted reports Willy Tarreau
  2026-04-26 19:36   ` Randy Dunlap
@ 2026-04-27 13:50   ` Greg KH
  1 sibling, 0 replies; 25+ messages in thread
From: Greg KH @ 2026-04-27 13:50 UTC (permalink / raw)
  To: Willy Tarreau
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel

On Sun, Apr 26, 2026 at 06:39:14PM +0200, Willy Tarreau wrote:
> AI tools are increasingly used to assist in bug discovery. While these
> tools can identify valid issues, reports that are submitted without
> manual verification often lack context, contain speculative impact
> assessments, or include unnecessary formatting. Such reports increase
> triage effort, waste maintainers' time and may be ignored.
> 
> Reports where the reporter has verified the issue and the proposed fix
> typically meet quality standards. This documentation outlines specific
> requirements for length, formatting, and impact evaluation to reduce
> the effort needed to deal with these reports.
> 
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Signed-off-by: Willy Tarreau <w@1wt.eu>
> ---
>  Documentation/process/security-bugs.rst | 55 +++++++++++++++++++++++++
>  1 file changed, 55 insertions(+)

Nice addition!

Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 1/3] Documentation: security-bugs: do not systematically Cc the security team
  2026-04-27 13:49   ` Greg KH
@ 2026-04-27 15:24     ` Willy Tarreau
  2026-04-27 15:33       ` Greg KH
  0 siblings, 1 reply; 25+ messages in thread
From: Willy Tarreau @ 2026-04-27 15:24 UTC (permalink / raw)
  To: Greg KH
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel

On Mon, Apr 27, 2026 at 07:49:08AM -0600, Greg KH wrote:
> On Sun, Apr 26, 2026 at 06:39:12PM +0200, Willy Tarreau wrote:
> > With the increase of automated reports, the security team is dealing
> > with way more messages than really needed. The reporting process works
> > well with most teams so there is no need to systematically involve the
> > security team in reports.
> > 
> > Let's suggest keeping it for small lists of recipients, to cover the
> > risk of lost messages (spam, vacation, etc.) but to avoid it for larger
> > teams.
> > 
> > Cc: Greg KH <gregkh@linuxfoundation.org>
> > Cc: Leon Romanovsky <leon@kernel.org>
> > Signed-off-by: Willy Tarreau <w@1wt.eu>
> 
> This is going to cut down on emails to us a bunch, which might be good,
> or not, as now we'll not have a way to know what's going on overall.
> But hey, let's try it and see what happens!

Or maybe we could suggest that first reports from a reporter should
always Cc the list? After all, every time we asked to drop the list it
was for senders at their 5th or 10th submission. Maybe we could just
say that the list members prefer not to be repeatedly Cc'ed by the same
submitters, so that they can invest more time on newcomers?

> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Thanks!
willy

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-04-27 13:48   ` Greg KH
@ 2026-04-27 15:27     ` Willy Tarreau
  2026-04-27 15:35       ` Greg KH
  0 siblings, 1 reply; 25+ messages in thread
From: Willy Tarreau @ 2026-04-27 15:27 UTC (permalink / raw)
  To: Greg KH
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel

On Mon, Apr 27, 2026 at 07:48:23AM -0600, Greg KH wrote:
> On Sun, Apr 26, 2026 at 06:39:13PM +0200, Willy Tarreau wrote:
> > +In the Linux kernel's threat model, an issue is **not** a security bug, and
> > +should not be reported to the security list, when triggering it requires the
> > +reporter to first undermine the system they are attacking.  This includes, but
> > +is not limited to, behavior that only manifests after the administrator has
> > +explicitly enabled it (loading a module, setting a sysctl, writing to a debugfs
> > +knob, or otherwise using an interface documented as privileged or unsafe); bugs
> > +reachable only through root or CAP_SYS_ADMIN or CAP_NET_ADMIN on a machine the
> > +actor already fully controls, with no further privilege boundary being crossed;
> > +prediction of random numbers that only works in a totally silent environment
> > +(such as IP ID, TCP ports or sequence numbers that can only be guessed in a
> > +lab), issues that appear only in debug, lockdep, KASAN, fault-injection,
> > +CONFIG_NOMMU, or other developer-oriented kernel builds that are not intended
> > +for production use; problems seen only under development simulators, emulators,
> > +or fuzzing harnesses that present hardware or input states which cannot occur
> > +on real systems; bugs that require modified or emulated hardware; missing
> > +hardening or defence-in-depth suggestions with no demonstrable exploit path
> > +(including local ASLR bypass); mounting file systems that would be fixed or
> > +rejected by fsck; and bugs in out-of-tree modules or vendor forks, which should
> > +be reported to the relevant vendor.  Functional and performance regressions,
> > +and disagreements with documented kernel policy (for example, "root can load
> > +modules"), are likewise ordinary bugs or feature requests rather than security
> > +issues, and should be reported via the usual channels.
> 
> This is a great list to start with, but perhaps we should put it in list
> form so that it's easier to read?

In fact that's what I tried first, and it was super long with many short
lines, possibly making it worse. But maybe aggregating several short
entries on a line by similarity could work; I can give it a try.

> Also, I can see this turning into a separate document eventually as
> different subsystems should have a chance to weigh in on what they
> consider the threat model to be

My fear if we redirect to other files is that it won't be read again.
However, we could possibly suggest to always look for the subsystem's
specific rules in this subsystem's doc, leaving enough freedom to
maintainers to reject more things.

> (like what the IB subsystem does which I
> don't think you listed above, or the USB subsystem.)

Indeed I didn't list IB (I'm never sure about it, I seem to remember
we simply trust any peer, is that right?), nor did I make specific
mentions for USB which is implicitly covered by "hardware emulation
or modification".

thanks!
Willy

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 1/3] Documentation: security-bugs: do not systematically Cc the security team
  2026-04-27 15:24     ` Willy Tarreau
@ 2026-04-27 15:33       ` Greg KH
  2026-04-27 16:09         ` Willy Tarreau
  0 siblings, 1 reply; 25+ messages in thread
From: Greg KH @ 2026-04-27 15:33 UTC (permalink / raw)
  To: Willy Tarreau
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel

On Mon, Apr 27, 2026 at 05:24:06PM +0200, Willy Tarreau wrote:
> On Mon, Apr 27, 2026 at 07:49:08AM -0600, Greg KH wrote:
> > On Sun, Apr 26, 2026 at 06:39:12PM +0200, Willy Tarreau wrote:
> > > With the increase of automated reports, the security team is dealing
> > > with way more messages than really needed. The reporting process works
> > > well with most teams so there is no need to systematically involve the
> > > security team in reports.
> > > 
> > > Let's suggest to keep it for small lists of recipients, to cover the
> > > risk of lost messages (spam, vacation etc) but to avoid it for larger
> > > teams.
> > > 
> > > Cc: Greg KH <gregkh@linuxfoundation.org>
> > > Cc: Leon Romanovsky <leon@kernel.org>
> > > Signed-off-by: Willy Tarreau <w@1wt.eu>
> > 
> > This is going to cut down on emails to us a bunch, which might be good,
> > or not, as now we'll not have a way to know what's going on overall.
> > But hey, let's try it and see what happens!
> 
> Or maybe we could suggest that first reports from a reporter should
> always Cc the list ? After all, every time we asked to drop the list
> was for senders at their 5th or 10th submission. Maybe we could just
> say that the list members prefer not being repetitively CCed by the
> same submitters to invest more time on newcomers ?

Yes, that might be better, otherwise maintainers are going to get some
pretty foolish reports without the context of knowing how to properly at
least push back on them, like we have gotten good at doing :)

thanks,
greg k-h

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-04-27 15:27     ` Willy Tarreau
@ 2026-04-27 15:35       ` Greg KH
  2026-04-27 16:14         ` Willy Tarreau
  0 siblings, 1 reply; 25+ messages in thread
From: Greg KH @ 2026-04-27 15:35 UTC (permalink / raw)
  To: Willy Tarreau
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel

On Mon, Apr 27, 2026 at 05:27:46PM +0200, Willy Tarreau wrote:
> On Mon, Apr 27, 2026 at 07:48:23AM -0600, Greg KH wrote:
> > On Sun, Apr 26, 2026 at 06:39:13PM +0200, Willy Tarreau wrote:
> > > +In the Linux kernel's threat model, an issue is **not** a security bug, and
> > > +should not be reported to the security list, when triggering it requires the
> > > +reporter to first undermine the system they are attacking.  This includes, but
> > > +is not limited to, behavior that only manifests after the administrator has
> > > +explicitly enabled it (loading a module, setting a sysctl, writing to a debugfs
> > > +knob, or otherwise using an interface documented as privileged or unsafe); bugs
> > > +reachable only through root or CAP_SYS_ADMIN or CAP_NET_ADMIN on a machine the
> > > +actor already fully controls, with no further privilege boundary being crossed;
> > > +prediction of random numbers that only works in a totally silent environment
> > > +(such as IP ID, TCP ports or sequence numbers that can only be guessed in a
> > > +lab), issues that appear only in debug, lockdep, KASAN, fault-injection,
> > > +CONFIG_NOMMU, or other developer-oriented kernel builds that are not intended
> > > +for production use; problems seen only under development simulators, emulators,
> > > +or fuzzing harnesses that present hardware or input states which cannot occur
> > > +on real systems; bugs that require modified or emulated hardware; missing
> > > +hardening or defence-in-depth suggestions with no demonstrable exploit path
> > > +(including local ASLR bypass); mounting file systems that would be fixed or
> > > +rejected by fsck; and bugs in out-of-tree modules or vendor forks, which should
> > > +be reported to the relevant vendor.  Functional and performance regressions,
> > > +and disagreements with documented kernel policy (for example, "root can load
> > > +modules"), are likewise ordinary bugs or feature requests rather than security
> > > +issues, and should be reported via the usual channels.
> > 
> > This is a great list to start with, but perhaps we should put it in list
> > form so that it's easier to read?
> 
> In fact that's what I tried first and it was super long with many short
> lines, making it possibly worse. But maybe aggregating several short
> entries on a line by similarities could work, I can give it a try.
> 
> > Also, I can see this turning into a separate document eventually as
> > different subsystems should have a chance to weigh in on what they
> > consider the threat model to be
> 
> My fear if we redirect to other files is that it won't be read again.
> However, we could possibly suggest to always look for the subsystem's
> specific rules in this subsystem's doc, leaving enough freedom to
> maintainers to reject more things.

AI tools are good at following links, so I wouldn't worry about that.
We can point at other files, as this list is going to get long over
time, which is a good thing.

> > (like what the IB subsystem does which I
> > don't think you listed above, or the USB subsystem.)
> 
> Indeed I didn't list IB (I'm never sure about it, I seem to remember
> we simply trust any peer, is that right?), nor did I make specific
> mentions for USB which is implicitly covered by "hardware emulation
> or modification".

Ah, but USB does cover "some" modification of devices, so this is going
to be something that is good to document over time, if for no other
reason to keep these scanning tools in check from hallucinating crazy
situations that are obviously not a valid thing we care about.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 1/3] Documentation: security-bugs: do not systematically Cc the security team
  2026-04-27 15:33       ` Greg KH
@ 2026-04-27 16:09         ` Willy Tarreau
  0 siblings, 0 replies; 25+ messages in thread
From: Willy Tarreau @ 2026-04-27 16:09 UTC (permalink / raw)
  To: Greg KH
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel

On Mon, Apr 27, 2026 at 09:33:12AM -0600, Greg KH wrote:
> On Mon, Apr 27, 2026 at 05:24:06PM +0200, Willy Tarreau wrote:
> > On Mon, Apr 27, 2026 at 07:49:08AM -0600, Greg KH wrote:
> > > On Sun, Apr 26, 2026 at 06:39:12PM +0200, Willy Tarreau wrote:
> > > > With the increase of automated reports, the security team is dealing
> > > > with way more messages than really needed. The reporting process works
> > > > well with most teams so there is no need to systematically involve the
> > > > security team in reports.
> > > > 
> > > > Let's suggest to keep it for small lists of recipients, to cover the
> > > > risk of lost messages (spam, vacation etc) but to avoid it for larger
> > > > teams.
> > > > 
> > > > Cc: Greg KH <gregkh@linuxfoundation.org>
> > > > Cc: Leon Romanovsky <leon@kernel.org>
> > > > Signed-off-by: Willy Tarreau <w@1wt.eu>
> > > 
> > > This is going to cut down on emails to us a bunch, which might be good,
> > > or not, as now we'll not have a way to know what's going on overall.
> > > But hey, let's try it and see what happens!
> > 
> > Or maybe we could suggest that first reports from a reporter should
> > always Cc the list ? After all, every time we asked to drop the list
> > was for senders at their 5th or 10th submission. Maybe we could just
> > say that the list members prefer not being repetitively CCed by the
> > same submitters to invest more time on newcomers ?
> 
> Yes, that might be better, otherwise maintainers are going to get some
> pretty foolish reports without the context of knowing how to properly at
> least push back on them, like we have gotten good at doing :)

Yes, and more importantly, we know how to react, while some maintainers
receiving their first report get stressed. Let me try to rework it.

Thanks!
Willy

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-04-27 15:35       ` Greg KH
@ 2026-04-27 16:14         ` Willy Tarreau
  2026-04-28 21:13           ` Greg KH
  0 siblings, 1 reply; 25+ messages in thread
From: Willy Tarreau @ 2026-04-27 16:14 UTC (permalink / raw)
  To: Greg KH
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel

On Mon, Apr 27, 2026 at 09:35:04AM -0600, Greg KH wrote:
> On Mon, Apr 27, 2026 at 05:27:46PM +0200, Willy Tarreau wrote:
> > On Mon, Apr 27, 2026 at 07:48:23AM -0600, Greg KH wrote:
> > > On Sun, Apr 26, 2026 at 06:39:13PM +0200, Willy Tarreau wrote:
> > > > +In the Linux kernel's threat model, an issue is **not** a security bug, and
> > > > +should not be reported to the security list, when triggering it requires the
> > > > +reporter to first undermine the system they are attacking.  This includes, but
> > > > +is not limited to, behavior that only manifests after the administrator has
> > > > +explicitly enabled it (loading a module, setting a sysctl, writing to a debugfs
> > > > +knob, or otherwise using an interface documented as privileged or unsafe); bugs
> > > > +reachable only through root or CAP_SYS_ADMIN or CAP_NET_ADMIN on a machine the
> > > > +actor already fully controls, with no further privilege boundary being crossed;
> > > > +prediction of random numbers that only works in a totally silent environment
> > > > +(such as IP ID, TCP ports or sequence numbers that can only be guessed in a
> > > > +lab), issues that appear only in debug, lockdep, KASAN, fault-injection,
> > > > +CONFIG_NOMMU, or other developer-oriented kernel builds that are not intended
> > > > +for production use; problems seen only under development simulators, emulators,
> > > > +or fuzzing harnesses that present hardware or input states which cannot occur
> > > > +on real systems; bugs that require modified or emulated hardware; missing
> > > > +hardening or defence-in-depth suggestions with no demonstrable exploit path
> > > > +(including local ASLR bypass); mounting file systems that would be fixed or
> > > > +rejected by fsck; and bugs in out-of-tree modules or vendor forks, which should
> > > > +be reported to the relevant vendor.  Functional and performance regressions,
> > > > +and disagreements with documented kernel policy (for example, "root can load
> > > > +modules"), are likewise ordinary bugs or feature requests rather than security
> > > > +issues, and should be reported via the usual channels.
> > > 
> > > This is a great list to start with, but perhaps we should put it in list
> > > form so that it's easier to read?
> > 
> > In fact that's what I tried first and it was super long with many short
> > lines, making it possibly worse. But maybe aggregating several short
> > entries on a line by similarities could work, I can give it a try.
> > 
> > > Also, I can see this turning into a separate document eventually as
> > > different subsystems should have a chance to weigh in on what they
> > > consider the threat model to be
> > 
> > My fear if we redirect to other files is that it won't be read again.
> > However, we could possibly suggest to always look for the subsystem's
> > specific rules in this subsystem's doc, leaving enough freedom to
> > maintainers to reject more things.
> 
> AI tools are good at following links, so I wouldn't worry about that.

Yes but let's not forget the minority of humble humans still sending
honest reports ;-)

> We can point at other files, as this list is going to get long over
> time, which is a good thing.

Sure. I'm just unsure where this could be enumerated, as it's likely
that there would be just one or two lines max per subsystem for the
majority of them. Or we could have a totally separate file, "threat
model", that goes into great lengths detailing all this with sections
per category or subsystem when they start to grow maybe, and refer only
to that one from security-bugs ?

> > > (like what the IB subsystem does which I
> > > don't think you listed above, or the USB subsystem.)
> > 
> > Indeed I didn't list IB (I'm never sure about it, I seem to remember
> > we simply trust any peer, is that right?), nor did I make specific
> > mentions for USB which is implicitly covered by "hardware emulation
> > or modification".
> 
> Ah, but USB does cover "some" modification of devices, so this is going
> to be something that is good to document over time, if for no other
> reason to keep these scanning tools in check from hallucinating crazy
> situations that are obviously not a valid thing we care about.

OK but does this mean you still want to get these reports in the end ?

Willy

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-04-27 16:14         ` Willy Tarreau
@ 2026-04-28 21:13           ` Greg KH
  2026-04-29  3:09             ` Willy Tarreau
  2026-05-02  5:20             ` Demi Marie Obenour
  0 siblings, 2 replies; 25+ messages in thread
From: Greg KH @ 2026-04-28 21:13 UTC (permalink / raw)
  To: Willy Tarreau
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel

On Mon, Apr 27, 2026 at 06:14:15PM +0200, Willy Tarreau wrote:
> On Mon, Apr 27, 2026 at 09:35:04AM -0600, Greg KH wrote:
> > On Mon, Apr 27, 2026 at 05:27:46PM +0200, Willy Tarreau wrote:
> > > On Mon, Apr 27, 2026 at 07:48:23AM -0600, Greg KH wrote:
> > > > On Sun, Apr 26, 2026 at 06:39:13PM +0200, Willy Tarreau wrote:
> > > > > +In the Linux kernel's threat model, an issue is **not** a security bug, and
> > > > > +should not be reported to the security list, when triggering it requires the
> > > > > +reporter to first undermine the system they are attacking.  This includes, but
> > > > > +is not limited to, behavior that only manifests after the administrator has
> > > > > +explicitly enabled it (loading a module, setting a sysctl, writing to a debugfs
> > > > > +knob, or otherwise using an interface documented as privileged or unsafe); bugs
> > > > > +reachable only through root or CAP_SYS_ADMIN or CAP_NET_ADMIN on a machine the
> > > > > +actor already fully controls, with no further privilege boundary being crossed;
> > > > > +prediction of random numbers that only works in a totally silent environment
> > > > > +(such as IP ID, TCP ports or sequence numbers that can only be guessed in a
> > > > > +lab), issues that appear only in debug, lockdep, KASAN, fault-injection,
> > > > > +CONFIG_NOMMU, or other developer-oriented kernel builds that are not intended
> > > > > +for production use; problems seen only under development simulators, emulators,
> > > > > +or fuzzing harnesses that present hardware or input states which cannot occur
> > > > > +on real systems; bugs that require modified or emulated hardware; missing
> > > > > +hardening or defence-in-depth suggestions with no demonstrable exploit path
> > > > > +(including local ASLR bypass); mounting file systems that would be fixed or
> > > > > +rejected by fsck; and bugs in out-of-tree modules or vendor forks, which should
> > > > > +be reported to the relevant vendor.  Functional and performance regressions,
> > > > > +and disagreements with documented kernel policy (for example, "root can load
> > > > > +modules"), are likewise ordinary bugs or feature requests rather than security
> > > > > +issues, and should be reported via the usual channels.
> > > > 
> > > > This is a great list to start with, but perhaps we should put it in list
> > > > form so that it's easier to read?
> > > 
> > > In fact that's what I tried first and it was super long with many short
> > > lines, making it possibly worse. But maybe aggregating several short
> > > entries on a line by similarities could work, I can give it a try.
> > > 
> > > > Also, I can see this turning into a separate document eventually as
> > > > different subsystems should have a chance to weigh in on what they
> > > > consider the threat model to be
> > > 
> > > My fear if we redirect to other files is that it won't be read again.
> > > However, we could possibly suggest to always look for the subsystem's
> > > specific rules in this subsystem's doc, leaving enough freedom to
> > > maintainers to reject more things.
> > 
> > AI tools are good at following links, so I wouldn't worry about that.
> 
> Yes but let's not forget the minority of humble humans still sending
> honest reports ;-)
> 
> > We can point at other files, as this list is going to get long over
> > time, which is a good thing.
> 
> Sure. I'm just unsure where this could be enumerated, as it's likely
> that there would be just one or two lines max per subsystem for the
> majority of them. Or we could have a totally separate file, "threat
> model", that goes into great lengths detailing all this with sections
> per category or subsystem when they start to grow maybe, and refer only
> to that one from security-bugs ?

I think a separate file is good, I know I need to write up what the USB
model is, and it's different from PCI, and different from other
subsystems.  All should probably be documented eventually.

> > > > (like what the IB subsystem does which I
> > > > don't think you listed above, or the USB subsystem.)
> > > 
> > > Indeed I didn't list IB (I'm never sure about it, I seem to remember
> > > we simply trust any peer, is that right?), nor did I make specific
> > > mentions for USB which is implicitly covered by "hardware emulation
> > > or modification".
> > 
> > Ah, but USB does cover "some" modification of devices, so this is going
> > to be something that is good to document over time, if for no other
> > reason to keep these scanning tools in check from hallucinating crazy
> > situations that are obviously not a valid thing we care about.
> 
> OK but does this mean you still want to get these reports in the end ?

I want a patch if a user cares about that threat-model (as Android does
but no one else) as it's up to the user groups that want to change the
default kernel's behavior like this to actually submit patches to do so.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-04-28 21:13           ` Greg KH
@ 2026-04-29  3:09             ` Willy Tarreau
  2026-04-29  6:10               ` Greg KH
  2026-05-02  5:20             ` Demi Marie Obenour
  1 sibling, 1 reply; 25+ messages in thread
From: Willy Tarreau @ 2026-04-29  3:09 UTC (permalink / raw)
  To: Greg KH
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel

On Tue, Apr 28, 2026 at 03:13:01PM -0600, Greg KH wrote:
> > > We can point at other files, as this list is going to get long over
> > > time, which is a good thing.
> > 
> > Sure. I'm just unsure where this could be enumerated, as it's likely
> > that there would be just one or two lines max per subsystem for the
> > majority of them. Or we could have a totally separate file, "threat
> > model", that goes into great lengths detailing all this with sections
> > per category or subsystem when they start to grow maybe, and refer only
> > to that one from security-bugs ?
> 
> I think a separate file is good, I know I need to write up what the USB
> model is, and it's different from PCI, and different from other
> subsystems.  All should probably be documented eventually.

Would you be interested in me trying to initiate a new "threat-model.rst"
file that tries to unroll the points mentioned in the list ? I'm concerned
that without having many details initially, it could look a bit odd,
because the list we currently have would be more suitable for an "other"
section.

> > > > > (like what the IB subsystem does which I
> > > > > don't think you listed above, or the USB subsystem.)
> > > > 
> > > > Indeed I didn't list IB (I'm never sure about it, I seem to remember
> > > > we simply trust any peer, is that right?), nor did I make specific
> > > > mentions for USB which is implicitly covered by "hardware emulation
> > > > or modification".
> > > 
> > > Ah, but USB does cover "some" modification of devices, so this is going
> > > to be something that is good to document over time, if for no other
> > > reason to keep these scanning tools in check from hallucinating crazy
> > > situations that are obviously not a valid thing we care about.
> > 
> > OK but does this mean you still want to get these reports in the end ?
> 
> I want a patch if a user cares about that threat-model (as Android does
> but no one else) as it's up to the user groups that want to change the
> default kernel's behavior like this to actually submit patches to do so.

Yes, OK, but we want them in any case. That's the idea I tried to convey
in the proposed doc (maybe not well enough), basically "this is a bug and
it is worth reporting, but no need to involve s@k.o for this".

thanks,
Willy

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-04-29  3:09             ` Willy Tarreau
@ 2026-04-29  6:10               ` Greg KH
  2026-05-01 13:57                 ` Willy Tarreau
  0 siblings, 1 reply; 25+ messages in thread
From: Greg KH @ 2026-04-29  6:10 UTC (permalink / raw)
  To: Willy Tarreau
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel

On Wed, Apr 29, 2026 at 05:09:43AM +0200, Willy Tarreau wrote:
> On Tue, Apr 28, 2026 at 03:13:01PM -0600, Greg KH wrote:
> > > > We can point at other files, as this list is going to get long over
> > > > time, which is a good thing.
> > > 
> > > Sure. I'm just unsure where this could be enumerated, as it's likely
> > > that there would be just one or two lines max per subsystem for the
> > > majority of them. Or we could have a totally separate file, "threat
> > > model", that goes into great lengths detailing all this with sections
> > > per category or subsystem when they start to grow maybe, and refer only
> > > to that one from security-bugs ?
> > 
> > I think a separate file is good, I know I need to write up what the USB
> > model is, and it's different from PCI, and different from other
> > subsystems.  All should probably be documented eventually.
> 
> Would you be interested in me trying to initiate a new "threat-model.rst"
> file that tries to unroll the points mentioned in the list ? I'm concerned
> that without having many details initially, it could look a bit odd,
> because the list we currently have would be more suitable for an "other"
> section.

Sure, a small file to start with would be good for people to work off
of and add to.

> > > > > > (like what the IB subsystem does which I
> > > > > > don't think you listed above, or the USB subsystem.)
> > > > > 
> > > > > Indeed I didn't list IB (I'm never sure about it, I seem to remember
> > > > > we simply trust any peer, is that right?), nor did I make specific
> > > > > mentions for USB which is implicitly covered by "hardware emulation
> > > > > or modification".
> > > > 
> > > > Ah, but USB does cover "some" modification of devices, so this is going
> > > > to be something that is good to document over time, if for no other
> > > > reason to keep these scanning tools in check from hallucinating crazy
> > > > situations that are obviously not a valid thing we care about.
> > > 
> > > OK but does this mean you still want to get these reports in the end ?
> > 
> > I want a patch if a user cares about that threat-model (as Android does
> > but no one else) as it's up to the user groups that want to change the
> > default kernel's behavior like this to actually submit patches to do so.
> 
> Yes, OK, but we want them in any case. That's the idea I tried to convey
> in the proposed doc (maybe not well enough), basically "this is a bug and
> it is worth reporting, but no need to involve s@k.o for this".

Yes, you conveyed that, sorry if I insinuated otherwise.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-04-29  6:10               ` Greg KH
@ 2026-05-01 13:57                 ` Willy Tarreau
  0 siblings, 0 replies; 25+ messages in thread
From: Willy Tarreau @ 2026-05-01 13:57 UTC (permalink / raw)
  To: Greg KH
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel

Hi Greg,

On Wed, Apr 29, 2026 at 12:10:51AM -0600, Greg KH wrote:
> On Wed, Apr 29, 2026 at 05:09:43AM +0200, Willy Tarreau wrote:
> > On Tue, Apr 28, 2026 at 03:13:01PM -0600, Greg KH wrote:
> > > > > We can point at other files, as this list is going to get long over
> > > > > time, which is a good thing.
> > > > 
> > > > Sure. I'm just unsure where this could be enumerated, as it's likely
> > > > that there would be just one or two lines max per subsystem for the
> > > > majority of them. Or we could have a totally separate file, "threat
> > > > model", that goes into great lengths detailing all this with sections
> > > > per category or subsystem when they start to grow maybe, and refer only
> > > > to that one from security-bugs ?
> > > 
> > > I think a separate file is good, I know I need to write up what the USB
> > > model is, and it's different from PCI, and different from other
> > > subsystems.  All should probably be documented eventually.
> > 
> > Would you be interested in me trying to initiate a new "threat-model.rst"
> > file that tries to unroll the points mentioned in the list ? I'm concerned
> > that without having many details initially, it could look a bit odd,
> > because the list we currently have would be more suitable for an "other"
> > section.
> 
> Sure, a small file to start with would be good for people to work off
> of and add to.

I'm appending below what I came up with today. It's not a patch, just
a dump of what I've been typing for 3 hours and should go into
process/threat-model.rst I think. If you think it constitutes a good
starting point, then I can make a patch to add it, and update my other
patch to reference it by basically saying "what is excluded from the
kernel threat model in threat-model.rst does not have to be reported
as a security issue".

cheers,
Willy
---

.. _threatmodel:

The Linux Kernel threat model
=============================

There are a lot of assumptions regarding what the kernel protects against and
what it does not protect against. These assumptions tend to cause confusion for
bug reports (security-related ones vs non-security ones), and can complicate
security enforcement when the responsibility for some boundaries is not clearly
divided between the kernel, distros, administrators and users.

This document tries to clarify the responsibilities of the kernel in this
domain.

The kernel's responsibilities
-----------------------------

The kernel abstracts access to local hardware resources and to remote systems
in a way that allows multiple local users to get a fair share of the available
resources granted to them, and, when the underlying hardware permits, to assign
a level of confidentiality to their communications and to the data they are
processing or storing.

The kernel assumes that the underlying hardware behaves according to its
specifications. This includes the integrity of the CPU's instruction set, the
transparency of the branch prediction unit and the cache units, the consistency
of the Memory Management Unit (MMU), the isolation of DMA-capable peripherals
(e.g., via IOMMU), state transitions in controllers, ranges of values read from
registers, adherence to documented hardware limitations, etc.

When hardware fails to maintain its specified isolation (e.g., CPU bugs,
side-channels, hardware response to unexpected inputs), the kernel will usually
attempt to implement reasonable mitigations. These are best-effort measures
intended to reduce the attack surface or elevate the cost of an attack within
the limits of the hardware's facilities; they do not constitute a
kernel-provided safety guarantee.

Users always perform their activities under the authority of an administrator
who is able to grant or deny various types of permissions that may affect how
users benefit from available resources, or the level of confidentiality of
their activities. Administrators may also delegate all or part of their own
permissions to some users, notably (but not exclusively) via capabilities. All
this is done through configuration (sysctls, file-system permissions, etc.).
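As a small aside (an illustrative sketch, not part of the proposed text), the
file-system-permission side of this configuration is just mode bits; the
temporary file and the 0600 mode below are assumptions chosen for illustration:

```python
import os
import stat
import tempfile

# Sketch: a user restricting stored data to themselves with plain mode
# bits, one of the configuration mechanisms mentioned above.
fd, path = tempfile.mkstemp()
os.close(fd)
try:
    os.chmod(path, 0o600)  # owner may read/write; group/other may not
    mode = stat.S_IMODE(os.stat(path).st_mode)
    # No permission bit remains for group or other users, so the kernel
    # denies open() on this file to any other unprivileged uid.
    assert mode == 0o600
    assert mode & (stat.S_IRWXG | stat.S_IRWXO) == 0
finally:
    os.remove(path)
```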

The Linux Kernel applies a certain collection of default settings that match
its threat model. Distros have their own threat model and will come with their
own configuration presets, which the administrator may have to adjust to better
suit their expectations (relaxing or restricting them).

By default, the Linux Kernel guarantees the following protections when running
on common processors featuring privilege levels and memory management units:

- user-based isolation: an unprivileged user may restrict access to their own
  data from other unprivileged users running on the same system. This includes:

  * stored data, via file system permissions
  * in-memory data (pages are not accessible by default to other users)
  * process activity (ptrace is not permitted to other users)
  * inter-process communication (other users may not observe data exchanged via
    UNIX domain sockets or other IPC mechanisms)
  * network communications within the same or with other systems

- capability-based protection:

  * users not having the CAP_SYS_ADMIN capability may not alter the kernel's
    configuration, memory, or state, change other users' view of the file
    system layout, grant any user capabilities they do not have, nor affect the
    system's availability (shutdown, reboot, panic, hang, or making the system
    unresponsive via unbounded resource exhaustion).
  * users not having the CAP_NET_ADMIN capability may not alter the network
    configuration, nor intercept or spoof network communications from other
    users or systems.
  * users not having CAP_SYS_PTRACE may not observe the activities of other
    users' processes.
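
To make the capability boundaries above concrete, here is an illustrative
sketch (not part of the proposed text): a process's effective capability set is
exposed as a hexadecimal bit mask on the CapEff line of /proc/<pid>/status, one
bit per capability, numbered as in linux/capability.h. The sample mask values
are assumptions for illustration:

```python
# Capability bit numbers from include/uapi/linux/capability.h.
CAP_NET_ADMIN = 12
CAP_SYS_PTRACE = 19
CAP_SYS_ADMIN = 21

def has_cap(cap_eff_hex, cap):
    """Check whether capability number `cap` is set in a CapEff hex mask."""
    return (int(cap_eff_hex, 16) >> cap) & 1 == 1

# A typical unprivileged process carries an empty effective set...
assert not has_cap("0000000000000000", CAP_SYS_ADMIN)
# ...while root on a recent kernel (capabilities 0-40) has them all.
assert has_cap("000001ffffffffff", CAP_NET_ADMIN)
assert has_cap("000001ffffffffff", CAP_SYS_PTRACE)

# A process granted only CAP_NET_ADMIN (e.g. via file capabilities)
# still fails the CAP_SYS_ADMIN checks described above.
net_only = format(1 << CAP_NET_ADMIN, "016x")
assert has_cap(net_only, CAP_NET_ADMIN)
assert not has_cap(net_only, CAP_SYS_ADMIN)
```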

When CONFIG_USER_NS is set, the kernel also permits unprivileged users to
create their own user namespace in which they have all capabilities, but with a
number of restrictions (they may not perform actions that have impacts on the
initial user namespace, such as changing time, loading modules or mounting
block devices). Please refer to user_namespaces(7) for more details; the full
possibilities of user namespaces are not covered in this document.
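
As an aside (an illustrative sketch, not from the proposed text): the id
remapping underlying these restrictions is visible in /proc/<pid>/uid_map,
where each line reads "inside outside count". The single-entry map for a host
uid of 1000 below is an assumed, typical example:

```python
def translate_uid(uid_map, ns_uid):
    """Map a uid seen inside a user namespace to the host uid.

    `uid_map` holds lines in /proc/<pid>/uid_map format:
    "inside outside count". Returns None for unmapped ids (which the
    kernel presents as the overflow uid, usually 65534).
    """
    for line in uid_map.strip().splitlines():
        inside, outside, count = map(int, line.split())
        if inside <= ns_uid < inside + count:
            return outside + (ns_uid - inside)
    return None

# "root" inside the namespace is really the unprivileged host uid 1000.
assert translate_uid("0 1000 1", 0) == 1000
# Any other id is unmapped by this single-entry map.
assert translate_uid("0 1000 1", 1) is None
# A wider map, as typically set up through newuidmap(1).
assert translate_uid("0 100000 65536", 1000) == 101000
```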

The kernel also offers many troubleshooting and debugging facilities, which
can constitute attack vectors when placed in the wrong hands. While some of
them are designed to be accessible to regular local users at low risk (e.g.
kernel logs via /proc/kmsg), others expose enough information to represent a
risk in most environments, so the decision to expose them falls under the
administrator's responsibility (perf events, traces), and still others are not
designed to be accessed by non-privileged users at all (e.g. debugfs). Access
to these facilities by a user who has been explicitly granted permission by an
administrator does not constitute a security breach.

Bugs that make it possible to violate the principles above constitute
security breaches. However, bugs that permit one violation only after
another one has already been achieved are only weaknesses. The kernel
applies a number of self-protection measures whose purpose is to avoid
crossing a security boundary when certain classes of bugs are present, but
a failure of these extra protections does not constitute a vulnerability on
its own.

What does not constitute a security bug
---------------------------------------

In the Linux kernel's threat model, the following classes of problems are
**NOT** considered Linux kernel security bugs. However, when it is believed
that the kernel could do better, they should still be reported so that they
can be reviewed and fixed where reasonably possible, but they will be
handled as any regular bug:

- configuration:
  
  * outdated kernels and particularly end-of-life branches are out of the scope
    of the kernel's threat model: administrators are responsible for keeping
    their system up to date. For a bug to qualify as a security bug, it must be
    demonstrated that it affects actively maintained versions.

  * build-level: changes to the kernel configuration that are explicitly
    documented as lowering the security level (e.g. CONFIG_NOMMU), or targeted
    at developers only.

  * OS-level: changes to command line parameters, sysctls, file system
    permissions, user capabilities, or exposure of privileged interfaces
    that explicitly increase exposure, by either offering non-default access
    to unprivileged users or reducing the kernel's ability to enforce some
    protections or mitigations. Example: write access to procfs or debugfs.

  * issues triggered only when using features intended for development or
    debugging (e.g., lockdep, KASAN, fault-injection): these features are known
    to introduce overhead and potential instability and are not intended for
    production use.

  * loading of explicitly insecure/broken/staging modules, and generally
    using any subsystem marked as experimental or not intended for
    production use.

  * running out-of-tree modules or unofficial kernel forks; these should be
    reported to the relevant vendor.

- excess of initial privileges:

  * actions performed by a user already possessing the privileges required to
    perform that action or modify that state (e.g. CAP_SYS_ADMIN, CAP_NET_ADMIN,
    CAP_SYS_RAWIO, CAP_SYS_MODULE with no further boundary being crossed).

  * actions performed in a user namespace that do not permit anything in the
    initial namespace that was not already permitted to the same user there.

  * anything performed by the root user in the initial namespace (e.g. kernel
    oops when writing to a privileged device).

- out of production use:

  This covers theoretical/probabilistic attacks that rely on laboratory
  conditions with zero system noise, or those requiring an unrealistic number
  of attempts (e.g., billions of trials) that would be detected by standard
  system monitoring long before success, such as:

  * prediction of random numbers that only works in a totally silent
    environment (such as IP ID, TCP ports or sequence numbers that can only be
    guessed in a lab).

  * activity observation and information leaks based on probabilistic
    approaches that are prone to measurement noise and not realistically
    reproducible on a production system.

  * issues that can only be triggered by heavy attacks (e.g. brute force) whose
    impact on the system makes it unlikely or impossible to remain undetected
    before they succeed (e.g. consuming all memory before succeeding).
    
  * problems seen only under development simulators, emulators, or
    combinations that do not exist on real systems at the time of reporting
    (issues involving tens of millions of threads, tens of thousands of
    CPUs, unrealistic CPU frequencies, RAM sizes, disk capacities, or
    network speeds).

  * issues whose reproduction requires hardware modification or emulation,
    including fake USB devices that impersonate another device.

  * issues that can only be triggered at a cost that is orders of magnitude
    higher than the expected benefits (e.g. a fully functional keyboard
    emulator built only to retrieve 7 uninitialized bytes in a structure, or
    a brute-force method involving millions of connection attempts to guess
    a port number).
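
To put the cost argument in numbers, here is a back-of-the-envelope sketch
(the probe rate is a purely hypothetical figure): guessing a uniformly random
16-bit port takes about 2^15 attempts on average, and a 32-bit sequence
number about 2^31, which at a sustained 1000 probes per second means weeks of
highly visible traffic:

```python
def avg_attempts(bits):
    """Average number of attempts to guess a uniformly random n-bit value."""
    return 2 ** bits // 2


PROBES_PER_SEC = 1000  # hypothetical attacker rate

port_tries = avg_attempts(16)  # 2**15 = 32768 attempts on average
seq_tries = avg_attempts(32)   # 2**31 = 2147483648 attempts on average

print("16-bit port: ~%d attempts (~%.0f s at %d/s)"
      % (port_tries, port_tries / PROBES_PER_SEC, PROBES_PER_SEC))
print("32-bit seq:  ~%d attempts (~%.1f days at %d/s)"
      % (seq_tries, seq_tries / PROBES_PER_SEC / 86400, PROBES_PER_SEC))
```

Long before such a campaign succeeds, its traffic volume would show up in any
standard monitoring, which is the point made above.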

- hardening failures:

  * ability to bypass some of the kernel's hardening measures with no
    demonstrable exploit path (e.g. an ASLR bypass, or event timing and
    probing with no observable consequence). These are just weaknesses, not
    vulnerabilities.

  * missing argument checks and failure to report certain errors with no
    immediate consequence.

- random information leaks:

  This concerns leaks of small pieces of data that happen to be present,
  that cannot be chosen by the attacker, or that face access restrictions:

  * structure padding reported by syscalls or other interfaces.

  * identifiers, partial data, or non-terminated strings reported in error
    messages.

  * leaks of kernel memory addresses/pointers, which do not constitute an
    immediately exploitable vector and are not security bugs, though they
    must be reported and fixed.
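
As an illustration of where structure-padding bytes come from, consider a
hypothetical structure (a userspace Python/ctypes sketch mirroring C layout
rules; the struct is invented, not from the kernel): alignment inserts gaps
between members, and code that copies the whole structure out without zeroing
it first ships whatever stale bytes sit in those gaps:

```python
import ctypes


class Sample(ctypes.Structure):
    # hypothetical struct { char tag; int value; } -- not a kernel struct
    _fields_ = [
        ("tag", ctypes.c_char),   # 1 byte
        ("value", ctypes.c_int),  # 4 bytes, aligned to 4
    ]


# Bytes actually carrying data vs. bytes occupied by the structure:
used = ctypes.sizeof(ctypes.c_char) + ctypes.sizeof(ctypes.c_int)  # 5
padding = ctypes.sizeof(Sample) - used

print("sizeof(Sample) =", ctypes.sizeof(Sample))  # 8 on common ABIs
print("padding bytes  =", padding)                # 3 bytes of potential leak
```

Those padding bytes are exactly the kind of "small data parts that happen to
be there" described above: present, uncontrollable by the attacker, and
still worth zeroing before crossing the kernel/user boundary.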

- crafted file system images:

  * bugs triggered by mounting a corrupted or maliciously crafted file system
    image are generally not security bugs, as the kernel assumes the underlying
    storage media is under the administrator's control, unless the filesystem
    driver is specifically documented as being hardened against untrusted media.

  * issues that are resolved, mitigated, or detected by running a filesystem
    consistency check (fsck) on the image prior to mounting.
  
- physical access:

  Issues that require physical access to the machine, hardware modification, or
  the use of specialized hardware (e.g., logic analyzers, DMA-attack tools over
  PCI-E/Thunderbolt) are out of scope unless the system is explicitly
  configured with technologies meant to defend against such attacks
  (e.g. IOMMU).

- functional and performance regressions:

  Any issue that can be mitigated by setting proper permissions and limits
  does not qualify as a security bug.
---

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-04-28 21:13           ` Greg KH
  2026-04-29  3:09             ` Willy Tarreau
@ 2026-05-02  5:20             ` Demi Marie Obenour
  2026-05-02  5:35               ` Willy Tarreau
  1 sibling, 1 reply; 25+ messages in thread
From: Demi Marie Obenour @ 2026-05-02  5:20 UTC (permalink / raw)
  To: Greg KH, Willy Tarreau
  Cc: leon, security, Jonathan Corbet, skhan, workflows, linux-doc,
	linux-kernel, Qubes Developer Mailing List


On 4/28/26 17:13, Greg KH wrote:
> On Mon, Apr 27, 2026 at 06:14:15PM +0200, Willy Tarreau wrote:
>> On Mon, Apr 27, 2026 at 09:35:04AM -0600, Greg KH wrote:
>>> On Mon, Apr 27, 2026 at 05:27:46PM +0200, Willy Tarreau wrote:
>>>> On Mon, Apr 27, 2026 at 07:48:23AM -0600, Greg KH wrote:
>>>>> On Sun, Apr 26, 2026 at 06:39:13PM +0200, Willy Tarreau wrote:
>>>>>> +In the Linux kernel's threat model, an issue is **not** a security bug, and
>>>>>> +should not be reported to the security list, when triggering it requires the
>>>>>> +reporter to first undermine the system they are attacking.  This includes, but
>>>>>> +is not limited to, behavior that only manifests after the administrator has
>>>>>> +explicitly enabled it (loading a module, setting a sysctl, writing to a debugfs
>>>>>> +knob, or otherwise using an interface documented as privileged or unsafe); bugs
>>>>>> +reachable only through root or CAP_SYS_ADMIN or CAP_NET_ADMIN on a machine the
>>>>>> +actor already fully controls, with no further privilege boundary being crossed;
>>>>>> +prediction of random numbers that only works in a totally silent environment
>>>>>> +(such as IP ID, TCP ports or sequence numbers that can only be guessed in a
>>>>>> +lab), issues that appear only in debug, lockdep, KASAN, fault-injection,
>>>>>> +CONFIG_NOMMU, or other developer-oriented kernel builds that are not intended
>>>>>> +for production use; problems seen only under development simulators, emulators,
>>>>>> +or fuzzing harnesses that present hardware or input states which cannot occur
>>>>>> +on real systems; bugs that require modified or emulated hardware; missing
>>>>>> +hardening or defence-in-depth suggestions with no demonstrable exploit path
>>>>>> +(including local ASLR bypass); mounting file systems that would be fixed or
>>>>>> +rejected by fsck; and bugs in out-of-tree modules or vendor forks, which should
>>>>>> +be reported to the relevant vendor.  Functional and performance regressions,
>>>>>> +and disagreements with documented kernel policy (for example, "root can load
>>>>>> +modules"), are likewise ordinary bugs or feature requests rather than security
>>>>>> +issues, and should be reported via the usual channels.
>>>>>
>>>>> This is a great list to start with, but perhaps we should put it in list
>>>>> form so that it's easier to read?
>>>>
>>>> In fact that's what I tried first and it was super long with many short
>>>> lines, making it possibly worse. But maybe aggregating several short
>>>> entries on a line by similarities could work, I can give it a try.
>>>>
>>>>> Also, I can see this turning into a separate document eventually as
>>>>> different subsystems should have a chance to weigh in on what they
>>>>> consider the threat model to be
>>>>
>>>> My fear if we redirect to other files is that it won't be read again.
>>>> However, we could possibly suggest to always look for the subsystem's
>>>> specific rules in this subsystem's doc, leaving enough freedom to
>>>> maintainers to reject more things.
>>>
>>> AI tools are good at following links, so I wouldn't worry about that.
>>
>> Yes but let's not forget the minority of humble humans still sending
>> honest reports ;-)
>>
>>> We can point at other files, as this list is going to get long over
>>> time, which is a good thing.
>>
>> Sure. I'm just unsure where this could be enumerated, as it's likely
>> that there would be just one or two lines max per subsystem for the
>> majority of them. Or we could have a totally separate file, "threat
>> model", that goes into great lengths detailing all this with sections
>> per category or subsystem when they start to grow maybe, and refer only
>> to that one from security-bugs ?
> 
> I think a separate file is good, I know I need to write up what the USB
> model is, and it's different from PCI, and different from other
> subsystems.  All should probably be documented eventually.
> 
>>>>> (like what the IB subsystem does which I
>>>>> don't think you listed above, or the USB subsystem.)
>>>>
>>>> Indeed I didn't list IB (I'm never sure about it, I seem to remember
>>>> we simply trust any peer, is that right?), nor did I make specific
>>>> mentions for USB which is implicitly covered by "hardware emulation
>>>> or modification".
>>>
>>> Ah, but USB does cover "some" modification of devices, so this is going
>>> to be something that is good to document over time, if for no other
>>> reason to keep these scanning tools in check from hallucinating crazy
>>> situations that are obviously not a valid thing we care about.
>>
>> OK but does this mean you still want to get these reports in the end ?
> 
> I want a patch if a user cares about that threat-model (as Android does
> but no one else) as it's up to the user groups that want to change the
> default kernel's behavior like this to actually submit patches to do so.
FYI, I don't think this is limited to Android.  Chrome OS definitely
cares about malicious USB devices, and the whole purpose of USBGuard is
to prevent a USB device from being able to compromise the system unless
authorized.  I believe Qubes OS also cares, as it supports USB device
assignment to virtual machines.  CCing qubes-devel for confirmation.

What should that patch look like?  Could there be a way for these user
groups to be informed of vulnerabilities in the USB subsystem, so that
they can take responsibility for fixing them before they become public?

It does make sense for those who care about the security of a subsystem
to be responsible for vulnerabilities in that system, but right now
I'm not sure how one would offer to take up that responsibility.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)


* Re: [PATCH 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-02  5:20             ` Demi Marie Obenour
@ 2026-05-02  5:35               ` Willy Tarreau
  2026-05-02  5:51                 ` Demi Marie Obenour
  0 siblings, 1 reply; 25+ messages in thread
From: Willy Tarreau @ 2026-05-02  5:35 UTC (permalink / raw)
  To: Demi Marie Obenour
  Cc: Greg KH, leon, security, Jonathan Corbet, skhan, workflows,
	linux-doc, linux-kernel, Qubes Developer Mailing List

Hi Demi Marie,

On Sat, May 02, 2026 at 01:20:10AM -0400, Demi Marie Obenour wrote:
> >>> Ah, but USB does cover "some" modification of devices, so this is going
> >>> to be something that is good to document over time, if for no other
> >>> reason to keep these scanning tools in check from hallucinating crazy
> >>> situations that are obviously not a valid thing we care about.
> >>
> >> OK but does this mean you still want to get these reports in the end ?
> > 
> > I want a patch if a user cares about that threat-model (as Android does
> > but no one else) as it's up to the user groups that want to change the
> > default kernel's behavior like this to actually submit patches to do so.
> FYI, I don't think this is limited to Android.  Chrome OS definitely
> cares about malicious USB devices, and the whole purpose of USBGuard is
> to prevent a USB device from being able to compromise the system unless
> authorized.  I believe Qubes OS also cares, as it supports USB device
> assignment to virtual machines.  CCing qubes-devel for confirmation.
> 
> What should that patch look like?  Could there be a way for these user
> groups to be informed of vulnerabilities in the USB subsystem, so that
> they can take responsibility for fixing them before they become public?

I've posted a proposal elsewhere in the same thread:

   https://lore.kernel.org/lkml/afSxSX8RK0Z4kkOI@1wt.eu/

> It does make sense for those who care about the security of a subsystem
> to be responsible for vulnerabilities in that system, but right now
> I'm not sure how one would offer to take up that responsibility.

I think that at least some subsystems will want to add their own
restrictions based on the bug reports they keep receiving, and I hope
it can help distros figure out where there's a gap between what is
promised to users and what the kernel promises, a gap that needs to be
filled by userland verification tools, for example.

Willy


* Re: [PATCH 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-02  5:35               ` Willy Tarreau
@ 2026-05-02  5:51                 ` Demi Marie Obenour
  2026-05-02  6:07                   ` Willy Tarreau
  0 siblings, 1 reply; 25+ messages in thread
From: Demi Marie Obenour @ 2026-05-02  5:51 UTC (permalink / raw)
  To: Willy Tarreau
  Cc: Greg KH, leon, security, Jonathan Corbet, skhan, workflows,
	linux-doc, linux-kernel, Qubes Developer Mailing List


On 5/2/26 01:35, Willy Tarreau wrote:
> Hi Demi Marie,
> 
> On Sat, May 02, 2026 at 01:20:10AM -0400, Demi Marie Obenour wrote:
>>>>> Ah, but USB does cover "some" modification of devices, so this is going
>>>>> to be something that is good to document over time, if for no other
>>>>> reason to keep these scanning tools in check from hallucinating crazy
>>>>> situations that are obviously not a valid thing we care about.
>>>>
>>>> OK but does this mean you still want to get these reports in the end ?
>>>
>>> I want a patch if a user cares about that threat-model (as Android does
>>> but no one else) as it's up to the user groups that want to change the
>>> default kernel's behavior like this to actually submit patches to do so.
>> FYI, I don't think this is limited to Android.  Chrome OS definitely
>> cares about malicious USB devices, and the whole purpose of USBGuard is
>> to prevent a USB device from being able to compromise the system unless
>> authorized.  I believe Qubes OS also cares, as it supports USB device
>> assignment to virtual machines.  CCing qubes-devel for confirmation.
>>
>> What should that patch look like?  Could there be a way for these user
>> groups to be informed of vulnerabilities in the USB subsystem, so that
>> they can take responsibility for fixing them before they become public?
> 
> I've posted a proposal elsewhere in the same thread:
> 
>    https://lore.kernel.org/lkml/afSxSX8RK0Z4kkOI@1wt.eu/

I saw that, but it's still not quite clear what is meant here.
My understanding is that those concerned about malicious USB devices
are generally concerned about _arbitrary_ malicious USB devices.
The one thing a USB device shouldn't be able to spoof is the port
it is plugged into, and userspace tools like USBGuard can use that
information.  But to do that, they have to trust that the device
can't harm the system if it isn't assigned to any drivers.

>> It does make sense for those who care about the security of a subsystem
>> to be responsible for vulnerabilities in that system, but right now
>> I'm not sure how one would offer to take up that responsibility.
> 
> I think that at least some subsystems will want to add their own
> restrictions based on the bug reports they keep receiving, and I hope
> it can help distros figure out where there's a gap between what is
> promised to users and what the kernel promises, a gap that needs to be
> filled by userland verification tools, for example.
> 
> Willy

I think there might be another category, which is where there is a
third party who is much more interested in the security of a subsystem
than its primary maintainers are.  I suspect that Google is said
third party in multiple such cases, especially various USB drivers.
In particular, exploiting the kernel via USB is a common attack
technique used in the wild by tools like Cellebrite.

In these cases, I think it makes sense to funnel vulnerability
reports to the people who actually seriously care about fixing them.
For instance, problems in USB might be funneled to the Chrome OS and
Android security teams.  They will get fixed much more quickly, and
upstream maintainers won't be flooded with reports that don't have
attached patches.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)


* Re: [PATCH 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-02  5:51                 ` Demi Marie Obenour
@ 2026-05-02  6:07                   ` Willy Tarreau
  2026-05-02  6:27                     ` Demi Marie Obenour
  0 siblings, 1 reply; 25+ messages in thread
From: Willy Tarreau @ 2026-05-02  6:07 UTC (permalink / raw)
  To: Demi Marie Obenour
  Cc: Greg KH, leon, security, Jonathan Corbet, skhan, workflows,
	linux-doc, linux-kernel, Qubes Developer Mailing List

On Sat, May 02, 2026 at 01:51:08AM -0400, Demi Marie Obenour wrote:
> On 5/2/26 01:35, Willy Tarreau wrote:
> > Hi Demi Marie,
> > 
> > On Sat, May 02, 2026 at 01:20:10AM -0400, Demi Marie Obenour wrote:
> >>>>> Ah, but USB does cover "some" modification of devices, so this is going
> >>>>> to be something that is good to document over time, if for no other
> >>>>> reason to keep these scanning tools in check from hallucinating crazy
> >>>>> situations that are obviously not a valid thing we care about.
> >>>>
> >>>> OK but does this mean you still want to get these reports in the end ?
> >>>
> >>> I want a patch if a user cares about that threat-model (as Android does
> >>> but no one else) as it's up to the user groups that want to change the
> >>> default kernel's behavior like this to actually submit patches to do so.
> >> FYI, I don't think this is limited to Android.  Chrome OS definitely
> >> cares about malicious USB devices, and the whole purpose of USBGuard is
> >> to prevent a USB device from being able to compromise the system unless
> >> authorized.  I believe Qubes OS also cares, as it supports USB device
> >> assignment to virtual machines.  CCing qubes-devel for confirmation.
> >>
> >> What should that patch look like?  Could there be a way for these user
> >> groups to be informed of vulnerabilities in the USB subsystem, so that
> >> they can take responsibility for fixing them before they become public?
> > 
> > I've posted a proposal elsewhere in the same thread:
> > 
> >    https://lore.kernel.org/lkml/afSxSX8RK0Z4kkOI@1wt.eu/
> 
> I saw that, but it's still not quite clear what is meant here.
> My understanding is that those concerned about malicious USB devices
> are generally concerned about _arbitrary_ malicious USB devices.
> The one thing a USB device shouldn't be able to spoof is the port
> it is plugged into, and userspace tools like USBGuard can use that
> information.  But to do that, they have to trust that the device
> can't harm the system if it isn't assigned to any drivers.

The goal sought by that early document is precisely to draw the line
between what is a regular bug and what is a security bug. The kernel
currently doesn't consider problems posed by a crafted USB device to be
a security issue because the kernel trusts the hardware it runs on. Of
course there can be valid reasons to disagree with this, but it's just
the current situation, and the purpose of the document is to clarify it
so that bugs are reported to the right place and handled efficiently.

> >> It does make sense for those who care about the security of a subsystem
> >> to be responsible for vulnerabilities in that system, but right now
> >> I'm not sure how one would offer to take up that responsibility.
> > 
> > I think that at least some subsystems will want to add their own
> > restrictions based on the bug reports they keep receiving, and I hope
> > it can help distros figure where there's a gap between is promised to
> > users and what the kernel promises, that needs to be filled by userland
> > verification tools for example.
> > 
> > Willy
> 
> I think there might be another category, which is where there is a
> third party who is much more interested in the security of a subsystem
> than its primary maintainers are.  I suspect that Google is said
> third party in multiple such cases, especially various USB drivers.
> In particular, exploiting the kernel via USB is a common attack
> technique used in the wild by tools like Cellebrite.

Possibly such entries might appear there at some point. The best way
for these might be to have such teams try to step up as co-maintainers
for the parts they care about, though.

> In these cases, I think it makes sense to funnel vulnerability
> reports to the people who actually seriously care about fixing them.
> For instance, problems in USB might be funneled to the Chrome OS and
> Android security teams.  They will get fixed much more quickly, and
> upstream maintainers won't be flooded with reports that don't have
> attached patches.

Trust me, patches written behind closed doors rarely survive publication
and review by the maintainer. And treating bugs as regular ones in fact
tends to make them move faster than treating them as security ones. No
need to go back and forth asking for data that reporters hesitate to
share, nor to first convince reporters that their bug needs to be fixed
even though they were planning to speak about it at a conference, etc.

Cheers,
Willy


* Re: [PATCH 2/3] Documentation: security-bugs: explain what is and is not a security bug
  2026-05-02  6:07                   ` Willy Tarreau
@ 2026-05-02  6:27                     ` Demi Marie Obenour
  0 siblings, 0 replies; 25+ messages in thread
From: Demi Marie Obenour @ 2026-05-02  6:27 UTC (permalink / raw)
  To: Willy Tarreau
  Cc: Greg KH, leon, security, Jonathan Corbet, skhan, workflows,
	linux-doc, linux-kernel, Qubes Developer Mailing List


On 5/2/26 02:07, Willy Tarreau wrote:
> On Sat, May 02, 2026 at 01:51:08AM -0400, Demi Marie Obenour wrote:
>> On 5/2/26 01:35, Willy Tarreau wrote:
>>> Hi Demi Marie,
>>>
>>> On Sat, May 02, 2026 at 01:20:10AM -0400, Demi Marie Obenour wrote:
>>>>>>> Ah, but USB does cover "some" modification of devices, so this is going
>>>>>>> to be something that is good to document over time, if for no other
>>>>>>> reason to keep these scanning tools in check from hallucinating crazy
>>>>>>> situations that are obviously not a valid thing we care about.
>>>>>>
>>>>>> OK but does this mean you still want to get these reports in the end ?
>>>>>
>>>>> I want a patch if a user cares about that threat-model (as Android does
>>>>> but no one else) as it's up to the user groups that want to change the
>>>>> default kernel's behavior like this to actually submit patches to do so.
>>>> FYI, I don't think this is limited to Android.  Chrome OS definitely
>>>> cares about malicious USB devices, and the whole purpose of USBGuard is
>>>> to prevent a USB device from being able to compromise the system unless
>>>> authorized.  I believe Qubes OS also cares, as it supports USB device
>>>> assignment to virtual machines.  CCing qubes-devel for confirmation.
>>>>
>>>> What should that patch look like?  Could there be a way for these user
>>>> groups to be informed of vulnerabilities in the USB subsystem, so that
>>>> they can take responsibility for fixing them before they become public?
>>>
>>> I've posted a proposal elsewhere in the same thread:
>>>
>>>    https://lore.kernel.org/lkml/afSxSX8RK0Z4kkOI@1wt.eu/
>>
>> I saw that, but it's still not quite clear what is meant here.
>> My understanding is that those concerned about malicious USB devices
>> are generally concerned about _arbitrary_ malicious USB devices.
>> The one thing a USB device shouldn't be able to spoof is the port
>> it is plugged into, and userspace tools like USBGuard can use that
>> information.  But to do that, they have to trust that the device
>> can't harm the system if it isn't assigned to any drivers.
> 
> The goal sought by that early document is precisely to draw the line
> between what is a regular bug and what is a security bug. The kernel
> currently doesn't consider problems posed by a crafted USB device to be
> a security issue because the kernel trusts the hardware it runs on. Of
> course there can be valid reasons to disagree with this, but it's just
> the current situation, and the purpose of the document is to clarify it
> so that bugs are reported to the right place and handled efficiently.

Fair.  That does bring up the question of what those who want
to change that situation should do, and they definitely exist.
A documented path for them to follow (even if long and convoluted,
such as becoming co-maintainers of the subsystem) could be helpful.

I'd offer to help write something, but I suspect that this is
something that can only really be written by a member of the kernel
security team.  Hence this request.

>>>> It does make sense for those who care about the security of a subsystem
>>>> to be responsible for vulnerabilities in that system, but right now
>>>> I'm not sure how one would offer to take up that responsibility.
>>>
>>> I think that at least some subsystems will want to add their own
>>> restrictions based on the bug reports they keep receiving, and I hope
>>> it can help distros figure out where there's a gap between what is
>>> promised to users and what the kernel promises, a gap that needs to be
>>> filled by userland verification tools, for example.
>>>
>>> Willy
>>
>> I think there might be another category, which is where there is a
>> third party who is much more interested in the security of a subsystem
>> than its primary maintainers are.  I suspect that Google is said
>> third party in multiple such cases, especially various USB drivers.
>> In particular, exploiting the kernel via USB is a common attack
>> technique used in the wild by tools like Cellebrite.
> 
> Possibly such entries might appear there at some point. The best way
> for these might be to have such teams try to step up as co-maintainers
> for the parts they care about, though.

Makes sense.

>> In these cases, I think it makes sense to funnel vulnerability
>> reports to the people who actually seriously care about fixing them.
>> For instance, problems in USB might be funneled to the Chrome OS and
>> Android security teams.  They will get fixed much more quickly, and
>> upstream maintainers won't be flooded with reports that don't have
>> attached patches.
> 
> Trust me, patches written behind closed doors rarely survive publication
> and review by the maintainer. And treating bugs as regular ones in fact
> tends to make them move faster than treating them as security ones. No
> need to go back and forth asking for data that reporters hesitate to
> share, nor to first convince reporters that their bug needs to be fixed
> even though they were planning to speak about it at a conference, etc.

That doesn't surprise me, actually.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)


end of thread, other threads:[~2026-05-02  6:27 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-26 16:39 [PATCH 0/3] Documentation: security-bugs: new updates covering triage and AI Willy Tarreau
2026-04-26 16:39 ` [PATCH 1/3] Documentation: security-bugs: do not systematically Cc the security team Willy Tarreau
2026-04-27 13:49   ` Greg KH
2026-04-27 15:24     ` Willy Tarreau
2026-04-27 15:33       ` Greg KH
2026-04-27 16:09         ` Willy Tarreau
2026-04-26 16:39 ` [PATCH 2/3] Documentation: security-bugs: explain what is and is not a security bug Willy Tarreau
2026-04-26 19:33   ` Randy Dunlap
2026-04-27 13:48   ` Greg KH
2026-04-27 15:27     ` Willy Tarreau
2026-04-27 15:35       ` Greg KH
2026-04-27 16:14         ` Willy Tarreau
2026-04-28 21:13           ` Greg KH
2026-04-29  3:09             ` Willy Tarreau
2026-04-29  6:10               ` Greg KH
2026-05-01 13:57                 ` Willy Tarreau
2026-05-02  5:20             ` Demi Marie Obenour
2026-05-02  5:35               ` Willy Tarreau
2026-05-02  5:51                 ` Demi Marie Obenour
2026-05-02  6:07                   ` Willy Tarreau
2026-05-02  6:27                     ` Demi Marie Obenour
2026-04-26 16:39 ` [PATCH 3/3] Documentation: security-bugs: clarify requirements for AI-assisted reports Willy Tarreau
2026-04-26 19:36   ` Randy Dunlap
2026-04-27  2:22     ` Willy Tarreau
2026-04-27 13:50   ` Greg KH

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox