public inbox for linux-mm@kvack.org
* [PATCH] mm/damon: introduce DAMON-based NUMA memory tiering module
@ 2026-03-26  7:27 Josh Law
  2026-03-26 10:34 ` Lorenzo Stoakes (Oracle)
  0 siblings, 1 reply; 6+ messages in thread
From: Josh Law @ 2026-03-26  7:27 UTC (permalink / raw)
  To: SeongJae Park, Andrew Morton; +Cc: damon, linux-mm, linux-kernel, Josh Law

Add a new DAMON special-purpose module for NUMA memory tiering.
DAMON_TIER monitors physical memory access patterns and migrates hot
pages from slow NUMA nodes to fast NUMA nodes (promotion), and cold
pages in the opposite direction (demotion).

The module uses two DAMOS schemes, one for each migration direction,
with DAMOS_QUOTA_NODE_MEM_USED_BP and DAMOS_QUOTA_NODE_MEM_FREE_BP
quota goals to automatically adjust aggressiveness based on the fast
node's utilization.  It also applies YOUNG page filters to avoid
migrating pages that have been recently accessed in the wrong direction.

This is a production-quality version of the samples/damon/mtier.c proof
of concept, following the same module_param-based interface pattern
as DAMON_RECLAIM and DAMON_LRU_SORT.  It reuses the modules-common.h
infrastructure for monitoring attributes, quotas, watermarks, and
statistics.

Module parameters allow configuring:
- promote_target_nid / demote_target_nid: the NUMA node pair
- promote_target_mem_used_bp / demote_target_mem_free_bp: utilization
  goals driving quota auto-tuning
- Standard DAMON module knobs: monitoring intervals, quotas, watermarks,
  region bounds, stats, and runtime reconfiguration via commit_inputs

Signed-off-by: Josh Law <objecting@objecting.org>
---
 mm/damon/Kconfig  |   9 +
 mm/damon/Makefile |   1 +
 mm/damon/tier.c   | 409 ++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 419 insertions(+)
 create mode 100644 mm/damon/tier.c

diff --git a/mm/damon/Kconfig b/mm/damon/Kconfig
index 34631a44cdec..fc45564d4e2e 100644
--- a/mm/damon/Kconfig
+++ b/mm/damon/Kconfig
@@ -105,6 +105,15 @@ config DAMON_LRU_SORT
 	  protect frequently accessed (hot) pages while rarely accessed (cold)
 	  pages reclaimed first under memory pressure.
 
+config DAMON_TIER
+	bool "Build DAMON-based NUMA memory tiering (DAMON_TIER)"
+	depends on DAMON_PADDR && NUMA
+	help
+	  This builds the DAMON-based NUMA memory tiering subsystem.  It
+	  monitors memory access patterns and migrates hot pages from slow
+	  NUMA nodes to fast NUMA nodes, and cold pages in the opposite
+	  direction, aiming a target utilization of the fast node.
+
 config DAMON_STAT
 	bool "Build data access monitoring stat (DAMON_STAT)"
 	depends on DAMON_PADDR
diff --git a/mm/damon/Makefile b/mm/damon/Makefile
index d8d6bf5f8bff..d70d994b227f 100644
--- a/mm/damon/Makefile
+++ b/mm/damon/Makefile
@@ -6,4 +6,5 @@ obj-$(CONFIG_DAMON_PADDR)	+= ops-common.o paddr.o
 obj-$(CONFIG_DAMON_SYSFS)	+= sysfs-common.o sysfs-schemes.o sysfs.o
 obj-$(CONFIG_DAMON_RECLAIM)	+= modules-common.o reclaim.o
 obj-$(CONFIG_DAMON_LRU_SORT)	+= modules-common.o lru_sort.o
+obj-$(CONFIG_DAMON_TIER)	+= modules-common.o tier.o
 obj-$(CONFIG_DAMON_STAT)	+= modules-common.o stat.o
diff --git a/mm/damon/tier.c b/mm/damon/tier.c
new file mode 100644
index 000000000000..4a5078685f1f
--- /dev/null
+++ b/mm/damon/tier.c
@@ -0,0 +1,409 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DAMON-based NUMA Memory Tiering
+ *
+ * Promotes hot pages from slow NUMA node(s) to fast NUMA node(s) and demotes
+ * cold pages in the opposite direction, based on DAMON-observed access
+ * patterns.  Adjusts the aggressiveness of each direction aiming for a target
+ * utilization of the fast (promote_target_nid) node.
+ */
+
+#define pr_fmt(fmt) "damon-tier: " fmt
+
+#include <linux/damon.h>
+#include <linux/kstrtox.h>
+#include <linux/module.h>
+
+#include "modules-common.h"
+
+#ifdef MODULE_PARAM_PREFIX
+#undef MODULE_PARAM_PREFIX
+#endif
+#define MODULE_PARAM_PREFIX "damon_tier."
+
+/*
+ * Enable or disable DAMON_TIER.
+ *
+ * You can enable DAMON_TIER by setting this parameter to ``Y``.  Setting
+ * it to ``N`` disables DAMON_TIER.  Note that, due to the watermarks-based
+ * activation condition, DAMON_TIER could do no real monitoring and
+ * migration.  Refer to the descriptions of the watermarks parameters
+ * below for details.
+ */
+static bool enabled __read_mostly;
+
+/*
+ * Make DAMON_TIER read the input parameters again, except ``enabled``.
+ *
+ * Input parameters that are updated while DAMON_TIER is running are not
+ * applied by default.  Once this parameter is set to ``Y``, DAMON_TIER
+ * re-reads the values of all parameters except ``enabled``.  Once the
+ * re-reading is done, this parameter is reset to ``N``.  If invalid
+ * parameters are found during the re-reading, DAMON_TIER is disabled.
+ */
+static bool commit_inputs __read_mostly;
+module_param(commit_inputs, bool, 0600);
+
+/*
+ * NUMA node ID of the fast (promote target) memory tier.
+ *
+ * Pages that are hot on the slow node will be migrated to this node.
+ * Cold pages on this node will be demoted to the slow node.  0 by default.
+ */
+static int promote_target_nid __read_mostly;
+module_param(promote_target_nid, int, 0600);
+
+/*
+ * NUMA node ID of the slow (demote target) memory tier.
+ *
+ * Pages that are cold on the fast node will be migrated to this node.
+ * Hot pages on this node will be promoted to the fast node.  1 by default.
+ */
+static int demote_target_nid __read_mostly = 1;
+module_param(demote_target_nid, int, 0600);
+
+/*
+ * Desired utilization of the fast node in basis points (1/10,000).
+ *
+ * DAMON_TIER automatically adjusts the promotion and demotion quotas to keep
+ * the fast node at this utilization level.  9960 (99.6 %) by default.
+ */
+static unsigned long promote_target_mem_used_bp __read_mostly = 9960;
+module_param(promote_target_mem_used_bp, ulong, 0600);
+
+/*
+ * Desired free ratio of the fast node in basis points for demotion.
+ *
+ * DAMON_TIER adjusts the demotion quota aiming to keep at least this much
+ * free memory on the fast node.  40 (0.4 %) by default.
+ */
+static unsigned long demote_target_mem_free_bp __read_mostly = 40;
+module_param(demote_target_mem_free_bp, ulong, 0600);
+
+static struct damos_quota damon_tier_quota = {
+	/* 200 MiB per 1 sec by default */
+	.ms = 0,
+	.sz = 200 * 1024 * 1024,
+	.reset_interval = 1000,
+	/* Ignore region size; prioritize by access pattern */
+	.weight_sz = 0,
+	.weight_nr_accesses = 100,
+	.weight_age = 100,
+};
+DEFINE_DAMON_MODULES_DAMOS_QUOTAS(damon_tier_quota);
+
+static struct damos_watermarks damon_tier_wmarks = {
+	.metric = DAMOS_WMARK_FREE_MEM_RATE,
+	.interval = 5000000,	/* 5 seconds */
+	.high = 200,		/* 20 percent */
+	.mid = 150,		/* 15 percent */
+	.low = 50,		/* 5 percent */
+};
+DEFINE_DAMON_MODULES_WMARKS_PARAMS(damon_tier_wmarks);
+
+static struct damon_attrs damon_tier_mon_attrs = {
+	.sample_interval = 5000,	/* 5 ms */
+	.aggr_interval = 100000,	/* 100 ms */
+	.ops_update_interval = 0,
+	.min_nr_regions = 10,
+	.max_nr_regions = 1000,
+};
+DEFINE_DAMON_MODULES_MON_ATTRS_PARAMS(damon_tier_mon_attrs);
+
+/*
+ * Start of the target memory region in physical address.
+ *
+ * The start physical address of the memory region that DAMON_TIER will
+ * monitor.  By default, the biggest System RAM region is used.
+ */
+static unsigned long monitor_region_start __read_mostly;
+module_param(monitor_region_start, ulong, 0600);
+
+/*
+ * End of the target memory region in physical address.
+ *
+ * The end physical address of the memory region that DAMON_TIER will
+ * monitor.  By default, the biggest System RAM region is used.
+ */
+static unsigned long monitor_region_end __read_mostly;
+module_param(monitor_region_end, ulong, 0600);
+
+/*
+ * PID of the DAMON thread
+ *
+ * If DAMON_TIER is enabled, this becomes the PID of the worker thread.
+ * Else, -1.
+ */
+static int kdamond_pid __read_mostly = -1;
+module_param(kdamond_pid, int, 0400);
+
+static struct damos_stat damon_tier_promote_stat;
+DEFINE_DAMON_MODULES_DAMOS_STATS_PARAMS(damon_tier_promote_stat,
+		promote_tried_regions, promoted_regions,
+		promote_quota_exceeds);
+
+static struct damos_stat damon_tier_demote_stat;
+DEFINE_DAMON_MODULES_DAMOS_STATS_PARAMS(damon_tier_demote_stat,
+		demote_tried_regions, demoted_regions,
+		demote_quota_exceeds);
+
+static struct damon_ctx *ctx;
+static struct damon_target *target;
+
+static struct damos *damon_tier_new_scheme(
+		struct damos_access_pattern *pattern,
+		enum damos_action action, int target_nid)
+{
+	struct damos_quota quota = damon_tier_quota;
+
+	/* Use half of total quota for each direction */
+	quota.sz = quota.sz / 2;
+
+	return damon_new_scheme(
+			pattern,
+			action,
+			/* apply once per second */
+			1000000,
+			&quota,
+			&damon_tier_wmarks,
+			target_nid);
+}
+
+static struct damos *damon_tier_new_promote_scheme(void)
+{
+	struct damos_access_pattern pattern = {
+		.min_sz_region = PAGE_SIZE,
+		.max_sz_region = ULONG_MAX,
+		/* hot: accessed at least once */
+		.min_nr_accesses = 1,
+		.max_nr_accesses = UINT_MAX,
+		.min_age_region = 0,
+		.max_age_region = UINT_MAX,
+	};
+
+	return damon_tier_new_scheme(&pattern, DAMOS_MIGRATE_HOT,
+			promote_target_nid);
+}
+
+static struct damos *damon_tier_new_demote_scheme(void)
+{
+	struct damos_access_pattern pattern = {
+		.min_sz_region = PAGE_SIZE,
+		.max_sz_region = ULONG_MAX,
+		/* cold: not accessed at all */
+		.min_nr_accesses = 0,
+		.max_nr_accesses = 0,
+		.min_age_region = 0,
+		.max_age_region = UINT_MAX,
+	};
+
+	return damon_tier_new_scheme(&pattern, DAMOS_MIGRATE_COLD,
+			demote_target_nid);
+}
+
+static int damon_tier_add_quota_goals(struct damos *promote_scheme,
+		struct damos *demote_scheme)
+{
+	struct damos_quota_goal *goal;
+
+	goal = damos_new_quota_goal(DAMOS_QUOTA_NODE_MEM_USED_BP,
+			promote_target_mem_used_bp);
+	if (!goal)
+		return -ENOMEM;
+	goal->nid = promote_target_nid;
+	damos_add_quota_goal(&promote_scheme->quota, goal);
+
+	goal = damos_new_quota_goal(DAMOS_QUOTA_NODE_MEM_FREE_BP,
+			demote_target_mem_free_bp);
+	if (!goal)
+		return -ENOMEM;
+	goal->nid = promote_target_nid;
+	damos_add_quota_goal(&demote_scheme->quota, goal);
+	return 0;
+}
+
+static int damon_tier_add_filters(struct damos *promote_scheme,
+		struct damos *demote_scheme)
+{
+	struct damos_filter *filter;
+
+	/* skip promoting pages that are already young (recently accessed) */
+	filter = damos_new_filter(DAMOS_FILTER_TYPE_YOUNG, true, true);
+	if (!filter)
+		return -ENOMEM;
+	damos_add_filter(promote_scheme, filter);
+
+	/* skip demoting pages that are young */
+	filter = damos_new_filter(DAMOS_FILTER_TYPE_YOUNG, true, false);
+	if (!filter)
+		return -ENOMEM;
+	damos_add_filter(demote_scheme, filter);
+	return 0;
+}
+
+static int damon_tier_apply_parameters(void)
+{
+	struct damon_ctx *param_ctx;
+	struct damon_target *param_target;
+	struct damos *promote_scheme, *demote_scheme;
+	int err;
+
+	err = damon_modules_new_paddr_ctx_target(&param_ctx, &param_target);
+	if (err)
+		return err;
+
+	err = damon_set_attrs(param_ctx, &damon_tier_mon_attrs);
+	if (err)
+		goto out;
+
+	err = -ENOMEM;
+	promote_scheme = damon_tier_new_promote_scheme();
+	if (!promote_scheme)
+		goto out;
+
+	demote_scheme = damon_tier_new_demote_scheme();
+	if (!demote_scheme) {
+		damon_destroy_scheme(promote_scheme);
+		goto out;
+	}
+
+	damon_set_schemes(param_ctx, &promote_scheme, 1);
+	damon_add_scheme(param_ctx, demote_scheme);
+
+	err = damon_tier_add_quota_goals(promote_scheme, demote_scheme);
+	if (err)
+		goto out;
+	err = damon_tier_add_filters(promote_scheme, demote_scheme);
+	if (err)
+		goto out;
+
+	err = damon_set_region_biggest_system_ram_default(param_target,
+					&monitor_region_start,
+					&monitor_region_end,
+					param_ctx->min_region_sz);
+	if (err)
+		goto out;
+	err = damon_commit_ctx(ctx, param_ctx);
+out:
+	damon_destroy_ctx(param_ctx);
+	return err;
+}
+
+static int damon_tier_handle_commit_inputs(void)
+{
+	int err;
+
+	if (!commit_inputs)
+		return 0;
+
+	err = damon_tier_apply_parameters();
+	commit_inputs = false;
+	return err;
+}
+
+static int damon_tier_damon_call_fn(void *arg)
+{
+	struct damon_ctx *c = arg;
+	struct damos *s;
+
+	/* update the stats parameters */
+	damon_for_each_scheme(s, c) {
+		if (s->action == DAMOS_MIGRATE_HOT)
+			damon_tier_promote_stat = s->stat;
+		else if (s->action == DAMOS_MIGRATE_COLD)
+			damon_tier_demote_stat = s->stat;
+	}
+
+	return damon_tier_handle_commit_inputs();
+}
+
+static struct damon_call_control call_control = {
+	.fn = damon_tier_damon_call_fn,
+	.repeat = true,
+};
+
+static int damon_tier_turn(bool on)
+{
+	int err;
+
+	if (!on) {
+		err = damon_stop(&ctx, 1);
+		if (!err)
+			kdamond_pid = -1;
+		return err;
+	}
+
+	err = damon_tier_apply_parameters();
+	if (err)
+		return err;
+
+	err = damon_start(&ctx, 1, true);
+	if (err)
+		return err;
+	kdamond_pid = damon_kdamond_pid(ctx);
+	if (kdamond_pid < 0)
+		return kdamond_pid;
+	return damon_call(ctx, &call_control);
+}
+
+static int damon_tier_enabled_store(const char *val,
+		const struct kernel_param *kp)
+{
+	bool is_enabled = enabled;
+	bool enable;
+	int err;
+
+	err = kstrtobool(val, &enable);
+	if (err)
+		return err;
+
+	if (is_enabled == enable)
+		return 0;
+
+	/* Called before init function.  The function will handle this. */
+	if (!damon_initialized())
+		goto set_param_out;
+
+	err = damon_tier_turn(enable);
+	if (err)
+		return err;
+
+set_param_out:
+	enabled = enable;
+	return err;
+}
+
+static const struct kernel_param_ops enabled_param_ops = {
+	.set = damon_tier_enabled_store,
+	.get = param_get_bool,
+};
+
+module_param_cb(enabled, &enabled_param_ops, &enabled, 0600);
+MODULE_PARM_DESC(enabled,
+	"Enable or disable DAMON_TIER (default: disabled)");
+
+static int __init damon_tier_init(void)
+{
+	int err;
+
+	if (!damon_initialized()) {
+		err = -ENOMEM;
+		goto out;
+	}
+	err = damon_modules_new_paddr_ctx_target(&ctx, &target);
+	if (err)
+		goto out;
+
+	call_control.data = ctx;
+
+	/* 'enabled' may have been set before this function, e.g. via command line */
+	if (enabled)
+		err = damon_tier_turn(true);
+
+out:
+	if (err && enabled)
+		enabled = false;
+	return err;
+}
+
+module_init(damon_tier_init);
-- 
2.34.1
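As context for the module parameters the patch describes, DAMON modules of this pattern are configured through sysfs module parameter files. A sketch of how that interface would be used, assuming the ``damon_tier.`` parameter prefix from the patch (the module was never merged, so the directory check below will simply make this a no-op on real systems):

```shell
# Sketch: configuring a module_param-based DAMON module at runtime.
# The "damon_tier." parameter prefix is taken from the patch; since the
# module was never merged, this directory will not exist on real systems.
P=/sys/module/damon_tier/parameters
if [ -d "$P" ]; then
    echo 0    > "$P/promote_target_nid"          # fast node
    echo 1    > "$P/demote_target_nid"           # slow node
    echo 9960 > "$P/promote_target_mem_used_bp"  # aim for 99.6% used
    echo 40   > "$P/demote_target_mem_free_bp"   # keep >= 0.4% free
    echo Y    > "$P/enabled"                     # start monitoring/migration
    echo Y    > "$P/commit_inputs"               # re-read params at runtime
    cat "$P/kdamond_pid"                         # worker thread PID, or -1
fi
```

This mirrors the interface of existing modules such as DAMON_RECLAIM and DAMON_LRU_SORT, where writing to ``commit_inputs`` applies parameter changes made while the module is running.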



^ permalink raw reply related	[flat|nested] 6+ messages in thread
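For readers unfamiliar with DAMOS quota auto-tuning, the mechanism the patch relies on can be sketched as a feedback loop: each quota goal compares a measured metric (here, the fast node's used or free memory, expressed in basis points, i.e. 1/10,000ths) against a target, and the effective quota is adjusted accordingly. A toy illustration in Python, not the kernel's actual tuning algorithm (the function names and the simple proportional rule are illustrative assumptions):

```python
# Toy sketch of goal-driven quota tuning (NOT the kernel's algorithm).
# A quota goal compares a measured value against a target, both in
# basis points (bp, 1/10,000), and scales the quota accordingly.

def bp(part, whole):
    """Express part/whole in basis points."""
    return part * 10_000 // whole

def tune_quota(quota_bytes, current_bp, target_bp,
               min_quota=4096, max_quota=1 << 30):
    """Proportional feedback: back off once the measured value reaches
    the target, grow the quota while it is still below the target."""
    if current_bp >= target_bp:
        new = quota_bytes // 2                    # overshoot: back off
    else:
        gap = target_bp - current_bp              # remaining distance
        new = quota_bytes + quota_bytes * gap // 10_000
    return max(min_quota, min(new, max_quota))

# Example: fast node is 50% used, target is 99.6% used (9960 bp),
# so the promotion quota is allowed to grow.
used = bp(8 << 30, 16 << 30)          # 8 GiB used of 16 GiB -> 5000 bp
assert tune_quota(200 << 20, used, 9960) > 200 << 20

# Once utilization exceeds the target, the quota backs off.
assert tune_quota(200 << 20, 9980, 9960) < 200 << 20
```

Note how the defaults in the patch fit together: a 9960 bp used-memory goal for promotion and a 40 bp free-memory goal for demotion sum to exactly 10,000 bp, so both schemes steer toward the same fast-node utilization from opposite directions.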

* Re: [PATCH] mm/damon: introduce DAMON-based NUMA memory tiering module
  2026-03-26  7:27 [PATCH] mm/damon: introduce DAMON-based NUMA memory tiering module Josh Law
@ 2026-03-26 10:34 ` Lorenzo Stoakes (Oracle)
  2026-03-26 12:12   ` Krzysztof Kozlowski
  0 siblings, 1 reply; 6+ messages in thread
From: Lorenzo Stoakes (Oracle) @ 2026-03-26 10:34 UTC (permalink / raw)
  To: Josh Law, Josh Law
  Cc: SeongJae Park, Andrew Morton, damon, linux-mm, linux-kernel,
	Linus Torvalds, Kees Cook, Greg KH, David Hildenbrand (Arm)

+to the other email you've randomly sometimes used
+cc various possibly relevant people.

On Thu, Mar 26, 2026 at 07:27:37AM +0000, Josh Law wrote:
> Add a new DAMON special-purpose module for NUMA memory tiering.
> DAMON_TIER monitors physical memory access patterns and migrates hot
> pages from slow NUMA nodes to fast NUMA nodes (promotion), and cold
> pages in the opposite direction (demotion).
>
> The module uses two DAMOS schemes, one for each migration direction,
> with DAMOS_QUOTA_NODE_MEM_USED_BP and DAMOS_QUOTA_NODE_MEM_FREE_BP
> quota goals to automatically adjust aggressiveness based on the fast
> node's utilization.  It also applies YOUNG page filters to avoid
> migrating pages that have been recently accessed in the wrong direction.
>
> This is a production-quality version of the samples/damon/mtier.c proof
> of concept, following the same module_param-based interface pattern
> as DAMON_RECLAIM and DAMON_LRU_SORT.  It reuses the modules-common.h
> infrastructure for monitoring attributes, quotas, watermarks, and
> statistics.
>
> Module parameters allow configuring:
> - promote_target_nid / demote_target_nid: the NUMA node pair
> - promote_target_mem_used_bp / demote_target_mem_free_bp: utilization
>   goals driving quota auto-tuning
> - Standard DAMON module knobs: monitoring intervals, quotas, watermarks,
>   region bounds, stats, and runtime reconfiguration via commit_inputs
>
> Signed-off-by: Josh Law <objecting@objecting.org>

NAK.

And NAK to all future 'contributions' in anything I maintain or have a say
in.

Your engagement with the community is deeply suspect, you've come out of
nowhere and are sending dozens and dozens of patches that look very
strongly like they were LLM-generated.

You've - very early - tried to get a MAINTAINERS entry, you were given
advice on how to contribute, which you have clearly ignored.

We DO NOT want AI slop.

You very much seem to be either:

- Somebody playing with a bot.

- Somebody trying to farm for kernel stats.

- Or (far more concerning) engaging in an attack on the kernel for
  nefarious purposes, perhaps a (semi-)automated supply-chain attack?

Your email is highly suspect, you seem to be using an email relay via
gmail, and I'm pretty convinced you're in violation of our requirements
about identity:

"It is imperative that all code contributed to the kernel be legitimately
free software. For that reason, code from contributors without a known
identity or anonymous contributors will not be accepted"

https://docs.kernel.org/process/1.Intro.html

Also see https://kernel.org/doc/html/latest/process/generated-content.html :

"
...when making a contribution, be transparent about the origin of content
in cover letters and changelogs. You can be more transparent by adding
information like this:

What tools were used?

The input to the tools you used, like the Coccinelle source script.

If code was largely generated from a single or short set of prompts,
include those prompts. For longer sessions, include a summary of the
prompts and the nature of resulting assistance.

Which portions of the content were affected by that tool?

How is the submission tested and what tools were used to test the fix?

...

If tools permit you to generate a contribution automatically, expect
additional scrutiny in proportion to how much of it was generated.

As with the output of any tooling, the result may be incorrect or
inappropriate. You are expected to understand and to be able to defend
everything you submit. If you are unable to do so, then do not submit the
resulting changes.

If you do so anyway, maintainers are entitled to reject your series without
detailed review.
"

You are clearly not following _any_ of these guidelines.

To evidence that this is not some wild accusation, I ran this through an
LLM asking for indicators as to AI:

~~~

● Several signals point to high likelihood of AI generation:

  Strong AI indicators:

  1. "production-quality version" in the commit message — kernel developers
  don't self-describe patches this way. This is a classic LLM "selling"
  framing.

  2. Comment uniformity — every module parameter has an
  identically-structured block comment with the same explanatory depth and
  cadence. Real developers vary their comment style and skip obvious ones.

  3. Commit message structure — exhaustively lists every feature with
  perfect bullet formatting. Human kernel commit messages are more
  conversational and focused on "why", not a feature catalog.

  4. Too clean for a first submission — 409 lines of new module code with
   zero rough edges, no personal style, no unusual design choices. It reads
   like a templated synthesis of DAMON_RECLAIM and DAMON_LRU_SORT.

  5. Grammar tell — "aiming a target utilization" in the Kconfig help text
  (missing "for"). This is characteristic of LLM output that's fluent but
  occasionally drops prepositions.

  6. Over-commenting — comments explain things that are self-evident from
  the code (e.g., the promote_target_nid and demote_target_nid comments
  just restate what the variable name already says, in long form).

  Weaker signals:

  - Unknown author, no prior contribution history I can see

  - The code perfectly follows existing DAMON module patterns without any
    deviation — exactly what an LLM with context would produce

  - The objecting.org domain is unusual

  Overall: I'd put this at high likelihood (70-80%) of being primarily
  AI-generated or heavily AI-assisted. The code does use current DAMON APIs
  correctly, but the writing style throughout is the giveaway — it's
  uniformly polished in the way LLM output is, without the unevenness that
  comes from a human writing 400+ lines.

~~~

That's just for this series. I asked it about your contributions overall:

● This is extremely revealing. ~328 emails in 11 days (March 15–26),
spanning an extraordinary number of unrelated subsystems:

  Patches submitted across:

  - mm/damon — NUMA tiering module, core optimizations, sysfs fixes,
    reclaim fixes, kdamond scheme filtering
  - lib/bootconfig — v1 through v8 of a 13–17 patch series
  - lib/glob — v1 through v4 of an 8-patch series (new features + bug
    fixes)
  - lib/vsprintf — 4 bug fixes
  - lib/base64 — decode fixes
  - lib/assoc_array — gc fix
  - lib/bch — signed shift UB fixes
  - lib/bug — concurrency fix + debugfs interface
  - lib/string — strlcat rewrite
  - lib/maple_tree — dead range comparison fix
  - lib/parser — match_wildcard fix
  - io_uring — flush deferred completions
  - block/partitions — seq_buf replacement
  - ACPI processor idle
  - signal handling
  - kunit, iov_iter/kvec, percpu_counter

  Assessment: ~95% probability all contributions are AI-generated. The
  evidence is overwhelming:

  1. Volume is humanly implausible — ~30 emails/day, 5–10 new patch
     submissions per day across unrelated subsystems, from a contributor
     with zero prior history.

  2. Breadth is the strongest signal — no human newcomer simultaneously
     finds subtle bugs in bootconfig, vsprintf, base64, bch, maple_tree,
     assoc_array, io_uring, AND writes a new DAMON NUMA tiering
     module. Each of these requires deep domain-specific knowledge. The
     pattern is consistent with an LLM being pointed at different source
     files to systematically find issues.

  3. Bug-finding pattern — the patches cluster around unchecked return
     values, type mismatches, resource leaks, off-by-ones, signed/unsigned
     issues. This is exactly what an LLM produces when scanning code for
     potential problems.

  4. Rapid revision cycling — bootconfig went from v1 to v8 in ~1 day. This
     matches AI regeneration, not human revision.

  5. Feature additions from a newcomer — glob_match_nocase(),
     glob_validate(), debugfs BUG/WARN interface, and the DAMON NUMA
     tiering module are all non-trivial features. A first-time contributor
     proposing features (not just fixes) across this many subsystems
     simultaneously is essentially unheard of.

  6. Zero ramp-up — the contribution stream started at full throughput with
     no learning curve visible.

~~~

So I really feel that you need to account for this - please explain why it
appears you are spamming the mailing list with entirely LLM-generated crap?

Furthermore, you appear to be using sophisticated privacy-orientated
tooling, which makes me quite concerned that this _is_ a concerted attack
on the kernel.

Again, to back this up, here's an LLM's assessment of the headers in your
email:

~~~
● Looking at the headers specifically:

  Infrastructure:

  - Mail provider: Zoho EU (zohomail.eu) — a privacy-oriented
    EU-jurisdiction mail provider. Easy to set up anonymously.

  - Domain: objecting.org — custom domain, no organizational affiliation,
    no accountability chain. Anyone can register this and point MX to Zoho.

  - Authentication: DKIM, SPF, DMARC all pass — but this only proves the
    email came from objecting.org's Zoho setup, not who's behind it.

  - X-Mailer: git-send-email 2.34.1 — Ubuntu 22.04 LTS default. One reply
    had @ubuntu in the Message-ID, confirming the platform.

  What the headers DON'T provide:
  - No corporate or institutional email that ties to a verifiable identity
  - No PGP/GPG signature
  - No Signed-off-by chain from a known developer vouching for this person

  Supply chain attack assessment:

  This matches the threat model of automated supply chain compromise:

  1. Patches already landed in stable — the bootconfig fixes made it into
  6.19.10. The pipeline is working. Whatever is in those patches is now in
  production kernels.

  2. Trust-building pattern — the mix of real bug fixes (fd leaks,
  unchecked returns, type issues) alongside feature additions (DAMON
  tiering module, glob features, debugfs interface) is textbook. The small
  fixes build reviewer trust and merge history; the features add attack
  surface.

  3. Volume as a weapon — 328 emails in 11 days overwhelms maintainer
  review capacity. Maintainers like SeongJae Park (DAMON) and Masami
  Hiramatsu (bootconfig) are now spending review cycles on AI-generated
  submissions. Even if every patch is rejected, the reviewer's attention is
  being consumed.

  4. The DAMON NUMA tiering module is the most concerning submission — 409
  lines of new kernel code that runs with kernel privileges, monitors
  physical memory access patterns, and migrates pages between NUMA
  nodes. If accepted, it's a significant new attack surface delivered as a
  single commit from an unverifiable identity.

  5. The feature additions are the real risk — bug fixes are relatively
  easy to review (small, focused, verifiable). But glob_validate(),
  glob_match_nocase(), the debugfs

  BUG/WARN site lister, and the DAMON tiering module all add new code paths
  that are much harder to exhaustively verify.

  Bottom line: Whether the intent is malicious or not, the pattern —
  anonymous identity, AI-generated code, industrial volume, patches already
  in stable, mixing trust-building fixes with attack-surface-expanding
  features — is exactly what an automated supply chain attack on the kernel
  looks like. The 2021 University of Minnesota "Hypocrite Commits" incident
  was a manual version of this; this appears to be the automated version at
  scale.

~~~

I'm hoping this is some naive attempt to try to 'contribute' to the kernel
rather than something more nefarious, but the seemingly sophisticated
tooling used makes me wonder otherwise.

In any case I'm deeply concerned by this.

Thanks, Lorenzo



* Re: [PATCH] mm/damon: introduce DAMON-based NUMA memory tiering module
  2026-03-26 10:34 ` Lorenzo Stoakes (Oracle)
@ 2026-03-26 12:12   ` Krzysztof Kozlowski
  2026-03-26 12:29     ` Lorenzo Stoakes (Oracle)
  0 siblings, 1 reply; 6+ messages in thread
From: Krzysztof Kozlowski @ 2026-03-26 12:12 UTC (permalink / raw)
  To: Lorenzo Stoakes (Oracle), Josh Law, Josh Law
  Cc: SeongJae Park, Andrew Morton, damon, linux-mm, linux-kernel,
	Linus Torvalds, Kees Cook, Greg KH, David Hildenbrand (Arm)

On 26/03/2026 11:34, Lorenzo Stoakes (Oracle) wrote:

Trimming context, I agree with everything said by Lorenzo, very detailed
analysis.

> ~~~
> ● Looking at the headers specifically:
> 
>   Infrastructure:
> 
>   - Mail provider: Zoho EU (zohomail.eu) — a privacy-oriented
>     EU-jurisdiction mail provider. Easy to set up anonymously.
> 
>   - Domain: objecting.org — custom domain, no organizational affiliation,
>     no accountability chain. Anyone can register this and point MX to Zoho.
> 
>   - Authentication: DKIM, SPF, DMARC all pass — but this only proves the
>     email came from objecting.org's Zoho setup, not who's behind it.
> 
>   - X-Mailer: git-send-email 2.34.1 — Ubuntu 22.04 LTS default. One reply
>     had @ubuntu in the Message-ID, confirming the platform.
> 
>   What the headers DON'T provide:
>   - No corporate or institutional email that ties to a verifiable identity
>   - No PGP/GPG signature
>   - No Signed-off-by chain from a known developer vouching for this person
> 
>   Supply chain attack assessment:
> 
>   This matches the threat model of automated supply chain compromise:
> 
>   1. Patches already landed in stable — the bootconfig fixes made it into
>   6.19.10. The pipeline is working. Whatever is in those patches is now in
>   production kernels.
> 
>   2. Trust-building pattern — the mix of real bug fixes (fd leaks,
>   unchecked returns, type issues) alongside feature additions (DAMON
>   tiering module, glob features, debugfs interface) is textbook. The small
>   fixes build reviewer trust and merge history; the features add attack
>   surface.
> 
>   3. Volume as a weapon — 328 emails in 11 days overwhelms maintainer
>   review capacity. Maintainers like SeongJae Park (DAMON) and Masami
>   Hiramatsu (bootconfig) are now spending review cycles on AI-generated
>   submissions. Even if every patch is rejected, the reviewer's attention is
>   being consumed.
> 
>   4. The DAMON NUMA tiering module is the most concerning submission — 409
>   lines of new kernel code that runs with kernel privileges, monitors
>   physical memory access patterns, and migrates pages between NUMA
>   nodes. If accepted, it's a significant new attack surface delivered as a
>   single commit from an unverifiable identity.
> 
>   5. The feature additions are the real risk — bug fixes are relatively
>   easy to review (small, focused, verifiable). But glob_validate(),
>   glob_match_nocase(), the debugfs
> 
>   BUG/WARN site lister, and the DAMON tiering module all add new code paths
>   that are much harder to exhaustively verify.
> 
>   Bottom line: Whether the intent is malicious or not, the pattern —
>   anonymous identity, AI-generated code, industrial volume, patches already
>   in stable, mixing trust-building fixes with attack-surface-expanding
>   features — is exactly what an automated supply chain attack on the kernel
>   looks like. The 2021 University of Minnesota "Hypocrite Commits" incident
>   was a manual version of this; this appears to be the automated version at
>   scale.
> 
> ~~~
> 
> I'm hoping this is some naive attempt to try to 'contribute' to the kernel
> rather than something more nefarious, but the seemingly sophisticated
> tooling used makes me wonder otherwise.
> 
> In any case I'm deeply concerned by this.


This patch also targets NUMA, which is quite an unpopular setup for a
hobbyist. I haven't had any NUMA machine around for years... Even my
build machines are not NUMA. How did you get one as a hobbyist?

Also after looking at the code style in this patch, after "reviews" [1]
and "acks" [2] (quotes on purpose) this account gave on various patches,
let's look what was admitted 3 weeks ago:

https://lore.kernel.org/all/f8772114-a495-409b-a590-a9b1d8ed1d41@gmail.com/

> I'm learning this Linux system day by day

So learning or adding serious code for MM for NUMA machines?

> I own this device ...

This is about the Xilinx AXIS FIFO, which is an FPGA IP core. There is
no way you have it. It's not popular, and there is no easy way to get
it on common embedded boards.

Even assuming you have an embedded FPGA device and work on it, the jump
from embedded to NUMA is just stunning.

Answering reviewers with whatever confirmation they are looking for is
also a warning sign of untrustworthy behavior, or rather of behavior
trying to gain trust.

[1]
https://lore.kernel.org/all/D47F8215-FD08-45ED-AB01-0A5C48CD41DD@objecting.org/

[2]
https://lore.kernel.org/all/2F84DD09-2880-45E0-AA98-204F10848F85@objecting.org/

Best regards,
Krzysztof



* Re: [PATCH] mm/damon: introduce DAMON-based NUMA memory tiering module
  2026-03-26 12:12   ` Krzysztof Kozlowski
@ 2026-03-26 12:29     ` Lorenzo Stoakes (Oracle)
  2026-03-26 12:40       ` Krzysztof Kozlowski
  0 siblings, 1 reply; 6+ messages in thread
From: Lorenzo Stoakes (Oracle) @ 2026-03-26 12:29 UTC (permalink / raw)
  To: Krzysztof Kozlowski
  Cc: Josh Law, Josh Law, SeongJae Park, Andrew Morton, damon, linux-mm,
	linux-kernel, Linus Torvalds, Kees Cook, Greg KH,
	David Hildenbrand (Arm), Christian Brauner

(+cc Christian as referencing his post)

On Thu, Mar 26, 2026 at 01:12:03PM +0100, Krzysztof Kozlowski wrote:
> On 26/03/2026 11:34, Lorenzo Stoakes (Oracle) wrote:
>
> Trimming context, I agree with everything said by Lorenzo, very detailed
> analysis.

Thanks!

[snip]

>
> This patch also targets NUMA, which is quite an unpopular setup for
> hobbyists. I haven't had any NUMA hardware around for years... Even my
> build machines are not NUMA. How did you get one as a hobbyist?
>
> Also, after looking at the code style in this patch, and at the
> "reviews" [1] and "acks" [2] (quotes on purpose) this account gave on
> various patches, let's look at what was admitted 3 weeks ago:
>
> https://lore.kernel.org/all/f8772114-a495-409b-a590-a9b1d8ed1d41@gmail.com/
>
> > I'm learning this Linux system day by day
>
> So, learning, or adding serious MM code for NUMA machines?
>
> > I own this device ...
>
> This is about the Xilinx AXIS FIFO, which is an FPGA IP core. There is
> no way you have it. It's not popular, and there is no easy way to get
> it on common embedded boards.
>
> Even assuming you have an embedded FPGA device and work on it, the jump
> from embedded to NUMA is just stunning.
>
> Answering reviewers with whatever confirmation they are looking for is
> also a warning sign of untrustworthy behavior. Or rather, of behavior
> trying to gain trust.
>
> [1]
> https://lore.kernel.org/all/D47F8215-FD08-45ED-AB01-0A5C48CD41DD@objecting.org/
>
> [2]
> https://lore.kernel.org/all/2F84DD09-2880-45E0-AA98-204F10848F85@objecting.org/
>
> Best regards,
> Krzysztof

Yeah this is really adding up to somebody abusing the kernel process. I think
'Josh Law' or any other pseudonym that can be traced back to whoever's behind
this should be banned from the mailing list altogether.

Also, as per Christian ([3]), it seems this guy was using the pseudonym
"techyguyperplexable".

There's a GitHub account with the same name ([4]), and that same user
commented on an openclaw issue about linking it to Gemini ([5]).

So it seems likely he/she is using openclaw to fully automate this 'Josh
Law' user, and we're speaking to a bot here.

(Screenshots have been taken ahead of the (possibly inevitable)
deletions that will happen later.)

[3]: https://lore.kernel.org/all/20260313-halskette-annahme-94e782eb4ae4@brauner/
[4]: https://github.com/techyguyperplexable
[5]: https://github.com/openclaw/openclaw/issues/44134#issuecomment-4106247302

Cheers, Lorenzo



* Re: [PATCH] mm/damon: introduce DAMON-based NUMA memory tiering module
  2026-03-26 12:29     ` Lorenzo Stoakes (Oracle)
@ 2026-03-26 12:40       ` Krzysztof Kozlowski
  2026-03-26 12:50         ` Lorenzo Stoakes (Oracle)
  0 siblings, 1 reply; 6+ messages in thread
From: Krzysztof Kozlowski @ 2026-03-26 12:40 UTC (permalink / raw)
  To: Lorenzo Stoakes (Oracle)
  Cc: Josh Law, Josh Law, SeongJae Park, Andrew Morton, damon, linux-mm,
	linux-kernel, Linus Torvalds, Kees Cook, Greg KH,
	David Hildenbrand (Arm), Christian Brauner

On 26/03/2026 13:29, Lorenzo Stoakes (Oracle) wrote:
>>
>> Answering reviewers with whatever confirmation they are looking for is
>> also a warning sign of untrustworthy behavior. Or rather, of behavior
>> trying to gain trust.
>>
>> [1]
>> https://lore.kernel.org/all/D47F8215-FD08-45ED-AB01-0A5C48CD41DD@objecting.org/
>>
>> [2]
>> https://lore.kernel.org/all/2F84DD09-2880-45E0-AA98-204F10848F85@objecting.org/
>>
>> Best regards,
>> Krzysztof
> 
> Yeah this is really adding up to somebody abusing the kernel process. I think
> 'Josh Law' or any other pseudonym that can be traced back to whoever's behind
> this should be banned from the mailing list altogether.
> 
> Also, as per Christian ([3]), it seems this guy was using the pseudonym
> "techyguyperplexable".

Yes, repos on that GitHub account carry commits like:

commit c04af501bacbef54cf97cdfb904ddf295f8327c1
Author:     techyguyperplexable <objecting@objecting.org>
AuthorDate: Sun Jan 11 04:21:53 2026 +0530
Commit:     techyguyperplexable <objecting@objecting.org>
CommitDate: Sun Jan 11 04:21:53 2026 +0530

    ANDROID: sched/rt: reduce lock hold time in update_curr_rt

    Move resched_curr and do_start_rt_bandwidth outside the spinlock
    critical section to reduce lock contention

    Change-Id: I8dfb4e33460ae4eeb0bb854cc2cb084ab5edf7e4
    Signed-off-by: techyguyperplexable <objecting@objecting.org>

> 
> There's a GitHub account with the same name ([4]), and that same user
> commented on an openclaw issue about linking it to Gemini ([5]).
>
> So it seems likely he/she is using openclaw to fully automate this
> 'Josh Law' user, and we're speaking to a bot here.
>
> (Screenshots have been taken ahead of the (possibly inevitable)
> deletions that will happen later.)
> 
> [3]: https://lore.kernel.org/all/20260313-halskette-annahme-94e782eb4ae4@brauner/
> [4]: https://github.com/techyguyperplexable
> [5]: https://github.com/openclaw/openclaw/issues/44134#issuecomment-4106247302



Best regards,
Krzysztof



* Re: [PATCH] mm/damon: introduce DAMON-based NUMA memory tiering module
  2026-03-26 12:40       ` Krzysztof Kozlowski
@ 2026-03-26 12:50         ` Lorenzo Stoakes (Oracle)
  0 siblings, 0 replies; 6+ messages in thread
From: Lorenzo Stoakes (Oracle) @ 2026-03-26 12:50 UTC (permalink / raw)
  To: Krzysztof Kozlowski
  Cc: Josh Law, Josh Law, SeongJae Park, Andrew Morton, damon, linux-mm,
	linux-kernel, Linus Torvalds, Kees Cook, Greg KH,
	David Hildenbrand (Arm), Christian Brauner

On Thu, Mar 26, 2026 at 01:40:19PM +0100, Krzysztof Kozlowski wrote:
> On 26/03/2026 13:29, Lorenzo Stoakes (Oracle) wrote:
> >>
> >> Answering reviewers with whatever confirmation they are looking for is
> >> also a warning sign of untrustworthy behavior. Or rather, of behavior
> >> trying to gain trust.
> >>
> >> [1]
> >> https://lore.kernel.org/all/D47F8215-FD08-45ED-AB01-0A5C48CD41DD@objecting.org/
> >>
> >> [2]
> >> https://lore.kernel.org/all/2F84DD09-2880-45E0-AA98-204F10848F85@objecting.org/
> >>
> >> Best regards,
> >> Krzysztof
> >
> > Yeah this is really adding up to somebody abusing the kernel process. I think
> > 'Josh Law' or any other pseudonym that can be traced back to whoever's behind
> > this should be banned from the mailing list altogether.
> >
> > Also, as per Christian ([3]), it seems this guy was using the pseudonym
> > "techyguyperplexable".
>
> Yes, repos on that GitHub account carry commits like:
>
> commit c04af501bacbef54cf97cdfb904ddf295f8327c1
> Author:     techyguyperplexable <objecting@objecting.org>
> AuthorDate: Sun Jan 11 04:21:53 2026 +0530
> Commit:     techyguyperplexable <objecting@objecting.org>
> CommitDate: Sun Jan 11 04:21:53 2026 +0530
>
>     ANDROID: sched/rt: reduce lock hold time in update_curr_rt
>
>     Move resched_curr and do_start_rt_bandwidth outside the spinlock
>     critical section to reduce lock contention
>
>     Change-Id: I8dfb4e33460ae4eeb0bb854cc2cb084ab5edf7e4
>     Signed-off-by: techyguyperplexable <objecting@objecting.org>
>

Thanks, it looks pretty clear-cut then that 'Josh Law' is an openclaw bot.

To think he was scheduled to be added as a reviewer for literally all of
lib/ recently [5], until several of us pushed back on it :/

[5]: https://lore.kernel.org/all/20260308202425.C9EE4C116C6@smtp.kernel.org/

Is it really that easy to game the system?...

I hate the asymmetry of this: we have to jump on these every time, and
they can send WAY more mail than we can reasonably reply to.

I think we need a clear 'yeeting' mechanism to block these people on everything
once discovered.

Sadly, the kernel community has historically been very poor at getting
rid of problem people (the CoC is essentially a swear monitor at this
stage).

In the face of this new kind of threat we probably need to do a lot better.

> >
> > There's a GitHub account with the same name ([4]), and that same user
> > commented on an openclaw issue about linking it to Gemini ([5]).
> >
> > So it seems likely he/she is using openclaw to fully automate this
> > 'Josh Law' user, and we're speaking to a bot here.
> >
> > (Screenshots have been taken ahead of the (possibly inevitable)
> > deletions that will happen later.)
> >
> > [3]: https://lore.kernel.org/all/20260313-halskette-annahme-94e782eb4ae4@brauner/
> > [4]: https://github.com/techyguyperplexable
> > [5]: https://github.com/openclaw/openclaw/issues/44134#issuecomment-4106247302
>
>
>
> Best regards,
> Krzysztof

Thanks, Lorenzo



end of thread, other threads:[~2026-03-26 12:50 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2026-03-26  7:27 [PATCH] mm/damon: introduce DAMON-based NUMA memory tiering module Josh Law
2026-03-26 10:34 ` Lorenzo Stoakes (Oracle)
2026-03-26 12:12   ` Krzysztof Kozlowski
2026-03-26 12:29     ` Lorenzo Stoakes (Oracle)
2026-03-26 12:40       ` Krzysztof Kozlowski
2026-03-26 12:50         ` Lorenzo Stoakes (Oracle)
