* [PATCH -V3 0/4] memory tiering: calculate abstract distance based on ACPI HMAT
@ 2023-09-12 8:20 Huang Ying
2023-09-12 8:20 ` [PATCH -V3 1/4] memory tiering: add abstract distance calculation algorithms management Huang Ying
` (3 more replies)
0 siblings, 4 replies; 14+ messages in thread
From: Huang Ying @ 2023-09-12 8:20 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Huang Ying, Aneesh Kumar K . V, Wei Xu,
Alistair Popple, Dan Williams, Dave Hansen, Davidlohr Bueso,
Johannes Weiner, Jonathan Cameron, Michal Hocko, Yang Shi,
Dave Jiang, Rafael J Wysocki
We have the explicit memory tiers framework to manage systems with
multiple types of memory, e.g., DRAM in DIMM slots and CXL memory
devices. In this framework, memory devices of the same kind are
grouped into memory types, which are then placed into memory tiers.
To describe the performance of a memory type, abstract distance is
defined: it is directly proportional to memory latency and inversely
proportional to memory bandwidth. To keep the code as simple as
possible, a fixed abstract distance is used in dax/kmem to describe
slow memory such as Optane DCPMM.
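For reference, the calculation added in patch 3/4
(mt_perf_to_adistance()) boils down to the following simplified
sketch, where "dram" is shorthand for the recorded performance of the
default DRAM nodes:
  adist = MEMTIER_ADISTANCE_DRAM *
          (perf->read_latency + perf->write_latency) /
          (dram.read_latency + dram.write_latency) *
          (dram.read_bandwidth + dram.write_bandwidth) /
          (perf->read_bandwidth + perf->write_bandwidth);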
To support more memory types, this series adds an abstract distance
calculation algorithm management mechanism, provides an algorithm
implementation based on ACPI HMAT, and uses the general abstract
distance calculation interface in the dax/kmem driver. With this,
dax/kmem can support HBM (high bandwidth memory) in addition to the
original Optane DCPMM.
Changelog:
v3:
- Move the algorithm that calculates abstract distance from read/write
latency/bandwidth from hmat.c to memory-tiers.c per Alistair's
comments.
- Fix putting of memory types in kmem.c on the error path.
v2:
- Fix a typo in 4/4.
- Collected Reviewed-by and Tested-by tags.
v1 (from RFC):
- Added some comments per Aneesh's feedback. Thanks!
Best Regards,
Huang, Ying
* [PATCH -V3 1/4] memory tiering: add abstract distance calculation algorithms management
2023-09-12 8:20 [PATCH -V3 0/4] memory tiering: calculate abstract distance based on ACPI HMAT Huang Ying
@ 2023-09-12 8:20 ` Huang Ying
2023-09-14 17:29 ` Dave Jiang
2023-09-19 5:13 ` Alistair Popple
2023-09-12 8:20 ` [PATCH -V3 2/4] acpi, hmat: refactor hmat_register_target_initiators() Huang Ying
` (2 subsequent siblings)
3 siblings, 2 replies; 14+ messages in thread
From: Huang Ying @ 2023-09-12 8:20 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Huang Ying, Bharata B Rao,
Aneesh Kumar K . V, Wei Xu, Alistair Popple, Dan Williams,
Dave Hansen, Davidlohr Bueso, Johannes Weiner, Jonathan Cameron,
Michal Hocko, Yang Shi, Dave Jiang, Rafael J Wysocki
The abstract distance may be calculated by various drivers, such as
ACPI HMAT, CXL CDAT, etc., while it may be used by various code that
hot-adds memory nodes, such as dax/kmem. To decouple the algorithm
users from the providers, this patch implements a management
mechanism for abstract distance calculation algorithms. It provides
an interface for the providers to register their implementations, and
an interface for the users to invoke them.
Multiple algorithm implementations can cooperate, each calculating
the abstract distance for different memory nodes. The preference
among algorithm implementations can be specified via
priority (notifier_block.priority).
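For illustration only (not part of this patch), a provider could look
roughly like the following sketch; example_node_is_managed() is a
hypothetical stand-in for driver-specific logic:
  static int example_calc_adistance(struct notifier_block *nb,
                                    unsigned long nid, void *data)
  {
          int *adist = data;

          /* Not our node: let lower priority algorithms try. */
          if (!example_node_is_managed(nid))
                  return NOTIFY_OK;
          /* Provide a driver-specific result, e.g. 2 * DRAM distance. */
          *adist = MEMTIER_ADISTANCE_DRAM * 2;
          return NOTIFY_STOP;
  }

  static struct notifier_block example_adist_nb = {
          .notifier_call  = example_calc_adistance,
          .priority       = 100,
  };

  /* at driver init: */
  err = register_mt_adistance_algorithm(&example_adist_nb);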
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Tested-by: Bharata B Rao <bharata@amd.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Wei Xu <weixugc@google.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Rafael J Wysocki <rafael.j.wysocki@intel.com>
---
include/linux/memory-tiers.h | 19 ++++++++++++
mm/memory-tiers.c | 59 ++++++++++++++++++++++++++++++++++++
2 files changed, 78 insertions(+)
diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
index 437441cdf78f..c8382220cced 100644
--- a/include/linux/memory-tiers.h
+++ b/include/linux/memory-tiers.h
@@ -6,6 +6,7 @@
#include <linux/nodemask.h>
#include <linux/kref.h>
#include <linux/mmzone.h>
+#include <linux/notifier.h>
/*
* Each tier cover a abstrace distance chunk size of 128
*/
@@ -36,6 +37,9 @@ struct memory_dev_type *alloc_memory_type(int adistance);
void put_memory_type(struct memory_dev_type *memtype);
void init_node_memory_type(int node, struct memory_dev_type *default_type);
void clear_node_memory_type(int node, struct memory_dev_type *memtype);
+int register_mt_adistance_algorithm(struct notifier_block *nb);
+int unregister_mt_adistance_algorithm(struct notifier_block *nb);
+int mt_calc_adistance(int node, int *adist);
#ifdef CONFIG_MIGRATION
int next_demotion_node(int node);
void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets);
@@ -97,5 +101,20 @@ static inline bool node_is_toptier(int node)
{
return true;
}
+
+static inline int register_mt_adistance_algorithm(struct notifier_block *nb)
+{
+ return 0;
+}
+
+static inline int unregister_mt_adistance_algorithm(struct notifier_block *nb)
+{
+ return 0;
+}
+
+static inline int mt_calc_adistance(int node, int *adist)
+{
+ return NOTIFY_DONE;
+}
#endif /* CONFIG_NUMA */
#endif /* _LINUX_MEMORY_TIERS_H */
diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
index 37a4f59d9585..76c0ad47a5ad 100644
--- a/mm/memory-tiers.c
+++ b/mm/memory-tiers.c
@@ -5,6 +5,7 @@
#include <linux/kobject.h>
#include <linux/memory.h>
#include <linux/memory-tiers.h>
+#include <linux/notifier.h>
#include "internal.h"
@@ -105,6 +106,8 @@ static int top_tier_adistance;
static struct demotion_nodes *node_demotion __read_mostly;
#endif /* CONFIG_MIGRATION */
+static BLOCKING_NOTIFIER_HEAD(mt_adistance_algorithms);
+
static inline struct memory_tier *to_memory_tier(struct device *device)
{
return container_of(device, struct memory_tier, dev);
@@ -592,6 +595,62 @@ void clear_node_memory_type(int node, struct memory_dev_type *memtype)
}
EXPORT_SYMBOL_GPL(clear_node_memory_type);
+/**
+ * register_mt_adistance_algorithm() - Register memory tiering abstract distance algorithm
+ * @nb: The notifier block which describes the algorithm
+ *
+ * Return: 0 on success, errno on error.
+ *
+ * Every memory tiering abstract distance algorithm provider needs to
+ * register the algorithm with register_mt_adistance_algorithm(). To
+ * calculate the abstract distance for a specified memory node, the
+ * notifier function will be called unless some higher priority
+ * algorithm has already provided the result. The prototype of the
+ * notifier function is as follows,
+ *
+ * int (*algorithm_notifier)(struct notifier_block *nb,
+ * unsigned long nid, void *data);
+ *
+ * Where "nid" specifies the memory node, "data" is the pointer to the
+ * returned abstract distance (that is, "int *adist"). If the
+ * algorithm provides the result, %NOTIFY_STOP should be returned.
+ * Otherwise, a return value with %NOTIFY_STOP_MASK unset allows the
+ * next algorithm in the chain to provide the result.
+ */
+int register_mt_adistance_algorithm(struct notifier_block *nb)
+{
+ return blocking_notifier_chain_register(&mt_adistance_algorithms, nb);
+}
+EXPORT_SYMBOL_GPL(register_mt_adistance_algorithm);
+
+/**
+ * unregister_mt_adistance_algorithm() - Unregister memory tiering abstract distance algorithm
+ * @nb: the notifier block which describes the algorithm
+ *
+ * Return: 0 on success, errno on error.
+ */
+int unregister_mt_adistance_algorithm(struct notifier_block *nb)
+{
+ return blocking_notifier_chain_unregister(&mt_adistance_algorithms, nb);
+}
+EXPORT_SYMBOL_GPL(unregister_mt_adistance_algorithm);
+
+/**
+ * mt_calc_adistance() - Calculate abstract distance with registered algorithms
+ * @node: the node to calculate abstract distance for
+ * @adist: the returned abstract distance
+ *
+ * Return: if return_value & %NOTIFY_STOP_MASK != 0, then some
+ * abstract distance algorithm has provided the result, which is
+ * returned via @adist. Otherwise, no algorithm could provide the
+ * result and @adist is left unchanged.
+ */
+int mt_calc_adistance(int node, int *adist)
+{
+ return blocking_notifier_call_chain(&mt_adistance_algorithms, node, adist);
+}
+EXPORT_SYMBOL_GPL(mt_calc_adistance);
+
static int __meminit memtier_hotplug_callback(struct notifier_block *self,
unsigned long action, void *_arg)
{
--
2.39.2
* [PATCH -V3 2/4] acpi, hmat: refactor hmat_register_target_initiators()
2023-09-12 8:20 [PATCH -V3 0/4] memory tiering: calculate abstract distance based on ACPI HMAT Huang Ying
2023-09-12 8:20 ` [PATCH -V3 1/4] memory tiering: add abstract distance calculation algorithms management Huang Ying
@ 2023-09-12 8:20 ` Huang Ying
2023-09-14 17:30 ` Dave Jiang
2023-09-12 8:21 ` [PATCH -V3 3/4] acpi, hmat: calculate abstract distance with HMAT Huang Ying
2023-09-12 8:21 ` [PATCH -V3 4/4] dax, kmem: calculate abstract distance with general interface Huang Ying
3 siblings, 1 reply; 14+ messages in thread
From: Huang Ying @ 2023-09-12 8:20 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Huang Ying, Alistair Popple,
Bharata B Rao, Aneesh Kumar K . V, Wei Xu, Dan Williams,
Dave Hansen, Davidlohr Bueso, Johannes Weiner, Jonathan Cameron,
Michal Hocko, Yang Shi, Dave Jiang, Rafael J Wysocki
Previously, hmat_register_target_initiators() both calculated the
performance attributes and created the corresponding sysfs links and
files. It is called during memory onlining.
But now, to calculate the abstract distance of a memory target before
memory onlining, we need to calculate its performance attributes
without creating sysfs links and files.
To do that, hmat_register_target_initiators() is refactored to make it
possible to calculate performance attributes separately.
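In outline, the resulting call structure (summarizing the diff below)
is:
  hmat_register_target_initiators(target)
    -> __hmat_register_target_initiators(target, p_nodes, 0)
    -> __hmat_register_target_initiators(target, p_nodes, 1)
  __hmat_register_target_initiators()
    -> hmat_update_target_attrs()                 /* compute attributes only */
    -> register_memory_node_under_compute_node()  /* create sysfs links */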
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Reviewed-by: Alistair Popple <apopple@nvidia.com>
Tested-by: Alistair Popple <apopple@nvidia.com>
Tested-by: Bharata B Rao <bharata@amd.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Wei Xu <weixugc@google.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Rafael J Wysocki <rafael.j.wysocki@intel.com>
---
drivers/acpi/numa/hmat.c | 81 +++++++++++++++-------------------------
1 file changed, 30 insertions(+), 51 deletions(-)
diff --git a/drivers/acpi/numa/hmat.c b/drivers/acpi/numa/hmat.c
index bba268ecd802..2dee0098f1a9 100644
--- a/drivers/acpi/numa/hmat.c
+++ b/drivers/acpi/numa/hmat.c
@@ -582,28 +582,25 @@ static int initiators_to_nodemask(unsigned long *p_nodes)
return 0;
}
-static void hmat_register_target_initiators(struct memory_target *target)
+static void hmat_update_target_attrs(struct memory_target *target,
+ unsigned long *p_nodes, int access)
{
- static DECLARE_BITMAP(p_nodes, MAX_NUMNODES);
struct memory_initiator *initiator;
- unsigned int mem_nid, cpu_nid;
+ unsigned int cpu_nid;
struct memory_locality *loc = NULL;
u32 best = 0;
- bool access0done = false;
int i;
- mem_nid = pxm_to_node(target->memory_pxm);
+ bitmap_zero(p_nodes, MAX_NUMNODES);
/*
- * If the Address Range Structure provides a local processor pxm, link
+ * If the Address Range Structure provides a local processor pxm, set
* only that one. Otherwise, find the best performance attributes and
- * register all initiators that match.
+ * collect all initiators that match.
*/
if (target->processor_pxm != PXM_INVAL) {
cpu_nid = pxm_to_node(target->processor_pxm);
- register_memory_node_under_compute_node(mem_nid, cpu_nid, 0);
- access0done = true;
- if (node_state(cpu_nid, N_CPU)) {
- register_memory_node_under_compute_node(mem_nid, cpu_nid, 1);
+ if (access == 0 || node_state(cpu_nid, N_CPU)) {
+ set_bit(target->processor_pxm, p_nodes);
return;
}
}
@@ -617,47 +614,10 @@ static void hmat_register_target_initiators(struct memory_target *target)
* We'll also use the sorting to prime the candidate nodes with known
* initiators.
*/
- bitmap_zero(p_nodes, MAX_NUMNODES);
list_sort(NULL, &initiators, initiator_cmp);
if (initiators_to_nodemask(p_nodes) < 0)
return;
- if (!access0done) {
- for (i = WRITE_LATENCY; i <= READ_BANDWIDTH; i++) {
- loc = localities_types[i];
- if (!loc)
- continue;
-
- best = 0;
- list_for_each_entry(initiator, &initiators, node) {
- u32 value;
-
- if (!test_bit(initiator->processor_pxm, p_nodes))
- continue;
-
- value = hmat_initiator_perf(target, initiator,
- loc->hmat_loc);
- if (hmat_update_best(loc->hmat_loc->data_type, value, &best))
- bitmap_clear(p_nodes, 0, initiator->processor_pxm);
- if (value != best)
- clear_bit(initiator->processor_pxm, p_nodes);
- }
- if (best)
- hmat_update_target_access(target, loc->hmat_loc->data_type,
- best, 0);
- }
-
- for_each_set_bit(i, p_nodes, MAX_NUMNODES) {
- cpu_nid = pxm_to_node(i);
- register_memory_node_under_compute_node(mem_nid, cpu_nid, 0);
- }
- }
-
- /* Access 1 ignores Generic Initiators */
- bitmap_zero(p_nodes, MAX_NUMNODES);
- if (initiators_to_nodemask(p_nodes) < 0)
- return;
-
for (i = WRITE_LATENCY; i <= READ_BANDWIDTH; i++) {
loc = localities_types[i];
if (!loc)
@@ -667,7 +627,7 @@ static void hmat_register_target_initiators(struct memory_target *target)
list_for_each_entry(initiator, &initiators, node) {
u32 value;
- if (!initiator->has_cpu) {
+ if (access == 1 && !initiator->has_cpu) {
clear_bit(initiator->processor_pxm, p_nodes);
continue;
}
@@ -681,14 +641,33 @@ static void hmat_register_target_initiators(struct memory_target *target)
clear_bit(initiator->processor_pxm, p_nodes);
}
if (best)
- hmat_update_target_access(target, loc->hmat_loc->data_type, best, 1);
+ hmat_update_target_access(target, loc->hmat_loc->data_type, best, access);
}
+}
+
+static void __hmat_register_target_initiators(struct memory_target *target,
+ unsigned long *p_nodes,
+ int access)
+{
+ unsigned int mem_nid, cpu_nid;
+ int i;
+
+ mem_nid = pxm_to_node(target->memory_pxm);
+ hmat_update_target_attrs(target, p_nodes, access);
for_each_set_bit(i, p_nodes, MAX_NUMNODES) {
cpu_nid = pxm_to_node(i);
- register_memory_node_under_compute_node(mem_nid, cpu_nid, 1);
+ register_memory_node_under_compute_node(mem_nid, cpu_nid, access);
}
}
+static void hmat_register_target_initiators(struct memory_target *target)
+{
+ static DECLARE_BITMAP(p_nodes, MAX_NUMNODES);
+
+ __hmat_register_target_initiators(target, p_nodes, 0);
+ __hmat_register_target_initiators(target, p_nodes, 1);
+}
+
static void hmat_register_target_cache(struct memory_target *target)
{
unsigned mem_nid = pxm_to_node(target->memory_pxm);
--
2.39.2
* [PATCH -V3 3/4] acpi, hmat: calculate abstract distance with HMAT
2023-09-12 8:20 [PATCH -V3 0/4] memory tiering: calculate abstract distance based on ACPI HMAT Huang Ying
2023-09-12 8:20 ` [PATCH -V3 1/4] memory tiering: add abstract distance calculation algorithms management Huang Ying
2023-09-12 8:20 ` [PATCH -V3 2/4] acpi, hmat: refactor hmat_register_target_initiators() Huang Ying
@ 2023-09-12 8:21 ` Huang Ying
2023-09-14 17:31 ` Dave Jiang
2023-09-19 5:14 ` Alistair Popple
2023-09-12 8:21 ` [PATCH -V3 4/4] dax, kmem: calculate abstract distance with general interface Huang Ying
3 siblings, 2 replies; 14+ messages in thread
From: Huang Ying @ 2023-09-12 8:21 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Huang Ying, Bharata B Rao,
Aneesh Kumar K . V, Wei Xu, Alistair Popple, Dan Williams,
Dave Hansen, Davidlohr Bueso, Johannes Weiner, Jonathan Cameron,
Michal Hocko, Yang Shi, Dave Jiang, Rafael J Wysocki
A memory tiering abstract distance calculation algorithm based on ACPI
HMAT is implemented. The basic idea is as follows.
The performance attributes of the system default DRAM nodes are
recorded as the baseline, whose abstract distance is
MEMTIER_ADISTANCE_DRAM. Then, the abstract distance of a memory node
(target) is scaled relative to MEMTIER_ADISTANCE_DRAM based on the
ratio of the node's performance attributes to those of the default
DRAM nodes.
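For example (illustrative numbers only): if a target node's
read + write latency is twice that of the reference DRAM nodes and its
read + write bandwidth is half of theirs, its abstract distance works
out to MEMTIER_ADISTANCE_DRAM * 2 * 2 = 4 * MEMTIER_ADISTANCE_DRAM.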
The functions to record the read/write latency/bandwidth of the
default DRAM nodes and calculate abstract distance according to
read/write latency/bandwidth ratio will be used by CXL CDAT (Coherent
Device Attribute Table) and other memory device drivers. So, they are
put in memory-tiers.c.
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Tested-by: Bharata B Rao <bharata@amd.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Wei Xu <weixugc@google.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Rafael J Wysocki <rafael.j.wysocki@intel.com>
---
drivers/acpi/numa/hmat.c | 62 ++++++++++++++++++++-
include/linux/memory-tiers.h | 18 ++++++
mm/memory-tiers.c | 103 ++++++++++++++++++++++++++++++++++-
3 files changed, 181 insertions(+), 2 deletions(-)
diff --git a/drivers/acpi/numa/hmat.c b/drivers/acpi/numa/hmat.c
index 2dee0098f1a9..64c0810d324b 100644
--- a/drivers/acpi/numa/hmat.c
+++ b/drivers/acpi/numa/hmat.c
@@ -24,6 +24,7 @@
#include <linux/node.h>
#include <linux/sysfs.h>
#include <linux/dax.h>
+#include <linux/memory-tiers.h>
static u8 hmat_revision;
static int hmat_disable __initdata;
@@ -759,6 +760,61 @@ static int hmat_callback(struct notifier_block *self,
return NOTIFY_OK;
}
+static int hmat_set_default_dram_perf(void)
+{
+ int rc;
+ int nid, pxm;
+ struct memory_target *target;
+ struct node_hmem_attrs *attrs;
+
+ if (!default_dram_type)
+ return -EIO;
+
+ for_each_node_mask(nid, default_dram_type->nodes) {
+ pxm = node_to_pxm(nid);
+ target = find_mem_target(pxm);
+ if (!target)
+ continue;
+ attrs = &target->hmem_attrs[1];
+ rc = mt_set_default_dram_perf(nid, attrs, "ACPI HMAT");
+ if (rc)
+ return rc;
+ }
+
+ return 0;
+}
+
+static int hmat_calculate_adistance(struct notifier_block *self,
+ unsigned long nid, void *data)
+{
+ static DECLARE_BITMAP(p_nodes, MAX_NUMNODES);
+ struct memory_target *target;
+ struct node_hmem_attrs *perf;
+ int *adist = data;
+ int pxm;
+
+ pxm = node_to_pxm(nid);
+ target = find_mem_target(pxm);
+ if (!target)
+ return NOTIFY_OK;
+
+ mutex_lock(&target_lock);
+ hmat_update_target_attrs(target, p_nodes, 1);
+ mutex_unlock(&target_lock);
+
+ perf = &target->hmem_attrs[1];
+
+ if (mt_perf_to_adistance(perf, adist))
+ return NOTIFY_OK;
+
+ return NOTIFY_STOP;
+}
+
+static struct notifier_block hmat_adist_nb __meminitdata = {
+ .notifier_call = hmat_calculate_adistance,
+ .priority = 100,
+};
+
static __init void hmat_free_structures(void)
{
struct memory_target *target, *tnext;
@@ -801,6 +857,7 @@ static __init int hmat_init(void)
struct acpi_table_header *tbl;
enum acpi_hmat_type i;
acpi_status status;
+ int usage;
if (srat_disabled() || hmat_disable)
return 0;
@@ -841,7 +898,10 @@ static __init int hmat_init(void)
hmat_register_targets();
/* Keep the table and structures if the notifier may use them */
- if (!hotplug_memory_notifier(hmat_callback, HMAT_CALLBACK_PRI))
+ usage = !hotplug_memory_notifier(hmat_callback, HMAT_CALLBACK_PRI);
+ if (!hmat_set_default_dram_perf())
+ usage += !register_mt_adistance_algorithm(&hmat_adist_nb);
+ if (usage)
return 0;
out_put:
hmat_free_structures();
diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
index c8382220cced..9d27ca3b143e 100644
--- a/include/linux/memory-tiers.h
+++ b/include/linux/memory-tiers.h
@@ -31,8 +31,11 @@ struct memory_dev_type {
struct kref kref;
};
+struct node_hmem_attrs;
+
#ifdef CONFIG_NUMA
extern bool numa_demotion_enabled;
+extern struct memory_dev_type *default_dram_type;
struct memory_dev_type *alloc_memory_type(int adistance);
void put_memory_type(struct memory_dev_type *memtype);
void init_node_memory_type(int node, struct memory_dev_type *default_type);
@@ -40,6 +43,9 @@ void clear_node_memory_type(int node, struct memory_dev_type *memtype);
int register_mt_adistance_algorithm(struct notifier_block *nb);
int unregister_mt_adistance_algorithm(struct notifier_block *nb);
int mt_calc_adistance(int node, int *adist);
+int mt_set_default_dram_perf(int nid, struct node_hmem_attrs *perf,
+ const char *source);
+int mt_perf_to_adistance(struct node_hmem_attrs *perf, int *adist);
#ifdef CONFIG_MIGRATION
int next_demotion_node(int node);
void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets);
@@ -64,6 +70,7 @@ static inline bool node_is_toptier(int node)
#else
#define numa_demotion_enabled false
+#define default_dram_type NULL
/*
* CONFIG_NUMA implementation returns non NULL error.
*/
@@ -116,5 +123,16 @@ static inline int mt_calc_adistance(int node, int *adist)
{
return NOTIFY_DONE;
}
+
+static inline int mt_set_default_dram_perf(int nid, struct node_hmem_attrs *perf,
+ const char *source)
+{
+ return -EIO;
+}
+
+static inline int mt_perf_to_adistance(struct node_hmem_attrs *perf, int *adist)
+{
+ return -EIO;
+}
#endif /* CONFIG_NUMA */
#endif /* _LINUX_MEMORY_TIERS_H */
diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
index 76c0ad47a5ad..fa1a8b418f9a 100644
--- a/mm/memory-tiers.c
+++ b/mm/memory-tiers.c
@@ -37,7 +37,7 @@ struct node_memory_type_map {
static DEFINE_MUTEX(memory_tier_lock);
static LIST_HEAD(memory_tiers);
static struct node_memory_type_map node_memory_types[MAX_NUMNODES];
-static struct memory_dev_type *default_dram_type;
+struct memory_dev_type *default_dram_type;
static struct bus_type memory_tier_subsys = {
.name = "memory_tiering",
@@ -108,6 +108,11 @@ static struct demotion_nodes *node_demotion __read_mostly;
static BLOCKING_NOTIFIER_HEAD(mt_adistance_algorithms);
+static bool default_dram_perf_error;
+static struct node_hmem_attrs default_dram_perf;
+static int default_dram_perf_ref_nid = NUMA_NO_NODE;
+static const char *default_dram_perf_ref_source;
+
static inline struct memory_tier *to_memory_tier(struct device *device)
{
return container_of(device, struct memory_tier, dev);
@@ -595,6 +600,102 @@ void clear_node_memory_type(int node, struct memory_dev_type *memtype)
}
EXPORT_SYMBOL_GPL(clear_node_memory_type);
+static void dump_hmem_attrs(struct node_hmem_attrs *attrs, const char *prefix)
+{
+ pr_info(
+"%sread_latency: %u, write_latency: %u, read_bandwidth: %u, write_bandwidth: %u\n",
+ prefix, attrs->read_latency, attrs->write_latency,
+ attrs->read_bandwidth, attrs->write_bandwidth);
+}
+
+int mt_set_default_dram_perf(int nid, struct node_hmem_attrs *perf,
+ const char *source)
+{
+ int rc = 0;
+
+ mutex_lock(&memory_tier_lock);
+ if (default_dram_perf_error) {
+ rc = -EIO;
+ goto out;
+ }
+
+ if (perf->read_latency + perf->write_latency == 0 ||
+ perf->read_bandwidth + perf->write_bandwidth == 0) {
+ rc = -EINVAL;
+ goto out;
+ }
+
+ if (default_dram_perf_ref_nid == NUMA_NO_NODE) {
+ default_dram_perf = *perf;
+ default_dram_perf_ref_nid = nid;
+ default_dram_perf_ref_source = kstrdup(source, GFP_KERNEL);
+ goto out;
+ }
+
+ /*
+ * The performance of all default DRAM nodes is expected to be
+ * the same (that is, the variation is less than 10%). It will
+ * be used as the base to calculate the abstract distance of
+ * other memory nodes.
+ */
+ if (abs(perf->read_latency - default_dram_perf.read_latency) * 10 >
+ default_dram_perf.read_latency ||
+ abs(perf->write_latency - default_dram_perf.write_latency) * 10 >
+ default_dram_perf.write_latency ||
+ abs(perf->read_bandwidth - default_dram_perf.read_bandwidth) * 10 >
+ default_dram_perf.read_bandwidth ||
+ abs(perf->write_bandwidth - default_dram_perf.write_bandwidth) * 10 >
+ default_dram_perf.write_bandwidth) {
+ pr_info(
+"memory-tiers: the performance of DRAM node %d mismatches that of the reference\n"
+"DRAM node %d.\n", nid, default_dram_perf_ref_nid);
+ pr_info(" performance of reference DRAM node %d:\n",
+ default_dram_perf_ref_nid);
+ dump_hmem_attrs(&default_dram_perf, " ");
+ pr_info(" performance of DRAM node %d:\n", nid);
+ dump_hmem_attrs(perf, " ");
+ pr_info(
+" disable default DRAM node performance based abstract distance algorithm.\n");
+ default_dram_perf_error = true;
+ rc = -EINVAL;
+ }
+
+out:
+ mutex_unlock(&memory_tier_lock);
+ return rc;
+}
+
+int mt_perf_to_adistance(struct node_hmem_attrs *perf, int *adist)
+{
+ if (default_dram_perf_error)
+ return -EIO;
+
+ if (default_dram_perf_ref_nid == NUMA_NO_NODE)
+ return -ENOENT;
+
+ if (perf->read_latency + perf->write_latency == 0 ||
+ perf->read_bandwidth + perf->write_bandwidth == 0)
+ return -EINVAL;
+
+ mutex_lock(&memory_tier_lock);
+ /*
+ * The abstract distance of a memory node is in direct proportion to
+ * its memory latency (read + write) and inversely proportional to its
+ * memory bandwidth (read + write). The abstract distance, memory
+ * latency, and memory bandwidth of the default DRAM nodes are used as
+ * the base.
+ */
+ *adist = MEMTIER_ADISTANCE_DRAM *
+ (perf->read_latency + perf->write_latency) /
+ (default_dram_perf.read_latency + default_dram_perf.write_latency) *
+ (default_dram_perf.read_bandwidth + default_dram_perf.write_bandwidth) /
+ (perf->read_bandwidth + perf->write_bandwidth);
+ mutex_unlock(&memory_tier_lock);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(mt_perf_to_adistance);
+
/**
* register_mt_adistance_algorithm() - Register memory tiering abstract distance algorithm
* @nb: The notifier block which describe the algorithm
--
2.39.2
* [PATCH -V3 4/4] dax, kmem: calculate abstract distance with general interface
2023-09-12 8:20 [PATCH -V3 0/4] memory tiering: calculate abstract distance based on ACPI HMAT Huang Ying
` (2 preceding siblings ...)
2023-09-12 8:21 ` [PATCH -V3 3/4] acpi, hmat: calculate abstract distance with HMAT Huang Ying
@ 2023-09-12 8:21 ` Huang Ying
2023-09-14 17:31 ` Dave Jiang
2023-09-19 5:31 ` Alistair Popple
3 siblings, 2 replies; 14+ messages in thread
From: Huang Ying @ 2023-09-12 8:21 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Huang Ying, Bharata B Rao,
Aneesh Kumar K . V, Wei Xu, Alistair Popple, Dan Williams,
Dave Hansen, Davidlohr Bueso, Johannes Weiner, Jonathan Cameron,
Michal Hocko, Yang Shi, Dave Jiang, Rafael J Wysocki
Previously, a fixed abstract distance, MEMTIER_DEFAULT_DAX_ADISTANCE,
was used for the slow memory type in the kmem driver. This limits the
usage of the kmem driver; for example, it cannot be used for HBM
(high bandwidth memory).
So, we use the general abstract distance calculation mechanism in the
kmem driver to get a more accurate abstract distance on systems with
proper support. The original MEMTIER_DEFAULT_DAX_ADISTANCE is used
only as a fallback.
Now, multiple memory types may be managed by kmem. These memory types
are put into the "kmem_memory_types" list and protected by
kmem_memory_type_lock.
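In outline, the probe path below reduces to the following sketch
(mirroring the diff):
  int adist = MEMTIER_DEFAULT_DAX_ADISTANCE;

  /* adist keeps the fallback unless some algorithm returns NOTIFY_STOP. */
  mt_calc_adistance(numa_node, &adist);
  mtype = kmem_find_alloc_memory_type(adist);  /* reuse or allocate */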
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Tested-by: Bharata B Rao <bharata@amd.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Wei Xu <weixugc@google.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Rafael J Wysocki <rafael.j.wysocki@intel.com>
---
drivers/dax/kmem.c | 62 ++++++++++++++++++++++++++++--------
include/linux/memory-tiers.h | 2 ++
mm/memory-tiers.c | 2 +-
3 files changed, 52 insertions(+), 14 deletions(-)
diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c
index c57acb73e3db..369c698b7706 100644
--- a/drivers/dax/kmem.c
+++ b/drivers/dax/kmem.c
@@ -49,14 +49,52 @@ struct dax_kmem_data {
struct resource *res[];
};
-static struct memory_dev_type *dax_slowmem_type;
+static DEFINE_MUTEX(kmem_memory_type_lock);
+static LIST_HEAD(kmem_memory_types);
+
+static struct memory_dev_type *kmem_find_alloc_memory_type(int adist)
+{
+ bool found = false;
+ struct memory_dev_type *mtype;
+
+ mutex_lock(&kmem_memory_type_lock);
+ list_for_each_entry(mtype, &kmem_memory_types, list) {
+ if (mtype->adistance == adist) {
+ found = true;
+ break;
+ }
+ }
+ if (!found) {
+ mtype = alloc_memory_type(adist);
+ if (!IS_ERR(mtype))
+ list_add(&mtype->list, &kmem_memory_types);
+ }
+ mutex_unlock(&kmem_memory_type_lock);
+
+ return mtype;
+}
+
+static void kmem_put_memory_types(void)
+{
+ struct memory_dev_type *mtype, *mtn;
+
+ mutex_lock(&kmem_memory_type_lock);
+ list_for_each_entry_safe(mtype, mtn, &kmem_memory_types, list) {
+ list_del(&mtype->list);
+ put_memory_type(mtype);
+ }
+ mutex_unlock(&kmem_memory_type_lock);
+}
+
static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
{
struct device *dev = &dev_dax->dev;
unsigned long total_len = 0;
struct dax_kmem_data *data;
+ struct memory_dev_type *mtype;
int i, rc, mapped = 0;
int numa_node;
+ int adist = MEMTIER_DEFAULT_DAX_ADISTANCE;
/*
* Ensure good NUMA information for the persistent memory.
@@ -71,6 +109,11 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
return -EINVAL;
}
+ mt_calc_adistance(numa_node, &adist);
+ mtype = kmem_find_alloc_memory_type(adist);
+ if (IS_ERR(mtype))
+ return PTR_ERR(mtype);
+
for (i = 0; i < dev_dax->nr_range; i++) {
struct range range;
@@ -88,7 +131,7 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
return -EINVAL;
}
- init_node_memory_type(numa_node, dax_slowmem_type);
+ init_node_memory_type(numa_node, mtype);
rc = -ENOMEM;
data = kzalloc(struct_size(data, res, dev_dax->nr_range), GFP_KERNEL);
@@ -167,7 +210,7 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
err_res_name:
kfree(data);
err_dax_kmem_data:
- clear_node_memory_type(numa_node, dax_slowmem_type);
+ clear_node_memory_type(numa_node, mtype);
return rc;
}
@@ -219,7 +262,7 @@ static void dev_dax_kmem_remove(struct dev_dax *dev_dax)
* for that. This implies this reference will be around
* till next reboot.
*/
- clear_node_memory_type(node, dax_slowmem_type);
+ clear_node_memory_type(node, NULL);
}
}
#else
@@ -251,12 +294,6 @@ static int __init dax_kmem_init(void)
if (!kmem_name)
return -ENOMEM;
- dax_slowmem_type = alloc_memory_type(MEMTIER_DEFAULT_DAX_ADISTANCE);
- if (IS_ERR(dax_slowmem_type)) {
- rc = PTR_ERR(dax_slowmem_type);
- goto err_dax_slowmem_type;
- }
-
rc = dax_driver_register(&device_dax_kmem_driver);
if (rc)
goto error_dax_driver;
@@ -264,8 +301,7 @@ static int __init dax_kmem_init(void)
return rc;
error_dax_driver:
- put_memory_type(dax_slowmem_type);
-err_dax_slowmem_type:
+ kmem_put_memory_types();
kfree_const(kmem_name);
return rc;
}
@@ -275,7 +311,7 @@ static void __exit dax_kmem_exit(void)
dax_driver_unregister(&device_dax_kmem_driver);
if (!any_hotremove_failed)
kfree_const(kmem_name);
- put_memory_type(dax_slowmem_type);
+ kmem_put_memory_types();
}
MODULE_AUTHOR("Intel Corporation");
diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
index 9d27ca3b143e..ab6651402d7e 100644
--- a/include/linux/memory-tiers.h
+++ b/include/linux/memory-tiers.h
@@ -24,6 +24,8 @@ struct memory_tier;
struct memory_dev_type {
/* list of memory types that are part of same tier as this type */
struct list_head tier_sibiling;
+ /* list of memory types that are managed by one driver */
+ struct list_head list;
/* abstract distance for this specific memory type */
int adistance;
/* Nodes of same abstract distance */
diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
index fa1a8b418f9a..ca68ef17554b 100644
--- a/mm/memory-tiers.c
+++ b/mm/memory-tiers.c
@@ -586,7 +586,7 @@ EXPORT_SYMBOL_GPL(init_node_memory_type);
void clear_node_memory_type(int node, struct memory_dev_type *memtype)
{
mutex_lock(&memory_tier_lock);
- if (node_memory_types[node].memtype == memtype)
+ if (node_memory_types[node].memtype == memtype || !memtype)
node_memory_types[node].map_count--;
/*
* If we umapped all the attached devices to this node,
--
2.39.2
* Re: [PATCH -V3 1/4] memory tiering: add abstract distance calculation algorithms management
2023-09-12 8:20 ` [PATCH -V3 1/4] memory tiering: add abstract distance calculation algorithms management Huang Ying
@ 2023-09-14 17:29 ` Dave Jiang
2023-09-19 5:13 ` Alistair Popple
1 sibling, 0 replies; 14+ messages in thread
From: Dave Jiang @ 2023-09-14 17:29 UTC (permalink / raw)
To: Huang Ying, Andrew Morton
Cc: linux-mm, linux-kernel, Bharata B Rao, Aneesh Kumar K . V, Wei Xu,
Alistair Popple, Dan Williams, Dave Hansen, Davidlohr Bueso,
Johannes Weiner, Jonathan Cameron, Michal Hocko, Yang Shi,
Rafael J Wysocki
On 9/12/23 01:20, Huang Ying wrote:
> [snip]
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
* Re: [PATCH -V3 2/4] acpi, hmat: refactor hmat_register_target_initiators()
2023-09-12 8:20 ` [PATCH -V3 2/4] acpi, hmat: refactor hmat_register_target_initiators() Huang Ying
@ 2023-09-14 17:30 ` Dave Jiang
0 siblings, 0 replies; 14+ messages in thread
From: Dave Jiang @ 2023-09-14 17:30 UTC (permalink / raw)
To: Huang Ying, Andrew Morton
Cc: linux-mm, linux-kernel, Alistair Popple, Bharata B Rao,
Aneesh Kumar K . V, Wei Xu, Dan Williams, Dave Hansen,
Davidlohr Bueso, Johannes Weiner, Jonathan Cameron, Michal Hocko,
Yang Shi, Rafael J Wysocki
On 9/12/23 01:20, Huang Ying wrote:
> [snip]
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
* Re: [PATCH -V3 3/4] acpi, hmat: calculate abstract distance with HMAT
2023-09-12 8:21 ` [PATCH -V3 3/4] acpi, hmat: calculate abstract distance with HMAT Huang Ying
@ 2023-09-14 17:31 ` Dave Jiang
2023-09-19 5:14 ` Alistair Popple
1 sibling, 0 replies; 14+ messages in thread
From: Dave Jiang @ 2023-09-14 17:31 UTC (permalink / raw)
To: Huang Ying, Andrew Morton
Cc: linux-mm, linux-kernel, Bharata B Rao, Aneesh Kumar K . V, Wei Xu,
Alistair Popple, Dan Williams, Dave Hansen, Davidlohr Bueso,
Johannes Weiner, Jonathan Cameron, Michal Hocko, Yang Shi,
Rafael J Wysocki
On 9/12/23 01:21, Huang Ying wrote:
> A memory tiering abstract distance calculation algorithm based on ACPI
> HMAT is implemented. The basic idea is as follows.
>
> The performance attributes of system default DRAM nodes are recorded
> as the base line. Whose abstract distance is MEMTIER_ADISTANCE_DRAM.
> Then, the ratio of the abstract distance of a memory node (target) to
> MEMTIER_ADISTANCE_DRAM is scaled based on the ratio of the performance
> attributes of the node to that of the default DRAM nodes.
>
> The functions to record the read/write latency/bandwidth of the
> default DRAM nodes and calculate abstract distance according to
> read/write latency/bandwidth ratio will be used by CXL CDAT (Coherent
> Device Attribute Table) and other memory device drivers. So, they are
> put in memory-tiers.c.
>
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Tested-by: Bharata B Rao <bharata@amd.com>
> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> Cc: Wei Xu <weixugc@google.com>
> Cc: Alistair Popple <apopple@nvidia.com>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Davidlohr Bueso <dave@stgolabs.net>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Yang Shi <shy828301@gmail.com>
> Cc: Dave Jiang <dave.jiang@intel.com>
> Cc: Rafael J Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
> ---
> drivers/acpi/numa/hmat.c | 62 ++++++++++++++++++++-
> include/linux/memory-tiers.h | 18 ++++++
> mm/memory-tiers.c | 103 ++++++++++++++++++++++++++++++++++-
> 3 files changed, 181 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/acpi/numa/hmat.c b/drivers/acpi/numa/hmat.c
> index 2dee0098f1a9..64c0810d324b 100644
> --- a/drivers/acpi/numa/hmat.c
> +++ b/drivers/acpi/numa/hmat.c
> @@ -24,6 +24,7 @@
> #include <linux/node.h>
> #include <linux/sysfs.h>
> #include <linux/dax.h>
> +#include <linux/memory-tiers.h>
>
> static u8 hmat_revision;
> static int hmat_disable __initdata;
> @@ -759,6 +760,61 @@ static int hmat_callback(struct notifier_block *self,
> return NOTIFY_OK;
> }
>
> +static int hmat_set_default_dram_perf(void)
> +{
> + int rc;
> + int nid, pxm;
> + struct memory_target *target;
> + struct node_hmem_attrs *attrs;
> +
> + if (!default_dram_type)
> + return -EIO;
> +
> + for_each_node_mask(nid, default_dram_type->nodes) {
> + pxm = node_to_pxm(nid);
> + target = find_mem_target(pxm);
> + if (!target)
> + continue;
> + attrs = &target->hmem_attrs[1];
> + rc = mt_set_default_dram_perf(nid, attrs, "ACPI HMAT");
> + if (rc)
> + return rc;
> + }
> +
> + return 0;
> +}
> +
> +static int hmat_calculate_adistance(struct notifier_block *self,
> + unsigned long nid, void *data)
> +{
> + static DECLARE_BITMAP(p_nodes, MAX_NUMNODES);
> + struct memory_target *target;
> + struct node_hmem_attrs *perf;
> + int *adist = data;
> + int pxm;
> +
> + pxm = node_to_pxm(nid);
> + target = find_mem_target(pxm);
> + if (!target)
> + return NOTIFY_OK;
> +
> + mutex_lock(&target_lock);
> + hmat_update_target_attrs(target, p_nodes, 1);
> + mutex_unlock(&target_lock);
> +
> + perf = &target->hmem_attrs[1];
> +
> + if (mt_perf_to_adistance(perf, adist))
> + return NOTIFY_OK;
> +
> + return NOTIFY_STOP;
> +}
> +
> +static struct notifier_block hmat_adist_nb __meminitdata = {
> + .notifier_call = hmat_calculate_adistance,
> + .priority = 100,
> +};
> +
> static __init void hmat_free_structures(void)
> {
> struct memory_target *target, *tnext;
> @@ -801,6 +857,7 @@ static __init int hmat_init(void)
> struct acpi_table_header *tbl;
> enum acpi_hmat_type i;
> acpi_status status;
> + int usage;
>
> if (srat_disabled() || hmat_disable)
> return 0;
> @@ -841,7 +898,10 @@ static __init int hmat_init(void)
> hmat_register_targets();
>
> /* Keep the table and structures if the notifier may use them */
> - if (!hotplug_memory_notifier(hmat_callback, HMAT_CALLBACK_PRI))
> + usage = !hotplug_memory_notifier(hmat_callback, HMAT_CALLBACK_PRI);
> + if (!hmat_set_default_dram_perf())
> + usage += !register_mt_adistance_algorithm(&hmat_adist_nb);
> + if (usage)
> return 0;
> out_put:
> hmat_free_structures();
> diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
> index c8382220cced..9d27ca3b143e 100644
> --- a/include/linux/memory-tiers.h
> +++ b/include/linux/memory-tiers.h
> @@ -31,8 +31,11 @@ struct memory_dev_type {
> struct kref kref;
> };
>
> +struct node_hmem_attrs;
> +
> #ifdef CONFIG_NUMA
> extern bool numa_demotion_enabled;
> +extern struct memory_dev_type *default_dram_type;
> struct memory_dev_type *alloc_memory_type(int adistance);
> void put_memory_type(struct memory_dev_type *memtype);
> void init_node_memory_type(int node, struct memory_dev_type *default_type);
> @@ -40,6 +43,9 @@ void clear_node_memory_type(int node, struct memory_dev_type *memtype);
> int register_mt_adistance_algorithm(struct notifier_block *nb);
> int unregister_mt_adistance_algorithm(struct notifier_block *nb);
> int mt_calc_adistance(int node, int *adist);
> +int mt_set_default_dram_perf(int nid, struct node_hmem_attrs *perf,
> + const char *source);
> +int mt_perf_to_adistance(struct node_hmem_attrs *perf, int *adist);
> #ifdef CONFIG_MIGRATION
> int next_demotion_node(int node);
> void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets);
> @@ -64,6 +70,7 @@ static inline bool node_is_toptier(int node)
> #else
>
> #define numa_demotion_enabled false
> +#define default_dram_type NULL
> /*
> * CONFIG_NUMA implementation returns non NULL error.
> */
> @@ -116,5 +123,16 @@ static inline int mt_calc_adistance(int node, int *adist)
> {
> return NOTIFY_DONE;
> }
> +
> +static inline int mt_set_default_dram_perf(int nid, struct node_hmem_attrs *perf,
> + const char *source)
> +{
> + return -EIO;
> +}
> +
> +static inline int mt_perf_to_adistance(struct node_hmem_attrs *perf, int *adist)
> +{
> + return -EIO;
> +}
> #endif /* CONFIG_NUMA */
> #endif /* _LINUX_MEMORY_TIERS_H */
> diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
> index 76c0ad47a5ad..fa1a8b418f9a 100644
> --- a/mm/memory-tiers.c
> +++ b/mm/memory-tiers.c
> @@ -37,7 +37,7 @@ struct node_memory_type_map {
> static DEFINE_MUTEX(memory_tier_lock);
> static LIST_HEAD(memory_tiers);
> static struct node_memory_type_map node_memory_types[MAX_NUMNODES];
> -static struct memory_dev_type *default_dram_type;
> +struct memory_dev_type *default_dram_type;
>
> static struct bus_type memory_tier_subsys = {
> .name = "memory_tiering",
> @@ -108,6 +108,11 @@ static struct demotion_nodes *node_demotion __read_mostly;
>
> static BLOCKING_NOTIFIER_HEAD(mt_adistance_algorithms);
>
> +static bool default_dram_perf_error;
> +static struct node_hmem_attrs default_dram_perf;
> +static int default_dram_perf_ref_nid = NUMA_NO_NODE;
> +static const char *default_dram_perf_ref_source;
> +
> static inline struct memory_tier *to_memory_tier(struct device *device)
> {
> return container_of(device, struct memory_tier, dev);
> @@ -595,6 +600,102 @@ void clear_node_memory_type(int node, struct memory_dev_type *memtype)
> }
> EXPORT_SYMBOL_GPL(clear_node_memory_type);
>
> +static void dump_hmem_attrs(struct node_hmem_attrs *attrs, const char *prefix)
> +{
> + pr_info(
> +"%sread_latency: %u, write_latency: %u, read_bandwidth: %u, write_bandwidth: %u\n",
> + prefix, attrs->read_latency, attrs->write_latency,
> + attrs->read_bandwidth, attrs->write_bandwidth);
> +}
> +
> +int mt_set_default_dram_perf(int nid, struct node_hmem_attrs *perf,
> + const char *source)
> +{
> + int rc = 0;
> +
> + mutex_lock(&memory_tier_lock);
> + if (default_dram_perf_error) {
> + rc = -EIO;
> + goto out;
> + }
> +
> + if (perf->read_latency + perf->write_latency == 0 ||
> + perf->read_bandwidth + perf->write_bandwidth == 0) {
> + rc = -EINVAL;
> + goto out;
> + }
> +
> + if (default_dram_perf_ref_nid == NUMA_NO_NODE) {
> + default_dram_perf = *perf;
> + default_dram_perf_ref_nid = nid;
> + default_dram_perf_ref_source = kstrdup(source, GFP_KERNEL);
> + goto out;
> + }
> +
> + /*
> + * The performance of all default DRAM nodes is expected to be
> + * same (that is, the variation is less than 10%). And it
> + * will be used as base to calculate the abstract distance of
> + * other memory nodes.
> + */
> + if (abs(perf->read_latency - default_dram_perf.read_latency) * 10 >
> + default_dram_perf.read_latency ||
> + abs(perf->write_latency - default_dram_perf.write_latency) * 10 >
> + default_dram_perf.write_latency ||
> + abs(perf->read_bandwidth - default_dram_perf.read_bandwidth) * 10 >
> + default_dram_perf.read_bandwidth ||
> + abs(perf->write_bandwidth - default_dram_perf.write_bandwidth) * 10 >
> + default_dram_perf.write_bandwidth) {
> + pr_info(
> +"memory-tiers: the performance of DRAM node %d mismatches that of the reference\n"
> +"DRAM node %d.\n", nid, default_dram_perf_ref_nid);
> + pr_info(" performance of reference DRAM node %d:\n",
> + default_dram_perf_ref_nid);
> + dump_hmem_attrs(&default_dram_perf, " ");
> + pr_info(" performance of DRAM node %d:\n", nid);
> + dump_hmem_attrs(perf, " ");
> + pr_info(
> +" disable default DRAM node performance based abstract distance algorithm.\n");
> + default_dram_perf_error = true;
> + rc = -EINVAL;
> + }
> +
> +out:
> + mutex_unlock(&memory_tier_lock);
> + return rc;
> +}
> +
> +int mt_perf_to_adistance(struct node_hmem_attrs *perf, int *adist)
> +{
> + if (default_dram_perf_error)
> + return -EIO;
> +
> + if (default_dram_perf_ref_nid == NUMA_NO_NODE)
> + return -ENOENT;
> +
> + if (perf->read_latency + perf->write_latency == 0 ||
> + perf->read_bandwidth + perf->write_bandwidth == 0)
> + return -EINVAL;
> +
> + mutex_lock(&memory_tier_lock);
> + /*
> + * The abstract distance of a memory node is in direct proportion to
> + * its memory latency (read + write) and inversely proportional to its
> + * memory bandwidth (read + write). The abstract distance, memory
> + * latency, and memory bandwidth of the default DRAM nodes are used as
> + * the base.
> + */
> + *adist = MEMTIER_ADISTANCE_DRAM *
> + (perf->read_latency + perf->write_latency) /
> + (default_dram_perf.read_latency + default_dram_perf.write_latency) *
> + (default_dram_perf.read_bandwidth + default_dram_perf.write_bandwidth) /
> + (perf->read_bandwidth + perf->write_bandwidth);
> + mutex_unlock(&memory_tier_lock);
> +
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(mt_perf_to_adistance);
> +
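As a quick sanity check of the integer math (all numbers below are
invented, and this assumes MEMTIER_ADISTANCE_DRAM == 576 with
MEMTIER_CHUNK_SIZE == 128), a caller could do something like:

	/* reference DRAM recorded as 80/80 latency, 20000/20000 bandwidth */
	struct node_hmem_attrs slow = {
		/* twice the latency, half the bandwidth of the reference */
		.read_latency = 160, .write_latency = 160,
		.read_bandwidth = 10000, .write_bandwidth = 10000,
	};
	int adist;

	if (!mt_perf_to_adistance(&slow, &adist))
		pr_info("adist: %d\n", adist);
	/* 576 * (320 / 160) * (40000 / 20000) == 2304 */

Note also that multiplying by MEMTIER_ADISTANCE_DRAM before dividing
keeps the rounding error small for perf values close to the reference.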
> /**
> * register_mt_adistance_algorithm() - Register memory tiering abstract distance algorithm
> + * @nb: The notifier block which describes the algorithm
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH -V3 4/4] dax, kmem: calculate abstract distance with general interface
2023-09-12 8:21 ` [PATCH -V3 4/4] dax, kmem: calculate abstract distance with general interface Huang Ying
@ 2023-09-14 17:31 ` Dave Jiang
2023-09-19 5:31 ` Alistair Popple
1 sibling, 0 replies; 14+ messages in thread
From: Dave Jiang @ 2023-09-14 17:31 UTC (permalink / raw)
To: Huang Ying, Andrew Morton
Cc: linux-mm, linux-kernel, Bharata B Rao, Aneesh Kumar K . V, Wei Xu,
Alistair Popple, Dan Williams, Dave Hansen, Davidlohr Bueso,
Johannes Weiner, Jonathan Cameron, Michal Hocko, Yang Shi,
Rafael J Wysocki
On 9/12/23 01:21, Huang Ying wrote:
> Previously, a fixed abstract distance, MEMTIER_DEFAULT_DAX_ADISTANCE,
> was used for the slow memory type in the kmem driver. This limits the
> usage of the kmem driver; for example, it cannot be used for HBM (high
> bandwidth memory).
>
> So, we use the general abstract distance calculation mechanism in the
> kmem driver to get a more accurate abstract distance on systems with
> proper support. The original MEMTIER_DEFAULT_DAX_ADISTANCE is used
> only as a fallback.
>
> Now, multiple memory types may be managed by kmem. These memory types
> are put into the "kmem_memory_types" list and protected by
> kmem_memory_type_lock.
>
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Tested-by: Bharata B Rao <bharata@amd.com>
> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> Cc: Wei Xu <weixugc@google.com>
> Cc: Alistair Popple <apopple@nvidia.com>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Davidlohr Bueso <dave@stgolabs.net>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Yang Shi <shy828301@gmail.com>
> Cc: Dave Jiang <dave.jiang@intel.com>
> Cc: Rafael J Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
> ---
> drivers/dax/kmem.c | 62 ++++++++++++++++++++++++++++--------
> include/linux/memory-tiers.h | 2 ++
> mm/memory-tiers.c | 2 +-
> 3 files changed, 52 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c
> index c57acb73e3db..369c698b7706 100644
> --- a/drivers/dax/kmem.c
> +++ b/drivers/dax/kmem.c
> @@ -49,14 +49,52 @@ struct dax_kmem_data {
> struct resource *res[];
> };
>
> -static struct memory_dev_type *dax_slowmem_type;
> +static DEFINE_MUTEX(kmem_memory_type_lock);
> +static LIST_HEAD(kmem_memory_types);
> +
> +static struct memory_dev_type *kmem_find_alloc_memory_type(int adist)
> +{
> + bool found = false;
> + struct memory_dev_type *mtype;
> +
> + mutex_lock(&kmem_memory_type_lock);
> + list_for_each_entry(mtype, &kmem_memory_types, list) {
> + if (mtype->adistance == adist) {
> + found = true;
> + break;
> + }
> + }
> + if (!found) {
> + mtype = alloc_memory_type(adist);
> + if (!IS_ERR(mtype))
> + list_add(&mtype->list, &kmem_memory_types);
> + }
> + mutex_unlock(&kmem_memory_type_lock);
> +
> + return mtype;
> +}
> +
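Worth noting: the list is keyed by abstract distance, so two dax
devices whose nodes compute the same adist share a single refcounted
memory_dev_type, and kmem_put_memory_types() below drops them all at
driver exit.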
> +static void kmem_put_memory_types(void)
> +{
> + struct memory_dev_type *mtype, *mtn;
> +
> + mutex_lock(&kmem_memory_type_lock);
> + list_for_each_entry_safe(mtype, mtn, &kmem_memory_types, list) {
> + list_del(&mtype->list);
> + put_memory_type(mtype);
> + }
> + mutex_unlock(&kmem_memory_type_lock);
> +}
> +
> static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
> {
> struct device *dev = &dev_dax->dev;
> unsigned long total_len = 0;
> struct dax_kmem_data *data;
> + struct memory_dev_type *mtype;
> int i, rc, mapped = 0;
> int numa_node;
> + int adist = MEMTIER_DEFAULT_DAX_ADISTANCE;
>
> /*
> * Ensure good NUMA information for the persistent memory.
> @@ -71,6 +109,11 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
> return -EINVAL;
> }
>
> + mt_calc_adistance(numa_node, &adist);
> + mtype = kmem_find_alloc_memory_type(adist);
> + if (IS_ERR(mtype))
> + return PTR_ERR(mtype);
> +
> for (i = 0; i < dev_dax->nr_range; i++) {
> struct range range;
>
> @@ -88,7 +131,7 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
> return -EINVAL;
> }
>
> - init_node_memory_type(numa_node, dax_slowmem_type);
> + init_node_memory_type(numa_node, mtype);
>
> rc = -ENOMEM;
> data = kzalloc(struct_size(data, res, dev_dax->nr_range), GFP_KERNEL);
> @@ -167,7 +210,7 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
> err_res_name:
> kfree(data);
> err_dax_kmem_data:
> - clear_node_memory_type(numa_node, dax_slowmem_type);
> + clear_node_memory_type(numa_node, mtype);
> return rc;
> }
>
> @@ -219,7 +262,7 @@ static void dev_dax_kmem_remove(struct dev_dax *dev_dax)
> * for that. This implies this reference will be around
> * till next reboot.
> */
> - clear_node_memory_type(node, dax_slowmem_type);
> + clear_node_memory_type(node, NULL);
> }
> }
> #else
> @@ -251,12 +294,6 @@ static int __init dax_kmem_init(void)
> if (!kmem_name)
> return -ENOMEM;
>
> - dax_slowmem_type = alloc_memory_type(MEMTIER_DEFAULT_DAX_ADISTANCE);
> - if (IS_ERR(dax_slowmem_type)) {
> - rc = PTR_ERR(dax_slowmem_type);
> - goto err_dax_slowmem_type;
> - }
> -
> rc = dax_driver_register(&device_dax_kmem_driver);
> if (rc)
> goto error_dax_driver;
> @@ -264,8 +301,7 @@ static int __init dax_kmem_init(void)
> return rc;
>
> error_dax_driver:
> - put_memory_type(dax_slowmem_type);
> -err_dax_slowmem_type:
> + kmem_put_memory_types();
> kfree_const(kmem_name);
> return rc;
> }
> @@ -275,7 +311,7 @@ static void __exit dax_kmem_exit(void)
> dax_driver_unregister(&device_dax_kmem_driver);
> if (!any_hotremove_failed)
> kfree_const(kmem_name);
> - put_memory_type(dax_slowmem_type);
> + kmem_put_memory_types();
> }
>
> MODULE_AUTHOR("Intel Corporation");
> diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
> index 9d27ca3b143e..ab6651402d7e 100644
> --- a/include/linux/memory-tiers.h
> +++ b/include/linux/memory-tiers.h
> @@ -24,6 +24,8 @@ struct memory_tier;
> struct memory_dev_type {
> /* list of memory types that are part of same tier as this type */
> struct list_head tier_sibiling;
> + /* list of memory types that are managed by one driver */
> + struct list_head list;
> /* abstract distance for this specific memory type */
> int adistance;
> /* Nodes of same abstract distance */
> diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
> index fa1a8b418f9a..ca68ef17554b 100644
> --- a/mm/memory-tiers.c
> +++ b/mm/memory-tiers.c
> @@ -586,7 +586,7 @@ EXPORT_SYMBOL_GPL(init_node_memory_type);
> void clear_node_memory_type(int node, struct memory_dev_type *memtype)
> {
> mutex_lock(&memory_tier_lock);
> - if (node_memory_types[node].memtype == memtype)
> + if (node_memory_types[node].memtype == memtype || !memtype)
> node_memory_types[node].map_count--;
> /*
> * If we umapped all the attached devices to this node,
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH -V3 1/4] memory tiering: add abstract distance calculation algorithms management
2023-09-12 8:20 ` [PATCH -V3 1/4] memory tiering: add abstract distance calculation algorithms management Huang Ying
2023-09-14 17:29 ` Dave Jiang
@ 2023-09-19 5:13 ` Alistair Popple
1 sibling, 0 replies; 14+ messages in thread
From: Alistair Popple @ 2023-09-19 5:13 UTC (permalink / raw)
To: Huang Ying
Cc: Andrew Morton, linux-mm, linux-kernel, Bharata B Rao,
Aneesh Kumar K . V, Wei Xu, Dan Williams, Dave Hansen,
Davidlohr Bueso, Johannes Weiner, Jonathan Cameron, Michal Hocko,
Yang Shi, Dave Jiang, Rafael J Wysocki
Reviewed-by: Alistair Popple <apopple@nvidia.com>
Huang Ying <ying.huang@intel.com> writes:
> The abstract distance may be calculated by various drivers, such as
> ACPI HMAT, CXL CDAT, etc. While it may be used by various code which
> hot-add memory node, such as dax/kmem etc. To decouple the algorithm
> users and the providers, the abstract distance calculation algorithms
> management mechanism is implemented in this patch. It provides
> interface for the providers to register the implementation, and
> interface for the users.
>
> Multiple algorithm implementations can cooperate via calculating
> abstract distance for different memory nodes. The preference of
> algorithm implementations can be specified via
> priority (notifier_block.priority).
>
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Tested-by: Bharata B Rao <bharata@amd.com>
> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> Cc: Wei Xu <weixugc@google.com>
> Cc: Alistair Popple <apopple@nvidia.com>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Davidlohr Bueso <dave@stgolabs.net>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Yang Shi <shy828301@gmail.com>
> Cc: Dave Jiang <dave.jiang@intel.com>
> Cc: Rafael J Wysocki <rafael.j.wysocki@intel.com>
> ---
> include/linux/memory-tiers.h | 19 ++++++++++++
> mm/memory-tiers.c | 59 ++++++++++++++++++++++++++++++++++++
> 2 files changed, 78 insertions(+)
>
> diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
> index 437441cdf78f..c8382220cced 100644
> --- a/include/linux/memory-tiers.h
> +++ b/include/linux/memory-tiers.h
> @@ -6,6 +6,7 @@
> #include <linux/nodemask.h>
> #include <linux/kref.h>
> #include <linux/mmzone.h>
> +#include <linux/notifier.h>
> /*
> * Each tier covers an abstract distance chunk size of 128
> */
> @@ -36,6 +37,9 @@ struct memory_dev_type *alloc_memory_type(int adistance);
> void put_memory_type(struct memory_dev_type *memtype);
> void init_node_memory_type(int node, struct memory_dev_type *default_type);
> void clear_node_memory_type(int node, struct memory_dev_type *memtype);
> +int register_mt_adistance_algorithm(struct notifier_block *nb);
> +int unregister_mt_adistance_algorithm(struct notifier_block *nb);
> +int mt_calc_adistance(int node, int *adist);
> #ifdef CONFIG_MIGRATION
> int next_demotion_node(int node);
> void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets);
> @@ -97,5 +101,20 @@ static inline bool node_is_toptier(int node)
> {
> return true;
> }
> +
> +static inline int register_mt_adistance_algorithm(struct notifier_block *nb)
> +{
> + return 0;
> +}
> +
> +static inline int unregister_mt_adistance_algorithm(struct notifier_block *nb)
> +{
> + return 0;
> +}
> +
> +static inline int mt_calc_adistance(int node, int *adist)
> +{
> + return NOTIFY_DONE;
> +}
> #endif /* CONFIG_NUMA */
> #endif /* _LINUX_MEMORY_TIERS_H */
> diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
> index 37a4f59d9585..76c0ad47a5ad 100644
> --- a/mm/memory-tiers.c
> +++ b/mm/memory-tiers.c
> @@ -5,6 +5,7 @@
> #include <linux/kobject.h>
> #include <linux/memory.h>
> #include <linux/memory-tiers.h>
> +#include <linux/notifier.h>
>
> #include "internal.h"
>
> @@ -105,6 +106,8 @@ static int top_tier_adistance;
> static struct demotion_nodes *node_demotion __read_mostly;
> #endif /* CONFIG_MIGRATION */
>
> +static BLOCKING_NOTIFIER_HEAD(mt_adistance_algorithms);
> +
> static inline struct memory_tier *to_memory_tier(struct device *device)
> {
> return container_of(device, struct memory_tier, dev);
> @@ -592,6 +595,62 @@ void clear_node_memory_type(int node, struct memory_dev_type *memtype)
> }
> EXPORT_SYMBOL_GPL(clear_node_memory_type);
>
> +/**
> + * register_mt_adistance_algorithm() - Register memory tiering abstract distance algorithm
> + * @nb: The notifier block which describes the algorithm
> + *
> + * Return: 0 on success, errno on error.
> + *
> + * Every memory tiering abstract distance algorithm provider needs to
> + * register the algorithm with register_mt_adistance_algorithm(). To
> + * calculate the abstract distance for a specified memory node, the
> + * notifier function will be called unless some higher-priority
> + * algorithm has already provided a result. The prototype of the notifier
> + * function is as follows,
> + *
> + * int (*algorithm_notifier)(struct notifier_block *nb,
> + * unsigned long nid, void *data);
> + *
> + * Where "nid" specifies the memory node, "data" is the pointer to the
> + * returned abstract distance (that is, "int *adist"). If the
> + * algorithm provides the result, NOTIFY_STOP should be returned.
> + * Otherwise, a value with %NOTIFY_STOP_MASK cleared (for example,
> + * %NOTIFY_OK) should be returned to allow the next algorithm in the
> + * chain to provide the result.
> + */
> +int register_mt_adistance_algorithm(struct notifier_block *nb)
> +{
> + return blocking_notifier_chain_register(&mt_adistance_algorithms, nb);
> +}
> +EXPORT_SYMBOL_GPL(register_mt_adistance_algorithm);
> +
> +/**
> + * unregister_mt_adistance_algorithm() - Unregister memory tiering abstract distance algorithm
> + * @nb: the notifier block which describes the algorithm
> + *
> + * Return: 0 on success, errno on error.
> + */
> +int unregister_mt_adistance_algorithm(struct notifier_block *nb)
> +{
> + return blocking_notifier_chain_unregister(&mt_adistance_algorithms, nb);
> +}
> +EXPORT_SYMBOL_GPL(unregister_mt_adistance_algorithm);
> +
> +/**
> + * mt_calc_adistance() - Calculate abstract distance with registered algorithms
> + * @node: the node to calculate abstract distance for
> + * @adist: the returned abstract distance
> + *
> + * Return: if return_value & %NOTIFY_STOP_MASK != 0, then some
> + * abstract distance algorithm has provided the result, which is
> + * returned via @adist. Otherwise, no algorithm could provide the
> + * result and @adist is left unchanged.
> + */
> +int mt_calc_adistance(int node, int *adist)
> +{
> + return blocking_notifier_call_chain(&mt_adistance_algorithms, node, adist);
> +}
> +EXPORT_SYMBOL_GPL(mt_calc_adistance);
> +
> static int __meminit memtier_hotplug_callback(struct notifier_block *self,
> unsigned long action, void *_arg)
> {
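For anyone writing a provider against this interface, the shape would
be roughly as follows (a hypothetical sketch with invented names, not
the actual HMAT code from patch 3/4):

	static int example_adist_algorithm(struct notifier_block *self,
					   unsigned long nid, void *data)
	{
		int *adist = data;

		/* Decline nodes this algorithm knows nothing about. */
		if (!node_state(nid, N_MEMORY))
			return NOTIFY_OK;

		*adist = MEMTIER_ADISTANCE_DRAM * 2;	/* made-up value */
		return NOTIFY_STOP;	/* result provided; stop the chain */
	}

	static struct notifier_block example_adist_nb = {
		.notifier_call = example_adist_algorithm,
	};

	static int __init example_init(void)
	{
		return register_mt_adistance_algorithm(&example_adist_nb);
	}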
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH -V3 3/4] acpi, hmat: calculate abstract distance with HMAT
2023-09-12 8:21 ` [PATCH -V3 3/4] acpi, hmat: calculate abstract distance with HMAT Huang Ying
2023-09-14 17:31 ` Dave Jiang
@ 2023-09-19 5:14 ` Alistair Popple
2023-09-19 6:11 ` Huang, Ying
1 sibling, 1 reply; 14+ messages in thread
From: Alistair Popple @ 2023-09-19 5:14 UTC (permalink / raw)
To: Huang Ying
Cc: Andrew Morton, linux-mm, linux-kernel, Bharata B Rao,
Aneesh Kumar K . V, Wei Xu, Dan Williams, Dave Hansen,
Davidlohr Bueso, Johannes Weiner, Jonathan Cameron, Michal Hocko,
Yang Shi, Dave Jiang, Rafael J Wysocki
Thanks for making changes here, looks better to me at least.
Huang Ying <ying.huang@intel.com> writes:
> static __init void hmat_free_structures(void)
> {
> struct memory_target *target, *tnext;
> @@ -801,6 +857,7 @@ static __init int hmat_init(void)
> struct acpi_table_header *tbl;
> enum acpi_hmat_type i;
> acpi_status status;
> + int usage;
>
> if (srat_disabled() || hmat_disable)
> return 0;
> @@ -841,7 +898,10 @@ static __init int hmat_init(void)
> hmat_register_targets();
>
> /* Keep the table and structures if the notifier may use them */
> - if (!hotplug_memory_notifier(hmat_callback, HMAT_CALLBACK_PRI))
> + usage = !hotplug_memory_notifier(hmat_callback, HMAT_CALLBACK_PRI);
> + if (!hmat_set_default_dram_perf())
> + usage += !register_mt_adistance_algorithm(&hmat_adist_nb);
> + if (usage)
> return 0;
Can we simplify the error handling here? As I understand it,
hotplug_memory_notifier() and register_mt_adistance_algorithm() aren't
expected to ever fail because hmat_init() should only be called once and
the notifier register shouldn't fail. So wouldn't the below be
effectively the same thing but clearer?
if (hotplug_memory_notifier(hmat_callback, HMAT_CALLBACK_PRI))
goto out_put;
if (!hmat_set_default_dram_perf())
register_mt_adistance_algorithm(&hmat_adist_nb);
return 0;
> out_put:
> hmat_free_structures();
Also as an aside while looking at this patch I noticed a minor bug:
status = acpi_get_table(ACPI_SIG_HMAT, 0, &tbl);
if (ACPI_FAILURE(status))
goto out_put;
This will call acpi_put_table(tbl) even though we failed to get the
table.
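(A minimal fix, as an untested sketch, would be to return directly
instead of jumping to the cleanup label:

	status = acpi_get_table(ACPI_SIG_HMAT, 0, &tbl);
	if (ACPI_FAILURE(status))
		return 0;

so that acpi_put_table() only ever sees a table we actually hold.)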
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH -V3 4/4] dax, kmem: calculate abstract distance with general interface
2023-09-12 8:21 ` [PATCH -V3 4/4] dax, kmem: calculate abstract distance with general interface Huang Ying
2023-09-14 17:31 ` Dave Jiang
@ 2023-09-19 5:31 ` Alistair Popple
2023-09-19 5:56 ` Huang, Ying
1 sibling, 1 reply; 14+ messages in thread
From: Alistair Popple @ 2023-09-19 5:31 UTC (permalink / raw)
To: Huang Ying
Cc: Andrew Morton, linux-mm, linux-kernel, Bharata B Rao,
Aneesh Kumar K . V, Wei Xu, Dan Williams, Dave Hansen,
Davidlohr Bueso, Johannes Weiner, Jonathan Cameron, Michal Hocko,
Yang Shi, Dave Jiang, Rafael J Wysocki
Huang Ying <ying.huang@intel.com> writes:
> diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
> index fa1a8b418f9a..ca68ef17554b 100644
> --- a/mm/memory-tiers.c
> +++ b/mm/memory-tiers.c
> @@ -586,7 +586,7 @@ EXPORT_SYMBOL_GPL(init_node_memory_type);
> void clear_node_memory_type(int node, struct memory_dev_type *memtype)
> {
> mutex_lock(&memory_tier_lock);
> - if (node_memory_types[node].memtype == memtype)
> + if (node_memory_types[node].memtype == memtype || !memtype)
> node_memory_types[node].map_count--;
>
> /*
> * If we umapped all the attached devices to this node,
This implies it's possible memtype == NULL. Yet we have this:
* clear the node memory type.
*/
if (!node_memory_types[node].map_count) {
node_memory_types[node].memtype = NULL;
put_memory_type(memtype);
}
It's not safe to call put_memory_type(NULL), so what condition guarantees
map_count > 1 when called with memtype == NULL? Thanks.
- Alistair
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH -V3 4/4] dax, kmem: calculate abstract distance with general interface
2023-09-19 5:31 ` Alistair Popple
@ 2023-09-19 5:56 ` Huang, Ying
0 siblings, 0 replies; 14+ messages in thread
From: Huang, Ying @ 2023-09-19 5:56 UTC (permalink / raw)
To: Alistair Popple
Cc: Andrew Morton, linux-mm, linux-kernel, Bharata B Rao,
Aneesh Kumar K . V, Wei Xu, Dan Williams, Dave Hansen,
Davidlohr Bueso, Johannes Weiner, Jonathan Cameron, Michal Hocko,
Yang Shi, Dave Jiang, Rafael J Wysocki
Alistair Popple <apopple@nvidia.com> writes:
> Huang Ying <ying.huang@intel.com> writes:
>
>> diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
>> index fa1a8b418f9a..ca68ef17554b 100644
>> --- a/mm/memory-tiers.c
>> +++ b/mm/memory-tiers.c
>> @@ -586,7 +586,7 @@ EXPORT_SYMBOL_GPL(init_node_memory_type);
>> void clear_node_memory_type(int node, struct memory_dev_type *memtype)
>> {
>> mutex_lock(&memory_tier_lock);
>> - if (node_memory_types[node].memtype == memtype)
>> + if (node_memory_types[node].memtype == memtype || !memtype)
>> node_memory_types[node].map_count--;
>>
>> /*
>> * If we umapped all the attached devices to this node,
>
> This implies it's possible memtype == NULL. Yet we have this:
>
> * clear the node memory type.
> */
> if (!node_memory_types[node].map_count) {
> node_memory_types[node].memtype = NULL;
> put_memory_type(memtype);
> }
>
> It's not safe to call put_memory_type(NULL), so what condition guarantees
> map_count > 1 when called with memtype == NULL? Thanks.
Nothing guarantees that. Thanks for pointing this out. I will revise
the code to call put_memory_type(node_memory_types[node].memtype) first.
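Something like the following sketch (untested), so that the pointer
being put is always the one that was actually mapped:

	if (!node_memory_types[node].map_count) {
		put_memory_type(node_memory_types[node].memtype);
		node_memory_types[node].memtype = NULL;
	}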
--
Best Regards,
Huang, Ying
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH -V3 3/4] acpi, hmat: calculate abstract distance with HMAT
2023-09-19 5:14 ` Alistair Popple
@ 2023-09-19 6:11 ` Huang, Ying
0 siblings, 0 replies; 14+ messages in thread
From: Huang, Ying @ 2023-09-19 6:11 UTC (permalink / raw)
To: Alistair Popple
Cc: Andrew Morton, linux-mm, linux-kernel, Bharata B Rao,
Aneesh Kumar K . V, Wei Xu, Dan Williams, Dave Hansen,
Davidlohr Bueso, Johannes Weiner, Jonathan Cameron, Michal Hocko,
Yang Shi, Dave Jiang, Rafael J Wysocki
Alistair Popple <apopple@nvidia.com> writes:
> Thanks for making changes here, looks better to me at least.
>
> Huang Ying <ying.huang@intel.com> writes:
>
>> static __init void hmat_free_structures(void)
>> {
>> struct memory_target *target, *tnext;
>> @@ -801,6 +857,7 @@ static __init int hmat_init(void)
>> struct acpi_table_header *tbl;
>> enum acpi_hmat_type i;
>> acpi_status status;
>> + int usage;
>>
>> if (srat_disabled() || hmat_disable)
>> return 0;
>> @@ -841,7 +898,10 @@ static __init int hmat_init(void)
>> hmat_register_targets();
>>
>> /* Keep the table and structures if the notifier may use them */
>> - if (!hotplug_memory_notifier(hmat_callback, HMAT_CALLBACK_PRI))
>> + usage = !hotplug_memory_notifier(hmat_callback, HMAT_CALLBACK_PRI);
>> + if (!hmat_set_default_dram_perf())
>> + usage += !register_mt_adistance_algorithm(&hmat_adist_nb);
>> + if (usage)
>> return 0;
>
> Can we simplify the error handling here? As I understand it
> hotplug_memory_notifier() and register_mt_adistance_algorithm() aren't
> expected to ever fail because hmat_init() should only be called once and
> the notifier register shouldn't fail. So wouldn't the below be
> effectively the same thing but clearer?
>
> if (hotplug_memory_notifier(hmat_callback, HMAT_CALLBACK_PRI))
> goto out_put;
>
> if (!hmat_set_default_dram_perf())
> register_mt_adistance_algorithm(&hmat_adist_nb);
>
> return 0;
>
>> out_put:
>> hmat_free_structures();
Looks good to me! Will do that in the next version!
> Also as an aside while looking at this patch I noticed a minor bug:
>
> status = acpi_get_table(ACPI_SIG_HMAT, 0, &tbl);
> if (ACPI_FAILURE(status))
> goto out_put;
>
> This will call acpi_put_table(tbl) even though we failed to get the
> table.
Thanks for pointing this out. This should go through the ACPI tree, so
I will do it in a separate patch.
--
Best Regards,
Huang, Ying
^ permalink raw reply [flat|nested] 14+ messages in thread
end of thread, other threads:[~2023-09-19 6:13 UTC | newest]
Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-09-12 8:20 [PATCH -V3 0/4] memory tiering: calculate abstract distance based on ACPI HMAT Huang Ying
2023-09-12 8:20 ` [PATCH -V3 1/4] memory tiering: add abstract distance calculation algorithms management Huang Ying
2023-09-14 17:29 ` Dave Jiang
2023-09-19 5:13 ` Alistair Popple
2023-09-12 8:20 ` [PATCH -V3 2/4] acpi, hmat: refactor hmat_register_target_initiators() Huang Ying
2023-09-14 17:30 ` Dave Jiang
2023-09-12 8:21 ` [PATCH -V3 3/4] acpi, hmat: calculate abstract distance with HMAT Huang Ying
2023-09-14 17:31 ` Dave Jiang
2023-09-19 5:14 ` Alistair Popple
2023-09-19 6:11 ` Huang, Ying
2023-09-12 8:21 ` [PATCH -V3 4/4] dax, kmem: calculate abstract distance with general interface Huang Ying
2023-09-14 17:31 ` Dave Jiang
2023-09-19 5:31 ` Alistair Popple
2023-09-19 5:56 ` Huang, Ying