* [RFC v6 0/6] powerpc/hotplug: Fix affinity assoc for LPAR migration
@ 2018-05-22 23:36 Michael Bringmann

From: Michael Bringmann
To: linuxppc-dev
Cc: Michael Bringmann, Nathan Fontenot, John Allen, Tyrel Datwyler,
    Thomas Falcon

The migration of LPARs across Power systems affects many attributes,
including the associativity of memory blocks and CPUs.  The patches in
this set execute when a system comes up fresh on a migration target.
They are intended to:

* Recognize changes to the associativity of memory and CPUs recorded
  in internal data structures when compared to the latest copies in
  the device tree (e.g. ibm,dynamic-memory, ibm,dynamic-memory-v2,
  cpus).
* Recognize changes to the associativity mapping
  (e.g. ibm,associativity-lookup-arrays), locate all assigned memory
  blocks corresponding to each changed row, and readd all such blocks.
* Generate calls to other code layers to reset the data structures
  related to the associativity of the CPUs and memory.
* Re-register the 'changed' entities into the target system.
  Re-registration of CPUs and memory blocks mostly entails acting as
  if they have been newly hot-added into the target system.

Signed-off-by: Michael Bringmann <mwb@linux.vnet.ibm.com>

Michael Bringmann (3):
  powerpc migration/drmem: Modify DRMEM code to export more features
  powerpc migration/cpu: Associativity & cpu changes
  powerpc migration/memory: Associativity & memory updates

---
Changes in RFC:
  -- Restructure and rearrange content of patches to co-locate similar
     or related modifications
  -- Rename pseries_update_drconf_cpu to pseries_update_cpu
  -- Simplify code to update CPU nodes during mobility checks.
     Remove functions to generate extra HP_ELOG messages in favor of
     direct function calls to dlpar_cpu_readd_by_index or
     dlpar_memory_readd_by_index.
  -- Revise code order in dlpar_cpu_readd_by_index() to present more
     appropriate error codes from underlying layers of the
     implementation.
  -- Add hotplug device lock around all property updates
  -- Schedule all CPU and memory changes due to device-tree updates /
     LPAR mobility as workqueue operations
  -- Export DRMEM accessor functions to parse 'ibm,dynamic-memory-v2'
  -- Export DRMEM functions to provide user copies of LMB array
  -- Compress code using DRMEM accessor functions
  -- Split topology timer crash fix into new patch
  -- Modify DRMEM code to replace usages of dt_root_addr_cells and
     dt_mem_next_cell, as these are only available at first boot
  -- Correct a bug in DRC index selection for queued operation
  -- Rebase to 4.17-rc5 kernel
  -- Minor code cleanups
  -- Correct drc_index for worker fn invocation

^ permalink raw reply	[flat|nested] 10+ messages in thread
* [RFC v5 1/6] powerpc/drmem: Export 'dynamic-memory' loader
@ 2018-05-22 23:36 Michael Bringmann

From: Michael Bringmann
To: linuxppc-dev
Cc: Michael Bringmann, Nathan Fontenot, John Allen, Tyrel Datwyler,
    Thomas Falcon

powerpc/drmem: Export many of the DRMEM functions that parse
"ibm,dynamic-memory" and "ibm,dynamic-memory-v2", for use during
hotplug operations and for post-migration events.  Also modify the
DRMEM initialization code to allow it to:

* Be called after system initialization
* Provide a separate user copy of the LMB array that it produces
* Free the user copy upon request

Signed-off-by: Michael Bringmann <mwb@linux.vnet.ibm.com>
---
Changes in RFC:
  -- Separate DRMEM changes into a standalone patch
  -- Do not export excess functions; make exported names more explicit
  -- Add new iterator to work through a pair of drmem_info arrays
  -- Modify DRMEM code to replace usages of dt_root_addr_cells and
     dt_mem_next_cell, as these are only available at first boot
  -- Rebase to 4.17-rc5 kernel
  -- Apply several code and patch cleanups
--- arch/powerpc/include/asm/drmem.h | 10 +++++ arch/powerpc/mm/drmem.c | 73 ++++++++++++++++++++++++++++---------- 2 files changed, 64 insertions(+), 19 deletions(-) diff --git a/arch/powerpc/include/asm/drmem.h b/arch/powerpc/include/asm/drmem.h index ce242b9..e82d254 100644 --- a/arch/powerpc/include/asm/drmem.h +++ b/arch/powerpc/include/asm/drmem.h @@ -35,6 +35,13 @@ struct drmem_lmb_info { &drmem_info->lmbs[0], \ &drmem_info->lmbs[drmem_info->n_lmbs - 1]) +#define for_each_pair_drmem_lmb(dinfo1, lmb1, dinfo2, lmb2) \ + for ((lmb1) = (&dinfo1->lmbs[0]), \ + (lmb2) = (&dinfo2->lmbs[0]); \ + ((lmb1) <= (&dinfo1->lmbs[dinfo1->n_lmbs - 1])) && \ + ((lmb2) <= (&dinfo2->lmbs[dinfo2->n_lmbs - 1])); \ + (lmb1)++, (lmb2)++) + /* * The of_drconf_cell_v1 struct defines the layout of the LMB data * specified in the ibm,dynamic-memory device tree property. @@ -94,6 +101,9 @@ void __init walk_drmem_lmbs(struct device_node *dn, void (*func)(struct drmem_lmb *, const __be32 **)); int drmem_update_dt(void); +struct drmem_lmb_info *drmem_lmbs_init(struct property *prop); +void drmem_lmbs_free(struct drmem_lmb_info *dinfo); + #ifdef CONFIG_PPC_PSERIES void __init walk_drmem_lmbs_early(unsigned long node, void (*func)(struct drmem_lmb *, const __be32 **)); diff --git a/arch/powerpc/mm/drmem.c b/arch/powerpc/mm/drmem.c index 3f18036..2bd6a70 100644 --- a/arch/powerpc/mm/drmem.c +++ b/arch/powerpc/mm/drmem.c @@ -20,6 +20,7 @@ static struct drmem_lmb_info __drmem_info; struct drmem_lmb_info *drmem_info = &__drmem_info; +static int n_root_addr_cells; u64 drmem_lmb_memory_max(void) { @@ -193,12 +194,13 @@ int drmem_update_dt(void) return rc; } -static void __init read_drconf_v1_cell(struct drmem_lmb *lmb, +static void read_drconf_v1_cell(struct drmem_lmb *lmb, const __be32 **prop) { const __be32 *p = *prop; - lmb->base_addr = dt_mem_next_cell(dt_root_addr_cells, &p); + lmb->base_addr = of_read_number(p, n_root_addr_cells); + p += n_root_addr_cells; lmb->drc_index = of_read_number(p++, 
1); p++; /* skip reserved field */ @@ -209,7 +211,7 @@ static void __init read_drconf_v1_cell(struct drmem_lmb *lmb, *prop = p; } -static void __init __walk_drmem_v1_lmbs(const __be32 *prop, const __be32 *usm, +static void __walk_drmem_v1_lmbs(const __be32 *prop, const __be32 *usm, void (*func)(struct drmem_lmb *, const __be32 **)) { struct drmem_lmb lmb; @@ -225,13 +227,14 @@ static void __init __walk_drmem_v1_lmbs(const __be32 *prop, const __be32 *usm, } } -static void __init read_drconf_v2_cell(struct of_drconf_cell_v2 *dr_cell, +static void read_drconf_v2_cell(struct of_drconf_cell_v2 *dr_cell, const __be32 **prop) { const __be32 *p = *prop; dr_cell->seq_lmbs = of_read_number(p++, 1); - dr_cell->base_addr = dt_mem_next_cell(dt_root_addr_cells, &p); + dr_cell->base_addr = of_read_number(p, n_root_addr_cells); + p += n_root_addr_cells; dr_cell->drc_index = of_read_number(p++, 1); dr_cell->aa_index = of_read_number(p++, 1); dr_cell->flags = of_read_number(p++, 1); @@ -239,7 +242,7 @@ static void __init read_drconf_v2_cell(struct of_drconf_cell_v2 *dr_cell, *prop = p; } -static void __init __walk_drmem_v2_lmbs(const __be32 *prop, const __be32 *usm, +static void __walk_drmem_v2_lmbs(const __be32 *prop, const __be32 *usm, void (*func)(struct drmem_lmb *, const __be32 **)) { struct of_drconf_cell_v2 dr_cell; @@ -275,6 +278,9 @@ void __init walk_drmem_lmbs_early(unsigned long node, const __be32 *prop, *usm; int len; + if (n_root_addr_cells == 0) + n_root_addr_cells = dt_root_addr_cells; + prop = of_get_flat_dt_prop(node, "ibm,lmb-size", &len); if (!prop || len < dt_root_size_cells * sizeof(__be32)) return; @@ -353,24 +359,26 @@ void __init walk_drmem_lmbs(struct device_node *dn, } } -static void __init init_drmem_v1_lmbs(const __be32 *prop) +static void init_drmem_v1_lmbs(const __be32 *prop, + struct drmem_lmb_info *dinfo) { struct drmem_lmb *lmb; - drmem_info->n_lmbs = of_read_number(prop++, 1); - if (drmem_info->n_lmbs == 0) + dinfo->n_lmbs = of_read_number(prop++, 
1); + if (dinfo->n_lmbs == 0) return; - drmem_info->lmbs = kcalloc(drmem_info->n_lmbs, sizeof(*lmb), + dinfo->lmbs = kcalloc(dinfo->n_lmbs, sizeof(*lmb), GFP_KERNEL); - if (!drmem_info->lmbs) + if (!dinfo->lmbs) return; for_each_drmem_lmb(lmb) read_drconf_v1_cell(lmb, &prop); } -static void __init init_drmem_v2_lmbs(const __be32 *prop) +static void init_drmem_v2_lmbs(const __be32 *prop, + struct drmem_lmb_info *dinfo) { struct drmem_lmb *lmb; struct of_drconf_cell_v2 dr_cell; @@ -386,12 +394,12 @@ static void __init init_drmem_v2_lmbs(const __be32 *prop) p = prop; for (i = 0; i < lmb_sets; i++) { read_drconf_v2_cell(&dr_cell, &p); - drmem_info->n_lmbs += dr_cell.seq_lmbs; + dinfo->n_lmbs += dr_cell.seq_lmbs; } - drmem_info->lmbs = kcalloc(drmem_info->n_lmbs, sizeof(*lmb), + dinfo->lmbs = kcalloc(dinfo->n_lmbs, sizeof(*lmb), GFP_KERNEL); - if (!drmem_info->lmbs) + if (!dinfo->lmbs) return; /* second pass, read in the LMB information */ @@ -402,10 +410,10 @@ static void __init init_drmem_v2_lmbs(const __be32 *prop) read_drconf_v2_cell(&dr_cell, &p); for (j = 0; j < dr_cell.seq_lmbs; j++) { - lmb = &drmem_info->lmbs[lmb_index++]; + lmb = &dinfo->lmbs[lmb_index++]; lmb->base_addr = dr_cell.base_addr; - dr_cell.base_addr += drmem_info->lmb_size; + dr_cell.base_addr += dinfo->lmb_size; lmb->drc_index = dr_cell.drc_index; dr_cell.drc_index++; @@ -416,11 +424,38 @@ static void __init init_drmem_v2_lmbs(const __be32 *prop) } } +void drmem_lmbs_free(struct drmem_lmb_info *dinfo) +{ + if (dinfo) { + kfree(dinfo->lmbs); + kfree(dinfo); + } +} + +struct drmem_lmb_info *drmem_lmbs_init(struct property *prop) +{ + struct drmem_lmb_info *dinfo; + + dinfo = kzalloc(sizeof(*dinfo), GFP_KERNEL); + if (!dinfo) + return NULL; + + if (!strcmp("ibm,dynamic-memory", prop->name)) + init_drmem_v1_lmbs(prop->value, dinfo); + else if (!strcmp("ibm,dynamic-memory-v2", prop->name)) + init_drmem_v2_lmbs(prop->value, dinfo); + + return dinfo; +} + static int __init drmem_init(void) { struct 
device_node *dn; const __be32 *prop; + if (n_root_addr_cells == 0) + n_root_addr_cells = dt_root_addr_cells; + dn = of_find_node_by_path("/ibm,dynamic-reconfiguration-memory"); if (!dn) { pr_info("No dynamic reconfiguration memory found\n"); @@ -434,11 +469,11 @@ static int __init drmem_init(void) prop = of_get_property(dn, "ibm,dynamic-memory", NULL); if (prop) { - init_drmem_v1_lmbs(prop); + init_drmem_v1_lmbs(prop, drmem_info); } else { prop = of_get_property(dn, "ibm,dynamic-memory-v2", NULL); if (prop) - init_drmem_v2_lmbs(prop); + init_drmem_v2_lmbs(prop, drmem_info); } of_node_put(dn); ^ permalink raw reply related [flat|nested] 10+ messages in thread
* [RFC v5 2/6] powerpc/cpu: Conditionally acquire/release DRC index 2018-05-22 23:36 [RFC v6 0/6] powerpc/hotplug: Fix affinity assoc for LPAR migration Michael Bringmann 2018-05-22 23:36 ` [RFC v5 1/6] powerpc/drmem: Export 'dynamic-memory' loader Michael Bringmann @ 2018-05-22 23:36 ` Michael Bringmann 2018-05-22 23:36 ` [RFC v5 3/6] migration/dlpar: Add device readd queuing function Michael Bringmann ` (3 subsequent siblings) 5 siblings, 0 replies; 10+ messages in thread From: Michael Bringmann @ 2018-05-22 23:36 UTC (permalink / raw) To: linuxppc-dev Cc: Michael Bringmann, Nathan Fontenot, John Allen, Tyrel Datwyler, Thomas Falcon powerpc/cpu: Modify dlpar_cpu_add and dlpar_cpu_remove to allow the skipping of DRC index acquire or release operations during the CPU add or remove operations. This is intended to support subsequent changes to provide a 'CPU readd' operation. Signed-off-by: Michael Bringmann <mwb@linux.vnet.ibm.com> --- arch/powerpc/platforms/pseries/hotplug-cpu.c | 71 +++++++++++++++----------- 1 file changed, 42 insertions(+), 29 deletions(-) diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c index a408217..ec78cc6 100644 --- a/arch/powerpc/platforms/pseries/hotplug-cpu.c +++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c @@ -474,7 +474,7 @@ static bool valid_cpu_drc_index(struct device_node *parent, u32 drc_index) &cdata); } -static ssize_t dlpar_cpu_add(u32 drc_index) +static ssize_t dlpar_cpu_add(u32 drc_index, bool acquire_drc) { struct device_node *dn, *parent; int rc, saved_rc; @@ -499,19 +499,22 @@ static ssize_t dlpar_cpu_add(u32 drc_index) return -EINVAL; } - rc = dlpar_acquire_drc(drc_index); - if (rc) { - pr_warn("Failed to acquire DRC, rc: %d, drc index: %x\n", - rc, drc_index); - of_node_put(parent); - return -EINVAL; + if (acquire_drc) { + rc = dlpar_acquire_drc(drc_index); + if (rc) { + pr_warn("Failed to acquire DRC, rc: %d, drc index: %x\n", + rc, drc_index); + 
of_node_put(parent); + return -EINVAL; + } } dn = dlpar_configure_connector(cpu_to_be32(drc_index), parent); if (!dn) { pr_warn("Failed call to configure-connector, drc index: %x\n", drc_index); - dlpar_release_drc(drc_index); + if (acquire_drc) + dlpar_release_drc(drc_index); of_node_put(parent); return -EINVAL; } @@ -526,8 +529,9 @@ static ssize_t dlpar_cpu_add(u32 drc_index) pr_warn("Failed to attach node %s, rc: %d, drc index: %x\n", dn->name, rc, drc_index); - rc = dlpar_release_drc(drc_index); - if (!rc) + if (acquire_drc) + rc = dlpar_release_drc(drc_index); + if (!rc || acquire_drc) dlpar_free_cc_nodes(dn); return saved_rc; @@ -540,7 +544,7 @@ static ssize_t dlpar_cpu_add(u32 drc_index) dn->name, rc, drc_index); rc = dlpar_detach_node(dn); - if (!rc) + if (!rc && acquire_drc) dlpar_release_drc(drc_index); return saved_rc; @@ -608,7 +612,8 @@ static int dlpar_offline_cpu(struct device_node *dn) } -static ssize_t dlpar_cpu_remove(struct device_node *dn, u32 drc_index) +static ssize_t dlpar_cpu_remove(struct device_node *dn, u32 drc_index, + bool release_drc) { int rc; @@ -621,12 +626,14 @@ static ssize_t dlpar_cpu_remove(struct device_node *dn, u32 drc_index) return -EINVAL; } - rc = dlpar_release_drc(drc_index); - if (rc) { - pr_warn("Failed to release drc (%x) for CPU %s, rc: %d\n", - drc_index, dn->name, rc); - dlpar_online_cpu(dn); - return rc; + if (release_drc) { + rc = dlpar_release_drc(drc_index); + if (rc) { + pr_warn("Failed to release drc (%x) for CPU %s, rc: %d\n", + drc_index, dn->name, rc); + dlpar_online_cpu(dn); + return rc; + } } rc = dlpar_detach_node(dn); @@ -635,7 +642,10 @@ static ssize_t dlpar_cpu_remove(struct device_node *dn, u32 drc_index) pr_warn("Failed to detach CPU %s, rc: %d", dn->name, rc); - rc = dlpar_acquire_drc(drc_index); + if (release_drc) + rc = dlpar_acquire_drc(drc_index); + else + rc = 0; if (!rc) dlpar_online_cpu(dn); @@ -664,7 +674,7 @@ static struct device_node *cpu_drc_index_to_dn(u32 drc_index) return dn; } 
-static int dlpar_cpu_remove_by_index(u32 drc_index) +static int dlpar_cpu_remove_by_index(u32 drc_index, bool release_drc) { struct device_node *dn; int rc; @@ -676,7 +686,7 @@ static int dlpar_cpu_remove_by_index(u32 drc_index) return -ENODEV; } - rc = dlpar_cpu_remove(dn, drc_index); + rc = dlpar_cpu_remove(dn, drc_index, release_drc); of_node_put(dn); return rc; } @@ -741,7 +751,7 @@ static int dlpar_cpu_remove_by_count(u32 cpus_to_remove) } for (i = 0; i < cpus_to_remove; i++) { - rc = dlpar_cpu_remove_by_index(cpu_drcs[i]); + rc = dlpar_cpu_remove_by_index(cpu_drcs[i], true); if (rc) break; @@ -752,7 +762,7 @@ static int dlpar_cpu_remove_by_count(u32 cpus_to_remove) pr_warn("CPU hot-remove failed, adding back removed CPUs\n"); for (i = 0; i < cpus_removed; i++) - dlpar_cpu_add(cpu_drcs[i]); + dlpar_cpu_add(cpu_drcs[i], true); rc = -EINVAL; } else { @@ -843,7 +853,7 @@ static int dlpar_cpu_add_by_count(u32 cpus_to_add) } for (i = 0; i < cpus_to_add; i++) { - rc = dlpar_cpu_add(cpu_drcs[i]); + rc = dlpar_cpu_add(cpu_drcs[i], true); if (rc) break; @@ -854,7 +864,7 @@ static int dlpar_cpu_add_by_count(u32 cpus_to_add) pr_warn("CPU hot-add failed, removing any added CPUs\n"); for (i = 0; i < cpus_added; i++) - dlpar_cpu_remove_by_index(cpu_drcs[i]); + dlpar_cpu_remove_by_index(cpu_drcs[i], true); rc = -EINVAL; } else { @@ -880,7 +890,7 @@ int dlpar_cpu(struct pseries_hp_errorlog *hp_elog) if (hp_elog->id_type == PSERIES_HP_ELOG_ID_DRC_COUNT) rc = dlpar_cpu_remove_by_count(count); else if (hp_elog->id_type == PSERIES_HP_ELOG_ID_DRC_INDEX) - rc = dlpar_cpu_remove_by_index(drc_index); + rc = dlpar_cpu_remove_by_index(drc_index, true); else rc = -EINVAL; break; @@ -888,7 +898,7 @@ int dlpar_cpu(struct pseries_hp_errorlog *hp_elog) if (hp_elog->id_type == PSERIES_HP_ELOG_ID_DRC_COUNT) rc = dlpar_cpu_add_by_count(count); else if (hp_elog->id_type == PSERIES_HP_ELOG_ID_DRC_INDEX) - rc = dlpar_cpu_add(drc_index); + rc = dlpar_cpu_add(drc_index, true); else rc = -EINVAL; 
break; @@ -913,7 +923,7 @@ static ssize_t dlpar_cpu_probe(const char *buf, size_t count) if (rc) return -EINVAL; - rc = dlpar_cpu_add(drc_index); + rc = dlpar_cpu_add(drc_index, true); return rc ? rc : count; } @@ -934,7 +944,7 @@ static ssize_t dlpar_cpu_release(const char *buf, size_t count) return -EINVAL; } - rc = dlpar_cpu_remove(dn, drc_index); + rc = dlpar_cpu_remove(dn, drc_index, true); of_node_put(dn); return rc ? rc : count; @@ -948,6 +958,9 @@ static int pseries_smp_notifier(struct notifier_block *nb, struct of_reconfig_data *rd = data; int err = 0; + if (strcmp(rd->dn->type, "cpu")) + return notifier_from_errno(err); + switch (action) { case OF_RECONFIG_ATTACH_NODE: err = pseries_add_processor(rd->dn); ^ permalink raw reply related [flat|nested] 10+ messages in thread
* [RFC v5 3/6] migration/dlpar: Add device readd queuing function
@ 2018-05-22 23:36 Michael Bringmann

From: Michael Bringmann
To: linuxppc-dev
Cc: Michael Bringmann, Nathan Fontenot, John Allen, Tyrel Datwyler,
    Thomas Falcon

migration/dlpar: This patch adds the function dlpar_queue_action(),
which queues a worker function to 'readd' a device in the system.
The device to be readded is identified by a 'resource' type and a
drc_index.

Signed-off-by: Michael Bringmann <mwb@linux.vnet.ibm.com>
---
 arch/powerpc/platforms/pseries/dlpar.c   |   14 ++++++++++++++
 arch/powerpc/platforms/pseries/pseries.h |    1 +
 2 files changed, 15 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/dlpar.c b/arch/powerpc/platforms/pseries/dlpar.c
index a0b20c0..a14684e 100644
--- a/arch/powerpc/platforms/pseries/dlpar.c
+++ b/arch/powerpc/platforms/pseries/dlpar.c
@@ -407,6 +407,20 @@ void queue_hotplug_event(struct pseries_hp_errorlog *hp_errlog,
 	}
 }
 
+int dlpar_queue_action(int resource, int action, u32 drc_index)
+{
+	struct pseries_hp_errorlog hp_elog;
+
+	hp_elog.resource = resource;
+	hp_elog.action = action;
+	hp_elog.id_type = PSERIES_HP_ELOG_ID_DRC_INDEX;
+	hp_elog._drc_u.drc_index = drc_index;
+
+	queue_hotplug_event(&hp_elog, NULL, NULL);
+
+	return 0;
+}
+
 static int dlpar_parse_resource(char **cmd, struct pseries_hp_errorlog *hp_elog)
 {
 	char *arg;
diff --git a/arch/powerpc/platforms/pseries/pseries.h b/arch/powerpc/platforms/pseries/pseries.h
index 60db2ee..cb2beb1 100644
--- a/arch/powerpc/platforms/pseries/pseries.h
+++ b/arch/powerpc/platforms/pseries/pseries.h
@@ -61,6 +61,7 @@ extern struct device_node *dlpar_configure_connector(__be32,
 
 void queue_hotplug_event(struct pseries_hp_errorlog *hp_errlog,
 			 struct completion *hotplug_done, int *rc);
+extern int dlpar_queue_action(int resource, int action, u32 drc_index);
 
 #ifdef CONFIG_MEMORY_HOTPLUG
 int dlpar_memory(struct pseries_hp_errorlog *hp_elog);
 #else
* [RFC v5 4/6] powerpc/dlpar: Provide CPU readd operation
@ 2018-05-22 23:36 Michael Bringmann

From: Michael Bringmann
To: linuxppc-dev
Cc: Michael Bringmann, Nathan Fontenot, John Allen, Tyrel Datwyler,
    Thomas Falcon

powerpc/dlpar: Provide a hotplug CPU 'readd by index' operation to
support LPAR post-migration state updates.  When such changes are
invoked by the PowerPC 'mobility' code, they will be queued up so
that modifications to CPU properties take place after the new
property value is written to the device tree.

Signed-off-by: Michael Bringmann <mwb@linux.vnet.ibm.com>
---
 arch/powerpc/platforms/pseries/hotplug-cpu.c |   29 ++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
index ec78cc6..ac08d85 100644
--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
+++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
@@ -691,6 +691,26 @@ static int dlpar_cpu_remove_by_index(u32 drc_index, bool release_drc)
 	return rc;
 }
 
+static int dlpar_cpu_readd_by_index(u32 drc_index)
+{
+	int rc = 0;
+
+	pr_info("Attempting to re-add CPU, drc index %x\n", drc_index);
+
+	rc = dlpar_cpu_remove_by_index(drc_index, false);
+	if (!rc)
+		rc = dlpar_cpu_add(drc_index, false);
+
+	if (rc)
+		pr_info("Failed to update cpu at drc_index %lx\n",
+			(unsigned long int)drc_index);
+	else
+		pr_info("CPU at drc_index %lx was updated\n",
+			(unsigned long int)drc_index);
+
+	return rc;
+}
+
 static int find_dlpar_cpus_to_remove(u32 *cpu_drcs, int cpus_to_remove)
 {
 	struct device_node *dn;
@@ -902,6 +922,9 @@ int dlpar_cpu(struct pseries_hp_errorlog *hp_elog)
 		else
 			rc = -EINVAL;
 		break;
+	case PSERIES_HP_ELOG_ACTION_READD:
+		rc = dlpar_cpu_readd_by_index(drc_index);
+		break;
 	default:
 		pr_err("Invalid action (%d) specified\n", hp_elog->action);
 		rc = -EINVAL;
@@ -968,6 +991,12 @@ static int pseries_smp_notifier(struct notifier_block *nb,
 	case OF_RECONFIG_DETACH_NODE:
 		pseries_remove_processor(rd->dn);
 		break;
+	case OF_RECONFIG_UPDATE_PROPERTY:
+		if (!strcmp(rd->prop->name, "ibm,associativity"))
+			dlpar_queue_action(PSERIES_HP_ELOG_RESOURCE_CPU,
+					   PSERIES_HP_ELOG_ACTION_READD,
+					   be32_to_cpu(rd->dn->phandle));
+		break;
 	}
 	return notifier_from_errno(err);
 }
* [RFC v5 5/6] powerpc/mobility: Add lock/unlock device hotplug
@ 2018-05-22 23:36 Michael Bringmann

From: Michael Bringmann
To: linuxppc-dev
Cc: Michael Bringmann, Nathan Fontenot, John Allen, Tyrel Datwyler,
    Thomas Falcon

powerpc/mobility: Add device hotplug lock/unlock to the PowerPC
'mobility' operation to delay the CPU DLPAR work queue operations of
the 'readd' activity until after any changes to the corresponding
device-tree properties have been written.

Signed-off-by: Michael Bringmann <mwb@linux.vnet.ibm.com>
---
 arch/powerpc/platforms/pseries/mobility.c |    3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
index 8a8033a..6d98f84 100644
--- a/arch/powerpc/platforms/pseries/mobility.c
+++ b/arch/powerpc/platforms/pseries/mobility.c
@@ -283,6 +283,8 @@ int pseries_devicetree_update(s32 scope)
 	if (!rtas_buf)
 		return -ENOMEM;
 
+	lock_device_hotplug();
+
 	do {
 		rc = mobility_rtas_call(update_nodes_token, rtas_buf, scope);
 		if (rc && rc != 1)
@@ -321,6 +323,7 @@ int pseries_devicetree_update(s32 scope)
 	} while (rc == 1);
 
 	kfree(rtas_buf);
+	unlock_device_hotplug();
 
 	return rc;
 }
* [RFC v5 6/6] migration/memory: Update memory for assoc changes
@ 2018-05-22 23:36 Michael Bringmann

From: Michael Bringmann
To: linuxppc-dev
Cc: Michael Bringmann, Nathan Fontenot, John Allen, Tyrel Datwyler,
    Thomas Falcon

migration/memory: This patch adds more recognition of changes to the
associativity of memory blocks described by the device-tree
properties, and updates local and general kernel data structures to
reflect those changes.  These differences may include:

* Evaluating 'ibm,dynamic-memory' properties when processing the
  topology of LPARs in post-migration events.  Previous efforts only
  recognized whether a memory block's assignment had changed in the
  property.  Changes here include checking the aa_index values for
  each drc_index of the old/new LMBs, and 'readd'ing any block for
  which the setting has changed.

* In an LPAR migration scenario, the "ibm,associativity-lookup-arrays"
  property may change.  In the event that a row of the array differs,
  locate all assigned memory blocks with that 'aa_index' and 're-add'
  them to the system memory block data structures.  In the process of
  the 're-add', the system routines will update the corresponding
  entry for the memory in the LMB structures and any other relevant
  kernel data structures.

* Extend the previous work for the 'ibm,associativity-lookup-arrays'
  and 'ibm,dynamic-memory' properties to support the property
  'ibm,dynamic-memory-v2' by means of the DRMEM LMB interpretation
  code.

Signed-off-by: Michael Bringmann <mwb@linux.vnet.ibm.com>
---
Changes in RFC:
  -- Simplify code to update memory nodes during mobility checks.
  -- Reuse code from DRMEM changes to scan for LMBs when updating
     aa_index
  -- Combine common code for properties 'ibm,dynamic-memory' and
     'ibm,dynamic-memory-v2' after integrating DRMEM features.
  -- Rearrange patches to co-locate memory property-related changes.
  -- Use new paired list iterator for the drmem info arrays.
  -- Use direct calls to add/remove memory from the update drconf
     function as those operations are only intended for user DLPAR
     ops, and should not occur during migration reconfig notifier
     changes.
  -- Correct processing bug in processing of
     ibm,associativity-lookup-arrays
  -- Rebase to 4.17-rc5 kernel
  -- Apply minor code cleanups
---
 arch/powerpc/platforms/pseries/hotplug-memory.c |  153 ++++++++++++++++++-----
 1 file changed, 121 insertions(+), 32 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
index c1578f5..ac329aa 100644
--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
+++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
@@ -994,13 +994,11 @@ static int pseries_add_mem_node(struct device_node *np)
 	return (ret < 0) ? -EINVAL : 0;
 }
 
-static int pseries_update_drconf_memory(struct of_reconfig_data *pr)
+static int pseries_update_drconf_memory(struct drmem_lmb_info *new_dinfo)
 {
-	struct of_drconf_cell_v1 *new_drmem, *old_drmem;
+	struct drmem_lmb *old_lmb, *new_lmb;
 	unsigned long memblock_size;
-	u32 entries;
-	__be32 *p;
-	int i, rc = -EINVAL;
+	int rc = 0;
 
 	if (rtas_hp_event)
 		return 0;
@@ -1009,42 +1007,124 @@ static int pseries_update_drconf_memory(struct of_reconfig_data *pr)
 	if (!memblock_size)
 		return -EINVAL;
 
-	p = (__be32 *) pr->old_prop->value;
-	if (!p)
-		return -EINVAL;
+	/* Arrays should have the same size and DRC indexes */
+	for_each_pair_drmem_lmb(drmem_info, old_lmb, new_dinfo, new_lmb) {
 
-	/* The first int of the property is the number of lmb's described
-	 * by the property. This is followed by an array of of_drconf_cell
-	 * entries. Get the number of entries and skip to the array of
-	 * of_drconf_cell's.
-	 */
-	entries = be32_to_cpu(*p++);
-	old_drmem = (struct of_drconf_cell_v1 *)p;
-
-	p = (__be32 *)pr->prop->value;
-	p++;
-	new_drmem = (struct of_drconf_cell_v1 *)p;
+		if (new_lmb->drc_index != old_lmb->drc_index)
+			continue;
 
-	for (i = 0; i < entries; i++) {
-		if ((be32_to_cpu(old_drmem[i].flags) & DRCONF_MEM_ASSIGNED) &&
-		    (!(be32_to_cpu(new_drmem[i].flags) & DRCONF_MEM_ASSIGNED))) {
+		if ((old_lmb->flags & DRCONF_MEM_ASSIGNED) &&
+		    (!(new_lmb->flags & DRCONF_MEM_ASSIGNED))) {
 			rc = pseries_remove_memblock(
-				be64_to_cpu(old_drmem[i].base_addr),
-				memblock_size);
+				old_lmb->base_addr, memblock_size);
 			break;
-		} else if ((!(be32_to_cpu(old_drmem[i].flags) &
-			    DRCONF_MEM_ASSIGNED)) &&
-			    (be32_to_cpu(new_drmem[i].flags) &
-			    DRCONF_MEM_ASSIGNED)) {
-			rc = memblock_add(be64_to_cpu(old_drmem[i].base_addr),
-					  memblock_size);
+		} else if ((!(old_lmb->flags & DRCONF_MEM_ASSIGNED)) &&
+			   (new_lmb->flags & DRCONF_MEM_ASSIGNED)) {
+			rc = memblock_add(old_lmb->base_addr,
+					  memblock_size);
 			rc = (rc < 0) ? -EINVAL : 0;
 			break;
+		} else if ((old_lmb->aa_index != new_lmb->aa_index) &&
+			   (new_lmb->flags & DRCONF_MEM_ASSIGNED)) {
+			dlpar_queue_action(PSERIES_HP_ELOG_RESOURCE_MEM,
+					   PSERIES_HP_ELOG_ACTION_READD,
+					   new_lmb->drc_index);
 		}
 	}
 
 	return rc;
 }
 
+static void pseries_update_ala_memory_aai(int aa_index)
+{
+	struct drmem_lmb *lmb;
+
+	/* Readd all LMBs which were previously using the
+	 * specified aa_index value.
+	 */
+	for_each_drmem_lmb(lmb) {
+		if ((lmb->aa_index == aa_index) &&
+		    (lmb->flags & DRCONF_MEM_ASSIGNED)) {
+			dlpar_queue_action(PSERIES_HP_ELOG_RESOURCE_MEM,
+					   PSERIES_HP_ELOG_ACTION_READD,
+					   lmb->drc_index);
+		}
+	}
+}
+
+struct assoc_arrays {
+	u32 n_arrays;
+	u32 array_sz;
+	const __be32 *arrays;
+};
+
+static int pseries_update_ala_memory(struct of_reconfig_data *pr)
+{
+	struct assoc_arrays new_ala, old_ala;
+	__be32 *p;
+	int i, lim;
+
+	if (rtas_hp_event)
+		return 0;
+
+	/*
+	 * The layout of the ibm,associativity-lookup-arrays
+	 * property is a number N indicating the number of
+	 * associativity arrays, followed by a number M
+	 * indicating the size of each associativity array,
+	 * followed by a list of N associativity arrays.
+	 */
+
+	p = (__be32 *) pr->old_prop->value;
+	if (!p)
+		return -EINVAL;
+	old_ala.n_arrays = of_read_number(p++, 1);
+	old_ala.array_sz = of_read_number(p++, 1);
+	old_ala.arrays = p;
+
+	p = (__be32 *) pr->prop->value;
+	if (!p)
+		return -EINVAL;
+	new_ala.n_arrays = of_read_number(p++, 1);
+	new_ala.array_sz = of_read_number(p++, 1);
+	new_ala.arrays = p;
+
+	lim = (new_ala.n_arrays > old_ala.n_arrays) ? old_ala.n_arrays :
+			new_ala.n_arrays;
+
+	if (old_ala.array_sz == new_ala.array_sz) {
+
+		/* Reset any entries where the old and new rows
+		 * of the array have changed.
+		 */
+		for (i = 0; i < lim; i++) {
+			int index = (i * new_ala.array_sz);
+
+			if (!memcmp(&old_ala.arrays[index],
+				    &new_ala.arrays[index],
+				    new_ala.array_sz * sizeof(__be32)))
+				continue;
+
+			pseries_update_ala_memory_aai(i);
+		}
+
+		/* Reset any entries representing the extra rows.
+		 * There shouldn't be any, but just in case ...
+		 */
+		for (i = lim; i < new_ala.n_arrays; i++)
+			pseries_update_ala_memory_aai(i);
+
+	} else {
+		/* Update all entries representing these rows;
+		 * as all rows have different sizes, none can
+		 * have equivalent values.
+		 */
+		for (i = 0; i < lim; i++)
+			pseries_update_ala_memory_aai(i);
+	}
+
+	return 0;
+}
+
 static int pseries_memory_notifier(struct notifier_block *nb,
 				   unsigned long action, void *data)
 {
@@ -1059,8 +1139,17 @@ static int pseries_memory_notifier(struct notifier_block *nb,
 		err = pseries_remove_mem_node(rd->dn);
 		break;
 	case OF_RECONFIG_UPDATE_PROPERTY:
-		if (!strcmp(rd->prop->name, "ibm,dynamic-memory"))
-			err = pseries_update_drconf_memory(rd);
+		if (!strcmp(rd->prop->name, "ibm,dynamic-memory") ||
+		    !strcmp(rd->prop->name, "ibm,dynamic-memory-v2")) {
+			struct drmem_lmb_info *dinfo =
+				drmem_lmbs_init(rd->prop);
+			if (!dinfo)
+				return -EINVAL;
+			err = pseries_update_drconf_memory(dinfo);
+			drmem_lmbs_free(dinfo);
+		} else if (!strcmp(rd->prop->name,
+				"ibm,associativity-lookup-arrays"))
+			err = pseries_update_ala_memory(rd);
 		break;
 	}
 	return notifier_from_errno(err);
* [RFC v5 0/6] powerpc/hotplug: Fix affinity assoc for LPAR migration
@ 2018-05-21 17:51 Michael Bringmann
  2018-05-21 17:52 ` [RFC v5 6/6] migration/memory: Update memory for assoc changes Michael Bringmann
  0 siblings, 1 reply; 10+ messages in thread
From: Michael Bringmann @ 2018-05-21 17:51 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Michael Bringmann, Nathan Fontenot, John Allen, Tyrel Datwyler,
	Thomas Falcon

The migration of LPARs across Power systems affects many attributes,
including the associativity of memory blocks and CPUs.  The patches in
this set execute when a system is coming up fresh upon a migration
target.  They are intended to:

* Recognize changes to the associativity of memory and CPUs recorded
  in internal data structures when compared to the latest copies in
  the device tree (e.g. ibm,dynamic-memory, ibm,dynamic-memory-v2,
  cpus),
* Recognize changes to the associativity mapping (e.g.
  ibm,associativity-lookup-arrays), locate all assigned memory blocks
  corresponding to each changed row, and re-add all such blocks.
* Generate calls to other code layers to reset the data structures
  related to associativity of the CPUs and memory.
* Re-register the 'changed' entities into the target system.
  Re-registration of CPUs and memory blocks mostly entails acting as
  if they have been newly hot-added into the target system.

Signed-off-by: Michael Bringmann <mwb@linux.vnet.ibm.com>

Michael Bringmann (3):
  powerpc migration/drmem: Modify DRMEM code to export more features
  powerpc migration/cpu: Associativity & cpu changes
  powerpc migration/memory: Associativity & memory updates

---
Changes in RFC:
  -- Restructure and rearrange content of patches to co-locate similar
     or related modifications
  -- Rename pseries_update_drconf_cpu to pseries_update_cpu
  -- Simplify code to update CPU nodes during mobility checks.
     Remove functions to generate extra HP_ELOG messages in favor of
     direct function calls to dlpar_cpu_readd_by_index or
     dlpar_memory_readd_by_index.
  -- Revise code order in dlpar_cpu_readd_by_index() to present more
     appropriate error codes from underlying layers of the
     implementation.
  -- Add hotplug device lock around all property updates
  -- Schedule all CPU and memory changes due to device-tree updates /
     LPAR mobility as workqueue operations
  -- Export DRMEM accessor functions to parse 'ibm,dynamic-memory-v2'
  -- Export DRMEM functions to provide user copies of LMB array
  -- Compress code using DRMEM accessor functions.
  -- Split topology timer crash fix into new patch.
  -- Modify DRMEM code to replace usages of dt_root_addr_cells and
     dt_mem_next_cell, as these are only available at first boot.
  -- Correct a bug in DRC index selection for queued operation.
  -- Rebase to 4.17-rc5 kernel
  -- Minor code cleanups

^ permalink raw reply	[flat|nested] 10+ messages in thread
* [RFC v5 6/6] migration/memory: Update memory for assoc changes
  2018-05-21 17:51 [RFC v5 0/6] powerpc/hotplug: Fix affinity assoc for LPAR migration Michael Bringmann
@ 2018-05-21 17:52 ` Michael Bringmann
  2018-05-22 21:11   ` Thomas Falcon
  0 siblings, 1 reply; 10+ messages in thread
From: Michael Bringmann @ 2018-05-21 17:52 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Michael Bringmann, Nathan Fontenot, John Allen, Tyrel Datwyler,
	Thomas Falcon

migration/memory: This patch adds more recognition for changes to the
associativity of memory blocks described by the device-tree properties
and updates local and general kernel data structures to reflect those
changes.  These differences may include:

* Evaluating 'ibm,dynamic-memory' properties when processing the
  topology of LPARs in post-migration events.  Previous efforts only
  recognized whether a memory block's assignment had changed in the
  property.  Changes here include checking the aa_index values for
  each drc_index of the old/new LMBs and 're-adding' any block for
  which the setting has changed.

* In an LPAR migration scenario, the "ibm,associativity-lookup-arrays"
  property may change.  In the event that a row of the array differs,
  locate all assigned memory blocks with that 'aa_index' and 're-add'
  them to the system memory block data structures.  In the process of
  the 're-add', the system routines will update the corresponding
  entry for the memory in the LMB structures and any other relevant
  kernel data structures.

* Extend the previous work for the 'ibm,associativity-lookup-arrays'
  and 'ibm,dynamic-memory' properties to support the property
  'ibm,dynamic-memory-v2' by means of the DRMEM LMB interpretation
  code.

Signed-off-by: Michael Bringmann <mwb@linux.vnet.ibm.com>
---
Changes in RFC:
  -- Simplify code to update memory nodes during mobility checks.
  -- Reuse code from DRMEM changes to scan for LMBs when updating
     aa_index
  -- Combine common code for properties 'ibm,dynamic-memory' and
     'ibm,dynamic-memory-v2' after integrating DRMEM features.
  -- Rearrange patches to co-locate memory property-related changes.
  -- Use new paired list iterator for the drmem info arrays.
  -- Use direct calls to add/remove memory from the update drconf
     function as those operations are only intended for user DLPAR
     ops, and should not occur during Migration reconfig notifier
     changes.
  -- Correct processing bug in processing of
     ibm,associativity-lookup-arrays
  -- Rebase to 4.17-rc5 kernel
  -- Apply minor code cleanups
---
 arch/powerpc/platforms/pseries/hotplug-memory.c | 153 ++++++++++++++++++-----
 1 file changed, 121 insertions(+), 32 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
index c1578f5..ac329aa 100644
--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
+++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
@@ -994,13 +994,11 @@ static int pseries_add_mem_node(struct device_node *np)
 	return (ret < 0) ? -EINVAL : 0;
 }
 
-static int pseries_update_drconf_memory(struct of_reconfig_data *pr)
+static int pseries_update_drconf_memory(struct drmem_lmb_info *new_dinfo)
 {
-	struct of_drconf_cell_v1 *new_drmem, *old_drmem;
+	struct drmem_lmb *old_lmb, *new_lmb;
 	unsigned long memblock_size;
-	u32 entries;
-	__be32 *p;
-	int i, rc = -EINVAL;
+	int rc = 0;
 
 	if (rtas_hp_event)
 		return 0;
@@ -1009,42 +1007,124 @@ static int pseries_update_drconf_memory(struct of_reconfig_data *pr)
 	if (!memblock_size)
 		return -EINVAL;
 
-	p = (__be32 *) pr->old_prop->value;
-	if (!p)
-		return -EINVAL;
+	/* Arrays should have the same size and DRC indexes */
+	for_each_pair_drmem_lmb(drmem_info, old_lmb, new_dinfo, new_lmb) {
 
-	/* The first int of the property is the number of lmb's described
-	 * by the property.  This is followed by an array of of_drconf_cell
-	 * entries.  Get the number of entries and skip to the array of
-	 * of_drconf_cell's.
-	 */
-	entries = be32_to_cpu(*p++);
-	old_drmem = (struct of_drconf_cell_v1 *)p;
-
-	p = (__be32 *)pr->prop->value;
-	p++;
-	new_drmem = (struct of_drconf_cell_v1 *)p;
+		if (new_lmb->drc_index != old_lmb->drc_index)
+			continue;
 
-	for (i = 0; i < entries; i++) {
-		if ((be32_to_cpu(old_drmem[i].flags) & DRCONF_MEM_ASSIGNED) &&
-		    (!(be32_to_cpu(new_drmem[i].flags) & DRCONF_MEM_ASSIGNED))) {
+		if ((old_lmb->flags & DRCONF_MEM_ASSIGNED) &&
+		    (!(new_lmb->flags & DRCONF_MEM_ASSIGNED))) {
 			rc = pseries_remove_memblock(
-				be64_to_cpu(old_drmem[i].base_addr),
-				memblock_size);
+				old_lmb->base_addr, memblock_size);
 			break;
-		} else if ((!(be32_to_cpu(old_drmem[i].flags) &
-			    DRCONF_MEM_ASSIGNED)) &&
-			    (be32_to_cpu(new_drmem[i].flags) &
-			    DRCONF_MEM_ASSIGNED)) {
-			rc = memblock_add(be64_to_cpu(old_drmem[i].base_addr),
-					  memblock_size);
+		} else if ((!(old_lmb->flags & DRCONF_MEM_ASSIGNED)) &&
+			   (new_lmb->flags & DRCONF_MEM_ASSIGNED)) {
+			rc = memblock_add(old_lmb->base_addr,
+					  memblock_size);
 			rc = (rc < 0) ? -EINVAL : 0;
 			break;
+		} else if ((old_lmb->aa_index != new_lmb->aa_index) &&
+			   (new_lmb->flags & DRCONF_MEM_ASSIGNED)) {
+			dlpar_queue_action(PSERIES_HP_ELOG_RESOURCE_MEM,
+					   PSERIES_HP_ELOG_ACTION_READD,
+					   new_lmb->drc_index);
 		}
 	}
 	return rc;
 }
 
+static void pseries_update_ala_memory_aai(int aa_index)
+{
+	struct drmem_lmb *lmb;
+
+	/* Readd all LMBs which were previously using the
+	 * specified aa_index value.
+	 */
+	for_each_drmem_lmb(lmb) {
+		if ((lmb->aa_index == aa_index) &&
+		    (lmb->flags & DRCONF_MEM_ASSIGNED)) {
+			dlpar_queue_action(PSERIES_HP_ELOG_RESOURCE_MEM,
+					   PSERIES_HP_ELOG_ACTION_READD,
+					   lmb->drc_index);
+		}
+	}
+}
+
+struct assoc_arrays {
+	u32 n_arrays;
+	u32 array_sz;
+	const __be32 *arrays;
+};
+
+static int pseries_update_ala_memory(struct of_reconfig_data *pr)
+{
+	struct assoc_arrays new_ala, old_ala;
+	__be32 *p;
+	int i, lim;
+
+	if (rtas_hp_event)
+		return 0;
+
+	/*
+	 * The layout of the ibm,associativity-lookup-arrays
+	 * property is a number N indicating the number of
+	 * associativity arrays, followed by a number M
+	 * indicating the size of each associativity array,
+	 * followed by a list of N associativity arrays.
+	 */
+
+	p = (__be32 *) pr->old_prop->value;
+	if (!p)
+		return -EINVAL;
+	old_ala.n_arrays = of_read_number(p++, 1);
+	old_ala.array_sz = of_read_number(p++, 1);
+	old_ala.arrays = p;
+
+	p = (__be32 *) pr->prop->value;
+	if (!p)
+		return -EINVAL;
+	new_ala.n_arrays = of_read_number(p++, 1);
+	new_ala.array_sz = of_read_number(p++, 1);
+	new_ala.arrays = p;
+
+	lim = (new_ala.n_arrays > old_ala.n_arrays) ? old_ala.n_arrays :
+			new_ala.n_arrays;
+
+	if (old_ala.array_sz == new_ala.array_sz) {
+
+		/* Reset any entries where the old and new rows
+		 * of the array have changed.
+		 */
+		for (i = 0; i < lim; i++) {
+			int index = (i * new_ala.array_sz);
+
+			if (!memcmp(&old_ala.arrays[index],
+				    &new_ala.arrays[index],
+				    new_ala.array_sz))
+				continue;
+
+			pseries_update_ala_memory_aai(i);
+		}
+
+		/* Reset any entries representing the extra rows.
+		 * There shouldn't be any, but just in case ...
+		 */
+		for (i = lim; i < new_ala.n_arrays; i++)
+			pseries_update_ala_memory_aai(i);
+
+	} else {
+		/* Update all entries representing these rows;
+		 * as all rows have different sizes, none can
+		 * have equivalent values.
+		 */
+		for (i = 0; i < lim; i++)
+			pseries_update_ala_memory_aai(i);
+	}
+
+	return 0;
+}
+
 static int pseries_memory_notifier(struct notifier_block *nb,
 				   unsigned long action, void *data)
 {
@@ -1059,8 +1139,17 @@ static int pseries_memory_notifier(struct notifier_block *nb,
 		err = pseries_remove_mem_node(rd->dn);
 		break;
 	case OF_RECONFIG_UPDATE_PROPERTY:
-		if (!strcmp(rd->prop->name, "ibm,dynamic-memory"))
-			err = pseries_update_drconf_memory(rd);
+		if (!strcmp(rd->prop->name, "ibm,dynamic-memory") ||
+		    !strcmp(rd->prop->name, "ibm,dynamic-memory-v2")) {
+			struct drmem_lmb_info *dinfo =
+				drmem_lmbs_init(rd->prop);
+			if (!dinfo)
+				return -EINVAL;
+			err = pseries_update_drconf_memory(dinfo);
+			drmem_lmbs_free(dinfo);
+		} else if (!strcmp(rd->prop->name,
+				"ibm,associativity-lookup-arrays"))
+			err = pseries_update_ala_memory(rd);
 		break;
 	}
 	return notifier_from_errno(err);

^ permalink raw reply related	[flat|nested] 10+ messages in thread
* Re: [RFC v5 6/6] migration/memory: Update memory for assoc changes
  2018-05-21 17:52 ` [RFC v5 6/6] migration/memory: Update memory for assoc changes Michael Bringmann
@ 2018-05-22 21:11   ` Thomas Falcon
  2018-05-22 23:54     ` Michael Bringmann
  0 siblings, 1 reply; 10+ messages in thread
From: Thomas Falcon @ 2018-05-22 21:11 UTC (permalink / raw)
  To: Michael Bringmann, linuxppc-dev
  Cc: Nathan Fontenot, John Allen, Tyrel Datwyler

On 05/21/2018 12:52 PM, Michael Bringmann wrote:
> migration/memory: This patch adds more recognition for changes to
> the associativity of memory blocks described by the device-tree
> properties and updates local and general kernel data structures to
> reflect those changes.  These differences may include:
>
> * Evaluating 'ibm,dynamic-memory' properties when processing the
>   topology of LPARS in Post Migration events.  Previous efforts
>   only recognized whether a memory block's assignment had changed
>   in the property.  Changes here include checking the aa_index
>   values for each drc_index of the old/new LMBs and to 'readd'
>   any block for which the setting has changed.
>
> * In an LPAR migration scenario, the "ibm,associativity-lookup-arrays"
>   property may change.  In the event that a row of the array differs,
>   locate all assigned memory blocks with that 'aa_index' and 're-add'
>   them to the system memory block data structures.  In the process of
>   the 're-add', the system routines will update the corresponding entry
>   for the memory in the LMB structures and any other relevant kernel
>   data structures.
>
> * Extend the previous work for the 'ibm,associativity-lookup-array'
>   and 'ibm,dynamic-memory' properties to support the property
>   'ibm,dynamic-memory-v2' by means of the DRMEM LMB interpretation
>   code.
>
> Signed-off-by: Michael Bringmann <mwb@linux.vnet.ibm.com>
> ---
> Changes in RFC:
> -- Simplify code to update memory nodes during mobility checks.
> -- Reuse code from DRMEM changes to scan for LMBs when updating > aa_index > -- Combine common code for properties 'ibm,dynamic-memory' and > 'ibm,dynamic-memory-v2' after integrating DRMEM features. > -- Rearrange patches to co-locate memory property-related changes. > -- Use new paired list iterator for the drmem info arrays. > -- Use direct calls to add/remove memory from the update drconf > function as those operations are only intended for user DLPAR > ops, and should not occur during Migration reconfig notifier > changes. > -- Correct processing bug in processing of ibm,associativity-lookup-arrays > -- Rebase to 4.17-rc5 kernel > -- Apply minor code cleanups > --- > arch/powerpc/platforms/pseries/hotplug-memory.c | 153 ++++++++++++++++++----- > 1 file changed, 121 insertions(+), 32 deletions(-) > > diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c > index c1578f5..ac329aa 100644 > --- a/arch/powerpc/platforms/pseries/hotplug-memory.c > +++ b/arch/powerpc/platforms/pseries/hotplug-memory.c > @@ -994,13 +994,11 @@ static int pseries_add_mem_node(struct device_node *np) > return (ret < 0) ? 
-EINVAL : 0; > } > > -static int pseries_update_drconf_memory(struct of_reconfig_data *pr) > +static int pseries_update_drconf_memory(struct drmem_lmb_info *new_dinfo) > { > - struct of_drconf_cell_v1 *new_drmem, *old_drmem; > + struct drmem_lmb *old_lmb, *new_lmb; > unsigned long memblock_size; > - u32 entries; > - __be32 *p; > - int i, rc = -EINVAL; > + int rc = 0; > > if (rtas_hp_event) > return 0; > @@ -1009,42 +1007,124 @@ static int pseries_update_drconf_memory(struct of_reconfig_data *pr) > if (!memblock_size) > return -EINVAL; > > - p = (__be32 *) pr->old_prop->value; > - if (!p) > - return -EINVAL; > + /* Arrays should have the same size and DRC indexes */ > + for_each_pair_drmem_lmb(drmem_info, old_lmb, new_dinfo, new_lmb) { > > - /* The first int of the property is the number of lmb's described > - * by the property. This is followed by an array of of_drconf_cell > - * entries. Get the number of entries and skip to the array of > - * of_drconf_cell's. > - */ > - entries = be32_to_cpu(*p++); > - old_drmem = (struct of_drconf_cell_v1 *)p; > - > - p = (__be32 *)pr->prop->value; > - p++; > - new_drmem = (struct of_drconf_cell_v1 *)p; > + if (new_lmb->drc_index != old_lmb->drc_index) > + continue; > > - for (i = 0; i < entries; i++) { > - if ((be32_to_cpu(old_drmem[i].flags) & DRCONF_MEM_ASSIGNED) && > - (!(be32_to_cpu(new_drmem[i].flags) & DRCONF_MEM_ASSIGNED))) { > + if ((old_lmb->flags & DRCONF_MEM_ASSIGNED) && > + (!(new_lmb->flags & DRCONF_MEM_ASSIGNED))) { > rc = pseries_remove_memblock( > - be64_to_cpu(old_drmem[i].base_addr), > - memblock_size); > + old_lmb->base_addr, memblock_size); > break; > - } else if ((!(be32_to_cpu(old_drmem[i].flags) & > - DRCONF_MEM_ASSIGNED)) && > - (be32_to_cpu(new_drmem[i].flags) & > - DRCONF_MEM_ASSIGNED)) { > - rc = memblock_add(be64_to_cpu(old_drmem[i].base_addr), > - memblock_size); > + } else if ((!(old_lmb->flags & DRCONF_MEM_ASSIGNED)) && > + (new_lmb->flags & DRCONF_MEM_ASSIGNED)) { > + rc = 
memblock_add(old_lmb->base_addr, > + memblock_size); > rc = (rc < 0) ? -EINVAL : 0; > break; > + } else if ((old_lmb->aa_index != new_lmb->aa_index) && > + (new_lmb->flags & DRCONF_MEM_ASSIGNED)) { > + dlpar_queue_action(PSERIES_HP_ELOG_RESOURCE_MEM, > + PSERIES_HP_ELOG_ACTION_READD, > + new_lmb->drc_index); > } > } > return rc; > } > > +static void pseries_update_ala_memory_aai(int aa_index) > +{ > + struct drmem_lmb *lmb; > + > + /* Readd all LMBs which were previously using the > + * specified aa_index value. > + */ > + for_each_drmem_lmb(lmb) { > + if ((lmb->aa_index == aa_index) && > + (lmb->flags & DRCONF_MEM_ASSIGNED)) { > + dlpar_queue_action(PSERIES_HP_ELOG_RESOURCE_MEM, > + PSERIES_HP_ELOG_ACTION_READD, > + lmb->drc_index); > + } > + } > +} > + > +struct assoc_arrays { > + u32 n_arrays; > + u32 array_sz; > + const __be32 *arrays; > +}; > + > +static int pseries_update_ala_memory(struct of_reconfig_data *pr) > +{ > + struct assoc_arrays new_ala, old_ala; > + __be32 *p; > + int i, lim; > + > + if (rtas_hp_event) > + return 0; > + > + /* > + * The layout of the ibm,associativity-lookup-arrays > + * property is a number N indicating the number of > + * associativity arrays, followed by a number M > + * indicating the size of each associativity array, > + * followed by a list of N associativity arrays. > + */ > + > + p = (__be32 *) pr->old_prop->value; > + if (!p) > + return -EINVAL; > + old_ala.n_arrays = of_read_number(p++, 1); > + old_ala.array_sz = of_read_number(p++, 1); > + old_ala.arrays = p; > + > + p = (__be32 *) pr->prop->value; > + if (!p) > + return -EINVAL; > + new_ala.n_arrays = of_read_number(p++, 1); > + new_ala.array_sz = of_read_number(p++, 1); > + new_ala.arrays = p; > + I don't know how often associativity lookup arrays needs to be parsed, but maybe it would be helpful to create a helper function to parse those here. > + lim = (new_ala.n_arrays > old_ala.n_arrays) ? 
old_ala.n_arrays : > + new_ala.n_arrays; > + > + if (old_ala.array_sz == new_ala.array_sz) { > + > + /* Reset any entries where the old and new rows > + * the array have changed. > + */ > + for (i = 0; i < lim; i++) { > + int index = (i * new_ala.array_sz); > + > + if (!memcmp(&old_ala.arrays[index], > + &new_ala.arrays[index], > + new_ala.array_sz)) > + continue; > + > + pseries_update_ala_memory_aai(i); > + } > + > + /* Reset any entries representing the extra rows. > + * There shouldn't be any, but just in case ... > + */ > + for (i = lim; i < new_ala.n_arrays; i++) > + pseries_update_ala_memory_aai(i); > + > + } else { > + /* Update all entries representing these rows; > + * as all rows have different sizes, none can > + * have equivalent values. > + */ > + for (i = 0; i < lim; i++) > + pseries_update_ala_memory_aai(i); > + } > + > + return 0; > +} > + > static int pseries_memory_notifier(struct notifier_block *nb, > unsigned long action, void *data) > { > @@ -1059,8 +1139,17 @@ static int pseries_memory_notifier(struct notifier_block *nb, > err = pseries_remove_mem_node(rd->dn); > break; > case OF_RECONFIG_UPDATE_PROPERTY: > - if (!strcmp(rd->prop->name, "ibm,dynamic-memory")) > - err = pseries_update_drconf_memory(rd); > + if (!strcmp(rd->prop->name, "ibm,dynamic-memory") || > + !strcmp(rd->prop->name, "ibm,dynamic-memory-v2")) { > + struct drmem_lmb_info *dinfo = > + drmem_lmbs_init(rd->prop); > + if (!dinfo) > + return -EINVAL; > + err = pseries_update_drconf_memory(dinfo); > + drmem_lmbs_free(dinfo); Is this block above related to the other associativity changes? It seems to be an update for dynamic-memory-v2, so should probably be in a separate patch. Thanks, Tom > + } else if (!strcmp(rd->prop->name, > + "ibm,associativity-lookup-arrays")) > + err = pseries_update_ala_memory(rd); > break; > } > return notifier_from_errno(err); ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [RFC v5 6/6] migration/memory: Update memory for assoc changes
  2018-05-22 21:11   ` Thomas Falcon
@ 2018-05-22 23:54     ` Michael Bringmann
  0 siblings, 0 replies; 10+ messages in thread
From: Michael Bringmann @ 2018-05-22 23:54 UTC (permalink / raw)
  To: Thomas Falcon, linuxppc-dev; +Cc: Nathan Fontenot, Tyrel Datwyler, John Allen

This patch was intended to apply the necessary changes for the
'ibm,dynamic-memory[-v2]' properties.  Before the advent of the LMB
representation, that code took up a lot more space.  At this point, it
has shrunk to only one line of unique change.  I was hoping to include
it here rather than create another patch.  But that can be done.

Michael

On 05/22/2018 04:11 PM, Thomas Falcon wrote:
> On 05/21/2018 12:52 PM, Michael Bringmann wrote:
>> migration/memory: This patch adds more recognition for changes to
>> the associativity of memory blocks described by the device-tree
>> properties and updates local and general kernel data structures to
>> reflect those changes.  These differences may include:
>>
>> * Evaluating 'ibm,dynamic-memory' properties when processing the
>>   topology of LPARS in Post Migration events.  Previous efforts
>>   only recognized whether a memory block's assignment had changed
>>   in the property.  Changes here include checking the aa_index
>>   values for each drc_index of the old/new LMBs and to 'readd'
>>   any block for which the setting has changed.
>>
>> * In an LPAR migration scenario, the "ibm,associativity-lookup-arrays"
>>   property may change.  In the event that a row of the array differs,
>>   locate all assigned memory blocks with that 'aa_index' and 're-add'
>>   them to the system memory block data structures.  In the process of
>>   the 're-add', the system routines will update the corresponding entry
>>   for the memory in the LMB structures and any other relevant kernel
>>   data structures.
>> >> * Extend the previous work for the 'ibm,associativity-lookup-array' >> and 'ibm,dynamic-memory' properties to support the property >> 'ibm,dynamic-memory-v2' by means of the DRMEM LMB interpretation >> code. >> >> Signed-off-by: Michael Bringmann <mwb@linux.vnet.ibm.com> >> --- >> Changes in RFC: >> -- Simplify code to update memory nodes during mobility checks. >> -- Reuse code from DRMEM changes to scan for LMBs when updating >> aa_index >> -- Combine common code for properties 'ibm,dynamic-memory' and >> 'ibm,dynamic-memory-v2' after integrating DRMEM features. >> -- Rearrange patches to co-locate memory property-related changes. >> -- Use new paired list iterator for the drmem info arrays. >> -- Use direct calls to add/remove memory from the update drconf >> function as those operations are only intended for user DLPAR >> ops, and should not occur during Migration reconfig notifier >> changes. >> -- Correct processing bug in processing of ibm,associativity-lookup-arrays >> -- Rebase to 4.17-rc5 kernel >> -- Apply minor code cleanups >> --- >> arch/powerpc/platforms/pseries/hotplug-memory.c | 153 ++++++++++++++++++----- >> 1 file changed, 121 insertions(+), 32 deletions(-) >> >> diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c >> index c1578f5..ac329aa 100644 >> --- a/arch/powerpc/platforms/pseries/hotplug-memory.c >> +++ b/arch/powerpc/platforms/pseries/hotplug-memory.c >> @@ -994,13 +994,11 @@ static int pseries_add_mem_node(struct device_node *np) >> return (ret < 0) ? 
-EINVAL : 0; >> } >> >> -static int pseries_update_drconf_memory(struct of_reconfig_data *pr) >> +static int pseries_update_drconf_memory(struct drmem_lmb_info *new_dinfo) >> { >> - struct of_drconf_cell_v1 *new_drmem, *old_drmem; >> + struct drmem_lmb *old_lmb, *new_lmb; >> unsigned long memblock_size; >> - u32 entries; >> - __be32 *p; >> - int i, rc = -EINVAL; >> + int rc = 0; >> >> if (rtas_hp_event) >> return 0; >> @@ -1009,42 +1007,124 @@ static int pseries_update_drconf_memory(struct of_reconfig_data *pr) >> if (!memblock_size) >> return -EINVAL; >> >> - p = (__be32 *) pr->old_prop->value; >> - if (!p) >> - return -EINVAL; >> + /* Arrays should have the same size and DRC indexes */ >> + for_each_pair_drmem_lmb(drmem_info, old_lmb, new_dinfo, new_lmb) { >> >> - /* The first int of the property is the number of lmb's described >> - * by the property. This is followed by an array of of_drconf_cell >> - * entries. Get the number of entries and skip to the array of >> - * of_drconf_cell's. 
>> - */ >> - entries = be32_to_cpu(*p++); >> - old_drmem = (struct of_drconf_cell_v1 *)p; >> - >> - p = (__be32 *)pr->prop->value; >> - p++; >> - new_drmem = (struct of_drconf_cell_v1 *)p; >> + if (new_lmb->drc_index != old_lmb->drc_index) >> + continue; >> >> - for (i = 0; i < entries; i++) { >> - if ((be32_to_cpu(old_drmem[i].flags) & DRCONF_MEM_ASSIGNED) && >> - (!(be32_to_cpu(new_drmem[i].flags) & DRCONF_MEM_ASSIGNED))) { >> + if ((old_lmb->flags & DRCONF_MEM_ASSIGNED) && >> + (!(new_lmb->flags & DRCONF_MEM_ASSIGNED))) { >> rc = pseries_remove_memblock( >> - be64_to_cpu(old_drmem[i].base_addr), >> - memblock_size); >> + old_lmb->base_addr, memblock_size); >> break; >> - } else if ((!(be32_to_cpu(old_drmem[i].flags) & >> - DRCONF_MEM_ASSIGNED)) && >> - (be32_to_cpu(new_drmem[i].flags) & >> - DRCONF_MEM_ASSIGNED)) { >> - rc = memblock_add(be64_to_cpu(old_drmem[i].base_addr), >> - memblock_size); >> + } else if ((!(old_lmb->flags & DRCONF_MEM_ASSIGNED)) && >> + (new_lmb->flags & DRCONF_MEM_ASSIGNED)) { >> + rc = memblock_add(old_lmb->base_addr, >> + memblock_size); >> rc = (rc < 0) ? -EINVAL : 0; >> break; >> + } else if ((old_lmb->aa_index != new_lmb->aa_index) && >> + (new_lmb->flags & DRCONF_MEM_ASSIGNED)) { >> + dlpar_queue_action(PSERIES_HP_ELOG_RESOURCE_MEM, >> + PSERIES_HP_ELOG_ACTION_READD, >> + new_lmb->drc_index); >> } >> } >> return rc; >> } >> >> +static void pseries_update_ala_memory_aai(int aa_index) >> +{ >> + struct drmem_lmb *lmb; >> + >> + /* Readd all LMBs which were previously using the >> + * specified aa_index value. 
>> + */ >> + for_each_drmem_lmb(lmb) { >> + if ((lmb->aa_index == aa_index) && >> + (lmb->flags & DRCONF_MEM_ASSIGNED)) { >> + dlpar_queue_action(PSERIES_HP_ELOG_RESOURCE_MEM, >> + PSERIES_HP_ELOG_ACTION_READD, >> + lmb->drc_index); >> + } >> + } >> +} >> + >> +struct assoc_arrays { >> + u32 n_arrays; >> + u32 array_sz; >> + const __be32 *arrays; >> +}; >> + >> +static int pseries_update_ala_memory(struct of_reconfig_data *pr) >> +{ >> + struct assoc_arrays new_ala, old_ala; >> + __be32 *p; >> + int i, lim; >> + >> + if (rtas_hp_event) >> + return 0; >> + >> + /* >> + * The layout of the ibm,associativity-lookup-arrays >> + * property is a number N indicating the number of >> + * associativity arrays, followed by a number M >> + * indicating the size of each associativity array, >> + * followed by a list of N associativity arrays. >> + */ >> + >> + p = (__be32 *) pr->old_prop->value; >> + if (!p) >> + return -EINVAL; >> + old_ala.n_arrays = of_read_number(p++, 1); >> + old_ala.array_sz = of_read_number(p++, 1); >> + old_ala.arrays = p; >> + >> + p = (__be32 *) pr->prop->value; >> + if (!p) >> + return -EINVAL; >> + new_ala.n_arrays = of_read_number(p++, 1); >> + new_ala.array_sz = of_read_number(p++, 1); >> + new_ala.arrays = p; >> + > > I don't know how often associativity lookup arrays needs to be parsed, but maybe it would be helpful to create a helper function to parse those here. > >> + lim = (new_ala.n_arrays > old_ala.n_arrays) ? old_ala.n_arrays : >> + new_ala.n_arrays; >> + >> + if (old_ala.array_sz == new_ala.array_sz) { >> + >> + /* Reset any entries where the old and new rows >> + * the array have changed. >> + */ >> + for (i = 0; i < lim; i++) { >> + int index = (i * new_ala.array_sz); >> + >> + if (!memcmp(&old_ala.arrays[index], >> + &new_ala.arrays[index], >> + new_ala.array_sz)) >> + continue; >> + >> + pseries_update_ala_memory_aai(i); >> + } >> + >> + /* Reset any entries representing the extra rows. 
>> + * There shouldn't be any, but just in case ... >> + */ >> + for (i = lim; i < new_ala.n_arrays; i++) >> + pseries_update_ala_memory_aai(i); >> + >> + } else { >> + /* Update all entries representing these rows; >> + * as all rows have different sizes, none can >> + * have equivalent values. >> + */ >> + for (i = 0; i < lim; i++) >> + pseries_update_ala_memory_aai(i); >> + } >> + >> + return 0; >> +} >> + >> static int pseries_memory_notifier(struct notifier_block *nb, >> unsigned long action, void *data) >> { >> @@ -1059,8 +1139,17 @@ static int pseries_memory_notifier(struct notifier_block *nb, >> err = pseries_remove_mem_node(rd->dn); >> break; >> case OF_RECONFIG_UPDATE_PROPERTY: >> - if (!strcmp(rd->prop->name, "ibm,dynamic-memory")) >> - err = pseries_update_drconf_memory(rd); >> + if (!strcmp(rd->prop->name, "ibm,dynamic-memory") || >> + !strcmp(rd->prop->name, "ibm,dynamic-memory-v2")) { >> + struct drmem_lmb_info *dinfo = >> + drmem_lmbs_init(rd->prop); >> + if (!dinfo) >> + return -EINVAL; >> + err = pseries_update_drconf_memory(dinfo); >> + drmem_lmbs_free(dinfo); > > Is this block above related to the other associativity changes? It seems to be an update for dynamic-memory-v2, so should probably be in a separate patch. > > Thanks, > Tom > >> + } else if (!strcmp(rd->prop->name, >> + "ibm,associativity-lookup-arrays")) >> + err = pseries_update_ala_memory(rd); >> break; >> } >> return notifier_from_errno(err); > > > -- Michael W. Bringmann Linux Technology Center IBM Corporation Tie-Line 363-5196 External: (512) 286-5196 Cell: (512) 466-0650 mwb@linux.vnet.ibm.com ^ permalink raw reply [flat|nested] 10+ messages in thread
end of thread, other threads:[~2018-05-22 23:54 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz  follow: Atom feed
-- links below jump to the message on this page --
2018-05-22 23:36 [RFC v6 0/6] powerpc/hotplug: Fix affinity assoc for LPAR migration Michael Bringmann
2018-05-22 23:36 ` [RFC v5 1/6] powerpc/drmem: Export 'dynamic-memory' loader Michael Bringmann
2018-05-22 23:36 ` [RFC v5 2/6] powerpc/cpu: Conditionally acquire/release DRC index Michael Bringmann
2018-05-22 23:36 ` [RFC v5 3/6] migration/dlpar: Add device readd queuing function Michael Bringmann
2018-05-22 23:36 ` [RFC v5 4/6] powerpc/dlpar: Provide CPU readd operation Michael Bringmann
2018-05-22 23:36 ` [RFC v5 5/6] powerpc/mobility: Add lock/unlock device hotplug Michael Bringmann
2018-05-22 23:36 ` [RFC v5 6/6] migration/memory: Update memory for assoc changes Michael Bringmann
  -- strict thread matches above, loose matches on Subject: below --
2018-05-21 17:51 [RFC v5 0/6] powerpc/hotplug: Fix affinity assoc for LPAR migration Michael Bringmann
2018-05-21 17:52 ` [RFC v5 6/6] migration/memory: Update memory for assoc changes Michael Bringmann
2018-05-22 21:11   ` Thomas Falcon
2018-05-22 23:54     ` Michael Bringmann