From: hch@lst.de (Christoph Hellwig)
Subject: [PATCH 3/3] nvme-multipath: automatic NUMA path balancing
Date: Thu, 8 Nov 2018 10:36:09 +0100 [thread overview]
Message-ID: <20181108093609.GA4790@lst.de> (raw)
In-Reply-To: <20181102095641.28504-4-hare@suse.de>
> +void nvme_mpath_distribute_paths(struct nvme_subsystem *subsys, int num_ctrls,
> +		struct nvme_ctrl *ctrl, int numa_node)
> +{
This function needs a comment describing it, as it isn't exactly
obvious.
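Something like this (just my guess at the semantics from reading the
code below, so please fix up the wording) would already help a lot:

	/*
	 * nvme_mpath_distribute_paths - pick a home node for @ctrl
	 *
	 * Sum up the NUMA distances from every controller in @subsys
	 * to each node and pick the node with the largest total, i.e.
	 * the node with the worst locality so far, as the node that
	 * @ctrl should serve.
	 */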
> +	int node;
> +	int found_node = NUMA_NO_NODE;
> +	int max = LOCAL_DISTANCE * num_ctrls;
> +
> +	for_each_node(node) {
> +		struct nvme_ctrl *c;
> +		int sum = 0;
> +
> +		list_for_each_entry(c, &subsys->ctrls, subsys_entry)
> +			sum += c->node_map[node];
> +		if (sum > max) {
> +			max = sum;
> +			found_node = node;
> +		}
> +	}
E.g. I really don't get the LOCAL_DISTANCE magic at all..
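If the idea is that ->node_map[] holds node_distance() values, so that
LOCAL_DISTANCE * num_ctrls is the total you'd see if every controller
were local to a node, please say so explicitly, e.g. (again just my
guess at the intent):

	/*
	 * Baseline: the distance sum for a node to which every
	 * controller in the subsystem is local.  Only nodes worse
	 * than that are considered for redistribution.
	 */
	int max = LOCAL_DISTANCE * num_ctrls;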
> +void nvme_mpath_balance_node(struct nvme_subsystem *subsys,
> +		int num_ctrls, int numa_node)
> +{
> +	struct nvme_ctrl *found = NULL, *ctrl;
> +	int max = LOCAL_DISTANCE * num_ctrls, node;
Same here.
>  void nvme_mpath_balance_subsys(struct nvme_subsystem *subsys)
>  {
>  	struct nvme_ctrl *ctrl;
> +	int num_ctrls = 0;
>  	int node;
>
>  	mutex_lock(&subsys->lock);
>
>  	/*
> -	 * Reset set NUMA distance
> +	 * 1. Reset set NUMA distance
>  	 * During creation the NUMA distance is only set
>  	 * per controller, so after connecting the other
>  	 * controllers the NUMA information on the existing
> @@ -280,7 +325,49 @@ void nvme_mpath_balance_subsys(struct nvme_subsystem *subsys)
Ok, this function and the comments make a whole lot more sense
with the new patch.  I think your intention would be much clearer
if you merged the two patches.
> +	/*
> +	 * 2. Distribute optimal paths:
> +	 * Only one primary paths per node.
> +	 * Additional primary paths are moved to unassigned nodes.
> +	 */
Btw, what do you mean by 'primary' path?  We don't really use that
terminology anywhere else.
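If you mean the ANA optimized path, wording the comment in those
terms would match what we use elsewhere, e.g. (assuming that is
what 'primary' means here):

	/*
	 * 2. Distribute optimized paths:
	 * Assign at most one optimized path per node; any further
	 * ones are moved to nodes that don't have one yet.
	 */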