From: Christoph Hellwig <hch@lst.de>
To: John Meneghini <jmeneghi@redhat.com>
Cc: kbusch@kernel.org, hch@lst.de, sagi@grimberg.me,
linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org,
emilne@redhat.com, jrani@purestorage.com, randyj@purestorage.com,
chaitanyak@nvidia.com, hare@kernel.org
Subject: Re: [PATCH v7 1/1] nvme-multipath: implement "queue-depth" iopolicy
Date: Thu, 20 Jun 2024 08:56:41 +0200 [thread overview]
Message-ID: <20240620065641.GA22113@lst.de> (raw)
In-Reply-To: <20240619163503.500844-2-jmeneghi@redhat.com>
> [jmeneghi: vairious changes and improvements, addressed review comments]
s/vairious/various/
> + if ((nvme_req(rq)->flags & NVME_MPATH_CNT_ACTIVE))
No need for the double parentheses here.
> + WARN_ON_ONCE((atomic_dec_if_positive(&ns->ctrl->nr_active)) < 0);
Overly long line.
But I don't understand why you need the WARN_ON anyway. If the value
must always be positive there is no point in atomic_dec_if_positive.
If misaccounting is fine, the WARN_ON is counterproductive.
> -static struct nvme_ns *nvme_round_robin_path(struct nvme_ns_head *head,
> - int node, struct nvme_ns *old)
> +static struct nvme_ns *nvme_round_robin_path(struct nvme_ns_head *head)
> {
> - struct nvme_ns *ns, *found = NULL;
> + struct nvme_ns *ns, *old, *found = NULL;
> + int node = numa_node_id();
> +
> + old = srcu_dereference(head->current_path[node], &head->srcu);
> + if (unlikely(!old))
> + return __nvme_find_path(head, node);
Can you split the refactoring of the existing path selectors into a
prep patch, please?
> +static void nvme_subsys_iopolicy_update(struct nvme_subsystem *subsys,
> + int iopolicy)
> +{
> + struct nvme_ctrl *ctrl;
> + int old_iopolicy = READ_ONCE(subsys->iopolicy);
> +
> + if (old_iopolicy == iopolicy)
> + return;
> +
> + WRITE_ONCE(subsys->iopolicy, iopolicy);
What is the atomicity model here? There doesn't seem to be any
global lock protecting it. Maybe move it into the
nvme_subsystems_lock critical section?
> + pr_notice("%s: changed from %s to %s for subsysnqn %s\n", __func__,
> + nvme_iopolicy_names[old_iopolicy], nvme_iopolicy_names[iopolicy],
> + subsys->subnqn);
The function name is not really relevant here, this should become something
like:

	pr_notice("%s: changing iopolicy from %s to %s\n",
		subsys->subnqn,
		nvme_iopolicy_names[old_iopolicy],
		nvme_iopolicy_names[iopolicy]);

or maybe:

	dev_notice(&subsys->dev, "changing iopolicy from %s to %s\n",
		nvme_iopolicy_names[old_iopolicy],
		nvme_iopolicy_names[iopolicy]);
Thread overview: 8+ messages
2024-06-19 16:35 [PATCH v7 0/1] nvme: queue-depth multipath iopolicy John Meneghini
2024-06-19 16:35 ` [PATCH v7 1/1] nvme-multipath: implement "queue-depth" iopolicy John Meneghini
2024-06-20 6:56 ` Christoph Hellwig [this message]
2024-06-20 14:41 ` John Meneghini
2024-06-20 17:54 ` John Meneghini
2024-06-24 8:46 ` Christoph Hellwig
2024-06-24 17:50 ` John Meneghini
2024-06-24 17:54 ` Christoph Hellwig