Linux-NVME Archive on lore.kernel.org
From: Sagi Grimberg <sagi@grimberg.me>
To: Hannes Reinecke <hare@suse.de>, Christoph Hellwig <hch@lst.de>
Cc: Keith Busch <keith.busch@wdc.com>, linux-nvme@lists.infradead.org
Subject: Re: [PATCHv3] nvme: generate uevent once a multipath namespace is operational again
Date: Tue, 18 May 2021 12:04:25 -0700	[thread overview]
Message-ID: <79b986da-249d-8bdf-501e-b73ea38acbcb@grimberg.me> (raw)
In-Reply-To: <799078b2-0409-53ea-c462-b074b69d8a57@suse.de>


>>>>>>> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
>>>>>>> index 0551796517e6..ecc99bd5f8ad 100644
>>>>>>> --- a/drivers/nvme/host/multipath.c
>>>>>>> +++ b/drivers/nvme/host/multipath.c
>>>>>>> @@ -100,8 +100,11 @@ void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl)
>>>>>>>         down_read(&ctrl->namespaces_rwsem);
>>>>>>>         list_for_each_entry(ns, &ctrl->namespaces, list) {
>>>>>>> -        if (ns->head->disk)
>>>>>>> -            kblockd_schedule_work(&ns->head->requeue_work);
>>>>>>> +        if (!ns->head->disk)
>>>>>>> +            continue;
>>>>>>> +        kblockd_schedule_work(&ns->head->requeue_work);
>>>>>>> +        if (ctrl->state == NVME_CTRL_LIVE)
>>>>>>> +            disk_uevent(ns->head->disk, KOBJ_CHANGE);
>>>>>>>         }
>>>>>>
>>>>>> I asked this on v1: is this only needed for mpath devices?
>>>>>
>>>>> Yes; we need to send the KOBJ_CHANGE event on the mpath device as it's
>>>>> not backed by hardware. The only non-multipathed devices I've seen so
>>>>> far are PCI devices where events are generated by the PCI device
>>>>> itself.
>>>>
>>>> And for fabrics?
>>>
>>> No events whatsoever.
>>> Hence this patch.
>>
>> I meant non-multipath fabrics.
> 
> I know. As said, I've never seen them. Did you?
> 
> In fact, I wouldn't be surprised if that opened a completely
> different can of worms.

I've seen such, but I'm fine with ignoring them...
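
For context, the KOBJ_CHANGE event the patch emits on the multipath node
is an ordinary block-device change uevent, so userspace can react to it
with a plain udev rule. A minimal sketch (the rule file name and helper
script are hypothetical, not part of the patch):

```
# /etc/udev/rules.d/99-nvme-mpath-live.rules -- hypothetical example
# The patch sends KOBJ_CHANGE on the nvmeXnY multipath node once the
# controller is LIVE again; udev sees it as ACTION=="change".
ACTION=="change", SUBSYSTEM=="block", KERNEL=="nvme[0-9]*n[0-9]*", \
    RUN+="/usr/local/sbin/nvme-path-restored.sh %k"
```

The same event can be observed interactively with
`udevadm monitor --kernel --subsystem-match=block` while a path recovers.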

_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

Thread overview: 10+ messages
2021-05-17  8:32 [PATCHv3] nvme: generate uevent once a multipath namespace is operational again Hannes Reinecke
2021-05-17 17:49 ` Sagi Grimberg
2021-05-18  6:59   ` Hannes Reinecke
2021-05-18  7:05     ` Christoph Hellwig
2021-05-18  7:49       ` Hannes Reinecke
2021-05-18 18:00     ` Sagi Grimberg
2021-05-18 18:09       ` Hannes Reinecke
2021-05-18 18:39         ` Sagi Grimberg
2021-05-18 18:49           ` Hannes Reinecke
2021-05-18 19:04             ` Sagi Grimberg [this message]
