From: Klaus Jensen <its@irrelevant.dk>
To: Jinhao Fan <fanjinhao21s@ict.ac.cn>
Cc: qemu-devel@nongnu.org, kbusch@kernel.org, stefanha@gmail.com,
"open list:nvme" <qemu-block@nongnu.org>
Subject: Re: [PATCH v2 1/3] hw/nvme: support irq(de)assertion with eventfd
Date: Thu, 25 Aug 2022 15:59:02 +0200 [thread overview]
Message-ID: <YweAJsEfLPBomz2W@apples> (raw)
In-Reply-To: <29A5902D-D6FD-413A-B540-9C0E18B6329A@ict.ac.cn>
On Aug 25 21:09, Jinhao Fan wrote:
> > On Aug 25, 2022, at 20:39, Klaus Jensen <its@irrelevant.dk> wrote:
> >
> > On Aug 25 13:56, Klaus Jensen wrote:
> >>> On Aug 25 19:16, Jinhao Fan wrote:
> >>> On 8/25/2022 5:33 PM, Klaus Jensen wrote:
> >>>> I'm still a bit perplexed by this issue, so I just tried moving
> >>>> nvme_init_irq_notifier() to the end of nvme_init_cq() and removing this
> >>>> first_io_cqe thing. I did not observe any particular issues?
> >>>>
> >>>> What bad behavior did you encounter? It seems to work fine for me.
> >>>
> >>> The kernel boots up and gets stuck, waiting for interrupts. The
> >>> request then times out and is retried three times. Finally, the
> >>> driver decides that the drive is down and continues to boot.
> >>>
> >>> I added some prints during debugging and found that the MSI-X message which
> >>> got registered in KVM via kvm_irqchip_add_msi_route() is not the same as the
> >>> one actually used in msix_notify().
> >>>
> >>> Are you sure you are using KVM's irqfd?
> >>>
> >>
> >> Pretty sure? Using "ioeventfd=on,irq-eventfd=on" on the controller.
> >>
> >> And the following patch.
> >>
> >>
> >> diff --git i/hw/nvme/ctrl.c w/hw/nvme/ctrl.c
> >> index 30bbda7bb5ae..b2e41d3bd745 100644
> >> --- i/hw/nvme/ctrl.c
> >> +++ w/hw/nvme/ctrl.c
> >> @@ -1490,21 +1490,6 @@ static void nvme_post_cqes(void *opaque)
> >>              if (!pending) {
> >>                  n->cq_pending++;
> >>              }
> >> -
> >> -            if (unlikely(cq->first_io_cqe)) {
> >> -                /*
> >> -                 * Initilize event notifier when first cqe is posted. For irqfd
> >> -                 * support we need to register the MSI message in KVM. We
> >> -                 * can not do this registration at CQ creation time because
> >> -                 * Linux's NVMe driver changes the MSI message after CQ creation.
> >> -                 */
> >> -                cq->first_io_cqe = false;
> >> -
> >> -                if (n->params.irq_eventfd) {
> >> -                    nvme_init_irq_notifier(n, cq);
> >> -                }
> >> -            }
> >> -
> >>          }
> >>
> >>          nvme_irq_assert(n, cq);
> >> @@ -4914,11 +4899,14 @@ static void nvme_init_cq(NvmeCQueue *cq, NvmeCtrl *n, uint64_t dma_addr,
> >>      }
> >>      n->cq[cqid] = cq;
> >>      cq->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, nvme_post_cqes, cq);
> >> +
> >>      /*
> >>       * Only enable irqfd for IO queues since we always emulate admin queue
> >>       * in main loop thread
> >>       */
> >> -    cq->first_io_cqe = cqid != 0;
> >> +    if (cqid && n->params.irq_eventfd) {
> >> +        nvme_init_irq_notifier(n, cq);
> >> +    }
> >>  }
> >>
> >>
> >
> > From a trace, this is what I observe:
> >
> > First, the queue is created and a virq (0) is assigned.
> >
> > msix_table_mmio_write dev nvme hwaddr 0xc val 0x0 size 4
> > pci_nvme_mmio_write addr 0x1000 data 0x7 size 4
> > pci_nvme_mmio_doorbell_sq sqid 0 new_tail 7
> > pci_nvme_admin_cmd cid 4117 sqid 0 opc 0x5 opname 'NVME_ADM_CMD_CREATE_CQ'
> > pci_nvme_create_cq create completion queue, addr=0x104318000, cqid=1, vector=1, qsize=1023, qflags=3, ien=1
> > kvm_irqchip_add_msi_route dev nvme vector 1 virq 0
> > kvm_irqchip_commit_routes
> > pci_nvme_enqueue_req_completion cid 4117 cqid 0 dw0 0x0 dw1 0x0 status 0x0
> > pci_nvme_irq_msix raising MSI-X IRQ vector 0
> > pci_nvme_mmio_write addr 0x1004 data 0x7 size 4
> > pci_nvme_mmio_doorbell_cq cqid 0 new_head 7
> >
> > We go on and the SQ is created as well.
> >
> > pci_nvme_mmio_write addr 0x1000 data 0x8 size 4
> > pci_nvme_mmio_doorbell_sq sqid 0 new_tail 8
> > pci_nvme_admin_cmd cid 4118 sqid 0 opc 0x1 opname 'NVME_ADM_CMD_CREATE_SQ'
> > pci_nvme_create_sq create submission queue, addr=0x1049a0000, sqid=1, cqid=1, qsize=1023, qflags=1
> > pci_nvme_enqueue_req_completion cid 4118 cqid 0 dw0 0x0 dw1 0x0 status 0x0
> > pci_nvme_irq_msix raising MSI-X IRQ vector 0
> > pci_nvme_mmio_write addr 0x1004 data 0x8 size 4
> > pci_nvme_mmio_doorbell_cq cqid 0 new_head 8
> >
> >
> > Then I get a bunch of update_msi_route events, but the virqs are not
> > related to the nvme device.
> >
> > However, I assume we then hit queue_request_irq() in the kernel, and
> > we see the MSI-X table updated:
> >
> > msix_table_mmio_write dev nvme hwaddr 0x1c val 0x1 size 4
> > msix_table_mmio_write dev nvme hwaddr 0x10 val 0xfee003f8 size 4
> > msix_table_mmio_write dev nvme hwaddr 0x14 val 0x0 size 4
> > msix_table_mmio_write dev nvme hwaddr 0x18 val 0x0 size 4
> > msix_table_mmio_write dev nvme hwaddr 0x1c val 0x0 size 4
> > kvm_irqchip_update_msi_route Updating MSI route virq=0
> > ... other virq updates
> > kvm_irqchip_commit_routes
> >
> > Notice the last trace line. The route for virq 0 is updated.
> >
> > It looks to me like the virq route is implicitly updated with the new
> > message, no?
>
> Could you try without the msix masking patch? I suspect our unmask function actually did the “implicit” update here.
>
> >
RIGHT.

target/i386/kvm/kvm.c:

    if (!notify_list_inited) {
        /* For the first time we do add route, add ourselves into
         * IOMMU's IEC notify list if needed. */
        X86IOMMUState *iommu = x86_iommu_get_default();
        if (iommu) {
            x86_iommu_iec_register_notifier(iommu,
                                            kvm_update_msi_routes_all,
                                            NULL);
        }
        notify_list_inited = true;
    }
If we have an IOMMU, then it all just works. I always run with a viommu
configured, so that is why I was not seeing the issue. The masking has
nothing to do with it.
I wonder if this can be made to work without the iommu as well...