public inbox for linux-kernel@vger.kernel.org
From: Matthew Wilcox <willy@linux.intel.com>
To: liaohengquan1986 <liaohengquan1986@163.com>
Cc: "Alexander Gordeev" <agordeev@redhat.com>,
	linux-kernel@vger.kernel.org,
	"Keith Busch" <keith.busch@intel.com>,
	linux-nvme@lists.infradead.org
Subject: Re: A question about NVMe's nvme-irq
Date: Fri, 21 Mar 2014 16:11:09 -0400	[thread overview]
Message-ID: <20140321201109.GC5705@linux.intel.com> (raw)
In-Reply-To: <785daaf7.5696.144e2af49b0.Coremail.liaohengquan1986@163.com>

On Fri, Mar 21, 2014 at 11:29:02AM +0800, liaohengquan1986 wrote:
> hello,
>          There is a question that has been confusing me recently, about the function nvme_irq(), shown below:
>           static irqreturn_t nvme_irq(int irq, void *data)
>           {
>                  irqreturn_t result;
>                  struct nvme_queue *nvmeq = data;
>                  spin_lock(&nvmeq->q_lock);
>                  nvme_process_cq(nvmeq);
>                  result = nvmeq->cqe_seen ? IRQ_HANDLED : IRQ_NONE;
>                  nvmeq->cqe_seen = 0;
>                  spin_unlock(&nvmeq->q_lock);
>                  return result;
>           }
>        If two CQEs trigger two interrupts, but they arrive so close together that the first nvme_irq() handles both of them (including the second CQE, which triggered the second interrupt),
>       then the second call to nvme_process_cq() will find no CQEs in the CQ, nvmeq->cqe_seen will be 0, and nvme_irq() will return IRQ_NONE.
>       I think this may be a bug: there actually were two interrupts, so it seems wrong to return IRQ_NONE, isn't it?

        /* If the controller ignores the cq head doorbell and continuously
         * writes to the queue, it is theoretically possible to wrap around
         * the queue twice and mistakenly return IRQ_NONE.  Linux only
         * requires that 0.1% of your interrupts are handled, so this isn't
         * a big problem.
         */

I should probably update & move that comment, but nevertheless, it
applies to your situation too.



Thread overview: 21+ messages
2014-01-28  8:38 [PATCH 00/14] NVMe: Cleanup device initialization Alexander Gordeev
2014-01-28  8:38 ` [PATCH 01/14] NVMe: Fix setup of affinity hint for unallocated queues Alexander Gordeev
2014-01-28  8:38 ` [PATCH 02/14] NVMe: Cleanup nvme_alloc_queue() and nvme_free_queue() Alexander Gordeev
2014-01-28  8:38 ` [PATCH 03/14] NVMe: Cleanup nvme_create_queue() and nvme_disable_queue() Alexander Gordeev
2014-01-28  8:38 ` [PATCH 04/14] NVMe: Cleanup adapter_alloc_cq/sg() and adapter_delete_cq/sg() Alexander Gordeev
2014-01-28  8:38 ` [PATCH 05/14] NVMe: Get rid of superfluous qid parameter to nvme_init_queue() Alexander Gordeev
2014-01-28  8:38 ` [PATCH 06/14] NVMe: Get rid of superfluous dev parameter to queue_request_irq() Alexander Gordeev
2014-01-28  8:38 ` [PATCH 07/14] NVMe: Make returning value consistent across all functions Alexander Gordeev
2014-01-28  8:38 ` [PATCH 08/14] NVMe: nvme_dev_map() is a bad place to set admin queue IRQ number Alexander Gordeev
2014-01-28  8:38 ` [PATCH 09/14] NVMe: Access interrupt vectors using nvme_queue::cq_vector only Alexander Gordeev
2014-01-28  8:38 ` [PATCH 10/14] NVMe: Factor out nvme_set_queue_count() Alexander Gordeev
2014-01-28  8:38 ` [PATCH 11/14] NVMe: Factor out nvme_init_bar() Alexander Gordeev
2014-01-28  8:38 ` [PATCH 12/14] NVMe: Factor out nvme_init_interrupts() Alexander Gordeev
2014-01-28  8:38 ` [PATCH 13/14] NVMe: Factor out nvme_setup_interrupts() Alexander Gordeev
2014-01-28  8:39 ` [PATCH 14/14] NVMe: Rework "NVMe: Disable admin queue on init failure" commit Alexander Gordeev
2014-02-18 16:53 ` [PATCH 00/14] NVMe: Cleanup device initialization Alexander Gordeev
2014-03-11 15:08 ` Alexander Gordeev
     [not found]   ` <196070e8.d690.144bb7e553f.Coremail.liaohengquan1986@163.com>
2014-03-13 19:20     ` problem in the development of nvme disk Matthew Wilcox
     [not found]       ` <785daaf7.5696.144e2af49b0.Coremail.liaohengquan1986@163.com>
2014-03-21 20:11         ` Matthew Wilcox [this message]
     [not found]           ` <45840288.abae.144f2312ef0.Coremail.liaohengquan1986@163.com>
2014-03-24  6:09             ` if the NVMe driver has been validated on dual-cpu platform Matthew Wilcox
     [not found]           ` <42744e02.6f7d.144f1b148b3.Coremail.liaohengquan1986@163.com>
2014-03-24  6:09             ` Re: A question about NVMe's nvme-irq Matthew Wilcox
