From: willy@linux.intel.com (Matthew Wilcox)
Subject: [PATCHv2] NVMe: IO Queue NUMA locality
Date: Tue, 9 Jul 2013 09:41:29 -0400
Message-ID: <20130709134129.GI30142@linux.intel.com>
In-Reply-To: <1373312159-2255-1-git-send-email-keith.busch@intel.com>

On Mon, Jul 08, 2013 at 01:35:59PM -0600, Keith Busch wrote:
> There is a measurable difference when running IO on a CPU in another
> NUMA domain; however, my particular device hits its peak performance on
> either domain at higher queue depths and block sizes, so I'm only able
> to see a difference at lower IO depths. The best gains topped out at a
> 2% improvement with this patch vs. the existing code.

That's not too shabby.  This is only a two-socket system you're testing
on, so I'd expect larger gains on systems with more sockets.

> I understand this method of allocating and mapping memory may not work
> for CPUs without cache-coherency, but I'm not sure if there is another
> way to allocate coherent memory for a specific NUMA node.

I found a way in the networking drivers:

int ixgbe_setup_tx_resources(struct ixgbe_ring *tx_ring)
{
        int orig_node = dev_to_node(dev);
        int numa_node = -1;
...
        if (tx_ring->q_vector)
                numa_node = tx_ring->q_vector->numa_node;
...
        /* Temporarily point the device at the queue's node so that
         * dma_alloc_coherent() allocates the descriptor ring there.
         */
        set_dev_node(dev, numa_node);
        tx_ring->desc = dma_alloc_coherent(dev,
                                           tx_ring->size,
                                           &tx_ring->dma,
                                           GFP_KERNEL);
        set_dev_node(dev, orig_node);
        /* Fall back to the device's default node if that failed */
        if (!tx_ring->desc)
                tx_ring->desc = dma_alloc_coherent(dev, tx_ring->size,
                                                   &tx_ring->dma, GFP_KERNEL);
        if (!tx_ring->desc)
                goto err;
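
Something along those lines ought to translate to nvme_alloc_queue() too.
Here's a rough, untested sketch against your patch (assuming the new
node_id parameter and the existing cqes/cq_dma_addr allocation; names
are illustrative, not final code):

	struct device *dmadev = &dev->pci_dev->dev;
	int orig_node = dev_to_node(dmadev);

	/* Temporarily re-node the device so the CQ memory lands on the
	 * node that will service this queue.
	 */
	if (node_id != -1)
		set_dev_node(dmadev, node_id);
	nvmeq->cqes = dma_alloc_coherent(dmadev, CQ_SIZE(depth),
					 &nvmeq->cq_dma_addr, GFP_KERNEL);
	set_dev_node(dmadev, orig_node);
	/* Fall back to the default node rather than failing outright */
	if (!nvmeq->cqes)
		nvmeq->cqes = dma_alloc_coherent(dmadev, CQ_SIZE(depth),
						 &nvmeq->cq_dma_addr,
						 GFP_KERNEL);
	if (!nvmeq->cqes)
		goto free_nvmeq;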


> diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
> index 711b51c..9cedfa0 100644
> --- a/drivers/block/nvme-core.c
> +++ b/drivers/block/nvme-core.c
> @@ -1200,7 +1206,7 @@ static int nvme_configure_admin_queue(struct nvme_dev *dev)
>  	if (result < 0)
>  		return result;
>  
> -	nvmeq = nvme_alloc_queue(dev, 0, 64, 0);
> +	nvmeq = nvme_alloc_queue(dev, 0, 64, 0, -1);
>  	if (!nvmeq)
>  		return -ENOMEM;
>  

I suppose we should really have the admin queue allocated on the node
closest to the device, so pass in dev_to_node(dev) instead of -1 here?
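
Untested, but something like this (assuming struct nvme_dev still carries
its pci_dev pointer):

	nvmeq = nvme_alloc_queue(dev, 0, 64, 0,
				 dev_to_node(&dev->pci_dev->dev));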
