linux-nvme.lists.infradead.org archive mirror
From: sagi@grimberg.me (Sagi Grimberg)
Subject: [PATCH v2 0/6] Automatic affinity settings for nvme over rdma
Date: Thu,  6 Apr 2017 13:36:32 +0300	[thread overview]
Message-ID: <1491474998-16423-1-git-send-email-sagi@grimberg.me> (raw)

This patch set aims to automatically find the optimal
queue <-> irq multi-queue assignments for storage ULPs (demonstrated
on nvme-rdma) based on the underlying rdma device's irq affinity
settings.

Changes from v1:
- Removed mlx5e_get_cpu as Christoph suggested
- Fixed up nvme-rdma queue comp_vector selection to get a better match
- Added a comment on why we limit on @dev->num_comp_vectors
- Rebased onto Jens's for-4.12/block branch
- Collected review tags

Sagi Grimberg (6):
  mlx5: convert to generic pci_alloc_irq_vectors
  mlx5: move affinity hints assignments to generic code
  RDMA/core: expose affinity mappings per completion vector
  mlx5: support ->get_vector_affinity
  block: Add rdma affinity based queue mapping helper
  nvme-rdma: use intelligent affinity based queue mappings

 block/Kconfig                                      |   5 +
 block/Makefile                                     |   1 +
 block/blk-mq-rdma.c                                |  54 +++++++++++
 drivers/infiniband/hw/mlx5/main.c                  |  10 ++
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c  |  14 +--
 drivers/net/ethernet/mellanox/mlx5/core/eq.c       |   9 +-
 drivers/net/ethernet/mellanox/mlx5/core/eswitch.c  |   2 +-
 drivers/net/ethernet/mellanox/mlx5/core/health.c   |   2 +-
 drivers/net/ethernet/mellanox/mlx5/core/main.c     | 108 +++------------------
 .../net/ethernet/mellanox/mlx5/core/mlx5_core.h    |   1 -
 drivers/nvme/host/rdma.c                           |  29 ++++--
 include/linux/blk-mq-rdma.h                        |  10 ++
 include/linux/mlx5/driver.h                        |   2 -
 include/rdma/ib_verbs.h                            |  24 +++++
 14 files changed, 149 insertions(+), 122 deletions(-)
 create mode 100644 block/blk-mq-rdma.c
 create mode 100644 include/linux/blk-mq-rdma.h

-- 
2.7.4


Thread overview (14+ messages):
2017-04-06 10:36 Sagi Grimberg [this message]
2017-04-06 10:36 ` [PATCH v2 1/6] mlx5: convert to generic pci_alloc_irq_vectors Sagi Grimberg
2017-04-06 14:24   ` Leon Romanovsky
2017-04-06 10:36 ` [PATCH v2 2/6] mlx5: move affinity hints assignments to generic code Sagi Grimberg
2017-04-06 14:27   ` Leon Romanovsky
2017-04-06 10:36 ` [PATCH v2 3/6] RDMA/core: expose affinity mappings per completion vector Sagi Grimberg
2017-04-28 22:48   ` Doug Ledford
2017-05-03  8:02     ` Sagi Grimberg
2017-05-03 10:34   ` Håkon Bugge
2017-04-06 10:36 ` [PATCH v2 4/6] mlx5: support ->get_vector_affinity Sagi Grimberg
2017-04-06 14:30   ` Leon Romanovsky
2017-04-06 10:36 ` [PATCH v2 5/6] block: Add rdma affinity based queue mapping helper Sagi Grimberg
2017-04-06 10:36 ` [PATCH v2 6/6] nvme-rdma: use intelligent affinity based queue mappings Sagi Grimberg
2017-04-23 12:01 ` [PATCH v2 0/6] Automatic affinity settings for nvme over rdma Sagi Grimberg
