From: Doug Ledford <dledford@redhat.com>
Subject: [PATCH v3 0/9] Introduce per-device completion queue pools
Date: Mon, 13 Nov 2017 17:11:50 -0500 [thread overview]
Message-ID: <1510611110.3735.50.camel@redhat.com> (raw)
In-Reply-To: <20171108095742.25365-1-sagi@grimberg.me>
On Wed, 2017-11-08 at 11:57 +0200, Sagi Grimberg wrote:
> Comments and feedback are welcome.
From what I gathered reading the feedback, there is still some concern
about whether the design here is ready to be set in stone, so I'm going
to skip this series for this merge window.
--
Doug Ledford <dledford@redhat.com>
GPG KeyID: B826A3330E572FDD
Key fingerprint = AE6B 1BDA 122B 23B4 265B 1274 B826 A333 0E57 2FDD
Thread overview: 45+ messages
2017-11-08 9:57 [PATCH v3 0/9] Introduce per-device completion queue pools Sagi Grimberg
2017-11-08 9:57 ` [PATCH v3 1/9] RDMA/core: Add implicit " Sagi Grimberg
2017-11-09 10:45 ` Max Gurtovoy
2017-11-09 17:31 ` Sagi Grimberg
2017-11-09 17:33 ` Bart Van Assche
2017-11-13 20:28 ` Sagi Grimberg
2017-11-14 16:28 ` Bart Van Assche
2017-11-20 12:31 ` Sagi Grimberg
2017-12-11 23:50 ` [v3,1/9] " Jason Gunthorpe
2018-01-03 17:47 ` Jason Gunthorpe
2017-11-08 9:57 ` [PATCH v3 2/9] IB/isert: use implicit CQ allocation Sagi Grimberg
2017-11-08 10:27 ` Nicholas A. Bellinger
2017-11-14 9:14 ` Max Gurtovoy
2017-11-08 9:57 ` [PATCH v3 3/9] IB/iser: " Sagi Grimberg
2017-11-08 10:25 ` Nicholas A. Bellinger
2017-11-08 9:57 ` [PATCH v3 4/9] IB/srpt: " Sagi Grimberg
2017-11-08 9:57 ` [PATCH v3 5/9] svcrdma: Use RDMA core " Sagi Grimberg
2017-11-08 9:57 ` [PATCH v3 6/9] nvme-rdma: use " Sagi Grimberg
2017-11-08 9:57 ` [PATCH v3 7/9] nvmet-rdma: " Sagi Grimberg
2017-11-08 9:57 ` [PATCH v3 8/9] nvmet: allow assignment of a cpulist for each nvmet port Sagi Grimberg
2017-11-08 9:57 ` [PATCH v3 9/9] nvmet-rdma: assign cq completion vector based on the port allowed cpus Sagi Grimberg
2017-11-08 16:42 ` [PATCH v3 0/9] Introduce per-device completion queue pools Chuck Lever
2017-11-09 17:06 ` Sagi Grimberg
2017-11-10 19:27 ` Chuck Lever
2017-11-13 20:47 ` Sagi Grimberg
2017-11-13 22:15 ` Chuck Lever
2017-11-20 12:08 ` Sagi Grimberg
2017-11-20 15:54 ` Chuck Lever
2017-11-09 16:42 ` Bart Van Assche
2017-11-09 17:22 ` Sagi Grimberg
2017-11-09 17:31 ` Bart Van Assche
2017-11-13 20:31 ` Sagi Grimberg
2017-11-13 20:34 ` Jason Gunthorpe
2017-11-13 20:48 ` Sagi Grimberg
2017-11-14 2:48 ` Jason Gunthorpe
2017-11-20 12:10 ` Sagi Grimberg
2017-11-20 19:24 ` Jason Gunthorpe
2017-11-20 21:29 ` Bart Van Assche
2017-11-14 16:21 ` Bart Van Assche
2017-11-20 12:26 ` Sagi Grimberg
2017-11-14 10:06 ` Max Gurtovoy
2017-11-20 12:20 ` Sagi Grimberg
2017-11-09 18:52 ` Leon Romanovsky
2017-11-13 20:32 ` Sagi Grimberg
2017-11-13 22:11 ` Doug Ledford [this message]
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the message as an mbox file, import it into your mail client,
and reply-to-all from there.
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
  git send-email \
    --in-reply-to=1510611110.3735.50.camel@redhat.com \
    --to=dledford@redhat.com \
    /path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try a mailto: link such as the sketch below.
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
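As a sketch only (the exact mailto: form your archive and mail client
accept may differ), such a link can carry the To address and the
Message-ID of this message, with the Message-ID percent-encoded in the
query string; you would still fill in the Subject: header yourself:

  mailto:dledford@redhat.com?In-Reply-To=%3C1510611110.3735.50.camel%40redhat.com%3E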