From: Jason Gunthorpe <jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
To: Chuck Lever <chuck.lever-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>
Cc: Linux RDMA Mailing List
<linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>
Subject: Re: RFC: RPC/RDMA memory invalidation
Date: Wed, 28 Oct 2015 15:51:19 -0600 [thread overview]
Message-ID: <20151028215119.GA30564@obsidianresearch.com> (raw)
In-Reply-To: <59849A38-0C8F-46AB-BB76-71216C6C0631-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>
On Wed, Oct 28, 2015 at 05:30:17PM -0400, Chuck Lever wrote:
> IBTA spec states:
>
> > MW access operations (i.e. RDMA Write, RDMA Reads, and Atomics)
> > are only allowed if the Type 2B MW is in the Valid state and the
> > QP Number (QPN) and PD of the QP performing the MW access
> > operation matches the QPN and PD associated with the Bound Type
> > 2B MW.
>
> Once the QP is out of RTS, there can be no incoming RDMA
> requests that match the R_key, QPN, PD tuple. I think you
> are saying that the QP state change has the same problem
> as not waiting for an invalidation to complete.
A MW (Memory Window) is something different from an MR.
MRs do not match on the QPN.
> > If there was one PD per QP then the above would be true, since the MR
> > is linked to the PD.
>
> There is a per-connection struct rpcrdma_ia that contains
> both a PD and a QP. Therefore there is one PD and only one
> QP (on the client) per connection.
Oh, that is great then
> > FWIW, the same is true on the send side too, if the RPC had send
> > buffers and gets canceled, you have to block until a CQ linked to that
> > send is seen.
>
> By “you have to block” you mean the send buffer cannot be reused
> until the Send WR is known to have completed, and new Send WRs
> cannot be posted until it is known that enough send queue resources
> are available.
Yes
> I’m not certain we are careful to ensure
> the hardware has truly relinquished the send buffer before it is
> made available for re-use. A known issue.
This is the issue I was thinking of, yes. Ideally the CPU would not
touch the send buffer until the HW is done with it, under any
circumstances. This is less serious than having a rogue writable R_Key,
however.
Jason
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Thread overview: 4+ messages
2015-10-28 19:56 RFC: RPC/RDMA memory invalidation  Chuck Lever
2015-10-28 20:10 ` Jason Gunthorpe
2015-10-28 21:30   ` Chuck Lever
2015-10-28 21:51     ` Jason Gunthorpe [this message]