Linux RDMA and InfiniBand development
From: Chuck Lever III <chuck.lever@oracle.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: Leon Romanovsky <leon@kernel.org>,
	Linux NFS Mailing List <linux-nfs@vger.kernel.org>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>
Subject: Re: [PATCH v1] RDMA/core: Fix check_flush_dependency splat on addr_wq
Date: Mon, 29 Aug 2022 17:14:56 +0000
Message-ID: <90CD6895-348C-4B75-BAC5-2F7A0414494B@oracle.com>
In-Reply-To: <YwztJVdYq6f5M9yZ@nvidia.com>



> On Aug 29, 2022, at 12:45 PM, Jason Gunthorpe <jgg@nvidia.com> wrote:
> 
> On Fri, Aug 26, 2022 at 07:57:04PM +0000, Chuck Lever III wrote:
>> The connect APIs would be a place to start. In the meantime, though...
>> 
>> Two or three years ago I spent some effort to ensure that closing
>> an RDMA connection leaves a client-side RPC/RDMA transport with no
>> RDMA resources associated with it. It releases the CQs, QP, and all
>> the MRs. That makes initial connect and reconnect both behave exactly
>> the same, and guarantees that a reconnect does not get stuck with
>> an old CQ that is no longer working or a QP that is in TIMEWAIT.
>> 
>> However that does mean that substantial resource allocation is
>> done on every reconnect.
> 
> And if the resource allocations fail then what happens? The storage
> ULP retries forever and is effectively deadlocked?

The reconnection attempt fails, and any resources allocated during
that attempt are released. The ULP waits a bit and then retries
until the connect succeeds or the ULP is interrupted.

A deadlock might occur if one of those allocations triggers
additional reclaim activity.


> How much allocation can you safely do under GFP_NOIO?

My naive take is that performing those allocations under GFP_NOIO
would help avoid recursing into I/O-driven reclaim during memory
exhaustion.
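For what it's worth, a minimal sketch of that idea using the kernel's memalloc scope API (memalloc_noio_save()/memalloc_noio_restore() from <linux/sched/mm.h>), which makes every allocation in the scope implicitly GFP_NOIO, including allocations made by the RDMA core on the ULP's behalf. The function and helper names here are hypothetical, not real xprtrdma code:

```c
#include <linux/sched/mm.h>

/* Hypothetical connect-path helper: everything allocated inside the
 * noio scope (CQs, QP, MRs, and core-internal allocations) behaves
 * as if it were requested with GFP_NOIO, so reclaim cannot recurse
 * back into the I/O path we are trying to reconnect. */
static int xprt_setup_rdma_resources(struct rpcrdma_xprt *r_xprt)
{
	unsigned int noio_flags;
	int rc;

	noio_flags = memalloc_noio_save();
	rc = rpcrdma_alloc_cqs_qp_mrs(r_xprt);	/* hypothetical helper */
	memalloc_noio_restore(noio_flags);
	return rc;
}
```

The scope API has the advantage over passing gfp_t arguments explicitly that it also covers allocations buried in the core and driver layers that the ULP cannot reach.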


--
Chuck Lever




Thread overview: 15+ messages
2022-08-22 15:30 [PATCH v1] RDMA/core: Fix check_flush_dependency splat on addr_wq Chuck Lever
2022-08-23  8:09 ` Leon Romanovsky
2022-08-23 13:58   ` Chuck Lever III
2022-08-24  9:20     ` Leon Romanovsky
2022-08-24 14:09       ` Chuck Lever III
2022-08-26 13:29         ` Jason Gunthorpe
2022-08-26 14:02           ` Chuck Lever III
2022-08-26 14:08             ` Jason Gunthorpe
2022-08-26 19:57               ` Chuck Lever III
2022-08-29 16:45                 ` Jason Gunthorpe
2022-08-29 17:14                   ` Chuck Lever III [this message]
2022-08-29 17:22                     ` Jason Gunthorpe
2022-08-29 18:15                       ` Chuck Lever III
2022-08-29 18:26                         ` Jason Gunthorpe
2022-08-29 19:31                           ` Chuck Lever III
