From: Ka-Cheong Poon <ka-cheong.poon@oracle.com>
To: Jason Gunthorpe <jgg@ziepe.ca>
Cc: linux-rdma@vger.kernel.org
Subject: Re: RDMA subsystem namespace related questions (was Re: Finding the namespace of a struct ib_device)
Date: Wed, 30 Sep 2020 18:32:28 +0800
Message-ID: <2859e4a8-777b-48a5-d3c6-2f2effbebef9@oracle.com>
In-Reply-To: <20200929174037.GW9916@ziepe.ca>
On 9/30/20 1:40 AM, Jason Gunthorpe wrote:
> On Wed, Sep 30, 2020 at 12:57:48AM +0800, Ka-Cheong Poon wrote:
>> On 9/7/20 9:48 PM, Ka-Cheong Poon wrote:
>>
>>> This may require a number of changes to the way a client interacts with
>>> the current RDMA framework. For example, currently a client registers
>>> once using one struct ib_client and gets device notifications for all
>>> namespaces and devices. Suppose there is rdma_[un]register_net_client(),
>>> it may need to require a client to use a different struct ib_client to
>>> register for each net namespace. And struct ib_client probably needs to
>>> have a field to store the net namespace. Probably all those client
>>> interaction functions will need to be modified. Since the clients xarray
>>> is global, more clients may mean performance implication, such as it takes
>>> longer to go through the whole clients xarray.
>>>
>>> There are probably many other subtle changes required. It may turn out to
>>> be not so straightforward. Is the community willing to take such changes?
>>> I can take a stab at it if the community really thinks that this is preferred.
>>
>>
>> Attached is a diff of a prototype for the above. This exercise is
>> to see what needs to be done to have a more network namespace aware
>> interface for RDMA client registration.
>
> An RDMA device is either in all namespaces or in a single
> namespace. If a client has some interest in only some namespaces then
> it should check the namespace during client registration and not
> register if it isn't interested. No need to change anything in the
> core code.
After the aforementioned check on a namespace, what can the client
do? It still needs to use the existing ib_register_client() to
register with the RDMA subsystem. And after registration, it will
get add/remove upcalls for all devices, including those unrelated
to the namespace it is interested in. The client could work around
this if there were a supported way to find out the namespace of a
device, hence the original proposal of adding rdma_dev_to_netns().
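To make the proposal concrete, here is a minimal userspace model of how a client's add() upcall could filter on namespace if such an accessor existed. Everything here (rdma_dev_to_netns(), the struct layouts) is an invented stand-in for illustration, not the real RDMA core API:

```c
/* Userspace sketch only: struct net, struct ib_device, struct ib_client
 * and rdma_dev_to_netns() are all hypothetical stand-ins, not the real
 * kernel definitions. */
#include <assert.h>
#include <stddef.h>

struct net { int ns_id; };                /* stand-in for struct net */
struct ib_device { struct net *net; };    /* real ib_device hides this */

/* The accessor this mail proposes: expose a device's namespace. */
static struct net *rdma_dev_to_netns(struct ib_device *dev)
{
	return dev->net;
}

struct ib_client {
	struct net *interested_net;   /* namespace this client cares about */
	int devices_added;
};

/* add() upcall: today the client would see every device; with the
 * accessor it can skip devices in foreign namespaces. */
static int client_add(struct ib_client *c, struct ib_device *dev)
{
	if (rdma_dev_to_netns(dev) != c->interested_net)
		return 0;             /* not our namespace, ignore */
	c->devices_added++;
	return 1;
}
```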
>> Is the RDMA shared namespace mode the preferred mode to use as it is the
>> default mode?
>
> Shared is the legacy mode, modern systems should switch to namespace
> mode at early boot
Thanks for the clarification. I originally thought that the shared
mode was for supporting a large number of namespaces. In the
exclusive mode, a device needs to be assigned to a namespace for
that namespace to use it. If there are a large number of namespaces,
there won't be enough devices to assign to all of them (e.g. the
hardware I have access to only supports up to 24 VFs). The shared
mode can be used in this case. Could you please explain what needs
to be done to support a large number of namespaces in exclusive
mode?
BTW, if exclusive mode is the future, it may make sense to have
something like rdma_[un]register_net_client().
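A namespace-scoped registration could look roughly like the following userspace model. To be clear, rdma_register_net_client() and every structure here are invented for illustration; nothing like this exists in the core today:

```c
/* Sketch of a hypothetical per-namespace client registry: device-add
 * notifications only reach registrations bound to that namespace. */
#include <assert.h>
#include <stddef.h>

#define MAX_CLIENTS 8

struct net { int ns_id; };

struct net_client {
	struct net *net;              /* namespace of this registration */
	void (*add)(struct net_client *, int dev_id);
	int seen;                     /* devices this client was told about */
};

static struct net_client *clients[MAX_CLIENTS];
static int nclients;

/* Hypothetical API: register a client for exactly one namespace. */
static int rdma_register_net_client(struct net_client *c)
{
	if (nclients >= MAX_CLIENTS)
		return -1;
	clients[nclients++] = c;
	return 0;
}

/* Core side: a device appearing in @net notifies only registrations
 * bound to that namespace, instead of every client. */
static void device_added(struct net *net, int dev_id)
{
	for (int i = 0; i < nclients; i++)
		if (clients[i]->net == net)
			clients[i]->add(clients[i], dev_id);
}

static void count_add(struct net_client *c, int dev_id)
{
	(void)dev_id;
	c->seen++;
}
```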
>> Is it expected that a client knows the running mode before
>> interacting with the RDMA subsystem?
>
> Why would a client care?
Because it may want to behave differently. For example, in shared
mode, it may want to create a shadow device structure to hold
per-namespace info for a device. But in exclusive mode, a device
can only be in one namespace, so there is no need for such a shadow
device structure.
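The shared-mode bookkeeping in question would look something like the model below: one private shadow structure per (device, namespace) pair. The types are illustrative only, not real RDMA core structures:

```c
/* Userspace model of per-(device, namespace) "shadow device" state a
 * client might keep in shared mode. In exclusive mode each device has
 * exactly one namespace, so this degenerates to one entry per device. */
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct shadow_dev {
	int dev_id;
	int ns_id;                    /* which namespace this copy serves */
	struct shadow_dev *next;
};

static struct shadow_dev *shadows;

/* Find or allocate the shadow state for @dev_id as seen from @ns_id. */
static struct shadow_dev *shadow_get(int dev_id, int ns_id)
{
	struct shadow_dev *s;

	for (s = shadows; s; s = s->next)
		if (s->dev_id == dev_id && s->ns_id == ns_id)
			return s;
	s = calloc(1, sizeof(*s));
	s->dev_id = dev_id;
	s->ns_id = ns_id;
	s->next = shadows;
	shadows = s;
	return s;
}

static int shadow_count(void)
{
	int n = 0;

	for (struct shadow_dev *s = shadows; s; s = s->next)
		n++;
	return n;
}
```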
>> Is a client not supposed to differentiate different namespaces?
>
> None do today.
This is probably the case, as calling rdma_create_id() in the kernel
can prevent a namespace from being deleted. There must be no client
doing that right now. My code uses RDMA in a namespace, hence I'd
like to understand more about the RDMA subsystem's namespace support.
For example, why is cma_wq a global queue shared by all namespaces
instead of per-namespace? Is it expected that the workload will be
low enough in all namespaces that they will not interfere with each
other?
>> A new connection comes in and the event handler is called for an
>> RDMA_CM_EVENT_CONNECT_REQUEST event. There is no obvious namespace info regarding
>> the event. It seems that the only way to find out the namespace info is to
>> use the context of struct rdma_cm_id.
>
> The rdma_cm_id has only a single namespace, the ULP knows what it is
> because it created it. A listening ID can't spawn new IDs in different
> namespaces.
The problem is that the handler is not given the listener's
rdma_cm_id when it is called; it is only given the new rdma_cm_id.
Do you mean that there is a way to find the listener's rdma_cm_id
given the new rdma_cm_id? And even if the listener's rdma_cm_id can
be found, what is the mechanism for finding that listener's
namespace in the handler? The client could compare that pointer
with every listener it creates. Is there a better way?
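For what it's worth, one pattern that would sidestep the pointer comparison: since the cma code appears to create the child id with the listener's context, a client can stash its per-namespace state in the listener's context and recover it in the handler. A userspace model of that idea, with all types simplified and hypothetical:

```c
/* Sketch assuming the child id inherits the listener's context before
 * the CONNECT_REQUEST upcall (as __rdma_create_id()-based spawning in
 * cma.c appears to do). struct cm_id here is a drastic simplification. */
#include <assert.h>
#include <stddef.h>

struct net { int ns_id; };

struct per_ns_state { struct net *net; };  /* client's per-namespace data */

struct cm_id { void *context; };

/* Core side: child is created with the listener's context. */
static void spawn_child(const struct cm_id *listener, struct cm_id *child)
{
	child->context = listener->context;
}

/* Handler side: namespace recovered from the new id alone, with no
 * scan over the client's listeners. */
static struct net *handler_netns(const struct cm_id *child)
{
	struct per_ns_state *st = child->context;

	return st ? st->net : NULL;
}
```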
>> (*) Note that in __rdma_create_id(), it does a get_net(net) to put a
>> reference on a namespace. Suppose a kernel module calls rdma_create_id()
>> in its namespace .init function to create an RDMA listener and calls
>> rdma_destroy_id() in its namespace .exit function to destroy it.
>
> Yes, namespaces remain until all objects touching them are deleted.
>
> It seems like a ULP error to drive cm_id lifetime entirely from the
> per-net stuff.
It is not an ULP error. While there are many reasons to delete
a listener, it is not necessary for the listener to die unless the
namespace is going away.
> This would be similar to creating a socket in the kernel.
Right, and a kernel socket does not prevent a namespace from being
deleted.
>> __rdma_create_id() adds a reference to a namespace, when a sys admin
>> deletes a namespace (say `ip netns del ...`), the namespace won't be
>> deleted because of this reference. And the module will not release this
>> reference until its .exit function is called only when the namespace is
>> deleted. To resolve this issue, in the diff (in __rdma_create_id()), I
>> did something similar to the kern check in sk_alloc().
>
> What you are running into is there is no kernel user of net
> namespaces, all current ULPs exclusively use the init_net.
>
> Without an example of what that is supposed to be like it is hard to
> really have a discussion. You should reference other TCP in the kernel
> to see if someone has figured out how to make this work for TCP. It
> should be basically the same.
The kern check in sk_alloc() decides whether to hold a reference on
the namespace. The code in the diff follows the same mechanism.
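The mechanism reduces to this: kernel-internal objects skip get_net() so they do not pin the namespace. A userspace model, with refcounting reduced to a plain counter and all structures simplified for illustration:

```c
/* Model of the sk_alloc()-style "kern" check: only objects created on
 * behalf of userspace take a namespace reference, so a kernel module's
 * listener does not block `ip netns del`. Simplified, not kernel code. */
#include <assert.h>

struct net { int refcnt; };

static void get_net(struct net *n) { n->refcnt++; }
static void put_net(struct net *n) { n->refcnt--; }

struct cm_id {
	struct net *net;
	int kern;                     /* 1 = created from kernel .init */
};

static void cm_id_create(struct cm_id *id, struct net *net, int kern)
{
	id->net = net;
	id->kern = kern;
	if (!kern)                    /* only user objects pin the netns */
		get_net(net);
}

static void cm_id_destroy(struct cm_id *id)
{
	if (!id->kern)
		put_net(id->net);
}
```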
--
K. Poon
ka-cheong.poon@oracle.com