From: Chuck Lever <cel@kernel.org>
To: Dai Ngo <dai.ngo@oracle.com>,
	Chuck Lever <chuck.lever@oracle.com>,
	Jeff Layton <jlayton@kernel.org>,
	neilb@ownmail.net, Olga Kornievskaia <okorniev@redhat.com>,
	Tom Talpey <tom@talpey.com>, Christoph Hellwig <hch@lst.de>
Cc: linux-nfs@vger.kernel.org
Subject: Re: [PATCH v3 1/1] NFSD: Track SCSI Persistent Registration Fencing per Client with xarray
Date: Wed, 24 Dec 2025 14:10:06 -0500	[thread overview]
Message-ID: <2b5a99aa-6bd6-478c-b734-82ea9d440ecd@kernel.org> (raw)
In-Reply-To: <0bc92d15-09cd-481f-8874-8dcc5a7b9d39@oracle.com>

On 12/24/25 12:52 PM, Dai Ngo wrote:
> 
> On 12/24/25 6:57 AM, Chuck Lever wrote:
>>
>> On Tue, Dec 23, 2025, at 5:34 PM, Dai Ngo wrote:
>>> On 12/23/25 11:47 AM, Chuck Lever wrote:
>>>> On Tue, Dec 23, 2025, at 1:54 PM, Dai Ngo wrote:

>>>>>> Another question is: Can cl_fenced_devs grow without bounds?
>>>>> I think it has the same limitation as any xarray. The hard limit
>>>>> is the availability of memory in the system.
>>>> My question isn't about how much any xarray can hold; it's how
>>>> much NFSD will ask /cl_fenced_devs/ to hold. IIUC, the upper
>>>> bound for each nfs4_client's cl_fenced_devs will be the number
>>>> of exported block devices, and no more than that.
>>>>
>>>> I want to avoid a potential denial of service vector -- NFSD
>>>> should not be able to create an unlimited number of items
>>>> in cl_fenced_devs... but sounds like there is a natural limit.
>>> Oh, I see. I did not even think about this DoS since this is under
>>> the control of the NFSD administrator, and a sane admin would not
>>> configure a massive number of exported block devices.
>> Ultimately, the upper bound on the number of entries in cl_fenced_devs
>> is indeed under the control of the NFS server administrator,
>> indirectly. But looking only at the code in the patch:
>>
>>   - New entries are created in cl_fenced_devs via GETDEVICEINFO,
>>     a client (remote host) action
>>   - There's nothing that removes these entries over time
> 
> I don't understand why these entries need to be removed over time.

They don't need to be removed over time. I'm saying you should document
the assumptions better -- that NFSD indirectly enforces a limit on the
number of items in this xarray. That isn't clear if a reviewer is
looking only at the code in nfsd4_block_get_device_info_scsi.
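
For example, a short comment like this (just a sketch -- adjust the
wording and placement to whatever fits the patch) above the code that
adds the entry would make the assumption explicit:

/*
 * Entries are added to clp->cl_fenced_devs only here, at most one
 * per exported SCSI block device.  The xarray is therefore bounded,
 * indirectly, by the number of block devices the administrator has
 * exported; no client action can grow it past that limit.
 */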


> These entries are created only when clients access NFS exports that
> use the pNFS SCSI layout. So long as the clients still mount these
> exports, don't we need to keep these entries around?
> 
>>
>> The duplicate checking logic needs to ensure that client actions
>> cannot create more entries than that upper bound.
> 
> How can this happen? Repeated GETDEVICEINFO ops for the same device
> still use the same entry, so how can the clients indirectly create
> more entries than the number of pNFS exports?

It can happen if a race occurs, for example. A broken or malicious
client could send two concurrent GETDEVICEINFO operations for the same
device. If the new code isn't careful, it could unintentionally add two
copies of the device to that array due to that race.

What I'm saying is that this needs to be documented: why does the
addition of the device to cl_fenced_devs need to be atomic? Because
if it isn't, a race could allow an errant client to add extra devices
to cl_fenced_devs over time. That could cause cl_fenced_devs to grow
without bound.
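
One way to get that atomicity is to let xa_insert() do the duplicate
check for you: it takes the xarray lock internally and fails with
-EBUSY if an entry already exists at that index, so two racing
GETDEVICEINFO ops for the same device cannot both add an entry. This
is only a sketch -- the helper name, the devid_index key, and the void
pointer entry are invented for illustration; use whatever keying the
patch actually does.

static int nfsd4_client_track_fenced_dev(struct nfs4_client *clp,
					 unsigned long devid_index,
					 void *entry)
{
	int ret;

	/*
	 * xa_insert() refuses to overwrite an existing entry and does
	 * its lookup and store under the xarray lock, so the duplicate
	 * check and the insertion are one atomic step.
	 */
	ret = xa_insert(&clp->cl_fenced_devs, devid_index, entry, GFP_KERNEL);
	if (ret == -EBUSY)
		return 0;	/* already tracked by a concurrent op */
	return ret;
}

If the caller needs the existing entry back instead of ignoring the
collision, xa_cmpxchg() gives the same atomicity and returns the old
entry.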


-- 
Chuck Lever


Thread overview: 9 messages
2025-12-22 19:07 [PATCH v3 1/1] NFSD: Track SCSI Persistent Registration Fencing per Client with xarray Dai Ngo
2025-12-23 15:32 ` Chuck Lever
2025-12-23 16:31 ` Chuck Lever
2025-12-23 18:54   ` Dai Ngo
2025-12-23 19:47     ` Chuck Lever
2025-12-23 22:34       ` Dai Ngo
2025-12-24 14:57         ` Chuck Lever
2025-12-24 17:52           ` Dai Ngo
2025-12-24 19:10             ` Chuck Lever [this message]
