From: David Howells <dhowells@redhat.com>
To: Tom Talpey <tom@talpey.com>, Stefan Metzmacher <metze@samba.org>
Cc: dhowells@redhat.com, linux-cifs@vger.kernel.org,
Steve French <sfrench@samba.org>
Subject: Re: [PATCH 2/2] smb: client: let smbd_post_send_iter() respect the peers max_send_size and transmit all data
Date: Wed, 25 Jun 2025 08:59:51 +0100 [thread overview]
Message-ID: <1283546.1750838391@warthog.procyon.org.uk> (raw)
In-Reply-To: <962036.1750422586@warthog.procyon.org.uk>
David Howells <dhowells@redhat.com> wrote:
> > > + if (iter && iov_iter_count(iter) > 0) {
> > > + /*
> > > + * There is more data to send
> > > + */
> > > + goto wait_credit;
> >
> > But, shouldn't the caller have done this overflow check, and looped on
> > the fragments and credits? It seems wrong to push the credit check down
> > to this level.
>
> Fair point. There's retry handling in the netfs layer - though that only
> applies to reads and writes that go through that. Can RDMA be used to
> transfer data for other large calls? Dir enumeration or ioctl, for instance.
Actually, I'm wrong. We do need this check because we can come down this path
from non-netfs-generated RPC ops. I stuck a WARN_ON_ONCE() on the path to see
what generated it, and got:
WARNING: CPU: 0 PID: 6773 at fs/smb/client/smbdirect.c:980 smbd_post_send_iter+0x768/0x840
...
RIP: 0010:smbd_post_send_iter+0x768/0x840
...
Call Trace:
<TASK>
smbd_send+0x1bb/0x280
? __smb_send_rqst+0x7c/0x3c0
__smb_send_rqst+0x7c/0x3c0
? rb_erase+0x30/0x280
smb_send_rqst+0x6a/0x150
? remove_hrtimer+0x5e/0x70
compound_send_recv+0x31b/0x650
? __kmalloc_noprof+0x262/0x290
? kmem_cache_debug_flags+0xc/0x20
cifs_send_recv+0x1f/0x30
SMB2_open+0x22d/0x4b0
? smb2_open_file+0xd3/0x310
smb2_open_file+0xd3/0x310
cifs_nt_open+0x182/0x280
cifs_open+0x463/0x650
? __pfx_cifs_open+0x10/0x10
? do_dentry_open+0x218/0x390
do_dentry_open+0x218/0x390
vfs_open+0x28/0x50
do_open+0x216/0x2c0
path_openat+0x140/0x1b0
do_filp_open+0xb8/0x120
? kmem_cache_debug_flags+0xc/0x20
? kmem_cache_alloc_noprof+0x201/0x230
? getname_flags.part.0+0x24/0x180
do_sys_openat2+0x6e/0xc0
do_sys_open+0x37/0x60
__x64_sys_openat+0x1b/0x30
do_syscall_64+0x80/0x170
entry_SYSCALL_64_after_hwframe+0x71/0x79
David
Thread overview: 15+ messages
2025-06-18 16:51 [PATCH 0/2] smb: client: fix problems with smbdirect/rdma mounts Stefan Metzmacher
2025-06-18 16:51 ` [PATCH 1/2] smb: client: fix max_sge overflow in smb_extract_folioq_to_rdma() Stefan Metzmacher
2025-06-19 11:41 ` David Howells
2025-06-19 19:07 ` Tom Talpey
2025-06-18 16:51 ` [PATCH 2/2] smb: client: let smbd_post_send_iter() respect the peers max_send_size and transmit all data Stefan Metzmacher
2025-06-19 11:49 ` David Howells
2025-06-19 19:22 ` Tom Talpey
2025-06-20 12:29 ` David Howells
2025-06-20 13:33 ` Tom Talpey
2025-06-20 14:56 ` David Howells
2025-06-25 7:59 ` David Howells [this message]
2025-06-23 15:46 ` Stefan Metzmacher
2025-06-23 17:28 ` Steve French
2025-06-23 19:48 ` Sasha Levin
2025-06-25 8:00 ` David Howells