From: Christian Schoenebeck <linux_oss@crudebyte.com>
To: Dominique Martinet <asmadeus@codewreck.org>,
Kent Overstreet <kent.overstreet@gmail.com>
Cc: linux-kernel@vger.kernel.org,
v9fs-developer@lists.sourceforge.net,
Eric Van Hensbergen <ericvh@gmail.com>,
Latchesar Ionkov <lucho@ionkov.net>
Subject: Re: [PATCH 3/3] 9p: Add mempools for RPCs
Date: Sat, 09 Jul 2022 16:21:46 +0200
Message-ID: <1690934.P4sCSNuWZQ@silver>
In-Reply-To: <Yskxs4uQ4v8l7Zb9@codewreck.org>
On Saturday, 9 July 2022 09:43:47 CEST Dominique Martinet wrote:
> I've taken the mempool patches to 9p-next
>
> Christian Schoenebeck wrote on Mon, Jul 04, 2022 at 03:56:55PM +0200:
> >> (I appreciate the need for testing, but this feels much less risky than
> >> the iovec series we've had recently... Famous last words?)
> >
> > Got it, consider my famous last words dropped. ;-)
>
> Ok, so I think you won this one...
>
> Well -- when testing normally it obviously works well, and performance-wise
> it's roughly the same (obviously, since it tries to allocate from slab
> first, and in the normal case that will work)
>
> When I tried gaming it with very low memory, though, I thought it worked
> well at first, but then I managed to get a bunch of processes stuck in
> mempool_alloc with no obvious tid waiting for a reply.
> I had the bright idea of using fio with io_uring, and interestingly the
> uring worker doesn't show up in ps or /proc/<pid>, but with qemu's gdb
> and lx-ps I could find a bunch of iou-wrk-<pid> workers that all have
> similar stacks:
> [<0>] mempool_alloc+0x136/0x180
> [<0>] p9_fcall_init+0x63/0x80 [9pnet]
> [<0>] p9_client_prepare_req+0xa9/0x290 [9pnet]
> [<0>] p9_client_rpc+0x64/0x610 [9pnet]
> [<0>] p9_client_write+0xcb/0x210 [9pnet]
> [<0>] v9fs_file_write_iter+0x4d/0xc0 [9p]
> [<0>] io_write+0x129/0x2c0
> [<0>] io_issue_sqe+0xa1/0x25b0
> [<0>] io_wq_submit_work+0x90/0x190
> [<0>] io_worker_handle_work+0x211/0x550
> [<0>] io_wqe_worker+0x2c5/0x340
> [<0>] ret_from_fork+0x1f/0x30
>
> or, and that's the interesting part
> [<0>] mempool_alloc+0x136/0x180
> [<0>] p9_fcall_init+0x63/0x80 [9pnet]
> [<0>] p9_client_prepare_req+0xa9/0x290 [9pnet]
> [<0>] p9_client_rpc+0x64/0x610 [9pnet]
> [<0>] p9_client_flush+0x81/0xc0 [9pnet]
> [<0>] p9_client_rpc+0x591/0x610 [9pnet]
> [<0>] p9_client_write+0xcb/0x210 [9pnet]
> [<0>] v9fs_file_write_iter+0x4d/0xc0 [9p]
> [<0>] io_write+0x129/0x2c0
> [<0>] io_issue_sqe+0xa1/0x25b0
> [<0>] io_wq_submit_work+0x90/0x190
> [<0>] io_worker_handle_work+0x211/0x550
> [<0>] io_wqe_worker+0x2c5/0x340
> [<0>] ret_from_fork+0x1f/0x30
>
> The problem is these flushes: the same task is holding a buffer for the
> original rpc and tries to get a new one, but waits for someone to free
> one, and obviously there isn't anyone (I counted 11 flushes pending, so
> more than the minimum number of buffers we'd expect from the mempool,
> and I don't think we missed any free)
>
> Now I'm not sure what's best here.
> The best thing to do would probably be to just tell the client it can't
> use the mempools for flushes -- the flushes are rare and will use small
> buffers with your smaller allocations patch; I bet I wouldn't be able to
> reproduce this anymore, but we should probably forbid the mempool just
> in case.
So the problem is that one task ends up with more than one request at a time,
and the buffer is allocated and associated per request, not per task. If I am
not missing something, this scenario (more than one request simultaneously per
task) may currently only happen with p9_client_flush() calls, which simplifies
the problem.
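Just so we're talking about the same thing, the pattern would roughly look
like this (a minimal, self-contained sketch of the mempool behaviour, not
the actual 9p code paths):

#include <linux/gfp.h>
#include <linux/mempool.h>

static void double_alloc_sketch(mempool_t *pool)
{
	void *rpc_buf, *flush_buf;

	rpc_buf = mempool_alloc(pool, GFP_NOFS);   /* buffer for the original RPC */

	/* ... the request errors out / is interrupted, so a Tflush is needed ... */

	flush_buf = mempool_alloc(pool, GFP_NOFS); /* may sleep forever once all
						    * reserved elements are held
						    * by tasks stuck right here */
	mempool_free(flush_buf, pool);
	mempool_free(rpc_buf, pool);
}

With enough such tasks in flight, every reserved element is pinned by a task
that is itself sleeping in the second mempool_alloc(), so nothing is ever
freed, which matches the stacks above.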
So probably the best way would be to simply flip the call order such that
p9_tag_remove() is called before p9_client_flush(), similar to how it's
already done with p9_client_clunk() calls?
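Something along these lines, as a rough sketch only, written as if inside
client.c (send_flush_for_tag() is a made-up helper; the real p9_client_flush()
takes the old request itself, which is exactly the part that would have to
change):

#include <net/9p/client.h>

/* hypothetical helper: sends Tflush(oldtag) with its own, freshly
 * allocated buffer */
static int send_flush_for_tag(struct p9_client *clnt, u16 oldtag);

static int flush_after_remove_sketch(struct p9_client *clnt,
				     struct p9_req_t *req)
{
	u16 oldtag = req->tc.tag;	/* remember which tag to flush */

	p9_tag_remove(clnt, req);	/* give the buffers back to the mempool
					 * before ... */
	return send_flush_for_tag(clnt, oldtag);	/* ... allocating a new
							 * one for the Tflush */
}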
> Anyway, I'm not comfortable with this patch right now, a hang is worse
> than an allocation failure warning.
As you already mentioned, with the pending 'net/9p: allocate appropriate
reduced message buffers' patch those hangs should not happen, as Tflush would
then just kmalloc() a small buffer. But I would probably still fix this issue
anyway, as it might hurt in other ways in the future. Shouldn't be too much
noise to swap the call order, right?
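For reference, the idea behind the reduced message buffers is simply to size
the buffer by message type and keep Tflush off the mempool, so a nested flush
allocation can fail cleanly instead of sleeping on the reserve. Roughly
(helper name and parameters are mine, not the actual patch):

#include <linux/slab.h>
#include <linux/mempool.h>
#include <net/9p/9p.h>

static void *alloc_msg_buf_sketch(mempool_t *pool, unsigned int msize,
				  int8_t type, size_t *size)
{
	if (type == P9_TFLUSH) {
		*size = 4 + 1 + 2 + 2;	/* size[4] Tflush[1] tag[2] oldtag[2] */
		return kmalloc(*size, GFP_NOFS);	/* may return NULL; the
							 * caller turns that into
							 * -ENOMEM */
	}
	*size = msize;
	return mempool_alloc(pool, GFP_NOFS);	/* full-size buffers still come
						 * from the mempool reserve */
}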
> > > > How about I address the already discussed issues and post a v5 of
> > > > those
> > > > patches this week and then we can continue from there?
> > >
> > > I would have been happy to rebase your patches 9..12 on top of Kent's
> > > this weekend but if you want to refresh them this week we can continue
> > > from there, sure.
> >
> > I'll rebase them on master and address what we discussed so far. Then
> > we'll see.
>
> FWIW, and regarding the other thread about virtio queue sizes, I was only
> considering the later patches with small RPCs for this merge window.
I would also recommend leaving out the virtio patches, yes.
> Shall we try to focus on that first, and then revisit the virtio and
> mempool patches once that's done?
Your call. I think both ways are viable.
Best regards,
Christian Schoenebeck