* [PATCH] ceph: make the ceph-cap workqueue UNBOUND
@ 2024-03-21 2:15 xiubli
2024-05-20 19:37 ` Ilya Dryomov
0 siblings, 1 reply; 5+ messages in thread
From: xiubli @ 2024-03-21 2:15 UTC (permalink / raw)
To: ceph-devel; +Cc: idryomov, jlayton, vshankar, mchangir, Xiubo Li, Stefan Kooman
From: Xiubo Li <xiubli@redhat.com>
There is no harm in marking the ceph-cap workqueue unbound, just
like we do for the ceph-inode workqueue.
URL: https://www.spinics.net/lists/ceph-users/msg78775.html
URL: https://tracker.ceph.com/issues/64977
Reported-by: Stefan Kooman <stefan@bit.nl>
Signed-off-by: Xiubo Li <xiubli@redhat.com>
---
fs/ceph/super.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/ceph/super.c b/fs/ceph/super.c
index 4dcbbaa297f6..0bfe4f8418fd 100644
--- a/fs/ceph/super.c
+++ b/fs/ceph/super.c
@@ -851,7 +851,7 @@ static struct ceph_fs_client *create_fs_client(struct ceph_mount_options *fsopt,
fsc->inode_wq = alloc_workqueue("ceph-inode", WQ_UNBOUND, 0);
if (!fsc->inode_wq)
goto fail_client;
- fsc->cap_wq = alloc_workqueue("ceph-cap", 0, 1);
+ fsc->cap_wq = alloc_workqueue("ceph-cap", WQ_UNBOUND, 1);
if (!fsc->cap_wq)
goto fail_inode_wq;
--
2.43.0
* Re: [PATCH] ceph: make the ceph-cap workqueue UNBOUND
2024-03-21 2:15 [PATCH] ceph: make the ceph-cap workqueue UNBOUND xiubli
@ 2024-05-20 19:37 ` Ilya Dryomov
2024-05-20 19:48 ` Ilya Dryomov
[not found] ` <56c20af2-bf22-4cc2-b0db-637a51511c12@redhat.com>
0 siblings, 2 replies; 5+ messages in thread
From: Ilya Dryomov @ 2024-05-20 19:37 UTC (permalink / raw)
To: xiubli; +Cc: ceph-devel, jlayton, vshankar, mchangir, Stefan Kooman
On Thu, Mar 21, 2024 at 3:18 AM <xiubli@redhat.com> wrote:
>
> From: Xiubo Li <xiubli@redhat.com>
>
> There is no harm in marking the ceph-cap workqueue unbound, just
> like we do for the ceph-inode workqueue.
>
> URL: https://www.spinics.net/lists/ceph-users/msg78775.html
> URL: https://tracker.ceph.com/issues/64977
> Reported-by: Stefan Kooman <stefan@bit.nl>
> Signed-off-by: Xiubo Li <xiubli@redhat.com>
> ---
> fs/ceph/super.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/fs/ceph/super.c b/fs/ceph/super.c
> index 4dcbbaa297f6..0bfe4f8418fd 100644
> --- a/fs/ceph/super.c
> +++ b/fs/ceph/super.c
> @@ -851,7 +851,7 @@ static struct ceph_fs_client *create_fs_client(struct ceph_mount_options *fsopt,
> fsc->inode_wq = alloc_workqueue("ceph-inode", WQ_UNBOUND, 0);
> if (!fsc->inode_wq)
> goto fail_client;
> - fsc->cap_wq = alloc_workqueue("ceph-cap", 0, 1);
> + fsc->cap_wq = alloc_workqueue("ceph-cap", WQ_UNBOUND, 1);
Hi Xiubo,
You wrote that there is no harm in making the ceph-cap workqueue
unbound but, if it's made unbound, it would be almost the same as the
ceph-inode workqueue. The only difference would be that the max_active
parameter for the ceph-cap workqueue is 1 instead of 0 (i.e. some
default which is pretty high). Given that max_active is interpreted
as a per-CPU number even for unbound workqueues, up to $NUM_CPUS work
items submitted to the ceph-cap workqueue could still be active in
the system.
Does CephFS need/rely on the $NUM_CPUS limit somewhere? If not, how
about removing the ceph-cap workqueue and submitting its work items
to the ceph-inode workqueue instead?
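[Editor's note: a rough, untested sketch of the consolidation Ilya
suggests, using the field names from fs/ceph/super.c as quoted in this
thread; the queue_work() call shape is illustrative only.]

```c
/*
 * Hypothetical sketch: drop the separate ceph-cap workqueue and queue
 * cap work items on the ceph-inode workqueue instead.  Untested.
 */
fsc->inode_wq = alloc_workqueue("ceph-inode", WQ_UNBOUND, 0);
if (!fsc->inode_wq)
	goto fail_client;
/*
 * fsc->cap_wq, its allocation here, and its destroy_workqueue() call
 * on the teardown path would go away; call sites currently doing
 *
 *	queue_work(fsc->cap_wq, &work->work);
 *
 * would queue to fsc->inode_wq instead.
 */
```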
Thanks,
Ilya
* Re: [PATCH] ceph: make the ceph-cap workqueue UNBOUND
2024-05-20 19:37 ` Ilya Dryomov
@ 2024-05-20 19:48 ` Ilya Dryomov
[not found] ` <56c20af2-bf22-4cc2-b0db-637a51511c12@redhat.com>
1 sibling, 0 replies; 5+ messages in thread
From: Ilya Dryomov @ 2024-05-20 19:48 UTC (permalink / raw)
To: xiubli; +Cc: ceph-devel, jlayton, vshankar, mchangir, Stefan Kooman
On Mon, May 20, 2024 at 9:37 PM Ilya Dryomov <idryomov@gmail.com> wrote:
>
> On Thu, Mar 21, 2024 at 3:18 AM <xiubli@redhat.com> wrote:
> >
> > From: Xiubo Li <xiubli@redhat.com>
> >
> > There is no harm in marking the ceph-cap workqueue unbound, just
> > like we do for the ceph-inode workqueue.
> >
> > URL: https://www.spinics.net/lists/ceph-users/msg78775.html
> > URL: https://tracker.ceph.com/issues/64977
> > Reported-by: Stefan Kooman <stefan@bit.nl>
> > Signed-off-by: Xiubo Li <xiubli@redhat.com>
> > ---
> > fs/ceph/super.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/fs/ceph/super.c b/fs/ceph/super.c
> > index 4dcbbaa297f6..0bfe4f8418fd 100644
> > --- a/fs/ceph/super.c
> > +++ b/fs/ceph/super.c
> > @@ -851,7 +851,7 @@ static struct ceph_fs_client *create_fs_client(struct ceph_mount_options *fsopt,
> > fsc->inode_wq = alloc_workqueue("ceph-inode", WQ_UNBOUND, 0);
> > if (!fsc->inode_wq)
> > goto fail_client;
> > - fsc->cap_wq = alloc_workqueue("ceph-cap", 0, 1);
> > + fsc->cap_wq = alloc_workqueue("ceph-cap", WQ_UNBOUND, 1);
>
> Hi Xiubo,
>
> You wrote that there is no harm in making the ceph-cap workqueue
> unbound but, if it's made unbound, it would be almost the same as the
> ceph-inode workqueue. The only difference would be that the max_active
> parameter for the ceph-cap workqueue is 1 instead of 0 (i.e. some
> default which is pretty high). Given that max_active is interpreted
> as a per-CPU number even for unbound workqueues, up to $NUM_CPUS work
> items submitted to the ceph-cap workqueue could still be active in
> the system.
>
> Does CephFS need/rely on the $NUM_CPUS limit somewhere? If not, how
> about removing the ceph-cap workqueue and submitting its work items
> to the ceph-inode workqueue instead?
Related question: why does ceph_force_reconnect() flush only one of
these workqueues (ceph-inode) instead of both? When invalidating
everything, aren't we concerned about potential stale work items from
before the session is recovered?
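[Editor's note: as a hedged illustration of the question above, not a
proposed patch, flushing both queues there would amount to:]

```c
/*
 * Hypothetical sketch: flush both workqueues in ceph_force_reconnect()
 * so that no stale work item queued before the session recovery can
 * still run afterwards.  Untested illustration only.
 */
flush_workqueue(fsc->inode_wq);
flush_workqueue(fsc->cap_wq);
```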
Thanks,
Ilya
* Re: [PATCH] ceph: make the ceph-cap workqueue UNBOUND
[not found] ` <56c20af2-bf22-4cc2-b0db-637a51511c12@redhat.com>
@ 2024-05-21 7:39 ` Ilya Dryomov
2024-05-21 7:51 ` Xiubo Li
0 siblings, 1 reply; 5+ messages in thread
From: Ilya Dryomov @ 2024-05-21 7:39 UTC (permalink / raw)
To: Xiubo Li; +Cc: ceph-devel, jlayton, vshankar, mchangir, Stefan Kooman
On Tue, May 21, 2024 at 5:37 AM Xiubo Li <xiubli@redhat.com> wrote:
>
>
> On 5/21/24 03:37, Ilya Dryomov wrote:
>
> On Thu, Mar 21, 2024 at 3:18 AM <xiubli@redhat.com> wrote:
>
> From: Xiubo Li <xiubli@redhat.com>
>
> There is no harm in marking the ceph-cap workqueue unbound, just
> like we do for the ceph-inode workqueue.
>
> URL: https://www.spinics.net/lists/ceph-users/msg78775.html
> URL: https://tracker.ceph.com/issues/64977
> Reported-by: Stefan Kooman <stefan@bit.nl>
> Signed-off-by: Xiubo Li <xiubli@redhat.com>
> ---
> fs/ceph/super.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/fs/ceph/super.c b/fs/ceph/super.c
> index 4dcbbaa297f6..0bfe4f8418fd 100644
> --- a/fs/ceph/super.c
> +++ b/fs/ceph/super.c
> @@ -851,7 +851,7 @@ static struct ceph_fs_client *create_fs_client(struct ceph_mount_options *fsopt,
> fsc->inode_wq = alloc_workqueue("ceph-inode", WQ_UNBOUND, 0);
> if (!fsc->inode_wq)
> goto fail_client;
> - fsc->cap_wq = alloc_workqueue("ceph-cap", 0, 1);
> + fsc->cap_wq = alloc_workqueue("ceph-cap", WQ_UNBOUND, 1);
>
> Hi Xiubo,
>
> You wrote that there is no harm in making the ceph-cap workqueue
> unbound but, if it's made unbound, it would be almost the same as the
> ceph-inode workqueue. The only difference would be that the max_active
> parameter for the ceph-cap workqueue is 1 instead of 0 (i.e. some
> default which is pretty high). Given that max_active is interpreted
> as a per-CPU number even for unbound workqueues, up to $NUM_CPUS work
> items submitted to the ceph-cap workqueue could still be active in
> the system.
>
> Does CephFS need/rely on the $NUM_CPUS limit somewhere? If not, how
> about removing the ceph-cap workqueue and submitting its work items
> to the ceph-inode workqueue instead?
>
> Checked it again and found that an UNBOUND and max_active==1 combo has a special meaning:
>
> Some users depend on the strict execution ordering of ST wq. The combination of @max_active of 1 and WQ_UNBOUND is used to achieve this behavior. Work items on such wq are always queued to the unbound worker-pools and only one work item can be active at any given time thus achieving the same ordering property as ST wq.
This hasn't been true for 10 years, see commit
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3bc1e711c26bff01d41ad71145ecb8dcb4412576
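[Editor's note: per the commit linked above, WQ_UNBOUND with
max_active == 1 no longer implies strict one-at-a-time execution; code
that actually needs that ordering must request an ordered workqueue
explicitly. A sketch of what that would look like here, as an
illustration only and not something proposed in this thread:]

```c
/*
 * Hypothetical sketch: if strict single-threaded ordering were really
 * required for cap work, the explicit API would be used instead of
 * WQ_UNBOUND + max_active == 1.  Untested.
 */
fsc->cap_wq = alloc_ordered_workqueue("ceph-cap", 0);
if (!fsc->cap_wq)
	goto fail_inode_wq;
```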
Thanks,
Ilya
* Re: [PATCH] ceph: make the ceph-cap workqueue UNBOUND
2024-05-21 7:39 ` Ilya Dryomov
@ 2024-05-21 7:51 ` Xiubo Li
0 siblings, 0 replies; 5+ messages in thread
From: Xiubo Li @ 2024-05-21 7:51 UTC (permalink / raw)
To: Ilya Dryomov; +Cc: ceph-devel, jlayton, vshankar, mchangir, Stefan Kooman
On 5/21/24 15:39, Ilya Dryomov wrote:
> On Tue, May 21, 2024 at 5:37 AM Xiubo Li <xiubli@redhat.com> wrote:
>>
>> On 5/21/24 03:37, Ilya Dryomov wrote:
>>
>> On Thu, Mar 21, 2024 at 3:18 AM <xiubli@redhat.com> wrote:
>>
>> From: Xiubo Li <xiubli@redhat.com>
>>
>> There is no harm in marking the ceph-cap workqueue unbound, just
>> like we do for the ceph-inode workqueue.
>>
>> URL: https://www.spinics.net/lists/ceph-users/msg78775.html
>> URL: https://tracker.ceph.com/issues/64977
>> Reported-by: Stefan Kooman <stefan@bit.nl>
>> Signed-off-by: Xiubo Li <xiubli@redhat.com>
>> ---
>> fs/ceph/super.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/fs/ceph/super.c b/fs/ceph/super.c
>> index 4dcbbaa297f6..0bfe4f8418fd 100644
>> --- a/fs/ceph/super.c
>> +++ b/fs/ceph/super.c
>> @@ -851,7 +851,7 @@ static struct ceph_fs_client *create_fs_client(struct ceph_mount_options *fsopt,
>> fsc->inode_wq = alloc_workqueue("ceph-inode", WQ_UNBOUND, 0);
>> if (!fsc->inode_wq)
>> goto fail_client;
>> - fsc->cap_wq = alloc_workqueue("ceph-cap", 0, 1);
>> + fsc->cap_wq = alloc_workqueue("ceph-cap", WQ_UNBOUND, 1);
>>
>> Hi Xiubo,
>>
>> You wrote that there is no harm in making the ceph-cap workqueue
>> unbound but, if it's made unbound, it would be almost the same as the
>> ceph-inode workqueue. The only difference would be that the max_active
>> parameter for the ceph-cap workqueue is 1 instead of 0 (i.e. some
>> default which is pretty high). Given that max_active is interpreted
>> as a per-CPU number even for unbound workqueues, up to $NUM_CPUS work
>> items submitted to the ceph-cap workqueue could still be active in
>> the system.
>>
>> Does CephFS need/rely on the $NUM_CPUS limit somewhere? If not, how
>> about removing the ceph-cap workqueue and submitting its work items
>> to the ceph-inode workqueue instead?
>>
>> Checked it again and found that an UNBOUND and max_active==1 combo has a special meaning:
>>
>> Some users depend on the strict execution ordering of ST wq. The combination of @max_active of 1 and WQ_UNBOUND is used to achieve this behavior. Work items on such wq are always queued to the unbound worker-pools and only one work item can be active at any given time thus achieving the same ordering property as ST wq.
> This hasn't been true for 10 years, see commit
>
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3bc1e711c26bff01d41ad71145ecb8dcb4412576
Okay, I see now that the doc I read was out of date.
- Xiubo
> Thanks,
>
> Ilya
>
end of thread, other threads:[~2024-05-21 7:51 UTC | newest]
Thread overview: 5+ messages
2024-03-21 2:15 [PATCH] ceph: make the ceph-cap workqueue UNBOUND xiubli
2024-05-20 19:37 ` Ilya Dryomov
2024-05-20 19:48 ` Ilya Dryomov
[not found] ` <56c20af2-bf22-4cc2-b0db-637a51511c12@redhat.com>
2024-05-21 7:39 ` Ilya Dryomov
2024-05-21 7:51 ` Xiubo Li