* Re: [PATCH] io_uring/io-wq: stop setting PF_NO_SETAFFINITY on io-wq workers
  From: Daniel Dao @ 2023-03-14 10:07 UTC
  To: Jens Axboe; +Cc: io-uring, Waiman Long, cgroups-u79uwXL29TY76Z2rM5mHXA

On Wed, Mar 8, 2023 at 2:27 PM Jens Axboe
<axboe-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org> wrote:
>
> Every now and then reports come in that are puzzled on why changing
> affinity on the io-wq workers fails with EINVAL. This happens because they
> set PF_NO_SETAFFINITY as part of their creation, as io-wq organizes
> workers into groups based on what CPU they are running on.
>
> However, this is purely an optimization and not a functional requirement.
> We can allow setting affinity, and just lazily update our worker to wqe
> mappings. If a given io-wq thread times out, it normally exits if there's
> no more work to do. The exception is if it's the last worker available.
> For the timeout case, check the affinity of the worker against group mask
> and exit even if it's the last worker. New workers should be created with
> the right mask and in the right location.

The patch resolved the bug around enabling cpuset for subtree_control for
me. However, it also doesn't prevent a user from setting a cpuset value
that is incompatible with the iou threads. For example, on a 2-NUMA, 4-CPU
node, new iou-wrk threads are bound to CPUs 2-3 while we can still set
cpuset.cpus to 1-2 successfully. The end result is a mix of CPU
distributions such as:

  pid 533's current affinity list: 1,2    # process
  pid 720's current affinity list: 1,2    # iou-wrk-533
  pid 5236's current affinity list: 2,3   # iou-wrk-533, running outside of cpuset

IMO this violates the principle of cpuset and can be confusing for end users.
I think I prefer Waiman's suggestion of allowing an implicit move to cpuset
when enabling cpuset with subtree_control, but not explicit moves such as
when setting cpuset.cpus or writing the pids into cgroup.procs. It's easier
to reason about and makes the failure mode more explicit.

What do you think?

Cheers,
Daniel.
* Re: [PATCH] io_uring/io-wq: stop setting PF_NO_SETAFFINITY on io-wq workers
  From: Michal Koutný @ 2023-03-14 16:25 UTC
  To: Daniel Dao; +Cc: Jens Axboe, io-uring, Waiman Long, cgroups

Hello.

On Tue, Mar 14, 2023 at 10:07:40AM +0000, Daniel Dao <dqminh@cloudflare.com> wrote:
> IMO this violated the principle of cpuset and can be confusing for end users.
> I think I prefer Waiman's suggestion of allowing an implicit move to cpuset
> when enabling cpuset with subtree_control but not explicit moves such as when
> setting cpuset.cpus or writing the pids into cgroup.procs. It's easier to reason
> about and make the failure mode more explicit.
>
> What do you think ?

I think cpuset should cap an IO worker's affinity (just as it does for
sched_setaffinity(2)). Thus:

- modifying cpuset.cpus: updates the task's affinity, for sure
- implicit migration (enabling cpuset): updates the task's affinity, effectively a no-op
- explicit migration (meh): updates the task's affinity, ¯\_(ツ)_/¯

My understanding of PF_NO_SETAFFINITY is that it's for kernel threads that
do work that's functionally needed on a given CPU and thus cannot be
migrated [1]. As said previously, for io_uring workers affinity is for
performance only.
Hence, on top of 01e68ce08a30 ("io_uring/io-wq: stop setting
PF_NO_SETAFFINITY on io-wq workers"), I'd also suggest:

--- a/io_uring/sqpoll.c
+++ b/io_uring/sqpoll.c
@@ -233,7 +233,6 @@ static int io_sq_thread(void *data)
 		set_cpus_allowed_ptr(current, cpumask_of(sqd->sq_cpu));
 	else
 		set_cpus_allowed_ptr(current, cpu_online_mask);
-	current->flags |= PF_NO_SETAFFINITY;
 
 	mutex_lock(&sqd->lock);
 	while (1) {

After all, io_uring_setup(2) already mentions:

> When cgroup setting cpuset.cpus changes (typically in container
> environment), the bounded cpu set may be changed as well.

HTH,
Michal

[1] Ideally, those should always remain in the root cpuset cgroup.
* Re: [PATCH] io_uring/io-wq: stop setting PF_NO_SETAFFINITY on io-wq workers
  From: Jens Axboe @ 2023-03-14 16:48 UTC
  To: Michal Koutný, Daniel Dao; +Cc: io-uring, Waiman Long, cgroups

On 3/14/23 10:25 AM, Michal Koutný wrote:
> Hello.
>
> On Tue, Mar 14, 2023 at 10:07:40AM +0000, Daniel Dao <dqminh@cloudflare.com> wrote:
>> IMO this violated the principle of cpuset and can be confusing for end users.
>> I think I prefer Waiman's suggestion of allowing an implicit move to cpuset
>> when enabling cpuset with subtree_control but not explicit moves such as when
>> setting cpuset.cpus or writing the pids into cgroup.procs. It's easier to reason
>> about and make the failure mode more explicit.
>>
>> What do you think ?
>
> I think cpuset should top IO worker's affinity (like sched_setaffinity(2)).
> Thus:
> - modifying cpuset.cpus			update task's affinity, for sure
> - implicit migration (enabling cpuset)	update task's affinity, effective nop
> - explicit migration (meh)		update task's affinity, ¯\_(ツ)_/¯
>
> My understanding of PF_NO_SETAFFINITY is that's for kernel threads that
> do work that's functionally needed on a given CPU and thus they cannot
> be migrated [1]. As said previously for io_uring workers, affinity is
> for performance only.
>
> Hence, I'd also suggest on top of 01e68ce08a30 ("io_uring/io-wq: stop
> setting PF_NO_SETAFFINITY on io-wq workers"):
>
> --- a/io_uring/sqpoll.c
> +++ b/io_uring/sqpoll.c
> @@ -233,7 +233,6 @@ static int io_sq_thread(void *data)
>  		set_cpus_allowed_ptr(current, cpumask_of(sqd->sq_cpu));
>  	else
>  		set_cpus_allowed_ptr(current, cpu_online_mask);
> -	current->flags |= PF_NO_SETAFFINITY;
>
>  	mutex_lock(&sqd->lock);
>  	while (1) {

Ah yes, let's get that done as well in the same release. Do you want to
send a patch for this?
--
Jens Axboe
* Re: [PATCH] io_uring/io-wq: stop setting PF_NO_SETAFFINITY on io-wq workers
  From: Waiman Long @ 2023-03-14 18:17 UTC
  To: Michal Koutný, Daniel Dao; +Cc: Jens Axboe, io-uring, cgroups

On 3/14/23 12:25, Michal Koutný wrote:
> Hello.
>
> On Tue, Mar 14, 2023 at 10:07:40AM +0000, Daniel Dao <dqminh@cloudflare.com> wrote:
>> IMO this violated the principle of cpuset and can be confusing for end users.
>> I think I prefer Waiman's suggestion of allowing an implicit move to cpuset
>> when enabling cpuset with subtree_control but not explicit moves such as when
>> setting cpuset.cpus or writing the pids into cgroup.procs. It's easier to reason
>> about and make the failure mode more explicit.
>>
>> What do you think ?
>
> I think cpuset should top IO worker's affinity (like sched_setaffinity(2)).
> Thus:
> - modifying cpuset.cpus			update task's affinity, for sure
> - implicit migration (enabling cpuset)	update task's affinity, effective nop

Note that since commit 7fd4da9c158 ("cgroup/cpuset: Optimize
cpuset_attach() on v2") in v6.2, implicit migration (enabling cpuset)
will not affect the cpu affinity of the process.

> - explicit migration (meh)		update task's affinity, ¯\_(ツ)_/¯
>
> My understanding of PF_NO_SETAFFINITY is that's for kernel threads that
> do work that's functionally needed on a given CPU and thus they cannot
> be migrated [1]. As said previously for io_uring workers, affinity is
> for performance only.
>
> Hence, I'd also suggest on top of 01e68ce08a30 ("io_uring/io-wq: stop
> setting PF_NO_SETAFFINITY on io-wq workers"):
>
> --- a/io_uring/sqpoll.c
> +++ b/io_uring/sqpoll.c
> @@ -233,7 +233,6 @@ static int io_sq_thread(void *data)
>  		set_cpus_allowed_ptr(current, cpumask_of(sqd->sq_cpu));
>  	else
>  		set_cpus_allowed_ptr(current, cpu_online_mask);
> -	current->flags |= PF_NO_SETAFFINITY;
>
>  	mutex_lock(&sqd->lock);
>  	while (1) {
>
> Afterall, io_uring_setup(2) already mentions:
>> When cgroup setting cpuset.cpus changes (typically in container
>> environment), the bounded cpu set may be changed as well.

Using sched_setaffinity(2) can be another alternative. Starting from
v6.2, the cpu affinity set by sched_setaffinity(2) will be more or less
maintained and constrained by the current cpuset even if the cpu list is
being changed, as long as there is overlap between the two. The
intersection between the cpu affinity set by sched_setaffinity(2) and
the effective_cpus in cpuset will be the effective cpu affinity of the
task.

Cheers,
Longman