From: Ryan Cumming <bodnar42@phalynx.dhs.org>
To: Robert Love <rml@tech9.net>
Cc: linux-kernel@vger.kernel.org
Subject: Re: [patch] sched_[set|get]_affinity() syscall, 2.4.15-pre9
Date: Thu, 22 Nov 2001 16:20:11 -0800
Message-ID: <E16744i-0004zQ-00@localhost>
In-Reply-To: <Pine.LNX.4.33.0111220951240.2446-300000@localhost.localdomain> <1006472754.1336.0.camel@icbm>
On November 22, 2001 15:45, Robert Love wrote:
>
> Ie, we would have a /proc/<pid>/cpu_affinity which is the same as your
> `unsigned long * user_mask_ptr'. Reading and writing of the proc
> interface would correspond to your get and set syscalls. Besides the
> sort of relevancy and useful abstraction of putting the affinity in the
> procfs, it eliminates any sizeof(cpus_allowed) problem since the read
> string is the size in characters of cpus_allowed.
>
> I would use your syscall code, though -- just reimplement it as a procfs
> file. This would mean adding a proc_write function, since the _actual_
> procfs (the proc part) only has a read method, but that is simple.
>
> Thoughts?
Hear, hear. I was just thinking "Well, I like the CPU affinity idea, but I
loathe syscall creep... I hope this Robert Love fellow says something about
that" as I read your email's header.
In addition to keeping the syscall table from being filled with very
specific, non-standard, and use-once syscalls, a /proc interface would allow
me to change the CPU affinity of processes that aren't {get, set}_affinity
aware (i.e., all Linux applications written up to this point). This isn't
very different from how it's possible to change a process's other scheduling
properties (priority, scheduler) from another process. Imagine if renice(8)
had to be implemented as attaching to a process and calling nice(2)... ick.
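To make the renice(8) comparison concrete, here's a sketch of what the core
of such a tool could look like. Everything in it is hypothetical: the
/proc/<pid>/cpu_affinity file doesn't exist yet, and the hex-string format
is my guess at the interface, not part of Robert's proposal:

```c
/* Sketch of a renice(8)-style affinity tool against the proposed
 * (hypothetical) /proc/<pid>/cpu_affinity file; assumes the mask is
 * written as a hex string, which is a guess at the interface. */
#include <stdio.h>
#include <sys/types.h>

/* Write a new CPU mask for an arbitrary pid; returns 0 on success,
 * -1 if the pid is gone or this kernel lacks the proc file. */
int set_affinity(pid_t pid, unsigned long mask)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/cpu_affinity", (int)pid);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%lx\n", mask);
	return fclose(f) ? -1 : 0;
}
```

A renice-style front end would just parse a pid and a mask from argv and
call this -- no cooperation from the target process required.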
Also, as an application developer, I try to avoid conditionally compiled,
system-specific calls. I would have far fewer "cleanliness" objections
to testing for the /proc/<pid>/cpu_affinity file's existence and
conditionally writing to it. Compare this to the hacks some network servers
use to try to detect sendfile(2)'s presence at runtime, and you'll see what I
mean. Remember, everything is a file ;)
And one final thing... what sort of benefit does CPU affinity have if we
have the scheduler take CPU migration costs into account correctly? I can think
of a lot of corner cases, but in general, it seems to me that it's a lot more
sane to have the scheduler decide where processes belong. What if an
application with n threads, where n is less than the number of CPUs, has to
decide which CPUs to bind its threads to? What if a similar app, or another
instance of the same app, already decided to bind against the same set of
CPUs? The scheduler is stuck with an unfair scheduling load on those poor
CPUs, because the scheduling decision was moved away from where it really
should take place: the scheduler. I'm sure I'm missing something, though.
-Ryan
Thread overview: 39+ messages
2001-11-22 8:59 [patch] sched_[set|get]_affinity() syscall, 2.4.15-pre9 Ingo Molnar
2001-11-22 20:22 ` Davide Libenzi
2001-11-22 23:45 ` Robert Love
2001-11-23 0:20 ` Ryan Cumming [this message]
2001-11-23 0:36 ` Mark Hahn
2001-11-23 11:46 ` Ingo Molnar
2001-11-24 22:44 ` Davide Libenzi
2001-11-23 0:51 ` Robert Love
2001-11-23 1:11 ` Andreas Dilger
2001-11-23 1:16 ` Robert Love
2001-11-23 11:36 ` Ingo Molnar
2001-11-24 2:01 ` Davide Libenzi
2001-11-27 3:39 ` Robert Love
2001-11-27 7:13 ` Joe Korty
2001-11-27 20:53 ` Robert Love
2001-11-27 21:31 ` Nathan Dabney
2001-11-27 8:04 ` procfs bloat, syscall bloat [in reference to cpu affinity] Joe Korty
2001-11-27 11:32 ` Ingo Molnar
2001-11-27 20:56 ` Robert Love
2001-11-27 14:04 ` Phil Howard
2001-11-27 18:05 ` Tim Hockin
2001-11-27 8:40 ` [patch] sched_[set|get]_affinity() syscall, 2.4.15-pre9 Ingo Molnar
2001-11-27 4:41 ` a nohup-like interface to cpu affinity Linux maillist account
2001-11-27 4:49 ` Robert Love
2001-11-27 6:32 ` Linux maillist account
2001-11-27 6:39 ` Robert Love
2001-11-27 8:42 ` Sean Hunter
2001-12-06 1:35 ` Matthew Dobson
2001-12-06 1:37 ` [RFC][PATCH] cpus_allowed/launch_policy patch, 2.4.16 Matthew Dobson
2001-12-06 2:08 ` Davide Libenzi
2001-12-06 2:17 ` Matthew Dobson
2001-12-06 2:39 ` Davide Libenzi
2001-12-06 2:42 ` Robert Love
2001-12-06 22:21 ` Matthew Dobson
2001-11-27 6:50 ` a nohup-like interface to cpu affinity Linux maillist account
2001-11-27 8:26 ` Ingo Molnar
2001-11-23 11:02 ` [patch] sched_[set|get]_affinity() syscall, 2.4.15-pre9 Ingo Molnar
[not found] <1006832357.1385.3.camel@icbm.suse.lists.linux.kernel>
[not found] ` <5.0.2.1.2.20011127020817.009ed3d0@pop.mindspring.com.suse.lists.linux.kernel>
2001-11-27 7:32 ` Andi Kleen
2001-11-27 21:01 ` Robert Love