From: dingel@linux.vnet.ibm.com (Dominik Dingel)
To: kernelnewbies@lists.kernelnewbies.org
Subject: Distributed Process Scheduling Algorithm
Date: Tue, 16 Feb 2016 09:42:52 +0100 [thread overview]
Message-ID: <20160216094252.194c7907@BR9TG4T3.de.ibm.com> (raw)
In-Reply-To: <42173.1455599614@turing-police.cc.vt.edu>
On Tue, 16 Feb 2016 00:13:34 -0500
Valdis.Kletnieks@vt.edu wrote:
> On Tue, 16 Feb 2016 10:18:26 +0530, Nitin Varyani said:
>
> > 1) Sending process context via network
>
> Note that this is a non-trivial issue by itself. At a *minimum*,
> you'll need all the checkpoint-restart code. Plus, if the process
> has any open TCP connections, *those* have to be migrated without
> causing a security problem. Good luck on figuring out how to properly
> route packets in this case - consider 4 nodes 10.0.0.1 through 10.0.0.4:
> you migrate a process from 10.0.0.1 to 10.0.0.3. How do you make sure
> *that process*'s packets go to 0.3 while all other packets still go to
> 0.1. Also, consider the impact this may have on iptables: if there is
> a state=RELATED,ESTABLISHED rule on 0.1, that conntrack state needs to be
> relayed to 0.3 as well.
>
> For bonus points, what's the most efficient way to transfer a large
> process image (say 500M, or even a bloated Firefox at 3.5G), without
> causing timeouts while copying the image?
>
> I hope your research project is *really* well funded - you're going
> to need a *lot* of people (Hint - find out how many people work on
> VMWare - that should give you a rough idea)
I wouldn't see things that darkly; this is also an interesting puzzle.
To migrate processes I would pick an already existing solution, like the ones
that exist for containers. So every process should, if possible, run in a container.
To migrate them efficiently without building some form of distributed shared memory,
you might want to look at userfaultfd.
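As a rough illustration (my own minimal sketch, not taken from CRIU or any existing
migration code): userfaultfd lets the destination node start the task with its memory
unpopulated and pull pages in on demand when they are first touched, post-copy style.
Error handling is omitted, and depending on the kernel, creating the fd may require
privileges.

/* Sketch: serve one missing-page fault via userfaultfd (userspace, not kernel code).
 * Build with: gcc -pthread uffd_sketch.c */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

static long page_size;

/* Stand-in for the migrated task: touching the page triggers the fault. */
static void *touch_page(void *area)
{
	printf("first byte: %d\n", *(volatile char *)area);
	return NULL;
}

int main(void)
{
	page_size = sysconf(_SC_PAGESIZE);

	/* Create the userfaultfd and agree on the API version with the kernel. */
	int uffd = syscall(SYS_userfaultfd, O_CLOEXEC);
	struct uffdio_api api = { .api = UFFD_API };
	ioctl(uffd, UFFDIO_API, &api);

	/* The region whose pages will be filled lazily ("post-copy"). */
	char *area = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)area, .len = page_size },
		.mode  = UFFDIO_REGISTER_MODE_MISSING,
	};
	ioctl(uffd, UFFDIO_REGISTER, &reg);

	pthread_t t;
	pthread_create(&t, NULL, touch_page, area);

	/* Fault handler: in a real migration the page content would be
	 * fetched from the source node over the network on demand. */
	struct uffd_msg msg;
	read(uffd, &msg, sizeof(msg));		/* blocks until the fault */

	char *src = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	memset(src, 42, page_size);		/* pretend: remote page data */

	struct uffdio_copy copy = {
		.dst = msg.arg.pagefault.address & ~((__u64)page_size - 1),
		.src = (unsigned long)src,
		.len = page_size,
	};
	ioctl(uffd, UFFDIO_COPY, &copy);	/* wakes the faulting thread */

	pthread_join(t, NULL);
	return 0;
}

CRIU's lazy-pages mode uses essentially this mechanism to serve pages over the
network during a post-copy restore, so you would not have to start from scratch.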
Now back to the scheduling: I do not think that every node should keep track
of every process on every other node, as this would require a massive amount of
communication and hurt scalability. So you would either implement something like work stealing (see the sketch below) or go for a central entity like Mesos, which could do process/job/container scheduling for you.
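To make the work-stealing idea concrete, here is a deliberately simplified
shared-memory sketch (all names are hypothetical, this is not code from any existing
scheduler). The point is only that a node touches a peer's queue when it runs dry;
in a distributed setting the steal would be an RPC to a peer node rather than a lock
around a shared list.

#include <pthread.h>
#include <stddef.h>

struct task {
	struct task *next;
	void (*run)(void *arg);
	void *arg;
};

struct node_queue {
	pthread_mutex_t lock;	/* init with PTHREAD_MUTEX_INITIALIZER */
	struct task *head;	/* owner pushes and pops here */
};

/* Owner enqueues locally; no cross-node communication needed. */
static void push_local(struct node_queue *q, struct task *t)
{
	pthread_mutex_lock(&q->lock);
	t->next = q->head;
	q->head = t;
	pthread_mutex_unlock(&q->lock);
}

/* Take one task from a queue; used both by the owner and by thieves. */
static struct task *pop(struct node_queue *q)
{
	struct task *t;

	pthread_mutex_lock(&q->lock);
	t = q->head;
	if (t)
		q->head = t->next;
	pthread_mutex_unlock(&q->lock);
	return t;
}

/* An idle node only bothers its peers when its own queue is empty. */
static struct task *get_work(struct node_queue *self,
			     struct node_queue *peers, size_t npeers)
{
	struct task *t = pop(self);
	size_t i;

	for (i = 0; !t && i < npeers; i++)
		t = pop(&peers[i]);	/* steal from the first busy peer */
	return t;
}

The nice property is that communication only happens on imbalance, which is exactly
what you want when tracking every process on every node is too expensive.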
There are also two pitfalls which are hard enough on their own:
- interprocess communication between two processes over something other than a socket;
  in such a case you would probably need to merge the two distinct containers
- dedicated hardware
Dominik