public inbox for linux-kernel@vger.kernel.org
* latest linus-2.5 BK broken
@ 2002-06-18 17:18 James Simmons
  2002-06-18 17:46 ` Robert Love
  0 siblings, 1 reply; 87+ messages in thread
From: James Simmons @ 2002-06-18 17:18 UTC (permalink / raw)
  To: Linux Kernel Mailing List



  gcc -Wp,-MD,./.sched.o.d -D__KERNEL__ -I/tmp/fbdev-2.5/include -Wall -Wstrict-prototypes -Wno-trigraphs -O2 -fomit-frame-pointer -fno-strict-aliasing -fno-common -pipe -mpreferred-stack-boundary=2 -march=i686 -malign-functions=4  -nostdinc -iwithprefix include    -fno-omit-frame-pointer -DKBUILD_BASENAME=sched   -c -o sched.o sched.c
sched.c: In function `sys_sched_setaffinity':
sched.c:1329: `cpu_online_map' undeclared (first use in this function)
sched.c:1329: (Each undeclared identifier is reported only once
sched.c:1329: for each function it appears in.)
sched.c: In function `sys_sched_getaffinity':
sched.c:1389: `cpu_online_map' undeclared (first use in this function)
make[1]: *** [sched.o] Error 1

   . ---
   |o_o |
   |:_/ |   Give Micro$oft the Bird!!!!
  //   \ \  Use Linux!!!!
 (|     | )
 /'\_   _/`\
 \___)=(___/


^ permalink raw reply	[flat|nested] 87+ messages in thread
* Re: McVoy's Clusters (was Re: latest linus-2.5 BK broken)
@ 2002-06-20 17:23 Jesse Pollard
  2002-06-20 17:43 ` Nick LeRoy
  0 siblings, 1 reply; 87+ messages in thread
From: Jesse Pollard @ 2002-06-20 17:23 UTC (permalink / raw)
  To: pashley, Linux Kernel Mailing List

Sandy Harris <pashley@storm.ca> wrote:
> 
> [ I removed half a dozen cc's on this, and am just sending to the
>   list. Do people actually want the cc's?]
> 
> Larry McVoy wrote:
> 
> > > Checkpointing buys three things.  The ability to preempt jobs, the
> > > ability to migrate processes,
> 
> For large multi-processor systems, it isn't clear that those matter
> much. On single-user systems I've tried, ps -ax | wc -l usually
> gives some number 50 < n < 100. For a multi-user general purpose
> system, my guess would be something under 50 system processes plus
> 50 per user. So for a dozen to 20 users on a departmental server,
> under 1000. A server for a big application, like database or web,
> would have fewer users and more threads, but still only a few 100
> or at most, say 2000.

You don't use compute servers much? The problems we are currently running
require the cluster (IBM SP) to have 100% uptime for a single job. That
job may run for several days. If a detected problem is reported (not yet
catastrophic), it is desired/demanded to checkpoint the user's process.

Currently, we can't - but should be able to by this fall.

Having the user's job checkpoint midway through its computations will allow us
to remove a node from active service, substitute a different node, and
resume the user's process without losing many hours of computation (we have
a maximum of 300 nodes for computation, another 30 for I/O and front end).

Just because a network interface fails is no reason to lose the job.

> So at something like 8 CPUs in a personal workstation and 128 or
> 256 for a server, things average out to 8 processes per CPU, and
> it is not clear that process migration or any form of pre-emption
> beyond the usual kernel scheduling is needed.
> 
> What combination of resources and loads do you think preemption
> and migration are need for?

It depends on the job. A web server farm shouldn't need one. A distributed
compute cluster needs it to:

a. be able to suspend large (256-300 nodes), long-running (4-8 hours),
   low-priority jobs, to favor high-priority production jobs (which may
   also be relatively long running: say 2-4 hours on 256 nodes).
b. be able to replace/substitute nodes (switch processing from a failing
   node to allow for on-line replacement of the failing node or to wait for
   spare parts).

> > > and the ability to recover from failed nodes, (assuming the 
> > > failed hardware didn't corrupt your jobs checkpoint).
> 
> That matters, but it isn't entirely clear that it needs to be done
> in the kernel. Things like databases and journalling filesystems
> already have their own mechanisms and it is not remarkably onerous
> to put them into applications where required.

Which is why I realized you don't use compute clusters very often.

1. User jobs, written in Fortran/C/other, do not usually come with the ability
   to take snapshots of the computation.
2. There is the problem of redirecting network connections (MPI/PVM) from one
   place to another.
3. (Related to 2) Synchronized process suspension is difficult to impossible
   to do outside the kernel.

> [big snip]
> 
> > Larry McVoy's SMP Clusters
> > 
> > Discussion on November 8, 2001
> > 
> > Larry McVoy, Ted Ts'o, and Paul McKenney
> > 
> > What is SMP Clusters?
> > 
> >      SMP Clusters is a method of partitioning an SMP (symmetric
> >      multiprocessing) machine's CPUs, memory, and I/O devices
> >      so that multiple "OSlets" run on this machine.  Each OSlet
> >      owns and controls its partition.  A given partition is
> >      expected to contain from 4-8 CPUs, its share of memory,
> >      and its share of I/O devices.  A machine large enough to
> >      have SMP Clusters profitably applied is expected to have
> >      enough of the standard I/O adapters (e.g., ethernet,
> >      SCSI, FC, etc.) so that each OSlet would have at least
> >      one of each.
> 
> I'm not sure whose definition this is:
>    supercomputer: a device for converting compute-bound problems
>       into I/O-bound problems
> but I suspect it is at least partially correct, and Beowulfs are
> sometimes just devices to convert them to network-bound problems.
> 
> For a network-bound task like web serving, I can see a large
> payoff in having each OSlet doing its own I/O.
> 
> However, in general I fail to see why each OSlet should have
> independent resources rather than something like using one to
> run a shared file system and another to handle the networking
> for everybody.

How about reliability, or security isolation (an accounting server isolated
from a web server or audit server, or both)?

See Sun's use of "domains" in Solaris, which does this in a single host.

-------------------------------------------------------------------------
Jesse I Pollard, II
Email: pollard@navo.hpc.mil

Any opinions expressed are solely my own.


end of thread, other threads:[~2002-06-24 13:06 UTC | newest]

Thread overview: 87+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2002-06-18 17:18 latest linus-2.5 BK broken James Simmons
2002-06-18 17:46 ` Robert Love
2002-06-18 18:51   ` Rusty Russell
2002-06-18 18:43     ` Zwane Mwaikambo
2002-06-18 18:56     ` Linus Torvalds
2002-06-18 18:59       ` Robert Love
2002-06-18 20:05       ` Rusty Russell
2002-06-18 20:05         ` Linus Torvalds
2002-06-18 20:31           ` Rusty Russell
2002-06-18 20:41             ` Linus Torvalds
2002-06-18 21:12               ` Benjamin LaHaise
2002-06-18 21:08                 ` Cort Dougan
2002-06-18 21:47                   ` Linus Torvalds
2002-06-19 12:29                     ` Eric W. Biederman
2002-06-19 17:27                       ` Linus Torvalds
2002-06-20  3:57                         ` Eric W. Biederman
2002-06-20  5:24                           ` Larry McVoy
2002-06-20  7:26                             ` Andreas Dilger
2002-06-20 14:54                             ` Eric W. Biederman
2002-06-20 15:41                             ` McVoy's Clusters (was Re: latest linus-2.5 BK broken) Sandy Harris
2002-06-20 17:10                               ` William Lee Irwin III
2002-06-20 20:42                                 ` Timothy D. Witham
2002-06-21  5:16                               ` Eric W. Biederman
2002-06-22 14:14                               ` Kai Henningsen
2002-06-20 16:30                           ` latest linus-2.5 BK broken Cort Dougan
2002-06-20 17:15                             ` Linus Torvalds
2002-06-21  6:15                               ` Eric W. Biederman
2002-06-21 17:50                                 ` Larry McVoy
2002-06-21 17:55                                   ` Robert Love
2002-06-21 18:09                                   ` Linux, the microkernel (was Re: latest linus-2.5 BK broken) Jeff Garzik
2002-06-21 18:46                                     ` Cort Dougan
2002-06-21 20:25                                       ` Daniel Phillips
2002-06-22  1:07                                         ` Horst von Brand
2002-06-22  1:23                                           ` Larry McVoy
2002-06-22 12:41                                             ` Roman Zippel
2002-06-23 15:15                                             ` Sandy Harris
2002-06-23 17:29                                               ` Jakob Oestergaard
2002-06-24  6:27                                               ` Craig I. Hagan
2002-06-24 13:06                                                 ` J.A. Magallon
2002-06-24 10:59                                               ` Eric W. Biederman
2002-06-21 19:34                                     ` Rob Landley
2002-06-22 15:31                                       ` Alan Cox
2002-06-22 12:24                                         ` Rob Landley
2002-06-22 19:00                                           ` Ruth Ivimey-Cook
2002-06-22 21:09                                         ` jdow
2002-06-23 17:56                                           ` John Alvord
2002-06-23 20:48                                             ` jdow
2002-06-23 21:40                                         ` [OT] " Xavier Bestel
2002-06-22 18:25                                   ` latest linus-2.5 BK broken Eric W. Biederman
2002-06-22 19:26                                     ` Larry McVoy
2002-06-22 22:25                                       ` Eric W. Biederman
2002-06-22 23:10                                         ` Larry McVoy
2002-06-23  6:34                                       ` William Lee Irwin III
2002-06-23 22:56                                       ` Kai Henningsen
2002-06-20 17:16                             ` RW Hawkins
2002-06-20 17:23                               ` Cort Dougan
2002-06-20 20:40                             ` Martin Dalecki
2002-06-20 20:53                               ` Linus Torvalds
2002-06-20 21:27                                 ` Martin Dalecki
2002-06-20 21:37                                   ` Linus Torvalds
2002-06-20 21:59                                     ` Martin Dalecki
2002-06-20 22:18                                       ` Linus Torvalds
2002-06-20 22:41                                         ` Martin Dalecki
2002-06-21  0:09                                           ` Allen Campbell
2002-06-21  7:43                                       ` Zwane Mwaikambo
2002-06-21 21:02                                       ` Rob Landley
2002-06-22  3:57                                         ` (RFC)i386 arch autodetect( was Re: latest linus-2.5 BK broken ) Matthew D. Pitts
2002-06-22  4:54                                           ` William Lee Irwin III
2002-06-21 16:01                                     ` Re: latest linus-2.5 BK broken Sandy Harris
2002-06-21 20:38                                   ` Rob Landley
2002-06-20 21:13                               ` Timothy D. Witham
2002-06-21 19:53                               ` Rob Landley
2002-06-21  5:34                             ` Eric W. Biederman
2002-06-19 10:21                   ` Padraig Brady
2002-06-18 21:45                 ` Bill Huey
2002-06-18 20:55             ` Robert Love
2002-06-19 13:31               ` Rusty Russell
2002-06-18 19:29     ` Benjamin LaHaise
2002-06-18 19:19       ` Zwane Mwaikambo
2002-06-18 19:49         ` Benjamin LaHaise
2002-06-18 19:27           ` Zwane Mwaikambo
2002-06-18 20:13       ` Rusty Russell
2002-06-18 20:21         ` Linus Torvalds
2002-06-18 22:03         ` Ingo Molnar
  -- strict thread matches above, loose matches on Subject: below --
2002-06-20 17:23 McVoy's Clusters (was Re: latest linus-2.5 BK broken) Jesse Pollard
2002-06-20 17:43 ` Nick LeRoy
2002-06-20 18:32   ` Jesse Pollard
