From: Paul Brook
Subject: Re: [Qemu-devel] QEMU redesigned for MPI (Message Passing Interface)
Date: Tue, 17 Nov 2009 12:20:28 +0000
To: qemu-devel@nongnu.org
Cc: Victor Vasilchenko
Message-Id: <200911171220.28511.paul@codesourcery.com>
In-Reply-To: <4B019203.7030308@codemonkey.ws>
References: <747a56b80911130629q4046b4fbg400f7566997aa931@mail.gmail.com> <4B019203.7030308@codemonkey.ws>
List-Id: qemu-devel.nongnu.org

> > The practical example below will explain it completely:
> >
> > 1) we take 4 common modern computers - CoreQuad + 8 GB memory.
> > 2) we assemble a standard Linux cluster with 16 cores and 32 GB memory.
> > 3) and now - we run only one virtual guest system, but give it ALL
> > available resources.

If the guest isn't aware of this discontinuity then performance will really suck.
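To see why, here is a back-of-envelope sketch. The latency figures are illustrative assumptions, not measurements: local DRAM is taken as ~100 ns, while a remote access that crosses a cluster interconnect in a software SSI is taken as ~10 µs. An unaware guest whose pages are spread evenly over 4 nodes hits local memory only a quarter of the time.

```python
def avg_access_ns(local_ns, remote_ns, local_fraction):
    """Average memory access time, weighted by the fraction of local accesses."""
    return local_fraction * local_ns + (1.0 - local_fraction) * remote_ns

LOCAL_NS = 100       # assumed local DRAM latency
REMOTE_NS = 10_000   # assumed network round-trip to remote memory

# A topology-aware guest keeps, say, 95% of accesses local.
aware = avg_access_ns(LOCAL_NS, REMOTE_NS, 0.95)      # ~595 ns

# An unaware guest striped evenly across 4 nodes is local only 25% of the time.
unaware = avg_access_ns(LOCAL_NS, REMOTE_NS, 0.25)    # ~7525 ns

print(f"aware: {aware:.0f} ns, unaware: {unaware:.0f} ns, "
      f"slowdown: {unaware / aware:.1f}x")
```

Even with these rough numbers the unaware guest is an order of magnitude slower on average memory access, which is the "really suck" above.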
Generally speaking you have to split jobs anyway, the same as you would on a regular cluster; the SSI just makes migration and programming a little easier. If you don't believe me, talk to anyone who's used large SSI systems (e.g. SGI Altix) - those systems have dedicated hardware assist and an interconnect designed for SSI operation, and you still have to be fairly selective about how you use them.

> What you're describing is commonly referred to as a Single System
> Image. It's been around for a while and can be found in software-only
> versions (pre-Xen VirtualIron, ScaleMP) and hardware-assisted (IBM, 3leaf).

Or better still, do it at the OS level (e.g. OpenSSI).

Paul