From mboxrd@z Thu Jan 1 00:00:00 1970
From: Anthony Liguori
Subject: Re: [PATCH] kvm-vmx: add module parameter to avoid trapping HLT instructions (v2)
Date: Fri, 03 Dec 2010 08:25:14 -0600
Message-ID: <4CF8FDCA.8030303@codemonkey.ws>
References: <1291298357-5695-1-git-send-email-aliguori@us.ibm.com>
 <20101202191416.GQ10050@sequoia.sous-sol.org>
 <20101202204016.GB31316@amt.cnet>
 <20101202210737.GS10050@sequoia.sous-sol.org>
 <4CF81FC6.7090503@codemonkey.ws>
 <20101203024221.GX10050@sequoia.sous-sol.org>
 <4CF86235.7050409@codemonkey.ws>
 <20101203034415.GZ10050@sequoia.sous-sol.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Marcelo Tosatti, kvm@vger.kernel.org, Avi Kivity, Srivatsa Vaddagiri
To: Chris Wright
Return-path:
Received: from mail-qy0-f181.google.com ([209.85.216.181]:55362 "EHLO
 mail-qy0-f181.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
 with ESMTP id S1756553Ab0LCOZR (ORCPT );
 Fri, 3 Dec 2010 09:25:17 -0500
Received: by qyk12 with SMTP id 12so11745206qyk.19 for ;
 Fri, 03 Dec 2010 06:25:17 -0800 (PST)
In-Reply-To: <20101203034415.GZ10050@sequoia.sous-sol.org>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 12/02/2010 09:44 PM, Chris Wright wrote:
>> Yes.
>>
>> There's definitely a use-case to have a hard cap.
>>
> OK, good, just wanted to be clear. Because this started as a discussion
> of hard caps, and it began to sound as if you were no longer advocating
> for them.
>
>> But I think another common use-case is really just performance
>> isolation. If over the course of a day, you go from 12CU, to 6CU,
>> to 4CU, that might not be that bad of a thing.
>>
> I guess it depends on your SLA. We don't have to do anything to give
> varying CU based on host load. That's the one thing CFS will do for
> us quite well ;)
>

I'm really anticipating things like the EC2 micro instance where the CPU allotment is variable.
Variable allotments are interesting from a density perspective, but having interdependent performance is definitely a problem.

Another way to think about it: a customer reports a performance problem at 1PM. With non-yielding guests, you can look at the logs and see that the expected capacity was 2CU (it may have changed to 4CU by 3PM). Without something like non-yielding guests, though, performance is almost entirely unpredictable, and unless you have an exact timestamp from the customer along with a fine-grained performance log, there's no way to determine whether it's expected behavior.

>> If the environment is designed correctly, of N nodes, N-1 will
>> always be at capacity, so it's really just a single node that is
>> under-utilized.
>>
> Many clouds do a variation on Small, Medium, Large sizing. So depending
> on the scheduler (best fit, rr...) even the notion of at capacity may
> change from node to node and during the time of day.
>

An ideal cloud will make sure that something like 4 Small == 2 Medium == 1 Large instance, and that machine capacity is always a multiple of the Large instance size. With a division like this, you can always achieve maximum density, provided that you can support live migration.

Regards,

Anthony Liguori

> thanks,
> -chris
>
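[Editor's note: the sizing argument above can be illustrated with a toy first-fit packing sketch. The CU values are hypothetical (Small = 1, Medium = 2, Large = 4, host = 2 Large); the point is only that when every instance size divides the host capacity, a mixed load packs with no stranded capacity.]

```python
# Hypothetical sizes: 4 Small == 2 Medium == 1 Large, as in the mail above.
SMALL, MEDIUM, LARGE = 1, 2, 4   # capacity in CU
HOST_CU = 2 * LARGE              # host capacity is a multiple of Large

def pack(instances, host_cu=HOST_CU):
    """First-fit-decreasing placement; returns the free CU left on each host."""
    hosts = []                   # free CU per host
    for size in sorted(instances, reverse=True):
        for i, free in enumerate(hosts):
            if free >= size:     # fits on an existing host
                hosts[i] -= size
                break
        else:                    # no host has room; bring up a new one
            hosts.append(host_cu - size)
    return hosts

# A mixed load of 2 Large + 2 Medium + 4 Small = 16 CU fills exactly
# two 8-CU hosts: every host ends up at capacity.
free = pack([LARGE, LARGE, MEDIUM, MEDIUM, SMALL, SMALL, SMALL, SMALL])
print(free)  # [0, 0]
```

Live migration is what makes this hold over time: as instances terminate and leave holes, you can re-pack the survivors so that N-1 nodes stay at capacity and only one node is partially filled.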