From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bob Liu
Subject: Re: Kernel 3.11 / 3.12 OOM killer and Xen ballooning
Date: Wed, 11 Dec 2013 17:54:38 +0800
Message-ID: <52A8365E.8050909@oracle.com>
References: <52A602E5.3080300@zynstra.com>
 <20131209214816.GA3000@phenom.dumpdata.com>
 <52A72AB8.9060707@zynstra.com>
 <20131210152746.GF3184@phenom.dumpdata.com>
 <52A812B0.6060607@oracle.com>
 <52A82F75.3000609@zynstra.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <52A82F75.3000609@zynstra.com>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: James Dingwall
Cc: xen-devel@lists.xen.org
List-Id: xen-devel@lists.xenproject.org

On 12/11/2013 05:25 PM, James Dingwall wrote:
> Bob Liu wrote:
>> On 12/10/2013 11:27 PM, Konrad Rzeszutek Wilk wrote:
>>> On Tue, Dec 10, 2013 at 02:52:40PM +0000, James Dingwall wrote:
>>>> Konrad Rzeszutek Wilk wrote:
>>>>> On Mon, Dec 09, 2013 at 05:50:29PM +0000, James Dingwall wrote:
>>>>>> Hi,
>>>>>>
>>>>>> Since 3.11 I have noticed that the OOM killer quite frequently
>>>>>> triggers in my Xen guest domains, which use ballooning to
>>>>>> increase/decrease their memory allocation according to their
>>>>>> requirements. One example domain has a maximum memory setting of
>>>>>> ~1.5Gb but usually idles at ~300Mb; it is also configured with
>>>>>> 2Gb of swap, which is almost 100% free.
>>>>>>
>>>>>> # free
>>>>>>              total       used       free     shared    buffers     cached
>>>>>> Mem:        272080     248108      23972          0       1448      63064
>>>>>> -/+ buffers/cache:     183596      88484
>>>>>> Swap:      2097148          8    2097140
>>>>>>
>>>>>> There is plenty of free memory in the hypervisor to balloon up to
>>>>>> the maximum size:
>>>>>>
>>>>>> # xl info | grep free_mem
>>>>>> free_memory           : 14923
>>>>>>
>>>>>> An example trace (they are always the same) from the OOM killer
>>>>>> in 3.12 is added below. So far I have not been able to reproduce
>>>>>> this at will, so it is difficult to start bisecting to see
>>>>>> whether a particular change introduced it. However, the behaviour
>>>>>> does seem wrong, because a) ballooning could give the guest more
>>>>>> memory, and b) there is plenty of swap available which could be
>>>>>> used as a fallback.
>>>
>>> Keep in mind that swap with tmem is actually no longer swap. Heh,
>>> that sounds odd - but basically pages that are destined for swap end
>>> up going into the tmem code, which pipes them up to the hypervisor.
>>>
>>>>>> If other information could help or there are more tests that I
>>>>>> could run then please let me know.
>>>>>
>>>>> I presume you have enabled 'tmem' both in the hypervisor and in
>>>>> the guest, right?
>>>>
>>>> Yes, domU and dom0 both have the tmem module loaded, and "tmem
>>>> tmem_dedup=on tmem_compress=on" is given on the Xen command line.
>>>
>>> Excellent. The odd thing is that your swap is not used that much,
>>> but it should be (as that is part of what self-balloon is supposed
>>> to do).
>>>
>>> Bob, you had a patch for the logic of how self-balloon is supposed
>>> to account for the slab - would this be relevant to this problem?
>>
>> Perhaps; I have attached the patch.
>> James, could you please apply it and try your application again? You
>> will have to rebuild the guest kernel.
>> Oh, and also take a look at whether frontswap is in use; you can
>> check it by watching "cat /sys/kernel/debug/frontswap/*".
>
> I will test this patch later today and let you know how it goes.

Thanks!
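While you are at it, it may help to watch the frontswap counters over
time rather than sampling them once, so we can see whether stores are
actually happening when memory gets tight. A minimal sketch, assuming
debugfs is mounted at /sys/kernel/debug (the exact counter file names
can vary between kernel versions):

# watch -n 5 'grep . /sys/kernel/debug/frontswap/*'

This prints each counter file prefixed with its name and refreshes the
output every five seconds.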
> In the meantime I have found that loading tmem with selfshrinking=0
> prevents the problem I originally reported. Frontswap is in use,
> these values

That's strange. I think I can set up a test environment if you can
share your guest kernel configuration with me, along with the workload
you are running in the guest.

--
Regards,
-Bob
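P.S. If selfshrinking=0 keeps working around the problem for you, here
is a sketch of how to apply it, assuming tmem is built as a module in
your guest kernel (the parameter name is taken from your report). Load
it with self-shrinking disabled for the current boot:

# modprobe tmem selfshrinking=0

Or make the setting persistent across reboots:

# echo "options tmem selfshrinking=0" > /etc/modprobe.d/tmem.conf

If tmem is built in instead, adding tmem.selfshrinking=0 to the guest
kernel command line should have the same effect.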