From: Bob Liu
Subject: Re: Kernel 3.11 / 3.12 OOM killer and Xen ballooning
Date: Wed, 18 Dec 2013 20:04:20 +0800
Message-ID: <52B18F44.2030500@oracle.com>
In-Reply-To: <52A89334.3090007@zynstra.com>
References: <52A602E5.3080300@zynstra.com> <20131209214816.GA3000@phenom.dumpdata.com> <52A72AB8.9060707@zynstra.com> <20131210152746.GF3184@phenom.dumpdata.com> <52A812B0.6060607@oracle.com> <52A89334.3090007@zynstra.com>
To: James Dingwall
Cc: xen-devel@lists.xen.org
List-Id: xen-devel@lists.xenproject.org

On 12/12/2013 12:30 AM, James Dingwall wrote:
> Bob Liu wrote:
>> On 12/10/2013 11:27 PM, Konrad Rzeszutek Wilk wrote:
>>> On Tue, Dec 10, 2013 at 02:52:40PM +0000, James Dingwall wrote:
>>>> Konrad Rzeszutek Wilk wrote:
>>>>> On Mon, Dec 09, 2013 at 05:50:29PM +0000, James Dingwall wrote:
>>>>>> Hi,
>>>>>>
>>>>>> Since 3.11 I have noticed that the OOM killer quite frequently
>>>>>> triggers in my Xen guest domains, which use ballooning to
>>>>>> increase/decrease their memory allocation according to their
>>>>>> requirements. One example domain has a maximum memory setting of
>>>>>> ~1.5Gb but usually idles at ~300Mb; it is also configured with
>>>>>> 2Gb of swap, which is almost 100% free.
>>>>>>
>>>>>> # free
>>>>>>              total       used       free     shared    buffers     cached
>>>>>> Mem:        272080     248108      23972          0       1448      63064
>>>>>> -/+ buffers/cache:     183596      88484
>>>>>> Swap:      2097148          8    2097140
>>>>>>
>>>>>> There is plenty of free memory in the hypervisor to balloon up to
>>>>>> the maximum size:
>>>>>>
>>>>>> # xl info | grep free_mem
>>>>>> free_memory           : 14923
>>>>>>
>>>>>> An example trace (they are always the same) from the OOM killer
>>>>>> in 3.12 is added below. So far I have not been able to reproduce
>>>>>> this at will, so it is difficult to start bisecting to see if a
>>>>>> particular change introduced it. However, the behaviour does seem
>>>>>> wrong because a) ballooning could give the guest more memory, and
>>>>>> b) there is lots of swap available which could be used as a
>>>>>> fallback.
>>> Keep in mind that swap with tmem is actually no longer swap. Heh,
>>> that sounds odd - but basically pages that are destined for swap
>>> end up going into the tmem code, which pipes them up to the
>>> hypervisor.
>>>
>>>>>> If other information could help or there are more tests that I
>>>>>> could run then please let me know.
>>>>> I presume you have enabled 'tmem' both in the hypervisor and in
>>>>> the guest, right?
>>>> Yes, domU and dom0 both have the tmem module loaded, and "tmem
>>>> tmem_dedup=on tmem_compress=on" is given on the Xen command line.
>>> Excellent. The odd thing is that your swap is not used that much,
>>> but it should be (as that is part of what the self-balloon is
>>> supposed to do).
>>>
>>> Bob, you had a patch for the logic of how self-balloon is supposed
>>> to account for the slab - would this be relevant to this problem?
>>>
>> Perhaps - I have attached the patch.
>> James, could you please apply it and try your application again? You
>> will have to rebuild the guest kernel.
>> Oh, and also take a look at whether frontswap is in use; you can
>> check by watching "cat /sys/kernel/debug/frontswap/*".
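
(An aside for anyone following the thread: with tmem active, frontswap
intercepts pages on their way out to swap and puts them to the
hypervisor instead, which is why the guest's swap can look almost
untouched; the debugfs counters above show whether that path is being
hit. The sketch below is a simplified illustration of the 3.12-era
frontswap backend interface - it is not the actual tmem driver, whose
store hook hands the page to the hypervisor via tmem hypercalls rather
than the dummy logic here.)

/*
 * Minimal illustrative frontswap backend, loosely modeled on the
 * 3.12-era interface that drivers/xen/tmem.c registers against.
 * For explanation only.
 */
#include <linux/module.h>
#include <linux/frontswap.h>

/* Called when a swap area of this type is swapon'ed. */
static void sketch_init(unsigned type)
{
	pr_info("frontswap sketch: swap type %u enabled\n", type);
}

/*
 * Called for every page about to be written to swap.  Returning 0
 * claims the page, so it never reaches the swap device; any other
 * return value makes the core fall back to the normal swap-out path.
 */
static int sketch_store(unsigned type, pgoff_t offset, struct page *page)
{
	/* A real backend (e.g. Xen tmem) would hand the page to the
	 * hypervisor here.  We decline, so everything still hits swap. */
	return -1;
}

/* Called to fetch a previously stored page back on swap-in. */
static int sketch_load(unsigned type, pgoff_t offset, struct page *page)
{
	return -1;	/* nothing was stored, nothing to load */
}

static void sketch_invalidate_page(unsigned type, pgoff_t offset) { }
static void sketch_invalidate_area(unsigned type) { }

static struct frontswap_ops sketch_ops = {
	.init = sketch_init,
	.store = sketch_store,
	.load = sketch_load,
	.invalidate_page = sketch_invalidate_page,
	.invalidate_area = sketch_invalidate_area,
};

static int __init sketch_module_init(void)
{
	frontswap_register_ops(&sketch_ops);
	return 0;
}
module_init(sketch_module_init);
MODULE_LICENSE("GPL");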
> I have tested this patch with a workload where I have previously seen
> failures and so far so good. I'll try to keep a guest with it stressed
> to see if I do get any problems. I don't know if it is expected but I

By the way, apart from kswapd running for longer, does the patch work
well under your stress testing? Have you seen the OOM killer trigger
frequently again (with selfshrink=true)?

Thanks,
-Bob
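
P.S. For list readers without the attachment: the idea of the patch is
that the selfballoon target should count reclaimable slab, so the guest
is never ballooned down below what the kernel could free anyway by
shrinking its caches. A rough sketch of that calculation follows - the
names match drivers/xen/xen-selfballoon.c in 3.12, but this is
illustrative only and not the attached patch, which also keeps the
existing hysteresis and reserved-memory handling:

#include <linux/mm.h>		/* totalram_pages */
#include <linux/mman.h>		/* vm_memory_committed() */
#include <linux/swap.h>		/* totalreserve_pages */
#include <linux/vmstat.h>	/* global_page_state() */
#include <xen/balloon.h>	/* balloon_set_new_target() */

/* Recompute the number of pages the guest should keep. */
static void selfballoon_retarget(void)
{
	unsigned long cur_pages = totalram_pages;
	unsigned long goal_pages;

	/* Memory the guest has committed, plus the kernel's own reserve. */
	goal_pages = vm_memory_committed() + totalreserve_pages;

	/*
	 * Reclaimable slab can be given back under pressure, but only
	 * if reclaim gets a chance to shrink it.  Counting it into the
	 * target stops selfballoon from squeezing the guest below that
	 * point, which is what leaves kswapd spinning and eventually
	 * lets the OOM killer fire even while the hypervisor still has
	 * plenty of free memory to balloon back in.
	 */
	goal_pages += global_page_state(NR_SLAB_RECLAIMABLE);

	if (goal_pages != cur_pages)
		balloon_set_new_target(goal_pages);
}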