From: James Dingwall
Subject: Re: Kernel 3.11 / 3.12 OOM killer and Xen ballooning
Date: Wed, 11 Dec 2013 16:30:44 +0000
Message-ID: <52A89334.3090007@zynstra.com>
References: <52A602E5.3080300@zynstra.com>
 <20131209214816.GA3000@phenom.dumpdata.com>
 <52A72AB8.9060707@zynstra.com>
 <20131210152746.GF3184@phenom.dumpdata.com>
 <52A812B0.6060607@oracle.com>
In-Reply-To: <52A812B0.6060607@oracle.com>
To: Bob Liu, Konrad Rzeszutek Wilk
Cc: xen-devel@lists.xen.org
List-Id: xen-devel@lists.xenproject.org

Bob Liu wrote:
> On 12/10/2013 11:27 PM, Konrad Rzeszutek Wilk wrote:
>> On Tue, Dec 10, 2013 at 02:52:40PM +0000, James Dingwall wrote:
>>> Konrad Rzeszutek Wilk wrote:
>>>> On Mon, Dec 09, 2013 at 05:50:29PM +0000, James Dingwall wrote:
>>>>> Hi,
>>>>>
>>>>> Since 3.11 I have noticed that the OOM killer quite frequently
>>>>> triggers in my Xen guest domains, which use ballooning to
>>>>> increase/decrease their memory allocation according to their
>>>>> requirements. One example domain has a maximum memory setting of
>>>>> ~1.5GB but it usually idles at ~300MB; it is also configured with
>>>>> 2GB of swap which is almost 100% free.
>>>>>
>>>>> # free
>>>>>              total       used       free     shared    buffers     cached
>>>>> Mem:        272080     248108      23972          0       1448      63064
>>>>> -/+ buffers/cache:     183596      88484
>>>>> Swap:      2097148          8    2097140
>>>>>
>>>>> There is plenty of free memory in the hypervisor for the guest to
>>>>> balloon up to its maximum size:
>>>>> # xl info | grep free_mem
>>>>> free_memory : 14923
>>>>>
>>>>> An example trace from the OOM killer in 3.12 (it is always the
>>>>> same) is added below. So far I have not been able to reproduce
>>>>> this at will, so it is difficult to start bisecting to see whether
>>>>> a particular change introduced it. However, the behaviour does
>>>>> seem wrong because a) ballooning could give the guest more memory,
>>>>> and b) there is plenty of swap available which could be used as a
>>>>> fallback.
>>
>> Keep in mind that swap with tmem is actually no longer swap. Heh,
>> that sounds odd - but basically pages that are destined for swap end
>> up going into the tmem code, which pipes them up to the hypervisor.
>>
>>>>> If other information could help or there are more tests that I
>>>>> could run then please let me know.
>>>> I presume you have enabled 'tmem' both in the hypervisor and in
>>>> the guest, right?
>>> Yes, domU and dom0 both have the tmem module loaded, and
>>> tmem tmem_dedup=on tmem_compress=on is given on the Xen command
>>> line.
>> Excellent. The odd thing is that your swap is not used that much,
>> but it should be (as that is part of what the self-balloon is
>> supposed to do).
>>
>> Bob, you had a patch for the logic of how self-balloon is supposed
>> to account for the slab - would this be relevant to this problem?
>>
> Perhaps, I have attached the patch.
> James, could you please apply it and try your application again? You
> will have to rebuild the guest kernel.
> Oh, and also take a look at whether frontswap is in use; you can
> check by watching "cat /sys/kernel/debug/frontswap/*".

I have tested this patch with a workload where I have previously seen
failures and so far so good. I'll try to keep a guest with it stressed
to see if I do get any problems.
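(For anyone following along: one way to keep an eye on the frontswap
counters Bob mentioned - a sketch, assuming debugfs is mounted at
/sys/kernel/debug, and noting that the exact counter files vary by
kernel version - is:

# mount -t debugfs none /sys/kernel/debug    # only if not already mounted
# watch -n 10 "grep . /sys/kernel/debug/frontswap/*"

If frontswap really is in use, counters such as succ_stores and loads
should be non-zero and climbing while the workload runs.)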
I don't know if it is expected, but I did note that the system running
with this patch + selfshrink has a kswapd0 run time of ~30 mins,
whereas a guest without the patch and with selfshrink disabled has
~5 mins after running a similar workload. With the patch I also noted
the following kernel messages, which I haven't seen before:

[ 8733.646820] init_memory_mapping: [mem 0x120000000-0x127ffffff]
[ 8733.646825]  [mem 0x120000000-0x127ffffff] page 4k
[10506.639875] init_memory_mapping: [mem 0x128000000-0x137ffffff]
[10506.639881]  [mem 0x128000000-0x137ffffff] page 4k

James

> balloon.patch
>
> diff --git a/drivers/xen/xen-selfballoon.c b/drivers/xen/xen-selfballoon.c
> index 21e18c1..4814759 100644
> --- a/drivers/xen/xen-selfballoon.c
> +++ b/drivers/xen/xen-selfballoon.c
> @@ -191,6 +191,8 @@ static void selfballoon_process(struct work_struct *work)
>  		tgt_pages = cur_pages; /* default is no change */
>  		goal_pages = vm_memory_committed() +
>  				totalreserve_pages +
> +				global_page_state(NR_SLAB_RECLAIMABLE) +
> +				global_page_state(NR_SLAB_UNRECLAIMABLE) +
>  				MB2PAGES(selfballoon_reserved_mb);
>  #ifdef CONFIG_FRONTSWAP
>  		/* allow space for frontswap pages to be repatriated */
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 580a5f0..863b05c 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -16,6 +16,7 @@
>  
>  #include 
>  #include 
> +#include <linux/mman.h>
>  #include 
>  #include 
>  #include 
> @@ -3075,7 +3076,7 @@ void show_free_areas(unsigned int filter)
>  		" dirty:%lu writeback:%lu unstable:%lu\n"
>  		" free:%lu slab_reclaimable:%lu slab_unreclaimable:%lu\n"
>  		" mapped:%lu shmem:%lu pagetables:%lu bounce:%lu\n"
> -		" free_cma:%lu\n",
> +		" free_cma:%lu totalram:%lu balloontarget:%lu\n",
>  		global_page_state(NR_ACTIVE_ANON),
>  		global_page_state(NR_INACTIVE_ANON),
>  		global_page_state(NR_ISOLATED_ANON),
> @@ -3093,7 +3094,9 @@ void show_free_areas(unsigned int filter)
>  		global_page_state(NR_SHMEM),
>  		global_page_state(NR_PAGETABLE),
>  		global_page_state(NR_BOUNCE),
> -		global_page_state(NR_FREE_CMA_PAGES));
> +		global_page_state(NR_FREE_CMA_PAGES),
> +		totalram_pages,
> +		vm_memory_committed() + totalreserve_pages);
>  
>  	for_each_populated_zone(zone) {
>  		int i;

-- 
James Dingwall
Script Monkey

Zynstra is a private limited company registered in England and Wales
(registered number 07864369). Our registered office is 5 New Street
Square, London, EC4A 3TW and our headquarters are at Bath Ventures,
Broad Quay, Bath, BA1 1UD.