From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH] zsmalloc: drop unused member 'mapping_area->huge'
To: Sergey Senozhatsky , YiPing Xu
References: <1455674199-6227-1-git-send-email-xuyiping@huawei.com> <20160217022552.GB535@swordfish>
From: xuyiping
Message-ID: <56C3E91B.1030101@hisilicon.com>
Date: Wed, 17 Feb 2016 11:29:31 +0800
In-Reply-To: <20160217022552.GB535@swordfish>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi, Sergey

On 2016/2/17 10:26, Sergey Senozhatsky wrote:
> Hello,
>
> On (02/17/16 09:56), YiPing Xu wrote:
>>  static int create_handle_cache(struct zs_pool *pool)
>> @@ -1127,11 +1126,9 @@ static void __zs_unmap_object(struct mapping_area *area,
>>  		goto out;
>>
>>  	buf = area->vm_buf;
>> -	if (!area->huge) {
>> -		buf = buf + ZS_HANDLE_SIZE;
>> -		size -= ZS_HANDLE_SIZE;
>> -		off += ZS_HANDLE_SIZE;
>> -	}
>> +	buf = buf + ZS_HANDLE_SIZE;
>> +	size -= ZS_HANDLE_SIZE;
>> +	off += ZS_HANDLE_SIZE;
>>
>>  	sizes[0] = PAGE_SIZE - off;
>>  	sizes[1] = size - sizes[0];
>
>
> hm, indeed.
>
> shouldn't it depend on class->huge?
> void *zs_map_object()
> {

	if (off + class->size <= PAGE_SIZE) {

for a huge object, the code takes this branch; there is no further
huge-object handling in __zs_map_object.

		/* this object is contained entirely within a page */
		area->vm_addr = kmap_atomic(page);
		ret = area->vm_addr + off;
		goto out;
	}

>	void *ret = __zs_map_object(area, pages, off, class->size);
>
>	if (!class->huge)
>		ret += ZS_HANDLE_SIZE;	/* area->vm_buf + ZS_HANDLE_SIZE */
>
>	return ret;
> }

void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
{
	..
	area = this_cpu_ptr(&zs_map_area);
	if (off + class->size <= PAGE_SIZE)

for a huge object, the code takes this branch, so __zs_unmap_object
does not depend on class->huge. it is a little implicit here.

		kunmap_atomic(area->vm_addr);
	else {
		struct page *pages[2];

		pages[0] = page;
		pages[1] = get_next_page(page);
		BUG_ON(!pages[1]);

		__zs_unmap_object(area, pages, off, class->size);
	}
	..
}

> static void __zs_unmap_object(struct mapping_area *area...)
> {
>	char *buf = area->vm_buf;
>
>	/* handle is in page->private for class->huge */
>
>	buf = buf + ZS_HANDLE_SIZE;
>	size -= ZS_HANDLE_SIZE;
>	off += ZS_HANDLE_SIZE;
>
>	memcpy(..);
> }
>
> -ss