From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Aneesh Kumar K.V"
To: Paul Mackerras
Subject: Re: [PATCH -V3 07/11] arch/powerpc: Increase the slice range to 64TB
In-Reply-To: <20120723000048.GG17790@bloggs.ozlabs.ibm.com>
References: <1341839621-28332-1-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
	<1341839621-28332-8-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
	<20120723000048.GG17790@bloggs.ozlabs.ibm.com>
Date: Mon, 23 Jul 2012 12:43:41 +0530
Message-ID: <878vebhrve.fsf@skywalker.in.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: linuxppc-dev@lists.ozlabs.org
List-Id: Linux on PowerPC Developers Mail List

Paul Mackerras writes:

> On Mon, Jul 09, 2012 at 06:43:37PM +0530, Aneesh Kumar K.V wrote:
>> From: "Aneesh Kumar K.V"
>>
>> This patch makes the high psizes mask an unsigned char array
>> so that we can have more than 16TB. Currently we support up to
>> 64TB.
>
> Some comments inline...
>
>> @@ -804,16 +804,19 @@ unsigned int hash_page_do_lazy_icache(unsigned int pp, pte_t pte, int trap)
>>  #ifdef CONFIG_PPC_MM_SLICES
>>  unsigned int get_paca_psize(unsigned long addr)
>>  {
>> -	unsigned long index, slices;
>> +	u64 lpsizes;
>> +	unsigned char *hpsizes;
>> +	unsigned long index, mask_index;
>>
>>  	if (addr < SLICE_LOW_TOP) {
>> -		slices = get_paca()->context.low_slices_psize;
>> +		lpsizes = get_paca()->context.low_slices_psize;
>>  		index = GET_LOW_SLICE_INDEX(addr);
>> -	} else {
>> -		slices = get_paca()->context.high_slices_psize;
>> -		index = GET_HIGH_SLICE_INDEX(addr);
>> +		return (lpsizes >> (index * 4)) & 0xF;
>>  	}
>> -	return (slices >> (index * 4)) & 0xF;
>> +	hpsizes = get_paca()->context.high_slices_psize;
>> +	index = GET_HIGH_SLICE_INDEX(addr) >> 1;
>> +	mask_index = GET_HIGH_SLICE_INDEX(addr) - (index << 1);
>> +	return (hpsizes[index] >> (mask_index * 4)) & 0xF;
>
> The last 3 lines here feel awkward.  How about:
>
> 	index = GET_HIGH_SLICE_INDEX(addr);
> 	mask_index = index & 1;
> 	return (hpsizes[index >> 1] >> (mask_index * 4)) & 0xF;

That is much simpler. I have updated the patch, changing to the above
form in all the locations.
>
>> static struct slice_mask slice_mask_for_size(struct mm_struct *mm, int psize)
>> {
>> +	unsigned char *hpsizes;
>> +	int index, mask_index;
>>  	struct slice_mask ret = { 0, 0 };
>>  	unsigned long i;
>> -	u64 psizes;
>> +	u64 lpsizes;
>>
>> -	psizes = mm->context.low_slices_psize;
>> +	lpsizes = mm->context.low_slices_psize;
>>  	for (i = 0; i < SLICE_NUM_LOW; i++)
>> -		if (((psizes >> (i * 4)) & 0xf) == psize)
>> +		if (((lpsizes >> (i * 4)) & 0xf) == psize)
>>  			ret.low_slices |= 1u << i;
>>
>> -	psizes = mm->context.high_slices_psize;
>> -	for (i = 0; i < SLICE_NUM_HIGH; i++)
>> -		if (((psizes >> (i * 4)) & 0xf) == psize)
>> +	hpsizes = mm->context.high_slices_psize;
>> +	for (i = 0; i < SLICE_NUM_HIGH; i++) {
>> +		index = i >> 1;
>> +		mask_index = i - (index << 1);
>
> Again, seems like a complicated way to do mask_index = i & 1 (or
> even i % 2, if you prefer, but then make i an unsigned type).
>
> Paul.

-aneesh