Subject: Re: [PATCH 1/5] powerpc/64s/hash: Fix 128TB-512TB virtual address boundary case allocation
To: Nicholas Piggin
Cc: linuxppc-dev@lists.ozlabs.org, Florian Weimer
From: "Aneesh Kumar K.V"
Date: Mon, 6 Nov 2017 16:35:43 +0530
Message-Id: <13f9578b-f907-1809-9aaa-cbb87c419bc6@linux.vnet.ibm.com>
In-Reply-To: <20171106215447.787e58fd@roar.ozlabs.ibm.com>
References: <20171106100315.29720-1-npiggin@gmail.com> <20171106100315.29720-2-npiggin@gmail.com> <87y3njsne9.fsf@linux.vnet.ibm.com> <20171106215447.787e58fd@roar.ozlabs.ibm.com>
List-Id: Linux on PowerPC Developers Mail List

On 11/06/2017 04:24 PM, Nicholas Piggin wrote:
> On Mon, 06 Nov 2017 16:08:06 +0530
> "Aneesh Kumar K.V" wrote:
>
>> Nicholas Piggin writes:
>>
>>> When allocating VA space with a hint that crosses 128TB, the SLB addr_limit
>>> variable is not expanded if addr is not > 128TB, but the slice allocation
>>> looks at task_size, which is 512TB. This results in slice_check_fit()
>>> incorrectly succeeding because the slice_count truncates off bit 128 of the
>>> requested mask, so the comparison to the available mask succeeds.
>>
>>
>> But then the mask passed to slice_check_fit() is generated using
>> context.addr_limit as max value. So how did that return success? ie,
>> we get the request mask via
>>
>> slice_range_to_mask(addr, len, &mask);
>>
>> And the potential/possible mask using
>>
>> slice_mask_for_size(mm, psize, &good_mask);
>>
>> So how did slice_check_fit() return success with
>>
>> slice_check_fit(mm, mask, good_mask);
>
> Because the addr_limit check is used to *limit* the comparison.
>
> The available mask had bits up to 127 set, and the mask had 127 and
> 128 set. However the 128TB addr_limit causes only bits 0-127 to be
> compared.
>

Should we fix it then via the change below? I haven't tested this yet.
Also, won't this result in us comparing more bits?

modified   arch/powerpc/mm/slice.c
@@ -169,13 +169,12 @@ static int slice_check_fit(struct mm_struct *mm,
 			   struct slice_mask mask, struct slice_mask available)
 {
 	DECLARE_BITMAP(result, SLICE_NUM_HIGH);
-	unsigned long slice_count = GET_HIGH_SLICE_INDEX(mm->context.addr_limit);
 
 	bitmap_and(result, mask.high_slices,
-		   available.high_slices, slice_count);
+		   available.high_slices, SLICE_NUM_HIGH);
 
 	return (mask.low_slices & available.low_slices) == mask.low_slices &&
-		bitmap_equal(result, mask.high_slices, slice_count);
+		bitmap_equal(result, mask.high_slices, SLICE_NUM_HIGH);

-aneesh
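
To see the truncation in isolation, here is a small standalone userspace
sketch (not the kernel code; the 1TB high-slice size and the 512 high-slice
count are assumptions matching my reading of the powerpc hash layout). It
models a request that crosses 128TB: comparing only
GET_HIGH_SLICE_INDEX(addr_limit) bits reports a fit, while comparing all
SLICE_NUM_HIGH bits correctly rejects it.

/*
 * Userspace model of the slice_check_fit() comparison, for illustration
 * only.  Constants are assumptions: 1TB high slices, 512TB task size.
 */
#include <stdio.h>
#include <stdbool.h>

#define TB                 (1UL << 40)
#define SLICE_HIGH_SHIFT   40                     /* one high slice = 1TB (assumed) */
#define SLICE_NUM_HIGH     512                    /* 512TB / 1TB (assumed) */
#define BITS_PER_LONG      (8 * sizeof(unsigned long))
#define BITMAP_LONGS       ((SLICE_NUM_HIGH + BITS_PER_LONG - 1) / BITS_PER_LONG)

#define GET_HIGH_SLICE_INDEX(addr) ((addr) >> SLICE_HIGH_SHIFT)

static void set_slice(unsigned long *map, unsigned int n)
{
	map[n / BITS_PER_LONG] |= 1UL << (n % BITS_PER_LONG);
}

/* The request fits iff every requested bit in 0..nbits-1 is also available. */
static bool fits(const unsigned long *mask, const unsigned long *avail,
		 unsigned int nbits)
{
	for (unsigned int i = 0; i < nbits; i++) {
		unsigned long bit = 1UL << (i % BITS_PER_LONG);

		if ((mask[i / BITS_PER_LONG] & bit) &&
		    !(avail[i / BITS_PER_LONG] & bit))
			return false;
	}
	return true;
}

int main(void)
{
	unsigned long mask[BITMAP_LONGS] = { 0 };   /* requested slices */
	unsigned long avail[BITMAP_LONGS] = { 0 };  /* available (good) slices */

	/* available mask covers high slices 0..127, i.e. only up to 128TB */
	for (unsigned int i = 0; i < GET_HIGH_SLICE_INDEX(128 * TB); i++)
		set_slice(avail, i);

	/* a request crossing the 128TB boundary touches slices 127 *and* 128 */
	set_slice(mask, 127);
	set_slice(mask, 128);

	/* old comparison: limited to GET_HIGH_SLICE_INDEX(addr_limit) = 128 bits */
	printf("compare 128 bits: %s\n",
	       fits(mask, avail, GET_HIGH_SLICE_INDEX(128 * TB)) ?
	       "fits (bit 128 never checked)" : "no fit");

	/* proposed comparison: always look at all SLICE_NUM_HIGH bits */
	printf("compare %d bits: %s\n", SLICE_NUM_HIGH,
	       fits(mask, avail, SLICE_NUM_HIGH) ?
	       "fits" : "no fit (bit 128 caught)");

	return 0;
}

Building and running this prints a (wrong) fit for the 128-bit comparison and
a rejection for the full-width comparison, which is the behaviour the diff
above is trying to get.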