From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 6 Jun 2023 12:48:29 +0500
From: Muhammad Usama Anjum
To: John Hubbard, Andrew Morton
Cc: Muhammad Usama Anjum, David Hildenbrand, Peter Xu, Shuah Khan,
 Nathan Chancellor, linux-mm@kvack.org, linux-kselftest@vger.kernel.org,
 LKML
Subject: Re: [PATCH v3 02/11] selftests/mm: fix unused variable warnings in
 hugetlb-madvise.c, migration.c
In-Reply-To: <20230606071637.267103-3-jhubbard@nvidia.com>
References: <20230606071637.267103-1-jhubbard@nvidia.com>
 <20230606071637.267103-3-jhubbard@nvidia.com>

On 6/6/23 12:16 PM, John Hubbard wrote:
> Dummy variables are required in order to make these two (similar)
> routines work, so in both cases, declare the variables as volatile in
> order to avoid the clang compiler warning.
>
> Furthermore, in order to ensure that each test actually does what is
> intended, add an asm volatile invocation (thanks to David Hildenbrand
> for the suggestion), with a clarifying comment so that it survives
> future maintenance.
>
> Reviewed-by: David Hildenbrand
> Reviewed-by: Peter Xu
> Signed-off-by: John Hubbard

Tested-by: Muhammad Usama Anjum

> ---
>  tools/testing/selftests/mm/hugetlb-madvise.c | 8 ++++++--
>  tools/testing/selftests/mm/migration.c       | 5 ++++-
>  2 files changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/mm/hugetlb-madvise.c b/tools/testing/selftests/mm/hugetlb-madvise.c
> index 28426e30d9bc..d55322df4b73 100644
> --- a/tools/testing/selftests/mm/hugetlb-madvise.c
> +++ b/tools/testing/selftests/mm/hugetlb-madvise.c
> @@ -65,11 +65,15 @@ void write_fault_pages(void *addr, unsigned long nr_pages)
>
>  void read_fault_pages(void *addr, unsigned long nr_pages)
>  {
> -	unsigned long dummy = 0;
> +	volatile unsigned long dummy = 0;
>  	unsigned long i;
>
> -	for (i = 0; i < nr_pages; i++)
> +	for (i = 0; i < nr_pages; i++) {
>  		dummy += *((unsigned long *)(addr + (i * huge_page_size)));
> +
> +		/* Prevent the compiler from optimizing out the entire loop: */
> +		asm volatile("" : "+r" (dummy));
> +	}
>  }
>
>  int main(int argc, char **argv)
> diff --git a/tools/testing/selftests/mm/migration.c b/tools/testing/selftests/mm/migration.c
> index 1cec8425e3ca..379581567f27 100644
> --- a/tools/testing/selftests/mm/migration.c
> +++ b/tools/testing/selftests/mm/migration.c
> @@ -95,12 +95,15 @@ int migrate(uint64_t *ptr, int n1, int n2)
>
>  void *access_mem(void *ptr)
>  {
> -	uint64_t y = 0;
> +	volatile uint64_t y = 0;
>  	volatile uint64_t *x = ptr;
>
>  	while (1) {
>  		pthread_testcancel();
>  		y += *x;
> +
> +		/* Prevent the compiler from optimizing out the writes to y: */
> +		asm volatile("" : "+r" (y));
>  	}
>
>  	return NULL;

-- 
BR,
Muhammad Usama Anjum