From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 16 Aug 2024 17:01:58 +0530
Subject: Re: [PATCH 1/2] mm: Retry migration earlier upon refcount mismatch
From: Dev Jain <dev.jain@arm.com>
To: "Huang, Ying"
Cc: akpm@linux-foundation.org, shuah@kernel.org, david@redhat.com,
 willy@infradead.org, ryan.roberts@arm.com, anshuman.khandual@arm.com,
 catalin.marinas@arm.com, cl@gentwo.org, vbabka@suse.cz, mhocko@suse.com,
 apopple@nvidia.com, osalvador@suse.de, baolin.wang@linux.alibaba.com,
 dave.hansen@linux.intel.com, will@kernel.org, baohua@kernel.org,
 ioworker0@gmail.com, gshan@redhat.com, mark.rutland@arm.com,
 kirill.shutemov@linux.intel.com, hughd@google.com, aneesh.kumar@kernel.org,
 yang@os.amperecomputing.com, peterx@redhat.com, broonie@kernel.org,
 mgorman@techsingularity.net, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-kselftest@vger.kernel.org
References: <20240809103129.365029-1-dev.jain@arm.com>
 <20240809103129.365029-2-dev.jain@arm.com>
 <87frrauwwv.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <15dbe4ac-a036-4029-ba08-e12a236f448a@arm.com>
 <87bk1yuuzu.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <95b72817-5444-4ced-998a-1cb90f42bf49@arm.com>
 <8734naurhm.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <9d84e4e8-ac54-4eb1-a113-3f32aea864c9@arm.com>
 <391d4f4f-e642-4c11-a36b-190874963f8a@arm.com>
In-Reply-To: <391d4f4f-e642-4c11-a36b-190874963f8a@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
On 8/13/24 12:52, Dev Jain wrote:
>
> On 8/13/24 10:30, Dev Jain wrote:
>>
>> On 8/12/24 17:38, Dev Jain wrote:
>>>
>>> On 8/12/24 13:01, Huang, Ying wrote:
>>>> Dev Jain writes:
>>>>
>>>>> On 8/12/24 11:45, Huang, Ying wrote:
>>>>>> Dev Jain writes:
>>>>>>
>>>>>>> On 8/12/24 11:04, Huang, Ying wrote:
>>>>>>>> Hi, Dev,
>>>>>>>>
>>>>>>>> Dev Jain writes:
>>>>>>>>
>>>>>>>>> As is already done in __migrate_folio(), where we back off if the
>>>>>>>>> folio refcount is wrong, make this check during the unmapping phase
>>>>>>>>> as well. Upon failure, the original state of the PTEs is restored
>>>>>>>>> and the folio lock is dropped via migrate_folio_undo_src(), so any
>>>>>>>>> racing thread can make progress and migration is retried.
>>>>>>>>>
>>>>>>>>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>>>>>>>>> ---
>>>>>>>>>  mm/migrate.c | 9 +++++++++
>>>>>>>>>  1 file changed, 9 insertions(+)
>>>>>>>>>
>>>>>>>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>>>>>>>> index e7296c0fb5d5..477acf996951 100644
>>>>>>>>> --- a/mm/migrate.c
>>>>>>>>> +++ b/mm/migrate.c
>>>>>>>>> @@ -1250,6 +1250,15 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
>>>>>>>>>  	}
>>>>>>>>>
>>>>>>>>>  	if (!folio_mapped(src)) {
>>>>>>>>> +		/*
>>>>>>>>> +		 * Someone may have changed the refcount and may be sleeping
>>>>>>>>> +		 * on the folio lock. In case of refcount mismatch, bail out,
>>>>>>>>> +		 * let the system make progress and retry.
>>>>>>>>> +		 */
>>>>>>>>> +		struct address_space *mapping = folio_mapping(src);
>>>>>>>>> +
>>>>>>>>> +		if (folio_ref_count(src) != folio_expected_refs(mapping, src))
>>>>>>>>> +			goto out;
>>>>>>>>>  		__migrate_folio_record(dst, old_page_state, anon_vma);
>>>>>>>>>  		return MIGRATEPAGE_UNMAP;
>>>>>>>>>  	}
>>>>>>>> Do you have some test results for this? For example, after applying
>>>>>>>> the patch, the migration success rate increased XX%, etc.
>>>>>>> I'll get back to you on this.
>>>>>>>
>>>>>>>> My understanding of this issue is that the migration success rate can
>>>>>>>> increase if we undo all changes before retrying. This is the current
>>>>>>>> behavior for sync migration, but not for async migration. If so, can
>>>>>>>> we use migrate_pages_sync() for async migration too to increase the
>>>>>>>> success rate? Of course, we would need to change the function name
>>>>>>>> and comments.
>>>>>>> As per my understanding, this is not the current behaviour for sync
>>>>>>> migration.
>>>>>>> After successful unmapping, we fail in migrate_folio_move() with
>>>>>>> -EAGAIN; we do not call undo src+dst (rendering the loop around
>>>>>>> migrate_folio_move() futile), and we do not push the failed folio
>>>>>>> onto the ret_folios list; therefore, in _sync(), _batch() is never
>>>>>>> tried again.
>>>>>>
>>>>>> In migrate_pages_sync(), migrate_pages_batch(, MIGRATE_ASYNC) will be
>>>>>> called first; if that fails, the folio will be restored to its
>>>>>> original state (unlocked). Then migrate_pages_batch(, _SYNC*) is
>>>>>> called again. So, we unlock once. If necessary, we can unlock more
>>>>>> times via another level of loop.
>>>>>
>>>>> Yes, that's my point. We need to undo src+dst and retry.
>>>>
>>>> For sync migration, we undo src+dst and retry now, but only once. You
>>>> have shown that more retrying increases the success rate.
>>>>
>>>>> We will have to decide where we want this retrying to be: do we want
>>>>> to change the return value, end up in the while loop wrapped around
>>>>> _sync(), and retry there by adding another level of loop, or do we
>>>>> want to make use of the existing retry loops, one of which is wrapped
>>>>> around _unmap()? The latter is my approach. The utility I see in the
>>>>> former approach is that, in the case of a large number of page
>>>>> migrations (which should usually be the case), we give the folio more
>>>>> time to get retried. The latter does not give much time and discards
>>>>> the folio if it does not succeed within 7 attempts.
>>>> Because it's a race, I guess that most folios will be migrated
>>>> successfully in the first pass.
>>>>
>>>> My concern with your method is that it deals with just one case
>>>> specially, while retrying after undoing everything appears more
>>>> general.
>>>
>>> Makes sense. Also, please ignore my "change the return value"
>>> thing; I got confused between unmap_folios, ret_folios, etc.
>>> Now I think I understood what the lists are doing :)
>>>
>>>> If it's really important to retry after undoing all, we can either
>>>> convert the two retrying loops of migrate_pages_batch() into one loop,
>>>> or remove the retry loop in migrate_pages_batch() and retry in its
>>>> caller instead.
>>>
>>> And if I implemented this correctly, the following makes the test
>>> pass always:
>>> https://www.codedump.xyz/diff/Zrn7EdxzNXmXyNXe
>>
>> Okay, I did mess up the implementation, leading to a false
>> positive. Let me try again :)
>
> Hopefully this should do the job:
> https://www.codedump.xyz/diff/ZrsIV8JSOPYx5V_u
>
> But the result is worse than with the patch proposed; I rarely hit
> a 3-digit number of successes of move_pages(). Yet, on a base kernel
> without any changes, when I apply David's suggestion to change the
> test, if I choose 7 as the number of retries
> (= NR_MAX_MIGRATE_SYNC_RETRY) in the test, I can touch even 4 digits.
> I am puzzled.
> We can also try merging the for loops of unmap and move...

If people are okay with this change, I guess I can send it as a v2?
I concur with your assessment that my initial approach solves a
specific case; the above approach gives me a slight improvement on
arm64 and should be an improvement in general, since it makes sense
to defer retrying the failed folio as much as we can.